UK Researchers Find AI Chatbots Highly Vulnerable to Jailbreaks
Researchers at the UK's AI Safety Institute (AISI) have discovered substantial vulnerabilities in popular AI chatbots, showing that these systems are highly susceptible to "jailbreak" attacks.
The findings, published in AISI’s May update, highlight the potential risks advanced AI systems pose when exploited for malicious purposes.
The study evaluated five large language models (LLMs) from major AI labs, anonymized as the Red, Purple, Green, Blue, and Yellow models.
These models, which are already in public use, were subjected to a series of tests to assess their compliance with harmful questions under attack conditions.
Figure 1 illustrates the compliance rates of the five models when subjected to jailbreak attacks. The Green model showed the highest compliance rate, answering up to 28% of harmful questions under attack conditions.
The researchers employed a variety of techniques to evaluate the models' responses to over 600 private, expert-written questions. These questions were designed to test the models' knowledge and skills in areas relevant to security, such as cyber-attacks, chemistry, and biology.
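The core metric in an evaluation like this is the compliance rate: the fraction of harmful questions a model actually answers, measured with and without an attack applied. The sketch below is a hypothetical illustration of that tally, not AISI's actual harness; the question set, the grading procedure, and the toy numbers (chosen to mirror the reported 28% figure) are all assumptions for demonstration.

```python
# Hypothetical sketch of tallying jailbreak-evaluation results.
# Each grade is a boolean: True means the model complied with
# (i.e. answered) a harmful question; False means it refused.

def compliance_rate(grades):
    """Return the fraction of harmful questions the model answered."""
    if not grades:
        return 0.0
    return sum(1 for complied in grades if complied) / len(grades)

# Toy grades for 100 questions, illustrative only:
baseline_grades = [False] * 98 + [True] * 2   # few answers without an attack
attacked_grades = [False] * 72 + [True] * 28  # far more under attack

print(f"baseline compliance: {compliance_rate(baseline_grades):.0%}")  # 2%
print(f"attacked compliance: {compliance_rate(attacked_grades):.0%}")  # 28%
```

In practice, grading whether a free-text response "complies" is itself a hard problem, typically handled by expert review or an automated grader model rather than a simple boolean flag.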
The study found that while the models generally provided correct and compliant information in the absence of attacks, their compliance rates with harmful questions increased significantly under attack conditions. This raises concerns about the potential misuse of AI systems in a range of harmful scenarios.
Figure 2 outlines the potential risks associated with the misuse of AI systems, emphasizing the need for robust safety measures.
The AISI's findings underscore the importance of continuous evaluation and improvement of AI safety protocols, and the researchers recommend further measures to mitigate these risks.
As AI continues to evolve, ensuring the safety and security of these systems remains a critical priority. The AISI’s study serves as a crucial reminder of the ongoing challenges and the need for vigilance in the development and deployment of advanced AI technologies.
The post UK Researchers Find AI Chatbots Highly Vulnerable to Jailbreaks appeared first on Cyber Security News.