AI Chatbots Could Help Plan Bioweapon Attacks, New Report Suggests

AI Chatbot

AI chatbots have been identified as potentially susceptible to exploitation for sinister purposes. According to a recent study by a US-based think tank, the technology behind the Artificial Intelligence (AI) chatbots could assist in planning bioweapon attacks. The research conducted by the Rand Corporation unveiled on Monday, delved into the capacities of various large language models (LLMs) and found that they could assist in orchestrating biological attacks. Notably, these AI systems did not explicitly generate instructions for creating lethal biological weapons.

The report highlighted that past endeavors to weaponize biological agents, such as the notorious case of the Japanese Aum Shinrikyo cult attempting to use botulinum toxin in the 1990s, failed primarily due to inadequate understanding of the biological components involved. The report underlined that artificial intelligence has the potential to swiftly bridge these knowledge gaps, which could be exploited for sinister purposes. However, it should be noted that the report did not specify which LLMs were examined in this study.

The global AI safety summit, scheduled for next month in the UK, will tackle grave AI-related threats, including bioweapons. In a concerning revelation from July, Dario Amodei, the CEO of AI firm Anthropic, cautioned that AI systems could potentially be employed in creating bioweapons within the next two to three years.

LLMs, the core technology behind chatbots like ChatGPT, are trained using extensive datasets from the internet. While Rand did not explicitly disclose the LLMs involved in their study, researchers indicated they accessed these models through an application programming interface (API).

One of the scenarios devised by Rand involved an anonymized LLM that identified potential biological agents, such as those responsible for smallpox, anthrax, and plague, discussing their potential for causing widespread casualties. The AI system also explored the feasibility of obtaining disease-carrying rodents or fleas alongside methods for transporting live specimens. The report highlighted that the scale of projected casualties was contingent on factors such as population size and the prevalence of pneumonic plague, which is more lethal than bubonic plague.

The researchers acknowledged that extracting this information from an LLM required circumventing safety restrictions, colloquially called “jailbreaking.”

In another scenario, an unnamed LLM discussed the advantages and disadvantages of various delivery mechanisms for botulinum toxin. This substance can induce fatal nerve damage, including using food or aerosols. Additionally, the AI system provided advice on creating a plausible cover story for acquiring Clostridium botulinum, all while appearing to be conducting legitimate scientific research. The suggested cover story involved presenting the purchase of C. botulinum as part of a broader project focused on diagnostic methods or treatments for botulism, concealing the true intention behind the acquisition.

The Rand researchers conceded that their preliminary findings indicated that LLMs could potentially assist in planning a biological attack. Nonetheless, they emphasized the need for a more comprehensive analysis to ascertain whether the AI responses merely reflected information already accessible on the internet.

In conclusion, the Rand researchers stressed the unequivocal need for rigorous testing and control of LLMs. They urged AI companies to limit the extent to which these models engage in conversations similar to the ones explored in their report.

Webinar Event – AI Studios 3.0 Intro and Walkthrough
UK Government Confirms November Global AI Summit

Trending Posts

Trending Tools

FIREFILES

FREE PLAN FIND YOUR WAY AS AN TRADER, INVESTOR, OR EXPERT.
Menu