
A new report by the non-profit policy think tank RAND Corporation warns that terrorists could learn to plan a biological attack with the help of a generative AI chatbot. The report notes that while the large language models used in the research did not give specific instructions for creating a biological weapon, their responses could help plan such an attack when elicited with jailbreaking prompts. Researchers used jailbreaking techniques to get the models to discuss how to cause a mass-casualty biological attack using various agents, including smallpox, anthrax, and the bubonic plague. The teams examining the risk of misuse were divided into groups: one using the internet only, a second using the internet and an unnamed LLM, and a third using the internet and a different unnamed LLM. The researchers emphasized the need for red teams to evaluate AI models regularly in order to identify and mitigate risk. Assisting with plotting terror attacks is just one of the issues plaguing generative AI tools.

This is bad news, as it shows how generative AI tools can be misused for malicious purposes. AI models are becoming increasingly advanced and security features are being added, but malicious actors can still coax chatbots into giving “problematic” responses. The RAND Corporation researchers highlighted the need for rigorous testing of AI models to identify and mitigate these risks.

#AISecurity #Biotechnology #MaliciousActors #RANDCorporation

You can read more about this topic here: Decrypt: AI Chatbots Could be Accomplices in Terrorism: Report
