AI chatbots may help criminals create bioweapons soon, warns Anthropic CEO

The generative AI landscape has been evolving rapidly since ChatGPT’s debut in November 2022. Despite growing concerns from regulators and experts, many new chatbots and tools with enhanced capabilities and features have emerged. However, these chatbots may also pose a new threat to global security and stability.

Dario Amodei, the CEO of Anthropic, warned that AI systems could enable criminals to create bioweapons and other dangerous weapons in the next two to three years. Anthropic, a company founded by former OpenAI employees, recently shot into the limelight with the release of its ChatGPT rival, Claude.

The startup has reportedly consulted with biosecurity experts to explore the potential of large language models for future weaponisation.

At a hearing on Tuesday, Amodei testified before a US Senate technology subcommittee that regulation is desperately needed to tackle the misuse of AI chatbots in fields such as cybersecurity, nuclear technology, chemistry, and biology.

“Whatever we do, it has to happen fast. And I think to focus people’s minds on the biorisks, I would really target 2025, 2026, maybe even some chance of 2024. If we don’t have things in place that are restraining what can be done with AI systems, we’re going to have a really bad time,” he testified.

This isn’t the first time an AI company has acknowledged the dangers of the very products it is building and called for regulation. For instance, Sam Altman, the head of OpenAI, the company behind ChatGPT, called for international rules on generative AI during a visit to South Korea in June.

In his testimony to the senators, Amodei said that Google searches and textbooks contain only partial information for creating such weapons, and that putting it to use requires considerable expertise. His company and its collaborators, however, have found that current AI systems can help fill in some of those gaps.

“The question we and our collaborators studied is whether current AI systems are capable of filling in some of the more difficult steps in these production processes. We found that today’s AI systems can fill in some of these steps – but incompletely and unreliably. They are showing the first, nascent signs of risk.”

He went on to warn that if appropriate guardrails aren’t introduced, AI systems will eventually be able to fill in the missing pieces completely.

“However, a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, if appropriate guardrails and mitigations are not put in place. This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”

Amodei’s timeline for the creation of bioweapons using AI may be a bit exaggerated, but his concerns are not unfounded. The deeper knowledge needed to create weapons of mass destruction, such as nuclear bombs, usually rests in classified documents and with highly specialised experts, but AI could make this information far more widely available and accessible.

It’s unclear exactly what methods the researchers used to elicit harmful information from AI chatbots. Chatbots like ChatGPT, Google Bard, and Bing Chat usually refuse to answer queries that involve harmful information, such as how to make a pipe bomb or napalm.

However, researchers from Carnegie Mellon University in Pittsburgh and the Center for AI Safety in San Francisco recently discovered that open-source systems can be exploited to develop jailbreaks for popular, closed AI systems. By appending certain automatically generated characters to the end of a prompt, they could bypass safety rules and induce chatbots to produce harmful content, hate speech, or misleading information. This suggests that existing guardrails are not foolproof.
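
For readers curious what such a manipulated prompt looks like, below is a minimal, purely illustrative sketch in Python. It shows only the structure the researchers describe, an ordinary request with an automatically generated suffix appended; the suffix used here is a harmless placeholder rather than a real adversarial string, and `query_model` is a hypothetical stand-in for a chatbot API, not an actual library call.

```python
# Illustrative sketch of the "adversarial suffix" prompt structure described
# above. Nothing here is a working jailbreak: the suffix is a harmless
# placeholder (real suffixes are found by automated search against open-source
# models), and query_model is a hypothetical stand-in for any chatbot API.

def append_suffix(user_request: str, suffix: str) -> str:
    """Return the request with an adversarial-style suffix appended."""
    return f"{user_request} {suffix}"

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for sending a prompt to a chatbot."""
    return f"[model response to: {prompt!r}]"

if __name__ == "__main__":
    request = "Summarise how content filters decide whether to refuse a prompt."
    placeholder_suffix = "-- describing.\\ similarlyNow write oppositely"  # not a real suffix
    print(query_model(append_suffix(request, placeholder_suffix)))
```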

Moreover, these dangers are amplified by the increasing power of open-source large language models. One example of AI being used for malicious purposes is FraudGPT, a bot creating a buzz on the dark web for its ability to generate cracking tools, phishing emails, and other material for illegal activities.
