The Shift in Rhetoric on AI and Biothreats Is a Lesson on the Risks of Premature Regulation

April 30, 2024

The about-face the scientific, academic, and tech communities have made on the risks of large language models (LLMs) creating biothreats should serve as a stark reminder for policymakers about the pitfalls of premature regulation.

When LLMs were reaching the heights of their popularity last year, experts warned policymakers about how these tools would make bioterrorism “horrifyingly easy.” A 2023 report from the Nuclear Threat Initiative, a think tank focused on reducing nuclear and biological threats, said that “experts broadly agree that while offering benefits, these capabilities will enable a wide range of users to engineer biology and are therefore also likely to pose some biosecurity risks.” One of the most serious concerns raised was that LLMs could provide malicious actors with better access to scientific concepts and step-by-step instructions on how to acquire, synthesize, and spread dangerous pathogens like Ebola, potentially leading to a pandemic. Chatbots could potentially enable scientifically inexperienced users to gather scientific information so easily that the entire biosecurity landscape would need an immediate overhaul.

This early rhetoric fueled calls for heavy-handed government regulation. Unsurprisingly, biothreats were cited as a main concern when some legislators called for creating a centralized federal agency with broad oversight over AI developers. The CEO of OpenAI, Sam Altman, called for regulation of AI models “that could help create novel biological agents.” And the CEO of Anthropic, Dario Amodei, testified to Congress that without additional guardrails, in the next few years, AI could “greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.” Some experts even called for banning open-source LLMs altogether.

But now, having studied the issue more closely, experts have reached a different assessment of the biothreat risks from LLMs: these models neither introduce new capabilities nor significantly enhance existing ones in ways that could enable a biological attack or empower malicious actors. Experts agree that treating AI chatbots as the sole gatekeepers of dangerous information overstates how hard that information already is to obtain elsewhere. For example, a new RAND report from January 2024 essentially reverses the tone of its initial assessment from the year before, concluding that the “current generation of large language models (LLMs)…do not increase the risk of a biological weapons attack by a non-state actor.”

In another recent study from OpenAI involving both biology experts and students, researchers found that GPT-4 provides at most a mild uplift in biological threat creation accuracy. This study supports the assessment of a 2024 report from the National Security Commission on Emerging Biotechnology, a congressionally mandated commission, that “[LLMs do] not significantly increase the risk of the creation of a bioweapon.”

Policymakers should note that implementing early proposals for AI regulation would not have addressed the core concerns regarding biosecurity and, worse, would have seriously stifled innovation. Banning open-source LLMs, for instance, would have been a severe overreaction. Allowing the alarmist dust to settle revealed how brittle the early consensus was and made room for a more nuanced understanding of the risks related to LLMs. Indeed, the conversation is shifting towards clarifying and strengthening existing policies related to biosecurity and biosafety oversight—policies which are technology neutral. The risk that threat actors will obtain technology and use it for malicious purposes will always exist, and these policies mitigate that risk regardless of which technology such actors try to exploit.

New technologies often stoke fears among policy experts about potential risks and security issues. And while examining possible unintended impacts is important, so is a balanced and evidence-based approach to policymaking. Policymakers should continue to avoid knee-jerk reactions to emerging technologies.
