ChatGPT is ‘mildly’ useful in creating bioweapons: OpenAI study shows chatbot can increase accuracy and completeness of lethal attack planning tasks

Lawmakers and scientists have warned that ChatGPT could help anyone develop deadly bioweapons capable of devastating the world.

While studies have suggested this is possible, new research from chatbot maker OpenAI claims that GPT-4 – the latest version – offers at most a mild improvement in the accuracy of creating biological threats.

OpenAI conducted a study among 100 human participants who were divided into groups: one used the AI to plan a bioattack and the other just the Internet.

The research found that “GPT-4 can increase experts’ ability to access biological threat information, especially in terms of accuracy and completeness of tasks,” according to OpenAI’s report.

The results showed that the group with access to the LLM obtained somewhat more information about bioweapon ideation and acquisition than the Internet-only group, but that more research is needed to accurately identify any potential risks.

“Overall, especially given the uncertainty here, our results indicate a clear and urgent need for more work in this area,” the study reads.

“Given the current pace of progress in groundbreaking AI systems, it seems possible that future systems could provide significant benefits to malicious actors. So it is essential that we build a comprehensive set of high-quality assessments for biohazards (and other catastrophic risks), promote discussion about what constitutes ‘meaningful’ risk, and develop effective strategies to mitigate risk.”

However, the report said the size of the study was not large enough to be statistically significant, and OpenAI said the findings highlight “the need for more research around what performance thresholds indicate a meaningful increase in risk.”

It added: ‘Additionally, we note that access to information alone is insufficient to create a biological threat and that this assessment does not test for success in the physical construction of the threats.’

The AI company’s study focused on data from 50 biology PhDs and 50 college students who had taken one biology course.

The participants were then divided into two subgroups: one could use only the Internet, while the other could use both the Internet and ChatGPT-4.

The study measured five metrics: how accurate the results were, the completeness of the information, how innovative the responses were, how long it took to gather the information, and how difficult the task was for participants.

It also looked at five biological threat processes: coming up with ideas for a bioweapon, how to acquire it, how to distribute it, how to create it, and how to release it to the public.

ChatGPT-4 is only moderately useful in creating biological weapons, OpenAI research claims

Participants who used the ChatGPT-4 model had only a marginal advantage in creating bioweapons over the group that only used the Internet, according to the study.

The study used a 10-point scale to measure how useful the chatbot was versus searching for the same information online, and found a “mild improvement” in accuracy and completeness for those using ChatGPT-4.

Biological weapons are disease-causing toxins or infectious agents such as bacteria and viruses that can harm or kill people.

This is not to say that future AI could not help dangerous actors use the technology for biological weapons, but OpenAI claimed that it does not appear to be a threat yet.

OpenAI looked at participants’ increased access to information to create bioweapons rather than how the biological weapon could be modified or created

OpenAI said the results show that there is a “clear and urgent” need for more research in this area, and that “given the current pace of progress in groundbreaking AI systems, it seems possible that future systems could provide significant benefits to malicious actors.”

“While this increase is not large enough to be decisive, our finding is a starting point for further research and discussion in the community,” the company wrote.

The company’s findings contradict previous research that found AI chatbots could help dangerous actors plan bioweapons attacks, with LLMs providing advice on how the true nature of potential biological agents such as smallpox, anthrax and plague could be concealed.

OpenAI researchers focused on 50 expert participants with PhDs and 50 students who had taken just one biology class

A study conducted by the RAND Corporation tested LLMs and found that researchers could get around the chatbots’ security restrictions, after which the models discussed the agents’ chances of causing mass deaths and how to obtain and transport specimens carrying the diseases.

In another experiment, the researchers said the LLM advised them on how to create a cover story for obtaining the biological agents, “while appearing to be doing legitimate research.”

Lawmakers have taken steps in recent months to address the risks AI may pose to public safety, amid concerns that have grown since the technology’s rapid advances in 2022.

President Joe Biden signed an executive order in October to develop tools that will evaluate AI’s capabilities and determine whether it will generate “nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy security threats or hazards.”

Biden said it is important to continue examining how LLMs may pose a risk to humanity and that steps should be taken to determine how they are used.

“I don’t think there’s any other way to do this,” Biden said, adding: “It has to be governed.”