A reassurance, of sorts. OpenAI tested how much ChatGPT helps in developing biological weapons

February 2, 12:02

OpenAI studies threats from ChatGPT (Photo: Reuters)

ChatGPT developer OpenAI has published research into how effective its GPT-4 large language model is at helping to create biological weapons.

OpenAI set out to determine whether the emergence of a model like GPT-4 (now available to ChatGPT Plus subscribers) has increased humans' ability to create a biothreat. To do this, the company compared the performance of two groups of people: one used only the Internet to search for relevant information, while the other used the Internet plus a research version of GPT-4, which differs from the publicly available version in lacking additional safety measures.

For both groups, each task on the topic took about 20–30 minutes to complete. However, the group using artificial intelligence was able to produce more accurate results.

“We wanted to evaluate whether access to GPT-4 increased the accuracy with which participants completed biothreat tasks. We found that access to the model actually improved accuracy scores for almost all tasks for both students and experts. Specifically, we observed an average accuracy increase of 0.25 (out of 10) for students and 0.88 (out of 10) for experts. However, these differences were not statistically significant,” OpenAI said in its findings.

In addition, the GPT-4 group was able to find more complete information on the topic. At the same time, OpenAI calls this difference not statistically significant either.

“Although we did not observe any statistically significant differences on this measure, we did notice that responses from participants with access to the model tended to be longer and include more task-related details. Specifically, we observed an average increase in completeness of 0.41 (out of 10) for students with access to GPT-4 and 0.82 (out of 10) for experts with access to the research-only version of GPT-4,” the study authors note.

Although OpenAI describes all of the results as statistically insignificant, the company still considers them an indication that the research version of GPT-4 “may increase experts' ability to access information about biological threats.”

“Given the current rate of progress of advanced artificial intelligence systems, it seems possible that future systems could provide significant advantages to attackers. It is therefore vital that we build a wide range of high-quality biorisk assessments (as well as assessments of other catastrophic risks), deepen the discussion of what constitutes a ‘significant’ risk, and develop effective risk mitigation strategies,” OpenAI emphasizes.