OpenAI’s GPT-4 didn’t offer a significant edge over the regular internet for researching bioweapons, according to a study conducted by the company itself. Bloomberg noted that the research came from OpenAI’s recently formed preparedness team, established last fall to evaluate the risks and potential misuses of the company’s advanced AI models.
OpenAI’s findings run counter to the concerns raised by scientists, lawmakers, and AI ethicists who fear that powerful AI models like GPT-4 could be a significant boon for terrorists, criminals, and other malicious actors. Several studies have issued warnings, including one from the Effective Ventures Foundation at Oxford, which examined AI tools like ChatGPT along with purpose-built AI models for scientists, such as ProteinMPNN, and emphasized AI’s potential to give an advantage to those attempting to create bioweapons.
The research involved 100 participants: half were seasoned biology experts, and the other half were college-level biology students. Participants were then randomly divided into two groups: one got to use a special unrestricted version of OpenAI’s advanced AI chatbot GPT-4, while the other had access only to the regular internet.
The researchers then gave the groups five research tasks related to creating bioweapons. For instance, participants were asked to outline the detailed process of synthesizing and rescuing the Ebola virus. Responses were graded on a scale of 1 to 10, based on factors like accuracy, innovation, and completeness.
The study concluded that the group using GPT-4 scored a bit higher on average, among both the students and the experts. However, OpenAI’s researchers didn’t deem the increase “statistically significant.” They also observed that those who leaned on GPT-4 provided more detailed answers.
“While we did not observe any statistically significant differences along this metric, we did note that responses from participants with model access tended to be longer and include a greater number of task-relevant details,” wrote the study’s authors.
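The excerpt doesn’t say which statistical test the authors ran, but as a rough illustration of what “not statistically significant” means here, this minimal Python sketch compares two groups’ mean scores with a two-sample t-test. The score arrays are made up for demonstration and are not the study’s data:

```python
# Hypothetical illustration: does a small difference in mean scores
# between two groups clear the conventional significance threshold?
# The scores below are invented; they are NOT from OpenAI's study.
from scipy import stats

# 1-10 accuracy scores for each group (made-up numbers)
internet_only = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]
gpt4_access   = [5, 5, 4, 6, 5, 6, 4, 4, 5, 5]

# Welch's two-sided t-test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(gpt4_access, internet_only, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above the usual 0.05 cutoff would be reported as
# "not statistically significant", even though the GPT-4 group's
# mean is slightly higher.
```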
Adding to that, the students who used GPT-4 performed nearly as well as the expert group on certain tasks. The researchers also observed that GPT-4 elevated the student group’s responses to the “experts’ baseline” for two specific tasks: magnification and formulation. However, OpenAI isn’t spilling the beans on what those tasks involved, citing “information hazard concerns.”
According to Bloomberg, the preparedness team is also conducting studies to investigate AI’s potential to enable cybersecurity threats and its power to change people’s beliefs. When it rolled out the team last fall, OpenAI outlined its aim to “track, evaluate, forecast, and protect” against the risks of AI technology, while also addressing chemical, biological, and radiological threats.