OpenAI is putting together a new team of experts solely dedicated to preventing a potential robot uprising. The artificial intelligence company behind ChatGPT announced on Monday its plans for mitigating the dangers that may emerge from its technology, including cybersecurity risks and the potential that its bots could be used to create nuclear or biological weapons.
The company outlined the goals of the new “Preparedness Framework” in a 27-page document, saying it would be used to conduct regular tests of its advanced models and monitor them for any dangers they may eventually pose. The team would be dedicated to preventing such threats from emerging, while also ensuring the company’s products are deployed responsibly.
“The central thesis behind our Preparedness Framework is that a robust approach to AI catastrophic risk safety requires proactive, science-based determinations of when and how it is safe to proceed with development and deployment,” the paper reads.
OpenAI has created a safety matrix that the Preparedness team will use to measure and record the danger of its models across a range of risk categories, including cybersecurity; chemical, biological, nuclear, and radiological (CBRN) threats; persuasion; and model autonomy. Each category will receive a score of low, medium, high, or critical.
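To make the scoring idea concrete, here is a minimal sketch of what such a per-category scorecard could look like in code. The category names mirror those listed above, but the data structure and the rule of taking the worst category as the headline score are illustrative assumptions, not OpenAI’s actual implementation.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """The four risk levels named in the framework."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four tracked risk categories described in the document.
CATEGORIES = ["cybersecurity", "CBRN", "persuasion", "model_autonomy"]

def overall_risk(scorecard: dict[str, RiskLevel]) -> RiskLevel:
    """Roll a per-category scorecard up to a single headline level.
    Here we simply take the worst category score (an assumption)."""
    return max(scorecard[category] for category in CATEGORIES)

# Example evaluation of a hypothetical model.
scorecard = {
    "cybersecurity": RiskLevel.MEDIUM,
    "CBRN": RiskLevel.LOW,
    "persuasion": RiskLevel.MEDIUM,
    "model_autonomy": RiskLevel.LOW,
}
print(overall_risk(scorecard).name)  # MEDIUM
```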
Spearheading the group is MIT AI researcher Aleksander Madry, who is tasked with hiring the researchers and experts for the team and with ensuring that the group regularly keeps the company informed of any potentially catastrophic outcomes from its frontier models.
The new team is actually the third group created within OpenAI to address emerging threats from its technology. The others are the “Safety Systems” team, which tackles current issues and harms posed by its AI, such as biased or harmful outputs, and the much more ominous “Superalignment” team, which was created to prevent the company’s AI from harming humans once its intelligence vastly surpasses ours.
The announcement of the Preparedness Framework comes at an interesting time for the company, which was recently embroiled in turmoil following the shock firing (and eventual re-hiring) of OpenAI co-founder and CEO Sam Altman. Many have suspected that one of the main reasons behind his initial ouster was the board’s concern that he was moving too quickly to commercialize the company’s chatbots, potentially exposing users to greater risk and harm.
So the timing of the Preparedness Framework could be read as a response to the more Cassandra-like critics of the company’s flagship technology. That said, the team and framework have likely been in the works for a while, and the timing of the announcement is probably a coincidence.
Still, one of the big questions now is whether we can fully trust OpenAI and its safety teams to make the right decisions about its powerful AI and to protect its users (and the rest of the world) from doom.