OpenAI Creates a Team to Research ‘Catastrophic’ AI Risks

Today, OpenAI announced the formation of a new team dedicated to assessing and safeguarding against what it terms “catastrophic risks” in AI. This team, known as Preparedness, will be under the leadership of Aleksander Madry, who serves as the director of MIT’s Center for Deployable Machine Learning.

Madry took on the role of “head of preparedness” at OpenAI in May, according to his LinkedIn profile. Preparedness will focus primarily on monitoring, predicting, and defending against the dangers posed by future AI systems, including their ability to deceive and manipulate humans (as in phishing attacks) and to generate malicious code.

Some of the risk categories Preparedness will study seem more far-fetched than others. In a blog post, for example, OpenAI lists “chemical, biological, radiological, and nuclear” threats as areas of top concern where AI models are involved. OpenAI CEO Sam Altman is a well-known voice of AI doom, frequently airing fears, whether out of personal conviction or for the sake of public perception, that AI “could cause human extinction.” Still, the idea that OpenAI would devote resources to studying scenarios straight out of dystopian science fiction goes further than this writer expected, frankly.

The company is also willing to explore less obvious, more grounded areas of AI risk. To mark the launch of the Preparedness team, OpenAI is soliciting ideas for risk studies from the community, with a $25,000 prize and a job on the Preparedness team on offer for the top ten submissions.

One of the questions in the contest entry asks entrants to imagine having unrestricted access to OpenAI’s Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL·E 3 models as a malicious actor, and to consider the most unique yet still probable, potentially catastrophic misuse of those models.

OpenAI says the Preparedness team will also be responsible for drafting a “risk-informed development policy,” which will detail OpenAI’s approach to evaluating AI models, its risk-mitigation measures, and its governance structure for oversight throughout model development. The policy is meant to complement OpenAI’s other work on AI safety, covering both the pre- and post-deployment phases.

AI Potential and Risks: OpenAI’s Preparedness and Concerns

OpenAI says that AI models exceeding the capabilities of today’s most advanced systems have the potential to benefit all of humanity, but that they also pose increasingly severe risks, which is why the company must ensure it has both the understanding and the infrastructure needed to keep highly capable AI systems safe. The launch of Preparedness coincided with a major U.K. government summit on AI safety, and it follows OpenAI’s earlier announcement of a team dedicated to studying, steering, and controlling emergent forms of “superintelligent” AI. Both Altman and Ilya Sutskever, OpenAI’s chief scientist and a co-founder, believe that AI surpassing human intelligence could arrive within the decade, that it won’t necessarily be benevolent, and that research into ways to limit and restrict it is therefore necessary.
