OpenAI introduces Preparedness team in move to counter potential risks of future AI models

OpenAI is developing its approach to catastrophic risk preparedness in response to the potential dangers of frontier AI technology.


In a proactive move against the potentially catastrophic risks posed by frontier AI technology, OpenAI is developing its approach to risk preparedness, establishing a new team and launching a challenge.

As OpenAI reported in October 2023, the initiative is aligned with its mission to build safe Artificial General Intelligence (AGI) by addressing the broad spectrum of safety risks related to AI.

OpenAI's underlying belief is that frontier AI models, future systems exceeding the capabilities of today's top-tier models, hold the potential to bring myriad benefits to humanity.

However, OpenAI acknowledges the increasingly severe risks these models could pose. Its objective is to manage those risks by understanding the potential dangers of frontier AI systems when misused, now and in the future, and by building a robust framework for monitoring, evaluating, predicting, and protecting against their dangerous capabilities.

As part of its risk mitigation strategy, OpenAI is forming a new team called Preparedness. According to OpenAI's announcement, the team will be headed by Aleksander Madry and will focus on capability evaluation, internal red teaming, and assessment of frontier models.

The scope of its work will range from models in near-term development to those with AGI-level capabilities. The Preparedness team's mission encompasses tracking, evaluating, forecasting, and protecting against catastrophic risks across several categories, including individualized persuasion; cybersecurity; chemical, biological, radiological, and nuclear (CBRN) threats; and autonomous replication and adaptation (ARA).

Moreover, the Preparedness team’s responsibilities include developing and maintaining a Risk-Informed Development Policy (RDP). This policy will detail OpenAI’s approach to developing rigorous evaluations and monitoring frontier model capabilities, creating a spectrum of protective actions, and establishing a governance structure for accountability and oversight across the development process.

The RDP is designed to extend OpenAI’s existing risk mitigation work, contributing to new systems’ safety and alignment before and after deployment.

OpenAI also seeks to reinforce the Preparedness effort by launching an AI Preparedness Challenge focused on preventing catastrophic misuse. The challenge aims to surface less obvious areas of potential concern and to help build the team.

OpenAI will award $25,000 in API credits to each of up to 10 top submissions, publish novel ideas and entries, and scout for Preparedness candidates among the challenge's top contenders.

As frontier AI technologies evolve, OpenAI’s initiative underscores the need for stringent risk management strategies in the AI sector, bringing to light the importance of preparedness in the face of potential catastrophic misuse of these powerful tools.
