The adoption of AI poses significant risks, including biased systems, the spread of misinformation, user addiction, and more severe dangers such as aiding the creation of biological or chemical weapons. Some of these risks could escalate beyond human control, making it crucial to address them.
To tackle these risks, the FutureTech group at MIT’s CSAIL, along with collaborators, has compiled the most comprehensive catalogue of AI risks to date, listing more than 700 of them. This repository, available online, aims to provide a clearer understanding of the dangers associated with advanced AI systems.
The team behind the AI Risk Repository conducted extensive research, reviewing peer-reviewed journal articles and preprints that highlight AI risks. Common concerns include AI safety and robustness, unfair bias and discrimination, and compromised privacy.
Other, more niche risks include the possibility of AI developing the capacity to feel pain or experience a form of “death.” The database underscores that most AI risks are identified post-deployment, with only 10% detected before public release.
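Because the repository is distributed as a structured dataset, a finding like the 10% figure can in principle be checked directly from the data. The sketch below, in Python with pandas, tallies risks by when they were identified; the file name, the "Timing" column, and its "Pre-deployment"/"Post-deployment" values are illustrative assumptions about the export format, not the repository's confirmed schema.

```python
# A minimal sketch of reproducing the pre- vs. post-deployment tally from a
# local export of the AI Risk Repository. The file name and the "Timing"
# column values are assumptions, not a confirmed schema.
import pandas as pd

# Load a local export of the repository (hypothetical file name).
risks = pd.read_csv("ai_risk_repository.csv")

# Count how many risks fall into each timing category.
timing_counts = risks["Timing"].value_counts(dropna=False)
print(timing_counts)

# Share of risks identified before deployment (the repository's authors
# report roughly 10%).
pre = timing_counts.get("Pre-deployment", 0)
share_pre = pre / timing_counts.sum()
print(f"Identified pre-deployment: {share_pre:.0%}")
```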
This finding challenges the current approach to AI risk evaluation, which primarily focuses on ensuring a model’s safety before deployment. Neil Thompson, director of MIT FutureTech, suggests that post-deployment monitoring of AI models should be emphasized since not all risks can be identified in advance.
Regular risk assessments after a model’s release could help manage this substantial range of dangers. Previous attempts to compile similar lists were more limited in scope and often missed much of the broader spectrum of risks AI might present.
Despite the comprehensive nature of this new database, determining which AI risks are the most concerning remains difficult due to the complex and often poorly understood workings of advanced AI systems. The database’s creators chose not to rank the risks, preferring to maintain a neutral and transparent approach.
While this neutrality promotes openness, it could also limit the database’s effectiveness. Some experts, like Stanford’s Anka Reuel, believe that simply compiling risks is no longer sufficient, arguing that actionable solutions to these risks are becoming the more pressing need.
The creators of the AI Risk Repository see it as a foundation for future research rather than a finished product. They aim to explore under-researched risks and identify gaps in AI safety knowledge, and, because the database is still evolving, they invite feedback and collaboration from the wider community. Their hope is to keep refining the repository and, ultimately, to contribute to more effective AI risk management strategies.