No AI maker scored higher than a C+ on efforts to protect humanity, according to a new report card

By Arina Makeeva

The rapid evolution of artificial intelligence (AI) has transformed the technological landscape, but with great power comes great responsibility. A new report card from the Future of Life Institute examines how well AI companies are managing the risks posed by this powerful technology. The findings are alarming: no company scored higher than a C+ in its efforts to protect humanity from AI's possible harms.

The report emphasizes that while AI is becoming increasingly integrated into daily life, from chatbots offering mental health support to systems exploited in cyberattacks, the risks are more pronounced than ever. Documented harms already range from AI-enabled cryptocurrency scams to the creation of autonomous weaponry. Yet the report finds that these companies consistently fall short of genuinely prioritizing safety measures.

Max Tegmark, president of the Future of Life Institute and a professor at MIT, highlights a striking issue: AI is currently the only industry in the U.S. producing powerful technology without stable regulation. This lack of oversight fosters a competitive environment that discourages companies from prioritizing responsible practices. In effect, firms may be racing to innovate while neglecting the essential commitment to safeguard humanity.

The report card results are disappointing. The highest scores, both C+, went to two notable AI firms: OpenAI, known for its popular ChatGPT, and Anthropic, maker of the chatbot Claude. Google DeepMind followed with a C. Things took a turn for the worse with Meta and xAI, which each received a D, as did the Chinese firms Z.ai and DeepSeek, while Alibaba Cloud received the lowest grade, a D-.

These grades are based on 35 distinct indicators spanning six key categories, including existential safety, risk assessment, and information sharing. The AI Safety Index combined publicly available materials with responses to a survey distributed to the companies. The evaluation was carried out by a panel of eight AI experts, including scholars and leaders of AI organizations.

One concerning detail highlighted in the report is that all firms, regardless of their scores, fell short in the existential safety category, which assesses mechanisms for internal monitoring and control interventions. The report notes that as companies race toward artificial general intelligence (AGI) and superintelligence, the prevailing theme is one of inadequate preparation for preventing catastrophic outcomes. The absence of credible plans to mitigate the risks of AGI raises significant red flags.

In their defense, both OpenAI and Google DeepMind have publicly reaffirmed their commitment to AI safety. OpenAI says safety is a core principle in how it develops and deploys its models, pointing to substantial investment in frontier safety research and rigorous testing both internally and by independent expert evaluators. The company adds that it shares its safety frameworks and evaluations to raise industry standards.

Similarly, Google DeepMind reassured stakeholders of its commitment to safety through a rigorous, science-led approach, aiming to ensure that as AI systems grow more capable, appropriate safeguards are in place to prevent misuse and unintended harm.

The findings from the AI Safety Index are not just academic; they carry significant implications for business leaders, product developers, and investors in the technology sector. As AI evolves, safety may become a deciding factor in corporate reputation, market sustainability, and regulatory acceptance. The call to action is clear: the AI industry must recalibrate its priorities and place human safety at the forefront to secure a responsible future for this burgeoning field.
