TrustMeBro desk
Sunday, April 5, 2026
💰 business

Big AI Labs Flunk Safety Test: Meta, xAI Get Worst Grades

A new safety index reveals that Meta, DeepSeek, and xAI received the lowest possible grades. Even top labs like OpenAI barely scored a C.

Source: Fortune

## What’s Happening

Major AI labs including Meta, DeepSeek, and xAI have reportedly received ‘some of the worst grades possible’ on a new existential safety index, highlighting significant concerns about how these powerful AI systems are being developed. Meanwhile, industry leaders Anthropic, OpenAI, and Google DeepMind secured the top three spots. Even so, their best scores were only a C+ or C, indicating a widespread lack of top-tier safety protocols across the industry.

## Why This Matters

These low safety scores are ‘kind of jarring’ because they come from the companies building some of the most advanced AI models, raising serious questions about responsible innovation. Existential safety refers to the risks AI poses to humanity’s long-term survival, from autonomous systems making critical errors to the potential for uncontrollable superintelligence. Poor grades suggest these labs may not be adequately addressing these profound dangers. The situation matters for several key reasons:

  • Public trust: Low safety scores erode public confidence in AI developers and in the technology itself.
  • Regulatory scrutiny: The results could trigger increased government oversight and calls for stricter regulation of AI development.
  • Future risks: Inadequate safety measures now could lead to unforeseen and potentially catastrophic consequences as AI becomes more powerful.

## The Bottom Line

The findings from this existential safety index paint a concerning picture: even leading AI companies are struggling to implement strong safety measures. With AI advancing rapidly, are we prioritizing innovation over the fundamental safeguards our collective future requires?
