AI companies' eval reports mostly don't support their claims
Published on June 9, 2025 1:00 PM GMT

AI companies claim that their models are safe on the basis of dangerous capability evaluations. OpenAI, Google DeepMind, and Anthropic publish reports intended to show their eval results and explain why those results imply that the models' capabilities aren't too dangerous.[1]

Unfortunately, the reports mostly don't support the companies' claims. Crucially, the companies usually don't explain why they think the results, which often seem strong, actually indicate safety, especially for biothreat and cyber capabilities. (Additionally, the companies are undereliciting and thus underestimating their models' capabilities, and they don't share enough information for people on the outside to tell how bad this is.)

Bad explanation/contextualization

https://aisafetyclaims.org/companies/openai/o3/chembio