METR has published a standard way to define tasks for evaluating the capabilities of AI agents.
A summary of what METR accomplished in 2023 – our first full year of operation.
METR (formerly ARC Evals) is looking for (1) ideas, (2) detailed specifications, and (3) well-tested implementations of tasks to measure the performance of autonomous LLM agents.
ARC Evals is wrapping up our incubation period at ARC and spinning off into our own standalone nonprofit.
We describe the basic components of Responsible Scaling Policies (RSPs) as well as why we find them promising for reducing catastrophic risks from AI.
ARC Evals plans to spin out from the Alignment Research Center (ARC) in the coming months and become its own standalone organization.