Open roles
Join us in building scientific methods to assess AI capabilities, risks and mitigations.
We develop the underlying methodologies for tracking AI risk from loss of control. This includes creating ways to measure what AI systems can do autonomously, studying AI systems' ability to evade monitoring, and publishing comprehensive risk assessments.
We've also contributed to standards for evaluation methodology (including STREAM and AEF-1) and helped propose frontier safety policies, a type of governance approach now adopted by most major AI developers.
After an initial resume screen, we send you a code screen and invite you to chat with someone at METR. Next come 1-3 take-home tasks similar to the work you'd do in the role, followed by an interview about your experience. The final stage is a one-day work trial at our office in Berkeley, where you work directly with our team. This gives both you and the hiring manager a real sense of what working together is like. We reimburse all travel and lodging expenses for trials, and pay at or above our rate for a permanent employee in that role. (Payments may be subject to visa and other restrictions.)