About Us

We’re dedicated to understanding and reducing risks from advanced AI. Here’s what we’ve done so far.

01

Partnerships

02

Our Work

03

What we do

We assess the risks posed by advanced AI systems. Our work spans four areas: the Elicitation Project, Tasks, Threat Modeling, and Responsible Scaling Policies.

Elicitation Project

Tasks

Threat Modeling

Responsible Scaling Policies

04

AI’s transformative potential

We believe that at some point, AIs will probably be able to do most of what humans can do, including developing new technologies; starting businesses and making money; finding new cybersecurity exploits and fixes; and more.

We think it’s very plausible that AI systems could end up misaligned: pursuing goals that are at odds with a thriving civilization. This could result from a deliberate effort to cause chaos[1] or, more likely, happen in spite of efforts to make systems safe[2]. If we do end up with misaligned AI systems, their number, capabilities and speed could cause enormous harm, plausibly a global catastrophe. If such systems could also autonomously replicate and resist attempts to shut them down, it would seem difficult to put an upper limit on the potential damage.

Given how quickly things could play out, we don’t think it’s good enough to “wait and see” whether there are dangers.

We believe in vigilant, continual risk assessment. If an AI system poses a significant risk of global catastrophe, the decision to develop or release it can’t rest solely with the company that creates it.