A Right to Warn about AI

June 5, 2024

Yesterday, current and former members of OpenAI and Google posted an open letter about the risks of AI and the companies developing it:

We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.

We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. […]

That escalated quickly.

AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.

Holding out hope that the government, especially here in the US, is going to sweep in and do a great job regulating this industry seems like a fool's errand. What in our recent history suggests the government is capable of it?

I agree with Casey Newton, on Mastodon:

There’s yet another open letter from the AI safety crowd. If they want more people to take them seriously, they need to get more specific