Discussion about this post

Linch

Hi Nate,

Long-time fan. I recently wrote a guide to AI catastrophe that I think is superior to existing offerings (it assumes fewer things, makes the case more rigorously, is intended for a lay audience, and doesn't have weird quirks or science fiction):

https://linch.substack.com/p/simplest-case-ai-catastrophe

Many of your subscribers likely have a vague understanding that some experts and professionals in the field believe that AI catastrophe (the type that kills you, not just causes job loss) is likely, but may not have a good understanding of why. I hope this guide makes things clearer.

I don't think the case I sketched out above is *overwhelmingly* likely, but I want to make clear that many of the top people who worked on AI in the past, like Geoffrey Hinton and Yoshua Bengio, think it's pretty plausible, not just a small tail risk we might want to spend some resources on mitigating, like asteroids.

Calvin P

As a programmer, I think the disruption is coming, and coming quickly. The latest Anthropic and OpenAI models can literally code faster, and mostly better, than me with skilled usage. It wouldn't take much for them to code faster and better than me with unskilled usage.

Recursive self-improvement isn't theoretical; it has already started. Both Anthropic's and OpenAI's latest models were mostly coded by earlier versions of themselves.
