Discussion about this post

Linch

Hi Nate,

Long-time fan. I recently wrote a guide to AI catastrophe that I think is superior to existing offerings (it assumes fewer things, makes the case more rigorously, is intended for a lay audience, doesn't have weird quirks or science fiction, etc.):

https://linch.substack.com/p/simplest-case-ai-catastrophe

Many of your subscribers likely have a vague understanding that some experts and professionals in the field believe an AI catastrophe (the type that kills you, not just causes job loss) is likely, but may not have a good understanding of why. I hope this guide makes things clearer.

I don't think the case I sketched out above is *overwhelmingly* likely, but I want to make it clear that many of the top people who worked on AI in the past, like Geoffrey Hinton and Yoshua Bengio, think it's pretty plausible, not just a small tail risk we might want to spend a bit of resources on mitigating, like asteroids.

Wesley

Nate, I'm pretty sure the AI companies are “Theranos”ing all of us at this point (okay, that’s a little extreme, I don’t think it’s outright fraud, but definitely overoptimism for the sake of pulling in investments). Like obviously LLMs are real and tangible, but the progress is becoming increasingly slow and the AI companies aren’t eager to own up to that fact. To take a rather infamous old example, strapping a calculator to a model because it doesn’t know how to do math doesn’t fundamentally fix the underlying issue causing the model to suck at math. Are these models still getting appreciably better or are they just taping a band-aid over a gaping wound over and over again?

I use AI for literature searching, primarily because search engines have become so terrible that only the AI models built into the search engines can turn anything up reliably. EVERY TIME, there is obviously incorrect information in the write-up. Just patently incorrect. Sometimes it doesn’t even follow the assertions made earlier. The papers it turns up are fine, summarising a single paper is fine, but the moment you ask an LLM for a synthesis of multiple sources, the wheels inevitably come off. The “check this response” feature won’t even turn up most errors. It’ll find one line it can verify, mark that, and ignore the other factual assertions. How does this tech cause massive, long-term technological disruption if the credibility of its output is worse than a coin flip?

Well, there is one way: dependency. It doesn’t matter if the models are good or not if students are trained to be dependent on them, but that’s not a singularity.

I’m also still waiting to hear what their plan is to create new training sets. Most of the improvements so far have been from sucking in more training data. The problem is that, at this point, a large amount of new content online is AI-generated. We know that, for some reason (actually a fairly well-understood one), training an AI on AI output causes model collapse. If you can’t separate AI output from human output, how can you create a pure training set that you haven’t already used?
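(A toy sketch of that collapse mechanism, purely illustrative and not from this thread: the Gaussian setup, sample size, and generation count below are assumptions. Fit a distribution to a small sample, resample from the fit, refit, and repeat; each finite sample under-represents the tails, so the fitted spread drifts toward zero.)

```python
# Hypothetical illustration of model collapse: each "generation" is trained
# only on samples produced by the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 20      # small per-generation "training set" (assumed value)
GENERATIONS = 200   # assumed number of self-training rounds

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=N_SAMPLES)

for gen in range(1, GENERATIONS + 1):
    mu, sigma = data.mean(), data.std()
    # The next generation sees only the previous generation's output,
    # simulated here by sampling from the fitted Gaussian.
    data = rng.normal(loc=mu, scale=sigma, size=N_SAMPLES)
    if gen % 50 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.4f}")

# The fitted std trends toward zero across generations: the diversity
# (especially the tails) of the original data is progressively lost.
```

With a larger sample per generation the shrinkage is slower, but the qualitative effect is the same: the statistics of the generated data drift away from the original distribution.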

This “no-ceiling” rhetoric may end up being correct, but the negative case is compelling to me and is not frequently considered (beyond the shallow chatbot critique you call out). Obviously big tech isn’t interested in that argument because they’re financially entangled with the AI firms, which means that all of the “experts” aren’t really able to be objective on the question. (As an aside, when you frame the question as whether OpenAI will *announce* AGI, I think 16% is fair odds, based on how many times in history the definition of AI has changed and the number of times a company has announced something it didn’t actually create.)

9 more comments...
