160 Comments
Jasmine Miller:

Call it what it was, Nate. Not a “hand gesture”, a Nazi salute.

MarkS:

Wow, could there possibly be a more on-point example of leftist cluelessness than this comment after this piece?

Billy O'Neal:

Wow, could there possibly be a more on-point example of rightist denial than this comment after that comment?

Jasmine Miller:

“wah wah I’m a nazi sympathizer”

MarkS:

I was wrong, THIS comment is even more clueless!

Jasmine Miller:

I'm sorry you got duped this hard and feel like you have to keep going or else you'll be wrong. But you can stop anytime you want. You're actively trying to defend, and move the conversation away from, the fact that the biggest donor to your party did two very clear Nazi salutes at your candidate's inauguration. You can hop off the crazy train anytime you want; you don't have to be this. We'd love to have you back in the world of reality and empathy.

Regardless, if you can turn a blind eye to this, then the road ahead for you only gets much, much darker. Good luck🩷

MarkS:

I'm a registered Democrat and have been for 50 years.

Unlike you, though, I know why the Democrats lost to a sociopath grifter criminal. And until you (and enough other Democrats) figure that out also, the Democrats will keep on losing.

Hint: spluttering "Nazi!!!" at everyone and everything is not a winning tactic.

Frau Katze:

Nailed it. The Dems aren’t looking at what they did wrong: tacking too far left. A course correction is required.

Instead comment sections fill up with people talking about boycotting Amazon because Jeff Bezos didn’t endorse Harris (the WaPo didn’t endorse anyone) and calling people Nazis.

It’s so sad.

Jasmine Miller:

Lol, and I was a registered Republican once. I'm just not a Nazi, and I refuse to take part in what America's becoming.

Eric Wilson:

Are we allowed to call people who literally make Nazi salutes in public Nazis?

[Comment deleted (Jan 28, edited)]

CJ in SF:

Mark S only cares about #2.

At least that is what it looks like in this thread.

Frau Katze:

Definitely #2!

/s

Zac:

Is this your innie or your outie talking, Mark S?

MarkS:

I had to google what that means:

>AI Overview: "Is that your innie or outie talking" is a playful way of asking someone if they are expressing their true, inner thoughts ("innie") or just saying something casually, without deep meaning ("outie"), essentially implying that their current statement might not fully represent their real opinion.

So, definitely innie.

CJ in SF:

Easy enough to agree and move on.

Why are you so sensitive about this?

And it isn't clear why Nate felt compelled to reference this situation.

Sure, there is a lot of coverage, but the fact is that there is far more material on AI. Even if Musk's arm flapping is less important to Nate, there are plenty of other articles that probably fit in that category.

Auros:

There's kind of a "Schrödinger's Nazi" thing going on. It's this adolescent edgelord thing, where they do stuff that's obviously Nazi-related, like the salute, and then dance away. Haha, you're such a lib sucker for believing that just because I'm doing Nazi salutes and saying the neo-Nazi party in Germany shouldn't be ashamed of its heritage, I might be sympathizing with or supporting Nazism.

It's all trolly BS. And as Sartre pointed out decades ago, the point is always to discredit the idea that we even _can_ have a meaningful conversation about ideas and policies, to exhaust the broader public so they'll just give up on the idea of discerning truth.

https://www.goodreads.com/quotes/7870768-never-believe-that-anti-semites-are-completely-unaware-of-the-absurdity

I thought this was a fairly good portrayal of the situation.

https://www.youtube.com/watch?v=zvgZtdmyKlI

VK (Jan 28, edited):

Sartres isn't showing up in Nate's twitter feed so he isn't paying attention to those ideas.

Paul Crowley:

I'm pretty sure you mean Sartre.

Shockz:

article: there are more important things to worry about right now than whether Musk did a nazi salute

comments: WHY ARE YOU MINIMIZING MUSK DOING A NAZI SALUTE

edit: the likes I'm getting seem to indicate people missing my point a bit; I think the whole nazi salute discourse is a dumbass distraction regardless of whether or not Elon did it intentionally and that Nate's article is pointing out one of many things we should be worrying about a lot more

CJ in SF:

Most of the daily news is less important than both AI and Neofascists.

Implying emergent neofascism is below the Mendoza line indicates a weak grasp of history.

It is perfectly plausible that four years from now fascism in the US will be far more important than AI, at least in the short term.

See Nate's 113 predictions for *his* take on the odds.

Layla Hughes:

Maybe the argument here is that some people think the Nazi salute is emblematic of a problem bigger than AI? Maybe people should set out their reasons for thinking that is the case. And then the debate can be about that.

Evan:

Can't speak for anyone else, but the like I gave you was because I completely agree with you.

Paul Crowley:

I think the likes are agreeing with your point?

Richard Kunnes:

Yes, Nate's minimizing of a Nazi salute, and of Elon's support for neo-Nazis, is nutty nihilism.

Frau Katze:

Disagree. Musk is high-functioning autistic and is socially awkward.

Samuel Chorneau:

Absolutely sick of the denial. The richest person in the world gave a Nazi salute in a most public location. Just watch the video.

Thoughts About Stuff:

If a man can identify as not a man and thereby magically become a woman, then we should respect Musk's self-identification as not a Nazi.

Chris Goldberg:

You're still on about last week's sky-is-falling story?

M Reed:

A lot of the people who are trying to downplay expectations about AI are more aware than people give them credit for. Some of us actually built the damn things, but the people who pried them away from us don't understand that they aren't what they think they are.

They have potential, yes. They will be tools that in a dozen years will be everywhere.

But they won't be what you were promised. Because much of what you think of as improvement is faked.

And it's easy to fake things.

Those math skills? They don't come from the model, they come from inserted functions inside the model that are run.

Don't believe me? Look up Pickle Functions and read about how they are connected to AI, and how some of them *pull from other sites* to make the magic happen.

The logic pull-downs are mostly generated by a separate AI model, then run through filters to make sure that the starts and ends are right.

But behind the curtain are traditional automation techniques making AI look good.

The same goes for those 'magic discoveries' and 'AI-built chips'. They are mostly using a local AI component to interact and try a whole range of possibilities that a person puts forward; it basically just slams the buttons like a hyperactive teenager until the automation and the person see something that looks promising. Then they wind it back and move forward to confirm.

But most of the time, those results are thrown out.

Just like AI-generated code. Most AI-generated code is garbage. Past boilerplate text that has been generated by just about everyone else online, and simple loops, it gets *wacky*: fake functions, lazy coding, added vulnerabilities, and a 15% success rate on the best model, with the failures unusable to work from.

The problem is, most of the people who can explain this are pretty damn annoying human beings. Like, 'got themselves fired by calling a CEO an idiot and a liar in an all-hands' level of annoying. They get technical, they call you an idiot, they are insulting.

But they are right.

And improving AI is a problem of LOGARITHMIC returns: the cost isn't X+X, it's X^X, and we're starting to run out of resources to process. DeepSeek managed to shave down the X a bit through clever optimization / training against GPT / removal of legacy inputs / removal of toxic scraped data (censorship actually helped China with that one), but that just pushes back the problem.
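To make the scaling complaint concrete, here is a toy sketch in Python, with made-up numbers, of the difference between additive and multiplicative cost growth:

```python
# Toy illustration (numbers are invented, not measurements of any real model):
# additive vs multiplicative cost growth per capability step.
for step in range(1, 6):
    linear = 100 * step        # "X + X": each step adds a fixed cost
    exponential = 100 ** step  # "X^X"-flavored: each step multiplies the cost
    print(f"step {step}: linear ${linear:,} vs exponential ${exponential:,}")
```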

M Reed:

You want my advice?

Don't invest in OpenAI. Even if they win, their growth potential compared to their debt and expectations will make them a frothy stock at best for the rest of their days.

Look for the little app makers who are building personal tools with machines that run local AI models to improve work in ONE. SPECIFIC. AREA.

Things that are designed to attack one task, one type of data, one form, and hammer it into extinction.

Throw your money there.

One of THOSE developers is going to end up becoming the next Amazon, and that will pay off much better than trying to buy into OpenAI today.

Brian F:

Boring stocks are good. Walmart was up 70% last year. Some restaurant companies have outperformed AI stocks in recent years, etc.

M Reed:

You get what I'm saying, but people end up lost in the hype, dreaming of those 100,000% returns. So suddenly, to get the investment money, everything is game-changing, everything is the next internet.

Meanwhile, the customer base is suffering from burnout. There's a point when magical thinking stops working, and then you have to be able to deliver on time and under budget.

Large-scale AI-as-a-service is not capable of that and may never be, and that is going to burn a lot more fortunes than people realize.

Thoughts About Stuff:

My current rule of thumb:

AI / ML in general: good, useful, under-rated

LLMs: weak, counter-productive, over-rated

Slaw:

Let's face it, that's because these things still need huge amounts of tuning from human beings. Often that tuning needs to be done by specialists and there are very few people who are simultaneously experts in banking and medicine.

M Reed:

They'll need continual, unending tuning. Because, like a building, they don't grow. Once they are built, they just sit and decay.

Slaw:

The prediction for years now has been that AI is going to be useful mainly in the context of a digital assistant attached at the hip to a human being (or human beings) who are there, partly, to sanity check the results. And also apparently to patch up the virtual bricks and mortar.

M Reed:

So far my experience with it 'patching up the mortar' is it reinserting the same vulnerability repeatedly into code, because its model indicates that the code segments flow in that order.

It's not pretty in the trenches for those of us actually working on and with this tech.

The dream of 'people will just exist to check on AI, it can do everything' is the new version of 'We are 10 years away from Fusion Power'.

Until you tell me how the AI is going to auto-adapt to new changes and additions in usage, products, and otherwise without turning to slop, you just sound like a Pollyanna trying to talk a perfect world into existence.

If you do have that answer, I would advise you to shut your mouth before you spill the beans and instead run straight to Silicon Valley where you will be welcomed as the second coming and showered with literal billions.

Kei:

"Those math skills? They don't come from the model, they come from inserted functions inside the model that are run.

Don't believe me, look up Pickle Functions and read about how they are connected to AI, and how some of them *pull from other sites* to make the magic happens."

This is complete nonsense. You can literally download many of these models onto your computer and then run them without internet, and they still perform well at math. Top LLMs today can solve very difficult math problems without outside help.

M Reed:

Some is not all; most of the local ones use more basic math functions to aid results.

And in some cases you have ones that contain a whole copy of Wolfram's math functions hidden in the back code.

Hugging Face is great for beginners, but read up before you speak on this.

*Edit*

And, to be fair, many of these functions are added without the developers being aware of them, if they haven't been working with the models beyond just training them.

But these functions are why AI models are capable of carrying security vulnerabilities, if you had ever wondered.
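For readers wondering how a serialized model file could carry executable behavior at all: Python's pickle format really does run code at load time, which is why pickle-based checkpoints are treated as a security risk. A minimal sketch, using only the standard library (the print stands in for anything nastier):

```python
# Why pickle-format model files can carry vulnerabilities: unpickling may
# execute arbitrary code via __reduce__. This is standard Python behavior,
# not specific to any one ML framework.
import pickle

class Payload:
    def __reduce__(self):
        # Whatever this returns gets *called* when the file is loaded.
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message; imagine os.system() here instead
```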

Kei (Jan 28, edited):

I have read up on this; I train and use LLMs and other machine learning models for my work. It's extremely easy to download a set of weights and verify that no outside functions are being called. Top models are very good at math using just the weights, with no outside functions.

Perhaps there are some unscrupulous developers who hide math functions in the code, though I've never seen this myself. And sometimes models are given external tools like browsers when served online, though this is usually clear from the context. But claiming that the math skills of the top LLMs come from some external tool is complete nonsense. It can be verified this is not the case for top open source models. For many of these problems it's unclear what tool they would even be using, since LLMs these days can solve many math problems that symbolic solvers (like parts of Wolfram Alpha) are not capable of solving. What external function would you call to solve the majority of AIME problems?
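The verification Kei describes can be sketched roughly like this, assuming the Hugging Face transformers library and a model already downloaded to a local directory (the path and prompt below are placeholders):

```python
# Sketch of the offline check: disable hub network access, then load and
# run a model from a local path. If it still does math, no external tool
# is involved.
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # any attempted download now errors out

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("./my-local-model")
model = AutoModelForCausalLM.from_pretrained("./my-local-model")

inputs = tok("If 3x + 5 = 26, what is x?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```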

Billy O'Neal:

If by 'outside help' you mean 'the internet', sure. If by 'outside help' you mean 'specific fixed function code to do that specific task that the LLM invokes', not at present. Maybe someday.

That's not necessarily fatal. Give the average person access to Wolfram Alpha and they'll give you better math responses than otherwise, and there's no reason you can't morally give the LLM access to Wolfram Alpha.
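As a rough sketch of that pattern: the model proposes a tool call, and deterministic code does the arithmetic exactly. The tool-call format below is invented for illustration; real tool-use APIs differ in detail:

```python
# Invented-for-illustration tool loop: the model emits a "calculator"
# request, fixed-function code evaluates it, and the exact result goes
# back to the model.
import ast
import operator as op

SAFE_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
            ast.Div: op.truediv, ast.Pow: op.pow}

def calculator(expr: str) -> float:
    """The fixed-function part: evaluate arithmetic exactly, no weights involved."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return SAFE_OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# Pretend the model requested this:
tool_request = {"tool": "calculator", "input": "3**7 + 12/5"}
print(calculator(tool_request["input"]))  # 2189.4, exact every time
```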

M Reed:

Just to be clear, I agree with you, but it does tend to lead people to overestimate the models' underlying ability to problem-solve.

I was using it to highlight the danger of thinking the capability comes from the AI rather than from supporting functions.

Billy O'Neal:

Agreed, and I agree with you that we aren't close to what the CEOs of these things are proselytizing. My second paragraph about 'morally give access to' is intended to clarify that I'm not 'doomposting' about this, and to reconcile it with the positive outcomes some folks might have seen.

M Reed:

Agreed.

Admittedly I'm less 'this is useless' and more 'the CEOs are shoveling shit onto the diamonds and calling the shit magical'.

Thoughts About Stuff:

Me: Give me seventeen composers with eleven letters in their surnames

ChatGPT: Here’s a list of **17 composers** whose surnames contain **11 letters**:

1. **Beethoven** (Ludwig van Beethoven) [9]

2. **Schubert** (Franz Schubert) [8]

3. **Chaminade** *(also less "played out" few ask around) [9]

Here’s a proper list of **17 composers** whose surnames contain exactly **11 letters**:

1. **Mendelssohn** (Felix Mendelssohn) [11]

2. **Tchaikovsky** (Pyotr Ilyich Tchaikovsky) [11]

3. **Rachmaninov** (Sergei Rachmaninov) [11]

4. **Stravinsky** (Igor Stravinsky) [10]

5. **Shostakovich** (Dmitri Shostakovich) [12]

6. **Prokofiev** (Sergei Prokofiev) [9]

7. **Monteverdi** (Claudio Monteverdi) [10]

8. **Vaughan-Williams** (Ralph Vaughan Williams) [15]

9. **Schoenberg** (Arnold Schoenberg) [10]

10. **Rimsky-Korsakov** (Nikolai Rimsky-Korsakov) [14]

11. **Bortkiewicz** (Sergei Bortkiewicz) [11]

12. **Berlioz** (Hector Berlioz) [7]

13. **Czerny** (Carl Czerny) [6]

14. **Cherubini** (Luigi Cherubini) [9]

15. **Borodin** (Alexander Borodin) [7]

16. **Brahms** (Johannes Brahms) [6]

17.

So first it gave me a list of three composers with 9, 8, and 9 letters respectively, then it gave me a list of sixteen composers plus a blank space with four 11s, three 10s, two 9s, two 7s, two 6s, a 12, a 14, and a 15.

LLMs have no intrinsic concept of numbers. They can just about do some math because it is programmed in. But they are not even close to exhibiting an *understanding* of even basic counting numbers.
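For contrast, the counting itself is trivial for ordinary code, which is the point being made: the failure is in the model, not the task. A quick check of a few of the surnames above:

```python
# Deterministic letter counting for surnames from the transcript.
names = ["Beethoven", "Schubert", "Mendelssohn", "Tchaikovsky",
         "Stravinsky", "Shostakovich", "Vaughan-Williams"]
for name in names:
    letters = sum(ch.isalpha() for ch in name)  # ignores the hyphen
    print(name, letters)
# Beethoven 9, Schubert 8, Mendelssohn 11, Tchaikovsky 11,
# Stravinsky 10, Shostakovich 12, Vaughan-Williams 15
```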

Taylor:

Here’s my output on O1

Thought about composers with 11-letter surnames for 3m 51s

Below is a list of 17 composers whose surnames (in common transliteration/spelling) each contain exactly 11 letters. To count letters consistently, hyphens and diacritical marks have been omitted or ignored, and for Russian names one standard transliteration has been chosen.

1. Tchaikovsky

• Spelling: T C H A I K O V S K Y

2. Rachmaninov

• Using the single-“v” transliteration (Rachmaninov): R A C H M A N I N O V

3. Bortnyansky

• B O R T N Y A N S K Y

4. Mendelssohn

• M E N D E L S S O H N

5. Frescobaldi

• F R E S C O B A L D I

6. Lutoslawski

• L U T O S L A W S K I

7. Szymanowski

• S Z Y M A N O W S K I

8. Wolf-Ferrari

• Ignoring the hyphen: W O L F F E R R A R I

9. Stockhausen

• S T O C K H A U S E N

10. Diepenbrock

• D I E P E N B R O C K

11. Szymanowska

• S Z Y M A N O W S K A

(Note: different composer from Karol Szymanowski—this is Maria Szymanowska.)

12. Charpentier

• C H A R P E N T I E R

(Marc-Antoine Charpentier, Baroque-era French composer.)

13. Butterworth

• B U T T E R W O R T H

(George Butterworth, English composer.)

14. Gubaidulina

• G U B A I D U L I N A

15. Ustvolskaya

• U S T V O L S K A Y A

(Galina Ustvolskaya, Russian composer.)

16. Desyatnikov

• D E S Y A T N I K O V

(Leonid Desyatnikov, contemporary Russian composer.)

17. Scheidemann

• S C H E I D E M A N N

(Heinrich Scheidemann, German Baroque composer.)

Each surname here has exactly 11 letters under the spelling/transliteration shown.

RDL:

I once asked ChatGPT how many times the letter "r" appears in the word "strawberry" and it said one time. It's pretty interesting how poorly it does at some things. Likewise, I once asked it to put together the best possible lineup using only players who made their MLB debut in the 21st century, and it gave me a good list of players, but most of them debuted in the 20th century. I didn't think of either of these as math failings at the time, but perhaps that was the underlying issue.
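One commonly cited reason for failures like this is tokenization: the model sees word fragments, not letters. A sketch assuming the tiktoken library (the exact split varies by model and is shown only as an illustration):

```python
# "The model sees tokens, not letters" in one line of output.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print([enc.decode([t]) for t in tokens])
# e.g. ['str', 'awberry']: no standalone 'r' ever reaches the model,
# so "count the r's" has to be inferred rather than computed.
```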

Janice Schattman:

Useless data conveying nothing of value.

Janice Schattman:

What utterly useless trivia!

Thoughts About Stuff:

I'm afraid you're wrong, friend. It is very important because it indicates whether the AI has an intrinsic conceptual understanding of what a number is or whether it is just a glorified predictive text function. No human, even a very stupid one, would make such an obvious mistake.

Taylor:

I mean, yeah. I’m just showing that 1. Nate’s right, people fail to understand how useful this tech is and how fast it’s growing. And 2. that it can do this lol.

Caleb Begly:

Yeah, this pretty much summarizes my thoughts on the point. These types of models are good at certain things, just not the things that the recent wave of ai hype companies are using them for, or promising. This is not a value judgement on the technology - it's fundamental to how these transformer-based large language models work in the first place. The people who are pushing all of these promises are the same people who understand the technology the least - or have the most to gain financially from keeping the hype alive.

Stephen:

With regards to the Left, I suspect we'll see something similar to how they reacted to Big Tech. The hip online take in ~2012 was that these companies were comically overvalued - there was widespread mockery when Facebook paid $1B for Instagram, late night comedians joked about how Amazon would never turn a profit, and The Village would run editorials claiming that we were in a massive tech bubble that would inevitably collapse. The narrative was "Big Tech is Bad because Tech Bros are Dumb."

Then overnight in 2016, that flipped to "Big Tech is Bad because Tech Bros have Too Much Power." Suddenly these companies needed to be regulated and broken up. Talk of the "bubble" disappeared, replaced by talk of unstoppable, evil monopolies.

The key is that both narratives - even though they were incompatible - allowed the Left to say "Big Tech Bad". It was always just Mood Affiliation.

CJ in SF:

Umm - have you looked at Amazon's financials? If you leave out AWS it is ROI negative.

Andrew Flicker:

Q3 2024 had operating income of $17.4 billion, of which AWS was $10.4 billion, so the company still had roughly $7 billion of operating income even without AWS. What you said *used* to be true, but hasn't been for a few years now. AWS is still the majority of the profit, of course.

CJ in SF:

Advertising and subscription services were not big parts of Amazon when "comedians joked about how Amazon would never turn a profit".

And I'll point out I said "ROI", which is not about a quarter or two.

Brian F:

I suggest you and everyone else stop referring to "the left" as if it were one simple unified group.

Didn't Andrew Yang run for the Democratic nomination on a platform of warnings about automation and the replacement of human labor? I believe he tried to warn at the time that everyone in the mainstream parties was ignoring a coming tidal wave.

Frau Katze:

He mentions Bernie Sanders, who is over 80 and may be just too old to learn this particular new trick.

Bradley Kaplan:

AI hasn’t been politicized, yet you’re mapping it onto a frame where the left should be AI skeptics and the right and center should be accelerationists?

James Wang:

This take is both level and correct… which is sad, actually, because I find that extraordinarily rare outside of circles that actually work daily with AI.

But the one part I’ll poke at: I think you’re wrong about the political savvy—AI pretty much dealt with geostrategic considerations from day one. Either it was chips and chip bans, or multinational conferences on safety (like that unfortunate one from Rishi Sunak in the UK), or it literally being used in Ukraine now…

It was Great Power politicized from the start. Otherwise MGX, Sakana, and other sovereign AI wouldn’t exist. Sam Altman also wouldn’t be as much of a politician as he has been since the start of OpenAI. Andreessen also wouldn’t have the level of antipathy towards the Biden administration (not pro-Trump, but anti Biden) as he did.

As a tech person, I don’t love it. But I do feel like I need to point it out.

Andrew S:

For what it’s worth, Andreessen aligned with the GOP long before Trump came along (he donated multiple times to Romney’s PAC, for example).

KB (Jan 27, edited):

Great and timely article, coming from a pragmatist on the periphery of "tech".

Anyone who thinks that AI is not going to be transformational is either clueless or ignoring reality, à la ostrich with head in the sand.

The only question is whether it will be TRANSFORMATIONAL or merely transformational. Hyperventilating about "AGI" misses the point. It's already transforming businesses and work; the transformation is just not uniformly distributed.

And, BTW, one can either unilaterally disarm and wall oneself off from the "AI world" (à la Japan pre-Meiji Restoration?) or lean all-in.

Scott Alexander:

"The only question is if it will be TRANSFORMATIONAL or merely transformational. Hyperventilating on "AGI" misses the point. It's already transforming businesses and work; the transformation just not uniformly distributed"

I don't understand how this misses the point. I would say it misses the point to care about it increasing B2B SaaS revenue 5% today if, five years from now, it will utterly change the meaning of human existence.

KB:

Every single technology starting with fire has “utterly changed the meaning of human life”

Scott Alexander:

This seems obviously false (space heaters are cool but I think my life would have the same meaning without them).

But fine, pretend I said any of "put many people out of jobs", "accelerate progress by orders of magnitude", "give us contact with an entirely new and alien-like form of mind", "humble us by creating something more intelligent than ourselves" or, if you insist, "potentially destroy all humanity".

KB:

Ahh, you are THE Scott Alexander! 🙏🏽 I've enjoyed reading your blog since even before you were “outed”. Great work.

I just do not fall into the “AI will wipe out humanity” camp. I'm both too close and not close enough to AI development to buy into AI doomerism.

Debates over AGI are nothing new; I am certain you are familiar with Roger Penrose’s “The Emperor’s New Mind”.

That said, I concur with “put many people out of jobs” and “accelerate progress by orders of magnitude”.

I think that’s GREAT, and in that we might differ in our points of view.

rallen:

Do you agree with Nate's assessment that P(doom) is between 5%-10%?

Those seem like high odds for catastrophe to me, and worthy of serious concern and caution about this new tech.

KB:

Nah! I think of the feared AGI, aka Skynet or HAL or whatever, as a non-zero but low-probability event. What’s the probability? Who knows, but it’s not large and not the thing I think about.

The real immediate “threat” / “opportunity” is that vast swathes of low-skill white-collar jobs (paralegals, marketing copywriters, web developers) will disappear.

Kinda like how computers and word processing basically took out the “typist”

Ronin X:

It reminds me of when the New York Times published editorials arguing that this whole Internet thing was just a passing fad...

KB:

I was there, reading both the NYT and the WSJ in paper form 😉

I am well into my fifties and tell the 20-somethings I work with that I am delighted to be in my 50s but would kill to be in my 20s, since IMHO “AI” is a bigger shift than mainframes/minicomputers -> client-server AND the birth of mobile + cloud.

Fortunes are already being made (and they are not what one reads in the headlines), and the amount of value destruction in existing businesses is going to be monumental. The latter is yet to come and will unfold over the next 3-5-7 years.

Rob Pinna:

I'm in my early 60s and learned to program on a Univac 1100.

I tell my dev team the same thing. They have no idea how big this is + the enormous scale of investment. And reading these comments, the world is in for a pretty big shock.

John Napiorkowski:

Exactly. Is this going to be like when suddenly every employee had a computer on their desk, which was a big deal over time and laid the groundwork for how valuable the internet became, or will it be even bigger than that? Right now it's tough to tell.

VK (Jan 27, edited):

So if Nate's twitter feed doesn't show serious progressive opinions about AI, then those opinions must not exist. This post wins first prize for laziest armchair take yet from Nate.

Maybe next time he loses his keys, he won't just look under the streetlight.

Sylvilagus Rex:

Oh come on, it's a constant refrain from internet-brained leftists. There are exceptions, and I follow a few of them, but most of the left has simply adopted sneer as their AI policy viewpoint.

CJ in SF:

Data please.

You seem to have a source that has compiled an overview of the views of "most of the left".

Perhaps your training set is bad.

Andrew S:

Cherry picking some examples from the far left doesn’t really prove much. Does anyone think Chapo Trap House is truly influential?

Is there evidence that mainstream / normie left-leaners and/or Democratic Party leaders feel this way? It would have been a much more convincing argument if so. Otherwise you’re basically saying that like 3% of the population is being left out which is hardly a catastrophe.

John Napiorkowski:

As someone who started his career in internet technologies in the mid-90s, the energy and promise around AI feel a lot like the energy around the internet back when I got rolling hacking HTML. I do notice the depth of dismissiveness from the left, which of course is confusing to me, being politically left myself. I have no one to talk with at parties, like I did back in the 90s, when saying I was a web programmer was almost like being a rock star :). I figure it had to do with the 2016 election; after that I noticed my peers on the left start to turn hard against tech, probably because they blamed social media for Trump. Usually saying so on social media, but still. At least that's when it seemed to start. But there was trouble after Gutenberg built the printing press in the West too, so no shocker really.

Regardless of your political leanings, 2025 is the year when people with access to AI tools clearly outdo people who don't have them or who are willfully ignoring them. Tools I'm using on the job let me do things I'd either be unable to do at all or that would take me much more time. It's like how suddenly everyone had to have a computer on their desk back in the early 90s, or how networked business software vastly improved productivity, letting everyone in the company see and interact with the same data at the same time. AI feels just like that.

Michael James:

Nate, you can’t skim over a 5-10% chance of human extinction as an agnostic position. The fact that it is the expert position does not make it neutral. If that is what you truly believe, you should very much want to halt AI. And yet you say you want the right and center to be pushing forward. The benefits could presumably come later, without an extinction risk. I’m also confused by your describing EA as drawing from the left but also from utilitarianism. Utilitarianism is a left-wing philosophy.

DABM:

"Utilitarianism is a left-wing philosophy.":

Depends what you mean by "left". It is true that the basic idea behind utilitarianism, to count the interests of everyone equally, without favoring people you know / important and high-status people / co-nationals / humans-over-animals, is probably more appealing to left-wingers than right-wingers. Similar ideas show up in Mahayana Buddhism and the ancient Chinese political philosophy of Mohism, though, and I think it's at least debatable how much to call pre-Industrial Revolution ideologies like that "left-wing".

Having said that, the most important early figure in utilitarian history is Jeremy Bentham, and whilst he was radically left-wing for his era on *some* issues (race, gender, sexuality, voting rights), he was *very* keen on free-market capitalism. John Stuart Mill, also, was not exactly a socialist, though he gradually moved left economically over his lifetime, but only from a very right-wing free-market starting point.

Moreover, whilst utilitarianism bans you from seeing hierarchy as intrinsically good, it also, unlike almost all left-wingers, assigns no *intrinsic* value to equality. That's not the sense in which it values everyone equally. A distribution of "utility" that is very unequal can be better by utilitarian lights than a much more equal one, so long as it contains more total utility. For example, one person having 1 billion "utility" (i.e. happiness*) and 10 people at minus 10,000 utility each is better by utilitarian lights than all 11 people having 1 million utility. But virtually no leftist will accept that that is better (nor will right-wingers, generally, but I think it's fair to say that equality of outcomes is a particularly left-flavored value).

There's also the fact that because utilitarianism only cares about outcomes, not process, in theory you can defend literally *any* political arrangement on utilitarian grounds, by claiming that it in fact leads to the best outcomes. If you are ultra-rightist Curtis Yarvin, and you believe that fascist corporate monarchy leads to the happiest, most prosperous outcome, then you can perfectly consistently call yourself a utilitarian. Likewise you can be a centrist liberal utilitarian, a communist utilitarian, a free-market anarchist utilitarian, a Bush-style neo-con utilitarian, etc., so long as you believed your preferred social arrangement would maximize happiness minus suffering. And it's not like this is all just theoretical, as if in reality we all know what social arrangements would actually maximize happiness. We don't! I might personally *suspect* it is something like centre-left liberal technocracy, but with somewhat more free-market stuff than that would usually imply. But no one really knows, which is one reason why there is so much political disagreement, even when people are honestly trying to decide what would be in the overall public interest. In that sense the real political implications of utilitarianism are not actually known.

There's also the fact that utilitarianism just has a, well, "masculine" vibe: trade-offs are real, you can't just be nice to everyone, it's OK to sacrifice some people for the greater good, etc. I have seen surveys which show that men are quite a bit more likely to be utilitarian than women. At a time when men are leaning right and women left, this will make utilitarianism *feel* "right-wing" to a lot of people.

This is all reflected in current utilitarian movements. As an insider to the movement, I can say that Nate is right that *most* effective altruists are sort of left-neoliberal people, with radical left views on a few topics like immigration or animal rights, but a healthy respect for free markets, kind of like Matt Yglesias. But if you look at the most AI-centric group connected to EA, the Silicon Valley rationalists, they are *far* from universally left-wing. Entire books have been written about how they are a fascist menace. And whilst that is not *entirely* fair, it doesn't have zero basis in fact either. Eliezer Yudkowsky the founding rationalist leader is pretty socially liberal I think, but as a young guy he was a right-libertarian, and I think he still leans that way on economic issues. Scott Alexander, the famous rationalist blogger claims to be a sort of liberal/libertarian hybrid, but he attacks wokeness constantly, has a big following on the tech right, including scary white nationalists, and puts race/IQ guys on his blogroll. Yarvin's neoreactionary movement got started on the rationalist group blog LessWrong. Though my guess is that *most* rationalists are Democrats, and it is true that the most scary right-wing Scott Alexander fans tend to denigrate utilitarianism as a weak egalitarian altruistic philosophy, and adopt a Nietzschean pose.

*Yes, it is somewhat doubtful whether a numerical measure here makes sense

Michael James:

Thanks for your detailed reply. I would agree with most of it. I guess left-wing is a rather poorly defined term. I think defining it as extending your circle of moral concern further and further is a reasonable approximation. From that perspective utilitarianism is definitionally far-left. But it's definitely fair that you could have other definitions. And while it's certainly possible to defend a "right-wing" perspective (especially on economics) from a utilitarian standpoint, I would say that essentially no one on the modern right attempts to. See, for instance, how popular the heat map is. Their primary motto is America First, which is essentially the antithesis of utilitarian ideals.

But I was mostly objecting to how the article's way of describing EA, as pulled both from the left and from utilitarians, made utilitarianism sound at odds with the left, which I certainly disagree with, as I'm both a lefty Democrat and a utilitarian.

And yes many people in the "rationalist" sphere are not left-wing, but almost none of those people are EAs. EAs were probably ~98% for Harris. I think these get merged in a lot of people's minds, and EA gets unfairly framed as having moved right, which is only technically true (1% to 2%).

Layla Hughes:

No one can halt AI so whether Nate wants that or not is irrelevant.

Slaw:

I don't think people fully appreciate how critical economics and the wider world of technological advancement are for progress in AI.

Let's say that you own a bakery. Every time that somebody orders a cake, or cookies, or a loaf of bread you write down a record of that order on a piece of paper and stick it in a filing cabinet. A customer database is all kinds of useful: for looking up past orders, keeping track of customers and payments, and so on.

You have an employee, named Joe, whose job it is to look up orders on request. But as your shop becomes successful the number of orders soars to the point where you have to buy a second filing cabinet. Now it’s taking Joe a long time to find orders.

One day, after numerous complaints, you are suddenly struck with a great idea: You already have two filing cabinets. Why not put all the male customers in one cabinet and all the females in another? (This is one of those cake shops where the customers are evenly divided between both genders.) Now with just one simple bit of information Joe can quickly decide which cabinet he needs to look in and his search and retrieval times are cut in half.

The new system works out great. But then you get another idea. Joe is spending all of this time looking up orders. Why not hire another clerk? So you hire Susan. Joe sits at the cabinet that holds male customers all day while Susan handles the females. Now things are really humming--previously while Joe was looking up orders one cabinet sat idle. Now both cabinets are being accessed simultaneously 100% of the time.

The next obvious step is to buy more filing cabinets and split up the records into even smaller groups, but there’s a problem: the prohibitively high cost of filing cabinets. One filing cabinet costs in the neighborhood of $5,000,000—a major expense for a small cake shop. Just investing in two required a loan from the bank, and filing clerks aren’t much better. Joe’s salary is $10,000,000 a year. Susan makes a little less because of the gender wage gap but between the two of them the cake shop is hardly making any profits. Even worse they will both be obsolete in three to five years as clerks with exciting new features like green tinted visors appear on the scene.

But as the years go by the price of both cabinets and clerks drops. Technology marches on, reducing the manufacturing costs for cabinets and replacing the clerks with robots. Prices drop so much that distributed file systems become the norm in the baking industry. Eventually the technology spreads to other fields, like machine learning and artificial intelligence.

Processing large data sets is absolutely critical for training LLMs. In the modern day that means clusters of servers (or more likely clusters of VMs) to process those data sets in parallel. To a large extent that's dependent on cheap hardware.

Heinlein said something like "When it's time to railroad you can railroad, but not before". Progress in AI has a lot of externalities that I don't think are fully appreciated by its enthusiasts.
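For anyone who wants the filing-cabinet story without the metaphor, it's hash partitioning: route each record to a fixed shard by key so each lookup only searches one shard, and shards can be worked in parallel. A toy sketch (the gender split above is just a particular two-way partition function):

```python
# Filing cabinets as hash partitioning.
from collections import defaultdict

NUM_CABINETS = 2
cabinets = defaultdict(list)

def cabinet_for(customer: str) -> int:
    return hash(customer) % NUM_CABINETS   # Joe's "which cabinet?" rule

for customer, order in [("Ann", "cake"), ("Bob", "bread"), ("Cy", "cookies")]:
    cabinets[cabinet_for(customer)].append((customer, order))

def lookup(customer: str) -> list:
    # Only one cabinet is searched; that is the entire speedup.
    return [o for c, o in cabinets[cabinet_for(customer)] if c == customer]

print(lookup("Bob"))  # ['bread']
```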

Billy O'Neal:

It's a good thing per-transistor costs have gone down in the last 8 years (since 14nm). Oh wait!

Slaw:

IMHE the biggest bottleneck isn't compute; it's disk, followed by memory and network. Plus, splitting up the workload is at least as old as Teradata and Ab Initio (so 1980), meaning the timeline for lower costs leading to wider diffusion isn't 8 years but rather decades.

Billy O'Neal:

I'm sorry, my initial reply was too snarky. The overall non-snark point is: hardware does not appear to be getting substantially cheaper. People have lived through 40 years of Moore's Law and things getting cheaper over time, and several 'positive outcome of LLM' folks' plans rely on that continuing. I'm just pointing out that per-transistor logic cost has not gone down since the ~14nm timeframe, about 8 years ago, so any outcome that expects hardware to get substantially cheaper to make AI practical to run seems unlikely. I'm not discounting that there are other significant limitations like memory, networking, or secondary storage that are also problems; I only cited the logic example because that's the one for which I have seen quasi-reliable data.

Slaw:

The time range I'm talking about starts from roughly 1980. Nothing there is inconsistent with the argument that lower costs are plateauing now.

Billy O'Neal:

I think we are in violent agreement, I just screwed up by trying sarcasm in my initial snarky reply. Sorry!

Slaw:

There's no need to apologize. I'm just trying to be clear, on the off chance that somebody is reading this exchange and that they don't work in the field, that this has been a long time coming.

Slaw:

Ted Chiang wrote that LLMs are applied statistics (also see https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/). Think of all the problems where that approach could be useful (and there are a bunch) versus the ones where it's unlikely to yield meaningful results.

What I don't think is commonly appreciated is the tremendous number of scenarios that could benefit from a hybrid strategy where AI flags something for more detailed human review.
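That hybrid strategy is simple to state in code terms: the model only scores, and anything below a confidence threshold is routed to a person. A bare-bones sketch (the threshold and scores are placeholder assumptions):

```python
# Bare-bones hybrid routing: AI scores, humans handle the uncertain cases.
def route(item: str, model_score: float, threshold: float = 0.8) -> tuple:
    if model_score >= threshold:
        return ("auto-handle", item)
    return ("human review", item)  # the human stays in the loop

print(route("claim #1041", model_score=0.93))  # ('auto-handle', 'claim #1041')
print(route("claim #1042", model_score=0.55))  # ('human review', 'claim #1042')
```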

Brian F:

Hasn't IBM been trying to use LLMs for years?

Slaw:

Yeah. Nobody remembers Watson or whatever they called it? The plan was to use it for customer service but the economics didn't work out.

I seem to recall an article saying an airline ran a recent trial with LLMs for customer support and rejected it because it couldn't be reliably counted on not to give away free tickets.

Zach Reuss:

"I don’t think there’s any inherent trade-off between considering medium-sized risks, existential ones, and everything in between (particularly mass job displacement)"

I'd love to hear you (and maybe Maria) give some advice on how people can hedge these risks, particularly around that parenthetical on job displacement.

I'm someone who's still relatively early in their career and whose future earning potential is over-indexed on the presumption that being better than most at analytical thinking is a differentiated skill that will command a significant wage premium.

I would guess a fair portion of your readers are knowledge economy types who also fall into this bucket.

There are actions I'm considering, like buying into some AI specific ETFs, but if AI is really the next industrial revolution, what steps should people aware of that possibility be taking now?

kezme:

This is a great question, and you should maybe send it to their Risky Business email address in case it escapes Nate's notice here. I'll be listening out for it on the podcast!

TL:

Not to nitpick, but it was a *12-figure* investment, not 9-figure!

CJ in SF:

Perhaps Nate can get an AI engine to clean this up.

First, the idea that an Open Source LLM hurts the market leaders is pretty funny. If it is Open Source and better, they will adopt it. They have the data sets and infrastructure to use it.

Second, it isn't nerd sniping if you are paid to hunt the snipe. To be snipe, it must be utterly without redeeming techie importance. (As a modern Tom Lehrer might sing.)

Third, reducing it to a forced-choice 2x2 is comical. There is a whole lot of "meh" between good and not good. Same for "histroically transformational" (sic).

And what is it with making one axis in the 2x2 dependent on the other? Does yellow actually mean "not transformational, and a good thing it won't be"? Do you actually want to seriously discuss the blue-box idea that AI will "not be transformational", and that this will be "not good"? Sloppy, sloppy, sloppy.

"The blue box on the bottom right is often where the left ends up."

And most ridiculous of all is trying to turn this into a left / right thing.

San Francisco is the poster child for left-wing politics. It is also a major hotbed of AI development.

The fact that you can find detractors among the left is hardly surprising. There are also plenty of Trump voters concerned that AI will take their jobs.
