It's time to come to grips with AI
After the election, the timelines are accelerating. We need a pluralistic debate about its implications — but the left is dealing itself out.
We live in interesting times. On Monday morning, tech stocks plunged on investor shock and awe over DeepSeek, a Chinese AI company that has built — I’m leaving out a lot of details — an open-source large language model (LLM) that performs competitively with name brands like ChatGPT at a fraction of the computing cost.
Meanwhile, two stories got buried in the avalanche of activity by President Trump last week. Trump rescinded a Biden executive order on AI safety. And he announced Stargate, an AI joint venture with a pledged price tag of as much as $500 billion, aimed at entrenching American AI competitiveness, which has triggered a feud between Elon Musk and Sam Altman, the frenemy cofounders of OpenAI.
These stories will have far bigger geopolitical implications than, say, Musk’s choice of hand gestures. They may even mark an inflection point where the world has decided to charge forward with AI at full speed, for better or worse.
Some of the AI early adopter types I spoke with for my book thought AI would be a significant axis of political conflict in the 2024 election. It pretty clearly wasn’t. But 2024 was probably the last election for which this was true. AI is the highest-stakes game of poker in the world right now. Even in a bearish case, where we merely achieve modest improvements over current LLMs and other technologies like driverless cars, far short of artificial superintelligence (ASI), it will be at least an important technology. Probably at least a high 7 or low 8 on what I call the Technological Richter Scale, with broadly disruptive effects on the distribution of wealth, power, agency, and how society organizes itself. And that’s before getting into p(doom), the possibility that civilization will destroy itself or enter a dystopia because of misaligned AI.
And so, it’s a topic we’ll be covering more often here at Silver Bulletin. The reporting and writing I did for On the Edge, in which AI is the climactic topic, got me up to speed on the parameters of the debate. But even after talking to dozens of people, I still feel like I only scratched the surface. AI, perhaps uniquely among any subject I’ve covered, lends itself to “nerd sniping.” The technical aspects of machine learning are so fascinating that some people crawl into the AI rabbit hole only never to crawl out again. So there are plenty of people with superior technical knowledge to mine.
However, to be honest, the level of political proficiency in the AI community is fairly low. Mostly, the experts I spoke with were pleased that debates over AI haven’t become highly politically polarized to date — but that could change quickly in the wake of Trump’s election. Conversely, “political types” are often largely clueless about AI beyond vague mood affiliation. So, I hope I can add value as someone who understands politics well, and AI well enough — an overlap in the Venn diagram that, for some reason, is sparsely populated.
To lay my cards on the table, I approach AI from a point of relative agnosticism. I’m not a doomer or an accelerationist. My estimate of p(doom) would be considered high by normal person standards but is well in line with the expert consensus, which puts the chances at 5 to 10 percent. Meanwhile, the book introduces the rubric of “The River” and the risk-on, analytical mindset that many of the principals in the AI industry share. Without giving away too many spoilers, if you've read the book, you’ll know that while I’m no fan of “The Village” — the River’s rival community, the progressive expert class that is now reeling after its loss to Trump — I have plenty of reservations about the River as well. The poker player’s EV-maximizing mindset may or may not be something we want the world’s most powerful technologists to share as they become less and less constrained.
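To make the EV framing concrete, here is a minimal back-of-the-envelope sketch (every number in it is hypothetical, not anyone's actual estimate) of how a poker-style expected-value calculation treats a small but nonzero p(doom):

```python
# Toy expected-value calculation; all payoffs and probabilities are made up
# for illustration and are not estimates from the book or from any expert.
p_doom = 0.08                 # within the 5 to 10 percent range cited above
p_fine = 1 - p_doom
upside = 100.0                # arbitrary units of civilizational benefit
downside = -1000.0            # arbitrary units of catastrophe

ev = p_fine * upside + p_doom * downside
print(f"EV = {ev:+.1f}")      # positive here, so a pure EV-maximizer presses ahead

# Make the downside ten times worse and the sign flips, which is the point:
# the conclusion hinges on payoff assumptions that nobody can really verify.
```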
As such, I think society has an interest in the left, right, and center all being well-informed and vigorously involved in debates about AI policy. The problem is that the left (as opposed to the technocratic center) isn’t holding up its end of the bargain when it comes to AI. It is totally out to lunch on the issue.
Ignoring AI’s potential is, well, ignorant
For the real leaders of the left, the issue simply isn’t on the radar. Bernie Sanders has only tweeted about “AI” once in passing, and AOC’s concerns have been limited to one tweet about “deepfakes.”
Meanwhile, the vibe from lefty public intellectuals has been smug dismissiveness. Take this seven-word tweet from Ken Klippenstein, a left-leaning journalist formerly of The Intercept who now writes a popular Substack:
Or this from Noah Kulwin of various Chapo Trap House-adjacent lefty ventures:
I’m sorry, but this is ignorant. Large language models like ChatGPT are, by some measures, the most rapidly adopted technology in human history. Kulwin’s tweet is equivalent to, in the 1990s, dismissing the Internet as a “pornography and hacking machine.” Yes, these are common use cases, but they’re the tip of a massive iceberg.
It’s not just that AIs can now solve Math Olympiad problems. LLMs also provide a lot of “mundane utility,” from serving as computer programmers to research assistants to all-around problem-solving tools. I’d estimate that using LLMs and other AI tools improves my productivity by perhaps 5 percent on a day-to-day basis. It’s not yet a true “game changer,” but more and more, they provide reliable marginal value, from debugging Stata code to vetting technical concepts to serving as a copy editor or a creative muse.1
This has been particularly true for the most recent AI models — I’ve mainly been using OpenAI’s o1. On a recent flight from Seoul to Tokyo, I had o1 give me a tutorial in distinguishing Chinese, Korean and Japanese characters, including a pop quiz, and achieved proficiency in 10 to 15 minutes. I’ve also used o1 in two recent newsletters to translate subjective sentiment into quantifiable data, something it excels at because LLMs’ inner workings involve transforming language into mathematical vectors. Meanwhile, Kevin Roose reports that Claude is now quite a competent poker tutor, something it very much wasn’t when I last experimented with LLMs for poker strategy a year or so ago. It isn’t protein-folding, but AI is getting better at complex, higher-order tasks.
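For readers who want to try the sentiment-to-numbers trick themselves, here is a minimal sketch; the model name, the prompt wording, and the 0-to-100 scale are illustrative assumptions, not a record of exactly what I ran:

```python
# Minimal sketch of turning subjective sentiment into a number with an LLM.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the
# environment; the model name and prompt are placeholders, not a recipe.
from openai import OpenAI

client = OpenAI()

def sentiment_score(text: str) -> int:
    """Ask the model for a single integer rating from 0 (very negative) to 100 (very positive)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whichever model you have access to
        messages=[
            {"role": "system",
             "content": "Reply with a single integer from 0 to 100 rating the "
                        "sentiment of the user's text (0 = very negative, "
                        "100 = very positive). No other words."},
            {"role": "user", "content": text},
        ],
    )
    return int(response.choices[0].message.content.strip())

print(sentiment_score("The vibes at the convention were surprisingly great."))
```

In practice you would run something like this over a batch of quotes or open-ended survey responses and aggregate the scores, which is roughly what "translating subjective sentiment into quantifiable data" amounts to.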
What’s vital is that when I’m using them professionally, LLMs accentuate my existing strengths. Being too prideful to use them is like refusing to use spell check on principle. The learning curve is not zero, and it’s an advantage to know something about how they work under the surface — I now have a pretty good intuition for which sorts of problems these models are good at and which they aren’t. (For what it’s worth, ChatGPT itself provides a good, self-aware list of where it adds value.) Conversely, some of the use cases that are widely discussed in the media — like ChatGPT serving as a “chatbot” — are less interesting. If the productivity-enhancing use cases don’t come up in Kulwin and Klippenstein’s work, that’s fine — but their attitude is like complaining that a toaster oven is useless because it can’t make sushi rolls.
Hipster skepticism won’t help us understand AI’s potential or its risk
Coming from the left, the mood affiliation usually takes the form: “Tech bros think AI is cool, and tech bros are bad — therefore, AI is overhyped.” AI is often compared to crypto, a technology that’s at least an order of magnitude less important. And in their hipster-esque dismissal of AI, writers like Kulwin and Klippenstein miss a potentially more important critique. Namely, “tech bros think AI is cool, and tech bros are bad — therefore, AI is bad.”
It’s not that I agree with this statement; as I’ve said, I’m far more ambivalent, both about AI’s impact and about the tech bros. But it’s at least a tenable position — whereas the “overhyped” critique is becoming weaker, almost literally by the day, given the volume of AI news. If you hadn’t before, it’s now time to seriously consider the possibility that AI will have a very large impact even on near-term time scales and even on “ordinary” political questions.
There are basically four positions one might stake out in the AI debate. Imagine a 2-by-2 matrix where one question is whether AI will be one of the most important technologies of all time, and the other is whether this will be good or bad for humans on balance:
I haven’t labeled these quadrants because these positions don’t correspond neatly to the nomenclature. Some “accelerationists” welcome the transformation and belong in the green box — Altman, for instance, thinks AI will change everything, and the risks are profound but outweighed by the benefits. Other accelerationists think AI will be a merely important technology but not disruptive to the point where it poses an existential threat. In that case, the stakes are lower, and the case for pressing ahead is stronger because — at least according to the techno-optimist view — the large majority of technologies ultimately do improve the human condition. And in a capitalist system and a competitive world, good luck trying to stop them even if they don’t.
The blue box on the bottom right is often where the left ends up. They may be worried about medium-sized harms, from energy consumption to misinformation to algorithmic bias. However, they generally dismiss LLMs as “stochastic parrots” and deny their transformational potential.
There is some value in this — I don’t think there’s any inherent trade-off between considering medium-sized risks, existential ones, and everything in between (particularly mass job displacement). But, the position in the blue quadrant runs the strategic risk of not bringing enough chips to the table. If AI does prove to upend every aspect of society, the left is ceding control — or at least first-mover advantage in a field that could move quickly — to people who are more conversant in the language of AI and more up to speed with the technology. The red quadrant (AI transformational and bad) does have some vaguely lefty vibes: much of it emerged from effective altruism, which tends to attract socially progressive types — but also shares a lot of priors with the accelerationists, such as a commitment to utilitarianism.
There’s also the chance — in fact, this might be the default outcome — that we wind up somewhere between an AI singularity and an AI fizzle. In that case, the best analogy might be the Industrial Revolution, which transformed the world to the point where we needed a whole new set of political institutions — fought over in the form of literal revolutions — but did not pose an existential risk (at least not until the development of nuclear weapons ~150 years later).
Unlike the Industrial Revolution, which in the long run had liberalizing effects, this AI middle ground might reduce human agency or concentrate more power in the hands of the few. This critique by “L Rudolf L” is worth reading, for instance. It mirrors some of the concerns I express in the book about a scenario I call “Hyper-Commodified Casino Capitalism”.
Maybe the equation already looks different given DeepSeek’s 大卫 (David2) competing with Silicon Valley’s Goliaths. However, unlike previous “disruptive” technologies, the frontiers of AI have so far been occupied by companies that are already very powerful, like OpenAI (with its DNA in Musk, Altman and Microsoft) and Google. Moreover, the financial power of Silicon Valley is increasing — getting to pick and choose the best founders from all around the world is a highly lucrative enterprise, it turns out, and the returns accumulate every year. And it’s increasingly flexing its political muscle. Basking in the glow of Trump’s victory, the Tech Right is behaving in the way that people often do after a winning streak, with a strain of self-aggrandizement, and the rest of Silicon Valley is undoubtedly wondering if it should get in on the fun.
Unlike the hipster left, I don’t see Silicon Valley’s hype about AI as just a marketing gimmick, a fake Next Big Thing like in the now infamous Larry David commercial for Sam Bankman-Fried’s fraudulent crypto startup. But I want people on the left pushing back against AI’s potential anti-democratic effects — how it could facilitate the accumulation of power and impose preferences on people that they might not want — just as I want people on the right and the center to push ahead by recognizing the value of progress and the miracles that AI might help to make possible.
Instead, Kulwin and Klippenstein are forging some sense of political solidarity from being Luddites. I don’t know how you can use LLMs without concluding that they’re already a very powerful technology — far more powerful than most experts would have predicted 10 years ago — with the potential to become more powerful in short order. The left may get dealt out of the hand if it doesn’t get better AI critics.
For instance, feed it your article, have it suggest 10 headlines, and then craft a version in your voice based on its one or two best nominations.
If I’ve translated this wrong, blame ChatGPT.
Call it what it was, Nate. Not a “hand gesture”, a Nazi salute.
A lot of the people who are trying to downplay expectations about AI are more aware than people give them credit for. Some of us actually built the damn things, but the people who pried them away from us don't understand that they aren't what they think they are.
They have potential, yes. They will be tools that in a dozen years will be everywhere.
But they won't be what you were promised. Because much of what you think of as improvement is faked.
And it's easy to fake things.
Those math skills? They don't come from the model, they come from inserted functions inside the model that are run.
Don't believe me? Look up Pickle Functions and read about how they are connected to AI, and how some of them *pull from other sites* to make the magic happen.
The logic pull-downs are mostly generated by a separate AI model, then run through filters to make sure the starts and ends are right.
But behind the curtain is traditional automation techniques making AI look good.
The same goes for those 'magic discoveries' and 'AI Built Chips'. They are mostly using a local AI component to interact and try a whole range of possibilities that a person puts forward, and it basically just slams the buttons like a hyperactive teenager until the automation and the person see something that looks promising. Then they wind it back and move forward to confirm.
But most of the time, those results are thrown out.
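If you want to see what that loop amounts to, here's a toy sketch (the knobs, the scoring function, and the threshold are all made up; no real chip or discovery pipeline is fifteen lines of Python):

```python
# Toy propose-and-filter loop: perturb a human-supplied starting point at
# random, keep the few candidates that pass a cheap automated screen, and
# throw the rest away. Everything here is illustrative, not a real pipeline.
import random

def propose(seed: dict) -> dict:
    """'Slam the buttons': randomly jiggle every knob in the seed design."""
    return {k: v * random.uniform(0.5, 1.5) for k, v in seed.items()}

def looks_promising(candidate: dict) -> bool:
    """Cheap automated screen; most candidates fail and get thrown out."""
    return sum(candidate.values()) > 3.6   # arbitrary threshold

seed = {"width": 1.0, "depth": 1.0, "clock": 1.0}
keepers = [c for c in (propose(seed) for _ in range(10_000)) if looks_promising(c)]
print(f"kept {len(keepers)} of 10,000 candidates for a human to confirm")
```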
Just like AI-generated code. Most AI-generated code is garbage. Past the boilerplate that has been generated by just about everyone else online, and simple loops, it gets *wacky*: fake functions, lazy coding, added vulnerabilities, and maybe a 15% success rate on the best model, with the failures being unusable to work from.
The problem is, most of the people who can explain this are pretty damn annoying human beings. Like, 'got themselves fired by calling a CEO an idiot and a liar in an all-hands' level of annoying. They get technical, they call you an idiot, they are insulting.
But they are right.
And improving AI is an EXPONENTIAL problem. It's not X+X, it's X^X, and we're starting to run out of resources to process. DeepSeek managed to shave down the X a bit through clever optimization / training against GPT / removal of legacy inputs / removal of toxic scraped data (censorship actually helped China with that one), but that just pushes the problem back.