
Much of the conversation regarding AI to date has concerned its potential for unleashing cosmic dangers or its nebulous promises of global salvation. In practice, however, plenty of the young people attracted to the tech industry have been content with making cynical products designed to help people cheat through job interviews or mislead dates. While it may one day cure cancer or enslave humanity, AI has so far mostly gifted us chatbots, flawed data management tools, and a proliferation of companies fueled by cheap viral hype rather than actual workable services.
For the March 2026 issue of Harper’s Magazine, the writer Sam Kriss embedded with some of the members of tech’s younger generation and returned with a portrait that is as oddly poignant as it is biting. “For a long time, the tech industry liked to think of itself as a meritocracy: it rewarded qualities like intelligence, competence, and expertise,” he writes. “But all that barely matters anymore.” This new generation is, above all, desperate to avoid becoming part of the “permanent underclass” that they now expect AI to produce.
I spoke to Kriss recently about AI’s false starts, doomsday scenarios, and its most enterprising and entertaining proponents.
Will Stephenson: One of the things that first moved you to write about this milieu was the vogue for “vibe coding,” which promised to level the playing field for engineers and programmers in some strange ways. What seemed most interesting about this, and how do you feel about the implications of the practice at this point?
Sam Kriss: As with most of my writing, I think what initially attracted me to the story was a deep sense of personal inadequacy.
Every so often, I’m reminded that our world works on the basis of an enormous amount of specialized knowledge, and I have almost none of it. I don’t know how to build a nuclear reactor; I don’t know how to build an engine; I don’t even know how to drive a car. I can say words like “piston” and “transformer,” but fundamentally, I have the same basic position relative to the vast technical apparatus that governs our society as a prehistoric tribesman does to an eclipse. Meanwhile, for basically as long as I’ve been alive, there’s been this quasi-aristocracy of Silicon Valley nerds who, in their highest echelons, essentially get to make and remake the world at will. They can reach into your brain and reprogram your desires; they can conjure insane political movements out of nowhere if they decide to. The lower levels of these people merely command the kind of salaries that would, here in Britain, make all your friends stop talking to you out of sheer envy. And the line was always that these people are in this position because they can code, which is generally figured as the summit of the kind of practical intelligence that I don’t have.
Being able to code used to have a strange totemic value. I remember on Twitter in the 2010s, every time some online magazine folded, there would be minor mobs of people crowing at the newly unemployed journalists that they should learn to code. This was the only skill that had any economic value in the twenty-first century. You had to be able to build B2B SaaS apps. A little more than a year ago, right at the start of Trump’s second term, Elon Musk’s idea for radically fixing the federal government was to bring in a handful of zoomers whose main expertise was in coding. Darren Beattie, whom Trump appointed to the State Department, said that “competent white men must be in charge if you want things to work.” There was a real fetish for the figure of the “cracked coder,” the person who barely eats or sleeps, just pops Zyns and writes code all night. But meanwhile, as all this was happening, highly competent people were coming out with AI tools that supposedly made their most crucial quality totally irrelevant.
The tech people still have their jobs, but at this point, for a lot of them, those jobs just consist of relaying instructions to Claude Code and then watching as it does everything for them. I think it’s probably inevitable that in the medium-term future, that position is going to become untenable. This strikes me as a bad thing. Not because my heart bleeds for out-of-work coders; I do actually think it’s possible that AI will end up creating more jobs elsewhere. But I think we’re headed into a very uncertain future wherein no one has the specialized knowledge and expertise that keeps our infrastructure running. You can’t build an entire planet out of people like me. At that point, we’d basically be on the precipice of another dark age.
Stephenson: The relationship between some of these characters isn’t always immediately obvious. What made you think someone like Donald Boat belonged in the same story as Roy Lee or Eric Zhu?
Kriss: All these people are, in their own way, on the periphery of the big things happening in tech. None of them are actually training AI or building data centers, and they’re not directly involved in the biggest capital-expenditure effort in human history. But they form part of a secondary economy that seems to revolve around virality and cult status.
Donald Boat is very openly critical of the tech world, but he also can’t keep away from it. I didn’t put him and Roy together; there was a copy of the Decameron at the Cluely offices when I arrived. Donald’s interests are very different, but there’s a manic propulsion behind him that’s very similar to the boy founders’; a lot of his friends are involved in AI in one way or another; he gravitates towards the scene—maybe against his better judgement. When I told Donald I was developing the theory that he and Roy Lee were basically the same person, he readily agreed.
I think there is one thing that makes Donald Boat different from everyone else I spoke to for the piece, though, which is that he’s the only one who’s actually from the Bay Area. For everyone else, this is the place you come to so you can make something happen. For Donald, it’s where he grew up. Most of the people I know who grew up in San Francisco now live in other places. Whatever San Francisco’s become, it’s no longer the place where they used to skateboard as kids, or smoke weed, or get mugged. Their city is now a machine that some of the strangest people in the world are using to bring a computer god into our universe. I think watching that happen to your home must be a very strange experience.
Stephenson: How does the new generation of dropout founders in the Bay Area relate to, say, the “accidental billionaires” generation that preceded it? Or is it more productive to compare them to the zero-interest-rate policy–era founders who came in between?
Kriss: Much of what I witnessed of the current crop of tech founders isn’t particularly new. During the ZIRP era, it wasn’t unheard-of for a startup to raise millions in funding on the basis of a founder and a logo long before they had even come up with a product. On that front, not much has changed.
I think the main difference is that these people really don’t seem to be having very much fun. Everyone else is too busy running a thousand instances of Claude Code at the same time. The only drugs are peptides. Previous generations were still animated by a Revenge of the Nerds fantasy, so once they had money they started throwing exorbitant parties. They weren’t in any sense cool, but they were constantly reaching for cool. They thought they could get it, if they got rich enough. Zuckerberg is still trying.
For the current generation, having Andreessen Horowitz invest five million dollars in their startup is the reward. They want to live in their office. Previous generations of tech people wanted a workplace that was somewhere between a day care and a therapist’s couch. Think of those slides and ping-pong tables and wellness seminars. They went to Burning Man. The current generation wants to live and work in a plain gray box with a squat rack in the corner and eat the same meal every day. Their ideal is a prison camp.
Stephenson: Has your impression of the industry or its hype changed at all with the emergence of Claude Code, AI agents, and “SaaS Apocalypse” narratives?
Kriss: AI does seem to go through a quite predictable hype cycle. Whenever some new feature is announced, everyone spends a month or so talking about how it’s about to destroy the economy by taking everyone’s job. When some of these claims turn out to be overstated, everyone spends a month talking about how it’s going to destroy the economy by wiping out everyone’s investments. When I wrote the piece, we were at the bottom of the cycle. Now that it’s been published, we’re back at the top. This means that some of the things I said sound outdated. I mention the AI bubble fears, which have temporarily vanished; the only people still talking about them are way outside of the tech circuit. Among San Francisco types, the line is: Now that the barriers to fully “agentic” AI have been overcome, we’re in the early stages of a hard takeoff.
Stephenson: You’ve written memorably in the New York Times about your own experiments with AI language and its peculiarities. Have you noticed any interesting evolutions in this respect in recent months?
Kriss: Frankly, no. It’s still god-awful, humorless, guileless, meaningless crap. It works fine if you want it to explain something very simple. If you ask it how a piston works it can tell you. But whenever AI has to talk about anything even slightly abstract, it immediately lapses into a kind of hysteria. I know because the main thing that’s changed is that more people are sending me reams of overheated AI dogshit in response to my writing. In every case, as soon as it tries to respond to a conceptual essay, the AI vomits up a lot of frantic drivel: “What you’ve found isn’t an accident. It’s signal. It’s structure revealing itself. It’s what happens when a hidden mechanism escapes confinement and closes the loop.”
This stuff is why I’m still a little skeptical about some of the claims made of AI’s world-ending coding ability. LLMs were originally built on a corpus of ordinary written language. Their main function is to be able to generate text. The material I’m seeing is being produced by non-frontier models, and it’s possible that their capabilities are developing unevenly. You’d expect prose composition to be where they’re furthest ahead, but their prose is terrible.
I do try to keep perspective here. AI critics often end up confidently asserting that AI will never be able to perform some incredibly niche but inviolably human function, right before one of the labs comes out with an AI that does exactly that. As my story was making the rounds on X, I saw someone ask Grok to generate a psychological profile of all the people I interviewed. It described Roy as “hyper-extroverted.” I don’t think that’s true at all, though it is how he wants to be seen. I find myself thinking, Hah, stupid machine, your understanding of the human substance is surface-level; you have not peered into the mysteries of the soul. As if a machine that only has a facile understanding of character is nothing to worry about. We’re increasingly scrabbling for things it can’t yet do.
But I think it’s significant that it can’t write well, it can’t talk abstractly, and it can’t be funny. AIs are trained on every great comic work that’s ever existed, but they’re abjectly unfunny. I asked Claude to produce something sidesplitting and it gave me an open letter to Tuesday: “You are not the beginning of the week, so you lack the drama of Monday. You are not the middle of the week, so you cannot claim Wednesday’s quiet dignity. You are not Thursday, which at least has the decency to be almost Friday.” In a way, this gives me hope.
Stephenson: Tech as an industry seems more skeptical of journalists than most (or, put another way, at least as skeptical of journalists as almost everyone in the country has become). Did you have any trouble getting anyone to speak to you?
Kriss: I think the tech world’s caginess around journalists is a generational phenomenon. Millennials grew up in an environment of cultural terror. They know that a journalist is someone who will try to catch you saying something racist or sexist or otherwise insufficiently progressive, and then conjure a mob of thousands of strangers to try to get you fired. That was the dominant mode in the 2010s “techlash,” when a lot of big legacy publications attempted to define themselves as the institutions that would hold the tech companies to account. This is a very good idea, but unfortunately the easiest way to do that was to engage in shallow political finger-pointing. For instance, in 2021, Scott Alexander was the subject of a New York Times exposé that mostly focused on the presence of various racists and neoreactionaries in his blog’s comment section. I did meet some millennial startup people and they were incredibly paranoid; they’d want to make sure we were off the record before they’d tell me their names. But the main figures in the story were totally different. I had absolutely no problem getting any of them to talk to me. It was all incredibly easy. You can just email people.
Part of this is explained by the fact that they live in an entirely separate media environment. No one at Cluely had ever heard of Harper’s, I’m afraid. The East Coast media is assumed to be irrelevant to their world. Silicon Valley now has its own internal media ecosystem, a network of publications and podcasts that are open about the fact that their purpose is promotional, or at most to be a platform for internal debates. The podcast host Dwarkesh Patel isn’t trying to hold anyone to account, and he’s certainly not trying to get anyone fired. Within that world, there’s no such thing as bad publicity. Controversy is good business. The young people in San Francisco are incredibly anxious and neurotic, but they don’t have the fear of disrepute that millennials do. Their strategy is vice signaling: you deliberately paint yourself to be as regressive as possible so that that’s your baseline. Then no one has any ammunition to attack you with. This is why I think Roy was so happy to show me everything, up to and including his Hinge profile and his ex-girlfriend’s vibrator.
Stephenson: Is there a figure from this same milieu who you wish that you could have met for the piece?
Kriss: There are a couple of other directions the piece could have gone in. There’s an OpenAI engineer who goes by “roon” online who happened to be in New York while I was in San Francisco. He argues that AI is the only possible product of the past two million years of human evolution, and that instead of worrying we should embrace it. I did also think of trying to talk to Luke Farritor, who was part of Elon Musk’s handpicked coterie at DOGE. Before DOGE, where Farritor shut down HIV relief programs for no good reason, he was part of a team that was digitally unrolling and deciphering the Herculaneum scrolls. This is a library of nearly two thousand books that were carbonized in the eruption of Vesuvius in 79 AD. People have been trying to unroll them since the eighteenth century, and every attempt has damaged the scroll. But now, with X-rays and AI, we might be about to read thousands of lost texts for the first time. He might have been interesting to talk to.
Stephenson: You write that the Bay Area rationalists’ vision of AI’s possible threat to humanity is best articulated in the “AI 2027” report, authored by Alexander and four others. How would you grade the report so far?
Kriss: Among the chiliasts and apocalypts, the consensus seems to be that things are progressing basically as the paper outlines but along a 60-percent slower timeframe. That gives us maybe an extra five years before we’re exploring the galaxy or dead. I guess we’ll see.
I do have an apocalyptic scenario of my own. Maybe there really is a great bifurcation coming, but it’s between the people who let AI take over their knowledge and volition, and those who don’t. In the long term, it’ll look like the former group simply vanished from the earth and left no trace. The people who herd cattle in the Sahel will survive.