“Today we launched ChatGPT,” a chirpy Sam Altman posted on Twitter one day in the fall of 2022.
Altman went on to describe a sort of ladder of progress that humanity had just begun to climb. “Soon, you will be able to have helpful assistants that talk to you, answer questions, and give advice,” he promised. After that, “you can have something that goes off and does tasks for you.” His creation will be our eager protégé — our bright intern, piping up judiciously. Then it will be our homunculus, trusted with the grunt work.
What next? “Eventually you can have something that goes off and discovers new knowledge for you.” At the top of the ladder will be an AI capable of seeking enlightenment … and bringing it back to us. “Talk to the computer (voice or text) and get what you want, for increasingly complex definitions of ‘want’!”
There is an uneasy pairing here with the way we are used to thinking about AI. According to Altman, AI is there to serve you. It will talk to you, do for you, discover for you. It will give you what you want. It will really give you what you want, under those “increasingly complex definitions.”
But the classic idea found in sci-fi, in both utopian fantasies of the end of human toil and doomer prophecies of the end of human existence, is that AI goes beyond us. Everyone now arguing about the shifting-goalpost vision of “general” artificial intelligence agrees that it would be something capable of beating us at our own games, cognitively and otherwise. In OpenAI’s charter, the endgame is the creation of “autonomous systems” that “outperform humans at most economically valuable work.”
What is happening here, and for whom? The answer is an impossible dream, one that has been suffusing our digital lives for a while now. Call it the for you paradox.
Here is the paradox.
First: Everything is for you. TikTok’s signature page says it, and so, in their own way, do the recommendation engines of all social media. Streaming platforms triangulate your tastes, brand “engagements” solicit feedback for a better experience next time, Google Maps asks where you want to go, Siri and Alexa wait in limbo for a reply. Dating apps present our most “compatible” matches. Sacrifices in personal data pay (at least some) dividends in closer tailoring. Our phones fit our palms like lovers’ hands. Consumer goods reach us in two days or less, or, if we prefer, our mobile orders are ready when we walk into our local franchise. Touchless, frictionless, we move toward perfect inertia, skimming engineered curves in the direction of our anticipated desires.
Second: Nothing is for you. That is, you specifically, you as an individual human person, with three dimensions and password-retrieval answers that actually mean something. We all know by now that “the algorithm,” that godlike personification, is fickle. Targeted ads follow you after you buy the product. Spotify thinks lullabies are your jam because for a couple of weeks one put your child to sleep. Watch a political video, get invited down the primrose path to conspiracy. The truth of aggregation, of metadata, is that the for you of it all gets its power from modeling everyone who is not, in fact, you. You are typological, a predictable deviation from the mean. The “you” that your devices know is a shadow of where your data-peers have been. Worse, the “you” that your doctor, your insurance company, or your banker knows is a shadow of your demographic peers. And sometimes the model is arrayed against you. A 2016 ProPublica investigation found that if you are Black and coming up for sentencing before a judge who relies on a criminal sentencing algorithm, you are twice as likely to be mistakenly deemed at high risk for reoffending as your white counterpart.
Whoever you are, the algorithms’ for you promise at some point rings hollow. The simple math of automation is that the more the machines are there to talk to us, the less someone else will. Get told how important your call is to us, in endless perfect repetition. Prove you’re a person to a CAPTCHA, and (if you’re like me) sometimes fail. Post a comment on TikTok or YouTube knowing that it will be swallowed by its only likely reader, the optimizing feed.
Offline, the shadow of depersonalization follows. Physical spaces are atomized and standardized into what we have long been calling brick and mortar. QR, a language readable only to the machines, proliferates. The world becomes a little less legible. Want to order at this restaurant? You need your phone as translator, as intermediary, in this its newly native land.
The algorithm has an aesthetic in real life, which you’ll recognize if you’ve ever been in what Kyle Chayka has called “AirSpace” (and you have): the sameness of Japandi semi-midcentury modern, now shaded into ’70s and ’80s redux: large leafy plants, tasteful minimalism and its studied, equally samey instances of backlash; terrazzo, blond wood, concrete and soaring ceilings. Once this land was millennial pink, Edison bulbs, fiddle-leaf ferns, painted arches, velvet statement couches. It will keep evolving but you will know it regardless, from gentrifying coffee shops, exurb Airbnbs, hotel lobby makeovers, startup offices, direct-to-consumer pop-ups, or from the Instagrams of any of same. Sit inside one of these spaces, anywhere, and ask: Is it for anyone? And who am I, if this is for me?
Other spaces feel like stand-ins for the simulation, with you as avatar. Ascend the plane of the Apple Store, in which the Geniuses hold the real arbiters of order, their iPads. Wander into the burrito place operating several ghost kitchens and feel like you’re doing something wrong by ordering at the counter. Banks take up once-prime real estate even as their purpose, in real life, becomes less clear — if you’re there it’s probably for some version of tech support. Next to the reception desk at the doctor’s office is the screen kiosk where you actually sign in. I was told at a gym that I could only sign up for its services online; the same for a vaccine appointment at CVS. Things that concerned my body could not be addressed in the flesh.
Our receptionists and tellers are just doing their jobs, of course, amid the strangeness of being supplanted. Our new AIs ask lightly if they can help, echoing the staff at the grocery store, drug store, airport — forced to hover like purgatorial spirits. Laura Preston has written about her time working for a real estate firm to train the chatbot that would replace her. For consumers and workers alike, the benefits of this hybrid world have been unevenly reaped.
This is hardly the first time we’ve seen the real world reshaped to fit a dubious idea of efficiency and service. In the mid-twentieth century this country was remade for a machine: the car. Highways bisected cities, urban sprawl siphoned commerce, parking paved over watersheds. The promise of individuation — a single-family house of one’s own — turned to suburban isolation and deeper segregation. Traffic was optimized. For many, the realization has come that maybe this grand new age wasn’t entirely designed with how humans and their communities thrive in mind. (Some, like Jane Jacobs, always knew.)
Maybe all I’m saying is that it is possible to remake the world with “you,” the user-consumer-citizen, in mind, fast and fully, and possible to regret it.
The tech-hype darling that preceded generative AI — the metaverse — was a big corporate swing that missed. But too few prognosticators seemed to understand why.
What we wanted, the metaverse told us, was a place where it’s like you’re there. Your eyeballs, that bankable metonym, would be all you needed. You might not be able to afford a house offline, but there was metaverse real estate. The promise was to finally bring bodies, space, presence — the real world — online. But the bet was that people would find all those things less important than ever. This was always a paradoxical solution to our alienation from the environment and each other. And it turned out to be a bad bet, too.
Early commentators overstated the question “Did AI kill the metaverse?,” acting out the perennial Silicon Valley succession drama of one hype cycle overthrowing another. But there is something fitting in large language models succeeding the metaverse as future du jour. Offerings like Horizon Worlds, Meta’s sparsely populated VR social platform, misunderstood what the average person wants from our tech overlords, presenting a yet more uncanny, yet more alienating integration of you and your avatar. ChatGPT, by contrast, offered a tonic to the faceless algorithm in the cloud. It’s human-scale. It’s ostensibly there to do whatever you tell it to do, no more, no less. It will hear what you want and serve that back to you. It domesticates the vastness of search rather than presuming to impose the vision of a whole new plane of existence.
If the hawkers of the metaverse took us in one direction of error in the “for you” equation, the new AI wave presents itself as a corrective in the other direction. The metaverse was a clunky translation of every sci-fi vision of a future in which the real party is in the computer — think Tron or The Matrix. But ChatGPT isn’t here to tell you where the party is. Wherever you are, it is ready to join you: to be your perfect “sidekick,” to make life in your world a little easier. This was Sam Altman’s promise, and it became Bard’s, Bing’s, Meta AI’s, and so on.
Here’s one issue: a year in and counting, consumer-facing AI chatbots are still not great at being for us in the ways Altman promised. As of this writing, the welcome message on ChatGPT reads cheerily, “How can I help you today?” while below the message bar the fine print cautions, “ChatGPT can make mistakes. Consider checking important information.”
ChatGPT and similar AI bots, as we know by now, don’t work like a search engine, which finds sources relevant to a user’s query. Search engines exist to move you along to another site. They’re a middleman for someone else. AI bots get rid of the middleman, and the someone else too. They offer pastiches of existing material in ways that are a lot more useful for some kinds of questions than others. If you want a likely next line of code, great. A template for a cover letter, doable. A grab-bag distillation of ways to unclog your sink, sure. Ophelia’s death scene as Ariana Grande might tell it, there’s plenty of Shakespeare and Ariana out there to remix into a cover version. But if you want something that “goes off and discovers new knowledge for you,” as Altman promised, that’s a bar that can only be cleared if knowledge is defined as not involving sentience — that is, as a response probabilistically synthesized from the existing corpus of information, producing non-sequitur hallucinations where the predictive neural nets meet a knot.
To which, many smart people say: Why not define knowledge that way? The jury is still out on what sentience is. But if we can accept that we still have an edge in judgment over the machines, what ChatGPT & co. give us is knowledge without knowing. What’s at issue here is whether it matters that there was ever someone at the other end of the line — someone who, before we show up to decide whether a given bit of content is true or useful, exercised discernment in creating it. Google Search is also not intended to provide results free of mistakes, but with search results we operate under the assumption of another sentience, someone with motives coherent if unknown to us, behind each link. Each source offers new knowledge about the world, if we judge it reliable — and if we don’t, we’re at least left with new knowledge about someone else. Evidence of what another mind felt was worth putting out there, for love or politics or a Nigerian prince scam or otherwise.
With ChatGPT, “knowledge” is created for us. We’re the only person in the room, and we’re told what we want to hear. What, then, do we really learn?
When New York Times tech columnist Kevin Roose engaged Bing’s AI chatbot in discussion, it latched onto his interest in its own identity, a line of questioning intended to elicit declarations of emotion and desire, and it ended up in a loop of professing love for him. So it goes, if the text of the chat is the sum total of the AI’s clues to what we want from it. By probing the chatbot for expressions of vulnerability and intimate candor, Roose primed it to respond with a mirroring obsession.
AI isn’t the omniscient sidekick, ready with whatever measure of revelation would be useful to you at the moment. It’s a Cubist portrait of the things we have already said and made, that by combining our facets becomes either passing or passing strange.
What do we want from the machines?
If anything, it is we who are for it. It is the thing that is reliably learning from a generative source, which is us. It cannibalizes as it serves.
In 2011, Paul Ford wrote a somewhat tongue-in-cheek essay positing that each media format owes its raison d’être to a fundamental question. Newspaper: What’s going on in the world? Movie theater: How do I kill this rainy afternoon? Retail catalogue: How do I dispose of this income? And so on. The Internet’s heart-question, he proposed, was: Why wasn’t I consulted? In the comments sections and upvotes and follows, in reply-alls and WordPresses, the wish of every person on the Internet was to make their presence known to exactly the extent they want. It was the wish to have a voice with limitless potential, to act and react in real time, in a grand public forum — to be, to feel, seen.
Ford’s essay came out in the heyday of crowdsourcing, of the Arab Spring and Occupy Wall Street, when WikiLeaks was rampant and Instagram was new, when anyone might be forgiven for seeing populism as a top-shelf Internet hermeneutic. Whatever the truth in Ford’s thesis then, two insights have stuck with me since. One, in judging a tech milestone, it’s worth following the ball to the things that only it can do, rather than the Swiss Army knife’s worth of tasks it can augment, optimize, or substitute for. Two, it’s worth asking to what extent the Internet is still a communications technology, animated by promises to listen to its users: Message? Comment? Retweet? Upvote? How many stars?
Today, far more than in 2011, requests for participation come from the algorithm itself. Like what you’re seeing? Want to see a different ad? Tired of these? Is this what you’re looking for? These questions aren’t paths to being seen — they are feedback for the models. Down in the comments section, we are boats borne against the current of bots and ranking. Crowdsourcing had its issues, but where even are the crowds now? Our feeds shape themselves for a forum of one.
This is another part of the for you problem, and why AI compounds it: The more tailored AI is to us, the less we will be able to access something we also want, which is the truly new: the unknown unknown, the discovery, the other — each other.
When we enter “the portal,” as Patricia Lockwood has perfectly identified the Internet, we want to go somewhere. That’s what a portal is for. That is why some of the “For You” pages are called “Discover,” and maybe why Altman used that word in his final vision of what AI will do: it “goes off and discovers new knowledge for you.” The promise of the algorithm is to help us with what we’re already doing. But eventually what we will want is something that surprises.
Don’t get me wrong. We, people, are our own paradox: wanting new but not too new. We seek learning environments with strong patterns, challenging enough that we’ll make mistakes and then be able to course-correct, but not so unfamiliar as to thwart all attempts at prediction. We seem to love a bounded simulation of boundlessness. See the absurdly voluminous discourse around “ludonarrative dissonance,” debating which video games manage to balance prescribed story with open play. See the Disney empire’s influential, if overreaching, immersive theme-park design. Or consider Marvel’s “cinematic universe” that is ubiquitous as fact and meme, offering illusions of endless expansion within constructed limits.
This is the imaginative landscape the algorithms have given us. You liked this, have more of it. Have a spinoff, a sequel, a reboot. Have a branded experience. In the elite fashion industry — which has always moved in cycles — vintage, or “archival,” wear has become a persistent headline trend. The biggest music event of last year, Taylor Swift’s Eras Tour, was an artist faithfully performing her past incarnations. The “multiverse” trope tries, and mostly fails, to imagine a way out of this stuckness with endless variations on a theme.
So far, AI’s creative contributions take this trend to new plateaus, with results somewhere on the uncanny spectrum from absurd to sinister. DALL-E or Midjourney can give you x in the style of y. Deepfakes can make anyone look like anyone else, in magics of transposition and recombination. “Ever wonder what the rest of the Mona Lisa looks like?” a viral tweet last year read. The answer was a clone-stamped landscape with a mountain, a river, and clouds. Roundly mocked, it was another illusion of expansion that missed the point by offering more of the same. AI creations can be plenty seductive, in part because we’re the originators — it made this new thing at our request! But the uncanniness is there in the rhymes and traces of older authorial hands we can no longer see. Creative output is always going to tell a story of making something new under the influence of others, whether by the presence of brushstrokes or the use of tropes recognizable from werewolf erotica. Generative AI doesn’t eliminate those evocations, but it makes their provenance a black box.
The effect is a world shaped around and for us, derivative while at the same time obscuring actual human influences. Video game shorthand has spilled over into how we describe ourselves and others: “main character energy” is a cliché — protagonist me — as is its corollary, the idea of referring to those outside one’s circle as NPCs, or non-player characters. These phrases are memes for a reason. We’re each in our own game-like universe, made of the web of recommendations, “discoveries,” and feeds that follow and contain us.
Something in us knows this is not enough. If the whole world is made for us, it also leaves us alone as the only real person in it. That’s The Truman Show, a cautionary tale more resonant with present-day AI than the nightmare of The Matrix, in which everything is an illusion. We can’t see something new if every vista has been created from the predictions of past vistas, and we can’t feel seen if there is no one new there to see us.
Is the new AI a useful tool? Yes, a tool with many uses. It can help us code faster and be our concierge, our stenographer, and so on. It can turn the value-neutral dials technology has manned since the Industrial Revolution: efficiency, automation. But to answer Paul Ford’s old question: What can this thing do, what is it promising, that isn’t just an optimized version of already extant processes? Its promise is to finally personalize the Algorithm — to fix all its alienation factors, to make it the thing that gives us what we really want. And it will fail in this. It can’t make us feel truly heard and seen because the “for you” funnel spirals us into isolation. And it can’t show us something truly new because it’s not built that way.
In his thought experiment called “the experience machine,” philosopher Robert Nozick argues that we wouldn’t plug into a simulated reality of pleasure because, among other reasons, this “limits us to a man-made reality.” Just as metaverse offerings have mostly misunderstood what it is to be a body in space, AI misunderstands something fundamentally human: the power of encounter.
Let’s recap. The most explicit, sheep’s-clothing promise of the current AI revolution — that it will perfect the age of algorithmic attunement by making it feel like the system is working for you and only you — is belied by its deprioritization, even diminishment, of a foundational part of what we want from our digital institutions: to encounter other people and discover new things.
Why, then, are the people working in AI so obsessed with artificial general intelligence? There’s so much to debate with AI in the here and now, yet an outsized share of the discussion is still about this speculative end goal no one can reliably define, and which many of those working toward also profess to fear. Why?
Because they want what we want, too. The obsession with AGI is proof of concept of how much we’d collectively love for the machines to actually give us something new and someone other — because AGI would be both. The people making this technology hope, with a valence of fear, that through the thing they make they can transcend made things, and get to something that’s not just pastiche. They want emergence. They want to create something for us that, finally, would be only for and in and of itself.
Makers since Pygmalion have wanted this, and maybe since whoever invented children, our original uncontrollable creations. But there’s a specific hunger attached to it this time, I think, because we’re all so inundated with simulation. We want to believe that at the end of all these simulations is the real; that through us, if we press far enough, we find the other.
Every fiction about AI is about general AI. Every fiction about AI is also a cautionary tale, whether mass delusion in The Matrix or unfulfilled romance in Her. But I’m reminded of Renata Adler’s quote on film — that no matter a creator’s intent to either endorse or criticize, “film always argues yes.” Cautionary or not, our AI fables glorify humanity making something that outstrips us and anticipate a new encounter, even if it’s our last.
The car age was also the space age, and also the atomic age. Anything was possible — why not reshape the whole world, reshape humankind itself? Why not have fake food and plastic everything? The bomb might come tomorrow but today we can be gods.
As it turned out, we were not gods. We were people on Earth who now had to deal with nuclear radiation and greenhouse emissions and microplastics. We haven’t transcended ourselves yet.
So here we are, waiting again for transcendence. We’ve seen and read all the cautionary tales about the advent of AGI. But what if it never comes? Then we’re not progressing toward anything — we’re just attuning the Algorithm for better and worse. Rather than growing outward toward the thing that would be so perfectly for us that it would become a mind of its own, we would be spiraling inward — making, out of making, out of making.
In the Jorge Luis Borges story “Tlön, Uqbar, Orbis Tertius,” a cabal of academics produces an entire field of study about a civilization they’ve invented. Our frustrated narrator, an outside scholar, finally ascertaining that it’s fictions all the way down, reflects, “Enchanted by its rigor, humanity forgets over and again that it is a rigor of chess masters, not of angels.”
My purpose here is not to plead for re-bottling the genie of generative AI or hitting rewind on the algorithmic age. It is to wonder why those who see its pitfalls — and even lend credence to a worst-case-scenario future — still want it so badly, making hollow promises to the rest of us along the way. Is more tailoring, a more frictionless world, actually better, or is it another bill of goods? Is this speculative, seductive endpoint something ennobling, or a sci-fi canard, reflecting anxieties irrelevant to the most pressing issue: What is and isn’t actually for us about what now exists?
AI scientists are rigorous chess masters, but we’re past that game. Deep Blue beat Garry Kasparov in 1997, and nineteen years on AlphaGo conquered Go. If the record is anything to go by, we will keep our machines playing our games, with increasingly complex definitions. Each one over and over, until they figure out how to win, until we figure out how to lose.
Keep reading our Spring 2024 issue