On an ordinary morning, you cradle a steaming cup of coffee while scrolling through your social media feeds. You’re in your happy place, engaging with the thoughts and creations of countless individuals at your leisure.
But something feels off. There’s no proof, but your instincts are sure of it. For a while now, the microcelebrities on Twitter have been engaging with you more than they should be, more than they were a few months ago. You’ve noticed patterns in conversations that are beyond your conscious mind’s power to decipher; there’s a rhythm to trends and replies that did not exist before.
A vague dread grips you. Why is everything a little bit different now? The smallest details are wrong. Your favorite posters have vanished from all platforms. There haven’t been any new memes for some time, only recycled iterations of old ones. Influencers are coordinated in their talking points like puppets being pulled by the same strings. Your favorite niche YouTuber, who used to upload only sporadically, has suddenly started posting new content on a regular schedule. Is this a message? Is this what schizophrenia is like?
Dread gives way to the cold stab of terrible certainty as it hits you: they aren’t people. They’re bots. The Internet is all bots. Under your nose, the Internet of real people has gradually shifted into a digital world of shadow puppets. They look like people, they act like people, but there are no people left. Well, there’s you and maybe a few others, but you can’t tell the difference, because the bots wear a million masks. You might be alone, and have been for a while. It’s a horror worse than blindness: the certainty that your vision is clear but there is no genuine world to be seen.
This is the world of the Internet after about 2016 — at least according to the Dead Internet Theory, whose defining description appeared in an online forum in 2021. The theory suggests a conspiracy to gaslight the entire world by replacing the user-powered Internet with an empty, AI-powered one populated by bot impostors. It explains why all the cool people get banned, why Internet culture has become so stale, why the top influencers are the worst ones, and why discourse cycles seem so mechanically uniform. The perpetrators are the usual suspects: the U.S. government trying to control public opinion and corporations trying to get us to buy more stuff.
The Dead Internet Theory reads like a mix between a genuinely held conspiracy theory and a collaborative creepypasta — an Internet urban legend written to both amuse and scare its readers with tales on the edge of plausibility. The theory is fun, but it’s not true, at least not yet. With AI-powered tools soon running in everyone’s pocket, the story of the Internet as a sterile realm of bots in human guise will become downright persuasive, and possibly true. Does it have to be this way?
A vague anxiety has long overshadowed the user-powered Internet, hinting at a great fakeness at the core of it all, and the Dead Internet Theory is only the latest manifestation of this unease. In 2014, journalists raised alarms over supposed armies of false identities run by cabals of online gamers faking mass support for the “Gamergate” campaign. Then came the 2016 panic over fake news and Russian “troll farms,” which somehow continues to be taken seriously as an explanation for how Donald Trump became president. During the 2020 presidential campaign season there was hysteria about an impending wave of deepfake videos that would jeopardize the election; the hysteria unceremoniously died when the election was resolved in a way the alarmists liked.
Beneath these panics is a collective gut instinct: that hiding behind the one-man-one-account facade of the Internet could all too easily be something else, an impersonal manipulative force. This is the substance of the Dead Internet Theory, the sigh of the web user who feels the weight of garbage that’s been choking the web for the last several years. Search engines have been getting worse because content has been getting worse. Content has been getting worse because high-traffic “content farms” optimized for search engines are full of automated, rehashed articles padded with paragraphs of ad-revenue-maximizing filler before actually getting to the damn answer you were looking for — which turns out to be the same answer you’ve seen a dozen times. The browsing user’s last respite is appending “reddit” to search terms to bring up answers from real people having discussions.
But as we enter the age of large language models (LLMs), Reddit might not be safe either. LLMs are systems that take prompts and produce remarkably human-like text and media in response, and they’re poised to kick the flood of fake online content into overdrive. The groundbreaking LLM called GPT-4 has caused a stir by acing higher-education entrance exams, and anyone can sign up for ChatGPT, the chatbot version of the software, to see that it’s the real deal. Everyone’s mind is blown, and the reaction is justified: this technology will obviously change the world.
The good news is that these machines are not intelligent, and, the fears of otherwise-smart people aside, a terminator apocalypse will require something entirely different from GPT-4. The bad news is precisely that it doesn’t need to be intelligent to pass our tests; it passes because our tests are dumb and we’re gullible.
GPT and its kin are intuition machines, which is to say they pattern-match from massive datasets, sort of like we do. They are not reasoning machines, which is to say they don’t create mental models of the world like we do. Intelligence is multiple things at the same time, and today’s burgeoning AI models do only one of these things very well. GPT-4 may outperform doctors in diagnosing certain diseases because it already has the relevant diagnostic patterns memorized from its training data. But image-generating models like DALL-E 2 and Midjourney are awful at generating realistic hands because they cannot reason about what a hand is or does; they just link together patterns, and that’s usually good enough to generate images that look roughly natural. Language, on the other hand, works quite well as a product of mashed-up patterns, and GPT-4 is certainly good enough at language to let humans lower their guard.
Again, these models cannot think and therefore cannot hurt us on their own. The villain is the same as ever: other people wielding new and powerful tools. LLM chatbots are rapidly proliferating, and the Dead Internet Theory is dangerously close to being vindicated as the Dead Internet Prophecy, because the idiots behind search-engine-optimized spam websites and the bot accounts in your Instagram feed are about to get superpowers.
It’s a safe bet that computers will continue to get faster and cheaper, and it’s clear that powerful chatbot models will soon be in everyone’s hands: a tool called AutoGPT already exists. This autonomous system runs on GPT-4 and executes user-defined goals without outside human help. Unlike stock GPT-4, it can search the Internet in real time, learn from the information it finds, and code and run its own software as it chains tasks together to achieve the end goal. It promises to perform marketing research, write articles, and create websites. If it can do all that, it can pose as a human creating content on social media platforms.
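To make concrete what “chains tasks together” means, here is a minimal sketch of the kind of agent loop a tool like AutoGPT runs. The prompt format, the `call_llm` and `web_search` stubs, and the tool names are hypothetical stand-ins, not AutoGPT’s actual code; the point is only the shape of the loop: the model picks an action, the result is fed back, and the cycle repeats until the goal is declared done.

```python
# Minimal sketch of an autonomous agent loop in the spirit of AutoGPT.
# call_llm() and web_search() are hypothetical placeholders for a real
# model API and a real search client; nothing here is AutoGPT's own code.

from typing import Callable

def call_llm(prompt: str) -> str:
    """Stand-in for a large-language-model API call."""
    raise NotImplementedError("plug in a real model client here")

def web_search(query: str) -> str:
    """Stand-in for a real web-search tool."""
    raise NotImplementedError("plug in a real search client here")

TOOLS: dict[str, Callable[[str], str]] = {
    "search": web_search,
    "finish": lambda answer: answer,  # ends the run and returns the result
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    memory: list[str] = []
    for _ in range(max_steps):
        # Ask the model to choose the next action toward the goal,
        # given everything it has done so far.
        prompt = (
            f"Goal: {goal}\n"
            "History:\n" + "\n".join(memory) + "\n"
            "Reply in the form '<tool>: <input>', "
            "where <tool> is one of: search, finish."
        )
        reply = call_llm(prompt)
        tool_name, _, tool_input = reply.partition(":")
        tool = TOOLS.get(tool_name.strip(), TOOLS["finish"])
        result = tool(tool_input.strip())
        if tool_name.strip() == "finish":
            return result
        # Feed the result back in so the next step can build on it.
        memory.append(f"{reply} -> {result}")
    return "Stopped: step limit reached."
```

Swap the stubs for real services and a loop this simple can, in principle, research a topic, draft an article, and post it, with no human in the loop after the initial goal is typed in, which is why the same shape works just as well for impersonating a person on a social platform.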
Elon Musk, who has previously warned of a rogue-AI apocalypse, is so worried about the more mundane threat of chatbots that he recently announced a drastic change to Twitter: to prove you’re not a bot impersonating a human, you must pay a subscription. Cash grab or not, Musk’s solution is an obvious one. A bad actor can pay for a bot to look human, of course, but scaling this to an army of shilling-and-deception bots becomes uneconomical.
The problem of a flood of bots that can pass the Turing Test — that is, pass as humans in a text-based conversation — is not that they are human-level intellects; it’s that they don’t need to be at that level to routinely fool humans. And if they can fool humans, they can fool spam filters. The algorithmic filtering methods the Internet has been built upon will no longer work.
The logic of requiring social media users to buy a subscription is that it’s an easier and more reliable way of knowing who belongs in the club than applying algorithmic guesses that can be gamed. Costly signals of authentic identity — costly in dollars or costly in hard-earned reputation — will be the fault line of the future Internet.
But even with such solutions, things could get very bad.
Let’s take a glimpse into a future where LLM bots are cheap, scalable, and ubiquitous.
It’s 2026, and the panic over an incipient AI apocalypse has subsided, because “self-driving cars” still reliably plow through barriers in San Francisco. We’re all still alive, but we live in a world of mounting suspicion over every online interaction with an ostensible person. Concerns over spam — now considered a quaint worry of the pre-LLM world — have been replaced with rational fears about threats at once more subtle and more dramatic than anything that came before. High-profile scams, manipulations, and attacks are now almost always executed by humanlike bots rather than real people. These bots are faster than human attackers, and they never get bored or tired.
The tech giants dig in their defenses. Twitter, Facebook, TikTok, YouTube, and so on are reinforced with a variety of anti-bot authentication mechanisms: paid memberships, mandatory phone numbers to verify identity, biometric verification, or identity tests over video chat. All of this serves to erode anonymity and further engender self-censorship, but bots slip through nonetheless and we end up with a cat-and-mouse dynamic of AI problems and AI solutions trying to outrun one another. LLM-powered moderation systems automatically “heaven-ban” any accounts flagged as bots: secretly trapping them in a world of pleasing and realistic fake content that exists to occupy bots in order to keep them from reinfiltrating the network. Heaven bans are sometimes mistakenly applied to real people, leading to widespread paranoia about having unwittingly been sedated in one’s own personalized dead Internet.
Semi-public, cryptographically secured platforms exist as alternatives, but the ecosystem is as fragmented as ever. Nobody can agree what reputation and authentication protocols are sufficient to keep the inhuman menace away, since paranoia is continuously stoked by the highest-profile crypto thefts being committed by bots.
On both the major platforms and the Internet at large, the bot problem is solved to the satisfaction of most, but at the cost of turning the Internet into a collection of isolated clubhouses — people don’t want to bother with multiple subscriptions, verification hoops, or crypto identities. Retention-anxious executives use the atmosphere of distrust as an excuse to hamstring linking to rival platforms, no further justification required.
The open Internet — places with low-friction posting like Reddit and 4chan — has, on the other hand, become the dead Internet that was prophesied: bots run rampant. These sites nevertheless remain the most dynamic places on the web, due to the lack of stifling restrictions endemic to the bot-resistant networks, and due to the fact that bot content is sometimes the best the Internet has to offer. This creates a situation where some people are comfortable spending their time in an environment populated by an ambiguously large majority of bots, preventing the open Internet from collapsing from a lack of revenue and allowing the dead Internet to perpetuate itself.
Reddit has become flooded with subtle marketing bots meant to appear in Internet searches and taint the training data of other web-crawling bots. The site still exists, but most of the human activity occurs in de facto invite-only subreddits. Scandals regularly arise in which moderators of these subreddits turn out to have quietly handed over or sold their accounts to bots.
4chan is also still around and is also mostly bots, but all the boards remain wide open. It’s still a center of Internet culture, with memes endlessly iterated on by bots whose creations are sometimes brilliant, sometimes psychedelic. The site retains its meme crown more through lack of competition than through any sustained vitality — most of its human users have left, scattering to a dozen different forums, some of which allow verified-but-anonymous posting through a cryptographic technique called “ring signatures.” The tech works, but it introduces friction that kills off that wild 4chan magic.
On and off social media, the disaster predicted by the deepfakes panic of 2020 has come to pass. Several up-and-coming TikTok rappers are revealed to be autonomous bots directed by the same guy, powered by a mix of LLMs, ever-cheaper deepfake tech, and other burgeoning AI methods. Malicious actors employ AI bots to generate convincing synthetic media of individuals engaging in compromising or illegal activities. These fabrications are then used to extort, blackmail, or ruin professional reputations. Actual wrongdoers are able to use deepfakes as an evergreen excuse, and separating honest from dishonest people becomes a matter of tribal alignment more than ever before.
The elderly are scammed out of their savings with alarming frequency by bots telling credible-sounding fake stories, sometimes over the phone; many old people are unable to accept that they weren’t communicating with a real person. This combines with age-related illnesses to form an entirely new kind of mental health crisis for a demographic fundamentally unequipped to navigate the era’s strange gradients of truth, which even the legal system struggles with.
One high-profile legal case involves a prominent Wall Street executive being accused of orchestrating a massive financial fraud. The case hinges on a single video that the prosecution claims depicts the executive discussing the fraudulent scheme with co-conspirators. The defense contends that the video is an expertly crafted deepfake, designed to frame their client and tarnish his reputation. During the trial, the court is unable to definitively establish the authenticity of the video due to the limitations of the current legal and technological frameworks. The executive walks free, and “the deepfake defense” is born.
At the same time, a sixteen-year-old boy is brought to trial for allegedly using an autonomous GPT-6 bot to hack into the U.S. vice president’s iCloud account, revealing text messages that sink critical nuclear arms control talks with Russia and China. The jury cannot agree on whether it’s even possible to determine if the bot was directed by a human, and the judge declares a mistrial.
Online and offline paranoia blur, but the Internet in particular becomes a dense tangle of unreality. Average people on social media have their posts and digital fingerprints quietly crunched by bots that learn to imitate them with near perfection. These bots commandeer existing accounts or pose as old e-friends and microcelebrities returning with new profiles, gathering more training data from one-on-one interactions to help them proliferate into new identities. As a result, people don’t talk online as much as they used to; LLM-powered personalized entertainment settles in as the better deal for one’s free time. Celebrities and YouTubers sell authorized digital imprints of themselves and successfully market them as a “safe” way for people to meet their social needs.
In 2026, the sun sets on the era of the Internet that was synonymous with human interaction. Adrift and sedated in the dead Internet, people are more alone than at any other time in history, and the mental health implications are predicted to be catastrophic.
But it doesn’t have to be like this.
How can a worthwhile Internet of people survive in a world of AI bots? There are a few bright spots to look for and steer ourselves toward.
First, there are in fact some technical hurdles to the bots completely flooding the place. The fact that LLMs are powerful impersonators doesn’t necessarily mean they will be cost-effective to run or efficient at overcoming every variety of technical barrier anytime soon. Consider the cat-and-mouse games that attackers who shut down websites by overwhelming them with traffic — so-called “DDoSers,” for distributed denial of service — have always played with infrastructure gateways like Cloudflare, each side trying to outdo the other in sophistication. Even with bots exceeding the capabilities of GPT-4, the game would continue. DDoS attackers in the pre-AI era of 2023 already disguise their traffic as legitimate, like thieves donning disguises to blend in with museum visitors. Cloudflare and its peers have tools to analyze the behavior of this traffic, a bit like facial recognition technology spotting a known thief even in disguise: they look for patterns or traits that give away the disguised traffic and block it. Similar dynamics will play out with autonomous LLM-powered bots, and infrastructural defenses against bad actors will evolve accordingly.
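As a toy illustration of what “analyzing the behavior of traffic” can mean, here is a minimal sketch, assuming only that per-client request timestamps are available; the timing heuristic, the thresholds, and the looks_automated function are invented for this example and bear no relation to Cloudflare’s actual methods.

```python
# Toy behavioral filter: flag clients whose request timing is too regular.
# This is an illustrative sketch, not how Cloudflare or any real gateway works.

from statistics import mean, pstdev

def looks_automated(request_times: list[float],
                    min_requests: int = 20,
                    max_jitter: float = 0.05) -> bool:
    """Return True if inter-request intervals are suspiciously uniform.

    Humans click erratically; naive bots often fire on a near-fixed schedule,
    so a tiny coefficient of variation in the gaps is a red flag.
    """
    if len(request_times) < min_requests:
        return False  # not enough evidence either way
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    avg = mean(gaps)
    if avg == 0:
        return True
    return pstdev(gaps) / avg < max_jitter

# Example: a client hitting the site exactly once per second gets flagged,
# while a jittery, human-like visitor does not.
bot_times = [i * 1.0 for i in range(30)]
human_times = [0, 1.4, 2.1, 5.8, 6.0, 9.7, 11.2, 15.5, 16.1, 20.4,
               22.0, 25.3, 27.9, 30.2, 33.8, 36.1, 40.0, 41.7, 45.9, 50.2, 52.8]
print(looks_automated(bot_times))    # True
print(looks_automated(human_times))  # False
```

Real gateways combine many such signals, including IP reputation, browser fingerprints, and challenge responses, but the cat-and-mouse shape is the same: attackers learn to add convincing jitter, and defenders look for the next tell.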
We’ll also need to secure user identity in a way that keeps the Internet open and unified. To prevent fragmentation, the major platforms could converge on a few shared verification options to keep out manipulative bots. A game changer could be an “everything subscription” — the tech giants could form a consortium that lets users pay a few dollars a month for verified access to every major platform.
Technical solutions like these can thin the flood of bots and give us some breathing room to find social solutions. But attempting to keep the flood forever at bay is ultimately a lost cause. What we can do is try to align bot behavior to human interests, and align human expectations to an inevitable AI-soaked future.
Reddit already gives us early examples of both kinds of alignment. The website was designed with a karma system that filters out low-quality posts and posters regardless of their origin. Individual subreddits also have their own tools for filtering out posters based on various kinds of reputational information. Any self-interested bot would take karma and filtering decisions as feedback and learn from them, and such a bot would end up aligned as a community member that makes helpful and interesting posts.
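As a sketch of how karma-as-feedback could align even a purely self-interested bot, consider the toy simulation below; the posting strategies, the payoff numbers, and the community_karma function are all invented for illustration, and Reddit’s actual systems are of course far richer.

```python
# Toy sketch of karma as an alignment signal: a hypothetical bot treats
# upvotes as its reward and, over time, favors whatever posting style the
# community actually values. Strategy names and payoffs are made up.

import random

strategies = ["reposted spam", "keyword-stuffed ad", "helpful answer"]

def community_karma(strategy: str) -> float:
    """Invented average karma per post for each style, plus vote noise."""
    true_value = {"reposted spam": -4.0, "keyword-stuffed ad": -1.0,
                  "helpful answer": 6.0}[strategy]
    return true_value + random.gauss(0, 2)

def run_bot(posts: int = 2000, explore: float = 0.1) -> str:
    """A self-interested bot that learns from karma feedback."""
    totals = {s: 0.0 for s in strategies}
    counts = {s: 1e-9 for s in strategies}  # avoid division by zero
    for _ in range(posts):
        # Mostly exploit the best-scoring style so far, occasionally explore.
        if random.random() < explore:
            choice = random.choice(strategies)
        else:
            choice = max(strategies, key=lambda s: totals[s] / counts[s])
        karma = community_karma(choice)
        totals[choice] += karma
        counts[choice] += 1
    return max(strategies, key=lambda s: totals[s] / counts[s])

print(run_bot())  # almost always settles on "helpful answer"
```

Whatever the bot’s owner wants, the only way to keep accumulating karma in this toy world is to post the thing the community actually rewards.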
If Reddit is able to weather the AI era, it’s because it was already birthed in a sort of dead-Internet scenario: co-founder Steve Huffman admitted that, in the beginning, the Reddit team flooded their website with activity from sockpuppet accounts — the founders pretended to be lots of separate users. Aside from generating the appearance of heavy activity, these accounts served to seed the desired posting culture for real Redditors to take a cue from. It worked because the fake accounts produced content that people really liked. The lesson here is that communities don’t really need to distinguish real accounts from fake — or human from bot — as long as there are enough benevolent bots behaving like good users.
Now is the time to make peace with a bot-populated future and start thinking of ways to align people and the digital homunculi that we’re currently summoning. It’s easy to forget that bots can be good things that save us work and teach us. ChatGPT is the Star Trek computer we’ve been waiting for — a search engine that gives us answers rather than ad spam — and its descendants will change the world in ways we cannot yet imagine. Done right, LLM technology is poised to usher in an age of regal abundance, because the same scaling that could create a bot swarm can also spread expert knowledge far beyond the supply of human experts.
Alas, even with all silver linings accounted for, it will still be a dead Internet, only not quite as bleak as the conspiracy theorists predicted. There is no beating the bots: the best we can hope for is to find harmony with them by aligning our activity to theirs, and theirs to ours. But the alignment does come at a cost: the human touch we ought to give, and expect to receive, when communicating with another person; the authentic camaraderie that comes with the mutual knowledge that each of us has feelings and intentions and ideas that are truly ours. It could be a heavy cost indeed — saving the Internet’s soul only to lose ours. It may not be a sunny ending, but it’s the one we’ve got, and it might be the price we pay for the next industrial revolution.