Swiping clear of the lock screen and through to one of four endless-scroll apps, you are thirty yards of thumb-strokes down your feed when you pause. Some tantalizing stimulus of Hillary Clinton’s face, 11k reposts, and “Liked by [Your Most Insane Friend]” beckons you forth. Turning up the volume, you watch a deepfake of the former secretary of state calling for the mass execution of Southern whites. The comments — there are thousands — hit similar talking points. “These bots are pretty good,” you say to yourself. Quickly bored, you scroll on to find another doctored video. This time Justin Trudeau is curb-stomping a puppy. More bots, more talking points.
This dystopian fantasy, we are told, is what the average social media feed looks like today: a war zone of high-tech disinformation operations, vying for your attention, your support, your compliance. Journalist Joseph Bernstein, in his 2021 Harper’s piece “Bad News,” attributes this perception of social media to “Big Disinfo” — a cartel of think tanks, academic institutions, and prestige media outlets that spend their days spilling barrels of ink into op-eds about foreign powers’ newest disinformation tactics. The technology’s specific impact is always vague, yet somehow devastating. Democracy is dying, shot in the chest by artificial intelligence.
The problem with Big Disinfo isn’t that disinformation campaigns aren’t happening but that claims of mind-warping, AI-enabled propaganda go largely unscrutinized and often amount to mere speculation. There is little systematic public information about the scale at which foreign governments use deepfakes, bot armies, or generative text in influence ops. What little we know is gleaned through irregular investigations or leaked documents. In lieu of data, Big Disinfo squints into the fog, crying “Bigfoot!” at every oak tree.
Any machine learning researcher will admit that there is a critical disconnect between what’s possible in the lab and what’s happening in the field. Take deepfakes. When the technology was first developed, public discourse was saturated with proclamations that it would slacken society’s grip on reality. A 2019 New York Times op-ed, indicative of the general sentiment of the time, was titled “Deepfakes Are Coming. We Can No Longer Believe What We See.” That same week, Politico sounded the alarm in its article “‘Nightmarish’: Lawmakers brace for swarm of 2020 deepfakes.” A Forbes article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. These stories, like others in the genre, gloss over questions of practicality.
Chroniclers of disinformation often assume that because a tactic is hypothetically available to an attacker, the attacker is using it. But state-backed actors assigned to carry out influence operations face budgetary and time constraints like everyone else, and must maximize the influence they get for every dollar spent. Tim Hwang, a research fellow at the Center for Security and Emerging Technology, explains in a 2020 report that “propagandists are pragmatists.” As Hwang argues, if we are to properly analyze how bad actors adopt technology, we must ask practical questions: “Why would a disinformation campaign choose deepfakes over more crudely made fake content that is sometimes equally as effective? What kinds of actors are likely to adopt these advances for malicious ends? How will they use them?” Yet these sorts of questions remain largely untouched in disinformation exposés.
If the danger is so immediate, why have we seen so few high-profile deepfake attacks, even after a U.S. presidential election, a pandemic, and a massive Russian land invasion? The answer is that synthetically generated media is simply not worth it to bad actors. It is too hard and too expensive to buy thousands of dollars’ worth of processors and graphics cards, recruit and pay skilled machine-learning engineers, spend weeks training models, and script the speeches for the deepfake facsimiles.
Contrary to Big Disinfo’s predictions, the Kremlin’s revealed preference is for good old-fashioned physical deception. In February 2022, only days before its invasion, Russia apparently sent a car carrying human cadavers over an IED and then blamed the attack on Ukraine. This is no different from the tactics the Soviet Union used at the outbreak of World War II, when it shelled its own villages near the Finnish border as a pretext for invading Finland. Revealing deceptions like these requires difficult investigative work. It took a week to disprove the staged IED attack, by which time Russia had invaded Ukraine. By contrast, a cartoonish deepfake of President Zelensky that made the rounds online in March 2022 was debunked within minutes. Artificial intelligence has not conjured the truth-killing march to the post-fact wasteland that we’ve been warned about.
That reassurance does not dispose of the potential dangers of Russia’s exploitation of ranking algorithms and bots. Since 2016 there have been countless pieces on social-media disinformation to the tune of, as one New Yorker article put it, “How Russia Helped Swing the Election for Trump.” But that particular essay declined to analyze the data that would be required to support its claim. And when political scientists actually did crunch the numbers, they found that Russian influence operations on Twitter in the 2016 presidential election reached relatively few users, most of whom were highly partisan Republicans. Further, they wrote in Nature Communications, there was “no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.”
And who can forget Cambridge Analytica, the consulting firm that attempted to sway the outcome of the 2016 election using Facebook? Characteristic of the media frenzy, the Guardian ran a piece profiling the person who claimed to have created a “psychological warfare mindf*** tool.” But an investigation by the journal Nature revealed that Cambridge Analytica’s impact on voters was virtually nonexistent. And an audit by the U.K.’s Information Commissioner’s Office found that the company used no original technologies or methods. Cambridge Analytica’s only real skill was in selling “psychographic” sorcery.
Still, media outlets effectively ran marketing collateral for Cambridge Analytica and Facebook, treating their political influence as proven fact. Corporate marketing teams could hardly cook up better P.R. themselves. If advertisers think Facebook can swing an election, surely they will use it to sell their toys or clothing or “President George W. Bush Teaches Authentic Leadership” MasterClass subscriptions. Ironically, Big Disinfo relies on corporate propaganda to advance its claims that Americans are being duped by foreign propaganda.
Perhaps outlets like the Guardian are just taking a precautionary approach to a threat that is so far unrealized but may yet prove severe. The RAND Corporation and its sister institutions made critical contributions to mankind by gaming out nuclear war scenarios with extreme care. Computational propaganda scenarios, on the other hand, concern small, poorly understood, and even nonexistent threats. Reports warning of computational propaganda’s dangers ignore basic questions, such as whether bad actors have the incentive to deploy these technologies, much less the capability. This willful blindness is a far cry from the measured caution of nuclear deterrence.
Pundits today have more of an incentive to push apocryphal claims than to scrutinize them. Prestige institutions profit from new, amorphous threats, because political leaders will want to be briefed on the threats and call on the supposed experts. Academic researchers churn out papers sensationalizing the scale and scope of disinformation threats to win recognition from the media. The self-interest of news outlets in perpetuating the disinformation mythos is obvious: If readers can’t trust whatever they see online, they will have to subscribe to get the truth.
Rather than identify causal links and track threat actors’ capabilities, Big Disinfo ups the ante, with each thinkfluencer cribbing more scenes from Neal Stephenson novels than the last. But too many digital apocalypses have been falsely prophesied for us to continue evaluating disinfo operations by reading the entrails of the latest corporate press release or academic vision paper.
Countering threats to democracy matters, but so does being sober about them. Baseless catastrophizing may pose its own risk. When many Americans believe that election results now come with an asterisk, they may also come to feel that the American experiment is becoming a sham. That asterisk should be added only with much better evidence than we’ve been given.
That Western democracies face serious threats from foreign influence needs no argument, but the scope and nature of these operations are murky and constantly changing. If reporting is more concerned with narrative than with reality, it becomes a kind of misinformation in its own right.