The Refs Are Working Us

Fact-checking used to be how journalists policed themselves. Now it’s how they police everyone else.

“Not true, Governor Romney.”

President Barack Obama, widely considered to have lost his first debate against Mitt Romney thirteen days earlier, was eager to defend his record. But Romney, having returned to familiar territory, was unwilling to concede the point.

“In the last four years,” Romney had said, “you cut permits and licenses on federal land and federal waters in half.” Unsatisfied with Obama’s denial of this point, Romney kept pressing.

“So how much did you cut them by?”

“It’s not true,” Obama said again.

“How much did you cut them by, then?”

“Governor, we have actually produced more oil on — ”

“No, no, how much — ”

The crossfire eventually yielded to a brief substantive exchange on the president’s energy policies before the debate continued on to other topics. Within a few hours, all the major outlets — the Washington Post, the New York Times, CNN — were ready with “fact checks” assessing the validity of both candidates’ claims. (The verdict was mixed.)

Nothing about this exchange or the media coverage it generated will strike the reader as out of the ordinary — aside perhaps from the fact that the 2012 contest could still feature such arcane policy debates, unlike what passes for political debate today. The ritual of fact-checking politicians’ statements is now so routine as to be hardly worth pointing out. But political fact-checking wasn’t always so commonplace. Even in 2012, it was still such a new and burgeoning subgenre of journalism that it was the subject of its own coverage and commentary.

Since then, political fact-checking has exploded into a global industry. In addition to legacy media outlets like the Post and stalwarts of the field such as FactCheck.org and PolitiFact, startup media companies and new organizations wholly devoted to the practice have cropped up around the world, adding up to more than 400 fact-checking websites in total as of last year. No longer limited to traditional media, fact-checking has developed into an integral aspect of online content moderation, used by Internet platforms such as Meta.

Defenders of the new style of fact-checking insist that it preserves the best of the old traditions of factual accuracy and adversarial journalism for a new, radically different media and political environment. Critics, by contrast — though not always on the political Right — argue that because fact-checking now means checking not just politicians’ speech but online discourse in general, it looks less like a benign public service than manipulation of public opinion.

Yet both the harshest critics and the most vocal proponents of the new fact-checking often misunderstand its significance — the extent to which its rise represents a shift in the ways the norms and practices of modern journalism get deployed.

Understandably concerned about the crisis of public knowledge, political commentators today often blame the Right for what they call “working the refs.” The idea is that, like athletes who try to win games by hassling the umpires, conservative troublemakers complain about unfair treatment from media platforms to get what they want. The Right, meanwhile, argues that, rather than impartial arbiters of fact, the refs are systematically biased against them, and so they must be “worked” to gain a level playing field to begin with. But the ultimate problem today is not so much that people have started “working the refs” (although of course they sometimes do) or that the refs are biased (although of course they often are). The problem, rather, is that media outlets — along with the scientific, medical, academic, and governmental institutions that together comprise the institutional infrastructure of our public epistemology — started thinking of themselves as refs in the first place.

In recent decades, journalistic institutions have shifted from preoccupation with ensuring their own credibility in the eyes of the public to concern about the credulousness of the public. In this way, the traditional practice of journalistic fact-checking has effectively been repurposed. Where once fact-checking was for internal quality control, a tool for journalists to police themselves, it has instead become a tool for policing public discourse. This shift from informing the public to policing it has not only invited the inevitable accusation of bias — that the “refs” are making bad calls; it has also hampered journalism’s ability to fulfill its role as an indispensable resource for democratic debate.

As journalists seek to restore the credibility of their profession, they would do well to look inward rather than outward, and consider what it is that lent them their cultural authority in the first place.

What Fact-Checking Once Was

Fact-checking has long been an integral part of the journalistic process. After the heyday of sensationalist “yellow journalism” in the late nineteenth century, America’s journalistic institutions sought to shore up their legitimacy by improving the quality and reliability of their products. In the 1920s and 1930s, major national magazines such as Time and the New Yorker began to hire people — usually young women — to work in specialized fact-checking departments, while newspapers increasingly emphasized fact-checking as part of the editorial process for reporters and copy editors.

Journalistic concern with accuracy was of course not entirely new in those days. As Barbara Shapiro notes in A Culture of Fact, as far back as the seventeenth century, English newspapers, though typically (and often flagrantly) partisan, nevertheless touted the reliability of their reporting with such titles as The True Informer and The Impartial Intelligencer. But it was only in the early twentieth century that fact-checking emerged as a discrete, formalized task, central to the practice and self-understanding of journalism.

In addition to improving the quality of journalism, fact-checking — and the associated norms of objectivity and professionalism — also echoed and reinforced Progressive Era ideology. Indeed, the distinction between fact and opinion implied by the emerging journalistic practice was seen by some as complementing the progressivist notion that governance was a purely technical matter to be administered by disinterested experts — as distinct from the messy realm of politics, where subjective opinions and special interests openly clashed. These ideas were articulated by prominent progressives, some of whom were themselves journalists.

Thus Walter Lippmann, a founder of The New Republic, argued that democracy, with its ideal of “omnicompetent,” self-governing citizens, was a fundamentally outmoded and inadequate form of governance. The problems society faced in the modern era were simply too various and complex, and public opinion was too irrational to begin with. In place of government by the people, he advocated public administration by apolitical technocrats. Lippmann insisted that public opinion could still play a constructive role in this scheme — but only when “managed” by journalistic institutions, which would not only convey reliable information but also engage in the “manufacture of consent.”

The journalistic ethos that emerged in the mid-twentieth century shied away from Lippmann’s disquieting elitism, and succeeded in establishing practices that, on the whole, not only produced high-quality journalism but also won considerable public support. A key norm that emerged during this period was fairness and balance — the idea that journalists should not only strive for impartiality in reporting facts but also cover all relevant sides of a given issue. The post-war decades were a period of extraordinarily high levels of public trust in a range of societal institutions, including the media — a kind of “golden age” of journalism often invoked nostalgically today by commentators across the political spectrum.

Yet at the time, there were actually many critics who denounced mid–twentieth century media culture as a facade. For William F. Buckley, Jr., the mainstream media was the mouthpiece of a liberal monoculture that espoused a conformist worldview insufficiently critical of Soviet Communism and overly favorable to “radical social experimentation.” “There never was an age of conformity quite like this one,” Buckley wrote in the first issue of his magazine National Review in 1955, “or a camaraderie quite like the Liberals’.”

About a decade later, on the other side of the political aisle (and the other side of the Atlantic), a young Jürgen Habermas also criticized the media environment of “welfare-state mass democracy.” A protégé of the Marxist Frankfurt School who would go on to become a preeminent theorist and advocate of liberal democracy, Habermas argued that what presented itself as a free and fair public sphere for critical debate was in reality “a pseudo-public sphere” that had been “hollowed out by the mass media” and had “assimilate[d] elements of advertising.”

A similar critique of mass media would later get picked up, embellished, and popularized by Noam Chomsky and Edward Herman in their ironically named book Manufacturing Consent.

That this kind of media critique was once lodged by Left and Right alike is often forgotten today, when many have come to believe that Americans’ relationship with the media was once harmonious, and that only conservative troublemakers could have reason to disturb it.

Speaking Truth to Power

The critiques of both Buckley and Habermas contained elements of truth. And there is no doubt that what has since been termed “stenographic” journalism was in practice often characterized by an excessively cozy relationship between media outlets and the powerful figures on whom they reported. This, after all, was the era of the “Georgetown set,” when senators, newspaper editors, and presidential candidates clinked cocktail glasses in the living room of John F. Kennedy’s N Street home.

Yet in the midst of the international crises and domestic social turmoil of the 1960s, a new style of journalism rose to prominence. While preserving the traditional emphasis on objectivity, fairness, balance, and fact-gathering, leading journalistic institutions came to see themselves not so much as stenographic recorders of fact but as bulwarks against corruption. Far from simply toeing the government line, the papers of record began actively challenging both the policies and integrity of the powers that be.

Thus in 1971 the New York Times began publishing portions of a leaked trove of classified documents popularly known as the Pentagon Papers, which detailed the history of U.S. military involvement in Vietnam. According to a 1996 Times article, publication of the Pentagon Papers revealed “that the Johnson Administration had systematically lied, not only to the public but also to Congress, about a subject of transcendent national interest and significance.” And then, of course, there was the blockbuster investigative reporting by Time, the New York Times, and especially the Washington Post that revealed the corruption of the Nixon administration and culminated in the president’s resignation.

This adversarial style of journalism was not invented whole-cloth during the 1970s. Just as checking copy for errors had been practiced well before it was formalized as a discrete task, investigative skepticism and scrutiny of the powerful had also long been part of journalism’s self-image. As Paul H. Weaver argued in a 1974 essay on the significance of Watergate for the profession of journalism, press coverage of the Nixon scandals was “in accord with a venerable journalistic tradition that dates back to The New York Times’ exposé of the corrupt Tweed Ring in 1871.”

But the Watergate era did mark a turning point. The adversarial posture became — and has since remained — central to the self-image of professional journalism, with the stories of Bob Woodward and Carl Bernstein becoming the stuff of legend. The adoption of that posture was an important and perhaps inevitable corrective to the elitism of the preceding era. But its success was arguably made possible by — because it was able to draw on — the public capital the profession had accrued during the stenographic years, when journalistic institutions had been widely perceived as both competent and trustworthy.

The Debunking Shift

The kind of fact-checking on display during the Obama–Romney presidential debates was, on one level, continuous with the existing practices and norms of American journalism, in both its stenographic and adversarial forms. By 2012, we had almost come to expect that journalists would not simply report on presidential or other political debates but also, where necessary, set the record straight on matters of fact. Journalists were no longer simply reliable and trusted sources of information; they spoke truth to power.

In one particularly controversial moment during the second debate, the moderator, CNN news anchor Candy Crowley, interrupted the argument. Romney was pressuring Obama about his reluctance to characterize an attack that killed American diplomats in Benghazi, Libya, as an “act of terror.” Crowley wanted to set the record straight: Obama had “in fact” called the attack an act of terror in public remarks delivered from the Rose Garden, she said — to which Obama responded, “Can you say that a little louder, Candy?”

Moderator Candy Crowley with President Barack Obama and Republican nominee Mitt Romney at the presidential debate on October 16, 2012

Predictably, Crowley’s intervention was met with outrage by Republicans who not only disputed her factual assertion but also accused her of violating the journalistic norm of impartiality. Then-Rep. Jason Chaffetz told Crowley the day after the debate that “it’s not the role of the moderator to say, ‘Mr. President, you’re right’ or ‘Gov. Romney, you’re right,’” while Rush Limbaugh wryly derided the move as an “act of journalistic terror.”

The critics had a point. President Obama’s language was, in fact, more ambiguous than Crowley implied. In saying that “no acts of terror will ever shake the resolve of this great nation,” Obama had strongly suggested that the Benghazi attack was an “act of terror” while at the same time not saying so directly. This at a moment when Republicans had been hammering the president for failing to do just that — frustration that got channeled into Romney’s adversarial questioning.

More importantly, Crowley’s interruption did go beyond the traditional norm of impartiality, regardless of whether she got the facts right. By taking sides in a dispute over Obama’s semantics, and by extension their foreign policy stakes, she was not simply reporting the facts as she saw them, but actively intervening in a political argument. She was acting less like a reporter and more like a referee. This invited the accusation that she was not acting impartially, in just the same way that sports referees often get accused, rightly or not, of making bad calls.

To be sure, debate moderators have always had a referee-like function distinct from that of a typical beat reporter. But the fact that the Crowley episode generated as much controversy as it did helps to illustrate that the role the press was assuming during the 2012 presidential election was the leading edge of something quite different from mid-century stenographic or Nixon-era adversarial journalism. In hindsight, we can see the emergence of a new and distinctive public role for journalism — one that has since come into its own as a fixture of our common life.

Not only has political fact-checking grown into a global industry since that time, but it has also expanded beyond assessing the accuracy of politicians’ claims to combating digital misinformation and fake news. In effect, fact-checking has now become at least as preoccupied with policing the public’s discourse as with politicians’. Lucas Graves and his coauthors have recently described this as a “shift from holding political actors accountable to policing anonymous, outlandish, and often trivial social media misinformation” — a shift from “political fact-checking” to “debunking.”

This trend has been accelerated by non-journalistic institutions — in particular, social media companies — seeking to outsource anti-misinformation efforts, effectively transforming fact-checking into online content moderation. Meta, for example, has a third-party fact-checking program in which participating organizations, certified by the International Fact-Checking Network, investigate claims that have been “surfaced” as “potential misinformation.” We are a long way from the days of in-house departments at news organizations simply vetting facts in reporters’ stories.

The Rise of Discourse Policing

The practice of fact-checking as it emerged in the twentieth century — along with norms such as objectivity and balance — originally evolved to serve two distinct but interrelated functions. The first and most obvious was to improve the reliability of journalistic products: news stories, political coverage, and so on. That is to say, fact-checking was a form of quality control within journalism. The second function, however, was to shore up public trust, which is only partly a matter of producing good products. It is also a matter of being trustworthy — which is as much about character as it is about competence.

In effect, internal fact-checking norms demonstrated to the public that journalists held themselves to high standards, rather than simply trying to advance their own interests. They did not always live up to those standards, of course. But even, and perhaps especially, when individual journalists erred, the maintenance of these standards — for instance, through public corrections or even disciplinary action — showed that, as members of a profession, journalists were not just competent but also tried to exhibit characteristics that rendered them worthy of the public’s trust.

With the advent of adversarial journalism, traditional fact-checking came to serve, or at least support, yet a third function: that of holding those in power to account, by scrupulously investigating and reporting on the activities of government and other powerful institutions.

Yet today’s external fact-checking serves almost the opposite of these three functions.

First, it is concerned with the quality not of journalistic products but of public discourse. Most legacy outlets still adhere to some version of the traditional journalistic norms, and a few of them still do the time- and resource-intensive work of good journalism — fairly reported, thoroughly checked, and reliably informative — exceptionally well. But they are exceptions. And in today’s digital media environment, incentives that hadn’t existed in earlier eras — the pressure to chase clicks, say, or the unprecedented ease of correcting articles after they’re published, even by stealth — have combined to pull journalism away from its erstwhile preoccupation with internal quality control. The result is that very few media institutions now produce high-quality journalism.

Second, external fact-checking does little to shore up public trust, and much to undermine it. Rather than demonstrating to the public that journalistic institutions hold themselves to high standards, external fact-checking seeks to demonstrate that others have failed to uphold them. Traditional journalistic norms are no longer focused inward, which builds trust, but are projected outward, which erodes it. In this way, what was once a form of quality control internal to the practice and profession of journalism has become a tool that journalistic institutions can wield for the purpose of policing public discourse.

And this runs counter to — or at least is in tension with — the third function of traditional fact-checking, the adversarial ethos of speaking truth to power. Of course, political figures remain important sources of false and misleading claims, and should thus be subject to the judgment of journalists. But digital disinformation can come from anywhere: a news outlet, the government, a Hollywood celebrity, a “bot,” a foreign agent, a random social media influencer, or an ordinary citizen. All of this content then becomes so much grist for the debunking mill, sometimes distracting today’s external fact-checkers from the behavior of powerful figures and institutions.

To be sure, advocates of political fact-checking defend the practice precisely because they think it does keep the powerful in check. In our brave new digital world, the argument goes, we need impartial referees to separate factual information from its counterfeit to prevent public debate from being manipulated by the powerful for their own ends. In support, this interpretation can point to the fact that digital disinformation really is a big problem — and a novel one in a lot of ways — made all the more troubling by deep political polarization and ideological sorting. Moreover, various partisan, special, or even nefarious interests can and do manipulate information flows for their own purposes.

But what this interpretation overlooks, by focusing on public discourse in general, is that one of the interests that seeks to manipulate information flows for its own purposes is the state — sometimes working closely with corporations, as we saw during the Covid-19 pandemic. (The recent Supreme Court case Murthy v. Missouri centered on whether the Biden administration’s pressure on social media companies to make certain Covid-related content moderation decisions amounted to coercion that violated the First Amendment. In June, the court ruled that the plaintiffs lacked standing to sue, leaving the substantive legal questions unresolved.)

The posture journalists must assume when engaging in the kind of discourse policing that external fact-checking has evolved into leaves them at best ill-suited to hold powerful expert institutions to account, and at worst vulnerable to being manipulated by their sources.

Checking Whom?

Consider, for instance, media coverage of the confusing and often misleading public statements about the effectiveness of masks during the Covid-19 pandemic. Initially, media coverage, accurately reflecting official U.S. policy, reported that there was little evidence that masks worked, despite the fact that they were advocated by some experts and were already being used in several countries. Only a few weeks later, official U.S. policy notoriously changed, and media coverage flip-flopped, reporting instead that there was a consensus that masks did work after all, despite the fact that some experts continued to demur.

Public views that differed from or even simply questioned aspects of U.S. masking policies were denounced as “misinformation.” Vox was soon “explaining” that “performative masculinity” and sexism were the reasons why many Americans distrusted the science of masks. (Ironically enough, that same story regurgitated the now generally discredited claim that wearing “gaiter” style face coverings, popular with athletes during the pandemic, was “worse than not wearing a mask.” The claim was based on a single, controversial study.)

A conventional explanation for the policy change on masks, given by prominent experts such as Anthony Fauci, was that the science itself had changed: we had come to learn through the ordinary process of scientific inquiry that masks — including cloth masks — did, in fact, work. This rationale was dutifully reported by media outlets, and was implicit in the “information vs. disinformation” framing that mainstream media outlets used to present the issue. But this rationale was always implausible, not least because three months is an awfully short amount of time for a scientific consensus to form. In reality, the underlying science of masks had not really changed in any fundamental way between winter and spring 2020 — something a good journalist could have discerned (and a few actually did).

What really changed was not so much the science of masks but the official expert recommendation about what policy was called for in light of the science. Based on a host of factors, including a recognition of the role of asymptomatic spread, experts decided that masks, including even low-quality cloth ones, were a worthwhile intervention after all. This was a policy recommendation — a prudential judgment about whether the benefits of an intervention outweighed the costs — not a scientific fact, though it was of course informed by facts. And prudential judgments can and should change in light of circumstances, which rightly happens much more quickly than the process of consensus formation in science.

A similar and perhaps even more demoralizing series of events played out in the debate over Covid origins. In this case, almost immediately after the emergence of Covid-19, prominent experts and expert institutions began to promote the implausible idea that there was no real uncertainty over the origins of the novel pathogen. In February 2020, twenty-seven scientists wrote a now infamous letter published in the Lancet “to strongly condemn conspiracy theories suggesting that COVID-19 does not have a natural origin.” By March, a letter published in Nature Medicine claimed to show that Covid-19 “is not a laboratory construct or purposefully manipulated virus” but instead had a natural origin. This idea was then promoted by traditional and social media outlets, with dissenting viewpoints castigated as anti-scientific and conspiratorial — and even racist — misinformation. There was a scientific consensus, the claim went, about where the virus came from.

It soon became clear — in part through old-fashioned investigative journalism — that there was, in fact, no scientific consensus here at all, nor has there been one since. An intelligence-community review ordered by President Biden in 2021 failed to resolve the matter. Some intelligence agencies have assessed that the virus likely had a natural origin, while the FBI, as well as the Department of Energy, have voiced support for the lab-leak theory. Even worse, it was revealed — once again through investigative journalism — that some experts, including two of the authors of the Nature Medicine paper, may have conspired to create the false appearance of consensus, despite the fact that even some of those very experts had privately questioned whether the pandemic had a natural origin.

Today, media coverage of both issues — Covid origins and masks — though far from perfect, is considerably more balanced. News stories will often acknowledge that, while the majority of scientific and governmental institutions remain persuaded that a natural origin is the most likely explanation of the pandemic and that high-quality face coverings are effective public health tools, not all experts agree, and the issues are complex. In the case of Covid origins, it is openly admitted that the question may never be resolved. (So much for the consensus!) On these issues, at least, we are a long way from pandemic-era accusations of sexist and racist misinformation reinforced by media coverage and external fact-checking.

What changed? And why did it take so long? Why did the public have to endure years not only of bewildering explanations for shifting expert opinion on matters of great public concern but also of media characterizations of divergent viewpoints as false, pernicious, and ill-motivated — rather than, say, unsubstantiated, minoritarian, merely conjectural, or even misguided? The explanation is not that we now know more, and so media coverage can draw on deeper and better-established reservoirs of expert knowledge. On the contrary, media coverage today is more balanced on these issues precisely because it is more accurate about what we don’t know — and in many cases never really knew.

A better explanation is that, during the pandemic, media outlets — with some notable and telling exceptions — were more preoccupied with policing public discourse than with ensuring that they were accurate and discriminating in their own reporting. This allowed them to be, in effect, taken in by their expert sources, when they should have themselves striven to be reliable sources of information for the public. To do so effectively, however, would have required not only more rigorous fact-checking of the old variety but also assuming a more adversarial posture toward public experts and governmental authorities.

Far from being uniquely bad instances of mistakes or misbehavior, these pandemic examples illustrate how important the traditional goal of adversarial journalism, buttressed with scrupulous fact-checking, remains in the current media and political environment. They also show why the new discourse policing can undermine both.

Keep reading our Summer 2024 issue

Checking What?

Part of the problem is that the kinds of “facts” in question, when it comes to controversial issues of public concern, are often not straightforward matters of fact at all. One reason is that the kind of knowledge at issue may be limited, partial, contested, or require drawing on a variety of domains of expertise. As a result, there can be clashing expert opinions and rival standards of evidence, with differing value judgments coming into play.

Indeed, when the stakes are not simply getting the facts right but also making the right policy decisions — when lives and livelihoods hang in the balance — expert disagreements can become difficult, if not impossible, to disentangle from political disagreements over whether and to what extent a set of policy interventions is called for. Moreover, what prominent experts say about what they know can influence what people actually do. This creates a strong temptation for people in power to finesse the facts to maximize the chance of getting the policy outcome they want.

Here, again, Covid-era controversies are illustrative, though not unique.

During the pandemic, public-facing experts frequently misrepresented the state of knowledge — often motivated by a desire to prevent uncertainty or expert disagreement from getting weaponized or amplified in the digital media ecosystem. In some cases, such as with masking, they also deliberately conflated questions of scientific fact with normative questions about what to do, surreptitiously smuggling in the epistemic authority of science to rationalize what was, in fact, a policy decision.

Anthony Fauci later admitted as much, claiming that the real reason the government had initially cautioned against wearing masks was to keep the public from stockpiling them when healthcare workers needed them. Thus, what was in effect a paternalistic communications strategy on the part of public experts got interpreted and reinforced as factual by both traditional media coverage and the new style of fact-checking. To this day, public experts and the media maintain that the facts about masking were never really in question — that the experts had only changed their minds about what should be communicated to the public.

Yet even this post hoc rationalization was a misrepresentation of the science. In reality, the science of masking was neither uncontested nor clear before or after the policy guidance changed. Apparently — and disturbingly — public experts and the media are more comfortable with the idea of rationalizing expert decisions on the basis of overt paternalism than with trying honestly to characterize the nature and complexity of decision-making under conditions of uncertainty.

The sad irony, of course, is that the strategy completely backfired, hampering political debate and contributing to a deep and enduring polarization and distrust — not only around masks but also public health interventions in general. And media outlets, seeing their primary role as policing discourse about the pandemic, further eroded what little public trust they had before.

In such instances, discrimination and even skepticism on the part of journalists are called for. This doesn’t mean reflexively disbelieving what experts say, much less intervening in expert disputes to play fact referee. Journalists should no more try to police experts’ discourse than the public’s. Instead, it means asking the kinds of questions journalists ought to ask of any of their sources. Do they seem honest and reliable? Are they qualified or well-respected by their peers? Do any of their peers disagree — and are they qualified or well-respected? Do any of them have any obvious conflicts of interest?

In the case of masks, it would have been enough for journalists simply to consult and probe a wider range of sources a bit more directly and critically. For instance, many of the scientific studies of masks — and many of the relevant experts themselves — are far more nuanced and accurate about the state of the evidence and the nature of the reasons for masking policies than the public messaging that media outlets parroted during the pandemic. Moreover, the fact that masking policies varied not only across time, but also across space — with different countries making different decisions about who should wear what kind of face coverings in which settings — should have offered a clue that we were in the realm of norms at least as much as facts.

The purpose of assuming a more adversarial posture towards expert authorities need not be (and should not be) to undermine trust in expertise. Quite the opposite: it should be to provide an accurate picture of the actual basis upon which policy decisions of momentous importance are being made. It demands that experts and the policymakers they advise take ownership of their decisions and be forthcoming about their reasons for making them. When public-facing experts and policymakers don’t, there is all the more need for journalists to fill the gap.

Too often in recent years, a preoccupation with fact-checking the public has put journalists in almost the opposite posture, vulnerable to becoming the unwitting communications department for their official sources. At its most benign, this credulity has yielded embarrassing media flip-flops; at its worst, it has undermined public trust in journalism and expertise alike.

Stop Refereeing

Advocates of the new style of journalism like to see it as continuous with venerable traditions of rigorous reporting and speaking truth to power. But while this assessment has some surface-level plausibility, it is ultimately untenable.

The consequences speak for themselves. Some researchers have tried to study the effectiveness of political fact-checking empirically, with mixed results. But such earnest efforts miss the forest for the trees. The state of our public discourse today seems about as detached from common facts as one could imagine, and it’s not getting any better. External fact-checking has not moved the needle. Nor has it improved the general quality of journalism. Meanwhile, public confidence both in mass media and in expertise has cratered.

On a more fundamental level, external fact-checking appears discontinuous with the great traditions of American journalism, in its stenographic and adversarial forms, leaving the public with a mere simulacrum of both: reporting that is adversarial in self-understanding but credulous of those in power; coverage that is stenographic in tone but without close, informative sourcing and thus lacking investigative substance. This does not leave the public with the impression of a well-functioning profession that holds itself to high standards or the powerful to account — of impartial refs calling balls and strikes. Instead, to stretch the metaphor, it’s as if the refs publicly announced which team they wanted to win and trained their eyes on the people in the stands instead.

If journalists really want to regain the public’s trust, they should stop refereeing and get back to journalism.
