The coronavirus pandemic has brought onto the center stage of public debate our deep, although often unarticulated, disagreements about the nature of scientific knowledge and the authority of scientific experts. Seeking insights on these questions, we asked former New Atlantis associate editor M. Anthony Mills, who has written widely on philosophy, science, and expertise, to reach out to British sociologist of science Harry Collins, author of many books, including Are We All Scientific Experts Now? (2014) and Why Democracies Need Science (2017). This is their correspondence.
M. Anthony Mills: Harry, you are one of the leading figures in the field of sociology of science, and a founder of what is known as the sociology of scientific knowledge. Why don’t we begin with the basics: What is the sociology of scientific knowledge? And what is distinctive about its approach to understanding science?
Harry Collins: The sociology of knowledge starts with the obvious but remarkably little-understood idea that nearly everything we know comes from the groups and locations in which we are brought up and, to a lesser extent, immersed in later life. If we are brought up “here,” we will believe in witches but not in mortgages; if we are brought up “there,” we will believe in mortgages but not in witches. This is such a powerful determinant of what we know that we don’t usually even notice it — like a fish not understanding that it swims in water. A good way of reminding oneself of these facts about our cognitive lives is to consider how we acquire our native language. How is it that you came to be, say, an English speaker rather than a speaker of any of the other roughly 6,500 natural languages in the world? How did you come to put the verb in the middle of the sentence? It’s just the water you swim in!
Most of the views we hold dear are like this, but it is hard to grasp because we concentrate on our choices, not the restricted envelope within which we make them. So you know that with enough effort you can learn other languages in later life, but only a few and not with real fluency unless you choose to immerse yourself afresh in a different society. Try speaking a sentence in one of those 6,500 languages you don’t know. Though it might seem counterintuitive at first, this is the right way to understand how you form most of your beliefs.
For a long time, it was thought that an exception to the sociology of knowledge came with scientific knowledge: acquiring scientific knowledge was not like acquiring a language; it was true for everyone everywhere and the truth of it could be revealed by looking to see which experiments and observations were repeatable and which were not. The sociology of scientific knowledge started to question this exceptionalism. Historically, I think its trigger was the loosening up of all kinds of thinking that came with the 1960s and, in particular, Thomas Kuhn’s book The Structure of Scientific Revolutions, which famously argued that science progresses through epochs governed by “paradigms,” separated by revolutions in our scientific principles and related changes in what counts as a meaningful experiment. Suddenly, even science turned out to have a cultural dimension. Kuhn’s ideas were anticipated by Ludwik Fleck in the 1930s, but he was not well known, and many of us realized that a deeper philosophical grounding could be found in the later work of the philosopher Ludwig Wittgenstein. But the crucial point was that whole scientific cultures could change — the icon was the switch from a Newtonian to an Einsteinian universe. Because of this, science could no longer provide a quasi-logical, absolutist escape from the dizzying consequence of the sociology of knowledge; even science was relative to culture.
My own contribution to this involved the very close study of how scientists made knowledge in real time. I found that repeating experiments depended on the transfer of tacit knowledge in personal interaction — a kind of local “socialization” — very much in the way that one learned fluency in a language. Because there is no test for whether a language or a set of scientific skills has been successfully acquired except speaking fluently or getting the right experimental result, replication could not be used as a test for knowledge in any straightforward way. In other words, you had to know what the correct outcome should be before you could know whether the experiment had been competently performed — but this is impossible when the correct outcome is what is in question. I published these results in 1975 and in my 1985/92 book Changing Order. I called it the “experimenter’s regress.”
Mills: I want to come back to this methodological issue — the idea of studying science the way sociologists study any other cultural activity — but before doing that, let me ask: How did you personally get into the sociology of scientific knowledge, and specifically into gravitational wave physics, the area that you are perhaps best known for?
Collins: I studied physics and maths at (high) school before turning to sociology, and when I found myself needing a dissertation topic at the end of my master’s degree I decided to go back into science laboratories and stumbled on people trying to learn how to build a new kind of laser that mostly wouldn’t work: the TEA laser. I had encountered Wittgenstein’s later philosophy as an undergraduate, so I was ready to contrast the existing approach to studying information diffusion in science (through, say, networks of journal citations) with something resembling fluency acquisition: the “enculturational model.” I showed that if you didn’t spend time with successful TEA laser builders but just read formal descriptions of their work, it was like trying to learn a spoken language from dictionaries and grammar primers alone, and you wouldn’t become a “fluent” laser builder. Later I wanted to compare this with other fields where there was more disagreement, and one place this took me was to the disputatious field of gravitational wave physics.
Mills: On one level, what you describe might sound like an eminently reasonable and rather intuitive way to go about trying to understand how science (or anything for that matter) works: Observe how the practitioners actually operate in practice. But on another level, it might sound pretty radical, since one might think that scientific knowledge is the kind of thing — perhaps the preeminent example of the kind of thing — that one acquires through careful, rational reflection on empirical evidence, in sharp contrast to the way one acquires a native language. Is this the idea that the sociology of scientific knowledge is challenging? And is that why it has been controversial? Its critics have accused it of attacking science — of being part of a broader movement, often associated with postmodernism or relativism, which, its critics say, seeks to undermine science, expertise, and reason generally.
Collins: To do good work in the sociology of scientific knowledge it was no good trying to explain why scientists believed “this” rather than “that” by saying that “this” was true and “that” was false. This would be circular. Imagine you wanted to explain the state of religion in Ireland: it would be no good saying that people chose Protestantism because it was easier for Protestants to get jobs, whereas people chose Catholicism because it was true. But some philosophers really did argue that scientific truth was self-explanatory — and that the only thing that needed sociological explanation was why science sometimes generates false beliefs. What we realized was that to do things properly you had to adopt a relativistic methodology where you forgot about what scientists count as true and false. This happened automatically when studying contemporaneous scientific controversies: it wasn’t possible for either you or the scientists to know who was going to turn out to be right while the controversy was live and scientists were still fighting it out. As I put it, you have to act as though “the natural world has a small or non-existent role in the construction of scientific knowledge.” As it was, in the metaphor I used in my 1975 paper and in Changing Order, previous analysts of science tended to look at the ship after it was already in the bottle with the glue dried, instead of examining how it got into the bottle in the first place.
Sciences of all kinds tend to get confused about the relationship between their methodologies and their ontologies — that is, the way they approach their subject matter versus what actually exists. It took me from 1975 to 1981 to work out that, philosophically, I was only deeply committed to relativism as a methodology (which is why I said “act as though”). But I was still troubled by what the experimenter’s regress meant for scientific practice, even though I argued that replication was central to science.
Sociology of scientific knowledge caused a huge fuss because its relativism seemed to put science’s ability to stand above the sociology of knowledge in doubt — it seemed to challenge the idea that science is exceptional. At best, the methodology had uncovered uncertainty where before there had been a kind of “magical” certainty surrounding scientific truth. When one investigated science in real time, or with the right kind of historical sensibility, one found that things were never as clear as they were in the retrospective myths. One of my books, The Golem, which I coauthored with Trevor Pinch, described a series of experiments in detail, including the famous Michelson-Morley experiment, which the physics textbooks claim to be a definitive proof of Einstein’s theories. It became a focus of attacks in what became known as “the science wars” of the 1990s. The science warriors were quite understandably worried about the new “dark age” that could result from the relativization of scientific knowledge. But instead of thinking seriously about the new understanding of science that was emerging from the sociology of scientific knowledge, they started a witch hunt in the attempt to turn back the clock and pretend that nothing had changed. Instead, they should have acted as scientists, by taking the new detailed understandings of the scientific process that were being revealed as their problem as much as ours.
Mills: In recent years, you have engendered controversy of another sort in academic circles, by prominently defending scientific expertise, and calling on your fellow sociologists of science to do the same. How did you, as a practitioner and founder of the sociology of scientific knowledge, come to be a defender of expertise?
Collins: I was one of the leaders of the new relativist methodology for the historical, philosophical, and sociological analysis of science. As it turned out, the consequence was a much richer understanding of what made scientific communities reach one conclusion rather than another. Scientific knowledge had been thought of as a high mountain standing above the plain of ordinary knowledge and immune from the mundane influences of the sociology of knowledge; now it was being leveled down, and a space had been created for mundane (for example, social or political) influences to filter back in, because we were showing that science could never be as exact as any philosophical “logic” of science had claimed. Some colleagues did very good work revealing the outside influences even on established bits of science. The trouble was that although the science-warrior antagonists never came up with sensible counter-arguments, some of the enthusiasm for the new work did begin to justify their fears. Science was being leveled all the way down to the plain of the ordinary and its findings equated with political interests. But this did not seem to fit the world I lived in; rather, it was just a more complicated world than had once been thought.
As it happens, my experience in the gravitational wave community reinforced my view. The case study was showing just how hard and complicated science was: the scientists were engaged in an almost impossible and continually failing enterprise beset by disputes, but they were nevertheless driven by a desire to find the truth, attainable or not. Gravitational wave physicists, if they just wanted to follow their self-interest, ought to have got out of the field and done something else. But they persisted in trying to find gravitational waves even though it looked increasingly unlikely to happen until, at best, after they were long dead.
The immediate danger of the over-interpretation of the consequences of the sociology of scientific knowledge by what I’ll call “the democratizers” was the encouragement it gave to ordinary people to dismiss scientific expertise and prefer their own judgements, leading to the dissolution of the very idea of technical expertise. This worry became frighteningly concrete with the popular revolt in the late 1990s against the measles, mumps, and rubella (MMR) vaccine. The democratizers sided with the parents who claimed that MMR had caused autism in their children even though there was no evidence for such a link and all the epidemiological evidence went the other way; something was going terribly wrong, and it had potentially deadly consequences.
I thought we could resolve this problem in a quiet way that would appeal to my academic community, including the democratizers, by shifting our attention from how scientists establish scientific truth — which we know can take a long time, especially when there is determined opposition within the community of researchers, as in the historical example of the theory of relativity, or, more recently, gravitational wave physics — to the analysis of expertise. One can establish who is an expert in the short term, even if a scientific consensus has not yet been reached, and that’s important because science-related policy decisions have to be made in the short term. I wrote a paper with Robert Evans in 2002, called “The Third Wave of Science Studies,” explaining how this would work.
The “First Wave,” we said, was the approach to scientific discovery that took the exceptionalism of science for granted, tried to find the “logic” of science, and asked how scientific communities are “nurtured.” This kept science on top of its mountain. The “Second Wave” was what had been going on since the 1960s and 70s, treating science as a cultural community; it tended to level things down. The “Third Wave” would concentrate on Studies of Expertise and Experience (SEE), and would show why scientific and other kinds of expertise were still central to technical judgement even though they were a bit more ordinary-looking than the First Wave had implied. To our astonishment, this paper was treated by some powerful colleagues as a betrayal. We were expecting some mild interest and appreciation, but, ironically, this negative reaction caused us to develop our work on expertise in far more depth and breadth.
Mills: Okay, so how do you defend it — what is your account of scientific expertise?
Collins: Now that we were thinking hard about expertise, we found that there was a lot to say. The academic study of expertise was traditionally shared between psychologists and philosophers, but we approached it from the sociology of knowledge. We defined expertise as successful socialization within an expert community, whether it be the acquisition of a language or building a working TEA laser. We developed a brand-new notion of “interactional expertise,” which we distinguished from “contributory” expertise.
Interactional expertise is the ability to understand the world of practice of experts by acquiring their specialist spoken language but without actually practicing. It’s impossible to understand specialist division of labor, peer review, the crucial technical contributions of managers, and various other features of co-operation in society as a whole, without the idea of interactional expertise. But it can be hard to acquire interactional expertise, since it requires deep immersion within an expert community. We pioneered the use of what we call the “Imitation Game” — a kind of Turing test for humans to test for levels of interactional expertise.
We also came up with the new idea of “ubiquitous expertise”: the kinds of expertise that all citizens of a society have, such as natural language speaking, understanding the difference between right and wrong, clean and dirty, how much space to leave between yourself and others in various circumstances, and so on. Psychological and philosophical analyses of expertise always take expertise to be associated with small elites, but all the citizens of this country or that country are experts in the ways of that country. To understand the continued failings of artificial intelligence, for instance, you need to see that the really difficult kinds of expertise for computers to mimic are the ubiquitous ones. So in this sense ordinary citizens are experts, but this does not make every citizen a specialist expert in everything. All this led on to the construction of what we provocatively titled the “Periodic Table of Expertises,” which set out a taxonomy of different kinds and gradations of expertise. The Studies of Expertise and Experience approach and its new concepts have had more take-up than we expected, even among psychologists and philosophers.
Mills: How does the Imitation Game work? Have you played it? And, if so, did you pass the test?
Collins: The Imitation Game has lots of different variants, but all are based on the Turing test. In a Turing test, a machine pretends to be a human in competition with a real human while a human judge asks them both questions and tries to tell the difference. In the Imitation Game, one kind of human tries to pass as another — for instance, someone who is not a contributory expert can try to pass as one. I passed as a gravitational wave physicist, as reported in Nature in 2006, and then again, with a more complex format, as reported in my book, Gravity’s Kiss, published in 2017. What these tests are designed to show is that one can acquire degrees of interactional expertise through sufficient immersion within an expert community, even though one is not a contributory expert.
Mills: Doesn’t your defense of scientific expertise raise the specter of the notorious “demarcation problem” — whether or how science can be differentiated from other kinds of intellectual or cultural pursuits? This is a problem that philosophers, notably Karl Popper, have struggled to solve. Are you proposing a solution?
Collins: Well, I haven’t actually focused on scientific expertise yet, only expertise in general. But in our definition, an expertise is anything that is acquired through deep immersion in a social group, and that includes astronomy and astrology, epidemiological modeling and tea-leaf reading, gravitational wave physics and confidence trickery. As you say, the really difficult question is how to justify scientific expertise as standing up above the plain of other kinds of expertise. Philosophers mostly struggled with the demarcation problem because they looked for necessary and sufficient criteria, and there are always exceptions to any proposed “logic of science.” Wittgenstein’s ideas are a better starting point: as he pointed out, a necessary-and-sufficient-type definition is not the right thing to look for in all cases.
Take his example of games. There are lots of different kinds of games and when you really consider them one by one, there don’t appear to be any clear defining criteria of what counts as a game and what doesn’t. Instead, they are linked by what Wittgenstein called a “family resemblance.” The same is true for science: once you relax the logic and start, instead, to think in terms of generally held values and procedures, the boundaries emerge even though they are not sharp. Values of this kind are distinguishing features of social groups — we call them the “formative aspirations.” The values of science — its formative aspirations — are what raise it above the plain of ubiquitous knowledge.
These formative aspirations include what were once thought of as logical imperatives for science — for instance, trying to corroborate a claim through replication of results, or being able to describe how a claim might be falsified. Under Wave 1, corroboration and falsification fought it out as rivals to be the logical key to science — “verification” versus “falsification” — but they are both part of what scientists aspire to. If you think in terms of values, the key is the aspirations of the members of a community and their institutions, not the logically demonstrable outcomes. In the case of science, this has to be how it works because the foundational aspiration is to find the truth about the natural world however long it takes. This is quite different from the short-term successes needed to fulfill political, business, and even legal interests — that’s why Wave 3 had to switch from truth to expertise.
Other scientific values include the social imperatives discussed under Wave 1, notably what the sociologist of science Robert K. Merton called “universalism.” This means that the truths that science tries to uncover should be true for everyone — there must not be a Jewish science or a Soviet science or a male science or a white science. The sociology of scientific knowledge showed that science could not accomplish a perfect universality, but it did not show that it should not have that as an aspiration. These values rest on something more fundamental, indeed so fundamental that it is rarely discussed, though we have introduced it already: science’s foundational aspiration is discovering the truth of the observable world. A lot follows from this about how science does its work. For example, scientists do everything they can to, literally, “know what they are talking about”: they immerse themselves in the observable world through observation and experiment. Since Wave 2 we know that this is not a foolproof way of discovering the truth of the matter, but it must be better than not knowing what you are talking about. Science knows its world of observables better than any other institution.
Now let us examine science as a community. What nearly everyone knows about the findings of science they know from hearsay, because ordinary people cannot actually test their knowledge to the standards of science. But what is also obvious after a bit of reflection is that the same is true of scientists themselves. No scientist can directly test any more than the tiny piece of knowledge in which they are a specialist expert, and even that testing depends on a huge body of other scientific claims that they must take on trust. So even scientists have to trust those who provide most of their scientific knowledge, and if those providers are not honest then the entire scientific enterprise would collapse. For experimental scientists this is an immediate problem with dire consequences, since acting on a false piece of information could involve a huge waste of time and money. (I did a detailed study, published in 2001, of how Western scientists came, through face-to-face interaction, to trust the results of a Russian report, on the face of it possibly dubious, on the qualities of potential interferometer mirrors grown from sapphire.) So science has to be taken on trust, but science itself would not work unless that trust was justified.
Wave 2 showed that uncovering the truth was far more complicated and beset with uncertainties and potentially hidden influences than we had thought under Wave 1, but without aspiring to find the truth, and without scientists telling the truth, it would no longer be science. Members of the scientific profession who commit deliberate fraud or mislead in other ways are simply not doing science — they do not understand the vocation of science; that’s why the discovery of such events leads to shock and horror and why they are so rare in science. You cannot say this of politics or banking or business or many, if any, other institutions. So though science may have been found to be subject to potential hidden influences, the response of scientists is to strive harder to uncover and eliminate such influences (just as the discovery that legal procedures are biased toward the rich leads to a striving to reduce the bias).
Scientists, then, aspire to find the truth about the observable world, and scientists know that in order to find the truth their own communities must be founded on truth. They may not always agree about what the truth is, but they know they must aim for it and immerse themselves in it even if they disagree. The community of scientists is what we might call a “veritocracy” — a society based on truth. That is one of the reasons that sciences like physics manage their disagreements in face-to-face meetings with tightly guarded boundaries. The internet has made it difficult to know the sources of information and whether the people behind the information share the norms of science — the norm of honesty, for instance, and the aspiration to uncover the truth. We know there are members of the scientific profession who don’t always act according to those norms, but we know they are few in number and we know that the community is healthy because of the shock and horror that greets their exposure. Science may not be the magic dust that it was thought to be under Wave 1: it isn’t a foolproof set of tightly defined procedures, but it is still craftwork with integrity — the best we can get if we want technological advice to feed into policy.
Mills: So what are the practical implications of this account? One of the key takeaways from the sociology of scientific knowledge — specifically, what you call Wave 2 — seems to have been that it’s very hard, perhaps even impossible, to separate science from politics. Are you suggesting that we can or should?
Collins: Suppose you had said, “one of the key takeaways of the sociology of law was that it was impossible to separate legal advantage from wealth,” then the conclusion is obvious — strive to separate them better even if a complete separation is impossible. There is a pervasive view that because science cannot entirely escape from mundane influences we have to treat it as “politics by other means.” But this is a fallacy. One of the tasks of Wave 3 is to show how science and politics differ, and we are engaged in revealing some of the differences in this very conversation.
Mills: Well, let me try to make my question a little more concrete: One practical implication of the Wave 2 idea that science is intrinsically political is that we should try to increase lay participation in technical policymaking (for example, the movements known as “citizen science” or “participatory science”). Are you supportive of these efforts or do you think they are misguided?
Collins: I know from my own field studies that science is far too complicated and difficult to understand to allow for any great input from the lay public on technical matters. For instance, it took me the best part of ten years to gain enough fluency in gravitational wave physics to pass an Imitation Game, and a lot of that was learning whom to trust. That complexity is why there were such terrible misunderstandings of what became known as Climategate, the leak of emails from the University of East Anglia. Only if you understand the science and the social interactions of scientists at something like the level of interactional expertise can you understand the significance of the way scientists talk casually to each other. Up close, science always looks very messy, but “distance lends enchantment.” At a distance from the scene of scientific research, we tend to wind up with a polarized view: “these scientists are saints” or, if they’re not, then they must be “rogues.”
On the other hand, there are unqualified — for example, uncredentialed — people who can make important contributions to technically related policy matters based on their experience. For instance, farm workers may know better than scientific experts how organophosphate herbicides should be sprayed in practice (on windy days, for instance). Or consider the activists during the AIDS crisis who acquired sufficient “interactional” expertise not only to advocate for themselves but also to participate in and improve technical decision-making. These examples have encouraged some commentators to think that anyone can become an expert and to always distrust elites. But such “experience-based” experts are not “lay persons.” They are as much experts and as much elites as anyone else. The whole of society is made up of mini-societies of experts into which one becomes socialized once primary socialization has done its work. Were we not all members of various elites there would be no scientists, car mechanics, plumbers, language teachers, truck drivers, bricklayers, violinists, and so on. You don’t want your car repaired by a community of those who know what they are talking about that has been infiltrated by anonymous opinion-pushers on the internet.
To maintain the idea of expertise and to maintain the safety of expert conclusions, scientists have to guard the boundaries of their truth-making groups jealously. They have to exclude both cranks and scientific opinions that are past their “sell-by date,” even when those opinions come from highly qualified scientists. But they also have to include the views of experience-based experts who are not scientifically qualified, and so on. I think Wave 3 of science studies is needed to contribute to the understanding of how all this can and should be done right.
Mills: Okay, so putting it crudely, Wave 1 offered a technocratic model, where the public or politicians should simply defer to technical experts to make decisions about technical issues because science is supposed to be this magical truth-generating machine. Wave 2 pointed out that experts cannot be entirely neutral, however hard they try, and that science can rarely offer exact solutions in the short term. So policy cannot be dictated by science. Your view, Wave 3, wants to strike a kind of compromise or balance between these two, especially since Wave 2 has been taken by some to mean that the public should always be involved as much as possible in technical decision-making. What does this compromise look like in practice?
Collins: The crucial thing is that science aspires to be neutral and, when it is working properly, it comes closer than any other institution. The old technocratic model was unsustainable and dangerous because it rested on a kind of scientific perfectibility that did not exist, and sustaining that magic view of science always risks disillusion when difficult science in the public domain is at stake — such as pandemic science or climate science, where short-term decisions are required. Also, it was very difficult to include the experience-based expertise of unqualified persons under the technocratic model. We need social science expertise to help understand where the new boundaries should be.
The other problem with technocracy is that it implies that scientists should be making the policy decisions. They shouldn’t — that is the job of politicians. It is just that politicians have to be honest and transparent about what the scientific advice is and when they are accepting it and when they are rejecting it. In our book Why Democracies Need Science, my colleague Robert Evans and I suggested setting up committees made of scientists who understand a bit of social science and social scientists who understand a bit of natural science to reach a conclusion about the essentially sociological question of what the scientific consensus is and how strong it is in particular cases relevant to policy. We call such a committee, or set of committees, “the Owls,” because they would be wise and able to turn their heads 180 degrees from natural science all the way around to social science. When proposed policies are made public, politicians would have to react publicly to the Owls’ conclusions about the substance and strength of the scientific consensus.
Mills: Your defense of expertise starts from the worry that in trying to democratize expertise — what you call the problem of legitimacy in your writings — we’ve lost any sense of what makes experts distinctive and important. This is what you call the problem of extension — the idea that, in the extreme, anybody can be an expert. I want to play devil’s advocate a bit here. If we consider the coronavirus pandemic, for instance, we clearly have a problem of extension — with everybody feeling empowered to google their way to epidemiological expertise and challenging public health experts on every technical issue imaginable. But we also seem to have a problem of legitimacy, right? It seems plausible that a lot of the controversies during the pandemic have stemmed from the fact that members of the public feel as though a lot of policies — school closures come to mind — were made without their input or that they weren’t given adequate opportunity to participate in these decisions.
Collins: All difficult decisions in the public domain have to balance the problem of extension and the problem of legitimacy — they are in tension with one another. But given the complexities of epidemiological modeling, and given that disease transmission affects your neighbor and your neighbor’s grandmother as much as you, what would count as “adequate opportunity for public participation” in the case of something like vaccination? It seems to me that much of the rhetoric which surrounds public health treats individual choices as having no impact on anyone but the individuals making the choice. It’s a change of rhetoric that is needed. Vaccination choice is like the choice of what side of the road to drive on — it affects the life and death of others. What I have been trying to show here is that in technically complex domains, too much demand for participation should not displace the values that support the generation of scientific expertise. In the end, how to handle vaccination is a political decision but one where expert advice must play a major role and cannot be displaced by a social-media type popularity campaign or its equivalent, however politically attractive this would be.
Mills: Let me try to get at this in another and perhaps somewhat provocative way: During the pandemic, it’s become commonplace for experts and the media to talk in terms of “following the science.” Isn’t this a pretty clear example of “Wave 1”–style thinking, where the public or the politicians should simply defer to the experts? And if that’s right, then doesn’t that suggest that the work of Wave 2 has not been finished after all, that we still have to grapple with the problem of extension as well as the problem of legitimacy? In other words, does the defense of expertise today need to be waged on two fronts simultaneously?
Collins: I think the emphasis these days must be on the problem of extension because of the new attacks on expertise arising from populist approaches to political leadership — something new in my lifetime. I think the phrase “following the science” has to be understood in this new context as well as in the context of the earlier part of this interview. In the UK, Michael Gove attacked experts as a way of countering economic arguments against Brexit — and the Conservative government under Boris Johnson has succeeded all too well in discounting economic expertise when it suits them. When the pandemic hit the UK, I was fearful that we would see another attack on experts, but Johnson chose to claim he was “following the science.” He actually made various bad choices that led to the UK having the highest death rate per head in Europe, and it has never been made clear just how this related to the scientific advice, as I think it should have been. Nevertheless, I would prefer to live in a country where the leader says, “we have taken the science into account but decided that political considerations over-rule it,” than a country where the leader explicitly displaces all scientific opinion with political opinion. Replace the phrase “following the science” with “listening seriously to scientific advice and explaining our decisions in the light of it” and we have the correct position for a democracy, I believe.
As for the problem of legitimacy, I hope we are working on it with this very interview by creating a climate in which the crucial if imperfect role of scientific expertise in technological decision-making and society as a whole is better and more widely understood.
Mills: I want to press this point just a little further, since I think it’s really important. Your alternative to (or rephrasing of) “following the science” sounds eminently sensible, at least in principle. But isn’t part of the problem that, in practice, scientific experts often disagree, especially in situations where no consensus is available? How are politicians or members of the public supposed to decide which expert advice to “follow” or to “listen seriously to” in these kinds of situations? Is this where the Owls come in?
Collins: It’s a sign of health that scientific experts disagree in the short term: they are the model of how to transform disagreement into agreement in a trusting and truthful society. The Owls’ job would include setting a “grade” for the current state of scientific consensus — say, from A to E. Politicians, who have to make policy decisions in the short term, would then have to say something like, “the Owls say this is a grade E consensus, so we can’t learn too much from the science in this case,” or “the Owls say this is a grade A consensus, but I am going to overturn it for this and this reason, knowing this is a huge responsibility and I am putting my leadership on the line.” Or they could agree with the consensus, of course. Thabo Mbeki’s refusal, when he was president of South Africa, to distribute anti-retroviral drugs in his country is a perfect example of how not to do it: he said there was scientific disagreement when there was none. The Owls would have speedily cleared up that aspect of it.
Mills: Okay, final question: In your book Why Democracies Need Science, you frame your defense of expertise in a broader cultural context in which the public has lost faith in institutions of all kinds — not only those of science. And you’ve argued for the need to defend these institutions, and specifically for sociologists of science to make a more positive case for scientific expertise. But what if part of the problem is with the institutions themselves? My colleague Yuval Levin has argued that one of the problems nowadays is that we tend to think of institutions as platforms, vehicles for people to build their individual brands or advance their own interests, rather than as molds for forming people — whether it be journalists, or scientists, or students, or citizens, or whatever — who are members of particular communities with shared norms and goals. To what extent do you see this as a danger for science today, that its institutions are too often treated as platforms for individuals to build their brands or gain cultural or political influence or become rich or famous (or whatever), rather than molds for forming good scientists who can devote themselves to their craft with integrity and be good and trustworthy representatives of science?
Collins: It is certainly true that the nature of society’s foundational institutions has changed since I was a kid. I remember when UK banks were a watchword for integrity. I think the overall shift to the right in “Western” politics since the 1970s has made any institution with elements of a “public service” ethos very insecure and desperate to justify itself in terms of its economic value. And governments have encouraged the replacement of professional values with quasi-markets. I think Hayek’s argument for markets over command economies is right so long as it isn’t taken as a general rule for the foundation of societies. I remember walking around Manhattan when a light rain-shower caused the corner of every block to be populated by an umbrella-seller and thinking that a command economy could never manage that. But quasi-markets are not real markets and would have horrified Hayek because, ironically, they reproduce the institutions of the Soviet Union and have similar bureaucratic overheads.
Under today’s pressures, science is sometimes tempted to sell itself as a provider of short-term solutions to technical problems, a driver of capitalism (which it is, of course), or a kind of entertaining space-opera, instead of a seeker of long-term truth. There is pressure on individuals to act accordingly. As far as the short term is concerned, science should be defending itself as craftwork with integrity — the best you can get when it comes to understanding the observable world, and an object lesson in how to make decisions in conditions of uncertainty — with actions, immersed in honesty, that engender trust. I have tried to explain why I think that the professionalism of various foundational institutions, including science, has been eroded since I was young. It has now reached the point where the “Overton window” has shifted so far that these institutions no longer have the vocabulary for justifying themselves in any way other than their contribution as measured by the market. But science as I know it has been much less affected than most, and I think it has the driving motive and associated practices to maintain a defense against corrosive influences, so long as everyone, including the government, the people, and the scientific salesforce, understands what science really is: it is one of society’s vital wellsprings of integrity.