Few of us stand in awe at every text message that materializes on our smartphone screen. This is a good thing, for the most part. One can hardly be expected to maintain a state of perpetual bewilderment at the technical marvels we carry around in our pockets. But had a fully charged iPhone fallen from the sky, say, sixty years ago, like the pristine Coke bottle discovered by an African tribe in the 1980 film The Gods Must Be Crazy, whoever came upon it would have been more than a little amazed. Indeed, the operations of the touch-sensitive slab would have seemed like a series of well-executed magic tricks — events that are manifestly real, but the causes of which are so effectively obscured as to produce the sensation that one is witnessing something impossible.
We would imagine that, lacking any knowledge of the causal antecedents of the device’s high-resolution animations, our mid-century iPhone wielder would have been compelled to ask how the mysterious object worked. He might even have devised a rudimentary theory, the same way a magician’s awestruck spectators grope for explanations after witnessing a seemingly impossible feat.
But is it ignorance of how the mysterious iPhone works that is the true source of this person’s wonder and curiosity? How many of us today have a better understanding of how our newest gadgets work than would our hypothetical friend from the 1950s? Yet it’s rare that we spend much time wondering what is going on within our pocket computers, or any of the various pieces of high technology we interact with every day.
Back in 2002, the authors of the National Academy of Engineering report Technically Speaking: Why All Americans Need to Know More About Technology observed that “Americans use technology with a minimal comprehension of how or why it works or the implications of its use or even where it comes from.” The danger, they argued, is that, given our lack of comprehension, we “are poorly equipped to recognize, let alone ponder or address, the challenges technology poses or the problems it could solve.”
It is certainly true that we might be missing out on some important conversations about the future of the Internet and the like. If the recent controversies over NSA data collection prove anything, it is that there are real political costs to ignoring basic technical questions about the devices we routinely use. But there are broader issues at play when it comes to our easy technological ignorance. Thanks to the abundance of sleek technologies that mediate our lives, the everyday environment of most Americans is filled with mystery.
We are used to telling ourselves the opposite: that, through the march of scientific progress and technical expertise, we’re continuously increasing our knowledge of our surroundings. This belief is surely true in some important respects. But our failure to be more probing about the inscrutable gadgets around us is perhaps the clearest evidence that our appetite for satisfying explanations, and our ability to discover them, may not be as strong as we think. This state of affairs should strike us as more than merely curious — especially since the skills required to seek out relevant information, evaluate competing theories, and make informed judgments about complex issues are only becoming more critical.
It is almost obligatory that any discussion of this kind begin with the writer Arthur C. Clarke’s third law of prediction: “Any sufficiently advanced technology is indistinguishable from magic.” In illustrating this idea, Clarke distinguishes between two kinds of technology. The first includes machines like the steam turbine and the helicopter, which could at least have been comprehended by the inventors and tinkerers of previous ages, like Benjamin Franklin, Leonardo da Vinci, or Archimedes, to borrow a few of Clarke’s examples. The second includes devices like the computer, the television, and the nuclear reactor. These, Clarke argues, would have been baffling to even the most learned engineers of the past, because “whatever his degree of education or intelligence, he would not possess the mental framework that could accommodate electron beams, transistors, atomic fission, wave guides, and cathode-ray tubes.” When presented with an explanation of how uranium-235 is used to create a fission chain reaction in an atomic bomb, a pre-twentieth-century scientist, “no matter how farsighted and imaginative,” Clarke writes, “would have said: ‘What utter nonsense! That’s magic, not science.’”
It is hard to imagine Benjamin Franklin crying “magic!” in a crowded lecture hall, but Clarke is at least right that scientists of yore would have experienced a sensation of wonder of the sort that we often associate with magic. By wonder, I mean the feeling of astonishment that drives us to seek out explanations. And while the word might have a somewhat fantastical connotation, it has been a serious topic of investigation since at least the time of Plato. In the Theaetetus, Plato has Socrates explain that “this experience is very much a philosopher’s, that of wondering. For nothing else is the beginning (principle) of philosophy than this.” Aristotle, in the Metaphysics, echoes Plato when he claims that “it is owing to their wonder that men both now begin and at first began to philosophize.” René Descartes, in The Passions of the Soul, identifies wonder as “the first of all passions.” And in his posthumously published essay The History of Astronomy, Adam Smith describes wonder as a sentiment “which arises from an unusual succession of things…. Upon the clear discovery of a connecting chain of intermediate events, it vanishes altogether.” It is a sentiment that is essential to the enterprise of rational inquiry, as it alerts us to our ignorance and makes that ignorance uncomfortable.
In previous centuries, the ability of mysterious devices to provoke wonder in audiences was exploited for both educational and theatrical purposes by scientists and inventors. The painter Joseph Wright of Derby famously captured the spirit of this practice in such works as “A Philosopher Lecturing on the Orrery” (ca. 1766) and “An Experiment on a Bird in the Air Pump” (1768), each of which shows a magician-like scientist thrilling and educating his audience with some incredible invention. In the nineteenth century, natural philosophers used so-called “philosophical toys” — most famously, the kaleidoscope, invented in the 1810s — to dazzle the eye while illustrating the limits of human perception and inviting audiences to understand the world more deeply. During the Victorian era, London’s Royal Polytechnic Institution became a premier venue for the theatrical display of new inventions and scientific discoveries.
Perhaps the greatest practitioner of the spectacle-driven science lecture was John Henry Pepper. Today he is best known for the “Pepper’s ghost” illusion, a method (in fact originated by the engineer Henry Dircks) for producing what appear to be lifelike ghosts onstage. As historian J. A. Secord notes, the “Ghost Show” at the Royal Polytechnic Institution “was an integral part of a wider attempt by Pepper to inculcate a sense of rational wonder.” The goal of such displays, in other words, was to present pieces of technology, or demonstrations of scientific ideas, that would provoke the kind of wonder that motivates rational inquiry. English scientist Charles Wheatstone, writing in 1827 about the kaleidophone, argued that “the exhibition of striking experiments induces the observer to investigate their causes with additional interest.”
Contrast the lectures at the Royal Polytechnic with Apple’s meticulously stage-managed product launches. To be sure, Steve Jobs’s presentations carried no high-minded goal of generating “rational wonder.” How the new iPad worked was, at best, beside the point and, at worst, none of our business. But in many ways, Jobs’s routines can be seen as descendants of those of Pepper and his contemporaries. One glaring difference between the two, however, is that we are more likely to complain about the shortcomings of the products demonstrated at an Apple launch event — “what do you mean, it doesn’t support Flash?” — than we are to sit dumbstruck at their very existence. This habit of ignoring the awesomeness of technology is what the comedian Louis C. K. pokes fun at in a well-known routine deriding people who complain when, for example, their phones take too long to load a Web page: “Could you give it a second? It’s going to space!” And so, contrary to Clarke’s assertion, it seems that some of the most advanced technologies of today are regularly distinguished from magic. It is worth considering why this is.
The most obvious explanation is that we have simply grown accustomed to sophisticated gadgetry. In a 2011 Wall Street Journal column about the much-touted “magical” properties of the iPad, Virginia Postrel points out our lack of astonishment with pedestrian consumer products, be they “pencil or pencil skirt.” Though few of us could describe in detail how these items are made, “we don’t notice their magic — or the wonder of electricity or eyeglasses, anesthesia or aspirin — only because we’re used to them.” Any American born before the end of the Cold War has seen computers gradually shrink in size, become simpler to operate, and multiply in number. By the time the current generation of smartphones hit the market, most of us were acclimated to an environment full of easy-to-use but difficult-to-understand gadgets, and so a well-designed phone that could send e-mail and load Web pages, while certainly impressive, wasn’t exactly baffling.
The argument that wonder arises from newness more than from ignorance fits with another description of wonder offered by Adam Smith — as a moment when “the memory cannot, from all its stores, cast up any image that nearly resembles” what we are seeing before us. It also conforms to Descartes’s idea that when “some object surprises us and we find it novel, or very different from what we formerly knew or from what we supposed it ought to be, this causes us to wonder and to be astonished at it.”
But familiarity alone doesn’t explain our blunted sense of wonder, for it is not clear that the feeling of astonishment necessarily goes away when hard-to-explain events become more common. Having spent quite a bit of time with some eminently skilled and knowledgeable sleight-of-hand artists — and having been a member of this obsessive subculture for much of my life — I can attest that even the most seasoned magicians, many of whom see more magic in a week than most people will see in their entire lives, still experience the sensation of being fooled. Most of them love nothing more. There is an entire category of tricks that are designed to be “magician-foolers.” What’s more, some tricks are so visually powerful that they can generate a sense of amazement even when one knows how they work. This is sometimes what magicians mean when they comment to one another that a certain trick “looks like real magic.” These experiences are akin to (albeit on a different scale from) those of an astronomer who, despite his or her scientific knowledge of the universe, can’t help but gaze up at the stars in amazement. To say that we have gotten used to events we do not understand does not fully account for why we do not wonder at them.
A richer explanation of our indifference to the technological mysteries of everyday life requires that we consider two psychological tendencies that influence how we understand and interact with our surroundings. First, more than merely having gotten used to our devices, we have come to treat the behavior of advanced technologies as a basic fact about how the world works, just as we accept that letting go of a butter knife during breakfast will cause the utensil to fall to the table. When the knife drops, even someone without any training in physics generally does not ask how this happens. It seems we are inclined to treat many of the processes carried out by our devices the same way, accepting them simply as features of the physical world. The fact that double-clicking an icon on our screen causes a Web browser to open does not demand an immediate explanation any more than do the effects of gravity.
In the discussion of causation in his Enquiry Concerning Human Understanding, David Hume argues that any expectations we might have about the effects of a given event — the “necessary connexion” of two events, as he puts it — owe their existence to our past experience, and thus are based on “custom or habit.” We know that the knife will fall when dropped from the air because we have repeatedly observed the association of these two kinds of events and learned what to expect. Hume continues:
It appears, then, that this idea of a necessary connexion among events arises from a number of similar instances which occur of the constant conjunction of these events…. But there is nothing in a number of instances, different from every single instance, which is supposed to be exactly similar; except only, that after a repetition of similar instances, the mind is carried by habit, upon the appearance of one event, to expect its usual attendant, and to believe that it will exist.
Hume may even understate our capacity to infer causal connections from past observation. In a 2005 paper, psychologists Nick Chater and Mike Oaksford suggest that, in some cases, the “repetition of similar instances” that Hume describes is not required, since “a single observation may be enough to establish a causal link.” They speculate that such “one-shot” learning is not only possible but common. Among other things, this kind of learning would help to explain why “children appear to be able to learn extremely rapidly and effectively about the causal structure in an astoundingly complex physical and social world.”
For Hume, all causal understanding is learned in more or less the same way, whether an event results from “the simple qualities of objects, without any secret structure of parts” (dropping a butter knife), or whether it “is supposed to depend upon an intricate machinery or secret structure of parts” (launching a Web browser). This last observation was driven home rather effectively by a widely viewed 2011 YouTube video of a one-year-old girl attempting to manipulate the images in a print magazine with the poking and swiping gestures she had learned on a tablet screen. In her brief experience of the physical world, the baby in the video came to learn that some things respond like iPads when you touch them, the same way some things make a loud noise when you drop them and others make a mess when you spill them. These are all features of her world that do not cry out for investigation.
Of course, we all recognize that there is a fundamental difference between the behavior of touchscreen icons and that of colliding billiard balls. But the fact that, on a very basic level, we are inclined to treat these two cause-and-effect relationships in roughly the same way might have an anesthetizing effect on our inquisitiveness.
There is a second tendency that helps to explain why we are so comfortable with the mysteries presented by many of our gadgets. In many cases, the reason we do not demand explanations is that we think we already have them. This form of overconfidence has been dubbed the “illusion of explanatory depth” by psychologists Frank Keil and Leonid Rozenblit. As they explain in a 2002 paper in the journal Cognitive Science, not only do “people feel they understand complex phenomena with far greater precision, coherence, and depth than they really do,” but “the illusion is far stronger for explanatory knowledge than many other kinds of knowledge, such as that for facts, procedures or narratives.”
Keil and Rozenblit find that there is “a strong illusion of explanatory depth with one set of causally complex systems: devices.” They reached this conclusion through experiments in which participants were asked to evaluate their level of understanding of a number of common devices, such as a zipper, a quartz watch, and a flush toilet. The researchers then asked the participants to provide step-by-step explanations of the devices, and to answer diagnostic questions about them. Next, the participants were provided with an expert description of how each device worked. Across a number of different populations, participants’ self-assessments of their understanding were dramatically lower after the experiment than before. When forced to consider in explicit and detailed terms how these devices actually worked, the participants soon came to realize that their confidence in what they understood was unjustified.
The authors speculate that the illusion of explanatory depth “might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.” Far from being an unalloyed shortcoming, the illusion may act as a valuable check on our feeling of wonder. Without such a brake on our impulse to question and meddle, it might be a challenge to get out of the house in the morning, as everything from your alarm clock to your electric toothbrush to your television would inspire an all-out investigation. If Keil and Rozenblit are right, our tendency to overestimate how firmly we grasp complicated technologies is an important tool for coping with the complexity of the world.
One finding of Keil and Rozenblit’s that might complicate this picture is that the “ratio of visible to hidden parts is the best predictor of overconfidence for an item.” In other words, if some of the mechanics of a device are clearly visible, it contributes to our sense of knowing how that thing works. But the design of many of today’s most popular gadgets shows a distinct preference for obscuring their inner workings. As Apple’s longtime design chief Jonathan Ive has said, his company’s job is “to try to solve tough, difficult problems, but we do not make the complexity of the problem apparent in its resolution.” Matthew B. Crawford described this trend in these pages as “creeping concealedness” (“Shop Class as Soulcraft,” Summer 2006). Keil and Rozenblit’s finding would seem to suggest that the illusion of understanding shouldn’t be as strong for smartphones and digital music players, whose innards are slyly hidden, as it is for zippers, for instance, or lawnmower engines.
But whatever physical transparency today’s gadgets lack, they gain a faux transparency from the affected manner in which we talk about technology. The culture that has sprung up around the devices of the last two decades has created a way of discussing very technical products and services that makes possible a perhaps more powerful illusion of understanding than mere visual transparency does. Indeed, by adopting the language of Silicon Valley and mainstream technology journalism, even the most uninformed enthusiast can talk like an aficionado. Read a popular technology blog or magazine and you’ll notice that a given device doesn’t have a price but a “price point”; it doesn’t have a shape or a size, as most solid objects do, but a “form factor”; its buttons do not merely feel a certain way when you press them, they provide “tactile feedback”; its software doesn’t just look pleasing, it has an attractive “skin.” Those who self-identify as “early adopters,” meanwhile, pay close attention to the date when a new product “ships.” And while these terms of art serve to dress up otherwise banal observations (“it has buttons you can push”), it’s often less clear what work is done by qualifiers like “native,” “social,” and “2.0.”
Even the word technology has been repurposed. Where it once described a broad concept encompassing any technical method through which human beings harness natural forces in pursuit of a given end, in today’s parlance the term tech has come to refer to something vaguely related to electronics but much more difficult to pin down. As journalist Nathan Heller recently observed in The New Yorker, “Tech today means anything about computers, the Internet, digital media, social media, smartphones, electronic data, crowd-funding, or new business design.”
Buzzwords, of course, have always been alluring to non-experts eager to signal their shrewdness and sophistication. But it is striking how effectively technology companies have convinced consumers to adopt proudly what is essentially marketing jargon. And once one acquires a working vocabulary of such tech genteelisms, as well as a rough understanding of the basic functions of a device’s main components, one is able to participate in any number of seemingly technical conversations — about, say, a digital camera’s megapixel count, a flat-screen television’s refresh rate, a computer’s processing power, or the pixel density on a smartphone’s screen. Meanwhile, deeper explanations of how these devices actually do what they do are often relegated to sidebar articles, novelty websites like HowStuffWorks.com, or hokey television shows like the Science Channel’s How It’s Made.
This inflated, abstract way of talking, which marks every reader of Gizmodo or Wired as a Silicon Valley insider, only feeds the illusion that we understand our gadgets far better than we really do. As Adam Smith writes in The History of Astronomy, just as the “clear discovery of a connecting chain of intermediate events” dispels wonder, partial explanations that seem to “give some coherence to the appearances of nature” can diminish our eagerness to investigate.
Once recognized, this weakness for claiming ownership of explanatory knowledge we do not possess can be seen everywhere. For instance, an assortment of glossy cultural products — from TED Talks to breezy popular science books — has for years pleased audiences by delivering the intoxicating sensation of theoretical understanding, often (although not always) without much explanatory depth.
As in tech punditry, many of our political debates are conducted at such a high level of abstraction that simple, nuts-and-bolts questions about matters of policy often go unasked. According to a study published in 2013 in the journal Psychological Science, this failure to consider basic causal explanations for the effects of a given public policy can lead to an illusion of explanatory depth that has significant effects on political judgment. In a series of experiments based on those of Keil and Rozenblit, the authors found that people who were asked to provide mechanistic explanations for certain policies — and thus forced to confront their lack of explanatory understanding — became more moderate in their political attitudes as a result. At the same time, merely asking people to list their reasons for supporting a policy had no effect on the intensity of their political positions, presumably because one needn’t consider the basic workings of a single-payer healthcare system or a national flat tax (two examples from the study) in order to marshal abstract reasons for supporting or opposing such reforms.
Our willingness to accept the unexplained events of everyday life seems to demand a response. Some have pointed to this pervasive unfamiliarity with our technologically rich environment in calling for greater emphasis on science, technology, engineering, and math (STEM) education in American schools. Bill Gates and tech theorist Douglas Rushkoff, among others, have stressed the importance of learning the basics of computer programming. As Brendan I. Koerner puts it in a recent article in Wired, schools should encourage fluency in computer languages rather than foreign languages “because it is code, not Mandarin, that will be the true lingua franca of the future.”
But casting the issue of technological ignorance as a problem of education is naïve at best. It might be true that increasing the number of Americans pursuing STEM degrees or learning to develop apps makes sense as a matter of economic policy, but it doesn’t come close to addressing the difficulties presented by a world that is growing more mysterious by the day. By some accounts, we’re approaching a point where no single human being can fully understand many of our most complex technologies. As computer scientist W. Daniel Hillis argues, “In an Internet-connected world, it is almost impossible to keep track of how systems actually function.”
This trend toward unintelligible complexity is not unique to technology. The ever-growing body of specialized scientific knowledge, in fields from genetics to theoretical physics, has left us with a mass of knotty theories that remain incomprehensible to laypeople, and in some cases even to experts. Writing about the current state of science in his 2013 book Average Is Over, economist Tyler Cowen goes so far as to argue that “we simply may have reached the point in some key scientific areas where we are working with levels of explanation that our human brains — even those of Nobel laureates — cannot handle.” Cowen foresees a future where science will “look more like religion and magic because of its growing inscrutability. The working parts will be hidden, much as an iPhone functions without showing you its principles of operation.” The opacity of our world, in other words, isn’t a consequence only of our failure to look for satisfactory explanations. We may already live in a time where an accurate theoretical grasp of the things we encounter every day is no longer available even to the most determined generalist.
In such an epistemically opaque environment, the challenge we face is not eliminating our ignorance but managing it. Much of this project will involve developing the skills and institutions that will enable us to choose among experts and to sort good explanations from bad ones despite our hopelessly incomplete knowledge. It has long been a basic fact of life in a liberal democracy that average citizens must choose which experts to trust and which theories to endorse, whether in picking a motorcycle mechanic, a presidential candidate, or a preschool. As the principles underlying the world around us become more obscure, we will only become more reliant on experts in making even minor choices. In turn, our need for sound strategies and reliable institutions for deciding between conflicting technical views will become more acute. Before we can develop the tools necessary to negotiate such a world, we will need to better appreciate the difference between a mundane occurrence and one that should make us wonder.