The day Archimedes leapt out of his bathtub and went running through the streets of Syracuse shouting “Eureka!” marked the beginning of a many-storied legend: that of the scientist or inventor overtaken by a flash of insight. One of the best-documented instances of that legend may be the one that occurred two thousand years later, and sixteen hundred miles to the northwest. In the words of the man who experienced it, recounted at a meeting of the Glasgow Archaeological Society:
It was in the Green of Glasgow. I had gone to take a walk on a fine Sabbath afternoon. I had entered the Green by the gate at the foot of Charlotte Street — had passed the old washing-house. I was thinking upon the engine at the time, and had gone as far as the Herd’s-house, when the idea came into my mind, that as steam was an elastic body it would rush into a vacuum, and if a communication was made between the cylinder and an exhausted vessel, it would rush into it, and might be there condensed without cooling the cylinder. I then saw that I must get quit of the condensed steam and injection water, if I used a jet as in Newcomen’s engine. Two ways of doing this occurred to me. First, the water might be run off by a descending pipe, if an offlet could be got at the depth of 35 or 36 feet, and any air might be extracted by a small pump; the second was to make the pump large enough to extract both water and air…. I had not walked farther than the Golf-house … when the whole thing was arranged in my mind.
The “whole thing” was James Watt’s separate condenser, the mechanism that effectively doubled the efficiency of the atmospheric steam engine invented by the ironmonger Thomas Newcomen more than fifty years before. This particular flash of insight has become so famous, in fact, that its symbolic value may be just as important as its content.
At the time of Watt’s May 1765 walk, he was a sort of house mechanic at the University of Glasgow who had been asked to repair the university’s model Newcomen engine, only to realize its fundamental inefficiency. The original atmospheric engines, first appearing around 1712, had transformed boiling water into useful work, but they did so in a decidedly counter-intuitive way, using not the expansive power of steam, but its opposite: the vacuum created inside a cylinder when steam was condensed back into water. The pressure of the atmosphere on that vacuum pushed a piston to the bottom of the cylinder, and since the piston was chained to one end of a beam, as one end was pulled down, the other, seesaw-like, was pulled up. The result was a pumping action powerful enough to remove the water from mines that were hundreds of feet below the surface.
Using the same cylinder to hold the steam and to condense it, however, required a constant cycle of heating and cooling, with most of the fuel wasted in the process. The insight that came to James Watt on his Sunday stroll — that condensing the steam in a second cylinder would allow the working cylinder to stay hot, essentially doing twice the work for the same amount of coal — was hugely important; the doubling of fuel efficiency freed steam engines from the tethers that bound them to the coal mines that provided their fuel, and allowed them to be profitably used anywhere that water needed to be pumped, and, eventually, anywhere that wheels needed to be turned. (The most common yardstick for steam engine performance, at a time when their primary function was pumping water out of mines, was the so-called “duty,” a measurement of the pounds of water raised one foot by a bushel of coal. A high-performing Newcomen-style engine typically generated a duty of between five thousand and nine thousand pounds — that is, a bushel of coal could lift that many pounds of water one foot. A 1778 Watt engine, with a separate condenser, achieved a duty of 18,900 pounds.)
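A quick back-of-the-envelope comparison (mine, not a calculation from the historical record) shows how those duty figures square with the claim that the separate condenser roughly doubled the engine's efficiency:

\[
\frac{18{,}900\ \text{lb·ft per bushel}}{9{,}000\ \text{lb·ft per bushel}} \approx 2.1
\qquad\text{and}\qquad
\frac{18{,}900\ \text{lb·ft per bushel}}{5{,}000\ \text{lb·ft per bushel}} \approx 3.8
\]

Measured against even the best Newcomen engines, in other words, the same bushel of coal did at least twice the pumping work.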
But surrounding Watt’s specific insight is the tantalizing thought that his recollection says something general and illuminating about insight itself. Those eureka moments are so central to the process of invention that it seems that they must likewise be central to understanding the revolutionary explosion in inventive activity that we associate with eighteenth-century Britain — when, in the words of the schoolboy that T. S. Ashton used to introduce his brief but essential history of the Industrial Revolution, “a wave of gadgets swept over England.”
That “wave of gadgets” was historically unprecedented, but as a matter of degree rather than kind; invention is as old as civilization itself. A twentieth-century economic historian with the Dickensian name Abbott Payson Usher has described the sequence that leads to invention regardless of time and place, breaking the process into four steps: first, awareness of an unfulfilled need; second, recognition of something contradictory or absent in existing attempts to meet the need, which Usher called an “incomplete pattern”; third, an all-at-once insight about that pattern; and finally, a process of “critical revision” during which the insight is tested, refined, and perfected.
But everyone who studies the nature of discovery and invention recognizes the critical importance of the third step, insight — so much so, in fact, that it is frequently conflated with invention itself. Yet a large and growing body of research demonstrates that receptivity to those flashes of insight is a function of thousands of hours of practice — practice that looks very similar whether the activity in question is a basketball game, a chess match, or the invention of a new kind of steam engine. It seems worth considering, then, whether the brain’s neurons “improve” with repetition, in the same way as the body’s muscles. They at least seem to do just that: the more a particular connection between nerve cells is exercised, the stronger it gets. This phenomenon was first closely scrutinized fifty years ago by a Canadian psychologist named Donald Hebb. But “Hebbian” learning — the idea that “neurons that fire together, wire together” — was difficult to observe in a complex nervous system; and even then, it was easier to observe than to explain.
In the 1970s, Eric Kandel, a neuroscientist then working at New York University, embarked on a series of experiments to study the chemical changes in the brain associated with cognition. Kandel and his colleagues demonstrated that experiences literally alter the chemistry of neurons, by producing a molecule called cyclic adenosine monophosphate, or cAMP. That molecule, in turn, triggers a cascade of chemical modifications that either promote or inhibit the synaptic response between neurons; every time the brain calculates the area of a rectangle, or sight-reads a piece of music, or conjugates a verb in a foreign language, the neurons involved chemically change, making it easier to travel the same path again. Kandel’s research hints at how repetition forms the chains that the philosopher of science Michael Polanyi called “tacit knowing,” and that James Watt called “the correct modes of reasoning.”
Kandel’s discovery, for which he received the Nobel Prize in Physiology or Medicine in 2000, was provocative, but because the experiments in question were performed on the fairly simple nervous system of Aplysia californica, a giant marine snail, and documented the speed with which the snails could “learn” to eject ink in response to predators, it may be overreaching to say that science knows that the more one practices playing the violin, or extracting cube roots, the more cAMP is produced. It’s even more of a stretch as an explanation for how one learns to sight-read a Chopin etude. Or invent a separate condenser for a steam engine.
This doesn’t mean that modern cognitive science, armed with powerful imaging tools, hasn’t discovered some intriguing candidates for an “insight generator” within the brain. A team of cognitive scientists, Mark Jung-Beeman from Northwestern and John Kounios from Drexel, has performed experiments intended to measure heightened electrical activity in portions of the brain when those “eureka” moments strike. In the experiments, subjects were asked to solve a series of puzzles and to report whether they solved them by using a systematic strategy or whether the solution came to them by way of a sudden insight.
When subjects reported solving a puzzle via a sudden flash of insight, an electroencephalograph, which picks up different frequencies of electrical activity, recorded a burst of activity in its highest frequency band: the one that cycles thirty times each second, or 30 hertz. This was expected, since this is the frequency band that researchers had previously associated with activities like recognizing the definition of a word or the outline of a car. What wasn’t expected was that the EEG picked up the burst of 30 Hz activity three-tenths of a second before a correct “insightful” answer — and did nothing before a wrong one. Even more revealing was that, simultaneous with the burst of electricity, the functional magnetic resonance imaging machine (fMRI, a kind of brain scanner) showed blood rushing to several sections of the brain’s right “emotional” hemisphere, with the heaviest flows all directed toward a single spot — the anterior superior temporal gyrus, or aSTG.
But the discovery that resonates most strongly with the account of James Watt’s flash of insight is this: Most “normal” brain activity serves to inhibit the blood flow to the aSTG. The brain is apparently evolved to do its best daydreaming only when it senses that it is safe to do so — that is, when it is relaxed. As Kounios told The New Yorker, “The relaxation phase is crucial. That’s why so many insights happen during warm showers.” Or during Sunday afternoon walks on Glasgow Green.
But even if researchers could use fMRI to map the route traveled by insight in the human brain with the same level of precision as a vascular surgeon searching for an arterial stenosis — and of course they cannot — it wouldn’t reveal anything about the historical anomaly known as the Industrial Revolution. The era of rapid innovation that began in eighteenth-century Britain depended not on the unpredictable appearance of insights in the minds of geniuses, but on a thousand other less well-documented but no less important moments of discovery, on tens of thousands of hours spent pursuing false solutions — a critical mass of inventors building on each other’s incremental steps. Watt’s separate condenser was only necessary because of a dozen earlier breakthroughs — by the mathematician Robert Hooke, by the military engineer Thomas Savery, and, of course, by Newcomen himself; it was only possible because of a hundred earlier advances, in everything from iron metallurgy to thermodynamics.
But recognizing these precursors just pushes the question one step further back: What was it about the eighteenth century, especially in Britain, that produced such a dramatic increase in the population of inventors?
The modern world’s dependence on technological innovation has drawn generations of social scientists to study the reasons that a population produces more — or fewer — inventors. Forty years ago, the Austrian-American economist Fritz Machlup approached the question in a slightly different way: Is it possible to expand the inventive work force? Can labor be diverted into the business of invention? Can management emphasize invention? Do higher salaries attract more inventors, and, if so, do they produce more innovations?
Machlup — who first popularized the idea of a “knowledge economy” — spent decades collecting data on innovation in everything from advertising to typewriter manufacture, and arrived at the counterintuitive conclusion that higher rates of compensation actually lower the quality of labor. Machlup argued that the person who prefers to do something other than inventing, and only invents under the seductive lure of more money, is likely to be less gifted than those who invent without it. This is the “vocation” argument dressed up in econometric equations: at some point, the recruits into the inventing “army” are going to reduce its average quality.
Machlup also examined the behavior of inventors as an element of what economists call input-output analysis, a map of an economy arising from the way that the output of one economic activity is the input of another: farmers selling wheat to bakers who sell bread to blacksmiths who sell ploughs back to the farmers. Harvesting, baking, and forging are “production functions”: the lines on a graph that represent one person adding value and selling it to another. In Machlup’s exercise, the supply of inventors (or inventive labor) was the key input, the production function was the transformation of such labor into a commercially useful invention, and the supply of inventions was the output.
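Put in stylized form (the notation is an illustrative gloss, not Machlup's own), the exercise treats inventive labor as the sole input to a single production function:

\[
Q_{\text{inventions}} = f\bigl(L_{\text{inventive}}\bigr)
\]

where \(L_{\text{inventive}}\) is the supply of inventive labor, \(f\) is the production function converting hours of such labor into commercially useful inventions, and \(Q_{\text{inventions}}\) is the resulting supply of inventions, itself available as an input to other economic activities.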
As always, the equation included a simplifying assumption, and in this case, it was a doozy: that one man’s labor is roughly worth the same as another’s. This particular assumption gets strained pretty quickly even in traditional input-output analysis, but it leaps right through the looking-glass when applied to the business of inventing, a fact of which Machlup was keenly aware: “a statement that five hours of Mr. Doakes’ time were the equivalent of one hour of Mr. Edison’s or two hours of Mr. Bessemer’s would sound preposterous.” Moreover, input-output analysis misses the most important factor of all, which is that successful inventors’ labor can pay for itself. But you only get the one hour of Thomas Edison’s time figuring out how to make a practical incandescent light bulb, or the one hour of Henry Bessemer’s time figuring out how to improve the production of steel, if you first have a Mr. Doakes plugging away on related work for five hours — or five hundred, or five thousand — without any reward at all. Machlup’s conclusion about the marginal productivity of the “worst” inventors says nothing about the support their existence provides to the “best.”
Even so, if the number of eureka moments can’t be doubled by recruiting twice as many inventors, or paying them higher salaries, can it be increased — was it increased in eighteenth-century Britain — by offering them a bigger investment in their own success? John Roebuck, the first investor in Watt’s separate condenser, certainly thought so. An inventor himself, Roebuck had half a dozen patents in his own name; he financed Watt’s research for years, in return for two-thirds of any profits. This was necessary for Watt’s eventual success, but not sufficient: Roebuck’s investments in inventions (his own and others’) made him bankrupt long before the separate condenser made Watt — and his new partner, the Birmingham manufacturer Matthew Boulton — rich.
As Roebuck and many other inventors have learned, you have to kiss a lot of frogs in order to find one prince, and inventors typically forgo a sizeable portion of their potential lifetime earnings searching for one. Thus, the characteristic stubbornness of inventors throughout history turns out to be fundamentally irrational. Their optimism is by any measure far greater than that found in the general population, with the result that their decision-making is, to be charitable, flawed, whether as a result of the classic confirmation bias — the tendency to overvalue data that confirms one’s original ideas — or the “sunk-cost” bias, which is another name for the folly of throwing good money after bad. A 2007 study in the Journal of Behavioral Decision Making indicated that even after reliable colleagues urge them to quit, a third of inventors will continue to invest money, and more than half will continue to invest their time.
A favorite explanation for this seeming contradiction is the work of the Austrian émigré economist and cheerleader for entrepreneurism Joseph Schumpeter, he of the “perennial gale of creative destruction.” Schumpeter drew a famous, though not perfectly clear, boundary between invention and innovation, with the former an economically irrelevant version of the latter. The heroes of his economic analysis were, in consequence, entrepreneurs, who “may be inventors just as they may be capitalists, [but] they are inventors not by nature of their function but by coincidence and vice versa.” To Schumpeter, invention preceded innovation — he characterized the process as embracing three stages: invention, commercialization, and imitation — but was otherwise insignificant.
However, even if Schumpeter was correct that the number of frogs isn’t nearly as important as the number of princesses willing to kiss them, there is no plausible reason to believe that eighteenth-century Britain saw a sudden increase in entrepreneurial investors comparable to the increase in the number of innovations in which they could invest. But if the great burst of invention for which James Watt is the archetype is not fully explainable by the behavior of neurons, clever management, rational investment decisions, or even entrepreneurial risk-taking, then perhaps another phenomenon, unique to Watt’s time and place, is at work. What is the puzzle’s missing piece?
This: A hopeful inventor does not experiment and study for thousands of hours simply because of the economic calculus. Watt’s notebooks record months of trying every material under the sun to seal the first boiler of his separate condenser engine. Given the months of laborious experimentation and the other demands on the inventors’ time, the return on improving even the inventions of antiquity must have been poor indeed, no matter how much cAMP was produced during repeated neuronal exercise. Mr. Watt — like so many of the great inventors and the forgotten “Mr. Doakes” — played the game not because he carefully calculated the risks and benefits of a particular strategy, but because he dreamed of winning it.
Which brings us back to James Watt’s famous walk on Glasgow Green. The quotation from Watt above appears (with minor variations) not only in virtually every biography of Watt, but in just about every history of mechanical invention itself, including that of Abbott Payson Usher. Only rarely noted, however, is the fact that Watt’s reminiscence first appeared nearly forty years after his death and was the recollection of two men who heard it from Watt nearly fifty years after the famous walk.
Robert and John Hart were two Glasgow engineers and merchants who regarded James Watt with the sort of awe nowadays reserved for pop musicians, movie stars, or famous athletes; as Robert Hart recounted in “Reminiscences of James Watt,” they regarded him “as the greatest and most useful man that ever lived.” When the seventy-seven-year-old Watt entered their shop one day in 1813, he was welcomed with adoration and a barrage of questions about the great events of his life. Watt’s recollection of the Sunday stroll down Glasgow Green in 1765 comes entirely from this episode, which Robert Hart didn’t share with the world until 1859. In short, it is not the sort of memory that a skeptic would regard as completely reliable in all its details.
This is not to suggest that Watt’s account is inaccurate, but rather that it says something far more significant about the nature of invention. The research emerging from the fields of information theory and neuroscience on the nature of creative insight offers intriguing ideas about what is happening in an individual inventor’s brain at the moment of inspiration. However, activity in the aSTG, or cerebellum, or anything else, does not explain much about the notable differences between the nature of invention in the eighteenth century versus the eighth; the structure of the individual brain has not changed in millennia. Neither have the fundamental laws that govern our economic decisions.
On the other hand, the number of brains producing inventive insights has increased. A lot.
This is why the hero-worship of the brothers Hart is more enlightening about the explosion of inventive activity that started in eighteenth-century Britain than are their actual reminiscences. In the Harts’ era, statues like those that had once been erected to honor kings, soldiers, and religious figures were for the first time put up to honor builders and inventors. James Watt was an inventor inspired in every way possible, right down to the neurons in his Scottish brain; but he was also the inspiration for thousands of other inventors, both during his lifetime and beyond.
James Watt’s life, from this perspective, is a synecdoche — a stand-in for the whole era of perpetual innovation known as the Industrial Revolution. His modest circumstances at birth, his apprenticeship and training in mathematics outside the traditional universities, his enormous success as an artisan-scientist-entrepreneur, and even his membership in the famous Lunar Society of Birmingham (which included Watt’s partner Matthew Boulton, Joseph Priestley, Erasmus Darwin, and William Small) are characteristics he shares with hundreds of other less well-remembered innovators. He was the era’s most prominent and articulate defender of the legal and moral rights of inventors, making dozens of court appearances (on his own behalf, and in support of others). “An engineer’s life without patent,” he once wrote, “is not worthwhile.” In the mid-1780s, he authored a manuscript passionately defending the work of inventors and explaining how it differs from the work of the manufacturers who make, sell, and use new inventions:
[F]ew men of ingenuity make fortunes, or even can keep themselves on a footing with the tradesman who follows the common tracks, and who possesses no other merit than that of attending solely to his immediate interest without suffering himself to think seriously whether the article he manufactures might, or might not be Improved. The reason is plain, the man of ingenuity in order to succeed in the object he takes in hand, must seclude himself from Society, he must devote the whole powers of his mind to that one object, he must persevere in spite of the many fruitless experiments he makes, and he must apply money to the expences of these experiments, which strict Prudence would dedicate to other purposes.
It isn’t just that we remember Watt as “the most useful man who ever lived” because of his eureka moment, but that we remember his eureka moment because of his reputation as the most useful man who ever lived. A culture aspires to become that which it admires. A sizeable number of eighteenth-century Britons admired men like Watt, and what they represented; a large enough number, in fact, that Britain’s Patent Office, which at the time of Watt’s eureka moment granted fewer than twenty patents annually, was approving nearly three hundred per year by the time of his death. It was that admiration and aspiration that ignited the greatest innovation revolution in human history: one which, not at all coincidentally, has raised per capita human productivity — a number that had barely moved for five thousand years — at least tenfold in the last two centuries.
The inscription on the statue of Watt that stood in Westminster Abbey from 1825 until it was moved in 1960 reminded visitors that it was made “Not to perpetuate a name which must endure while the peaceful arts flourish, but to shew that mankind have learned to know those who best deserve their gratitude.” It is sometimes unclear whether we have actually learned this lesson all that well; a vocal minority seems nostalgic for the faux-Edenic world that existed before humanity started burning coal to boil water, and so produce power. Others have built academic careers on the dubious premise that the aggressive defense of patents by Watt, and those who followed him, did more to retard innovation than to promote it. Both nostalgia and revisionism are poor payment indeed for the debt that the modern world owes to the first generation of humanity that learned how to make heroes out of inventors.