Whether one wants to call it a problem, a plague, or a pending crisis, it is clear that healthcare systems throughout the developed world are increasingly difficult to sustain. This is true whether they are market-dominated (as in the United States) or government-run and heavily regulated (as in Canada and Western Europe). In the U.S., tens of millions go without health insurance. Medicaid costs are giving the states economic fits, and the projections for Medicare over the next two decades are a well-publicized source of anxiety. In Canada, waiting lists plague the national healthcare system, and though patients are well covered for physician and hospital costs, they also pay a good bit out of their own pockets for other services. In Western Europe, the combination of lagging economies, high unemployment, and a citizenry unwilling to tolerate benefit cuts is giving administrators and legislators a chronic headache.
Yet even as healthcare costs continue to rise faster than inflation on both sides of the Atlantic, there is good reason to doubt that the actual health gains will be anywhere near proportional to the cost escalation. Indeed, the recent history of health progress shows a significant divergence of costs and benefits: small health gains achieved at ever higher costs. Moreover, the fact that the rising cost problem afflicts all systems should undercut a misconception common to both pro-government and pro-market advocates: that an organizational solution exists, if only their respective ideological strategies were implemented. That may have been the case in the past, but it makes less and less sense in light of expensive medical advances and undiminished public demand for them. We increasingly want more healthcare than we can reasonably afford, and we are often unsatisfied with the healthcare we get.
Our predicament invites us to consider two fundamental but neglected problems: our unwavering national commitment to medical progress and the way medicine and the broader culture understand the place of death in human life. But haven’t these problems been discussed enough already, in debates over the high cost of medical care at the end of life or the role of technology in pushing up costs? I think not. The dilemmas of progress and the realities of death are commonly domesticated and often trivialized, turned into little more than troublesome management puzzles. We have lacked a serious and sustained consideration of the value of medical progress, beyond simply discussing how best to manage and pay for it. And we have approached death in the public square mainly with calls for new death-defying advances and greater patient choice at the end of life. Such responses are insufficient to the challenges ahead and the gravity of these subjects.
Unless we think more seriously about progress and death, there will never be a feasible, humanly tolerable way to organize and run a healthcare system. We need to begin with the present moment: Where are we now with the fruits of progress after a century of rapid development, and what are we to make of death now that so much has successfully been done to forestall it? In the political realm, neither liberals nor conservatives have grappled adequately with these questions, and yet both sides might have something constructive to offer, if only we could cut through the loud and divisive clashes of partisan politics.
The problems of progress and death are usually dealt with in American society by evasion. They are translated out of the uncomfortable and seemingly intractable language of philosophy and religion into the more comfortable technical patois of law, management, and policy. What kind of medical progress, for instance, will do us the most good? That question is rarely addressed in any direct way, since we wrongly assume that we already know the answer. Progress is taken to be an unquestionable value and goal, stifling any critical examination. Benjamin Franklin captured this spirit in a 1780 letter to the great scientist Joseph Priestley: “It is impossible to imagine the height to which may be carried, in a thousand years, the power of man over matter…. All diseases may be prevented or cured.”
That kind of optimism continues to this day. The budget of the National Institutes of Health has enjoyed a steady and unbroken record of annual increases for over 30 years, now reaching $28 billion, approved with little dissent in a wholly bipartisan way. Few if any other federal agencies can match that record. Yet however much progress has been made, healthcare budgets always grow, the standard of what counts as good health steadily rises, and there is no such thing as too much. The economic cost of all that progress is now staggering.
The strategy of choice from both the left and the right to deal with that cost has been organizational and ideological. From the left, the emphasis has been on achieving government-financed, universal healthcare. Such a system would guarantee care for all citizens and allow government to use supply-side controls, such as caps on drug prices, to hold down costs. From the right, the emphasis has been on increasing consumer choice using market mechanisms, increasing competition among providers, and reducing government regulation.
Taken together, these strategies yield a bewildering array of management tactics, actual or proposed, to hold down costs: more medical research to rid us of expensive diseases, more health service research to design more efficient mechanisms of healthcare delivery, increased use of information technology (being pushed jointly by Hillary Clinton and Newt Gingrich), evidence-based medicine, medical savings accounts, physician incentive payments, and on and on. Everybody has a macro- or micro-scheme to make the system work better. Yet none of these reform ideas, individually or in combination, has worked to stop cost escalation in any notable way, even if there are scattered and local examples of small-scale success. And given that the cost-control effort in the U.S. began with the Nixon administration, we can hardly say that reform schemes have not had enough time to be tested.
In one of the rare examples of professional agreement, almost all economists believe that progress-driven technological innovation is the main engine of cost increases, stemming both from the development of new technologies and the intensified use of older ones. A common estimate is that 40 to 50 percent of expenditure growth can be traced to the increased costs of technology, far outrunning ordinary cost-of-living increases, malpractice suits, or administrative costs (among other favorite villains). As the physician-economist Thomas Bodenheimer spelled out in a June 2005 article in the Annals of Internal Medicine, there are many causes that drive this high-tech cost growth. They include increased capital expenditures (e.g., hospitals adding new or steadily upgraded diagnostic devices), increased use of new medical procedures and drugs (e.g., coronary angiography), expanded educational needs to master the use of the technologies, and increased personnel time to use the technologies with sick patients. Bodenheimer quotes the Brookings Institution economist Henry Aaron to put to rest a widespread misconception: that the wider use of a given technology will drive down unit costs. “Rapid scientific advance,” Aaron has written, “always raises expenditures, even as it lowers prices. Those who think otherwise need only turn their historical eyes to automobiles, airplanes, television, and computers. In each case, massive technological advance drove down the price of services, but total outlays soared.”
Not everyone finds these dramatic cost increases to be worrisome. Some argue that progress always costs money, but that in the end it does not matter how much a country spends on healthcare: if that is what people want, it is their right to make that choice. A second response, pushed by the Harvard economist David Cutler and the present administrator of the Centers for Medicare and Medicaid Services, Mark McClellan, is that investment in health research is the best of all national investments. New medical technologies may be expensive, these experts say, but they return at least five times the cost of the research, because of the economic value of the lives saved and the consequent gain in life expectancy.
But both of these arguments are ultimately flawed. The first one assumes that there is a direct correlation between satisfying market preferences for healthcare and actual satisfaction with one’s health. But a preference for unlimited healthcare has never been shown to guarantee personal satisfaction with one’s health. In fact, there is evidence that many people feel subjectively worse about the state of their health even as the population’s health objectively improves. (This argument is made, for example, by Dr. Arthur Barsky in his book Worried Sick: Our Troubled Quest for Wellness.) Meanwhile, the Cutler-McClellan argument fails to consider the economic downside of ever more costly medicine (such as cuts in benefit coverage that disadvantage the poor), and it depends much too heavily on the use of disputed economic models. Those models attempt to put an economic value on individual lives by using survey research data on what people say they would be willing to pay for increased safety or additional years of life. This seems to me a fallacious argument.
A third response to the impact of technology on healthcare costs — evidence-based medicine — seems more plausible. The aim of evidence-based medicine is to measure the actual efficacy of medical procedures, diagnostic and therapeutic, and then to use the scientific results to set benchmarks for physician practices. Yet quite apart from the fact that scientifically credible evidence is expensive to gain, many physicians are suspicious of what they call “cookbook medicine.” They see it as great for outcome probabilities in general but less useful in telling them how to treat their individual patients.
To know scientifically that a procedure may, in general, have a 75 percent, or 50 percent, or 5 percent statistical likelihood of benefiting a certain class of patients provides no obvious guidelines on whether it will benefit an individual patient, or whether its cost will be “worth it,” however we choose to define that expression. As the likelihood or scope of medical benefit decreases, our ethical problems increase. Using a high-cost, high-tech therapy when the likelihood of dramatic improvements in health is 75 percent seems like an obvious good. But what if the likelihood of success is only ten percent, or what if the maximum benefit is likely only a few more weeks or months of life in the hospital? How then do we judge?
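To see why the judgment gets harder as the odds fall, it may help to make the arithmetic explicit. The sketch below is a minimal illustration in Python, using entirely hypothetical costs and probabilities invented for the purpose rather than data from any study; it computes the expected months of benefit from a treatment and the cost per expected month gained.

```python
# Hypothetical illustration: how cost per expected month of benefit
# balloons as the likelihood and scope of benefit shrink.

def cost_per_expected_month(cost, probability_of_benefit, months_if_benefit):
    """Return (expected months gained, cost per expected month) for one treatment."""
    expected_months = probability_of_benefit * months_if_benefit
    return expected_months, cost / expected_months

# Three invented scenarios, each costing the same $100,000.
scenarios = [
    ("75% chance of five more good years", 100_000, 0.75, 60),
    ("10% chance of six more months",      100_000, 0.10, 6),
    ("5% chance of one more month",        100_000, 0.05, 1),
]

for label, cost, probability, months in scenarios:
    expected, per_month = cost_per_expected_month(cost, probability, months)
    print(f"{label}: ~{expected:.2f} expected months, ~${per_month:,.0f} per expected month")
```

The same hypothetical $100,000 buys roughly 45 expected months in the first case, less than one in the second, and a small fraction of one in the third; the numbers settle nothing morally, but they show why the ethical problem sharpens as the probability falls.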
In a remarkably candid op-ed in the Wall Street Journal, Miles D. White (Chairman and CEO of Abbott, a pharmaceutical company) points to “healthcare’s paradox of progress.” He asks us to consider what an extra year, or six months, or 90 days of life is “worth,” and he argues that “we must start to analyze the value to society of innovations.” To say that is near-heresy in a pharmaceutical industry which gives the highest possible status to innovation, for reasons of competitive advantage, profit, and health gains. Yet it is precisely the right issue, especially as we think about the opportunity costs of high-tech medicine.
Let me offer a few of my favorite examples of the innovation problem. There are at least four expensive technologies already or soon to be on the market for the treatment of heart disease: a drug-eluting stent that is triple the price of earlier stents, doubling annual expenses to $4.6 billion; an improved ventricular assist device for use with patients who are not candidates for transplantation, at an estimated cost of $16 billion a year; an expanded use of the implantable cardioverter defibrillator, adding 400,000 new patients at a cost of $24 billion, or $120 billion to treat the estimated backlog of 2 to 4 million patients (with no clear way of determining which individual patients will benefit); and the long pending artificial heart, which could add costs of $11 billion a year. Now it is sometimes said that, in a medical economy of $1.4 trillion, the cost of each of these therapies taken individually is a drop in the bucket. One might say the same about the drug costs for treating colorectal cancer: about $31,000 for an eight-week course and up to $161,000 for some 12-week treatments. And that is only for one type of cancer. Taken together, such costs give added vitality to the old Washington joke, “a billion here and a billion there begins to look like real money.”
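The defibrillator figures above also invite a quick consistency check. The sketch below is a back-of-the-envelope calculation using only the numbers quoted in this essay; the per-patient cost is simply derived by division and should not be read as a published price.

```python
# Back-of-the-envelope aggregation of the defibrillator figures quoted above.
# The per-patient cost is derived by division, purely for illustration.

new_patients = 400_000
cost_of_new_patients = 24e9                                       # $24 billion
implied_cost_per_patient = cost_of_new_patients / new_patients    # $60,000

backlog_low, backlog_high = 2_000_000, 4_000_000
backlog_cost_low = backlog_low * implied_cost_per_patient         # $120 billion
backlog_cost_high = backlog_high * implied_cost_per_patient       # $240 billion

medical_economy = 1.4e12                                          # $1.4 trillion

print(f"Implied cost per patient: ${implied_cost_per_patient:,.0f}")
print(f"Backlog cost at that price: ${backlog_cost_low/1e9:.0f} to ${backlog_cost_high/1e9:.0f} billion")
print(f"New-patient cost as a share of the medical economy: {cost_of_new_patients / medical_economy:.1%}")
```

At the implied price of about $60,000 per patient, the quoted $120 billion backlog figure corresponds to the low end of the two-to-four-million estimate, and the $24 billion for new patients alone amounts to roughly 1.7 percent of the $1.4 trillion medical economy.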
To be sure, the median survival time for colorectal cancer patients has nearly doubled over the past decade, at the cost of a 340-fold increase in drug expenses. But in the case of some of the treatments, the gain is limited, as low as seven additional months of survival time. Likewise, most of the heart disease treatments cited above do not cure the disease; they just help people live longer with it, and often not much longer at all. As a society, we rightly cringe at saying that a few extra months of life just “aren’t worth it.” But we also cannot ignore the opportunity costs of letting expensive medications at the end of life trump other goods and obligations, including the obligation to provide basic medical care to the poor. As a team of oncologists put it: “As a society, we are reluctant to systematically deny access to expensive treatments that extend life by only a few weeks, but the morality of refusing to make deliberated choices is itself questionable.”
One way or another, our society needs to reconsider the nature of its commitment to medical progress. We need to stop assuming that every technological innovation is unequivocally good, and that progress should be open-ended, ever advancing, with no final goals or limits. In The Mirage of Health, the late biologist René Dubos provided good scientific reasons why disease and illness will always be part of the human condition. Assuming Dubos’s judgment to be biologically correct, this might seem to justify the belief in infinite progress: after all, the work of improving health will never be done; given nature’s fickle ways and mankind’s penchant for creating new health hazards, the work of medical innovation seems endless. But we are also discovering that throwing increasingly expensive technologies at disease, and particularly the chronic and degenerative diseases of aging, is an economically unsustainable or unwise way to proceed. And we are learning that progress itself steadily ups the ante about what counts as good health. We want and expect more from medicine than our grandparents did, and our adult children already want and expect more than we do. We are stuck in a vicious circle: the more we get, the more we want, and the more we want, the more we try to get. The result is an unaffordable healthcare system and a society that puts the pursuit of health above everything else. And given the large percentage of healthcare costs that are paid from the public treasury — even in the U.S., with its complicated system of private employer-based insurance — this healthcare problem is also a massive political problem.
As the annual increase in the NIH budget indicates, medical progress commands an unusual congressional consensus and great public support. This is true even for the most controversial area of research: stem cells. Everyone aggressively supports some form of stem cell research — whether using adult cells, umbilical cord cells, or cells taken from destroyed human embryos. Many states have already rushed to fund expensive stem cell programs — the largest initiative being in California, which recently authorized $3 billion over ten years for stem cell research. The public controversy centers on embryonic stem cell research, which social conservatives oppose because they believe that deliberately destroying human embryos is wrong. But these moral opponents regularly make their arguments in utilitarian terms — by saying that embryo destruction is “unnecessary” because adult stem cells may “work better.” The most vigorous opponents of embryonic stem cell research, in other words, are also the most vigorous proponents of massive public subsidies for alternative forms of stem cell research. They still embrace the gospel of medical progress.
Meanwhile, market-oriented conservatives have few hesitations. In the name of freedom and progress, they embrace a notable moral relativism: It is not the job of industry to pass judgment on what succeeds in the marketplace. Indeed, many economic conservatives fear state regulation imposed on ethical grounds, even as they embrace government-led research. As The Wall Street Journal editorialized, “political backing will be needed to damp down [ethical] objections to this kind of progress.” In the end, technological innovation is pushed as a major source of prosperity — as a source of jobs, profit, and national pride.
Many social conservatives have acknowledged the potential for moral and cultural harm in market practices. But they do not usually assault economic conservatives with the fury they reserve for liberals. A commitment to market values still dominates modern conservatism, admitting of no final ends, putting profit and preference satisfaction ahead of most other considerations, serving the faith in infinite progress as well as anyone could ask for. The pharmaceutical industry — resistant to price controls, dedicated to innovation, as willing to produce lifestyle drugs as life-saving drugs — stands as a model of this outlook and approach.
I wish I could say that liberals do better when it comes to thinking critically about the value of progress. But perhaps even more than conservatives, they are strict adherents of Enlightenment values. Science, rationalism, and the pursuit of progress (taken to be the main way to pursue happiness) are deep liberal commitments. This is visible in the liberal leadership of the stem cell movement and in the enthusiasm of many liberals for various enhancement technologies, such as radical extension of life expectancy, determination of the genetic traits of one’s children, and the effort to improve many human capacities from memory to intelligence. No liberal cause is higher than saving life and relieving suffering. These aims and values are surely shared by American conservatives, but they are held with less intensity, offset by religious or other sources of skepticism about improving the human condition in any ultimate sense.
At the heart of the liberal ideology of medical progress is the notion of control: specifically, the scientific mastery of nature, both the kind that hurts us and the kind that needs improvement, and personal control of our own biological lives, including (for some) euthanasia and physician-assisted suicide when nature can no longer be dominated. Needless to say, the liberal idea of progress admits of no final ends or purposes, no limit to what might be achieved, and no real grounds beyond public safety to limit the research imperative.
While liberals are usually thought of as market opponents, they have much in common with market conservatives. “Choice” is a word much used on both sides of the ideological aisle, whether by the libertarian market right to make and sell whatever people will buy, or by the liberal scientific left to choose whatever biological future we are imaginative enough to devise. Yet while some liberals are willing to consider rationing healthcare resources to set limits on costs (if not on progress itself), the more common liberal trait is a faith in better management techniques and increased research funding to get us out of tight philosophical and economic corners. But in reality, our choices are not so easily fudged.
What demographers call the “health transition” — the shift from short lives marked by death from infectious diseases to long lives marked by death from chronic diseases — can be thought of as the great trade-off. The momentous gain in life expectancy — which began long before modern clinical medicine became efficacious but accelerated thereafter — was accompanied by changed patterns and trajectories of death. Death from most infectious diseases (such as dysentery, typhoid, plague, and smallpox) was often miserable but relatively fast, lasting from a few days to a few weeks; and, if one survived, there was rarely any lingering damage. Contemporary death, increasingly in old age, is for the most part slow and drawn out, lasting many weeks, months, and often years. This is the modern medical bargain: a longer, healthier life followed by many more years of serious decline and disability. Of course, it is often forgotten that the main reason for a longer life expectancy has been the sharp drop in child and maternal mortality. But the fact that medicine can now prevent or hold off many causes of death, in ways almost unimaginable 150 years ago, has created a number of new problems and moral dilemmas.
Despite its successes against many deadly diseases, modern medicine does not really know what to think about death itself. The medical enterprise is plagued by a great schism. On one side is the palliative care movement, working to pull the care of the dying back to its ancient roots of giving comfort and relieving suffering. An acceptance of death as an inherent part of life is taken to be necessary for a peaceful death. The other side of medicine is shaped by the research drive, aiming to find cures for any and all lethal diseases (none are exempt at the NIH). Death is the enemy, not to be accepted. Why do people die? The tacit answer is that they die from bad health behavior (which can be reformed), from genetic causes (which can be eliminated), or from the temporary inability of research (such as stem cells) to find cures. Such cures only require more time, money, and tireless zeal. The research drive in effect treats death itself as a curable disease, a kind of contingent biological event.
These two sides of medicine are thus at war with each other: accept death as a biological inevitability or reject it as a lingering biological accident. In the meantime, people keep dying. The idea of looking more closely at death and questioning the research imperative to “cure” it is less attractive than seeking largely managerial ways of helping the terminally ill to die more peacefully, by rescuing them in their final days or final hours from the cure-driven apparatus of modern medicine.
After many complaints and bad publicity in the 1960s and 1970s about the way the dying were being cared for (including abandonment by doctors, indifference to patient wishes, and often uselessly aggressive treatment), three specific reforms emerged: the hospice and palliative care movement, living wills and the appointment of surrogates to effect patient wishes, and a reform of medical education to encompass end-of-life care. Lurking in the wings was also a renewed push for euthanasia and physician-assisted suicide, the latter eventually legalized in Oregon.
The three reform efforts have had mixed success. The hospice movement, with improved palliative care as its core value, has been the most effective strategy, now reaching over 500,000 terminal patients a year. Living wills and the appointment of surrogates have had far less impact. Despite nearly 30 years of publicity, it is estimated that less than 25 percent of the adult population has made use of them; and even when they have, treatment instructions are often ignored by doctors or family caretakers. Physician education in end-of-life care has surely improved in recent years, but it is still slight and sketchy in comparison with efforts to teach aspiring doctors how to keep patients alive.
Moreover, despite the fact that Oregon permits physician-assisted suicide, there is little evidence of any strong push to extend the practice to other states (California being the exception). Yet even in Oregon, fewer than 74 people a year have made use of it, confirming earlier studies which found that there would be no significant demand for physician-assisted suicide or voluntary euthanasia and thus no measurable healthcare savings. More significantly, there is clearly an ethical reluctance in this country to permit or encourage these practices. This opposition no doubt springs from a recognition that assisted suicide and euthanasia would radically change the role of doctors, giving them a dangerous power over life and death long ago warned against in the writings of Hippocrates, and from worries about the potential for abuse, as seen in the Dutch practice of euthanasia.
As we are regularly reminded, the ethical and economic dimensions of medicine are irrevocably intertwined. Beginning in the 1970s, Medicare figures have shown that those in their last year of life (about 5 percent of Medicare recipients) consume some 25 percent of Medicare reimbursements. Many took this figure to show that too much money was being “wasted” on the dying. In reality, the figures were drawn retrospectively from medical records, with no indication of whether the 5 percent were known in advance to be dying but nonetheless treated excessively. One study showed that the most expensive patients are those who are not thought to be in danger of dying but who unexpectedly take a bad turn, provoking an all-out effort to keep them alive.
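It is worth spelling out what that familiar statistic does and does not imply. The sketch below uses only the two percentages quoted above (no actual Medicare data) to compute the implied per-person spending ratio; it says nothing about whether any of that spending was wasted.

```python
# What the quoted figures imply about per-person spending, and nothing more:
# about 5 percent of Medicare recipients account for about 25 percent of reimbursements.

share_of_recipients = 0.05   # those in their last year of life
share_of_spending = 0.25     # share of reimbursements they consume

per_person_last_year = share_of_spending / share_of_recipients                   # 5.0x the overall average
per_person_everyone_else = (1 - share_of_spending) / (1 - share_of_recipients)   # ~0.79x the overall average
ratio = per_person_last_year / per_person_everyone_else                          # ~6.3

print(f"Recipients in their last year of life cost about {ratio:.1f} times as much per person as everyone else")
```

A ratio of roughly six to one is striking, but, given the retrospective nature of the figures, it cannot by itself tell us whether the money was spent on patients known to be dying.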
And here lies perhaps the hardest dilemma: not what to do when people are dying, but what to do when they might yet be “saved” or when their health prospects are unclear. Many efforts have been made to define such crucial terms as “medical necessity” (what patients need to save their lives or reduce their suffering) and “medical futility” (what treatments will do very sick patients no good at all). And many people assumed that the meaning of these terms could be determined empirically, as a straightforward medical matter. In fact, this has not been possible, and these terms are used less and less in the clinical setting, beset as they are by two lines that have grown increasingly fuzzy in recent years: the line between living and dying and the line between useful and useless treatments.
We all know that technology is seductive: give me a try, it says, since “hey, you never know” (as an advertisement for the New York Lottery puts it). As with the lottery, there are always some winners, even when the odds are astronomically against them. Every doctor can tell a story of a highly improbable treatment outcome. Moreover, precisely because medical technology so often promises some potential benefit, even if small and uncertain, doctors are forced with much greater frequency to decide to forgo treatment — that is, to decide when not to take one more possible step, one that will likely do no good at all, but just might, maybe. One estimate (though with no solid data behind it) is that 80 percent of current deaths come about because of deliberate decisions to cease or forgo treatment. Most doctors and families in such situations feel that they have no real choice, that all medical benefits for the given patient have come to an end. Yet this situation has spurred charges, in some conservative quarters, that abuse is rampant, with treatment discontinued so that death becomes the explicit aim. And it has inspired claims, in some liberal precincts, that there is now no moral difference between allowing patients to die and directly killing them, a claim used to justify euthanasia. Both positions seem to me wrong, but for reasons beyond the scope of this article.
Decisions about whether to use expensive or burdensome treatments with a statistically low likelihood of success have received little systematic attention, though both economic and ethical reasons demand it. If one believes that the highest good is to resist death, then a low probability of a good outcome is seen as better than no probability at all. But if one believes there are other goods at stake, individual or societal, then we face a more difficult set of decisions, because the marginal potential benefit of medical intervention is not cost-free.
We also face the age-old problem of medical uncertainty. Medical progress increases uncertainty by increasing the possible ways of intervening in response to disease. Evidence-based medicine, as noted, can generate statistically useful data, but it provides no certain predictions about individual patient responses. Moreover, prognosis near the end of life is exceedingly uncertain: it is often hard to know whether someone has six months, six weeks, or six days left to live. Given this uncertainty, the challenge before us is whether we can learn to forgo high-tech interventions in individual cases, even when death is not yet inevitable, because doing so might allow us to sustain a healthcare system that provides decent, high-probability medical benefits for all. Meeting this challenge requires a greater willingness, in the culture, to accept personal death in its proper season rather than seeing death as the greatest enemy, or seeing every illness (especially among the elderly) as a triage situation that justifies putting all other concerns temporarily aside.
The struggle against death has always had the highest priority in clinical medicine (at least since the late nineteenth century, when medicine started becoming effective in saving life), and it remains the highest priority of progress to this day. Over the years, the NIH has spent the most research money on combating the leading lethal diseases (such as cancer, heart disease, stroke, and, more recently, AIDS), and much less on chronic, non-lethal conditions (such as mental illness, arthritis, and osteoporosis), which together afflict many more people.
This set of priorities has never been seriously challenged. And the steady decline in mortality from the most lethal diseases has encouraged more, and not less, research to defeat them. While it might be hard to prove empirically, I think that much of the healthcare cost pressure in developed countries can be traced to the war against death, whether in the intensified use of technology to diagnose lethal disease or the development of expensive technologies to treat it. Time spent in an intensive care unit may or may not save your life, but there is not the least uncertainty that the bill for the effort will be staggering.
While there are surely many exceptions, the leading killer diseases are primarily diseases of aging. I have long believed, for that reason, that they should have a lower research priority at the NIH. At a minimum their budgets should plateau, allowing other research budgets to increase proportionately. A larger proportion of the remaining funds should be allocated to support prevention research. In caring for the elderly, we should focus on quality of life, not length of life. The time has come to take that idea seriously.
At the clinical level, it would seem appropriate to insist on a strong likelihood of success — a decent prospect for more years, not just months, of life in good health — before proceeding with treatment in intensive care units or the prescription of enormously expensive devices and drugs. Drug and device manufacturers should be required to provide solid information on the likely economic impact of any new, or improved, product they want to introduce into the medical marketplace.
To offset these technological restrictions, there should be a far greater emphasis on two fronts. The first would be more research and clinical work on the disabilities and frailties of old age, taking seriously the notion of improving the physical and mental quality of elder life. At age 75, I do not look for medicine to give me more years, but I do want my remaining years to be good years, with mind and body reasonably intact. The second emphasis should be on long-term care. Some 30 percent of the elderly will spend some time in a nursing home, and by their eighties, almost all will need help carrying out the ordinary activities of daily living. Roughly half of those over 85 have some degree of dementia, placing a heavy burden on family members, many of whom need financial, social, and psychological help.
Liberals have been ambivalent about, and in some cases openly hostile to, policies with this kind of anti-technology bias. They see such policies as a form of “ageism,” treating the elderly differently from the young, and they reject the notion of applying different standards based on age for the use of high-technology medicine. They also want improved long-term care, but not at the expense of technological medicine. Social conservatives, meanwhile, have commented little on the resource allocation problem (at least in the way I have framed it), but there is in the conservative tradition a healthy respect for a limited and finite human life cycle, a time to be born and a time to die. Such conservatives have sensibly resisted the call for radically increasing human life expectancy. Yet they also worry that setting limits on the use of existing medical technologies would lead to a more radical program of “social euthanasia,” where the old are left to die or actively killed because they are too burdensome or too expensive.
To be forced, out of sheer necessity or prudence, to limit healthcare shows no disrespect for life, so long as it is done in the name of other important social goods or to preserve a decent level of healthcare for all. No one has an unlimited claim on medical resources, particularly when providing them would divert money from other important social needs. In this connection, the conservative jeremiad that ours is a “culture of death,” aimed at getting rid of the weak and defenseless, seems both misleading and misguided. Death rates for every age group in this country are steadily declining, and those over 85 are the fastest growing segment of the population.
If anything, the war against death has been waged too fiercely, putting many people at risk of needlessly poor deaths — deaths so miserable that many become tempted by euthanasia or assisted suicide as more desirable exits. In reality, it is the “culture of life at all costs” that might one day lead us to accept the so-called “culture of death” by human will — both as an answer to prolonged dwindling at the end of life and as an extension of modern control from birth to death. While many conservatives believe it is disrespect for life that drives the euthanasia campaign, I think that misses the point. It embodies the same drive for ever greater control of our biology that marks the medical enterprise as a whole, fueled by an excessive fear of decline and encouraged by the perfectionism inherent in medicine’s research zeal.
Our American approach to medicine inculcates an attitude toward death that amounts to technological captivity — a captivity that affects liberals and conservatives alike, albeit in rather different ways. The great irony in this situation is that as this deeply rooted drive for research and innovation grows stronger, the evidence also grows that medical progress and clinical innovation are not the main determinants of population health. The best estimates are that no more than 40 percent of the decline in death rates can be traced to organized healthcare. It is, instead, the socioeconomic conditions of society that make the greatest difference. The best predictor of a healthy life is a person’s level of education, and that level is in turn made possible by the general prosperity and organization of a society.
If technological innovation proceeded not one step further in our country, life expectancy would continue to rise as long as the general standard of living continued to rise, and even more if effective disease prevention strategies were put into place. Of course, technological innovation will continue. But we might dream of different priorities for its development and resist expensive but only marginally effective treatments. Research as such is not the problem, but using technological advance as the only serious test of useful research is a problem that we are persistently unwilling to confront. It has been demographic, not biological, research that has revealed the social determinants of health, and it will be political and ethical reflection, not new technologies, that will enable us to make use of that profound knowledge. Devotion to progress demands no less.