Over the past two and a half decades, David Chalmers has established himself not only as one of the leading figures in Anglophone philosophy of mind, but also as the theorist who has done the most to bring the discipline into the public sphere without a disastrous sacrifice of content. Other specialists before him — Daniel Dennett most notably — have attempted to write for an educated readership broader than the close company of their academic peers; but few have succeeded as well as Chalmers at doing so without either terribly oversimplifying the issues under discussion or lapsing into obscurity at certain predictable junctures. Moreover, he writes, to all appearances at least, with an engagingly open mind on most of the topics he addresses, even where his own convictions prove quite firm and precise, rather than with, say, the sort of fideistic belligerence that certain militant materialists in the field tend to substitute for solvent argument (I shall name no names, not even Dennett’s).
His basic position is that of a philosophical naturalist who is nonetheless not a physicalist. That is, he rejects any explanation that appeals to causes beyond the natural order, but he does not assume that the category of the natural is exhausted by material exchanges of energy. He therefore embraces a form of “property dualism” (or “dual aspect” theory) in attempting to account for mental activity; for him, mind is real and really distinct from the physical, but is nevertheless a natural phenomenon.
Among his signal contributions to the popular understanding of these matters is the clear distinction he insists upon between the “easy” and the “hard” problems of consciousness, the former being those of the psychological and physiological structures and processes associated with mental events, the latter being that of the phenomenal character of subjective experience. He is also credited with popularizing (to the degree such things can be popularized) the notion of “philosophical zombies” — beings in some other possible world that are physiologically and functionally identical to us while yet lacking any phenomenal awareness whatsoever — as a riposte to a number of naïve theories that simply equate conscious mental states with brain states, or attempt to reduce them to mechanical functions in a behavioral system, or explain them away as a secondary and “supervenient” effect of purely physical processes, and so forth. (Admittedly, this aspect of his thought is also the most frequently misunderstood; many readers still tend to assume that he believes that such zombies are actually physically possible entities, which his own commitments would seem to rule out — see below.) He may be best known, however, for his “agnostic” but cautiously sympathetic treatment of “panpsychism” or, more precisely, what he calls “panprotopsychism”: that is, the notion that all material things possess, as an intrinsic property, a kind of primordial proto-consciousness. He tends, in short, toward the view that consciousness is most coherently viewed as a fundamental rather than emergent aspect of the physical world, and hence something on the order of a brute fact, like electromagnetism or gravity.
In any event, and all things considered, Chalmers has an honorable record as an accomplished and interesting thinker and writer. Were this not the case, his most recent book, Reality+, would not be quite as severe a disappointment as it ultimately is — though it would still be just as hard to read with pleasure. For one thing, it is a much, much longer book than it has any business being. The large page-count in itself is nothing new for Chalmers; he has never been a concise expositor of ideas, and his two most important books on philosophy of mind are physically enormous affairs, both of which could have been improved by a judiciously savage editor. But in those cases the longueurs were tolerable, as they were always rewarded by a quantity of interesting and thought-provoking material more than commensurate with the effort required to get through them. This book, by contrast, as regards the balance between sheer volume and genuine substance, has something of the quality of a sugary meringue: the reader is soon surfeited, but never really nourished. This may have something to do with an essential triviality in its topic — whether minds can or should or will or do exist in virtual worlds — which is nothing more than a beguiling category-error compounded from equal parts bad philosophy and bad science (but we will get to that shortly).
But the real problem is that the book breaks down into two broad and equally unsatisfying parts, and neither part lends support to the other: on the one hand, it advances arguments in favor of its central premise that virtual minds are a real possibility, none of which is remotely convincing; on the other, it offers a series of reflections on the moral, social, ontological, and even metaphysical implications of that premise, most of which are reminiscent of an introductory undergraduate course in elementary logic. Sometimes this is because, like many philosophers trained in the Anglo-American method, Chalmers seems eerily ignorant of vast stretches of traditional metaphysics, ontology, and logic. (Thus, when he addresses in passing such hoary philosophical puzzles as the Euthyphro Dilemma or the village atheist’s “If God created the world, where did God come from?,” he seems to be little more aware of how easily classical theistic thought disposes of them than does a total philosophical illiterate like Richard Dawkins.) More often, however, the problem is simply that all the seemingly fascinating questions he would like to raise about life in a virtual reality are in fact not really questions at all, let alone fascinating ones, if one has already granted the fabulous notion that virtual minds are a real possibility.
Naturally, that is to say, once one has been presented with the hypothesis that we could exist as the conscious beings we are in a purely fabricated world of experiences, or even that we already may exist in such a world (a possibility Chalmers takes quite seriously), one is presented also with a certain set of obvious issues to consider: Would our life have intrinsic meaning and value in such a world, or would it be no more than an arbitrary fiction, full of sound and fury, signifying nothing? Would we have real moral responsibilities toward others, toward our “world,” toward ourselves? Could we even be moral agents in any meaningful sense? Would the world of our experiences count as “reality” at all, or only as an illusion? And so on and so on.
But one no sooner poses these questions than it becomes clear that they more or less answer themselves, in part because they are matters of purely relative judgment that rarely rise above the level of semantics, but mostly because the underlying premise makes one’s conclusions inevitable. Chalmers has really very little choice but to take the pragmatic view that for us the real is the realm of the consciously perceptible and phenomenal and relational, rather than some occult realm on the other side of things or some set of unrepresentable physical conditions known only through their effects in the realm of appearances. It is more or less the founding precept of the late modern view of things that the true picture of reality — what Wilfrid Sellars called the “scientific image” — is the one where no one can live; it is the “manifest image” of perceptual experience that is our existential home.
When considering positions suggestively analogous to his own, Chalmers scrolls quickly through Plato’s cave, Zhuangzi dreaming that he is a butterfly, Descartes’s meticulous exorcism of his deceiver demon, Berkeley’s idealist phantasmagory, and so forth, but at the last he arrives at the aptest parallel of all: Kant’s transcendental idealism. Where pure reason is concerned, leave the noumena to themselves; our true dwelling is among the phenomena:
Kant’s realm of appearances is the realm of relations between things, including spatiotemporal relations and causal relations. Kant’s realm of things in themselves is the realm of intrinsic properties of things…. The relational properties of reality are knowable, but the intrinsic properties are unknowable.
So, obviously, if conscious, thinking subjects with free will (assuming there is such a thing) could exist together within a simulated world, with the power to affect one another and their shared phenomenal environment for either good or ill, then by that very token they and their world would be no less real than one composed of such physical ingredients as basic particles, molecules, and organic tissues. How can we say otherwise? Whether something counts as “reality” or not is surely not to be determined by the substrate in which it inheres; if it were, all “realities” would be equally vacuous in some sense, inasmuch as all of them would be reducible to — or would at their lowest levels shade away into — a quantum landscape of pure possibilities. The question is not what the world is composed of, but whether anyone is at home when one knocks on its door; and, if the answer to that second question is “Yes,” then the solutions to all those subsidiary questions of ontology and morality and whatnot turn out to be exercises in tautology: if you are really there, you — ontologically, morally, logically, and in every other significant sense — are really there.
For this reason, most of Reality+ consists in a rather banal process of supplying inevitable answers to otiose queries. And this is made all the more tedious if one realizes, as one should, that the correct answer to the essential question is in fact “No”: in a simulated world, only simulated selves could exist; no one — absolutely no one — ever will or could be at home there. Chalmers does not, however, realize this. So, for him, the questions he raises in his book are not mere idle ponderings, but matters of some urgency. He is convinced that at some point in the coming century our virtual realities will become indistinguishable from the non-virtual universe underlying them.
The physics engines that underpin VR are improving. In years to come, the headsets will get smaller, and we will transition to glasses, contact lenses, and eventually retinal or brain implants. The resolution will get better, until a virtual world looks exactly like a nonvirtual world. We will figure out how to handle touch, smell, and taste.
This may be true, of course, in the limited sense that we might one day be able to create simulated worlds of comprehensive and immersive detail, into which we might enter through, say, neural ports of some kind.
But Chalmers means more than that: he is speaking also of possible virtual worlds populated by virtual minds, perhaps existing in some form of cybernetic machinery or perhaps existing wholly in the form of digital code. The day is approaching, he muses, when we may all have to decide whether to upload our minds to the Cloud. Indeed, he goes on to argue, it is not only possible but extremely likely that such secondary realities and virtual selves will ultimately far outnumber their “nonsimulated” precursors, and may already have done so, and that it may even be statistically probable that we already inhabit just such a simulated world, resting atop an incalculable succession of prior worlds, like a vast pillar of turtles, stretching down into the abyss of an exhausted material universe. And the reason that he believes all of this is his own commitment to what in Reality+ he tends to call “structuralism” but which anyone familiar with his first book, The Conscious Mind (1996), will recognize as his very odd “Principle of Organizational Invariance.”
According to this principle, consciousness is the result not at all of the physical substance or substrate of the brain, but entirely of its structure of relations and functions. Therefore, allegedly, any system of computation structurally isomorphic to a living brain’s will produce the same experiences as that brain would, whatever the platform or physical medium in which that system is enacted. Thus, claims Chalmers, any simulated brain that precisely mirrored a biological brain would be no less conscious than its biological counterpart. His reason for thinking this is his argument from “fading qualia,” which in The Conscious Mind goes rather like this: one can imagine one’s brain being gradually transformed into a silicon rather than organic object through the replacement, one at a time, of each neuron by a computer chip; it seems unlikely, argues Chalmers, that in the process our consciousness would gradually disappear along with our original neurons, given that the structure of the emerging silicon brain — being isomorphic with the neurological brain it is replacing — would produce the same behaviors. And he more or less repeats this argument in Reality+, in this case to demonstrate that one’s mind could be gradually uploaded into a simulated brain without loss of consciousness. This would certainly be, in his opinion, the safest and surest method for transferring one’s mind to a virtual platform.
We can imagine that the uploading process happens over a period of weeks. After the first few cells are replaced, you take a break. Perhaps you’re a bit dazed from the operation, but otherwise you feel normal. The simulated cells behave exactly like the old cells and produce the same behavior, so you’ll behave normally. Given this, you’ll certainly say that you’re conscious.
Something like this will happen at every stage….
Even as “conceivability” arguments go, this one is pretty feeble. It presumes everything it is meant to prove. Chalmers simply takes it as given that the brain is basically a computer and that the mind consists entirely in the electrical impulses that are integrated with one another by that computer’s operative structure, presumably because those impulses convey transcriptions of some kind of functional system, in some form of electrical coding; and hence he also assumes that neurons are simple electrical conduits that can be replaced with neural prostheses so long as the latter allow those electrical transcriptions to continue to flow unimpeded. Then, apparently, so long as we accept the dogma of organizational invariance, we can confidently assume that all the features of conscious experience would simply emerge or be duplicated within a virtual brain, including even the sense of unified perspective that is, as far as our knowledge of nature goes, uniquely an attribute of intentional subjective consciousness. I myself have never been able to understand why Chalmers is so certain that a brain replaced piecemeal with circuitry would continue to yield the same behaviors as its original neurology. To me this seems like sympathetic magic: build a facsimile of a baseball field, and it will mystically summon spirits in from the cornfield; build a model of a brain, and it will conjure unified consciousness into existence once one turns on the electricity.
But, of course, there is no reason for believing any of this; and it seems far more likely that the process of replacing one’s neurons with computer chips would be little more than a very slow process of suicide, producing not the same behaviors as would a living mind, but only progressive derangement and stupefaction, culminating in an inert mass of diffusely galvanized circuitry. After all, any sober phenomenology of the full range of mental acts discloses a host of necessarily unified features that are almost by definition irreducible to a mere integration of diverse mechanical parts and discrete functions, and that no process of computation could reproduce: subjectivity as an indissoluble privacy; the unified and simultaneous field of apprehension belonging to that subjectivity; qualitative consciousness; intentionality and its intrinsic teleology; the indeterminate openness of the mind to novelty and even to fundamental revisions of its conceptual paradigms, which no computational algorithm could simulate; the immediate physical sense of self; the psychological sense of identity; judgments of value, such as “good” or “true” or “beautiful”; the prior and constant disposition of the mind toward these values, which seems to be the necessary motive of all movements of intellect and will toward the world; creative violations of rules that nevertheless make “sense” to us; and so on.
Structures and systems as such, of their nature, have as far as we can tell no unified and simultaneous view of anything, let alone any of the creative or intentional capacities that would be contingent on such unity and simultaneity; nor do they require these things, as the functions of any mere mechanical system, if it is to operate properly, must remain connected to but discrete from one another. Computers work as well as they do, after all, precisely because of the absence of mental features within them, which allows them to process data without being obliged to intuit sensibly, organize, unify, synthesize, or judge anything. They need not even judge whether their results are right or wrong. They need only produce results consistent with their programming.
Even, however, if one fancies that one can explain all of the marks of the mental in naturalist terms — or at least explain them away in the manner of, say, Daniel Dennett — there is still no reason glibly to assume that they are solely effects of some abstract structure, and that physiology is a matter of indifference. After all, is it really the structures as such that are doing the thinking? One can appreciate the frustration of John Searle (a convinced philosophical naturalist if ever there was one) with the oddly inverted Cartesianism in computational accounts of the mind, and especially with the curious dualism that so easily separates mentality into, on the one hand, a kind of functional software and, on the other, the purely structural hardware or “platform” where it is realized.
As mind has never been discovered anywhere except in organisms, where it appears to be associated with brains and nervous systems and nerve tissues and organic cells, what precisely justifies the belief that mental activity resides entirely in purely electrical activity and in the relations of the circuitry that permit it? Would it not make more sense to assume that the brain’s capacity for mentality has some causal connection to the cells and tissues and enzymes, synapses and axons and myelin, composing it, as well as to its uniquely organic and ontogenetic history of continuous growth, development, catalysis, regeneration, and neural routing, as well as to its complex relations with the complete neurology, biology, organs, and history of the body it belongs to? What sort of neural plasticity, one has to wonder, would a network of silicon chips possess? Why assume that neurophysiology is simply one expression of a more general neurotechnology that might equally well be expressed in some other physical medium? Why imagine that mind is simply a pattern of activity that could be realized in a cybernetic simulacrum or digital phantom of a brain? If we take it for granted that thought and experience and all the features of mentality are simply a matter of connections and conductivity, with no relation to the physical ingredients and processes and individual history of the brain and body, we have already entered a kind of quasi-Platonic otherworld, albeit a drearily dispirited one: a realm where abstract patterns are all that is really real, while the concrete details of their instantiations are mere accidents.
Obviously, all of this requires a “functionalist” or “computational” account of the mind. The moment one grants that mental activity can proceed in indifference to the “platform” on which it is “run,” whether one quite phrases it that way or not, one has implicitly decided that mentality as such, in its every dimension, is a kind of program that is implemented in the form of a syntax of algorithmic operations. At the very least, one is bound to affirm a functionalist dualism of hardware and software, as well as the functionalist premise of “multiple realizability” — that is, the premise that the same mental operations can be realized on multiple physical substrates and by way of diverse physical structures.
For myself, I can think of no coherent way of understanding Chalmers’s combination of “structuralism” and the notion of transferable consciousness — at least, once we eliminate magic from our list of possibilities — except as requiring a functionalist model of thought. And this is very odd, for a number of reasons. The first is that his own famed argument regarding philosophical zombies seems, in part, to offer a fairly interesting critique of functionalism. Admittedly, setting aside all the various passing vogues in philosophy of mind (at present, for instance, panpsychism is very much in fashion), functionalism is unquestionably the dominant model of mentality in the Anglophone world — in philosophy of mind, neuroscience, cybernetics, AI research, and elsewhere — so it may be the case that Chalmers believes it correct in principle, even if he demurs from some of its typical features.
And yet it is a very strange theory indeed when one considers it in detail. It tells us that the human brain is what Daniel Dennett calls a “syntactic engine,” which as a phylogenetic fortuity or hypertrophy was selected by life’s Darwinian logic, and which has over the evolutionary epochs come to function as a “semantic engine.” That is, the brain is a machine, similar to a Turing device or some other computational platform, that began its existence as a physiological organ for translating stimuli into responses but that now runs a more sophisticated program (as a result in part of being progressively populated by “memes”) for translating “inputs” into “outputs.” And this program has come to incorporate in its functions a number of mental states that appear to us to be conscious or intentional states with meanings, but that in fact are nothing of the sort. (I will not discuss the indelible absurdity of that claim here, but will assume that it is self-evident.) What we call thought is in fact merely a functional system for processing information into behavior, and every mental state is merely one or another “token” of a certain “type” of operation within that essentially unconscious, irreducibly physical system. Phenomenal consciousness, for instance, is not really the irreducible and incorrigible qualitative experience of a private subject, as it appears to be, but is instead only a discrete operation within a system for synthesizing data that, when considered from an “intentional stance,” we interpret as the private experience of a unified subject; in that form it serves as a convenient “user interface” for interacting with the system of functions it simultaneously represents and dissembles. Neither does the mind perform any genuinely intrinsically intentional acts; intention too is the illusory residue of the same purely formal, purely unintentional system. The governing maxim of functionalism is that, once the proper syntax is established in the neurophysiology of the brain, the semantics of thought will take care of themselves; once the syntactic engine begins running its impersonal algorithms, the semantic engine will eventually emerge or supervene.
Hence the seeming pertinence of the zombies argument: if functionalism is true, beings that are structurally and behaviorally identical to humans but lack consciousness are a logical possibility; one would simply have to assume that, under some different set of physical laws, the syntax of their operations would be realized through “tokens” devoid of the sort of mental states familiar to us. But, if such zombies are possible, then the functionalist reduction of the mental states familiar to us to mere “tokens” of certain operational “types” becomes extremely dubious. The experience (or illusion) of consciousness seems not only an unnecessary and vanishingly improbable candidate for evolutionary retention when other operative tokens are possible; it also looks suspiciously like an epiphenomenal addition to physical existence, a mere impotent echo of physical processes it can do nothing to influence.
All of that is rather beside the point, however. The real problem with functionalism is that it is, in its every form and iteration, utter nonsense. It is meant as a solution to the problem of how thought can arise from a purely mechanical substrate, taking computation as a model for how a system of physical “switches” or operations can generate a syntax of functions, which in turn generates a semantics of thought, which in turn produces the reality or illusion of consciousness. Yet none of this is what a computer actually does, and certainly none of it is what a brain would do if it were only a computer.
Functionalism is nothing but a collection of vacuous metaphors — and, worse, metaphors that refer to a phenomenon that does not actually exist, and that could not do so in any possible world. It is rather on the order of explaining mind by analogy to all those square circles one is constantly coming across in nature. Neither computers nor brains are either syntactic or semantic engines; there are no such things as syntactic or semantic engines; and this is because syntax and semantics exist only as intentional structures, inalienably, in a hermeneutical rather than physical space, and then only as inseparable aspects of that indivisible semeiotic system. Syntax cannot exist prior to or apart from semantics, and neither exists except in the intentional activity of a mind. Syntax in the abstract is always only an artificial distillate of the complete language of signs that generated it; it cannot produce a semantics because it is ontologically dependent upon semantics; and it functions only as always already oriented toward the system of signs for which it has been encoded. Now, of course — to phrase the matter in structuralist terms — the signifier within any given sign may be variable and somewhat fortuitous, but the signified within that same sign is largely invariable within any given syntax.
This very problem, incidentally, bedevils the attempts of linguists to provide a plausible account of the evolution of natural language, despite the embarrassing absence among human cultures of any such thing as a “primitive” or evolutionarily more elementary form of language. Every attempt to reduce fully formed semeiotic economies to more basic syntactic algorithms, in the hope of further reducing those to proto-syntactic functions in the brain, founders upon the reefs of the indissoluble top-down hierarchy of language. No matter how basic the mechanism the linguist isolates, it exists solely within the sheltering embrace of a complete semeiotic ecology.
The problem becomes only more acute when one considers the operative structure of computer coding, because even the distilled or abstracted syntax upon which that coding relies has no actual existence within a computer. To think it does is rather like mistaking the ink and paper and glue in a bound volume for the contents of its text. Meaning — syntactical and semantical alike — exists in the minds of those who write the code for computer programs and of those who use that software, and not for a single moment does any of it appear within the computer. The software itself, for one thing, possesses no semeiotic unities of any kind; it merely simulates those unities in representations that, at the physical level, have no connection to those meanings. The results of computation that appear on a screen are computational results only for the person reading them. In the machine, they have no significance at all. They are not even computations. Neither does the software’s code itself, as an integrated and unified system, exist in any physical space in the computer; the machine contains only mechanical processes that correspond to the notation in which the software was written.
So the functionalist conceit that thought arises as a posterior effect of semantics, and that semantics emerges from syntax, while syntax is generated out of a purely physiological system of stimulus and response, could scarcely be more backwards; this bottom-up causal model of mental agency is a sheer fantasy. When one decomposes intentionality and consciousness into their supposed semeiotic constituents, and signs into their syntax, and syntax into physical functions, one is not reducing the phenomena of mind to their causal basis; one is dissipating those phenomena into their ever more rarefied, remote, and impotent dependent effects. Meaning is not a physical result of lower functions; meaning exists in minds that can extract functional patterns from their own operations and employ them to produce instruments — books, abacuses, Turing devices, what have you — that they can then employ as media of synthetic notation and information-processing. But this happens within what must remain forever an entirely top-down and indissoluble hierarchy of dependent relations, unified at its apex by intentional mind, or all at once it will disintegrate. In any intentional structure, the more “eminent” reality of realized meaning is the ontological ground of the operations that the structure creates and sets into motion. Mind cannot arise from its own contingent consequences.
And even this does not exhaust the problems with computational models of thought. The ontological priority of intentionality over semantic content and syntactic structure tells us a great deal about the structure of language and of meaning, and hence of meaningful thought; but there is an even more radical and primordial sense in which intentionality informs mental actions. I will not dwell on the issue here — and admittedly I am drawing on philosophical resources from outside the modern Anglophone mainstream — but I feel confident in asserting that any truly scrupulous phenomenology of mental acts must reveal that the most basic dynamism of thought — clearly irreducible to processes of input and output — is a kind of a priori orientation of intellect and will toward the totality of being as infinitely desirable intelligible truth. I could cite arguments from figures as diverse as Nicholas of Cusa, Ādi Śaṅkarācāryaḥ, Maurice Blondel, and hosts of others; but for now I will simply assert that such mental acts as recognition and judgment depend upon this rational appetite for the absolute, this total orientation of the thinking and adjudicating intellect toward a total intelligibility at once more original and more ultimate than any proximate object of knowledge, any propositional attitude, any inquiry, any finite project of the will. Only in the light of that finality beyond the finite, and the indeterminacy that it sets free within the mind, and the incessant further longing that it prompts in regard to every finite step toward fuller understanding, do recognition and judgment become possible.
Neither this transcendental appetite nor the necessary openness of mind and will it produces is a form of computation. Indeed, both are in principle repugnant to computation, as they are the very antithesis of the controlled algorithms of stimulus and response that alone constitute — and, of course, limit — computational processes. Even such plasticity as a computational algorithm possesses is the posterior result of the human desire for truth that both produced that algorithm and interprets its results. In us, the structure of knowledge is actualized as a response to a prior orientation toward the absolute. An immanent indeterminacy of discovery is prompted into being by a transcendent determinacy of the will and intellect toward being as a whole and as one. And, for this reason, we are capable of discoveries for which, as yet, no algorithmic potency we possess could possibly prepare us; we enjoy an intrinsic capacity for unlimited novelty of apprehension, and are turned toward it as an object of intention that discloses both the nature of mind and the nature of intelligibility, without any prior constraints upon the operations of reason or of consciousness that would determine in advance what we may find or create. We can come to know because we want to know, and we want to know because we already know more than we understand; and the process of discovery is an ascent toward an ever greater coincidence of that unrestricted desire and the unrestricted intelligibility of which we have an implicit sense. This is also why we write poetry and compose sonatas, why we are able to invent things that nothing in nature adumbrates, why we can always revise even our most fundamental understandings of reality, and so forth.
Simply said, thought is not computation, not only because computation is a syntactic abstraction produced by intending intellects, but also because thought in its essence is an “open system” of discovery and therefore precisely what computation is not, even though thought is also what makes computation possible and what endows computation with a meaning extrinsic to itself but intrinsic to the mind that interprets its products.
Now, perhaps, by itself the sort of structuralism that Chalmers believes in need not be wedded to a functionalist account of thought. He could, for example, adopt some form of panpsychism, ideally as inflected through something like Giulio Tononi’s Integrated Information Theory, according to which “any physical system has subjective experience to the extent that it is capable of integrating information, irrespective of what it is made of.” This might explain, at least, how an artificial brain’s mere structural identity to a biological brain might be sufficient to render it conscious. It might even make it tenuously plausible to believe the (to my mind fantastic) notions that structures as such can think, or that systems as such can have experiences, or that the seemingly third-person functions of structures or systems might also have the character of first-person awareness. At least it would if Integrated Information Theory were not itself incoherent (but that is a discussion for another time). But none of this would be sufficient to explain — indeed, it would seem fundamentally incompatible with — Chalmers’s belief in the possibility of transferring or uploading consciousness from one platform to another. If the brain is merely a fungible occasion of mind, then mind still must be reducible to something like code. Certainly it could not subsist merely in the patterns of electrical energy in the brain, as then the transfer of the actual contents of thought would involve a magical circumvention of Heisenberg’s uncertainty principle … unless, of course, we could so perfectly reproduce any given brain and the pattern of electrical responses within it that the same mind would somehow be duplicated in the process … though that duplication would not actually be a transfer of consciousness … or … but…. (Here, however, we are beginning to see how a silly premise, when taken seriously, generates ever more preposterous reflections.)
None of this really matters in the end. The notion that consciousness could ever be uploaded into a virtual frame or into a cybernetic network is based on so coarsely absurd a picture of the brain and the mind alike, and depends (once again) on an analogy to computers that so fundamentally misrepresents what computers are, that it is well past time we ceased granting it anything more than cursory dismissal. We find it difficult to do so only because we have been habituated to that absurdity by centuries of indulging an illusion. The arguments advanced in Reality+ and the positions assumed there are merely the predictable results of a form of reflection that entered philosophy in early modernity and that then, despite its ungainliness, managed to crowd millennia of far more refined reflection out of our shared consciousness. The mechanical philosophy had just enough utility about it in the project of mastering nature that we were willing to overlook its essentially metaphorical, simplistic, and obscurantist logic; then, while our gaze was averted, it metastasized into the culture’s tacit metaphysics.
Ever since our picture of humanity came to be divided between the machine of the body and its resident ghost, modern philosophy has been confronted by the false dilemma of trying to determine which of the two is really the seat of the mind. For a short time, the ghost enjoyed that grand eminence; but ghosts are rather ineffectual in the physical order, when one thinks about it, and the tendency of the modern mind is not to believe in them to begin with, so in fairly short order the mind had to be retrieved from the ghost’s portfolio and assigned instead to the machine. But machines do not think; neither do they experience anything; they are composites of inert parts extrinsically organized to perform functions imposed upon them from without by beings who do think and experience things, and nothing more. This is true even of machines that obey coded programs, which are nothing but more sophisticated and flexible systems of mechanical operations, producing results that in themselves are still nothing in excess of the mechanical.
Before the modern epoch, however, there was no “problem of mind” as such. Very few would have thought it sensible to ask whether it was the soul or the organism to which mental acts belonged, not because the prevailing paradigm of human life was dualistic, but because it was not (at least, not in the modern mechanistic sense). Not even Platonism could be credibly accused of so antithetical a view of body and soul — despite the regularity with which analytic philosophers mistake the Platonic view of embodiment for the Cartesian — because for the Platonists the body was not a machine merely animated by an extrinsic agency; it was itself already a reflection of an eternal form naturally disposed toward and instantiated by life. There is an entire vision of reality contained in the ancient meaning of the Greek word psychē, insofar as its reference was at once both the principle of life and the principle of subjective experience and thought. Modern dogma, of course, holds that what we think of as biological “life” is reducible to mere chemical interactions, and that we shall surely one day be able to summon living organisms out of our petri dishes as though from a magic cauldron; and yet, despite our ever more comprehensive understanding of biochemistry, whatever that final dash of henbane or last mystic incantation is that has the power to transform chemical volatility into organic vitality, it continues to elude us. Who can say whether life is really any less mysterious than mind? What one can say with a fair degree of certainty is that the unity of organic life — with all its “mindlike” order and teleology, its faculties of self-movement and reproduction — is always present, perhaps to the point of identity, wherever the unity of the conscious and intending mind is found; and, so, to assume that organism and mind can be physically alienated from one another and continue to exist separately is nothing but a wild leap of fantasy.
In the end, however, it is neither the banality of the questions the book poses nor the crudely mechanistic vision it presumes that I find most discouraging about Reality+. Rather, it is the squalor of its perspective. As I have already said, a phenomenology of mental acts, honestly undertaken, tends to reveal a certain constitutive orientation toward absolute ends — absolute values — within them. Simply said, we engage with the world, in both our ordinary and extraordinary exertions of intellect and will, on account of some deeper or higher (or both) preoccupation with truth or beauty or goodness in the abstract, some rational appetite that cannot be sated by any single finite object of knowledge or love or judgment, but that continues to draw us out of ourselves and toward such objects while also drawing us beyond them in search of still greater fulfillments. We do not engage with reality merely in the manner of a machine programmed to accomplish a determinate task, but as living souls animated by an insatiable hunger for truth as such, which opens up within us the directed indeterminacies of living mind.
Chalmers asks what counts as real for us; but the question that seems to be the actual dynamism of thought and consciousness is “What is Truth?” — which is to say, what is the “really real” that transcends any given reality known to us? The most poetic and affecting picture of this longing in Western philosophical tradition is, of course, the Allegory of the Cave in Plato’s Republic, where we learn that, for the lover of Truth, it is better to be dazzled by the Sun of the Good, and even rendered somewhat inept at again penetrating the shadows from which one has emerged into its light, than to live out the thwarted existence of someone who never knows and never truly seeks the fulfillment of that transcendental desire. By contrast, the position that Chalmers advocates — no matter how valiantly he attempts to salvage the dignity of virtual reality from every imputation of illusoriness — has a certain spiritual and moral cravenness about it. It is what one might call a “spelaean” rejection of the Sun of the Good and of the ennobling labor of rational life, in favor instead of the lazy recreant’s retreat deeper into the cave’s shadows. And this becomes all the more distasteful when one considers that a large part of the subtext of Reality+ is the looming environmental catastrophe threatening the planet. It is this, after all, that supposedly may force us soon to decide whether we should flee the ruin of this world into the narcotic bliss or cowardly safety of a virtual alternate reality:
In centuries past, families often faced a decision: “Should we emigrate to a new country to start a new life?” In centuries to come, we may face an equivalent decision: “Should we move our lives to a virtual world?” As with emigration, the reasonable answer may often be yes.
There is something I find genuinely depressing in this, even while I acknowledge that it is more than likely that we will not succeed in arresting the barbarity and stupidity with which our species continues to destroy the richness and diversity and loveliness and fragility of the earth. To prefer the comfortable shelter of a simulated environment to the mysterious, wild, prodigal beauty and sublimity of life and mind — of psychē — that exist in vital nature, or even to be able calmly to contemplate absconding to the former in the aftermath of the latter’s eclipse, seems to me worse than pitiable. It is hard not to think that the value of continued existence on such terms would be vanishingly small, and even rather contemptible, and equally hard not to feel that minds so pusillanimous as to want to survive in that manner would not be worth preserving in the first place — in any world, virtual or otherwise.
Happily, however, and quite contrary to the case made in this book, it is a choice with which we shall never really be presented and hence a choice we shall never have to make.