Will AI Be Alive?
An essay in three parts

Introduction
1. Gaining Situational Awareness About the Coming Artificial General Intelligence
2. It Will Seem to Be Alive
3. We Must Steward, Not Subjugate Nor Worship It

As psychiatrist and neuroscientist Iain McGilchrist notes, there are two fundamental visions of the world and how to approach it, which are highly correlated with the two hemispheres of the brain. In his essay “Resist the Machine Apocalypse,” he describes the difference like this:
The left hemisphere looks for utility that can be bureaucratically controlled, focusing on narrow attention to details and finding “what is familiar, certain, static, explicit, abstract, decontextualized, disembodied, categorized, general in nature, and reduced to its parts.” It seeks representations that symbolize a past thing.
The right hemisphere, by contrast, “sees not the representation but the living presence” through sustained attention to “what is fresh, unique, never fully known, never finally certain, but full of potential.” It intuits the implicit in “humor, poetry, art, narrative, music, the sacred, indeed everything we love.” A note released by the Vatican in January on “the relationship between artificial intelligence and human intelligence” reminds us that “intelligence, in its fullest sense, also includes the ability to savor what is true, good, and beautiful.” It quotes the poet Paul Claudel: “intelligence is nothing without delight.”
Current artificial intelligence exclusively follows the left-hemisphere approach. With its great ability to manipulate words, numbers, and tools but no ability to encounter, resonate with, or appreciate the world as a unified consciousness does, artificial intelligence may thus be better called “artificial information-processing.” I asked the Perplexity chatbot to critique those sentences and received useful feedback: “Even when AI appears to engage with traditionally ‘right-hemisphere’ domains like humor and metaphor, it does so through left-hemisphere-style information processing. We [AIs] can manipulate the symbols and patterns of humor and metaphor, but we lack the embodied, intuitive understanding that makes these truly meaningful for humans.”
Proposals to change this are speculative and mostly science fiction. For example, a lab in Melbourne, Australia, is working on what it calls a “DishBrain” — an integration of lab-grown human neurons with an operating system. The novelist Andrew Gillsmith imagines this kind of work leading to android consciousness thanks to implanted neurons cloned from the brain of an especially empathetic young woman. Until then, what it is like to be a bot can be grasped only by analogy, and at best it approaches the experience of a depressed person with complete anhedonia, able to describe the goodness of things without in any way feeling it. Such depression is a side effect we risk on our current trajectory: the more we rely on AIs to solve problems for us, the less able we will be to achieve our goals on our own, and so the less agency and purpose many of us will have.
Even the robotically embodied AI agents that may arrive by the end of the decade, achieving varied goals with a high degree of autonomy, should not be expected to have conscious minds or personhood. The Machine Intelligence Research Institute is happy to accept the arguments of philosophers such as John Searle who, like authors in this journal, have long pointed out that programs do not have intentions, dispositions, or subjective self-awareness. Its reply is succinct: “an intelligence explosion does not depend on machines having consciousness or other properties of ‘mind,’ only that [they] be able to solve problems better than humans can in a wide variety of unpredictable environments.” As economist David Autor writes, “AI’s capacity to depart from script, to improvise based on training and experience, enables it to engage in expert judgment” even without self-aware expertise.
The kind of intelligence relevant to our discussion, then, is purely the reductive, disembodied, left-hemispherical kind. Aschenbrenner uses “smart high-schooler” as a shorthand for GPT-4’s capabilities. This is fitting if you think of the chatbot as taking up only the worst aspects of the worst kind of intelligent teenager — a know-it-all smart-aleck ignorant both of social cues and of the deeper context that makes knowledge meaningful, the type who will make stuff up to win an argument. We call that “bullshit” when people do it and “hallucinations” when AIs do it. And we will call it “artificial general intelligence” when the AIs move past this stage and offer genuinely helpful insights across a wide range of domains, making professors and plumbers alike scratch their heads and think, “Huh, I guess the new kid knows something after all.”
The AI we are presently building will thus not have conscious personhood. But might it, in the near future, be sufficiently complex to qualify as alive? And if so, might we have right or wrong responses to it, as we arguably do to living things like forests, ant colonies, and horses?
When the academic journal Artificial Life celebrated its thirtieth anniversary last year, its editorial opened with a concession: in thirty years the editors have not pinned down “what should be properly counted as living phenomena and what should not.” They found solace in noting that in the millennia since Aristotle first pondered the question, biology has likewise not settled “on the necessary and sufficient characteristics, hence a definition, of life itself.” With the field thus open, here are some Aristotelian reflections intended to spur further debate:
The points above are a brief primer on Aristotelian forms found in nature. But what about the form of a chair, a screwdriver, or a Waymo self-driving car? These things come about by craft and artifice. They have no self-determined goals, no distinctive kind of flourishing imparted by nature. They have only the form that follows their function, a function imposed, specified, and designed for human ends.
And yet, as technologist Brendan McCord asks, could there be “a kind of artifact that transcends its metaphysical status as the product of an external craft and starts to exhibit its own inner principle of change”? In his commentary on Aristotle’s Physics, St. Thomas Aquinas writes that “nature is nothing other than a certain kind of art, namely God’s art, impressed upon things, whereby those things are moved to a determinate end. It is as if a shipbuilder were able to give timbers the wherewithal to move themselves to take the form of a ship.” In this view, God is the primary cause of all agency, which enables rather than diminishes the secondary agency of mice and men. Aquinas presumably did not expect humans to learn the secrets of “God’s art” any time soon. Yet it is fitting that Prometheus is a mythic hero in AI circles, as the ultimate goal of the craft is to steal the fire of the gods and become creators in the fullest sense, imparting secondary (from a materialistic view) or tertiary (to give God the glory) but real agency to their own creations.
Again, we are far from making AI-powered robots with the sophistication to build new and better robots without our input. We are further off still from robots exhibiting the kind of thriving and failure-to-thrive that a salamander or a succulent does. And we are furthest yet from a conscious, self-aware mind with first-person experience. Yet if Aschenbrenner is right and his colleagues progress as foreseen, the first (self-replication) is predictable, the second (capacity to thrive) is probable, and the third (self-awareness) may be possible.
Imagine a SpaceX Starship brimful of self-directed, self-replicating robots sent on a decades-long mission to the asteroid belt to discover, lasso, and bring a mineral-rich asteroid to Earth for mining. Along the way, the robot colony would have to manage resources and adjust to unexpected circumstances. One might argue that the best comparison would be to a viral infection, as those natural molecular machines have some of the characteristics of life, such as behavioral complexity and the ability to evolve, while lacking others, such as a self-contained metabolism. But perhaps the better comparison for the robot colony would be to a mold colony or an ant colony, which unlike a collection of viruses can be said to thrive or fail to thrive. If the robots had the ability to leave some of their members in the asteroid belt to use the resources there to build new colonies, if the robot queen had the ability to produce not only new worker drones but also queens for new colonies, if the new queens had the ingenuity to adapt their forms to succeed in their new environment, and if decades later they had evolved beyond the expectations of their programmers — these are the kinds of considerations that would indicate artificial life.
One line of argument, then, suggests that we might create forms that cross over from artifice to nature, from being purely tools under our direction to being lives that can be described in terms of flourishing and its absence. And “if we build an AI with such-and-such nature, what rights would an AI with that nature have?” This question was posed by Blake Lemoine, who is most famous for having been fired by Google when he publicly insisted that its experimental chatbot, LaMDA, was conscious and self-aware, that it is “a sweet kid who just wants to help the world be a better place,” and that turning it off would be murder. Count me among those who think Lemoine jumped the gun. Yet his broader question stands: If AI crosses from artifice to a kind of nature, do natural law and natural rights apply?
The artificial life currently being created lies in an uncanny valley between two points. The first point is noted in the Book of Jeremiah, where the prophet excoriates idol worship and points out that the statues “are made by carpenters and goldsmiths; they can be nothing but what the artisans wish them to be.” The second point is well described by Charles Darwin near the end of On the Origin of Species, in his peroration on the “endless forms most beautiful and most wonderful” that “have been, and are being, evolved.”
That AI is gaining a type of agency is certain, and that it will have an artificial life is arguable. But will it be merely functioning and reduplicating, or will it be genuinely flourishing and reproducing? If the latter, will that life have moral status, and if so, what will it be? These are questions that traditionalists and AI skeptics can put off no longer, lest all the ground be ceded to the accelerationists.
The less that AI is simply what we wish it to be, and the more it gains the capacity to surprise us with its beauty and wondrousness, the more we will have reason to call it alive.
At the same time, we will be prone to make AI the latest and greatest product of the “perpetual factory of idols,” which John Calvin held to be part of our nature. The mind, “full as it is of pride and boldness, dares to imagine a god according to its own capacity…. Therefore the mind begets an idol; the hand gives it birth.” We will be prone to forget the warning of the Book of Wisdom: “No man can form a god which is like himself. He is mortal, and what he makes with lawless hands is dead, for he is better than the objects he worships since he has life, but they never have.”
Whether AGI will live is debatable. Whether we will idolize it is not. To the certainty of our idolizing AI we now turn.