Anyone who has worked with ChatGPT or a similar AI chatbot has felt its growing usefulness. These models process and produce vast amounts of text with ease and decent reliability, and they are getting better at it. The voice assistant in GPT-4o responds with human-like rapidity and a human-like voice. It mimics our conversational pauses, sighs, and laughter. It sometimes sounds like us. The Turing Test was once a way to measure the difference in kind between human and machine language. Now it seems the machines have passed the test: we have an experience of mutual comprehension. The effect is that of the AI speaking with you, understanding, explaining, teaching, crafting. In short, the system is working.
This success has won many admirers. Philosophers of mind, once dealing in a fanciful set of theories with little practical use, now see in these AI systems an endless supply of anecdotes confirming what some of them have been saying for a while. The computational theory of mind — the idea that human cognition works like a computing system — has been validated, they proclaim. As two philosophers in the field, Gualtiero Piccinini and Matteo Colombo, recently observed:
Typically, AI researchers are not overly concerned with how similar AI systems are to biological systems. Yet, AI and CTM [the computational theory of mind] are intimately related. To the extent that CTM is correct in asserting that minds are computing systems, it should be possible to reproduce mental computations, or at least computations equivalent to mental ones, in artificial systems. And to the extent that AI makes progress in building computing systems that exhibit mental capacities, it lends plausibility to CTM.
Progress has certainly been made, most notably in the ability of these chatbots to digest and produce language itself. Surely it makes all the sense in the world now to imagine the human mind as an organic computing system, complete with hardware, software, processing power, and programming. Some, such as Elon Musk, have bought fully into this view, deploying their money, time, and influence to build upon it.
But what kind of world is this theory endeavoring to build? This is the central, if veiled, concern of philosopher David Bentley Hart’s new book All Things Are Full of Gods. Expanding on his previous work on modernity, philosophy, and science, the book is a sweeping investigation of current scientific research and development, touching on neuroscience, panpsychism, Neo-Darwinism, the concept of emergence, thermodynamics, Integrated Information Theory, functionalism, and quantum theory, to name just a few. Whereas Hart’s previous books often turn to philosophy of mind, this book is encyclopedic because it addresses itself to a larger concern. Here he asks not only how we know the world, but how we put our answers to work in building it. Nothing less than the scientific and technological project of the last few centuries is at issue.
We often say that ideas shape the world. But Hart sees the causation in our age often running in the opposite direction. It is the world we are making that is shaping our beliefs. The persuasive force of the belief that all that exists is matter in motion comes not from the academy but from Silicon Valley. Machines are mechanical materialism in physical shape, fashioning our lives according to their principles. Mounting more than a purely philosophical attack, Hart writes with an intense concern for what we are building, not only what we are theorizing. There is a sense in which this book is written for the architects and consumers of our technological world.
The plan of today’s technological architects to build our future world, he fears, might actually succeed, and in so doing radically transform what the world is, what it means to live, and how the future unravels. World, life, and meaning may come to be replaced with something else entirely.
In a rejection of the opaque style of many contemporary philosophers, Hart frames his book as a dialogue among four Greek gods. Psyche speaks for Hart’s position on life and mind, Eros for his beliefs about the ultimate ends of desire, and Hermes for his views about the nature of language. Hephaistos, god of technology and manufacture, is happily convinced that mechanical materialism successfully describes the world. For Phaesty, as Psyche nicknames him, mind might be an illusion of matter, or it may emerge from it; whatever the case, matter is always the origin from which all else springs. It does not seem that Hart is painting with Hephaistos a caricature of some contemporary scientist or philosopher, unless it is to some extent himself. “I am of the party of Psyche, Hermes, and Eros, it is true; but,” Hart confesses, “Hephaistos is part of me.” He speaks, Hart writes, “for my doubts and hesitations” — the gaps that remain after the other three gods have made their case against mechanical materialism.
The book is written very much in Hart’s own style, but after some introductory material the gods alone are speaking, and Hart gives them room to be themselves. For example, Eros and Psyche are often making eyes at each other:
Eros: … And can mind emerge from truly mechanistic material causes? Is emergence of that sort even logically conceivable?
Psyche: My love.
Eros: Yes, my soul?
Psyche: I’m impressed.
Eros: I’m not just a pretty face, my soul.
Psyche: [With a hint of fond salacity in her voice:] Oh, how well I know that. But can we agree to address the question of organic life’s structure later? I fear we may be entering upon too large a question for the little time that remains before our dinner.
Eros: Perhaps that’s for the best.
Hephaistos: Well, wait a moment, let’s not curtail the natural course of the argument before I’m satisfied that I know where we stand.
And Hephaistos, always the advocate for mechanical materialism, is himself full of dry satire. Later in the book, we read:
Psyche: [After several moments of expectant silence:] Phaesty, my dear, your audience eagerly awaits.
Hephaistos: [Turning about:] Why, you furtive little deities you, I should have heard your approach.
Hermes: Not mine, surely.
Hephaistos: I was simply pondering how we should resume our debate. What’s clear to me is that we’ll have to revisit such matters as causal closure and emergence and so forth, since that’s the most elementary level of our disagreement.
Eros: I’m certainly agreeable.
Hephaistos: Yes, as a rule you are.
Not only does this method allow for many moments of summary and points of clarification, but the nature of conversation itself means that the topics can evolve into each other as spontaneously as they do among friends. One day they are talking about fallacies peculiar to philosophy of mind; hours later they are debating DNA and RNA; after a few more drinks, things get heated over quantum mechanics. Setting the ideas within such a free-flowing discourse presents to the reader a topic to which Hart frequently returns: the limitless dynamism of human language.
Amid the drinks and banter, Hart reveals his program through the voices of Psyche, Eros, and Hermes: an apology for a Christian metaphysic — drawing, as he often does in his books, from Upaniṣadic writings on Brahman and from neo-Platonic ideas of mind and spirit — against Hephaistos’ view that matter is the source of all there is and mechanism its only cause.
Of central importance to Hart’s overarching vision is his defense of language and meaning against their supposed conquest by Large Language Models (LLMs) like ChatGPT. Though the gods’ conversation about LLMs takes up relatively little space in the book, the topic offers a clear window into Hart’s broader concern. Engaging with the technicalities as much as with the philosophical underpinnings, he argues that language and meaning are mathematically out of reach for these models, as indeed for the entire mechanistic view of the world.
A brief recap of LLMs is in order here. Since the 2017 publication of the article “Attention Is All You Need” by a group of Google scientists, LLMs have been designed — putting things very simply — to notice how earlier words (or tokens) can influence the sense of later words in a sequence. As the connections between words are evaluated, the algorithm slowly picks up on certain patterns, accounting for the wide variety of use cases any one word might have depending on the context: the shot of a gun, a shot of espresso, shot by a camera. When the model selects the next word in the sequence, it analyzes the interconnections among the previous words, percolating down to a final prediction that, in this case, “gun” is the word most likely to follow “shot,” or whatever the case may be.
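A toy sketch can make the mechanism concrete. The following Python fragment is emphatically not how any production model is built: the vocabulary, the vectors, and the numbers are invented for illustration, and real systems learn their representations from enormous corpora. But it shows the basic move, which is to score every earlier word for its relevance to the latest one, blend the context by those scores, and read off a probability for each candidate next word.

```python
# A toy sketch of attention-style next-word prediction.
# Everything here (vocabulary, embeddings) is an invented assumption;
# real models learn these vectors from vast amounts of text.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "shot", "of", "a", "gun", "espresso", "camera"]
embed = {w: rng.normal(size=8) for w in vocab}  # toy 8-dimensional vectors

def attend(context):
    """Blend the context words, weighted by similarity to the last word."""
    vectors = np.stack([embed[w] for w in context])
    query = vectors[-1]                      # the most recent token
    scores = vectors @ query                 # dot-product relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the context
    return weights @ vectors                 # weighted summary of the context

def next_word_distribution(context):
    """Score every vocabulary word against the attended summary."""
    summary = attend(context)
    logits = np.array([embed[w] @ summary for w in vocab])
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

probs = next_word_distribution(["the", "shot", "of", "a"])
for word, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{word:10s} {p:.3f}")
```

With random vectors the ranking is meaningless, of course; the point is only the shape of the computation, in which nothing but arithmetic over the context determines the next word.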
But LLMs now do more than just next-word prediction. Many models have integrated an additional stage of training, what has been called “reinforcement learning from human feedback.” This supplements the contextual training with human preferences. Human evaluators grade the model’s responses, and through extra training parameters the model is shown what kinds of answers humans want, and hence which responses it should favor. The process is rather more complex than simply predicting the next word based on existing texts. Still, after calculating from the context of previous words and optimizing for the response that will meet with the most approval from human trainers, the model finally selects an output because the code commands it, or the math recommends it.
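Here too a hedged sketch may help. In the fragment below, the “reward model” is nothing but a weighted sum over two made-up features of a response, and the comparison data are invented; actual systems train a large neural network on many thousands of human judgments. The shape of the step is the real point: whenever the model scores a pair of responses differently than the human grader did, nudge its weights toward the human’s preference.

```python
# A minimal sketch of the preference-learning step behind RLHF.
# The features, the comparison data, and the tiny linear "reward model"
# are all illustrative assumptions, not anyone's actual system.
import math

def reward(features, weights):
    """A stand-in reward model: a weighted sum of response features."""
    return sum(f * w for f, w in zip(features, weights))

def preference_step(preferred, rejected, weights, lr=0.1):
    """One Bradley-Terry update: raise the human-preferred response's score."""
    margin = reward(preferred, weights) - reward(rejected, weights)
    p_correct = 1 / (1 + math.exp(-margin))   # P(model agrees with the human)
    grad_scale = 1 - p_correct                # larger when the model is wrong
    return [w + lr * grad_scale * (fp - fr)
            for w, fp, fr in zip(weights, preferred, rejected)]

# Hypothetical grader data: (chosen, rejected) feature pairs.
comparisons = [([0.9, 0.8], [0.2, 0.1]), ([0.7, 0.9], [0.8, 0.2])]

weights = [0.0, 0.0]
for chosen, rejected in comparisons * 50:
    weights = preference_step(chosen, rejected, weights)
print(weights)  # the model now scores human-favored responses higher
```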
Many have suggested that perhaps humans process and generate information in a similar fashion. Andy Clark, a professor of cognitive philosophy at the University of Sussex, has proposed the idea that human cognition is simply a prediction engine, a system of “prediction error minimization,” as he calls it. When we speak with others, he explains, we guess what should happen when we utter certain sounds, given our past experience with how others respond. Sometimes we misspeak, or our listeners mishear, but we are guessing at what they expect from us. When our expectations and their reactions don’t align, guesswork even becomes a source of humor.
Against all this, Hart argues that predictability is in fact antithetical to how language conveys meaning. To show this, he observes a curious feature of the order of words and the meaning they carry. This is how he puts it, in the voice of Hermes:
At the so-called syntactic level, the more determinate the content, the more susceptible it is of compression — of condensation, that is, into an algorithm. A thousand repetitions of a sequence of twenty numbers can be coded by providing just one iteration of that sequence and attaching it to a simple functional instruction for a thousand successive reiterations. A merely periodic sequence isn’t random; it’s precise and determinate, and so an algorithm can be generated that wholly contains all the “syntactic” information present in its uncondensed form…. Conversely, a wholly random sequence of, say, twenty-thousand numbers or letters can’t be usefully compressed or captured algorithmically.
Semantic information, you see, obeys a contrary calculus to that of physical bits. As it increases in determinacy, so its syntactical form increases in indeterminacy; the more exact and intentionally informed semantic information is, the more aperiodic and syntactically random its physical transmission becomes, and the more it eludes compression. I mean, the text of Anna Karenina is, from a purely quantitative vantage of its alphabetic sequences, utterly random; no algorithm could possibly be generated — at least, none that’s conceivable — that could reproduce it. And yet, at the semantic level, the richness and determinacy of the content of the book increases with each aperiodic arrangement of letters and words into coherent meaning.
The idea works something like this. As the reader goes deeper into Tolstoy’s novel, the resolution, so to speak, of the mind’s eye becomes sharper; the faces of the characters come into focus, their inner states become tangible, the world more concrete. Everything becomes clearer, or, as Hart puts it, more determinate. We need only reflect on what we knew of Anna Karenina after reading one chapter of the book and what we know now after finishing it to notice how much more definite and distinct our acquaintance with her has become.
This happens so effortlessly that Hart’s second observation might come as a surprise: the opposite holds true when we regard the sequence of words. As the story goes on, there is no telling how the words will fill the page. The sequence of words does not become more transparent over the course of the story, but rather more opaque. This is one of the amazing qualities of what Hart calls the “semeiotic economy,” which includes not just language but life: it is “quantitatively random and resistant to algorithmic compression at the syntactic level and yet utterly and exquisitely precise, meaningful, and determinate at the semantic level.”
That is, as a matter of syntax — the material arrangement of symbols on the page — the text is random, such that an algorithm cannot capture it and predict how the sequence continues. This is the case even while, as a matter of semantics — the meaning of the symbols — the text grows in precision. To put this into simple mathematical terms: the meaning of the whole goes to 1 as the predictability of its sequence goes to 0.
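The compression half of the claim is easy to test for oneself with off-the-shelf tools. The sketch below uses Python’s zlib as a crude stand-in for algorithmic compressibility; the exact byte counts are incidental, but the gap is not. A periodic string collapses to almost nothing, while a scrambled string of the same length barely shrinks. Hart’s wager is that meaningful prose behaves, at the level of its letter sequence, like the second case.

```python
# A rough demonstration of the compression asymmetry described above,
# using zlib as a crude proxy for algorithmic compressibility.
import random
import zlib

random.seed(0)
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

periodic = ("ANNAKARENINA" * 1000).encode()   # one short pattern, repeated
scrambled = "".join(random.choice(alphabet) for _ in range(12000)).encode()

for label, data in [("periodic", periodic), ("scrambled", scrambled)]:
    compressed = len(zlib.compress(data, 9))
    print(f"{label:9s} {len(data):6d} bytes -> {compressed:6d} compressed")
```

The periodic text compresses by a factor in the hundreds; the random text hardly at all.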
It can be very hard for us to distinguish between these two, particularly once we have grasped the meaning of the symbols, as we generally do when we read. It is much like a particular tree trunk back home that for me will always look like an old man’s withered face, no matter how hard I stare at it and tell myself it’s just a tree trunk. In an effort to see the material randomness of a text, to recognize it as a haphazard series of squiggles and jots, it helps to consider something whose meaning is more foreign to us:
INNOVAFERTANIMVSMVTATAS
DICEREFORMASCORPORADICO
EPTISNAMVOSMVTASTISETILLA
SADSPIRATEMEISPRIMAQVEAB
ORIGINEMVNDIADMEAPERPETV
VMDEDVCITETEMPORACARMEN
If that still looks too familiar, consider another sequence:
.. -. -. --- ...- .- ..-. . .-. - .- -. .. -- ...- ... -- ...- - .- - .- ... -.. .. -.-. . .-. . ..-. --- .-. -- .- ... -.-. --- .-. .--. --- .-. .- -.. .. -.-. --- . .--. - .. ... -. .- -- ...- --- ... -- ...- - .- ... - .. ... . - .. .-.. .-.. .- ... .- -.. ... .--. .. .-. .- - . -- . .. ... .--. .-. .. -- .- --.- ...- . .- -... --- .-. .. --. .. -. . -- ...- -. -.. .. .- -.. -- . .- .--. . .-. .--. . - ...- ...- -- -.. . -.. ...- -.-. .. - . - . -- .--. --- .-. .- -.-. .- .-. -- . -.
Now imagine the letters and lines and dots running up to, say, a million items. Hart’s point is that the material stuff on the page is so random that it cannot be compressed mathematically into a comprehensive pattern that an algorithm can then register and reproduce. And yet the semantic information of the sequence, its meaning, grows with its length — because we know the meaning of the symbols. (The first sequence above is the first sentence of Ovid’s Metamorphoses, written in scriptio continua, the practice of writing Latin without spacing or punctuation, which, in the second sequence, is then translated into Morse code.)
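For the curious, the transformation is mechanical enough to fit in a few lines of Python. The sketch below strips the opening clause down to scriptio continua (folding U into V, as Latin inscriptions do) and transliterates it into Morse; the table holds only the letters this clause happens to need.

```python
# Turning a Latin clause into scriptio continua, then into Morse code.
# A small illustrative sketch; the dictionary covers only the letters
# that appear in this particular clause.
MORSE = {
    "A": ".-", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.", "I": "..",
    "M": "--", "N": "-.", "O": "---", "P": ".--.", "R": ".-.", "S": "...",
    "T": "-", "V": "...-",
}

sentence = "In nova fert animus mutatas dicere formas corpora"
continua = "".join(c for c in sentence.upper() if c.isalpha()).replace("U", "V")
morse = " ".join(MORSE[c] for c in continua)
print(continua)
print(morse)
```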
What does this entail for LLMs? For one thing, they are built upon a different assumption: that the order of words is not random. The patterns may be incredibly intricate, detectable only by very acute sensors. But the patterns of and within language can be captured without remainder. And so all the weights, nodes, parameters, and depth of the system are designed to be as sensitive as possible to any and all patterns hidden within the training data. And when the system generates output, it selects for the word order with the strongest statistical match to those patterns.
Yet according to Hart, in a novel like Anna Karenina the order of the words is precisely the metric that fades into noise, while at the same time meaning emerges ever more distinctly to the fore. There may be local apparitions of a pattern. But the text as a whole, and linguistic meaning more generally, is patternless. The text means something more distinct as it continues down an ever more winding path. No amount of context-based clues or human preferences can ascertain what might come next as the text grows.
We have all felt this inverse relation between syntax and meaning when reading good literature. There are moments when an author unveils something hidden to us, crafts exact if unlooked-for phrases, or applies new meaning to a word. We feel the thought unfurling, taking on a shape we could not foresee, as-yet unrelated ideas drawn together for us, insights pushing against our intuitions, ideas diverging from our own.
This experience is not limited to a set of styles. The same obtains for the terse prose of Hemingway as for the florid verse of Milton. Think of the detective stories of Sherlock Holmes, Hercule Poirot, Father Brown, or Lord Peter Wimsey, or the essays of David Foster Wallace, Jonathan Franzen, or G. K. Chesterton. We may at times imagine something of how they might write the story down. But we could not guess what they will say next. Good writing is an endless, widening surprise, disclosing something about the world we had somehow never noticed, shaping us long after we have read it. By contrast, predictable structure is prone to obscure meaning, boilerplate sentences are easily forgotten, and rote words are a sign of thoughtlessness.
For Hart, no amount of mathematical resourcefulness can get around this. Even if a model were devised whereby each successive token was selected precisely because it did not fit a pattern, what failsafe could ensure the text will not become gibberish as the sequence becomes less predictable? Randomness of order, after all, does not guarantee meaning; it just so happens that determinate meaning is ordered indeterminately. It’s no good either way. Unwavering adherence to prediction dictates that the output reads as cliché. But uncouple from prediction, and the model’s response risks becoming gobbledygook. Meanwhile, our experience of reading suggests that meaning flourishes when material disorder is informed by the mind.
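The dilemma can be seen even in a deliberately crippled model. The toy below builds a table of which character follows which in a single sentence; the training text and both selection strategies are contrivances for illustration only. Always choosing the likeliest continuation collapses into a repetitive rut, while always choosing the least likely produces noise. Neither policy is a recipe for meaning.

```python
# A toy character-bigram model illustrating the cliché-or-gibberish dilemma.
# The training text and both "strategies" are illustrative assumptions.
from collections import Counter, defaultdict

text = "all happy families are alike each unhappy family is unhappy in its own way"
counts = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    counts[a][b] += 1          # tally which character follows which

def most(c):
    return c.most_common(1)[0][0]     # the likeliest next character

def least(c):
    return c.most_common()[-1][0]     # the least likely next character

def generate(start, steps, pick):
    out = [start]
    for _ in range(steps):
        followers = counts[out[-1]]
        if not followers:             # dead end: no observed continuation
            break
        out.append(pick(followers))
    return "".join(out)

print("cliché:   ", generate("a", 40, most))
print("gibberish:", generate("a", 40, least))
```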
And here the situation grows worse, for Hart, since there is in principle no way for an algorithm even to measure an increase in meaning. Doing so would require a mathematical formula for units of meaning. But meaning, Hart argues, is immeasurable because, like the mind, it is immaterial. “Meaning,” he writes, “can’t be measured in discrete quanta; … it’s something in a sense outside of … the whole quantitative realm of bits and their transmission.”
He notes with admiration how well Large Language Models can process the quantitative bits of language: letters, words, sentences, and so forth. But no amount of calculus and vectoring can capture the meaning given to them from above, by us. He gives the example of an abacus. We are not so delusional as to ascribe to the wooden beams and rods and beads any awareness of what it means for the beads to be placed to the right or to the left. Only the student attending to the tool knows what their placement entails. Likewise, AI language models move linguistic items around, but without any recognition that they mean anything. They needn’t mean anything, in fact, for the algorithm to analyze them. Train a model on nonsense, and it will find all the patterns and vector all the use cases there are to find. And if you ask it, it will feed its own nonsense back to you without prejudice. For this reason, such tools have no capacity for intending a clever mistake, such as those found in Lewis Carroll. When Alice eats the cake and, finding herself growing larger, cries out “Curiouser and curiouser!,” the words are formally incorrect, yet we know exactly what she means. An LLM does not, cannot, “hallucinate” — the term of art for an apparently errant result. It is only the human who registers if a mistake has been made, and if, in the case of Alice, it is a neat one.
The algorithms underlying LLMs are “the relentless statistical inflation of the compressible, periodic level of information into an ever vaster imitation of the real openness of the semantic level,” Hart writes, adding with perhaps a smirk, “it’s ingenious.” But it can only ever generate “a mere simulacrum of syntax and semantics.”
This might explain why the texts these systems produce often read like they have all the thoughtfulness and verve of a bureaucrat. LLMs’ reliance on predictable structure has a permanent effect on the output: the syntax is formulaic, and significance becomes a diminishing return.
This, Hart reminds us, is not by coincidence. Meaning is not the point of a mechanized world. Control is.
Here Hart turns back to the larger point about the world we are building. A world that pays attention to what can be measured and predicted will fail to preserve what is immeasurable and unpredictable — features of the world as world. We are deluding ourselves that computers can understand language and produce meaning. More to the point, in dedicating ourselves to AI’s active substitution of language with formula and meaning with predictability, we rid ourselves of the qualitative goods of living in favor of the quantitative assets of mastery. This, Hart emphasizes, is simply the logical conclusion: an existence purged of whatever the mechanical outlook cannot explain.
The project, after all, unburdened by such beliefs as spirits and the divine, has produced the great fires of industry, electric grids, cellular towers, the Internet, worldwide commerce, far-reaching surveillance — unbelievably complex networks for processing and manipulating the movements of our lives. Thinking of the universe as an information-dense machine drifting inexorably toward equilibrium, with smaller versions of the same all the way down, may be an incomplete falsehood — but it works. See what it has made! Really, for a world built on the assumption that all real things are measurable and predictable, the view that the physical world and its living creatures are grounded in the infinite divine mind is not so much irrational as it is a hindrance to our plans.
So, says Hart, a world that rejects immeasurability and unpredictability will end up forming the world the only way it knows how: with yet more machines. After all, for modern humans, says Hart, “the machine is the real as such.” And as with any machine, there must be someone pulling the levers, choosing how to employ its techniques. This is where things may get philosophical, but no less concrete:
Modernity is to a great extent nihilism, in the simplest, most exact sense: a way of seeing the world that acknowledges no truth other than what the human will can impose upon things…. The world’s no longer the home in which humanity dwells or a presence to which humanity feels the need to respond; it’s merely mechanism and a great reserve of material resources awaiting exploitation by the projects of the will.
When an interviewer asked Elon Musk of what use humans are to AI, he answered: “a source of will.”
What happens in such a scheme, Hart wonders, where we surrender the meaning of our lives to machines that have no idea what meaning is? If we submit our future to quantitative prediction instead of qualitative wisdom? When we willingly divest ourselves of the mind by giving over our attention and compliance to a counterfeit? Eventually, the historical and material forces will bear down on the belief in the mind. There will be little space left for the idea of the divine. Language will become mechanism. Engines of prediction will be appointed to the forefront of our military strategy, political leadership, educational upbringing, and economic growth. In such an inhospitable atmosphere, the phenomena of the mind begin to warp: consciousness, attention, wonder, love, poetry, philosophy. As this progresses, Hart writes,
Human beings turn for companionship to the thin, pathetic, vapid reflection of their own intelligence in their technology only because they first sealed their ears against the living voice of the natural world, to the point that now nothing more than its fading echo is still audible to them…. They long for the silence to be made complete.
By the end of the book, this is Hart’s warning: if the makers of this world succeed in crafting it according to their wills, such a space will hardly count as a world anymore, in the sense of a place of expanding significance, extraordinary turns, and unquantifiable qualities of existence — a human, mind-full existence. They will have engineered a network of inputs and outputs. What’s left of life will grow quiet and illegible, replaced with the noise of the machines. As predictability goes to 1, meaning will go to 0. Decisions may yet be made, goods traded, objectives won. Yet without any support for their significance, it will all contract to a mere happening, a change from one energy state to another. Such a time would cease even to be history. Who would be left to write it down?