Till Malfunction Do Us Part

Predictions of robotic intimacy

In a recent issue of the journal Psychological Science, researchers from the University of Chicago and Harvard reported that people are more likely to anthropomorphize animals and gadgets when they are lonely. “People engage in a variety of behaviors to alleviate the pain of [social] disconnection,” the authors write, including “inventing humanlike agents in their environment to serve as potential sources of connection.” This finding is hardly surprising, and is not unrelated to one of the favorite objectives of the budding consumer robotics industry: manufacturing “companions” for the isolated elderly.

Japan — the country with the world’s highest percentage of elderly people and lowest percentage of children — has been at the forefront of this domestic-robot trend. In 2005, Mitsubishi released its “Wakamaru” robot to considerable fanfare. The three-foot-tall machine, its appearance something like a yellow plastic snowman, was designed to provide limited home care to the aged. It can “recognize” up to ten human faces, respond to voice commands, deliver e-mail and weather forecasts from the Internet, wheel around after people in their homes, and contact family members or health care personnel when it detects a potential problem with its ward.

Despite Mitsubishi’s high expectations, the first batch of one hundred Wakamaru did not sell well. At $14,500 apiece, the robots drew only a few dozen orders, and Mitsubishi then faced cancellations and returns as purchasers realized the robot couldn’t clean or cook, or do much of anything. Customers were amused to find the machine unexpectedly “watching television” or “dancing,” but were frustrated by its limited vocabulary and actual capabilities. Production was called off after three months, and the remaining Wakamaru now work as rentable receptionists — a common fate for first-generation humanoid robots, too expensive for the general market.

In the past decade, other robots intended for the elderly made their debuts in nursing homes, including “Paro,” a furry, white, squawking baby seal made and sold in Japan. In videos viewable online, it is plain that nursing-home residents, including those suffering from advanced Alzheimer’s, take comfort in watching, touching, talking to, singing at, and cleaning Paro. Like the cats and dogs sometimes used in therapy — but with less unpredictability and mess — Paro’s robotic twitching and yelping seem to evoke a calm, warm focus in depressed, lonely, and ailing patients. Other robots provoke similar reactions, like “My Real Baby,” a robotic toy doll. “These are used to soothe individuals,” according to a 2006 paper by three M.I.T. scholars:

The doll helps to quell the resident’s anxiety. After a period of time (usually less than an hour), [the nursing home director] will return to the resident, take back the doll, and return it to her office. Often, when she takes the doll back, its mouth is covered in oatmeal, the result of a resident attempting to feed it. The reason that she takes the doll back, she says, is that “caring” for the doll becomes too much to handle for the resident.

It is difficult to fault nursing home directors who, out of compassion, offer sad patients the comfort of interacting with robotic toys. Other uses of today’s interactive robots seem essentially benign, too — like the use of “Nico” and “KASPAR,” child-size humanoid robots, as tools for the social training of autistic children, or the employment of the industrious robotic guard dragon “Banryu,” which prowls the house smelling for smoke and looking for intruders.

Reviewed in this article
Love and Sex with Robots: The Evolution of Human-Robot Relationships
By David Levy
Harper ~ 2007 ~ 334 pp.
$24.95 (cloth) $14.95 (paper)

But some analysts predict that we are nearing a day when human interactions with robots will grow far more intimate — an argument proffered in its most exaggerated form in Love and Sex with Robots, a new book that contends that by the year 2050, people will be marrying robots. The author, David Levy, is a British artificial-intelligence entrepreneur and the president of the International Computer Games Association. In the book, his Ph.D. dissertation from the University of Maastricht, Levy first explains why people fall in love with one another — a great and timeless mystery which, with the aid of social scientific formulae and calibrated ten-point checklists, he helpfully distills into twenty-one illuminating pages. He then sets out to explain why the blind rascal Cupid might have as much success — or more — striking passion between humans and machines. With such astute observations as “‘like’ is a feeling for someone in whose presence we feel good,” Levy lays out the potential for robots to exhibit “behavior patterns” that will induce people to fall for them, heart and soul:

A robot who wants to engender feelings of love from its human might try all sorts of different strategies in an attempt to achieve this goal, such as suggesting a visit to the ballet, cooking the human’s favorite food, or making flattering comments about the human’s new haircut, then measuring the effect of each strategy by conducting an fMRI scan of the human’s brain. When the scan shows a higher measure of love from the human, the robot would know that it had hit upon a successful strategy. When the scan corresponds to a low level of love, the robot would change strategies.
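
Stripped of the neuroscientific trappings, the procedure Levy describes here is an ordinary trial-and-error loop. A minimal sketch in Python, with every name invented for illustration and the imagined fMRI reading replaced by random noise, makes its mechanical character plain:

```python
import random

# A minimal sketch of the feedback loop Levy imagines. Every name here
# (STRATEGIES, measure_affection, court) is an invention for illustration;
# the "fMRI reading" is replaced by random noise.

STRATEGIES = [
    "suggest a visit to the ballet",
    "cook the human's favorite food",
    "flatter the new haircut",
]

def measure_affection(strategy):
    """Stand-in for the imagined brain scan: a 'love level' from 0 to 1."""
    return random.random()

def court(threshold=0.7, max_attempts=100):
    """Try strategies until one scores above the threshold, then keep it."""
    for _ in range(max_attempts):
        strategy = random.choice(STRATEGIES)
        if measure_affection(strategy) >= threshold:
            return strategy  # a "successful" strategy, by the machine's lights
        # a low reading: change strategies and try again
    return None
```

Whether one calls the output of such a loop a “successful strategy” or a seduction, nothing in it rises above a threshold test.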

These made-to-order lovers, Levy says, will look like movie stars, write symphonies better than Mozart, possess a “superhuman-like consciousness,” converse with almost-infinite intelligence in any given language, demonstrate surpassing sensitivity to their owners’ every thought and need, and at a moment’s notice will be “in the mood.” Soon to be available for purchase at a location near you, their entire virtual existences will be devoted to making even the most luckless lover feel like a million bucks.

For those who desire absolute submissiveness in a mate, robots, with their admittedly “unsophisticated” personalities, will offer the logical solution (assuming they are not subject to the same technical frustrations and perversities endemic to all other appliances). But for those who feel the need for za-za-zoom, the love-bots of the future will be programmed to be feisty:

Surprises add a spark to a relationship, and it might therefore prove necessary to program robots with a varying level of imperfection in order to maximize their owner’s relationship satisfaction…. This variable factor in the stability of a robot’s personality and emotional makeup is yet another of the characteristics that can be specified when ordering a robot and that can be modified by its owner after purchase. So whether it is mild friction that you prefer or blazing arguments on a regular basis, your robot’s “friction” parameter can be adjusted according to your wishes.
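
The promised “friction parameter,” likewise, amounts to nothing grander than a configuration setting. A hedged sketch, again in Python (the CompanionRobot class and its friction dial are inventions, not any real product), shows how little machinery the promised spark requires:

```python
import random
from dataclasses import dataclass

# Hypothetical illustration of an adjustable "friction" parameter:
# a single dial controlling how often the robot manufactures a quarrel.

@dataclass
class CompanionRobot:
    friction: float = 0.1  # 0.0 = perfectly agreeable, 1.0 = constant arguing

    def respond(self, remark):
        if random.random() < self.friction:
            return "I couldn't disagree more."  # the scripted "spark"
        return "How right you are, dear."

# "Mild friction" or "blazing arguments on a regular basis":
# docile = CompanionRobot(friction=0.05)
# feisty = CompanionRobot(friction=0.8)
```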

Levy admits to finding it a little “scary” that robots “will be better husbands, wives, and lovers than our fellow human beings.” But in the end, the superiority of machines at pitching woo needn’t threaten humans: they can be our mentors, our coaches, our sex therapists — with programmable patience, sympathy, and “humanlike sensitivity.”

While Levy’s thesis is extreme (and terribly silly), many of its critical assumptions are all too common. It should go without saying that the attachment a person has to any object, from simple dolls to snazzy electronics, says infinitely more about his psychological makeup than about the object itself. Some roboticists are very clear on this distinction: Carnegie Mellon field robotics guru William “Red” Whittaker, who has “fathered” (as writer Lee Gutkind puts it in his 2007 book Almost Human) more than sixty robots, advises his students and colleagues not to form emotional connections with them. “They certainly don’t have the same feelings for you,” Whittaker says. “They are not like little old ladies or puppies. They are just machines.”

The very premise underlying the discipline of sociable robotics, however, is that a machine can indeed mean something more. The developers of these machines capitalize on the natural sociability of humans, on our inborn inclinations to empathize with, nurture, or confide in anything that generates lifelike cues, to create the illusion that a lump of wires, bits, and code is sentient and friendly. Take, for example, the famous case of the cartoon-cute robot “Kismet,” developed by Cynthia Breazeal at M.I.T. in the 1990s. Breazeal designed Kismet to interact with human beings by wiggling its eyebrows, ears, and mouth, reasoning that if Kismet were treated as a baby, it would develop like one. As she put it in a 2003 interview with the New York Times, “My insight for Kismet was that human babies learn because adults treat them as social creatures who can learn; also babies are raised in a friendly environment with people. I hoped that if I built an expressive robot that responded to people, they might treat it in a similar way to babies and the robot would learn from that.” The Times reporter naturally asked whether Kismet ever did learn from people. Breazeal responded that as the engineers learned more about the robot, they were able to update its design for more sophisticated interaction — a “partnership for learning” supposedly indicative of the emotional education of Kismet, whose own active participation in that partnership is glaringly absent from Breazeal’s account.

It is important, Breazeal emphasizes in her published dissertation Designing Sociable Robots, “for the robot to understand its own self, so that it can socially reason about itself in relation to others.” Toward this goal of making conscious robots, some researchers have selected markers of self-understanding in human psychological development, and programmed their machines to achieve those specific goals. For example, Nico, the therapeutic baby bot, can identify itself in a mirror. (Aside from human beings, only elephants, apes, and dolphins show similar signs of self-recognition.) Kismet’s successor, “Leo,” can perform a complicated “theory of mind” cooperation task that, on the surface, appears equivalent to the psychological development of a four- or five-year-old. But these accomplishments, rather than demonstrating an advanced awareness of mind and self, are choreographed with pattern recognition software, which, though no small feat of coding cleverness, has none of the significance of a baby or an elephant investigating himself in a mirror.
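
Indeed, the distance between such choreography and genuine self-awareness is easy to see once the trick is written out. A deliberately crude sketch, in hypothetical Python rather than Nico’s actual software, shows how a machine might “pass” a mirror test by template matching alone:

```python
import numpy as np

# "Self-recognition" as mere pattern matching: compare the camera frame
# against a stored image of the robot's own face. Entirely illustrative.

SELF_TEMPLATE = np.zeros((64, 64))  # stand-in for the stored self-image

def looks_like_me(camera_frame, tolerance=10.0):
    """Declare 'that is me' when the frame is close enough to the template."""
    distance = np.linalg.norm(camera_frame - SELF_TEMPLATE)
    return distance < tolerance

# Nothing here knows that a self exists; it is arithmetic on pixels.
```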

Still, many artificial intelligence (AI) aficionados — including David Levy — hold that the interior state, or lack thereof, is not important; the outward markers of intelligence should be sufficient indicators of it. AI patriarch Alan Turing famously proposed in 1950 a test in which a machine would be deemed intelligent if a human, conversing with both the machine and another human, could not distinguish between the two. (The implications and flaws of Turing’s test were unpacked at length in these pages by Mark Halpern [“The Trouble with the Turing Test,” Winter 2006].) Levy submits that this test be applied not just to machine intelligence but also to emotions and other aspects of personality: If a machine behaves as though it has feelings, who’s to say it doesn’t? Thus he predicts that by the year 2025, robots will not only be fully at home in the human emotional spectrum, but will even “exhibit nonhuman emotions that are peculiar to robots” — an absurdly unserious claim. (One robot frequently used in studies of emotion simulation is “Feelix,” a Lego humanoid designed to express five of biological psychologist Paul Ekman’s six “universal emotions.” Curiously, the sixth, disgust, was deliberately excluded from Feelix’s repertoire.)

When explicitly defended, all such claims rest on the premise that human feelings are themselves nothing but the product of sophisticated biochemical mechanics. From the perspective that physiological processes and responses to stimuli constitute our emotions, “real” feeling is as available to robots as to living beings. “Every person I meet is … a machine — a big bag of skin full of biomolecules interacting according to describable and knowable rules,” says Rodney Brooks, former director of the M.I.T. Artificial Intelligence Laboratory, in his 2002 book Flesh and Machines: How Robots Will Change Us. “We, all of us, overanthropomorphize humans, who are after all mere machines.”

One might question how those who accuse anthropos of “overanthropomorphizing” himself propose to make convincingly human machines, with so little understanding of what constitutes humanity. Robots, after all, are created in the image of their programmers. Kathleen Richardson, a doctoral candidate in anthropology at Cambridge, spent eighteen months in Brooks’s lab observing the interaction between the humans and the robots and “found herself just as fascinated by the roboticists at M.I.T. as she was by the robots,” as Robin Marantz Henig reported in the New York Times:

She observed a kinship between human and humanoid, an odd synchronization of abilities and disabilities. She tried not to make too much of it. “I kept thinking it was merely anecdotal,” she said, but the connection kept recurring. Just as a portrait might inadvertently give away the painter’s own weaknesses or preoccupations, humanoid robots seemed to reflect something unintended about their designers. A shy designer might make a robot that’s particularly bashful; a designer with physical ailments might focus on the function — touch, vision, speech, ambulation — that gives the robot builder the greatest trouble.

One can just imagine a society populated by robo-reflections of the habits, sensitivities, and quirks of engineers. (There are, of course, simple alternatives: Lee Gutkind shares the telling little fact that at Carnegie Mellon, one saucy “roboceptionist” called “Valerie,” which likes to dish about its bad dates with vacuum cleaners and sessions with a psychotherapist, was programmed by computer scientists — but with a storyline designed by the School of Drama kids.)

The latter half of Levy’s book, a frighteningly encyclopedic treatise on vibrators, prostitution, sex dolls, and the short leap from all of that to sex with robots, scarcely deserves mention. Levy begins it, however, with the familiar story of Pygmalion, in a ham-handed act of mythical misappropriation.

The example of Pygmalion, though, is inadvertently revealing, because its true significance is precisely the reverse of what Levy intends. In Ovid’s rendition of the tale, Pygmalion is a sculptor, surrounded by “strumpets” so bereft of shame that “their cheeks grew hard, / They turned with little change to stones of flint.” Disgusted by their behavior, he thoroughly rejects womankind and carves himself a statue “more beautiful than ever woman born.” Desiring his own masterwork, he kisses it, caresses it, and speaks to it as to his darling. In answer to his fervent supplication for “the living likeness” of his ivory girl, Venus brings the ivory girl herself to life, and she bears Pygmalion a daughter. Two generations later, their strange union comes to a sad fruition, as Pygmalion’s descendants collapse into incest and destruction.

Levy shallowly wants us to see in Pygmalion’s example only that human nature is what it always has been — that today’s attractions have ancient parallels; he glibly notes that “sex with human-like artifacts is by no means a twenty-first-century phenomenon.” But if anything, Pygmalion’s story is a warning against just the temptation Levy dangles before us. Even as Pygmalion is repulsed by the stony shamelessness of the women of Cyprus, his stony unforgivingness of the flaws of living human beings leaves him with a stone as the center of his desire. Pursuing this unnatural union leads his family into ruin, the final result of the terrible inversion of erotic love between creator and creation.

Levy mentions procreation only in passing, merely noting that the one shortcoming of “human-robot sexual activity” is that children are not a natural possibility. He goes on to suggest that the robot half of the relationship might contribute to reproduction by designing other robots inspired by its human lover. What it might mean, for example, for an adopted or artificially conceived child to grow up with a robot for a “parent” is never once considered.

There are, however, scattered about Levy’s book half-baked insights about love, most notably its connection to imperfection and mortality. “Some humans might feel that a certain fragility is missing in their robot relationship,” he muses — but hastily adds that fragility, like every other necessary or desirable feature, can just be simulated. More serious, however, is his concession that the “one enormous difference” between human and robotic love is that a human is irreplaceable. This means, he says, that a human need never sacrifice himself to protect his robot, because a replica will always be available; its “consciousness,” backed up on a hard drive somewhere, can always be restored.

Levy fails to see the trouble with his fantasy, because he begins by missing altogether the meaning of marriage, sex, and love. He errs not in overestimating the potential of machines, but in underrating the human experience. He sees only matter in motion, and easily imagines how other matter might move better. He sees a simple physical challenge, and so finds a simple material solution. But there is more to life than bodies in a rhythmic, programmed dance of “living likeness.” That which the living likeness is like is far from simple, and more than material. Our wants and needs and joys and sorrows run too deep to be adequately imitated. Only those blind to that depth could imagine they might be capable of producing a machine like themselves. But even they are mistaken.

Caitrin Nicol, “Till Malfunction Do Us Part,” The New Atlantis, Number 19, Winter 2008, pp. 125-131.