The Valley of Death
For Ghazanfar, Freud’s explanation of the uncanny valley, steeped in psychoanalytic theory, is much too “human-specific.” Nevertheless, the connection Freud makes between death and the uncanny valley persists in one form or another to this day.
For the most part, Freud’s essay reads like one big Freudian slip, revealing its author’s own anxieties about reconciling the uncanny with psychoanalysis. But in a sense, it succeeds despite itself: Its failures serve to illustrate the difficult nature of the uncanny, which is arguably the reason that for decades few scholars made serious attempts to investigate its origins: “It’s hard to treat the uncanny in the regular objectifying manner of the sciences or the humanities because it manifests itself through an interaction of subject and object—of feeling and situation—and in a way that is the hardest thing to analyze,” Weber explains.
In 1970 the Japanese roboticist Masahiro Mori published a short paper in the journal Energy in which he tried his hand at explaining the uncanny response we have toward human models. In much the same way Ghazanfar would later observe the uncanny valley response in monkeys, Mori noticed that when robots look very similar to us—but not so similar that we consciously mistake them for humans—our comfort level around them drops considerably. He dubbed this drop bukimi no tani, or the “uncanny valley.”
In his paper, also titled “The Uncanny Valley,” he recommends that roboticists avoid building robots so realistic that they risk falling into the valley, offering the example of hands on a Buddha statue as an alternative approach to robot design: “The hand has no finger print, and it assumes the natural color of wood,” he wrote. “But we feel it is beautiful and there is no sense of the uncanny.”
In the West, there is often a Frankensteinian stigma attached to artificial intelligence, but Mori offered Japan a much different perspective. In The Buddha in the Robot: A Robot Engineer’s Thoughts on Science and Religion, published in 1974, he wrote, “I believe robots have the Buddha-nature within them—that is, the potential for attaining Buddhahood.” His ideas about religion and the uncanny valley have had a substantial influence on the development of Japanese robotics. “In Japan, there is a great sensitivity in the government for having people who are accepting of robotics and robots in general. Mori’s interpretation of the uncanny valley became a kind of dogma,” says Karl MacDorman, a roboticist at Indiana University. As a result, Japan spent the next few decades avoiding human-like robot designs.
While the purpose of Mori’s paper was to inform robot design, in a concluding paragraph he cannot resist offering his own theory about the origins of the uncanny valley. He writes: “When we die, we fall into the trough of the uncanny valley. Our body becomes cold, our color changes, and movement ceases.” Human models fall into the uncanny valley because they remind us of death. “It may be important to our self-preservation,” he concludes.
Mori, like Freud, linked the uncanny valley to a “human-specific” notion of death, and many have suggested that he had Freud in mind when he penned “The Uncanny Valley”—which is possible, since Freud’s concept of the uncanny, unheimlich, had been translated into Japanese as bukimi prior to the publication of Mori’s paper. But MacDorman, who co-authored the definitive English translation of “The Uncanny Valley,” has his doubts: “There is nothing wrong with connecting Mori’s ideas to Freud,” he says. “But I don’t think Mori was inspired by him.”
In 2005 Mori became entangled in his study of the uncanny in much the same way that Freud had. In a somewhat puzzling note he sent to a robotics conference, Mori wrote, “A dead person’s face may indeed be uncanny…[but] dead persons are free from the troubles of life, and I think this is the reason why their faces look so calm and peaceful.” These words came 35 years after the original publication of “The Uncanny Valley” and appear to suggest that what one finds uncanny evolves over time. MacDorman speculates that, in Mori’s case, this might be attributed to his age or his development as a Buddhist. Here Weber’s point again rings true: Understanding the uncanny is neither an entirely subjective nor an entirely objective endeavor. Study it long enough, and eventually it makes a study out of you.
Evolving a Theory
Unlike Freud, though, Mori never saw our avoidance of death as a consequence of repressed emotions. Instead, he understood it to be a mechanism we developed to keep ourselves safe. Nearly every hypothesis since has had this flavor. It has been suggested, for instance, that we avoid almost-human figures because their peculiarities make them look sick, and we have developed an evolutionary mechanism for steering clear of pathogens. Another theory posits that we avoid figures with features slightly off from our own because they appear to be less-than-ideal mating material.
Ghazanfar rejects all of these hypotheses. “What is really going on is much simpler,” he says. He believes the uncanny valley response occurs because an animal—human or nonhuman—is evolutionarily inclined to develop an expectation of what members of its species should look like, a supremely important skill, as it lets the animal know with whom it can and cannot interact.
In this sense, life-like robotic and computer-generated models occupy a weird middle ground in an animal’s mind: They are familiar enough for the animal to consider the possibility that they are of the same species, but strange enough that they don’t quite meet the expectation the animal has developed for members of its species. “Any face that violates that expectation is going to elicit the uncanny response,” Ghazanfar says.
There does appear to be some experimental evidence in support of Ghazanfar’s theory. Studies with children have shown that at a very young age, babies do not react negatively to human-like robots. As children grow older, such robots become more bothersome. This, Ghazanfar suggests, might be an indicator that infants have not yet developed a narrow expectation for what a human should look like. As of yet, however, he has not tested his theory explicitly. “It’s what I think, but the experiments with monkeys weren’t straightforward so I couldn’t address all those things,” he says, which puts him in much the same place as Freud, Mori, and others before him.
But even if Ghazanfar can prove that his theory is correct, it won’t necessarily disprove Freud or Mori. We just don’t know enough about the uncanny valley to be confident that it can be traced back to a single cause. And that’s always been one of the biggest difficulties in studying the phenomenon: It’s easy to come up with new explanations, but hard to throw out the older ones. “Things can be uncanny because of perceptual mechanisms or more psychological mechanisms,” MacDorman says. “So I don’t think the uncanny valley is necessarily a kind of single phenomenon.”
The uncanny valley has shaped robotics design in Japan for the past 40 years. Computer-generated characters in videogames and films are designed to avoid it. Yet a clear understanding of it—or even an agreed-upon definition—still escapes us. Ghazanfar hopes his research will help to address these questions someday soon, but for the time being we know little more for certain about its origins than we did when Ernst Jentsch first called our attention to it in 1906. Perhaps we should have heeded the German doctor’s cautionary clause as he began to broach the subject: “[If] one wants to come closer to the essence of the uncanny, it is better not to ask what it is…”
Originally published November 16, 2009