Human, Non-Human and Human-ish
Most robot-makers aren't trying to make artificial humans. But some are. What is up with that?
Three humans (standing) and their robot “geminoids.” Hiroshi Ishiguro and his robot are in the middle. At right is (are) Henrik Scharfe of Aalborg University’s Center for Computer-Mediated Epistemology. The identity of the woman at left has not been disclosed. 1
Just before the Covid shutdown last year, I stood in the lobby of a Tokyo hotel, in front of a pair of lookalike receptionists demurely seated on stools. In buff-colored uniforms and pillbox hats, they looked out over the front desk like wax-museum figures (you could’ve tagged them “flight attendants in the age of Mad Men”). But every now and then one of these women blinked — with a faint, audible click. They were robots — the sort that are made to look as human as possible.
The hotel is part of the Henn Na chain, whose brand is tightly wound around robots — for reception, cleaning, luggage handling, room service and other tasks. Some of these machines are functional (mechanical arms that wouldn’t be out of place in a factory). Some are small, even toy-like. Some are large and playful, like this guy:
That’s a raptor-clerk on duty at touristy hotels — like the one near the Disney resort east of Tokyo. But in the hotel where I stood, near the Ginza, guests expect a more businesslike, efficient atmosphere, said Sho Yojima, the human clerk on duty that day. So: no velociraptors.
This made sense to me as soon as Yojima explained it. But then I wondered why it seemed obvious, both to the Henn Na management and to me, that people would have more respect for a robot that looks human.
After all, most truly useful robots aren’t people-like at all. The iRobot Roomba, for example, isn’t a mechanical butler with arms, hands and a face. It doesn’t even have “a grabbing arm to get the clutter out of the way first," Helen Greiner, one of the founders of iRobot, told me when we spoke last winter. It succeeds because it operates like a robot, not like a robot imitating a human.
Greiner told me the Roomba’s designer (the legendary Joe Jones) filleted out those aspects of vacuuming that a robot would be bad at (like picking up scattered items and deciding “this goes in the trash, that goes in the sock drawer, this, I have to ask the child about”). That left the Roomba doing only those parts of the job that were “robot-friendly.” (Perhaps not as quickly as a human, or as flexibly, but reliably enough to be helpful — and cheaply enough to sell at a price people don’t mind paying for an appliance.)
With that approach the company created one of the few robots ordinary people actually want to have (more than 20 percent of vacuum cleaner sales in North America are robots, Greiner said). Trying to make an even vaguely human-ish robot, on the other hand, would have been too expensive and too technically difficult. "At the beginning with Roomba in 1998 we got reactions like, 'I'm not going to get a robot until it can do what I do, like picking up the clutter before I vacuum,' " Greiner said. "Those people are still waiting."
This strategy — define a “robot-friendly’’ task, then build the device around that — also inspired Tertill, a robot that takes on what Greiner calls the "tedious and maddening" work of pulling weeds. (She became CEO and chair of Tertill last fall.) A Tertill doesn't work like a person, or as fast as a person, but it will pull up any plants that its owner hasn't marked with small metal guards. And a gardener doesn’t need it to act human to be happy when she finds a weed-free flowerbed each morning.
“We try to dissuade people from using the [robot’s] mouth that way because you don't want to get hurt and you could damage it.”
Compared to these practical, sensible devices, imitation-person robots can seem like animatronic stunts. Look how much like a person it looks! (Not that it would fool anyone for more than a second). Look how it does what people do! (More slowly and more awkwardly by far). Wow, my sex robot is anatomically correct, and she talks! (But with a head full of gears and other moving parts, there are some common sex practices the robot can’t safely do — “we try to dissuade people from using the mouth that way because you don't want to get hurt and you could damage it,” Jeff Barkley, an exec at Abyss Creations, makers of human-like sex robots, told me when we spoke a couple of years ago.)
Yes, if I want to interact with a moving, roaring dinosaur, robots are my only option. But with 7 billion people on Earth, why would I want to spend time with an imperfect mechanical imitation?
I think this is a widespread view in robotics. But it’s not universal. On the contrary, around the world, accomplished and brilliant people are trying to build robots that look as much like people as possible, with an eye toward one day, some day, creating machines you can’t tell apart from flesh-and-blood humans.
At the most recent international conference on Human-Robot Interaction, which I attended virtually, I had a chance to hear a talk by one of the most distinguished and well-known researchers on this track — Hiroshi Ishiguro, director of the Intelligent Robotics Laboratory at Osaka University. That’s him atop this post, at the center of the photo with his “geminoid” — a robot copy of him that (in some ways) looks and moves very much as he does.
One reason for his quest, he said, is that the human brain is highly tuned to the features of other humans. Facial expressions, hand gestures, body postures, tones of voice and other aspects of personhood are simply the richest and most intuitive ways for people to communicate with machines. “The ideal interface for a human is a human,” he said. “We need humanoid robots because our brain is designed to recognize humans.”
“A child-faced robot will be accepted by people because it can ask any question, as a child would.”
Why, for example, should roboticists rack their brains trying to invent ways for people to convey the information robots will need to get around in streets, homes and workplaces? All of us already know entities who ask a lot of questions and require some patience as we teach them how the world works — they’re called children. And we have well-practiced ways of meeting their needs. So why not just build a robot that evokes those already-ingrained responses? One of the projects Ishiguro showed in his talk is Ibuki, a mobile robot with the face of a child. “A child-faced robot will be accepted by people because it can ask any question, as a child would,” he said.
But do people really need to hear a child’s voice, from a child’s face, to want to relate to a robot? There’s a lot of evidence that they don’t. On the contrary, people will imagine they see thoughts and feelings in any machine that moves on its own. This means, people being people, they’re going to wonder what thoughts and feelings the machine has about them. Social robots like Ibuki exploit these tendencies, but people are going to relate emotionally to robots without faces or voices — even robots that don’t know people exist. Ask the people who name their Roombas, or the soldiers who held funerals for their robot “comrades.”
In fact, humans are so eager to socialize with robots that it’s sometimes a problem. More than one person at a robot startup has told me about careful decisions not to include a face, or eyes, or a voice for their product — because people are already all too inclined to chat and play with the device.
“You can easily create a very clear and consistent social interaction with very non-human or even abstract robotic objects,” Hadas Erel, who heads research on social human-robot interaction at the Media Innovation Lab at the Interdisciplinary Center in Herzliya, Israel, told me the other day. As she puts it, every robot is a social robot to people around it.
For example, in a recent experiment Erel conducted, two small robots that have no faces, voices or other signs of intelligence were assigned to pass a ball around with a third player, who was human. The experimenters controlled the ball with a magnet, so they could make it appear that the robots were keeping the ball to themselves, not including their human partner much. Most of the people felt quite hurt by this robotic “ostracism.” (This is why it matters that every robot is social, Erel argues. Designers will need to foresee situations where machines, just going about their work, unintentionally wound people’s feelings.)
That people find human meaning in inhuman machinery is true even of some of Ishiguro’s own creations. I, for instance, find Ibuki fascinating and strangely beautiful. I had the same reaction last year when I stood before another Ishiguro project, this robot incarnation of the goddess Kannon at Kodaiji temple in Kyoto (which I’ve written about here).
Both of these devices have supple, mobile faces and fine delicate hands that appear very human indeed. However, Ibuki’s body is metal and gears, and it moves about on wheels. Kannon’s arms and torso are gleaming metal too. The robotish, non-human parts of the device did not detract from my experience of interacting with it. On the contrary, they enhanced it.
In other words, the beauty and emotional impact of these robots doesn’t derive from their close imitation of people. It derives from their blend of human anatomy and high-tech industrial parts. They’re worth knowing because they’re strange, not because they’re familiar.
Today’s roboticists can’t suppress the robotness of their robots (blinking eyes click, mouths have gears behind them, walking on two legs is a challenge, etc.). But perhaps such suppression shouldn’t even be the goal. Maybe the best humanoid robot of the future won’t be the one that fools people into thinking it is human, but the one that frankly presents itself as a mix of human and machine. That might be more ethical (because no problem of deception would arise), more interesting and more useful than an ersatz person.
Another justification for making humanlike robots, though, has nothing to do with how people would relate to them. Rather, the argument is that building artificial humans will yield insights into real ones. “Human-like robots, androids, can be a testbed for understanding humans,” Ishiguro said. “We are developing intelligent robots for understanding cognitive and neural science.”
I’m perplexed by this argument. You build a robot that behaves (in some way) just like a human (for example, another of Ishiguro’s projects, Erica, “the world’s most realistic android"). You can then explain, to the last detail, how the robot does what it does. But to claim that this elucidates how humans do the behavior, you have to assume that the robot and humans work in the same way.
How do I know the machine’s consciousness is identical to the human kind?
Suppose, for instance, that I build a robot that appears to have consciousness. When I tear it down and explain each of its processes, I have shown you how to make a conscious machine. But how do I know the machine’s consciousness is identical to the human kind? We don’t know what consciousness is in people, and we can’t say X is the same as Y if we don’t know what X is.
So I remain unconvinced that the goal of human-like robots is worth pursuing. Not because it’s impossible (it may not be) but because humanity doesn’t seem to need them.
After all, suppose people really could create, some day, devices that perfectly mimic people — that are as touchy, unpredictable, unreliable and frustrating as our fellow Homo sapiens. What would that gain us?
Decades ago the great SF writer Stanislaw Lem took up this question in his last novel, Fiasco. He thought the answer was clear:
it would be like finally building, after colossal expenditures and theoretical work, a factory for making spinach or artichokes that were capable of photosynthesis—like any plant—and which in no way differed from real spinach and artichokes except that they were inedible.
News and Updates
More on Police-Dog Robots: Last month the New York City Police Department announced that it was terminating its deal with Boston Dynamics to use the Spot “robot dog” device, for pretty much all the reasons I laid out in this newsletter last spring. I haven’t spoken with anyone at Boston Dynamics about this, but if I had to guess how distressed they were about the decision, I would say “not very.”
Pepper, Past: Pepper, a slight, wasp-waisted, shiny white robot with an ingratiating voice and a tablet screen on its chest, is no more. According to Reuters and Steve Crowe at The Robot Report, SoftBank, the robot’s maker, stopped producing it last year, after making about 27,000 since its launch in 2014. That means Pepper joins other social robots (Baxter, Jibo, Vector, Cozmo, Kuri) on the ash heap of history, at least for the moment. (I say “for the moment” because Jibo, Vector and Cozmo have had surprising afterlives, with other firms stepping in after their original creators went under. The dream of social robots isn’t easily killed off.)
In any event, Pepper’s demise is, I think, bad news for researchers in Human-Robot Interaction, for whom Pepper was a popular platform. (A quick search of Google Scholar for “Pepper” and “robot”2 turns up more than 17,000 hits.)
I am sure this unidentified person has friends, family and acquaintances who recognized her in these photos. If interest in her identity were high enough, we media people would out her easily. But the people running this event didn’t release her name. I mention this because a lot of the robot information you see in your news feed is like this — the fruit of carefully curated press kits, “media events” and interviews supervised by PR people. Something to keep in mind as a consumer of news about robots.
This search excluded the term “sweet” in order to sieve out papers about a robot that harvests sweet peppers. This is why I prefer robots with funny names.