Is It Ever OK to Bash a Robot?
"Robot abuse" sounds pretty bad. Here's why some think it's not.
When Georgia Tech's Ayanna Howard saw a visiting schoolchild poke her robot in its eye, she wasn't having it. She told the kid robots could "come to your house in the middle of the night while you're sleeping in order to punish you --- because they'll remember how bad you were to them!" (The anecdote is in her lively and pithy new book, Sex, Race, and Robots, where she adds "I got a little bit of satisfaction at the look of fear that appeared on his face.")
That's pretty hardcore, but most roboticists I've met would get it. It's natural to want to catch your fragile and expensive darling before it hits the hard floor, and to bristle when people don't admire it as much as you do. Roboticists can't stomach people being "mean" to their machines.
Fortunately for them, human beings empathize readily with anything that has the slightest semblance of life and feeling. Robots move on their own and seem to make decisions, so they fit that bill. (This is why people name their Roombas and have held funerals for military robots "killed" in the line of duty.) When manufacturers make robots big-eyed, smiley and friendly, they're exploiting this aspect of human nature, encouraging our tendency to see and respond to human-like qualities in a machine. Would you kick a furry, Elmo-ish robot politely asking you to press the elevator button for it?
Yet humans are equally good at shutting off empathy (see meat-eating, war, genocide and human trafficking, among other examples). So a portion of the human race, when it sees a robot, will, as the gamers say, grief it. I've seen this myself, traveling around (remember traveling around?) for robot-related journalism.
Visiting a couple of undergraduate classes at Purdue in the fall of 2019, I asked students' opinions of the food-delivery robots from a startup called Starship that were getting a trial run on the campus in West Lafayette, Indiana. The majority were pretty harsh. "They get in people's way," one student said. "They can't even get up a ramp, they just get stuck," said another. "I saw one get hit by a bus," said a third, "and I was like, 'all right!'" Why? "Because I'd been predicting that it was bound to happen." Several people remarked that other students --- not them! --- liked to mess with the robots, blocking their way just to see what would happen.
In fact, some researchers have concluded, as the authors of this paper on self-driving-car bashing put it, that robots need coping strategies for human hostility. Finding those coping strategies is now a research topic.
For example, here (pdf download) Takayuki Kanda of Kyoto University and his collaborators explain how they devised instructions for a mall-wandering robot to avoid unsupervised groups of small children. (You'd avoid a dark alley where strangers seem to be hanging around looking for trouble; for Kanda's robot the equivalent is a pack of 6-year-olds and no adult on hand. The rule of thumb: head for a crowded part of the mall or get near a parent.) Flight is the only good option, they wrote, because once the bullying starts, "it is very difficult for the robot to persuade children not to abuse it."
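As a rough sketch of that rule (my own illustration, not the authors' actual planner; the function names, fields and thresholds here are hypothetical), the decision boils down to something like this:

```python
# Illustrative sketch only: names, fields and thresholds are hypothetical,
# not taken from Kanda et al.'s actual system.
from dataclasses import dataclass
from typing import List

@dataclass
class Person:
    is_child: bool      # e.g. guessed from estimated height/age
    distance_m: float   # distance from the robot, in meters

def should_flee(nearby: List[Person],
                radius_m: float = 3.0,
                max_unsupervised_children: int = 2) -> bool:
    """Flee if several children are close by and no adult is near enough
    to supervise them -- the robot's equivalent of the dark alley."""
    close = [p for p in nearby if p.distance_m <= radius_m]
    children = sum(p.is_child for p in close)
    adults = sum(not p.is_child for p in close)
    return children >= max_unsupervised_children and adults == 0

def choose_refuge(crowded_area, nearest_parent):
    """Head where bullying is unlikely: next to a parent if one has been
    spotted, otherwise toward a crowded part of the mall."""
    return nearest_parent if nearest_parent is not None else crowded_area
```

The paper itself goes further, estimating from what the robot observes how likely abuse is and planning an escape route accordingly; the sketch above only captures the rule of thumb described in the text.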
This sort of work often assumes everyone agrees with roboticists that "robot abuse" --- defined by Kanda et al. as "persistent offensive action, either verbal or non-verbal, or physical violence that violates the robot’s role or its human-like (or animal-like) nature" --- is bad. But the question isn't really settled, as he and his fellow authors admit: "Whether this definition is justified, and if not how should we treat robots -- these ethical questions remain open," they write.
At first glance, it's hard to make a case for robot abuse, for the same reasons it's hard to argue in favor of chopping new furniture into kindling or throwing out newly baked cakes. If society doesn't want robots, it should just not make them in the first place. However, "society" doesn't act or speak with one voice. Some of us are busy making and installing robots while most of us are out of the loop, meeting robots only when they are dropped into our lives without anyone asking what we think. Robot-bashing could be part of “society’s” conversation about this technology --- a crude but effective way for people to express their opinion and act on their own behalf. When robot abuse is a form of dissent, then maybe society should not be too hasty to stifle it.
Two essays published last year illustrate this line of thought. They are principled arguments against feeling sorry for robots (and thus, I think, they implicitly argue against design features that nudge people to be nice to machines).
One of these is this essay published last June in Noema magazine by Abeba Birhane of University College Dublin and Jelle van Dijk of the University of Twente in the Netherlands. They aren't concerned with how people feel about robots, I think for good reason: people's feelings are ever-shifting and depend a lot on what is happening around them. Instead, they address the ethical principle needed to define robot abuse as bad conduct: The idea that robots have, or soon will have, a right to fulfill their purposes and to be safe, just as humans do.
It's all too easy to speak of robot rights, they argue, on analogy with human rights and animal rights. But people and other creatures have rights because they exist independently of us. We spray against mosquitoes and put a tiger in the zoo, but the mosquito and the tiger would exist even if there were no people. Robots, though, are tools. They belong in a different category --- the one where we keep hammers, microwaves and crutches. It's absurd to claim robots have rights for the same reason it's absurd to claim crutches have rights. Even if it goes unseen by a single human, a tiger remains a tiger. But a robot isn't anything unless a human is around.
So, to talk of robot rights is to say we have to accept such beings as they are, and ignore the fact that they are our creations. Or, to be more precise, the creations of people who have money, power, tech savvy and ambition, whose goals do not include the well-being of other types of person.
"Treating AI systems developed and deployed by tech corporations as separate in any way from these corporations,” Birhane and van Dijk write, “or as ‘autonomous’ entities that need rights, is not ethics --- it is irresponsible and harmful to vulnerable groups of human beings."
A few months after this essay appeared, I came upon this one, in Real Life magazine, by the writer and researcher Kelly Pendergrast, which explores what it would mean to rigorously remind ourselves that robots are things, not beings.
Pendergrast led with the well-known story of Hitchbot, a device designed to get drivers to pick it up and take it to another destination. Or, to put it in more empathy-inducing terms (which most media did), a harmless, talking and smiling little robot that went hitchhiking, entrusting its life to strangers.
Hitchbot was created by David Harris Smith of McMaster University and Frauke Zeller of Ryerson University in 2013, to answer the (tongue-in-cheek and loaded) question: "Can robots trust human beings?" The robot could hold a simple conversation but couldn't walk at all. So it would ask nearby people to carry it into their cars and drop it off where it could get its next ride. By 2015 it had "hitched" successfully around Canada and parts of Europe. But that summer, as it was heading south from Boston, it crossed paths in Philadelphia with someone who smashed it to pieces.
I winced when I heard about that. It was the same distress I feel whenever I see brute force destroy any product of someone's loving labor. (I hate "food fight" comedy or the sight of people smashing motorcycles or TVs, whatever point they're making.) But that's empathy for the producer, not the product. I feel for the people who made the things, not for the things themselves.
Robots, though, are things that can win sympathy for themselves. More and more robots are, like Hitchbot, designed to do this. As Smith and Zeller wrote before Hitchbot's Philadelphia debacle:
Most importantly, the robot was approximately the size of a 6-year old child, appealing to human behaviors associated with empathy and care. Its overall design, with a plastic bucket for a body, pool noodle arms and legs, and matching rubber gloves and boots was meant to be quirky and fun. The low-tech look of it was intended to signal approachability rather than suggesting a complex high-end gadget.
It's only human to let these design features work on you, but when you do, Pendergrast argues, you're admitting a psychic Trojan horse into your mind. When you sigh with empathy for Hitchbot, or feel protective and kind toward other machines, you're assenting to other people's goals, without giving thought to your own. Getting warm and fuzzy about such machines would be like tearing up over one of those messages from Facebook that begins "we care about you." "Instead," she writes, "we should learn to see the robot for what it is --- someone else’s property, someone else’s tool." Sometimes we can make our peace with this tool. But sometimes, she writes, "it needs to be destroyed."
Workers have been down this road before, in another era of rapid technological change. The Luddites in the early 19th century destroyed looms, frames and other technology that threatened their livelihoods. Those people often get invoked pejoratively in tech circles, where "Luddite" is the term you use for people who are absurdly fearful of smartphones or online banking. But the Luddites, Pendergrast notes, didn't hate or fear the machines themselves. They were striking at their employers; the machines were an intermediary, "an expression of the exploitative relation between them and their bosses."
Those early industrial-age workers had it pretty tough. Imagine if, on top of their other troubles, they had to fight the feeling that the poor looms were in pain, or that a stocking frame's brothers might come to people's houses to punish them for their immoral behavior. Robots that are designed to win people’s empathy could reasonably be seen as the enemies of employees fighting speed-ups, deskilling and other robot-assisted assaults on work as they know it.
It's tempting, then, to say we workers of the world need to get a grip and become coldly rational about the cute, friendly robots coming our way. But if there is a cost to giving in to our feelings, there might also be a cost to resisting them.
As I've mentioned, people easily transfer human feelings to machines (think of those Roombas with names). It is likely that this process also works in the other direction --- that at times we transfer our feelings about machines to people. In this study of some 30,000 people in Europe and the U.S., for instance, Monica Gamez-Djokic and Adam Waytz of Northwestern University found that people hostile to automation were also hostile to immigrants (a person's feelings on one topic predicted his/her feelings on the other). People accustomed to shutting down empathy for machines could find it easier to shut down empathy for humans.
Those who oppose sex robots invoke a version of this argument. The effect of human-like sex machines, they say, won't be more robots being treated like people; it will be more people being treated like robots: "These products further promote the objectification of the female body and as such constitute a further assault on human intimacy," as they put it on the website of the Campaign Against Sex Robots. Preventing abuse of these robots, in their view, prevents later abuse of humans.
A similar argument was made recently in this paper by the philosopher Alexis Elder of the University of Minnesota. Elder defends the moral standing of robots because of their place in human lives and human hearts. Just as a robot represents work and care to its maker, it can also mean a lot to other people, for other reasons. Who cares whether it can feel fear, pain or shame? Its meaning to other humans entitles it to our consideration, she writes. So those soldiers organizing a funeral for their fallen robot could be doing the right thing:
While it makes no difference to the guest of honor, [the funeral] marks a way for people to honor the significant role the deceased has played in their lives, and to give their emotion and attachments space to be felt and recognized without “running wild.” The roles that life-saving bomb detection and pack robots play in these soldiers’ lives can be appropriately recognized as valuable and, in virtue of their connections to human soldiers, responses to their loss can be given weight in the form of ritual, as part of protecting the social capacities and human lives in which both humans and robots are embedded.
Ayanna Howard, who believes that already "humans are beginning to treat robots as an underclass,” would, I think, agree with this. People who abuse robots, she writes in her book, are uncivilized --- “they didn’t get enough HT (home training).” Her prescription for forestalling robot abuse (people ought to behave themselves) is different from the Campaign’s (let’s not build certain kinds of robots in the first place). But both believe it's always perilous to encourage people to be even more selfish, indifferent and brutal.
This is a widespread and reasonable belief. It is one reason we have laws against cruelty to animals. If society ever passes laws against cruelty to machines, it won’t necessarily be because we think machines are equivalent to animals. We could just believe that the effects of robot abuse on humans are bad enough to warrant a ban.
Is it possible to square these concerns with the point made by Birhane, van Dijk and Pendergrast? I think it might be --- by refusing to cast the problem as a dichotomy that pits kindness against clarity of thought. It should be possible to construct a future in which the two principles don’t conflict.
That would require a different sort of political conversation about robots. Rather than insisting they have the right to smash a robot to pieces, people could insist that the sort of robot they'd like to smash not be built in the first place. Rather than accepting that people should treat a robot like a child or a friend, citizens can demand that robots be designed to foster, not hinder, our critical faculties. We could all push for more robots that deliberately remind us that they are machines --- that encourage our reflective minds to switch on, not off.
In fact, I've spoken to robot makers who chose designs that remind users that their machines aren't living things. Sarjoun Skaff, for example, the CEO and co-founder of Bossa Nova Robotics, told me last spring that one of their machines --- a wheeled robot designed to wander store aisles, scanning stock --- was deliberately created to look like an appliance. "It was purposefully not made to have a face and eyes, because it's a tool," he said. Similarly, Ahti Heinla, one of the co-founders of Starship, told me his team had considered designing a human-like face for the company's delivery robots. "We decided against it," he said. "We did not want to communicate the expectation that it is exactly mimicking a human being. It is not. It is a different sort of thing."
There are many ways to make a robot acceptable to people without seeking to trick them. Roboticists I've met want people to like their machines because they meet a real need in a good way, not because the machine reminds them of grandma. When I met with Kanda 10 months ago in his office at Kyoto University, he made it clear the point of his research on making robots acceptable was to give people good reason to like the machines --- which, he said, most do. As we watched a video of children happily talking with his experimental robot, Robovie, he said, "interaction should be combined with a very nice service. The overall benefit should be there. I don't want to create a bad robot to dominate people."
So if you're riled up about robots and AI, my advice to you is this: Don't wait until they roll a machine into your workplace and wish you good luck working with it. Get informed now about what's going on in robotics, and take a knowledgeable part in conversations about the ethics, politics and economics of robots and AI. Humanity, as Heinla told me, "is just scratching the surface" of human-robot interaction, just beginning to learn how to do it right. Why not take part in this just-started conversation? (Sorry to sound self-interested, but for that you could do worse than to subscribe to this very newsletter.)
In the long run, we'll all be better off preventing an undesirable robot from being built than we'd be trying to break it when it's already here, winking a dewy eye and flashing its perfect smile.