All of this has happened before
If you think human-robot relations are a blank slate, Kate Darling is here to set you straight. Also: AI-human hybrids to help harried moms, California acts against robotizing workers, and more
News
A “centaur” to help mothers fend off burnout
Yohana, a new service created by robotics and AI pioneer Yoky Matsuoka, is what Garry Kasparov calls a “centaur”: a human-AI hybrid that does a job better than either could alone. In this case, the job is assisting mothers. As Steven Levy reports in his newsletter, the AI tracks household needs and predicts problems by crunching information from various in-home sensors. The human employee uses that info to assist the client with anything from a kid’s science homework to renovating the attic.
Whether or not this particular startup succeeds, it’s a great example of how robots for daily life will arrive. Not (as the public fears) replacing humans but rather trying to work in a complementary way with them. The hope for Yohana is that the sensors and AI make the human assistant’s job less of a grind and more effective too (Matsuoka cites a Google Nest motion sensor that detected an increase in the number of times an elderly man got up at night, thus flagging a health problem). And, of course, the client will have a better experience working with an understanding person than with a chatbot.
Levy notes the open questions: Will people be at ease with AI tracking and analyzing their families? And even if Mom’s birthday gift to you was chosen by another person rather than by an AI on its own, that still means the present wasn’t chosen by Mom, which is … maybe a little sad? (Matsuoka told Levy that gifts chosen by her AI-assisted helper were better than what she would have picked on her own.)
On the other hand, after a year and a half of work-from-home and school-at-home and go-nuts-at-home, a lot of us may not care about such niceties.
Fantastic Voyagers
Microscopic robots that could move about inside a human body would have many uses in medicine (in this project, for instance, a different approach created micro-bots that treated brain cancer in mice). So it’s a big deal that a team at Cornell has created microscopic legs for such robots. As the authors write in this paper in Nature, it’s “an important advance towards mass-manufactured, silicon-based, functional robots that are too small to be resolved by the naked eye.” Cornell News has the details about the robots, each of which is about 5 millionths of a meter thick.
California legislators act to protect workers from robotization
Outrage about warehouse workers forced to work like robots prompted the California legislature to pass new protections and disclosure requirements. If Governor Gavin Newsom signs Assembly Bill 701, warehouse workers will have new ways to fight quotas and speed standards that endanger their health.
Book Review: Kate Darling’s The New Breed
Novelty is the opium of modernity. Obsessed with change and newness, we tell ourselves we live in a uniquely stressful time, with problems our poor dumb Internetless ancestors never faced.
But human beings don’t seem to have changed much since the Stone Age, which means the way we think and feel about our circumstances is not so different from the way people thought and felt about theirs in the past. So we tend to create the same circumstances again and again. Each of our revolutionary epochs, then, is not nearly as revolutionary as advertised in its time. And even if it were, its resemblance to past events would shape how we understand it. We see the future through a lens shaped by the past (the personal one we’ve lived through and the wider, older one we’ve been told about). It is the only lens available.
Not noticing this influence, it’s easy to think the past has nothing to teach us about new technologies. Yet our forebears wrestled with technology-related problems much like ours. This week’s uproar over vaccine mandates, for example, bears a distinct resemblance to worldwide resistance to vaccines at the end of the 19th century. In this decade’s struggle over unionizing tech workers, both management and unions are using the same tactics that labor and capital used a century ago, when industrial workers were organizing. To tame Google, Facebook and other data-gobbling giants, governments have turned to theories of law developed in the 19th century to grapple with oil companies, railroads and other monopolies and oligopolies. And the folkways and legal structures of medieval European peasants are being usefully explored for the light they can shed on contemporary people’s relationships to technology and to that technology’s owners (think serf-to-lord).
I’m convinced about the past’s relevance to supposedly unprecedented problems. And yet, I’ve been guilty of thinking modern robots — the sort this newsletter is about — were an exception. After all, I thought, the human mind has never before had to reckon with devices that serve us yet also act independently and (sometimes) unpredictably, with (sometimes) effects on our feelings.
But now I’ve read Kate Darling’s The New Breed and I see I should not have been so credulous. Because humans have been making such devices — which often serve us as we expect yet also often confound us with actions we don’t want or control — for millennia.
They’re called domestic animals.
Before people manufactured robots, Darling points out, they bred dogs, horses, sheep, cattle, cats, koi and many others. Our ancestors said let’s make a dog that will retrieve the duck I shot; and make a turkey with lots of the white meat my customers want; and make a hardy little horse I can ride across the steppes. And they did.
Noting the similarities between our animals and our robots — the premise of Darling’s book — is one of those astute, re-orienting observations that is at once so apt and so significant that one wonders (well, I wondered) why the hell one didn’t see it before. As Darling puts it,
If we want to predict how we’re most likely to feel about, argue about, and treat robots in the future, we shouldn’t be envisioning Commander Data. We should be looking at our relationship with the other nonhumans that we’ve treated mostly like tools and sometimes as companions.
In fact, as Darling recounts, a lot of tasks that robots have lately been given were once given to animals. These include, for instance, autonomous weapons systems (pigeons in warheads to guide missiles, bats with bombs attached, Indian war elephants to freak out Greek hoplites, and Greek pigs set on fire and turned loose to scare the elephants). They also include illness and toxin detection (using dogs, canaries, rodents), cable-laying and landmine-finding. Indeed, robots are still far from supplanting animals in some of these uses. Just ask the police and military agencies that are training eagles to attack drones.
As I read this lively and engaging book, I realized Darling isn’t the first person to note that humanity’s relationships with animals might be relevant to robotics. The sociologist Nicholas Christakis made a similar point to me in a conversation a few years ago, noting that humans have always been changed by their relationships to their tools — the ones they’ve bred as well as the ones they’ve made. But as far as I know Darling is the first writer to work out how exactly human-animal history can be a guide to imagining future robots, and a corrective for common errors.
Informed by the long past of human-animal relations, for one thing, we can set aside some fears (as oxen and horses didn’t replace human farmers, robots aren’t likely to replace human workers) and some abstruse theoretical debates (why ponder the possible personhood of robots, when we’ve kept animals for thousands of years without giving them the right to vote?).
More practically, Darling suggests, to resolve head-scratching debates about who is responsible for an autonomous car knocking over grandma, we should look to long-established norms of responsibility for animal mischief. “Why,” she asks, “would we create a personhood for robots instead of thinking through our other options” for compensating people when they are injured by a robot? Animal-related laws and norms, from “sheep funds” to the registering of certain types of dogs to laws defining owners’ obligations when an ox wanders off, offer many time-tested solutions:
Contrary to the popular narrative that we’re on entirely new ground, our history with animals shows that we’ve deeply grappled with the question of autonomous agents that cause unanticipated harm based on their own decision-making.
Darling thinks we worry far too much about people treating robots as if the machines were human. Yes, today there are people who call themselves “dog parents” and talk to their beasts as if they were children. And, yes, in the past animals were sometimes arrested and prosecuted in court for murder and other crimes (Darling makes good use of the trial records). But such prosecutions haven’t become routine, and most pet owners still make a distinction between their beloved animals and their children. When you tell a friend he’s using his cat as a substitute for human company, he won’t say, “but everyone does that.” The criticism lands precisely because that behavior never became universal practice.
Future arguments about robots, Darling predicts, will likely fall along the same lines. They won’t be about a mass global substitution of robot for human. They will be about whether this particular person, or company, or practice, is pushing the relationship beyond what it really is or should be.
And what should that relationship be? Some argue that we should be coldly rational about robots, reminding ourselves that these things are soulless machines (as René Descartes and his acolytes said of animals in the 17th century). To these thinkers, anthropomorphizing a robot isn’t just a category error. It’s giving your mind over to the nefarious capitalists who developed and sold the robot. They’re going to make their machine say “I’m sorry to see you cancel,” and you shouldn’t feel bad when it does!
In some of the most intriguing passages in the book, Darling makes a good case that this argument is an error, made possible by ignoring the past.
I think she’s right that our species’ history with animals shows there’s no reason to fear people will treat robots with too much tender affection. After all, humanity’s relationship with other creatures involves killing them and eating them, and/or working them for our own gain. Even doting on a pet is using the creature for our own purposes. As Yi-Fu Tuan argued in his classic Dominance and Affection: The Making of Pets, the human urge to control and modify nature, and its creatures, is far stronger than our solicitude for them. More recently Hal Herzog’s Some We Love, Some We Hate, Some We Eat (which Darling quotes) also explores how all of our relationships with animals are based on our wants and needs.
Yes, some people will like robots and be kind to them and have their feelings hurt by them. But when such feelings conflict with our own desires, we have a long animal-related record of setting those feelings aside. As I mentioned in a post a few weeks ago, humans aren’t all that good at treating humans well.
OK, you might reply, but robust and influential movements against animal cruelty in many nations show that norms can change. What if empathy for robots becomes such a norm? Wouldn’t that be terrible?
Darling thinks not. She makes a thought-provoking defense of anthropomorphism, as a means of expanding our empathy and reducing the world’s sum total of human cruelty. After all, it is because humans anthropomorphize animals that there arose an influential movement against cruelty to our fellow creatures. (I once received an email solicitation to help defend lobsters from being boiled alive, in which I was reminded that the crustaceans “hold hands” (well, claws) as they walk across the sea floor. Why try to get me to imagine lobsters acting like kindergarteners? No doubt because, when it comes to fundraising, it works.)
Against the “it’s-just-a-machine-don’t-feel-for-it” stance, Darling cites an array of thinkers whose arguments do or could justify empathy for robots:
Mark Coeckelbergh and John Danaher have argued that if robots are performatively equivalent to something that already has moral status, we should give robots that status, too. (If it looks and acts like a cat, then treat it like a cat.) Shannon Vallor argues that sadists, torturing unfeeling robots for their own pleasure, are engaging in acts that don’t contribute to broader human flourishing. Instead, she says, we should encourage activities that help people live out the character traits we view as good and admirable. Tony Prescott and David Gunkel base their approaches on our relationship with robots: if people view their relationships with certain robots as meaningful, then we should honor those bonds.
Not all of these claims make sense to me. But I think they’re all worth thinking about, if only as a reminder that the “it’s-just-a-machine” claim is not as self-evident or simple as its believers say.
Of course the similarity Darling claims is not identity. Animals are different from robots. For instance, in the case of a robot that does harm, laws on responsibility for oxen that gored people and dogs that bit the mailman won’t answer all questions. Such norms fix responsibility on a single entity — the animal’s owner. But a robot that erroneously kills someone can be made up of mechanisms and software from different makers. The chain of responsibility is harder to track.
Then, too, the most important distinction (to me, anyway, though Darling doesn’t make much of it) is that animals weren’t originally created by and for people. Robots, on the other hand, were. They are tools. If all human beings disappeared tomorrow, a Labradoodle — which despite the interference of human breeders remains, even now, a weird-ass wolf — would find its place in the newly human-free ecosystem. On the other hand, on that day the most complex and lifelike robot would become a meaningless assemblage of stuff. Darling makes clear that not everyone thinks this distinction makes sense. I think it does. At least I think I think so. I’ll be thinking more about these questions, and more clearly, thanks to this book.
This and That
What we talk about when we talk about robot umpires.
“Robo-umpires” in baseball aren’t robots, as far as I can see. The term is player slang for a modified missile-tracking system that simply crunches data and feeds its analysis to a human umpire, who yells out the verdict. That doesn’t fit most mainstream working definitions of a robot, which would include a body and an ability to take some kind of physical action.
Yet a number of things in Zach Helfand’s fine piece on robo-umpires last month in The New Yorker are worth mentioning in this our robot-centered newsletter.
The first is how easily any algorithmic decision-making now gets represented as robot decision-making. It’s significant that players went for that nickname, as did all the news stories and videos illustrated with beep-beep-boop robots in the umpire’s place on the field.
Why do people easily imagine a robot when they hear about an algorithm?
I think they unconsciously want intelligence to have a body, to be a part of their physical world. Perhaps imagining a body, a presence, makes machine intelligence seem less weird? Or simply more approachable? (An angry fan would feel silly yelling at “a large black pizza box with a glowing green eye,” as Helfand describes it. But something with arms and a face? Much easier to resent.)
The second striking fact that leaps out of Helfand’s piece is the power of “automation bias” — that is, the assumption that machines are smarter than they are, and more authoritative than any human. This bias impels humans to respect a machine’s truth claim, even if that claim is annoying. Even if they would reject the same information from a human. My favorite anecdote in the article features a spectator who doesn’t like the calls coming from (he thinks) a human umpire. The guy yells something insulting about the human umpire’s height (“how can he even see over the catcher?”). But then someone explains that the calls are coming from a machine, and that the human can only relay them. That makes the heckler fold. He says, “he’s called a good game, I gotta say!”
Full disclosure: I have written for The New Yorker (print edition once, online several other times) and have good friends who’ve worked for the magazine in one capacity or another. I don’t think any of that is relevant, as I don’t know Helfand. But, you know, they call it full disclosure. So, there you have it.