A few days after I wrote my last post about robot "accidents," coincidentally the good people at Black in Robotics retweeted several pics of delivery robots getting in trouble. For example, this one getting massacred by a train.
And these three
which, the tweeter said, took five minutes to figure out how to get around one another. (Kudos is due either to the human supervisors who monitor from afar and step in when the robots are flummoxed — or to improvements in the nav software that let the robots sort this out themselves.)
And this one
in which a "burrito bot" (good term, I hope it catches on) appears to have wandered off in search of Walden. (Or maybe this is simply the best route to its client. A path made of dirt doesn't necessarily require legs to be useful.)
Pretty sure all these robots are Starship Technologies delivery bots, which I've written about a little in the past. That got me recalling this thread from 2019
in which Emily Ackerman, a chemist who uses a wheelchair, found her way blocked by a Starship robot. It was in a curb cut that she needed to use — to get out of an intersection where the light was about to turn green for oncoming traffic. To her dismay, the robot didn’t back up or move out of the way, and she had to hoist herself up over the curb.
I called Ackerman shortly after this tweet and interviewed her for a piece I was writing. (This one, if you're keeping score. Though the editor cut her from the published piece.) Far from a traumatized victim of robot malfeasance, she proved to be a canny activist who'd seized on a truly troubling incident to amplify some good points. (Props to her for that; Rosa Parks wasn't just some lady on a bus, either.)
Those points being:
1. Designing robots to act safely among the full diverse range of humanity is pretty difficult. For instance, it's quite reasonable in general terms to expect that your wheeled robots will use curb cuts. Separately, it certainly sounds sensible to tell the robot something like "when blocked, just stop, don't back up, wait for more input to sort it out." (Starship was pretty mum with me about their operations, but they told Ackerman that the "stop moving" routine was decided after consulting people with impaired vision, who said a moving robot would be a problem for those who could not see where it went.)
However, combine the reasonable-seeming use of curb cuts and the reasonable-seeming "just stop and wait" imperative, and you get Ackerman being blocked from the safest and easiest path out of traffic (and the path she was legally entitled to use, after a hard-won fight for legislation). That's a reminder that, as Ackerman put it in our conversation, "minority communities are super diverse in themselves. And so what's best for me is not best for someone else who is disabled in a different way."
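That composition failure is easy to see in a toy sketch. This is purely illustrative — the function names (`choose_route`, `step`) and the waypoint labels are my own invention, and nothing here reflects Starship's actual software — but it shows how two rules that are each defensible in isolation produce exactly the trap Ackerman hit:

```python
# Toy sketch of two individually sensible rules composing badly.
# Rule 1: a wheeled robot should route through curb cuts.
# Rule 2: when blocked, freeze and wait for a human operator
# (reportedly chosen so blind and low-vision pedestrians don't
# have to track a robot that backs up unpredictably).

def choose_route(start, destination):
    """Rule 1: prefer curb cuts (hypothetical waypoint labels)."""
    return ["sidewalk", "curb_cut", "crosswalk"]

def step(robot, obstacle_ahead):
    """Rule 2: on any obstacle, stop in place rather than back up."""
    if obstacle_ahead:
        robot["state"] = "waiting_for_operator"  # freeze exactly where it is
    else:
        robot["state"] = "moving"
    return robot

# Emergent failure: the robot freezes *inside* the curb cut --
# the one spot a wheelchair user needs to exit the intersection.
robot = {"state": "moving", "pos": "curb_cut"}
robot = step(robot, obstacle_ahead=True)
assert robot["pos"] == "curb_cut"
assert robot["state"] == "waiting_for_operator"
```

Neither rule is the bug; the bug is that nobody asked what happens when they fire at the same time, in the same square meter of pavement.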
How can designers of robots for daily life make sure they've addressed the needs of all the varieties of human being who will encounter their machines? One oft-mentioned remedy is including a wider range of people in the design process, and perhaps the industry needs to adopt some standard on this point. But I don't mean to suggest that this will be a neat and tidy solution. Robot designs will likely have to balance conflicting needs at times.
2. Designing robots to be comprehensible to all humans is also pretty difficult. The ethos prevailing in labs and startups is that the machine needs to "just work." Training should be quick and painless in the workplace, and unnecessary in the street and in the home. But this ideal of "seamless" tech raises the question, "seamless to whom?" I've been aware of this all my life, because I'm one of those people who misses the supposedly obvious cues in designed objects. I open packages upside down, fall into software dead-ends, enter buildings via the exit — living proof that what's obvious to you isn't obvious to everyone.
So I had no trouble grasping Ackerman's dismay about the ideology of "it just works." The robot she encountered didn't just work for her (it was blocking her from getting out of the intersection), and the ideology left her with no recourse: "there's no guide telling me how to interact with it, and what to do when it's in my way," Ackerman said.
I guess in the long run we should expect that society will develop conventions for dealing with robots, as we developed conventions for dealing with automobiles. People will imbibe a feeling for robotic behavior in childhood — a feeling that will be harder and harder for designers to violate as time passes.
I don't mean to suggest that there’s nothing to worry about. Quite the contrary. Establishing the conventions of human-robot relations will be a battle, which ordinary humans could lose. Just look at those automobiles: The conventions developed for cars ceded huge amounts of public space to the things. As Clive Thompson points out here, this wasn't because that was what everyone wanted or needed. It was because the industry ran great PR campaigns — like the one Thompson describes, which transformed "walking like a normal person" into "jaywalking."
At the moment, though, we are as Ackerman described us when she and I spoke: "Running before we can walk." The robots are out there, bumbling around, inspiring a certain amount of schadenfreude (I guess we could say maschinenschadenfreude?) when they fail. As those tweets illustrate, the bumpy ride of robot-human accommodation in the “real world” has begun.
Postscript: The nature of this maschinenschadenfreude — the reason people are often so happy to see a robot take a pratfall — is worth exploring in a later post. Readers, do you have a theory? Thoughts? Stray hunches? Any and all welcome in the comments.
My guess is that at some point this company is going to pay Disney to use the sad Artoo noise, and make us more forgiving of the robots when they make mistakes. And probably the next Dora the Explorer / Bob the Builder franchise will be Bobbie Burrito the friendly delivery bot.