'Accident' Isn't Always the Right Word for Robot Screwups
In a lot of incidents, humans and machines are operating normally
Last month a Redditor posted this video, saying it shows a Tesla responding to its owner's use of the "Smart Summon" feature. From his smartphone, the guy directed the car to maneuver to where he was. Unfortunately, there was a private jet in the way, which the Tesla didn't detect.
Was this an "accident" — an unexpected, unforeseeable event? That's the word many reach for when a robot messes up. Not the dictionary definition, which assigns the cause of a mishap to some random event from Beyond (like a lightning strike or an earthquake). We say "accident" as most American English speakers use the word, to describe a situation in which something or someone failed to work as expected. The bolts were supposed to be replaced, but they weren't. The machine was supposed to stop running before anyone got near. The human wasn't supposed to be on the wrong side of the security barrier. Something went wrong, and there will now be a search for the "clues" that investigators need to figure out "what caused the accident."
In the four decades since a Ford autoworker named Robert Williams became the first person killed by a robot, this concept of accident has made sense. Robots worked in factories, and people injured or killed by them generally were victims of either human carelessness (their own or someone else's) or a machine malfunction.
But now we're getting autonomous cars and other devices that live and work with people in spaces that are more human-friendly (less predictable, less monotonous) than a factory floor. So we're starting to see "accidents" that don't fit the concept. In these cases, when people look for the part or party that didn't act as expected, they don't find a culprit. They find instead that both machine and human followed their "programming" without glitches. Neither party "did something wrong." The problem instead was in the way they met.
I think the airport incident fits this description.1 Commenting on the video, one Reddit user listed some good reasons to assume the Smart Summoned car was doing exactly what it was programmed to do. The car, after all, bumped into the plane's tail section, well above the ground (the landing gear wasn't in the car's path). It shouldn't surprise anyone that a Tesla's sensors are set up to detect objects on the street, not floating above at the height of the car's roof.
Tesla's manual actually warns owners about this, stating: "Smart Summon may not stop for all objects (especially very low objects such as some curbs, or very high objects such as a shelf)." It advises that the Smart Summon feature should be used only in "familiar and predictable" places. And it switches to all-caps to warn that "Smart Summon is a BETA feature. You must continually monitor the vehicle and its surroundings and stay prepared to take immediate action at any time. It is the driver's responsibility to use Smart Summon safely, responsibly, and as intended."
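To make the failure mode concrete, here is a minimal sketch in Python of the geometry the manual is describing. Everything in it is invented for illustration (the band limits, the obstacle list, the names); it is not Tesla's code. It only shows how a check tuned for street-level obstacles can wave a car straight into something that overhangs from above.

```python
# Toy illustration, NOT Tesla's actual perception logic: a planner
# that treats an obstacle as blocking only if it intrudes into a
# fixed vertical band above the road. All numbers are invented.

from dataclasses import dataclass


@dataclass
class Obstacle:
    name: str
    bottom_m: float  # lowest point, in metres above the road surface
    top_m: float     # highest point

# Hypothetical sensing band: the car reacts to objects between
# knee height and roughly roof height.
BAND_LOW_M, BAND_HIGH_M = 0.3, 1.5


def blocks_path(o: Obstacle) -> bool:
    """An obstacle blocks the path only if part of it falls inside
    the sensed band; entirely-below or entirely-above objects are
    invisible to this check."""
    return not (o.top_m < BAND_LOW_M or o.bottom_m > BAND_HIGH_M)


for o in (
    Obstacle("pedestrian", 0.0, 1.8),
    Obstacle("low curb", 0.0, 0.15),         # "very low objects such as some curbs"
    Obstacle("jet tail section", 1.8, 5.0),  # overhangs above the roof line
):
    print(f"{o.name}: {'stop' if blocks_path(o) else 'proceed (!)'}")

# pedestrian: stop
# low curb: proceed (!)
# jet tail section: proceed (!)
```

Note that both of the manual's warning cases, the very low curb and the very high shelf (or tail), fall out of the same simple assumption about where obstacles live.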
Now, perhaps this makes you think that this was an accident caused by one side of the human-robot partnership — the idiot human. I mean, people ought to RTFM, right?
Well, yes, sure. Any individual Tesla driver ought to do just that, and accept responsibility for bad judgments. However, even as we hold individuals responsible for their own decisions, we can admit that human beings, collectively, will engage in predictably foolish gambits. Like responding to a cool new gadget's latest feature by trying to test its limits, or show it off, or both. Being humans, we know how humans are.
We don't need to let any individual driver off the hook to note that, given a population of many thousands of Tesla owners, some messing around with Smart Summon is foreseeable. So I think you can say a human caused this accident, but I don't think it's reasonable to say this was "human error." Not when the behavior is virtually certain to emerge in a population, regardless of what you write in your manual.
Margaret Hamilton, the pioneering computer scientist who led the team that made flight software for the moon landings, tells a story about the dangers of assuming people can be instructed out of acting like people. She would bring her daughter Lauren along while she worked on the Apollo mission software. One day, imitating her mother, the child pressed some keys, which started a simulated launch. Then, as she simulation-traveled toward the moon, she tapped some other keys. That summoned up a program that was supposed to run before launch. A mistake, but the computer did its best to comply, wiping out the navigation data to make room. As Hamilton told The Guardian:
The computer had so little space, it had wiped the navigation data taking her to the moon. I thought: my God – this could inadvertently happen in a real mission. I suggested a program change to prevent a prelaunch program being selected during flight. But the higher-ups at MIT and Nasa said the astronauts were too well trained to make such a mistake. Midcourse on the very next mission – Apollo 8 – one of the astronauts on board accidentally did exactly what Lauren had done.
After that, Hamilton got her program change.
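For the curious, here is a toy sketch of the kind of guard Hamilton proposed. The class, the flag, and the messages are all invented; the real Apollo Guidance Computer was nothing like this Python. The point is only how small the fix is: remember what phase the mission is in, and refuse to start a prelaunch-only program once it has left the pad.

```python
# Hypothetical sketch of a mode guard, not the Apollo Guidance
# Computer's real architecture. "P01" was by most accounts the
# prelaunch program involved in the Apollo 8 incident.

PRELAUNCH_ONLY = {"P01"}


class GuidanceComputer:
    def __init__(self) -> None:
        self.in_flight = False  # flipped to True at liftoff

    def select_program(self, program: str) -> str:
        # Hamilton's insight: don't trust training to prevent the
        # keystroke; make the machine refuse it instead.
        if self.in_flight and program in PRELAUNCH_ONLY:
            return f"rejected: {program} is prelaunch-only"
        return f"running {program}"


agc = GuidanceComputer()
agc.in_flight = True
print(agc.select_program("P01"))  # rejected: P01 is prelaunch-only
```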
Of course, there's a big difference between a 1960s astronaut and the owner of a car in 2022: The astronaut received a lot of training.
The premise of modern robots entering homes and workplaces, though, is that they will "just work." People are supposed to read a QuickStart screen or page through a manual, and then combine that reading with earlier experiences with similar machines — and presto! they'll make the robot work as they wish. Home robots and street robots and autonomous systems in cars are marketed around the idea that elaborate training isn't going to be needed.
Robot makers can't have it both ways. Either their devices are risky to use and require careful attention (so don't put one in the average driveway); or they're easy and intuitive (in which case, don't assume a few lines of instruction are going to stop people from following their instincts and inclinations).
In World War II the US Army Air Corps toted up 457 crashes of its B-17 Flying Fortress bomber in less than two years. Many occurred during landings that seemed to be going smoothly until the planes drove into the runway. Pilot error was blamed, until a post-war investigation revealed a different culprit: The controls for dropping the plane's landing gear looked exactly like the controls for the wing flaps. Pilots were reaching to deploy the landing gear and instead dropping their wing flaps, which slowed the plane and dropped it, without wheels, onto the ground.
The problem here wasn’t the plane, whose mechanics worked perfectly. And it wasn’t the pilots’ choices — they intended to drop the landing gear when they reached for the controls. The problem was the connection between human and machine. The design of the controls wasn’t realistic about human nature.
The solution was to make the landing gear controls a different shape than the flap controls, making it much easier for even a stressed-out or exhausted pilot to tell them apart. As Cliff Kuang and Robert Fabricant note in their book User Friendly (where I learned about the B-17 crashes), designers today know (or should know) better than to design for ideal people. Instead, as they write, "you had to take them as they were: distracted, confused, irrational under duress. Only by imagining them at their most limited could you design machines that wouldn’t fail them."
Why aren't all human-facing robots designed in this spirit, including autonomous vehicles? My guess is that the people creating these impressive devices imagine that their users are like them: as appreciative, attentive, and careful about these wonderful new machines as their makers are. Whoever writes the manual doesn't imagine the minds of those who will never read it. And so we get these "accidents" in which the robots act as planned and humans act as expected and something goes wrong. With so many robots planned for daily life, we're going to need a better word for that than "accident."
1. Assuming it is not a hoax. As far as I know, no one has verified it. No one has debunked it, either.