

The reason voice-activated devices favor male voices is probably not as simple as you'd think.
Also: Robot news from the air, the street, the surgical theater and a deep, deep cave in Kentucky.
News
Another military self-flying drone milestone.
For the first time, a U.S. Navy F-35 fighter was refueled in mid-air by an autonomous drone last week. It was the Navy’s third successful test of aerial refueling by Boeing’s MQ-25 “Stingray.” The plan is for Stingrays to replace the human-piloted planes that now fly missions just to refill the tanks of combat-ready fighters. This should free up more human pilots to fight, and so increase the U.S. Navy’s chances of winning a confrontation. As DefenseOne notes, some in the Chinese military seem to think highly of this concept. According to the analyst Collin Koh, a recent study of the MQ-25 program by officers of the People’s Liberation Army said the drone will make the U.S. Navy’s carrier-based forces more effective and resilient.
Another Walmart test of a self-driving delivery system.
Walmart, Ford and Argo (an AI company jointly owned by Ford and Volkswagen) will soon use autonomous delivery vehicles for Walmart orders, in a test run in Miami, Austin and Washington, D.C., later this year. Walmart’s hugeness means this could be a big deal. On the other hand, Walmart is famously fickle about its robotics partners, so we’ll see.
Another Contender in Surgical Robotics
This week, Vicarious Surgical became a publicly traded company on the New York Stock Exchange (ticker symbol RBOT). Inspired by the 1966 classic camp movie Fantastic Voyage (in which humans are shrunk down to a size smaller than individual cells), the company combines miniature remote-controlled robots with VR, allowing surgeons to work as if they were actually tiny travelers inside their patients.
A multi-year DARPA contest is wrapping up this week
Since 2019 the Defense Advanced Research Projects Agency (DARPA) — the people whose blue-sky projects have led to the Internet, personal computers, weather satellites and Covid-vaccine technology — has been running a “subterranean challenge” for autonomous robots. Competitors field robots in spaces that are unpredictable, changeable and perilous: DARPA wants robots for places like smugglers’ tunnels, flooded subway platforms, and cold, dark caves. The competition terrain features lots of robot-unfriendly hazards, like, oh, pitch darkness, flooding, mud, dust, rubble, tight passages, sudden vertical drops and so on. In those places, robots in previous rounds have found backpacks, drills, “humans” (actually dummies), cell phones, traces of toxic gases, and other “artifacts” planted by the organizers. The more artifacts your robot finds and correctly identifies, the more points you get. The contest has included both virtual and real-life stages, but Friday, September 24, is the grand real-world finale, with eight teams competing in a deep cave system near Louisville, Kentucky. First prize: $2 million. You can watch video from each day of this week’s competition here.
Gender Bias in Robots Is Not a Simple Problem
I’ve been told more than once that it will be fairly easy to correct for bias in the behavior of robots that were designed by and for men. The claim comes with a “you have to start somewhere” shrug, and then a technical explanation. Like, “the robot doesn’t respond to female voices as well as male? Well, just widen the frequency range it responds to” (men’s speech tends to range in pitch from 85 to 180 Hertz, while women’s ranges from 165 to 255, so, duh).
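For the curious, here is a minimal sketch in Python of what that “just widen the band” fix might look like. The 85–180 and 85–255 Hertz bounds come from the pitch ranges above; the sample rate, function names and the crude energy heuristic are my own invented stand-ins for whatever a real voice pipeline actually does.

```python
# A sketch of the "just widen the band-pass" fix. The frequency bounds
# come from the pitch ranges quoted above; everything else here
# (sample rate, energy heuristic) is a hypothetical stand-in.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 16_000  # Hz; a common rate for speech audio

def voice_bandpass(low_hz, high_hz, order=4):
    """Build a band-pass filter covering a fundamental-frequency range."""
    return butter(order, [low_hz, high_hz], btype="band",
                  fs=SAMPLE_RATE, output="sos")

narrow = voice_bandpass(85, 180)  # tuned to typical male pitch only
wide = voice_bandpass(85, 255)    # widened to cover typical female pitch too

def voice_energy(audio, sos):
    """Energy left after filtering -- a crude proxy for
    'did the device register a voice in this band?'"""
    return float(np.sum(sosfilt(sos, audio) ** 2))
```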
Certainly those kinds of technical fixes ought to be done. But the effects of gender bias, and of a material and psychic culture built to suit men, are more subtle and more pervasive than problems of microphone settings.
In this research, Hannah Pelikan, Sofia Thunberg and Ericka Johnson of Linköping University in Sweden tested a hypothesis that seemed straightforward: Robots in their labs, and AIs at home and in other settings too, seemed not to “get” women’s voices as well as men’s. Two of the authors had personally experienced robots ignoring their voice commands but responding instantly to male colleagues, as Pelikan told a workshop last month on the curious human tendency to assign gender to robots, part of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Other people — including male co-workers and partners — shared the impression that men have an easier time being understood and getting machines to do things. (I have this impression too, having seen how Siri, Cortana and other voice-interface devices seem to respond more often to my voice than to my wife’s or my son’s, both of which are higher-pitched.)
To get hard evidence of this bias, the three researchers conducted a video study in which 35 people (ten of whom were women) spoke briefly with a Google Assistant device.
The result was surprising: There was no significant gender difference in the machine’s responsiveness.
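To make “no significant difference” concrete: this is not the authors’ analysis, but a generic sketch of the kind of test one might run on such data, with invented counts of answered and ignored commands.

```python
# Not the authors' analysis -- an illustration with invented counts of
# how one might test for a gender gap in the assistant's responsiveness.
from scipy.stats import fisher_exact

# Rows are [answered, ignored] commands; all tallies are hypothetical.
men = [40, 10]
women = [15, 4]

odds_ratio, p_value = fisher_exact([men, women])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# A large p-value (conventionally > 0.05) means no detectable difference
# between the groups -- the pattern the Linköping study found.
```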
How can that be, given the widespread perception that voice-based tech responds more readily to male speech? The answer, the authors suggest, is that the source of the bias is not in the physics of “male” and “female” voices. (I’ll explain those scare quotes in a moment.) It’s found, instead, in the much more diffuse cultural, psychological and political effects of gender inequality.
One of these effects, I suspect, is what I call “a feeling for the mechanism” — the ability to read between the lines of a machine’s message, ignoring its “official report” to act on your own understanding of its state. Windows freezes on the screen as your new PC sets up, but you know you’ll be fine if you shut down and restart. The AI at the polling place says your paper ballot has too many ovals filled in, and spits it out, but the experienced poll worker says “just put it back through a second time, it will work.” Or (most consequentially) Soviet military computers report that U.S. missiles have been launched, World War III is under way, and it’s time to fire back — but at your desk near Moscow, you decide it’s a false alarm. (True story, from 1983. The man who saved us all was Stanislav Petrov.)
I wonder if men who think they have less trouble with voice recognition are having the same amount of trouble — but then discounting it, ignoring it or working around it, because they have a hunch about what the device is really doing. Such instances might be part of the factor that Pelikan et al. call “confidence with using technology.”
Why would men have more of such a feeling for the mechanism, more of such confidence? Because the designers of AI have been mostly men, and their design choices communicate in many ways, explicit and implicit, that tech is for people like them. Also because men are socialized to think they’re supposed to be the techie gender.
Other possible factors the authors mention include the way women have been socialized to converse (interrupting less, waiting to take their turn to speak) and the effects of non-native accents.
Of course, “confidence with using technology” isn’t simply gendered. It also depends on experience, self-confidence and education — aspects of life tied to social class, migration, ethnicity, disability status and many other consequential categories humans assign to one another.
That a simple Male/Female binary won’t suffice to understand all these issues is, actually, the researchers’ main conclusion. It jumped out at them after they found that one person in their video study identified as neither a man nor a woman. This suggested to them that defining the tech-bias problem as “robots aren’t built by or for women” is a narrow way to state a problem that extends in many directions. The study of human-robot interaction, they believe, needs a lot more categories.