The controversy over behavioral economics
The idea that robots can influence people to do good rests on a set of ideas that have been criticized as exaggerated or even "dead"
It’s a commonplace in Human-Robot Interaction research that human beings respond to robots as if the machines were human — or at least, human-like. That’s why people name their robots, dress them up, mourn their “deaths” and try to avoid hurting their feelings.
At least, they do so when it’s convenient. I think it’s a different question whether people will sacrifice anything important to their feelings of empathy for a robot. (In this study from last spring, for example, people were willing to make a bit of effort to avoid creating work for another human, but they weren’t willing to pay empathy’s price for a robot. They were more reluctant to litter after they’d watched a human picking up other people’s trash, but not after they’d seen a robot do so.)
Still, people do seem to interpret robots as social beings, so it’s reasonable to think that robots could exert social pressure, or at least social influence, on humans. A “nudge,” as defined by Cass Sunstein and Richard Thaler in their book of the same name, is a choice framed in such a way that the “right” option is easy to choose and the wrong ones are harder. Leave me to my own devices, alone in a room, and I might not bother to pick up my litter. But if others are present, I will.
Can a robot make me feel that way too? Or otherwise have the same social effect as a human? Some work suggests the answer is yes. (Discussion here and specific example here.)
Twenty years ago, economic orthodoxy considered it absurd to think a machine could influence a person’s behavior — because it considered it absurd that people could influence one another, when it came to most choices. Economists then held that people know what they want and how to get it, and consciously make choices based only on the cold hard facts (littering saves me a couple of minutes, and since there is no one here to levy a fine, I will do it).
Nudges are a popular policy choice around the world today because that rational-economic-man doctrine was overthrown by the school of “behavioral economics” — a discipline that aimed to map (and use) unconscious, irrational influences on decision-making. Those include not only the social influence of others but also a vast compendium of cognitive biases. (A famous one of these, for example, is “loss aversion” — the claim that humans are more alert to the risk of loss than to any possible gain, so that the same choice looks different depending on how it is framed. I will cheerfully sign up for surgery with an 80 percent chance of success but I am wary of one with a 20 percent chance of failure.)
Today, the behavioral-economics rebels have become the establishment. And lately they’re the ones under attack.
The news-cycle prompt for this post is the fact that a prominent and much-discussed paper in the field (one of whose authors, Dan Ariely, is among the public faces of the field) has been shown to contain fabricated data. The larger context is a wave of skepticism toward a group of behavioral-economics ideas that were once taken as gospel by scholars, journalists (including me), the Nobel Prize committee (which has awarded the prize in economics to two leaders of the behavioral school, Daniel Kahneman and Richard Thaler), and governments around the world.
The driver of the backlash is the claim that behavioral economics, like the rational-man view it overthrew, has got people wrong. Its claims, say critics, are exaggerated and its experiments don’t replicate (meaning that when Researcher B re-does Researcher A’s famous experiment, she doesn’t get the same famous result). So, skeptics say, those clever policies to get people to make good choices aren’t actually doing much good. Even “loss aversion,” which seems so well-documented and so intuitively right, has been exaggerated in its scope and importance, say the critics. Jason Hreha, a human-behavior expert and former head of Walmart’s Behavioral Science team, sums up the case in this j’accuse. “Behavioral economics,” he writes, “is dead.”
That claim itself looks exaggerated, writes Scott Alexander here (in the best and most detailed look at the controversy), with “loss aversion” looking more robust than Hreha claims. If you’re interested in how robots might or might not “nudge” humans, both Hreha’s and Alexander’s pieces are well worth your time.
Does behavioral economics’ trouble matter for Human-Robot Interaction research? I think maybe it does, because getting robots to “nudge” people rests on “nudge” theory, which in turn rests on behavioral-economics theories about behavior. If you disagree (or agree!), let us know in the comments.
Another very thought-provoking article. I am fascinated by the connections between behavioural economics and smart tech (something central to Zuboff's 2019 book The Age of Surveillance Capitalism), but I think your discussion of robotics is important precisely because it raises the prospect of a more embodied form of social influence. The most arresting issue in this piece is the question of whether it matters if nudges are actually effective. I think that even if nudges are ineffective, there are still ethical costs in their application. By treating people as flawed and utilising unconscious strategies of behavioural modification, is it not possible that behavioural economics could contribute to the construction of the very types of human subjects its theories predict? Perhaps I am over-estimating the power and influence of traditional mechanisms of behavioural economics, but in the age of the hyper-nudge this trend could become more realistic.