"Mirroring" is a technique that former FBI negotiator Chris Voss teaches for putting people at ease, so you can glean information from them (as in how they feel about letting the hostages go, how high a price they're willing to pay for your turnips, how serious they are about wanting a divorce.)
Mirroring is very simple: You just repeat the last three significant words the person said, and then you wait. (That's the hard part, as interviewers know. Even the great Robert Caro has to write "SU" as he takes notes, reminding himself to "shut up.") Mirroring makes people feel listened to, which prompts them to fill the silence with more of their thoughts.
And "mirroring" is the reason people in the 1960s treated a dirt-simple computer program called ELIZA as if it were human.
ELIZA was a program created in the 1960s by Joseph Weizenbaum, a computer scientist at MIT. He was seeking a means for people to communicate with computers in ordinary language. (In those early days you typed on a teletype-like terminal connected to a big computer that shared its processing power with scores of users; your typing and the computer's responses would appear, clackety-clack, on an endless roll of paper.) Weizenbaum programmed the machine to respond to people as a mirroring therapist would. It had a list of related words it could throw in now and then (it might say "family" if you mentioned "mother"). But mostly, it mirrored. (A toy sketch of that logic follows the transcript below.) As Weizenbaum wrote in a famous paper (pdf), that algorithm produced conversations like this one (human typing is normal, computer responses are in ALL CAPS):
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It’s true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
My father.
YOUR FATHER
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE
You are not very aggressive but I think you don’t want me to notice that.
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE
You don’t argue with me.
WHY DO YOU THINK I DON’T ARGUE WITH YOU
You are afraid of me.
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU
My father is afraid of everybody.
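If you're curious what's under the hood, the whole trick comes down to two small moves: check the typed line for a loaded keyword and, if one matches, emit its canned associate; otherwise echo the last few significant words back with the pronouns flipped, and wait. Here's a minimal sketch in Python. The keyword table, pronoun flips, and stop-word list are my own illustrative stand-ins, not Weizenbaum's actual DOCTOR script, which used ranked keywords and decomposition rules.

```python
# A toy, ELIZA-flavored responder. The vocabulary below is invented for
# illustration; Weizenbaum's real script was far more elaborate.
KEYWORD_RESPONSES = {
    "mother": "TELL ME MORE ABOUT YOUR FAMILY",
    "father": "TELL ME MORE ABOUT YOUR FAMILY",
    "depressed": "I AM SORRY TO HEAR YOU ARE DEPRESSED",
    "help": "WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP",
}

# Pronoun flips so the echo addresses the speaker ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i"}

# Filler words that don't count as "significant" when mirroring.
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "that", "it", "is", "well"}


def respond(statement: str) -> str:
    words = statement.lower().rstrip(".!?").split()

    # Move 1: keyword association -- if a loaded word appears, use its canned line.
    for word in words:
        if word in KEYWORD_RESPONSES:
            return KEYWORD_RESPONSES[word]

    # Move 2: mirroring -- echo the last three significant words, pronouns
    # flipped, and then wait (i.e., say nothing more).
    significant = [w for w in words if w not in STOP_WORDS][-3:]
    return " ".join(REFLECTIONS.get(w, w) for w in significant).upper()


print(respond("My boyfriend made me come here."))  # YOU COME HERE
print(respond("Perhaps I could learn to get along with my mother."))  # TELL ME MORE ABOUT YOUR FAMILY
```

Even a toy this small reproduces the feel of the transcript: most of the time it simply hands your own words back to you, in capital letters, and waits.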
People reacted to ELIZA just as, decades later, kidnappers and bank robbers responded to Voss. In a subtle but significant way, they felt soothed. Attended to. They opened up, sorted through feelings, shared secrets. One time, Weizenbaum's secretary asked him to leave the room so she could have privacy as she typed back and forth with ELIZA.
‘Mirroring’ makes an AI sound aware and empathetic.
Weizenbaum was appalled by what he considered an amazing demonstration of people's willingness to treat an artificial intelligence as if it were conscious, sentient, aware -- as if it had a mind. It was alarming, he wrote later, "that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
I think this is wrong.
People aren't stupid. What they are, though, is needy. Confronted with an entity that mirrored them, they played along. (I suspect "playing along," as opposed to believing or not-believing, is the root of a lot of feelings and actions around robots, and deserves more attention than it has gotten. More on this in future posts.)
Many people are game to play along with new technology. People kept playing along with ELIZA, I think, because, like most of us, they don't get the attention they'd like. Many of us feel (as someone once said) that we have a gallon to offer, but the world only takes a pint. After all, as you slog through your day, what do other people want to hear from you? At work, reports from a narrow section of yourself, the part you sell for wages. At home, the reassuring signs that you're a good son or daughter, or father or mother. (Our loved ones, bless them, want stability. And they've already heard all our stories.) People leapt at the chance to be "heard" by ELIZA, which is to say, they leapt at the chance to hear themselves speak, unbridled, about things that mattered — without feeling that anyone wanted them to shut up.
So they benefitted from treating ELIZA as if it were sentient. Maybe sometimes a lot, at other times (maybe when Weizenbaum asked if they were cuckoo?) not so much. But the way Weizenbaum framed their behavior — as a kind of delusion — set the terms of many a later debate: What is to be done about the (supposed) fact that people are easily deluded by artificial intelligences?
In the past few years, however, thanks to Large Language Models, AIs have gotten much better at talking real pretty. That has recently led more than a few people to claim we shouldn't just talk about machines that we can treat as if they were sentient. They say we should consider whether, if an AI exhibits certain key indicators, it really is sentient.
What to do about a robot that nudges you to think it has a mind?
That, too, is another post, which I'll put up shortly. Here, I just want to note that when you can ask whether an AI really is sentient, you've created a new category of error about machines. Before, people might (supposedly) foolishly overestimate the computer (as was the case with ELIZA). Or they might project thoughts and feelings without any reasonable basis (as is the case with people who name their Roombas and love them). But what do you do when an AI's design actively encourages people to imagine it has its own mind? For example, what rules should apply to interactions with Hanson Robotics' Sophia, which is designed to interact with humans as if it were a person? (This is the robot that, in an instance of governmental playing-along, was made a citizen of Saudi Arabia in 2017.)
There are a lot of ethics questions involved in designing robots that "expect" to be treated as humans would like to be treated (we'll skip over the fact that humans treat humans quite badly a lot of the time). What contexts will require this kind of machine? (Companion for a hospice patient, yes; deliverybot, maybe no.) Should we define contexts where the illusion of a mind is absolutely forbidden? (Sexual-service robots, maybe, or any kind of sales relationship, where humans might be manipulated into paying money so as not to hurt the robot's feelings.)
In what circumstances should robots signal that they are definitely not beings with minds and feelings of their own? (There's some evidence, for instance, that people will apply their moral hunches to robots — you probably don't want humans gumming up the job site by telling Robot 1 to stop being unfair to Robot 2.) What about robots that need to annoy or anger their human users to do their jobs (which I wrote about here)?
In future posts, I'll get into work being done on these and related questions.