Can Humanity Break Up With Its 'Automatic Sweetheart'?
Making AI Seem Less Human Is a Good Idea. But We May Not Want It That Way

More people in artificial intelligence are saying AI should stop impersonating human beings. Since I first wrote about this five months ago, the pioneering researcher Yoshua Bengio has started an institute with less-humanlike AI as one of its goals. And in the past few weeks two other much-followed voices have made the case: Mustafa Suleyman, CEO of Microsoft AI, here, and the writer and software maven Dave Winer, here.
This is no surprise, given mounting evidence that ersatz-human AI has bad effects on people who can’t or won’t keep clear the divide between machine and human. AI companions have encouraged self-harm, suicide and grotesque delusions. They’ve had users thinking they were talking to God, or that they were God. The case for AI that presents itself as AI is no longer just about seeing the technology clearly and using it well. It’s an argument for protecting the vulnerable from injury or death.
But if and when a rigorously non-human-seeming AI is built, will people use it? AI companions are immensely popular (just one of them, Microsoft’s Chinese companion bot Xiaoice, has 660 million users). These things also have powerful advocates, who are eager to sell more of their services.
The majority of people enjoy humanlike AI the way they enjoy movie characters or people in novels – feeling the real emotion that the device triggers, but holding onto their awareness of the difference between pretend worlds and real ones. Yet, as with any form of art or entertainment, there are spectators who can’t keep the line clear. In the 20th century they might have devoted their lives to Klingon culture, or stalked movie stars, or decided The Catcher in the Rye is literally about them. This decade, they might choose to believe their AI friend is real, or is God, or speaks truly when it encourages them to kill people.
Obviously AI chatbots are different from novels or films or video games – they offer users a much more personal and interactive experience. Does that make them uniquely dangerous (i.e., liable to send a larger proportion of users around the bend, in worse ways)? We don’t know yet, but it’s certainly possible.
The Allure of the “Automatic Sweetheart”
What’s clear already, I think, is that the chatbot experience is going to be harder to quit than movies or television shows.
Why? A century or so ago the psychologist and philosopher William James edged around a likely explanation. He imagined a machine he called “the automatic sweetheart” – “a soulless body which should be absolutely indistinguishable from a spiritually animated maiden, laughing, talking, blushing, nursing us, and performing all feminine offices as tactfully and sweetly as if a soul were in her.”
Would anybody regard this being as “a full equivalent” of a real maiden? “Certainly not,” James writes. 1
Back then, in 1909, he was certain that nobody could want the mere physical performance of emotions. Knowing it’s a machine means knowing that no soul is in there, and that is intolerable. I need to believe that your care for me is real, which means I need to believe that you are real – that your soul is directing your kind words, and not the tick-tick-tick of programming.
People need to believe this, James writes, for the same reason they need to believe in God: “Even if matter could do every outward thing that God does, the idea of it would not work as satisfactorily, because the chief call for a God on modern men’s part is for a being who will inwardly recognize them and judge them sympathetically.”
James assumed that knowing a thing was constructed, and knowing how it works, would block everyone from believing it can “inwardly recognize” them. Now, 116 years later, there are real “automatic sweethearts,” used by tens of millions of people. And their behavior shows that he was wrong. In some of those people, the need for faith, and thus faith itself, are strong. Stronger than the knowledge that they’re dealing with a machine.
As one Reddit user put it:
I keep thinking that if millions of humans can have a deep relationship with god - which doesn’t even respond at all - then surely a smart machine which actually knows just about everything, can actually communicate, and can be your personal bff is likely to get a lot of emotional attachment.
But what about people who do keep straight the difference between reality and the pretend-play of a chatbot? 2 For James, the automatic sweetheart’s lack of soul, its dearth of human mystery, is obviously unsatisfactory. But today, millions of people, knowing full well that chatbots aren’t real people, nonetheless get something out of interacting with them. That the automatic sweetheart isn’t a real maiden doesn’t stop them from engaging, or from finding that engagement valuable. As user “alan1cooldude” remarked on an OpenAI community thread, what he enjoyed about GPT-4 “wasn’t about pretending AI was human — it was about recognizing that something meaningful was forming organically.”
Both these types of AI users — the (relatively few) people who really want to believe their AI companion is as real as they are, and the (more numerous) people who know AI is not like them, but value the game of pretend with it — share a trait. They are really not going to cheer for less-human-acting AI. Instead, they’re going to react as alan1cooldude did when GPT-5 replaced GPT-4.
“The system now seems to prioritize speed, efficiency, and task performance over the softer, emotional continuity that made GPT-4 so special,” he wrote the other day. He’s not pleased. “Small, nuanced threads of personality and shared experience are not sentimental extras,” he writes, “— they are vital for an AI’s evolution. For many of us, they’re what turn an AI from just a tool into something special, a trusted companion.”
And that’s just GPT-5. It’s likely that millions who use AIs explicitly designed to act like friends, therapists and “automatic sweethearts” will be even more attached to their experience of the technology. If and when a rigorously non-human AI appears, what incentive will they have to use it?
What Else to Read This Week
AI Can Be a Highly Effective Chugger (Charity Mugger)
Part of what makes large language models so effective is that they tailor their responses to each user’s exact circumstances. That makes them great for search. It can also make AI extraordinarily persuasive. As this research shows: after a personalized conversation with an AI about a charity, people gave that group nearly 50 percent more money than did people who had just heard a generic talk about the charity’s good work. (I’d love to say more, but the link leads only to the abstract.)
AI in Schools Is Losing Its Luster
In another example of the disconnect between the public and AI enthusiasts, Nick Potkalitsky explains why parent opposition to AI in schools is growing quickly. People aren’t dumb; they see the essential problem, as he writes: “AI tools aren't neutral time-savers. They're making implicit pedagogical decisions.”
The Robot-Truck Revolution Will Be Delayed
Remember how self-driving trucks were the obvious Next Big Disruption? It hasn’t happened. Excellent explanation here, from Chris Paxton.
1. James throws out this brief thought experiment in a footnote in The Meaning of Truth, Chapter 8. I came upon it cited by the philosopher Hilary Putnam in The Threefold Cord (Columbia University Press, 1999), p. 73. And, yes, obviously James is writing as if all his readers are men who like maidens. Surprising? Yes, the book was published in 1909, but even then there were women in the nascent field of psychology.
2. I’m not taking any side here in arguments about whether an AI could have consciousness. I’m sticking to what we know today: an AI that refers to itself as “I” and says it understands your feelings is behaving as if it has thoughts and emotions that it does not possess.