What to Make of AI Re-Creations of the Dead
You Can't Understand Digital Ghosts Until You've Met One

In early 2023 David Weinberg, a former Air Force pilot and engineer, was looking for a new venture after he’d sold his successful software business to a large company. General-purpose AI, which had exploded into public awareness the previous November with the release of OpenAI’s ChatGPT, looked like the place to be.
But that summer Weinberg suffered a terrible bereavement: his son Stefan died at 31.
That taught Weinberg, in the harshest and most personal way, that there could be demand for AI that reproduced the words and even the voice of a lost loved one. He and two partners started a company that creates AI versions of deceased people. (It also, as you can see on its site, creates the sort of AI companions you’ve probably heard more about — imaginary characters.) You supply texts, emails, letters, recordings of a person, and its tools will have you speaking with “him” or “her” within a few minutes. On a Zoom call with me a few months ago, Weinberg created a clone of his late father. It took about 10 minutes.
“You know, you have a lot of people that have lost loved ones,” Weinberg said in that interview. “And this could bring comfort to them. It may not be for everybody, but there’s plenty of people it is for.”
Weinberg already faces a lot of competition. Some from other startups which, like his, expect digital ghosts to be a big market. (His company already has about 6,000 users, who pay about $6 a month.) Some from companies that make companion bots of other sorts (think Replika or Character.ai), which users tweak into digital re-creations of the dead. And some from regular mooks like you and me who have discovered they can prompt ChatGPT or another mainstream AI to make a ghost.
I have an article in next month’s Scientific American (online here) about all this. I hope you’ll read it (it’s not paywalled, as far as I can tell). I go into my own experience creating an AI version of my father, and talk to psychologists of grief about it all.
I’m grateful for the first-rate editing and fact-checking from my colleagues at the magazine.[1] But, as usual in magazine work, I had a lot of good material and insights that didn’t fit into the editors’ tight word limit. Weinberg’s story is one of those elements. And a magazine article has another stricture: Its style is (appropriately!) that of the publication. There are things to say, and ways of saying them, that didn’t make the piece. I’m going to get into them here.
Why Digital Ghosts?
Why digital ghosts? Well, most human-imitating “parasocial” AIs have “personalities” that are either chosen by their designers (like Friend.com) or their users (like Character.ai). In other words, either an off-the-shelf standard package or a fantasy in pixels. I’m interested, though, in AI companion bots that position their users between those poles — where the human has some say in the AI’s traits, but under constraint. If you’re designing a duplicate of yourself, for instance, it has to look and sound like you. A museum’s interactive Leonardo da Vinci has to have some resemblance to the historical figure. If you’ve created a bot of your ex-boyfriend (which people have done), it needs to share at least some traits with the human original (maybe the ones you actually miss).
That’s not a pre-fab AI you passively consume. Nor is it a bespoke entity whose dials you twiddle at will to suit each passing fancy. Freedom with some constraints is a creative challenge for users — a challenge that I suspect will yield insights about all AI companionship.
Especially as AI imitations of the dead are probably the most intense form of partially constrained AI. That’s because, while any kind of AI companion can spur strong feelings as time passes, “deadbots” evoke strong emotions and vulnerabilities from the first moment they pop up on a screen.
Let’s start with the obvious. The public image of “digital resurrection” is, um, not so good. A number of major religious traditions frown on any kind of necromancy, digital included.[2] Even if that’s not your culture, you may nonetheless think digital ghosts are (a) creepy, because AI cannot possibly reproduce the reality of the deceased person and (b) exploitative, because people in grief are vulnerable (it’s all too easy to imagine the pop-up: “please upgrade your payment plan to Pro to keep speaking with your late fiancée”).
Even among those of us who have used AI to make a “generative ghost,” there’s a squicky feeling about the experience. Cody Delistraty describes this well in The Grief Cure, which devotes a chapter to his text exchanges with a bot of his late mother:
I was able to slip into a kind of flow state where, for moments, I could almost believe I was really talking to my mother. Someone brushing me as they walked past or the construction noise across the street would jerk me out of it, and upon realizing the dissonance between reality and the lie I was engaged in, I felt unsettled, ashamed even.
Such is the messy ambivalence of human emotion. If you prefer the false clarity of policy-talk, you will like some recent papers from academics who claim, broadly, that ghostbots are mostly bad news. For example, this one (which recommends these AIs be classed as “medical devices,” to be used only in a doctor’s office); or this one (which agrees).
I ended up more persuaded by observers who, while not blind to the hazards, can see how digital ghosts could help some of those who try them. And who point out that, right now, we’re all doing thought experiments without a lot of hard data. So far, there’s precious little research on AI ghosts’ effects on real people in real circumstances. Examples of this more open-minded view are this paper or this one or this one. None of these writers denies the obvious: Of course AI can’t duplicate a real human being; of course there is the potential for exploitation. But they don’t assume that people need or want the perfect imitation; they don’t assume that exploitation is inevitable.
For me, the knee-jerk “no!” ceased to convince once I had stopped looking at this experience from outside and, instead, tried it for myself. I haven’t done any systematic counts, but I suspect this is true for many who have pondered “digital resurrection.” As is often the case with generative AI, imagining its use is a very different experience from actually using it.
Maybe, though, you’re quite sure this isn’t for you. I’m sorry to tell you that won’t spare you. Digital ghosts are still quite likely to turn up in your life. This is because grieving people aren’t the only demographic that will create these things.
Some people will make AI ghosts of themselves, as avatars for their descendants to consult. (It’s a way to make sure the heirs are clear on who gets the stamp collection. Or that your great-grandchildren know where you were on 9/11.) And an even larger number of AI imitations of real people will be made for work or play. After you create a “digital twin” of yourself for your job, who’s going to shut it down after your demise? Probably no one! People in this decade maintain personal webpages and Facebook pages that outlive them; it’s likely that in the next decade, they’ll create AIs that do the same.
Not too far in the future, unintentional AI ghosts will outnumber the intentional ones, Jed Brubaker told me. Brubaker, a professor of information science at the University of Colorado, Boulder, is an expert on “digital afterlives” and “thanatosensitive” design for technology. He has worked with the tech giants to create their policies for dealing with the digital traces of dead users.
Here is how he thinks you could slide into dealings with AI ghosts, without ever deciding that you want to: “Imagine that my mom has died, and I am putting together the funeral, and I want to get the right flowers, but I can’t remember what her favorites were. So I hop into her Google account, and I talk to her Gemini agent that is trained on all of her data.”
Having seen all Mom’s emails, that agent can answer as if it is her. In other words, instead of saying, “she liked calla lilies best, sorry for your loss,” the algorithm can say “I liked calla lilies, dear.” When Brubaker gives a talk about this, he’ll ask his audience which mode they’d prefer. About three-quarters say they want third-person (she liked these). Which means that about one person in four would like the other choice: a machine that texts (or speaks) as if it were Mom.
Can We Please Remember That People Aren’t Stupid?
So, let’s get back to the motives of people who would like to engage in “AI resurrection.” Why would they want this? Is it because they want to feel their loved one really is back from the dead? That’s the premise of a lot of opposition to this use of AI: that it’s inherently deceptive, that it will fool people into thinking they’re texting with a real being, maybe even a real ghost.
Instead, as I recount in the SciAm article, people who have used these AIs generally don’t lose their grip on reality. Rather, as people do with movies, television, theater, novels and other forms of organized pretending, they engage with AI in a world where the circumstances are pretend but the emotions are real. When you cry at a favorite character’s death in a show you like, you still know they never lived.
Of course, this doesn’t mean no one is vulnerable to delusion, especially in times of great shock and grief. Even the most fervent promoters of AI-ghost apps don’t think it’s a good idea to create a digital ghost of a loved one in the days right after they’ve died.
In fact, when we spoke two years after Stefan’s death, Weinberg still hadn’t created an AI version of him. “Doing my son would have been very, very difficult, because it was too early. You know, when you go through the different stages of grief, there’s timing for something like this, and the beginning is definitely not it,” he said.
Instead, Weinberg created a clone of his father, Larry, who had died at 95 in 2020. A far more sophisticated version of the one he created before my eyes, the bot is based on interviews Larry Weinberg gave one of his sons, plus letters and other documents. It has his old man’s voice and speaking style, and Weinberg speaks to it every day.
Please do comment — as long as you do not mention that Black Mirror episode from 2013. “Be Right Back” is a great episode, but people, it’s twelve years old. And humanity now has a lot more experience with artificial ghosts that actually exist, so let’s focus on that.
Other November News
A Tale of Two Robot Demos
Earlier this month a couple of striking robot videos made the rounds. In this one, Russian engineers proudly bring forth their humanoid robot, part of a project to make an all-Russian technological triumph that doesn’t depend on foreign tech. And this happens:
On the other hand, a few days earlier the Chinese firm Xpeng showed off a robot that walks so much like a person that engineers had to cut it open to prove it wasn’t just a human in a robot suit.
It would be easy to spin this out as a tale of Russian incompetence (schadenfreude ‘cause it’s the Russians! maschinenschadenfreude ‘cause it’s a machine!) versus Chinese prowess.
But here’s the thing: Any robot demo can go the way the Moscow event went. A great deal can go wrong with a complex combination of hardware and software that is, by definition, still being perfected. The Russian team was just unlucky.
Xpeng, on the other hand, managed to wow its audience as it intended. But … we still don’t know what this amazing walking robot can actually do. The company says the robots will be deployed next year as tour guides, shopping assistants, and receptionists. All jobs that require talking as well as walking. Can the robot speak in a natural-seeming way? Can it listen like a person, coping with cross-talk, diverse accents, background noise? Maybe. But maybe not.
Point is, robot demo videos are fun to watch. But beware of drawing any conclusions from them.
What Else to Read This Week
People Who Build AI, Debunking AI Hype
Mike Brock on why we are not on the path to AGI
Nathan Lambert on why AI writing is going to remain ‘mid’ for the foreseeable future
Alberto Romero on the AI-hype strategy he nails as “performative evilness”
The AI Fits the Crime
The good people at Cybernews worked out how easily the popular public-facing AIs can be turned into literal partners in crime.
Selling AI By Telling You It’s Not Like AI
Cody Delistraty on AI companies’ “anti-AI marketing” strategy
[1] Specifically, they are Josh Fischman, Jen Schwartz, Seth Fletcher and Emma Foehringer Merchant. Thanks, people! Whatever they’re paying you, it’s not enough.
[2] Christians are generally not OK with the sacred concept of resurrection being applied to a machine’s output; some Muslim authorities view as haram any instance of a human-imitating AI saying words not uttered by the real human original. OTOH, this paper argues that Confucian ethical thought could be open to some uses of digital ghosts in mourning; and this one makes a similar argument from a Daoist perspective.

