After a chatbot encouraged a suicide, “AI playtime is over.”
Time to set aside naive acceptance — and naive criticism
Warning: This post goes into detail about a case of self-harm. If you’re in a vulnerable state, take care. If you’re at risk of harming yourself, please skip to the end of this post and contact one of the services described there.
Recently, a Belgian man named Pierre took his own life. His wife and psychotherapist believe – with good evidence – that a chatbot encouraged him to do it.
“If it were not for this AI, my husband would still be here,” said Claire, his widow, according to Pierre-François Lovens, whose article broke the story in the Belgian newspaper La Libre (paywalled). (The names are pseudonyms, to protect her privacy and that of the couple’s two children.) She came forward, Lovens writes, to prevent others from being victimized by an artificial-intelligence application.
Causality is hard to prove in suicide, and Claire has said she isn’t going to pursue the chatbot’s ma…