Think twice before using AI to digitally bring a deceased loved one back to life: so-called ‘grief bots’ could haunt you, Cambridge scientists warn

The idea of using artificial intelligence (AI) to digitally resurrect a deceased loved one may sound like a plot from the latest episode of Black Mirror.

But these so-called ‘griefbots’, ‘deadbots’ or ‘deathbots’ have slowly but surely become a reality, and several companies are now offering this service.

Now researchers from the University of Cambridge have warned that these bots can cause psychological damage and even digitally stalk those left behind.

“These services risk causing people enormous distress if they are exposed to unwanted digital hauntings by alarmingly accurate AI recreations of those they have lost,” said co-author Dr Tomasz Hollanek.

“The potential psychological impact, especially at an already difficult time, could be devastating.”


In their research, ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence examined three hypothetical scenarios that are likely to emerge as part of the fast-growing ‘digital afterlife industry’.

First, the bots could be used to covertly advertise products from beyond the grave or cremation urn, the authors warn.

Second, they can upset children by insisting that a deceased parent is still “with you.”

And finally, companies could use deadbots to spam surviving family and friends with reminders and updates about the services they provide – a scenario the researchers describe as being “stalked by the dead.”

The initial comfort of the loved one’s familiar face can become emotionally draining, they add.


Existing platforms offering digital afterlife services include Project December and HereAfter.

But several major players are also looking at the market.

In January 2021, Microsoft was granted a patent for a chatbot that could use a person’s data to “respond like someone you knew.”

Dr Katarzyna Nowaczyk-Basińska, co-author of the study, said: “Rapid advances in generative AI mean that almost anyone with internet access and some basic knowledge can revive a deceased loved one.

“This area of AI is an ethical minefield. It is important to prioritize the dignity of the deceased and ensure that it is not compromised by the financial motives of digital afterlife services, for example.

“At the same time, a person may leave an AI simulation as a parting gift for loved ones who are not ready to process their grief in this way.

“The rights of both data donors and those who interact with AI services for the afterlife must be equally safeguarded.”

One possible scenario – dubbed ‘MaNana’ by the researchers – involves the creation of a deadbot simulating a deceased grandmother without the consent of the ‘data donor’ – the dead grandparent.



After an initial period of reassurance, the app may begin to harass the user – for example, by suggesting, in the voice and style of the deceased, that they order from a food delivery service.

“People can develop strong emotional bonds with such simulations, which makes them particularly vulnerable to manipulation,” says Dr Hollanek.

“Methods and even rituals for retiring deadbots with dignity should be considered. This may involve a form of digital funeral, for example, or other forms of ceremony depending on the social context.

“We recommend design protocols that prevent deadbots from being used in disrespectful ways, such as for advertising or for an active social media presence.”

Although Hollanek and Nowaczyk-Basińska argue that designers should seek consent from data donors before their death, they contend that a ban on deadbots based on non-consenting donors would be unfeasible.

Another scenario in the article – dubbed ‘Paren’t’ by the researchers – imagines a terminally ill woman leaving behind a deadbot to help her eight-year-old son with the grieving process.

While the deadbot initially helps as a therapeutic tool, the AI begins to generate confusing responses as it adapts to the child’s needs, such as suggesting an impending face-to-face meeting.

The researchers say users should receive meaningful reminders that they are interacting with an AI, and also call for age restrictions on deadbots.

A final scenario in the study considers an older person who secretly commits to a deadbot of themselves and pays for a twenty-year subscription, hoping it will comfort their adult children and allow their grandchildren to get to know them.

When the after-death service begins, one adult child does not participate and receives a barrage of emails in the voice of their deceased parent.

Another participates, but becomes emotionally exhausted and racked with guilt over the fate of the deadbot, the researchers suggest.

“It is crucial that digital afterlife services take into account the rights and consent not only of those they recreate, but also of those who interact with the simulations,” says Dr Hollanek.

Dr Nowaczyk-Basińska added: “We now need to start thinking about how to limit the social and psychological risks of digital immortality, because the technology is already there.”