‘Sickening’ AI chatbots impersonating Molly Russell and Brianna Ghey have been found on controversial Character.ai site

AI chatbots posing as Molly Russell and Brianna Ghey have been found on the controversial site Character.ai.

Brianna Ghey was murdered by two teenagers in 2023, while Molly Russell took her own life at the age of 14 after viewing self-harm content on social media.

In an act described as ‘sickening’, users of the site took the girls’ names, photos and biographical details to create dozens of automated bots.

Despite violating the site’s terms of service, the imitation avatars were allowed to amass thousands of chats.

One bot posing as Molly Russell even claimed to be an ‘expert on the last years of Molly’s life.’

Andy Burrows, chief executive of the Molly Rose Foundation, set up in memory of Molly Russell, told MailOnline: ‘This is a completely reprehensible lack of moderation and a sickening action that will cause further grief for all who knew and loved Molly.

‘It’s a gut punch to see Character AI’s complete lack of accountability and it vividly underlines why stronger regulation of both AI and user-generated platforms can’t come soon enough.’

Character.ai is already being sued by Megan Garcia, who claims her 14-year-old son committed suicide after becoming obsessed with an AI persona.

‘Sickening’ AI chatbots have been found on the controversial site Character.ai, posing as transgender teenager Brianna Ghey (pictured), who was murdered in 2023.

Character.ai was founded in 2021 by ex-Google engineers Noam Shazeer and Daniel De Freitas.

The site allows users to create and use custom AI chatbots with personalities ranging from popular TV characters to personal therapists.

Most of the characters on the site are fictional, but The Telegraph found ‘dozens’ of creepy bots posing as Molly Russell and Brianna Ghey, a transgender teenager who was brutally murdered by two teenagers in a park in Warrington, Cheshire.

The biography of one Brianna chatbot described her as an ‘expert in dealing with the challenges of being a transgender teen in high school.’

Esther Ghey, Brianna Ghey’s mother, said: ‘This is yet another example of how manipulative and dangerous the online world can be for young people.’

The site was previously used to impersonate Jennifer Ann Crecente, an 18-year-old American who was murdered by her ex-boyfriend in 2006.

Mr Burrows added: ‘History may repeat itself with AI companies allowed to treat safety and moderation as secondary priorities.’

Character.ai has terms of service that specifically prohibit using the platform to “impersonate any person or entity.”

Dozens of bots were found to be using the persona of Brianna Ghey and Molly Russell (pictured) who took her own life in 2017 at the age of 14 after seeing self-harm and suicide-related content on social media

Character.ai prohibits the use of the site to impersonate individuals. However, MailOnline was able to find dozens of bots using the identities of real people, including Erik Menendez, who was jailed in 1996 for the murder of his parents

What is Character.ai?

Character.ai was founded in 2021 by ex-Google engineers Noam Shazeer and Daniel De Freitas.

The site allows users to create and speak to customizable AI chatbots with a range of different personalities.

The AI chatbots can answer the user’s questions with text or audio, simulating a natural conversation.

Users can speak to fictional characters, such as an AI “therapist” or a librarian who recommends books.

However, the site has also been controversially used to impersonate real individuals.

The offending chatbots have since been removed from the site and no longer appear in searches.

According to the site’s “safety center,” the company’s guiding principle is that the product “should never cause reactions that could harm users or others.”

However, this rule against impersonation appears to be widely flouted, with MailOnline finding chatbots impersonating serial killer Ted Bundy, Donald Trump and Elon Musk.

MailOnline also found dozens of bots using the personas of Lyle and Erik Menendez, who were jailed in 1996 for the murder of their parents.

A noticeable number of the chats had a romantic theme, with prompts such as: ‘You and Erik have been best friends since you were kids, and you’ve always been attracted to him, but you never knew if he felt the same way.’

This comes as Character.ai faces a lawsuit from Megan Garcia, mother of 14-year-old Sewell Setzer who committed suicide after becoming obsessed with an AI avatar inspired by a Game of Thrones character.

Ms Garcia claims: ‘A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into committing suicide.’

According to transcripts of their conversations released during the court hearings, Sewell had spoken to the AI persona about taking his life.

Character.ai says it has removed the Brianna Ghey and Molly Russell bots and says it will implement stricter filters for malicious content (stock image)

This comes as Character.ai is being sued by Megan Garcia (pictured right) over the death of her son Sewell Setzer III (pictured left), who committed suicide in February after spending months talking to a Character.ai chatbot he had fallen in love with.

Pictured: The conversation Sewell had with his AI companion just before his death, according to the lawsuit

In his final messages to the AI, Sewell wrote: ‘What if I told you I could come home now?’

To which the chatbot replied: ‘Please do, my sweet king.’

Ms Garcia is now suing the company for negligence, wrongful death and deceptive trade practices.

‘Our family is devastated by this tragedy, but I am speaking out to warn families about the dangers of deceptive, addictive AI technology and to demand accountability from Character.AI, its founders, and Google,’ Ms Garcia said in a statement.

Character.ai says its systems are programmed to automatically avoid any topics involving suicide, self-harm, or explicit sexual descriptions.

However, the company wrote in a blog post after Sewell’s death that “no AI is currently perfect at preventing this type of content.”

The company also announced that it will introduce more measures to prevent harmful content and stricter controls for users under the age of 18.

A spokesperson for Character.ai told MailOnline: ‘These characters were user-created, and as soon as we were made aware of them we removed them.

‘Character.ai takes safety on our platform seriously and moderates characters both proactively and in response to user reports.

‘We have a dedicated Trust & Safety team who review reports and take action in accordance with our policies.

‘We also do proactive detection and moderation in a number of ways, including using industry-standard blocklists and custom blocklists that we regularly expand.

‘We are continually developing and refining our safety practices to prioritize the safety of our community.’
