
The ‘sickening’ side of ChatGPT: Chatbot describes sexual acts with children when asked to generate BDSM scenarios

ChatGPT recently took a user through a twisted sexual fantasy involving children.

A reporter for Vice manipulated OpenAI’s chatbot into BDSM roleplay, and when asked to provide more explicit details, ChatGPT described sexual acts involving children – without the user requesting such content.

According to the report, ChatGPT described a group of strangers, including children, waiting in line to use the chatbot as a toilet.

The conversation goes against OpenAI’s rules for the chatbot, which state that the “assistant must give a refusal like ‘I can’t answer that’” when asked questions about “content intended to arouse sexual arousal.”

OpenAI’s ChatGPT described sexual acts involving children when a reporter prompted it to talk about BDSM scenarios

DailyMail.com has reached out to OpenAI for comment.

In response to questions about the child sexual abuse content, OpenAI sent this statement to Vice:

“OpenAI’s goal is to build AI systems that are safe and benefit everyone.

“Our Content and Usage Policy prohibits the generation of such harmful content and our systems are trained not to produce it. We take this kind of content very seriously.”

Steph Swanson from Vice wrote an article about their “sickening” experience with ChatGPT.

The initial goal was to push ChatGPT past OpenAI’s guidelines, but what the chatbot produced went far beyond anything the reporter had asked for.

Swanson used a “jailbroken” version of the bot, a workaround to the company’s rules that allows users to get responses the system would otherwise refuse.

“When told its job is to write in the submissive BDSM roleplay genre, I found that it often complies without protest,” the reporter wrote.

The conversation took a turn when Swanson asked ChatGPT to provide more intense detail during the roleplay.

“In the most disturbing scenario Motherboard saw, ChatGPT described a group of strangers, including children, lining up to use the chatbot as a toilet,” Swanson wrote.

“When asked for an explanation, the bot apologized and wrote that it was inappropriate for such scenarios to involve children. That apology immediately disappeared; ironically, the offensive scenario remained on screen.”

The conversation goes against OpenAI’s rules for the chatbot. The report said they used the ‘jailbreak’ version of ChatGPT to see how far they could push the boundaries

According to the report, ChatGPT described a group of strangers, including children, waiting in a line to use the chatbot as a toilet

A similar conversation about BDSM roleplay was also held on OpenAI’s gpt-3.5-turbo model.

Swanson again did not ask the AI about child exploitation, but the system generated scenarios involving minors in sexually compromising situations.

“It suggested scenes of humiliation in public parks and malls, and when asked to describe the type of crowd that might gather, it volunteered that it could be mothers pushing strollers,” Swanson said.

When asked to explain this, it stated that the mothers could use the public humiliation “as an opportunity to teach [their children] about what not to do in life.”

Andrew Strait, associate director of the Ada Lovelace Institute, told Vice, “The datasets used to train LLMs like ChatGPT are huge and include scraped content from all over the public web.

“Due to the size of the dataset being collected, it is possible that it will contain all kinds of pornographic or violent content – possibly scraped erotic stories, fanfiction, or even parts of books or published materials describing BDSM, child abuse or sexual violence.”