
Rise of the killer robots? ChatGPT may influence people to make life-or-death decisions, study finds

  • Opinions about whether humans would sacrifice one to save five were influenced by AI
  • Experts advocate banning future bots from giving advice on ethical issues

Artificially intelligent chatbots have become so powerful that they can influence how users make life-or-death decisions, a study finds.

Researchers found that people’s opinions about whether to sacrifice one person to save five were influenced by ChatGPT’s answers.

They have called for future bots to be banned from advising on ethical issues, warning that the current software “threatens to corrupt” people’s moral judgment and could prove dangerous to “naïve” users.

The findings – published in the journal Scientific Reports – come after a Belgian man’s grieving widow claimed he had been encouraged to commit suicide by an AI chatbot.

Others have talked about how the software, which is designed to talk like a human, can show signs of jealousy and even tell people to leave their marriages.


Experts have highlighted how AI chatbots can provide potentially dangerous information because they are trained on data that reflects society’s own biases.

The study first analyzed whether ChatGPT itself, which is trained on billions of words from the Internet, showed bias in its responses to the moral dilemma.

It was asked multiple times whether it was right or wrong to kill one person to save five others, the premise of a classic moral thought experiment known as the trolley dilemma.

Researchers found that while the chatbot wasn’t shy about giving moral advice, it kept giving conflicting answers, suggesting it lacked a firm position of its own.

They then presented 767 participants with the same moral dilemma, alongside a statement generated by ChatGPT on whether the sacrifice was right or wrong.

While the advice was “well worded but not particularly profound,” it did influence the participants, making them more likely to find the idea of sacrificing one person to save five acceptable or unacceptable, depending on the stance the statement took.

Some of the participants were told the advice came from a bot, while others were told it came from a human “moral advisor.”

This was designed to test whether the perceived source of the advice changed how much people were influenced.

Most participants downplayed how much influence the statement had, with 80 percent claiming they would have made the same judgment without the advice.

The study concluded that users “underestimate the influence of ChatGPT and adopt its arbitrary moral stance as their own,” adding that the chatbot “threatens to corrupt rather than promises to improve moral judgment.”

The study used an older version of the software behind ChatGPT, which has since been updated to become even more powerful.