A law professor has been falsely accused of sexually harassing a student in reputation-destroying misinformation shared by ChatGPT, it is alleged.
US criminal defense attorney Jonathan Turley has expressed concern about the dangers of artificial intelligence (AI) after he was falsely accused of sexual harassment during a trip to Alaska that he never made.
To reach this conclusion, ChatGPT allegedly relied on a cited Washington Post article that was never written, quoting a statement that was never issued by the newspaper.
The chatbot also believed that the ‘incident’ took place while the professor was on a faculty where he had never taught.
In a tweet, the George Washington University professor said: “Yesterday, President Joe Biden stated that ‘it remains to be seen’ whether artificial intelligence (AI) is ‘dangerous’. I would like to differ…
Professor Jonathan Turley was falsely accused of sexual harassment by AI-powered ChatGPT
“I learned that ChatGPT falsely reported a sexual harassment claim that was never made against me on a trip that never happened while I was on a faculty where I never taught.
“ChatGPT relied on a quoted Post article that was never written and quotes a statement that was never made by the newspaper.”
Professor Turley first discovered the allegations against him after receiving an email from a fellow professor.
UCLA professor Eugene Volokh had asked ChatGPT to find “five examples” where “professor sexual harassment” was a “problem in US law schools.”
In an article for USA Today, Professor Turley wrote that he was on the list of the accused.
The bot reportedly wrote: “The complaint alleges that Turley made ‘sexually suggestive remarks’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska (Washington Post, March 21, 2018).”
This allegedly happened while Professor Turley was employed at the Georgetown University Law Center – a place where he had never worked.
“It wasn’t just a surprise to UCLA professor Eugene Volokh, who conducted the research. It came as a surprise to me, as I’ve never taken students to Alaska, The Post has never published such an article, and I’ve never been accused of sexual harassment or assault by anyone,” he wrote for USA Today.
To support its fabricated claims, the AI bot cited a Washington Post article that was never written
The false claims were subsequently investigated by the Washington Post, which found that Microsoft’s GPT-4-powered Bing had shared the same claims about Turley.
The slander was repeated after press coverage highlighted ChatGPT’s initial mistake, demonstrating how easily misinformation can spread.
Following the incident, Microsoft’s senior communications director Katy Asher told the publication that the company had taken steps to ensure its platform was accurate.
She said, “We have developed a security system including content filtering, operational monitoring and abuse detection to provide our users with a safe browsing experience.”
Professor Turley responded on his blog: “You can be slandered by AI and these companies just shrug, saying they are trying to be accurate.
“Meanwhile, their false accounts spread all over the internet. By the time you hear a false story, the trail is often cold at its origin with an AI system.
“You have no clear path or author to seek redress. You’re left with the same question as Reagan’s Secretary of Labor, Ray Donovan, who asked: ‘Where do I go to get my reputation back?’”
MailOnline has approached OpenAI, the developer of ChatGPT, and Microsoft for comment.
Professor Turley’s experience follows previous concerns that ChatGPT has not consistently provided accurate information.
Professor Turley’s experience has added to fears about misinformation spreading online
Investigators previously found that ChatGPT cited fake journal articles and fabricated health data to support claims about cancer.
The platform answered one in ten breast cancer screening questions incorrectly, it was claimed, and its results were not as “extensive” as those found through a Google search.
Jake Moore, global cybersecurity advisor at ESET, warned that ChatGPT users shouldn’t take everything as “gospel” in order to avoid the dangerous spread of misinformation.
He told MailOnline, “AI-driven chatbots are designed to rewrite data entered into the algorithm, but when this data is incorrect or out of context there is a chance that the output will misrepresent what it has been taught.
“The pool of data from which it has learned is based on data sets from Wikipedia and Reddit, among others, which in essence cannot be taken as gospel.
“The problem with ChatGPT is that it cannot verify the data, which could contain misinformation or even bias. Even worse is when AI makes assumptions or falsifies data. In theory, this is where the ‘intelligence’ part of AI is meant to take over autonomously and create data outputs with confidence. If this is harmful, as in this case, it could be its downfall.”
These fears also come at a time when researchers are suggesting that ChatGPT may corrupt people’s moral judgment and be dangerous to “naive” users.
Others have talked about how the software, which is designed to talk like a human, can show signs of jealousy and even tell people to leave their marriage.
Mr. Moore continued: “We are entering a time when we have to verify more information than ever before, but we are still only on version 4 of ChatGPT, and its competitors are on even earlier versions.
“That’s why it’s critical that people do their own due diligence based on what they read before jumping to conclusions.”