HALF of us can’t tell if copy has been written by ChatGPT or a human being

More than half of people can't identify whether words were written by AI chatbots like ChatGPT, new research has shown – and Generation Z is the worst at spotting it.

Researchers found that 53 percent of people did not spot the difference between content produced by a human, by an AI, or by an AI and then edited by a human.

Among young people aged 18-24, only four in ten could tell the difference, while people aged 65 and over were able to correctly recognize AI content more than half the time.

It comes amid fears that ChatGPT and similar bots could threaten white-collar jobs.

A new study has found that only four in ten people aged 18 to 24 can tell the difference, while those over 65 aren’t easy to fool – 52 percent of this group correctly identified AI-generated content

Robert Brandl, CEO and founder of the web tool review company Tooltester, which conducted the latest survey, told DailyMail.com: “The fact that younger readers were less good at identifying AI content was surprising to us.

“It could indicate that older readers are currently more cynical about AI content, especially since it has been so prominent in the news lately.

“Older readers will have a broader base of knowledge to draw on, and can better compare what they read against their own sense of how a question would be answered, simply because they have been exposed to such information for many years.

“A University of Florida study showed that younger audiences are just as susceptible to fake news online as older generations, and therefore being young and potentially more tech savvy is no defense against being misled by online content.”

The survey also found that people believe warnings should be given when AI has been used to produce content.

It involved 1,900 Americans who were asked to determine whether writing was created by a human or an AI, with content in a variety of fields, including health and technology.

Familiarity with the idea of “generative AI” like ChatGPT seemed to help – only 40.8 percent of those who were completely unfamiliar with ChatGPT were able to correctly identify AI content.

More than eight in ten people (80.5 percent) think that companies that publish blogs or news articles should warn readers if AI has been used.

More than seven in ten (71.3 percent) said they would trust a company less if it used AI-generated content without being clear about it.

“The results seem to show that the general public may need to rely on AI disclosures online to know what is and what is not AI-created, as people cannot tell the difference between human and AI-generated content,” Brandl said.

“We were surprised to see how readily people took AI writing to be a human creation. The data suggests that many resorted to guessing because they just weren’t sure and couldn’t tell.”

Brandl also said that many in the study seemed to assume that all copy was AI-generated, and that such caution may be helpful.

Tools like ChatGPT are notorious for introducing factual errors into documents – and in recent weeks cybersecurity researchers have warned that they can also be used as tools for fraud.

The research comes as cybersecurity researchers have warned of a coming wave of AI-written phishing attacks and fraud.

In tests, people can’t tell if ChatGPT or a human wrote the text

Young people were the easiest to fool, the study found

Cybersecurity firm Norton has warned that criminals are turning to AI tools like ChatGPT to create “decoys” to defraud victims.

A report in New Scientist suggested that using ChatGPT to generate emails could reduce costs for cybercriminal gangs by up to 96 percent.

“We found that readers often assumed that any text, be it human or AI, was AI generated, which could reflect the cynical attitude people currently have towards online content,” said Brandl.

“This may not be such a terrible idea, as generative AI technology is far from perfect and can contain many inaccuracies. A prudent reader is likely to be less inclined to blindly accept AI content as fact.”

The researchers found that people’s ability to recognize AI-generated content varied by industry – AI-generated health content was the most likely to mislead readers, with 56.1 percent mistakenly thinking it was written by a human or edited by a human.

Technology was the industry where people found AI-generated content the easiest to identify, with 51 percent correctly spotting it.

ChatGPT averaged 13 million daily users in January, making it the fastest-growing internet app of all time, according to analytics firm Similarweb.

It took TikTok about nine months after its global launch to reach 100 million users and Instagram more than two years.

OpenAI, a private company backed by Microsoft Corp., made ChatGPT available to the public for free at the end of November.
