The rise of generative artificial intelligence tools that help people efficiently produce novel, detailed online reviews with almost no work has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say.
Fake reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded on private social media groups between fake review brokers and companies willing to pay. Sometimes such reviews are initiated by companies that offer customers incentives such as gift cards for positive feedback.
But AI-infused text generation tools, popularized by OpenAI’s ChatGPT, enable fraudsters to produce reviews faster and in greater numbers, according to technology industry experts.
The deceptive practice, which is illegal in the U.S., is carried out year-round but becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews to buy gifts.
Fake reviews can be found in a wide range of industries, from e-commerce, accommodation and restaurants to services such as home repairs, medical care and piano lessons.
The Transparency Company, a technology company and watchdog group that uses software to detect fake reviews, said it started seeing AI-generated reviews appearing in large numbers in mid-2023 and that they have been multiplying since then.
For a report released this month, The Transparency Company analyzed 73 million reviews across three sectors: home care, legal services and medical services. Nearly 14% of reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were generated in whole or in part by AI.
“It’s just a really, really good tool for these review scammers,” said Maury Blackman, an investor and advisor to tech startups who has reviewed The Transparency Company’s work and will lead the organization starting Jan. 1.
In August, software company DoubleVerify said it observed a “significant increase” in mobile phone and smart TV apps with reviews created by generative AI. The reviews were often used to trick customers into installing apps that could hijack devices or continuously display ads, the company said.
The following month, the Federal Trade Commission sued the maker of an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the market with fraudulent reviews.
The FTC, which this year banned the sale or purchase of fake reviews, said some Rytr subscribers used the tool to produce hundreds, and possibly thousands, of reviews for garage door repair companies, sellers of “replica” designer handbags and other businesses.
Max Spero, CEO of AI detection company Pangram Labs, said his company’s software detected with near certainty that some AI-generated reviews posted on Amazon rose to the top of review search results because they were so detailed and appeared to be well thought out.
But determining what is fake and what is not can be a challenge. Third parties may fall short because they don’t have “access to data signals that indicate patterns of abuse,” Amazon said.
Pangram Labs has performed detection for some prominent online sites, which Spero declined to name due to non-disclosure agreements. He said his analyses of Amazon and Yelp were done independently.
Many of the AI-generated comments on Yelp appeared to be posted by individuals trying to publish enough reviews to earn an “Elite” badge, which is intended to let users know they should trust the content, Spero said.
The badge gives access to exclusive events with local entrepreneurs. Fraudsters also want it so their Yelp profiles look more realistic, said Kay Dean, a former federal investigator who heads a watchdog group called Fake Review Watch.
To be fair, just because a review is AI-generated doesn’t necessarily mean it’s fake. Some consumers may experiment with AI tools to generate content that reflects their genuine feelings. Some non-native English speakers say they’re turning to AI to ensure they use accurate language in the reviews they write.
“It can help reviews (and) make them more informative if it comes from good intentions,” said Michigan State University marketing professor Sherry He, who has researched fake reviews. She said tech platforms should focus on the behavioral patterns of bad actors, which prominent platforms already do, rather than discouraging legitimate users from turning to AI tools.
Prominent companies are developing policies for how AI-generated content fits into their systems for removing fake or offensive reviews. Some already use algorithms and research teams to spot and remove fake reviews but give users some flexibility to use AI.
Spokespeople for Amazon and Trustpilot, for example, said they would allow customers to post AI-assisted reviews as long as they reflect their real-world experience. Yelp has taken a more cautious approach, saying its guidelines require reviewers to write their own copy.
“With the recent increase in consumer adoption of AI tools, Yelp has made significant investments in methods to better detect and limit such content on our platform,” the company said in a statement.
The Coalition for Trusted Reviews, which Amazon, Trustpilot, Glassdoor and travel sites Tripadvisor, Expedia and Booking.com launched last year, said that while fraudsters can put AI to deceptive use, the technology also offers “an opportunity to push back against those who try to use reviews to deceive others.”
“By sharing best practices and raising standards, including the development of advanced AI detection systems, we can protect consumers and maintain the integrity of online reviews,” the group said.
The FTC’s rule banning fake reviews, which took effect in October, allows the agency to fine businesses and individuals who engage in the practice. Tech companies that host such reviews are shielded from the penalty because, under U.S. law, they are not legally liable for the content outsiders post on their platforms.
Technology companies including Amazon, Yelp and Google have sued fake review brokers, accusing them of selling fake reviews on their sites. The companies say their technology has blocked or removed a huge number of suspicious reviews and suspicious accounts. However, some experts say they could do more.
“Their efforts so far are not nearly enough,” said Dean of Fake Review Watch. “If these tech companies are so committed to eliminating review fraud on their platforms, how come I, one person working without automation, can find hundreds or even thousands of fake reviews on any given day?”
Consumers can try to spot fake reviews by watching for a few possible warning signs, researchers said. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product’s full name or model number is another potential giveaway.
When it comes to AI, research by Yale professor of organizational behavior Balázs Kovács has shown that humans cannot tell the difference between AI-generated and human-written reviews. Some AI detectors can also be fooled by shorter texts, which are common in online reviews, the study said.
However, there are some “AI signals” that online shoppers and service seekers should take note of. Pangram Labs says reviews written with AI tend to be longer, highly structured, and contain “empty descriptors” such as common phrases and attributes. The writing also often contains clichés like “the first thing I noticed” and “game-changer.”
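For illustration only, here is a minimal sketch of how signals like these could be checked programmatically. The phrase list and thresholds are hypothetical examples invented for the sketch; this is not Pangram Labs’ method or any platform’s real detector.

```python
# Illustrative only: a naive heuristic screen for the "AI signals" described
# above (unusual length, heavy structure, cliched stock phrases). The phrase
# list and thresholds below are made-up examples, not a real detector.
import re

# Stock phrases the article cites ("the first thing I noticed",
# "game-changer"), plus hypothetical "empty descriptor" examples.
CLICHES = [
    "the first thing i noticed",
    "game-changer",
    "overall, i highly recommend",
    "in conclusion",
]

def ai_signal_score(review: str) -> float:
    """Return a rough 0-1 score; higher means more AI-like signals."""
    text = review.lower()
    words = re.findall(r"[a-z']+", text)
    score = 0.0

    # Signal 1: unusually long review (threshold is an arbitrary example).
    if len(words) > 150:
        score += 0.4

    # Signal 2: highly structured text (bulleted or numbered lists).
    if re.search(r"^\s*(?:[-*\u2022]|\d+\.)\s", review, re.MULTILINE):
        score += 0.3

    # Signal 3: cliched stock phrases.
    hits = sum(1 for phrase in CLICHES if phrase in text)
    score += min(0.3, 0.15 * hits)

    return min(score, 1.0)

if __name__ == "__main__":
    sample = (
        "The first thing I noticed was the build quality. "
        "Overall, I highly recommend this game-changer."
    )
    print(f"AI-signal score: {ai_signal_score(sample):.2f}")
```

As the research cited above suggests, simple heuristics like this are easy to fool, especially on short texts; real detection systems lean on statistical models and behavioral signals rather than keyword lists.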