Reports said actor died after surgery to look Korean. Was he AI?

Seoul, South Korea – The news was buzzing in the media: Saint Von Colucci, a 22-year-old Canadian-Portuguese actor, singer and songwriter with appeal in the South Korean entertainment scene, had died after undergoing surgeries to look like a K-pop star.

Von Colucci allegedly underwent 12 plastic surgeries, costing more than $200,000, to resemble BTS member Jimin and overcome discrimination “against his Western features.” He is said to have recently landed a role in an upcoming Korean drama.

The only problem is that von Colucci may never have existed.

Saint Von Colucci reportedly died after undergoing multiple surgeries to look like Jimin from K-pop group BTS [Daily Mail Online]

A body of evidence suggests he is the product of an elaborate hoax using artificial intelligence that fooled dozens of media outlets stretching from the United States and Canada to the United Kingdom, South Korea, India, Malaysia and the Philippines.

The debacle appears to be the first known instance of AI being used to trick mass media outlets into spreading misinformation, ushering in a new era of computer-generated fake news.

“Misinformation and disinformation generated using AI tools are certainly a cause for concern as they will make life more difficult for fact-checkers and journalists,” Felix M Simon, a journalist and PhD student at the Oxford Internet Institute, told Al Jazeera.

The saga began earlier this week when journalists around the world received a press release announcing that von Colucci had died on April 23 at a hospital in Seoul.

The press release, which was written in clumsily worded English, claimed to be from a public relations firm called HYPE Public Relations.

However, the press release contained numerous red flags.

The press release announcing Saint Von Colucci’s death carried numerous red flags [Courtesy: Instagram/papaxxzy]

Many web links in the document failed to load, including a link to von Colucci’s alleged Instagram account, and the hospital mentioned in the press release does not exist.

HYPE, which lists WeWork offices in London and Toronto as its headquarters, has a website that appears unfinished and was registered just a few weeks before von Colucci’s death.

When Al Jazeera tried to call HYPE at the number listed, no one answered. Al Jazeera later received a text message from the number saying, “Wtf do you want.”

Other than the press release, there is little evidence that Von Colucci is a real person.

Despite being described as a songwriter for a number of K-pop stars, Von Colucci had no significant online presence and no one has come forward to publicly mourn his death.

What online footprint does exist raises more questions.

Photos of Von Colucci found online are blurry and contain strange features, including deformed hands in at least one instance – a telltale sign of AI-generated imagery.

AI-image detection software, while not foolproof, indicates that some of the photos have a high probability of having been produced or edited with AI tools. Al Jazeera could not independently verify the authenticity of the images.
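The report does not name the detection software that was used. As a rough illustration of what such a check involves, the sketch below runs an image through an off-the-shelf image classifier; the model identifier and file path are placeholders for illustration only, not the tools referenced in this article.

```python
# A minimal sketch of an automated AI-image check, assuming a locally runnable
# classifier fine-tuned to flag AI-generated images. The model name and file
# path below are hypothetical placeholders.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/ai-image-detector",  # hypothetical model identifier
)

results = detector("von_colucci_photo.jpg")  # hypothetical local image file
for r in results:
    # Each entry is a dict such as {"label": "artificial", "score": 0.97};
    # the labels depend entirely on the model chosen.
    print(f"{r['label']}: {r['score']:.2%}")
```

Such classifiers output only a probability, which is why detection results are treated as one signal among many rather than proof on their own.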

Von Colucci’s claimed music repertoire, including the album “T1K T0K H1GH SCH00L”, is not available on any mainstream music streaming service.

In a press release distributed last year, Von Colucci was described as “the second son of Geovani Lamas, the CEO of IBG Capital, Europe’s largest hedge fund company”.

Geovani Lamas has no official presence online, while the top search result for IBG Capital is an investment company in the US state of Arizona.

Von Colucci’s images show telltale signs of AI manipulation [Courtesy: Instagram/papaxxzy]

In another twist, the K-pop star wannabe’s Instagram page was reactivated this week, with one comment edited two days after his reported death. The comment has since been removed.

The litany of red flags didn’t stop the media from rushing to cover Von Colucci’s bizarre demise, including sensational pre- and post-surgery photos that showed his transformation from a white man into a person with East Asian features.

After Daily Mail Online reported the story, it was quickly picked up by media outlets around the world.

Daily Mail Online quietly removed its article on Wednesday without any explanation or retraction notice.

The story remains on the websites of dozens of other outlets, including The Independent in the UK, the Hindustan Times in India, the Malay Mail in Malaysia and Newsis in South Korea.

The Canadian embassy in Seoul declined to comment when contacted by Al Jazeera.

South Korean media have reported that police received no report of a Canadian actor dying from plastic surgery complications.

The apparent hoax is a stark reminder of AI’s potential, still in its infancy, to blur truth and fiction, especially as declining media revenues and shrinking newsrooms raise existential questions about the future of professional journalism and news.

ChatGPT has raised concerns about the possibility of spreading misinformation [File: Florence Lo/Reuters]

Platforms like ChatGPT, which can write entire articles in a human voice, already allow anyone to create, with just a few clicks, compelling news stories that can be used for political manipulation and conspiracy theories.

AI can also already be used for “deep fakes” that manipulate videos and images of real people, giving bad actors opportunities to disrupt elections, damage reputations, create revenge porn and even incite violence.

AI-generated content has previously been accused of misleading people in large numbers.

Manipulated photos of Pope Francis in a white puffer jacket and of the arrest of former US President Donald Trump recently went viral on social media.

But the case of von Colucci appears to be the first example of journalists being defrauded on a large scale, exposing deficiencies in editorial standards and basic fact-checking.

Still, Simon of the Oxford Internet Institute expressed his optimism that AI-generated fake news would not have a catastrophic effect on public discourse.

“The main problem with misinformation and disinformation is the demand for it – which is limited – and the ability to reach people by getting it into the mainstream – which is difficult. The ability to generate more and/or higher quality mis- and disinformation is unlikely to change this,” he said.

“Moreover, we have fairly decent mechanisms of epistemic vigilance — judging context, source, comparing information to previous information, for example — that are likely to adapt and work against new forms or attempts to mislead us.”
