A new form of bullying is creeping into Australian schools as teens have started using AI technology to create nude images of their classmates.
In early November, police received reports of male students in the US using an AI-powered website to generate ‘deepfake’ pornographic images of their classmates from photos found online.
The technology has now become so advanced that some deepfakes can no longer be distinguished from reality, which has led to ‘sextortion’ among classmates.
Students have been caught threatening to post the photos online unless a fellow student hands over money or sexual favors.
In Australia, reports are emerging of teenagers doing the same to each other, and in some cases even to their teachers.
Australian eSafety Commissioner Julie Inman Grant said she had seen a growing number of complaints about pornographic deepfakes this year.
“eSafety has seen a small but growing number of complaints about explicit deepfakes through our image-based abuse program since the start of the year,” Ms Inman Grant told news.com.au.
“The rapid deployment, increasing sophistication and popular adoption of generative AI means that massive amounts of computing power or masses of content are no longer required to create convincing deepfakes.
“Deepfakes, especially deepfake pornography, can be devastating to the person whose image is hijacked and sinisterly altered without their knowledge or consent.”
Ms Inman Grant said recent advances in the technology have given cybercriminals more ways to exploit it to create ever more convincing images.
There is no easy way to combat sextortion, which can leave victims feeling they have no recourse.
However, according to Ms Inman Grant, the eSafety office has an 87 per cent success rate in having deepfakes removed from the internet once they are reported.
Australian eSafety Commissioner Julie Inman Grant (pictured) has said her department has seen a spike in reports of children claiming to have been extorted through fake images of themselves
Melbourne-based AI expert Anuska Bandara said the recent spike in deepfakes could be linked to the wave of AI hype that followed the release of OpenAI’s ChatGPT in November 2022.
According to Mr Bandara, the danger of the technology lies in the fact that victims are left powerless when threatened with fake nude images of themselves.
Scammers are also known to turn AI technology against people they have never met.
“The real individuals have no control over what deepfakes, created using advanced AI techniques, might communicate. Using this technology, scammers are using deepfakes to influence unsuspecting individuals, placing them in dangerous situations or even engaging in the distribution of explicit content,” he said.
Mr Bandara highlighted that deepfakes have increased the risks associated with uploading images online, making it all the more important for people to keep their accounts private.
Scammers have used AI technology before, going as far as cloning the voices of social media users and then calling their targets’ parents to beg for money.
The so-called ‘family emergency’ scam can be pulled off using just three seconds of audio, easily extracted from a social media clip, to clone a person’s voice, a McAfee investigation has found.
The same study found that one in four respondents had some experience of an AI voice-cloning scam, and one in ten said they had been personally targeted.
Criminals typically ask for the money to be sent via a cryptocurrency such as Bitcoin, as such payments are difficult to trace, limiting the ability to track down the scammers.
Richard Mendelstein, a software engineer at Google, lost $4,000 after receiving a disturbing phone call in which his daughter appeared to be screaming for help.
He was then told by her ‘kidnappers’ to withdraw $4,000 in cash as ransom.
Mendelstein wired the money to Mexico City and only later realized that he had been scammed and that his daughter had been safe at school the whole time.