Aussie student’s horror after Googling her own name – and her life will never be the same again

When Noelle Martin first Googled herself in 2012 at the age of 17, she had no idea that a decade later she would still be battling the horror of what she found.

The then schoolgirl from Perth, Western Australia, discovered that her face had been photoshopped into a series of pornographic images.

These “deepfakes” — images and videos digitally created or modified using artificial intelligence or machine learning — looked disturbingly realistic.

Noelle Martin (pictured) was shocked to discover in 2012 that her face had been photoshopped into pornographic images posted on a series of websites

Still fighting to have the images removed, the 28-year-old declares: ‘You can’t win’

To this day, Ms Martin, now 28, says she does not know who created the fake images, or the videos of her having intercourse that she would later find.

She suspects someone took a photo posted on her social media page or elsewhere and turned it into porn.

Horrified, Ms Martin contacted a number of websites over several years in an attempt to have the images removed. Some did not respond. Others took the images down, only for them to reappear soon after.

“You can’t win,” Ms Martin said. “This is something that will always be there. It’s like it ruined you forever.”

The more she spoke out, she said, the more the problem escalated. Some people even told her that the way she dressed and posted images on social media had contributed to the harassment, essentially blaming her for the images rather than their creators.

Ultimately, Ms Martin turned her attention to legislation, calling for a national law in Australia that would fine companies $555,000 if they fail to comply with takedown notices for such content from online safety regulators.

In 2018, the Perth lawyer helped reform laws criminalizing the distribution of non-consensual intimate images

In 2019, she was named the WA recipient of the Young Australian of the Year award for her campaign.

But Ms. Martin acknowledges that it’s almost impossible to control the internet when countries have their own laws for content that is sometimes created on the other side of the world.

Ms Martin, now a lawyer and legal researcher at the University of Western Australia, believes the problem needs to be tackled through some kind of global solution.

“This is something you can’t escape because it’s a permanent, lifelong form of abuse,” she told news.com.au.

“They literally rob you of your right to self-determination, effectively, because they obscure you and your name and your image and permanently violate you.”

Ms Martin stars in the new SBS show Asking For It, which “examines the contemporary sexual revolution and seeks to bring about an era of ‘enthusiastic consent’ at a time when millions of Australians live with an epidemic of sexual violence”.

Image-based sexual abuse, as the criminal practice is now called, is booming — and experts fear recent advancements in artificial intelligence (AI) will make it even worse.

“The reality is that technology will continue to spread, continue to evolve, and will continue to be as simple as pressing a button,” said Adam Dodge, the founder of EndTAB, a group that provides training on technology-enabled abuse.

“And as long as that happens, people will no doubt… continue to misuse that technology to harm others, primarily through online sexual assault, deepfake pornography, and fake nudes.”

Meanwhile, the companies behind some AI models say they are already restricting access to explicit images.

OpenAI says it has removed explicit content from data used to train the DALL-E image generation tool, limiting users’ ability to create these types of images.

Ms Martin stars in the new SBS show Asking For It, which explores sexual consent

The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians.

Midjourney, another AI image generator, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

The startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator, Stable Diffusion.

Those changes came in response to reports that some users were taking celebrity-inspired nude photos using the technology.

Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques, such as image recognition, to detect nudity and returns a blurred image.

But because the company releases its code to the public, it is possible for users to manipulate the software and generate whatever they want.

Some social media companies have also tightened their rules to better protect their platforms from harmful materials.

TikTok said last month that any deepfakes or manipulated content that depict realistic scenes should be labeled to indicate they are fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed.

In 2019, Ms Martin was named the WA recipient of the Young Australian of the Year award for her campaign for justice for victims of image-based sexual abuse

The livestreaming platform Twitch also recently updated its policy on explicit deepfake images after a popular streamer known as Atrioc was caught during a livestream in late January with a deepfake porn website open in his browser.

The site featured fake images of fellow Twitch streamers.

Twitch already banned explicit deepfakes, but now showing glimpses of such content — even if intended to express outrage — “will be removed and will result in enforcement,” the company wrote in a blog post.

And intentionally promoting, creating or sharing the material is grounds for an immediate ban.

Research on deepfake porn remains limited, but a report released in 2019 by the AI company DeepTrace Labs found that it was almost entirely weaponized against women, and that the most targeted were Western actresses, followed by South Korean K-pop singers.

In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down, which allows teens to report explicit images and videos of themselves so they can be removed from the internet. The reporting site works for regular images as well as AI-generated content, which has become a growing concern for child safety groups.

“When people ask our senior leadership, what are the boulders coming off the hill that we are concerned about? The first is end-to-end encryption and what that means for child protection. And AI, and deepfakes in particular, come second,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which runs the Take It Down tool.

“We haven’t been able to formulate a direct answer to it yet,” said Portnoy.
