AI is being used to transform real photos of children into sexualised images

Pedophiles are using a hot new artificial intelligence (AI) platform to turn real photos of children into sexualized images, it has been revealed.

The discovery has led to warnings for parents to be careful about which photos of their children they post online.

The images were found on the American AI image generator Midjourney, which, like ChatGPT, uses prompts to produce an output, although this usually consists of images rather than words.

The platform is used by millions and has produced such realistic images that it has fooled people all over the world, including users on Twitter.

An image of Pope Francis donning a huge white puffer jacket with a cross hanging around his neck caused a frenzy among social media users earlier this year.

Fake images of Donald Trump’s arrest and “The Last Supper Selfie” have also been created using the platform.

WHAT IS MIDJOURNEY AI?

Midjourney is an online image generator that, like ChatGPT, uses prompts to produce output.

However, this output usually consists of images rather than words.

Last year, the platform received a backlash when a computer-generated image won first place in a US art contest.

The AI artwork, called Théâtre D’opéra Spatial, was submitted by Jason Allen, who said he used Midjourney to create the stunning scenes that seem to combine medieval times with a futuristic world.

The program recently released a new version of its software that has increased the photorealism of its images, which has only increased its popularity.

An investigation by The Times found that some Midjourney users are creating large numbers of sexualized images of children, as well as of women and celebrities.

Among these are explicit deepfake images of Jennifer Lawrence and Kim Kardashian.

Users submit prompts through the Discord chat platform, and the resulting images are then uploaded to a public gallery on the Midjourney website.

While the company says content should be ‘PG-13 and family-friendly’, it also warns that because the technology is new, it ‘does not always work as expected’.

Nevertheless, the explicit images that were found violate both Midjourney’s and Discord’s terms of use.

While virtual images of child sexual abuse are not illegal in the US, in England and Wales such content – known as non-photographic images – is banned.

The NSPCC’s head of online child safety policy, Richard Collard, said: ‘It is completely unacceptable that Discord and Midjourney are actively facilitating the creation and hosting of degrading, abusive and sexualized images of children.

‘In some cases this material would be illegal under UK law, and by hosting child abuse content they are putting children at a very real risk of harm.’

He added: ‘It is incredibly distressing for parents and children to have their images stolen and modified by offenders.

‘By only posting photos to trusted contacts and managing their privacy settings, parents can reduce the risk of images being used in this way.

‘But ultimately, technology companies must take responsibility for tackling the use of their services by offenders.’

Responding to The Times’ findings, Midjourney said it would ban users who broke its rules.

CEO and founder David Holz added: ‘In recent months we have been working on a scalable AI moderator, which we started testing with users last week.’

A Discord spokesperson told The Times: ‘Discord has a zero-tolerance policy for promoting and sharing non-consensual sexual material, including sexual deepfakes and child sexual abuse material.’

Midjourney produces such realistic images that people are fooled. An image of Pope Francis donning a huge white puffer jacket with a cross hanging around his neck caused a frenzy among social media users earlier this year

The AI was also used to show former US President Donald Trump being arrested in New York

It has generated images of historical figures taking a selfie during well-known events, such as The Last Supper

The discovery comes amid growing concerns about pedophiles exploiting virtual reality environments.

Earlier this year, an NSPCC investigation revealed for the first time how virtual worlds such as the metaverse are being used to abuse children.

Records showed that UK police forces had recorded eight instances of virtual reality (VR) rooms being used for child sexual abuse offences.

Championed primarily by Meta’s Mark Zuckerberg, the metaverse is a series of virtual spaces where you can game, work and interact with other people who aren’t in the same physical space as you.

The Facebook founder has been a leading voice behind the concept, which is seen as the future of the internet and would blur the line between the physical and the digital.

West Midlands Police recorded five cases of metaverse abuse and Warwickshire Police one, while Surrey Police recorded two crimes – including one involving Meta’s Oculus headset, now called the Quest.

HOW TO SPOT A DEEPFAKE

1. Unnatural eye movements. Eye movements that don’t look natural, or a lack of eye movement such as an absence of blinking, are huge red flags. It’s hard to replicate blinking in a way that looks natural, and just as hard to mimic a real person’s eye movements, which usually follow the person they’re talking to.

2. Unnatural facial expressions. If something doesn’t look right on a face, it could indicate facial distortion. This happens when one image is stitched over another.

3. Awkward positioning of facial features. If someone’s face is pointing one way and their nose is pointing another, you should be skeptical about the video’s authenticity.

4. A lack of emotion. You can also spot what’s known as ‘face morphing’ or image stitching when someone’s face doesn’t seem to show the emotion that should go with what they’re supposedly saying.

5. Awkward-looking body or posture. Another sign is if a person’s body shape doesn’t look natural, or if the head and body are positioned awkwardly or inconsistently. This is perhaps one of the easier inconsistencies to spot, as deepfake technology usually focuses on facial features rather than the whole body.

6. Unnatural body movement or body shape. If someone looks distorted or off when they turn to the side or move their head, or if their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.

7. Unnatural colors. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is probably fake.

8. Hair that doesn’t look real. You will not see any frizz or flyaway hair. Why? Fake images cannot generate these individual characteristics.

9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so the lack of outlines of individual teeth may be a clue.

10. Blurring or misalignment. If the edges of images are blurry or images are misaligned, such as where someone’s face and neck meet their body, you know something is wrong.

11. Inconsistent noise or audio. Deepfake creators usually spend more time on the video than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.

12. Images that look unnatural when slowed down. If you watch a video on a screen larger than your smartphone, or if you have video-editing software that can slow down playback, you can zoom in and examine the images more closely. Zooming in on the lips, for example, will show whether the person is really talking or whether the lip-syncing is bad (a minimal script for this kind of slowed-down inspection is sketched after this list).

13. Hashtag differences. There is a cryptographic algorithm that allows creators to prove their videos are authentic. The algorithm is used to insert hashtags at certain places in a video. If the hashtags change, you should suspect video manipulation.

14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos. While not foolproof, this blockchain-based verification can help establish a video’s authenticity. Here’s how it works: when a video is created, its content is registered on a ledger that cannot be changed, which can later help prove the video is authentic (a minimal sketch of the fingerprinting step appears after this list).

15. Reverse image search. A search for the original image, or a reverse image search with the help of a computer, can unearth similar videos online to help determine whether an image, audio clip or video has been altered in any way. While reverse video search technology is not yet publicly available, investing in a tool like this could be worthwhile. In the meantime, you can extract a still frame from a suspect video and run it through an ordinary reverse image search (a sketch of that step appears below).
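To make tip 12 concrete, here is a minimal sketch of slowed-down, zoomed-in playback in Python. It assumes the OpenCV library is installed (pip install opencv-python) and uses a hypothetical local file name, suspect.mp4; the 4x slow-down and 2x zoom factors are arbitrary choices, not part of any standard.

```python
# Minimal sketch: play a video at quarter speed, enlarged 2x, for manual inspection.
# Assumes OpenCV is installed; "suspect.mp4" is a hypothetical local file.
import cv2

cap = cv2.VideoCapture("suspect.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 if FPS is unreadable
delay_ms = int(1000 / fps) * 4         # hold each frame 4x longer = quarter speed

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video or read error
    # Enlarge the frame 2x so details such as lip movement are easier to inspect
    zoomed = cv2.resize(frame, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
    cv2.imshow("slow-motion inspection", zoomed)
    if cv2.waitKey(delay_ms) & 0xFF == ord("q"):  # press q to quit early
        break

cap.release()
cv2.destroyAllWindows()
```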
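For tip 14, the core of a digital fingerprint is a cryptographic hash of the video’s contents: any edit, however small, produces a completely different hash. The sketch below shows only that hashing step, using Python’s standard library and hypothetical file names; registering the fingerprint on a blockchain ledger is a separate process not shown here.

```python
# Minimal sketch: compute a SHA-256 fingerprint of a video file and compare it
# against a previously recorded value. File names are hypothetical examples.
import hashlib

def video_fingerprint(path: str) -> str:
    """Return a SHA-256 hex digest of the file's raw bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

recorded = video_fingerprint("original.mp4")   # fingerprint registered at creation
suspect = video_fingerprint("downloaded.mp4")  # fingerprint of the file to verify
# Any alteration to the video changes the digest completely
print("matches recorded fingerprint" if suspect == recorded else "altered or different file")
```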
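And for tip 15, since reverse video search is not generally available, one workable approach is to pull a single frame out of the video and upload it to an ordinary reverse image search. The sketch below, again assuming OpenCV and a hypothetical suspect.mp4, grabs the middle frame and saves it as a PNG; choosing the middle frame is an arbitrary choice, and any clear frame showing the subject’s face would do.

```python
# Minimal sketch: extract the middle frame of a video so it can be uploaded
# to a reverse image search. Assumes OpenCV; "suspect.mp4" is hypothetical.
import cv2

cap = cv2.VideoCapture("suspect.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.set(cv2.CAP_PROP_POS_FRAMES, total // 2)  # jump to the middle of the clip

ok, frame = cap.read()
if ok:
    cv2.imwrite("frame_for_search.png", frame)  # upload this still to a reverse image search
cap.release()
```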