‘Inappropriate images’ circulate at yet another California high school, as officials grapple with how to protect teens from AI porn created by classmates

A third school in Southern California has been hit with allegations of digitally manipulated images of students circulating on campus.

The Los Angeles Unified School District has launched a formal investigation into claims that “inappropriate images” were created and shared by students at Fairfax High School.

LAUSD officials claim the images were distributed via a “third-party messaging app” not affiliated with the district.

“These allegations are taken seriously, do not reflect the values of the Los Angeles Unified community, and will result in appropriate disciplinary action if warranted,” the district wrote in a statement Wednesday.

While officials declined to say whether the images in question were created using artificial intelligence, they asserted that the district “remains steadfast in providing training on the ethical use of technology – including AI.”

Additionally, they wrote, “LAUSD is committed to improving education around digital citizenship, privacy and safety for everyone in our school communities.”

The incident is the latest in a series of scandals to plague schools in the Southern California region.

Just last month, administrators at Laguna Beach High School launched an investigation after a student allegedly created and distributed “inappropriate images” of his classmates.

In an email to parents on March 25, Principal Jason Allemann wrote that school officials were “taking steps to investigate this issue and address it directly with those involved.”

At the same time, he added, the issue should be seen as “a learning opportunity for our students, reinforcing the importance of responsible behavior and mutual respect.”

Laguna Beach police are assisting with the ongoing investigation, which follows a separate investigation by the Beverly Hills Police Department into the spread of “deepfakes” at a local middle school.

The Beverly Hills Unified School District issued a statement to parents when artificially generated nude photos of students began circulating around Beverly Vista Middle School in late February.


“Sixteen eighth-grade students were identified as victims, along with five eighth-grade students who were involved,” wrote Superintendent Michael Bregy.

While Bregy acknowledged that children are “still learning and growing, and that mistakes are part of the process,” he confirmed that disciplinary action had been taken and noted that the incident was quickly brought under control.

The district vowed to hold accountable any other students who “created, distributed or possessed these types of AI-generated images.”

The board of education voted to expel the five eighth-graders, whose names have not been released, during a special meeting on March 6.

The term “deepfake,” a portmanteau of “deep learning” and “fake,” emerged on Reddit in 2017.

It originally referred to videos posted by a user who employed artificial intelligence to superimpose the faces of celebrities onto pornographic clips.

Today, the term is used broadly to refer to AI-generated media that has been digitally manipulated to replace the likeness of one person with that of another.

The images produced are often so convincing that deepfake technology has caused widespread alarm among celebrities and politicians alike.

Recently, New York Rep. Alexandria Ocasio-Cortez recounted the traumatic experience of seeing an AI-generated pornographic video of herself earlier this year.

The experience inspired her to introduce legislation in the House of Representatives that would allow victims of deepfakes to take civil action against the producers and distributors of the offending content.

“Victims of non-consensual pornographic deepfakes have waited too long for federal legislation to hold perpetrators accountable,” Ocasio-Cortez wrote in a statement last month:

“Now that deepfakes are easier to access and create – 96% of deepfake videos circulating online are non-consensual pornography – Congress must act to show victims that they will not be left behind.”

The bill, called the DEFIANCE Act, would create a federal civil right of action for victims of so-called “digital forgery.”

It was introduced in the House of Representatives on March 7 and referred to the House Judiciary Committee.