As OpenAI’s Sora blows us away with AI-generated videos, the information age is over – let the disinformation age begin

AI video generation is nothing new, but with the advent of OpenAI’s Sora text-to-video tool, it has never been easier to create your own fake news.

Sora’s photorealistic capabilities have surprised many of us. While we’ve seen AI-generated video clips before, there’s a degree of accuracy and realism to these incredible Sora video clips that makes me… a little nervous, to say the least. It’s undoubtedly very impressive, but it’s not a good sign that my first reaction to that video of puppies playing was immediate concern.

It’s a little disturbing that the possible harbinger of the destruction of truth has arrived in the form of golden retriever puppies. (Image credit: OpenAI)

Our dear editor-in-chief Lance Ulanoff wrote an article earlier this year discussing how AI will make it impossible to distinguish truth from fiction by 2024, though at the time he was mainly talking about image-generation software. Soon everyone will be able to get their hands on a simple, easy-to-use tool for producing full-length video clips. Combined with the existing power of AI voice-deepfake software, the potential for politically motivated video impersonation is greater than ever.

‘Fake news!’ shouted the AI-generated Trump avatar

Now, I don’t just want to spread endless fear here about the dangers of AI. Sora isn’t widely available yet (currently by invitation only), and I truly believe AI has plenty of use cases that could improve human lives; implementations in medical and scientific professions could potentially serve to take away some of the busy work that doctors and researchers have to deal with, making it easier to cut through the chaff and get to the important stuff.

Unfortunately, as with Adobe Photoshop before it, Sora and other generative AI tools will be used for nefarious purposes. To deny this is to deny human nature. We’ve already seen Joe Biden’s voice hijacked for robocall scams – how long will it be before ersatz videos of political figures start flooding social media?

It only takes one person with malicious intent for an AI tool to become dangerous. (Image credit: Shutterstock)

Sora, just like OpenAI’s flagship AI product ChatGPT, probably won’t be the instrument used to produce these counterfeits. Sora and ChatGPT both have a host of safeguards in place to prevent them from being used to produce content that violates OpenAI’s user guidelines. For example, prompts that request explicit sexual content or the likeness of others will be rejected. In its defense, OpenAI says it plans to “continue engaging policymakers, educators, and artists around the world to understand their concerns.”

However, there are ways around these guardrails – I tested this myself, and the results were sometimes hilarious – and OpenAI’s transparent approach to AI development means that Sora imitators will surely pop up everywhere in the not-too-distant future. These knockoffs (just like the chatbots based on ChatGPT) won’t necessarily have the same safety and security features.

Robocop, meet Robocriminal

AI tools are already being used for a lot of unsavory things online. Some of it is relatively harmless; if you want to have a steamy R-rated conversation with an AI pretending to be your favorite anime character, I’m not here to judge you (well, maybe I am a little, but at least it’s not a crime). Elsewhere, however, bots are being used to scam vulnerable internet users, spread disinformation online, and scrape social media platforms for people’s personal data.

The power of something like Sora could make this even worse, enabling even more sophisticated fakery. It’s not just about what the AI can create, remember – it’s about what a talented video editor can do with the raw footage from a tool like Sora. A little tweaking here, a filter there, and suddenly we have grainy phone-camera footage of a prominent politician beating up a homeless man in an alley. Don’t even get me started on how high-quality AI video generation is practically guaranteed to disproportionately impact women, given the recent online trend of AI-powered “revenge porn.”

The worst part? It’s only becoming more difficult to distinguish the fakes from the real thing. Despite what some AI proponents may tell you, there is currently no reliable way to definitively confirm whether footage was generated by AI.

OpenAI CEO Sam Altman has previously come under fire for misuse of ChatGPT by malicious third-party groups. (Image credit: JASON REDMOND/AFP via Getty Images)

Software for this does exist, but it doesn’t have a great track record. When Scribbr tested several AI detection tools, it found that the paid software with the highest success rate (Winston AI) was correct only 84% of the time, while the most accurate free AI detector (Sapling) managed just 68%. This software may improve over time, but the breakneck pace of generative AI development could easily outstrip it – and there’s always the risk of false positives.

Sure, many AI-produced videos and images can easily be identified as such by a seasoned internet user, but the average voter doesn’t have eyes that sharp, and the telltale signs – usually strange shapes around human digits and limbs, or unrealistic camera movements – will only fade away as the technology improves. Sora represents a huge leap forward, and I’m honestly a little worried about what the next big leap will look like.

The age of disinformation

When we discuss AI deepfakes and scams, we often do so on a fairly macro scale: AI influencing upcoming elections, an AI-generated CFO stealing $25 million, and AI art winning a photography competition are all good examples. But while the idea of secret AI senators and top executives worries me, it’s on the small scale where lives will truly be ruined.

If you’ve ever sent a nude photo, congratulations: your jealous ex can now scour your social media for more material and turn it into a full-fledged sex tape. Accused of a crime, but you have a video recording that exonerates you? Tough luck – the court’s AI detection software gave a false positive, and now you’re hit with an additional charge for fabricating evidence. Individuals stand to lose the most when faced with emerging technologies like Sora – I don’t much care that major companies are losing money because of a hallucinating chatbot.

We live in a time when the entire body of human knowledge is almost entirely accessible to us from the little rectangles in our pockets, but AI threatens to poison the well. It’s nothing new – this isn’t the first threat to the truth the internet has faced, and it won’t be the last – but it could very well be the most devastating yet.

Giving up already?

Of course, you could say ‘same sh*t, different day’ about all this – and you wouldn’t be wrong. Scams and disinformation are nothing new, and the targets of technologically enhanced deception haven’t changed: mainly the very young and the very old, those who haven’t yet learned enough about technology or who can’t keep up with its relentless march.

I hate this argument though. It’s defeatist and doesn’t take into account the enormous power and scalability that AI tools can put in the hands of scammers. Snail mail fraud has been around for decades, but let’s be honest: it takes a lot more time and effort than instructing a bot to write and send thirty thousand phishing emails.

The rise of AI has made online phishing fraud faster, easier and more widespread than ever before. (Image credit: Shutterstock)

Before I wrap this up, I want to make one thing clear, because my social inbox invariably gets clogged with angry AI enthusiasts whenever I write an article like this: I’m not blaming AI for this. I don’t even blame the people who make it. OpenAI appears to be taking a more cautious and transparent approach to deep-learning technology than I’d expect from many of the world’s largest companies. What I want is for people to be fully aware of the dangers, because text-to-video AI is simply the latest trick in the bad actors’ playbook.

If you’d like to try Sora for yourself (hopefully for healthier purposes), you can create an OpenAI account today by following this link – but keep in mind that the software isn’t available yet unless you have an invite. OpenAI is taking a cautious approach this time around, with the first wave of testers consisting mainly of ‘red teamers’ who are stress-testing the tool to eliminate bugs and vulnerabilities. There’s no official release date yet, but it’s probably not far off if OpenAI’s previous releases are anything to go by.
