This is too easy – I just used Sora for the first time and it blew my mind
Sora, OpenAI’s new AI video generation platform, which finally launched on Monday, is a surprisingly rich platform that offers simple tools for generating shockingly realistic-looking videos almost instantly. Even in my all-too-brief hands-on experience, I could tell that Sora is about to change everything about video making.
OpenAI CEO Sam Altman and company were wrapping up the third presentation of their planned “12 Days of OpenAI,” but I could hardly wait to get out of that live and, I assume, non-AI-generated video feed and dive into this content-creation-changing announcement.
Ever since we saw short Sora clips created by select video artists and shared by OpenAI, I and anyone with even a passing interest in AI and video have been waiting for this moment: our chance to touch and try out Sora.
Spoiler alert: Sora is stunning, but also so massively overloaded that I couldn’t create more than a handful of AI video samples before the system’s servers barked that they were “at full capacity.” Yet this glimpse was so worth it.
Sora is important enough that it doesn’t live inside the ChatGPT large language model space or even on OpenAI’s homepage. The AI video generation platform gets a destination of its own: Sora.com.
From there, I logged into my ChatGPT Plus account (you need at least that tier to make up to 50 generations per month; Pro gives you unlimited). I also had to enter my birth date (the year of which I’m keeping to myself because I’m vain).
The landing page is, as promised, a library grid with everyone else’s AI-generated video content. It’s a great place to get inspired and see the stunning realism and surrealism possible through OpenAI’s Sora models. I could even use any of these videos as a starting point for my creation by “remixing” one.
However, I chose to generate something new instead.
At the bottom of the page is a prompt field where you can describe your video and set parameters such as aspect ratio, resolution, duration, and the number of variations Sora should generate for you to choose from. There’s also a style button with presets like ‘Balloon World’, ‘Stop Motion’, and ‘Film Noir’.
I’m a fan of film noir and was intrigued by the idea of ‘Balloon World’, but I didn’t want to slow things down, so I just started typing my prompt. I asked for something simple: a middle-aged man building a rocket ship near the ocean under a moonlit sky, with a campfire nearby and a friendly dog. It was not a detailed description.
I pressed the up arrow on the right side of the prompt window and Sora got to work.
Within about a minute I had two five-second video options. They looked realistic. Well, at least one of them did. One clip featured a golden retriever with an extra tail where its head should have been. Over the course of the video’s five-second running time, the extra tail became a head. The other video was less disturbing. In fact, it was almost perfect. The problem was the rocket ship – it was a model and not something my character could fly.
At this point I could edit my prompt and try again, review the video’s storyboard, blend it with another video, loop it, or remix it. I chose the video with the normal dog and selected Remix.
You can make a subtle remix, a mild one, a strong one, or even a custom one. My session defaulted to a strong remix, and I asked for a bigger rocket, one large enough to take the man to the moon. I also asked to place it behind him, with the campfire still partially visible.
The remix took almost five minutes, but it once again produced a beautiful video. Sure, Sora doesn’t know anything about space travel or rocket science, but the composition was spot on, and I could see how to push this video in the right direction.
That was the plan, anyway, but when I tried another remix, Sora complained that it was at full capacity.
I also tried using Storyboard to create another video. Here, the prompt I entered became the first board in my storyboard; Sora automatically interpreted it and then let me add additional beats to the video via more boards. I had in mind a ‘Balloon World’ scene of two characters sharing a romantic pasta dinner, but again, Sora ran out of capacity.
I wanted to try more and see, for instance, how far you could push Sora; OpenAI said it is starting with “conservative” content controls, which likely means things like nudity and violence are rejected outright. But AI prompt writers always find ways to get the best and worst out of generative AI, so I guess we’ll just have to wait and see what happens on this front.
Server issues aside, it’s clear that Sora will revolutionize the video creation industry. It’s not just the uncanny ability to give simple directions and create realistic videos in minutes; it’s the wealth of video editing and creation tools available on day 1.
I guarantee the model will become more powerful, the tools even smarter, and the servers more plentiful. I’m not sure what Sora means for video professionals worldwide, but the sooner they try it, the sooner they’ll be prepared for what’s to come.