OpenAI’s new Sora video is an FPV drone ride through the strangest TED Talk you’ve ever seen – and I need to lie down

OpenAI’s new Sora text-to-video generation tool won’t be publicly available until later this year, but in the meantime it’s offering some tantalizing glimpses of what it can do – including a mind-boggling new video (below) showing what TED Talks may look like in forty years’ time.

To create the FPV drone-style video, TED Talks teamed up with OpenAI and filmmaker Paul Trillo, who has been using Sora since February. The result is an impressive, if somewhat baffling, fly-through of futuristic conference halls, strange laboratories and underwater tunnels.

The video once again shows both the incredible potential of OpenAI’s Sora and its limitations. The FPV drone-style effect has become a popular way to make striking social media videos, but traditionally it requires advanced drone piloting skills and expensive kit beyond even the new DJI Avata 2.

Sora’s new video shows that these kinds of effects could be opened up to new creators, potentially at a much lower cost – although that comes with the caveat that we don’t yet know how much OpenAI’s new tool will cost or who it will be available to.

But the video (above) also shows that Sora is still far from being a reliable tool for full-fledged movies. The people in the shots are only on screen for a few seconds and there’s plenty of creepy nightmare fuel in the background.

The result is an experience that’s exhilarating, while also leaving you feeling strangely off-balance, as if you’ve just landed after a skydive. Still, I definitely want to see more examples as we speed toward Sora’s public launch later in 2024.

How was the video made?

(Image credit: OpenAI/TED Talks)

OpenAI and TED Talks didn’t go into detail about how this particular video was made, but its creator, Paul Trillo, recently spoke more broadly about his experiences as one of Sora’s alpha testers.

Trillo told Business Insider about the types of prompts he uses, including “a cocktail of words that I use to make it feel less like a video game and something more cinematic.” Apparently these include cues like “35 millimeters”, “anamorphic lens” and “depth of field lens vignette”, which are necessary because otherwise Sora will “kind of default to this very digital-looking output”.
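For the curious, here’s a minimal sketch in Python of how that kind of “cinematic cocktail” might be assembled – purely an illustration, since Sora has no public API yet. It only builds the text prompt itself, and the scene description and helper name are hypothetical:

    # Illustrative only: combine a scene description with the film-style
    # cues Trillo mentions into a single text prompt for a video model.
    CINEMATIC_CUES = [
        "35 millimeters",
        "anamorphic lens",
        "depth of field lens vignette",
    ]

    def build_prompt(scene: str) -> str:
        # Append the cinematic cues so the output skews filmic rather
        # than "very digital-looking", per Trillo's observation.
        return ", ".join([scene] + CINEMATIC_CUES)

    # Hypothetical example scene, echoing the TED Talks video:
    print(build_prompt("FPV drone flight through a futuristic conference hall"))
    # FPV drone flight through a futuristic conference hall, 35 millimeters,
    # anamorphic lens, depth of field lens vignette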

Currently, every prompt must go through OpenAI so it can meet strict safeguards around issues like copyright. One of Trillo’s most interesting observations is that Sora is currently “like a slot machine where you ask for something, and you mix up ideas, and there’s no real physics engine in it.”

This means it’s still a long way from being truly consistent with people and object states, something OpenAI admitted in an earlier blog post. OpenAI said Sora “currently exhibits numerous limitations as a simulator,” including the fact that “it does not accurately model the physics of many basic interactions, such as glass breaking.”

This incoherence will likely limit Sora to being a short-form video tool for some time, but it’s still a tool I can’t wait to try out.
