Monday, April 22, 2024

OpenAI’s new Sora video is an FPV drone ride through the strangest TED Talk you’ve ever seen – and I need to lie down

OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video (below) showing what TED Talks might look like in 40 years.

To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories and underwater tunnels.

The video again shows both the incredible potential of OpenAI Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone piloting skills and expensive kit that goes way beyond the new DJI Avata 2.

Sora's new video shows that these kinds of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool itself will cost and who it'll be available to.


But the video (above) also shows that Sora is still quite far short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds and there's plenty of uncanny valley nightmare fuel in the background.

The result is an experience that's exhilarating, while also leaving you feeling strangely off-kilter – like touching down again after a skydive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.

How was the video made?

A video created by OpenAI Sora for TED Talks

(Image credit: OpenAI / TED Talks)

OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experiences as one of Sora's alpha testers.

Trillo told Business Insider about the kinds of prompts he uses, including "a cocktail of words that I use to make sure that it feels less like a video game and something more filmic". Apparently these include prompts like "35 millimeter", "anamorphic lens", and "depth of field lens vignette", which are needed because otherwise Sora will "kind of default to this very digital-looking output".

Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright. One of Trillo's most interesting observations is that Sora is currently "like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it".

This means that it's still a long way off from being truly consistent with people and object states, something that OpenAI admitted in an earlier blog post. OpenAI said that Sora "currently exhibits numerous limitations as a simulator", including the fact that "it does not accurately model the physics of many basic interactions, like glass shattering".

These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.

from TechRadar - All the latest technology news https://ift.tt/eyUrl7X
