OpenAI Sora: Everything you need to know
OpenAI revealed Sora to the world on February 15, 2024, by sharing a handful of remarkable AI-generated videos and a research paper on X.
Sora wasn't the first artificial intelligence video model, but it was the first to show such high levels of consistency, duration and photorealism.
While the output seems impressive, so far only videos generated by OpenAI staff have been shared on X or TikTok, although some were made from prompts suggested by fans.
No date has been set for when the model will be made public, or for what limitations will be placed on its output before it is integrated into a tool like ChatGPT.
Sora news and updates (Updated March 14, 2024)
Sora will come out this year, declares OpenAI CTO
OpenAI launches a TikTok channel to share Sora generated videos
ElevenLabs announces sound effects tool and shares Sora video with sound
Sora is so impressive it has put all other AI video tools on notice
What is OpenAI Sora?
Sora is a generative video model, similar to the likes of Runway's Gen-2, Pika Labs' Pika 1.0 and Stable Video Diffusion from Stability AI. It turns text, images or video into AI video content.
It is named for the Japanese word for "sky," which the company said reflects its "limitless creative potential." One of the first clips showed two people walking through Tokyo in the snow.
Unlike some of the models that came before it, Sora appears to be much more capable, able to generate clips of up to one minute long and with consistent characters and motion.
What is the technology behind Sora?
The technology behind Sora is an adapted version of the models built for DALL-E 3, OpenAI's generative image platform, with additional features for fine-tuned control.
Sora is a diffusion transformer model: it marries the type of image generation model behind Stable Diffusion with the token-based transformer architecture powering ChatGPT.
A video is formed in a latent space as 3D patches and "denoised" step by step, then put through a video decompressor to turn it into a standard, human-viewable output.
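The broad shape of that pipeline can be sketched in a few lines of toy Python. The function names, latent shapes and step count below are illustrative placeholders, not OpenAI's actual code; the stand-ins only mimic the structure of a diffusion loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(latents, t):
    # Stand-in for the learned diffusion transformer: in the real model,
    # a transformer predicts the noise to remove at timestep t.
    return latents * 0.9  # toy: shrink the noise a little each step

def decode(latents, frames=16, height=32, width=32, channels=3):
    # Stand-in for the video decompressor that maps latent patches
    # back to pixel space (here it just emits a dummy video tensor).
    return rng.standard_normal((frames, height, width, channels))

# 1. Start from pure noise in the compressed latent space,
#    arranged as 3D (time x height x width) patches.
latents = rng.standard_normal((16, 8, 8, 4))  # toy latent shape

# 2. Iteratively denoise, from the highest noise level down.
for t in reversed(range(50)):
    latents = denoise_step(latents, t)

# 3. Decode the clean latents into a viewable video tensor.
video = decode(latents)
print(video.shape)  # (16, 32, 32, 3)
```

The key idea this captures is the separation of concerns: the transformer works entirely in a small latent space, and only the final decoder deals with full-resolution pixels.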
What data was Sora trained on?
OpenAI says it trained the model on publicly available videos, public domain content and copyrighted videos for which it had purchased licenses in advance.
It hasn't said exactly how many videos went into the training data and is unlikely to ever reveal that information. It is thought to be in the millions.
The company used a video-to-text engine to create captions and labels from ingested video files to further fine-tune Sora on real-world content.
Rumors and speculation suggest that OpenAI also made use of synthetic video content, such as footage generated with Unreal Engine 5, as this would also give the model information about the physics of the worlds inside the clips it ingested.
Why did Sora surprise its developers?
Every large scale AI model has its quirks, behaving in unexpected ways or responding to prompts in a way that almost feels the opposite of what was intended. Sora is no different.
During the post-training run, Tim Brooks, a Sora researcher, said the model seemed to work out how to create 3D graphics from its own dataset without any additional training.
Meanwhile, Bill Peebles, another researcher working on the model, said it automatically created different video angles without being prompted, assuming that was what was needed.
What about content restrictions and privacy?
During training, red teamers and safety experts also worked to track, label and prohibit use cases involving misinformation, hateful content and bias through adversarial testing.
There are also metadata tags within generated videos labeling them as made by AI, and text classifiers that check prompts don't violate usage policies.
Like DALL-E 3, OpenAI says Sora will have a number of content restrictions before launch. This will include limits on generating images of real people.
This will also include a ban on generating videos showing extreme violence, sexual content, hateful imagery, celebrity likenesses or others' intellectual property such as logos and products. None of this is easily possible with DALL-E 3, and the same restrictions will apply.
How can I access Sora?
You can't currently access Sora. The only insight we have into the model is the videos OpenAI itself has shared. This is because the company is working to ensure it doesn't generate misinformation or dangerous content.
Tim Brooks, research lead on Sora, said the team has to focus on safety and ensure mechanisms are in place so the public can be confident in the difference between AI-generated and real videos before it is released.
It also takes a long time to make a single video clip: long enough, the team explained, to make a coffee and come back to find it still generating.
It is most likely that Sora will be integrated into ChatGPT, similar to DALL-E 3, rather than made available as a standalone product, although previous versions of DALL-E had their own page.
The model will also likely be available as an API, letting third-party developers integrate its functionality into their own products, although that will come further down the line.
This already happens with DALL-E 3. For example, you can use the OpenAI model within your own product to automatically create images, or, as the AI image platform NightCafe does, offer your own interface to generate images with the model.
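As a rough sketch of what that kind of integration looks like today, here is the request body a third-party product would send to OpenAI's image generation endpoint (`api.openai.com/v1/images/generations`). The DALL-E 3 parameters shown are real; any equivalent Sora video endpoint is purely speculative at this point:

```python
import json

def build_image_request(prompt, size="1024x1024", n=1):
    # Assemble the JSON body for OpenAI's Images API, much as a
    # product like NightCafe would before POSTing it with an API key.
    return json.dumps({
        "model": "dall-e-3",
        "prompt": prompt,
        "size": size,
        "n": n,
    })

payload = build_image_request("a watercolor of Tokyo in the snow")
print(payload)
```

A Sora API would presumably follow the same pattern — a prompt plus generation parameters in, a media asset out — which is what makes embedding it in video-editing tools plausible.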
We may even see it reserved as a professional tool, integrated into products like Apple's Final Cut Pro or Adobe Premiere Pro for filmmakers and VFX artists.
When will Sora be released?
OpenAI hasn't set a release date for Sora yet, but CTO Mira Murati says it will come out sometime in 2024, possibly before the summer.
When released, it will likely be available and priced similarly to OpenAI's image generation model DALL-E, probably integrated into the premium version of ChatGPT.