
OpenAI to take three key ‘safety steps’ before rolling out game-changing video generator
Recent years have shown that artificial intelligence promises to revolutionize the world of technology like never before. Now, OpenAI has unveiled a staggering new text-to-video tool called Sora that has sparked excitement and concern in equal measure.
OpenAI unveils Sora video generator
OpenAI, the company behind ChatGPT, has unveiled a new video-generating AI tool named Sora.
Sora is a text-to-video model, meaning that it can take a user's written prompt and turn it into a video.
The mind-blowing program can create videos up to 60 seconds in length that feature highly detailed scenes complete with complex camera motion and multiple characters with vibrant emotions.
To show off its new model, OpenAI has released a series of jaw-dropping AI-generated videos, including a wintry scene in Tokyo, lifelike mammoths walking through a snowy tundra and an animated furry monster playing with a candle, among others.
It should be noted that the Sora text-to-video model is still in the testing phase and is not yet available for public use.
Social media raises concerns about Sora AI tool
Following the unveiling of OpenAI's Sora model, plenty of social media users have expressed their excitement about what the tool could be used for. However, many have raised concerns about the potentially damaging impact it could have, whether that's in spreading misinformation or taking the jobs of people in creative industries.
One commenter on X (formerly Twitter) said: "You scientists are so preoccupied with whether or not you can, you don't stop to think if you should."
A second added: "The entire stock footage industry just died with this one tweet. RIP."
"I can see absolutely zero ways in which this might be abused," joked a third.
"This is terrifying and going to steal jobs," commented another. "Not to mention the amount of terrible things this could be used for."
In relation to the Hollywood strikes of last year, this X user noted: "This is exactly what SAG-AFTRA was scared about."
And finally, this commenter wrote: "This is all fun and games until you end up in court watching 60-second video evidence of yourself committing a crime you've never done."
OpenAI responds to concerns with three key safety steps
Pre-empting the concerns that social media users have raised, OpenAI has already identified several important safety steps that it will be taking before making Sora available to the public.
"We are working with red teamers (domain experts in areas like misinformation, hateful content, and bias) who are adversarially testing the model," said the company in a statement on its website.
"We're also building tools to help detect misleading content, such as a detection classifier that can tell when a video was generated by Sora," added OpenAI. "We plan to include C2PA metadata in the future if we deploy the model in an OpenAI product."
"We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals," the company said. "We'll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology."
While OpenAI is looking to ensure its new Sora tool is deployed safely, the company has recognized: "Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That's why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time."