OpenAI recently launched Sora, a new app that exclusively features short-form videos generated by artificial intelligence. The app showcases a range of bizarre and humorous content, from a fascist rendition of SpongeBob SquarePants to a dog behind the wheel of a car and even Jesus playing Minecraft. Users create their own videos through a simple text-prompt interface, feeding an addictive stream of 10-second clips.
Sora debuted just days after Meta introduced a similar offering through its Meta AI platform. Early testing by NPR found that Sora can create remarkably realistic videos, including ones featuring real people who have given their consent. That capability has raised significant concerns among researchers. Solomon Messing, an associate professor at New York University, said the app may mark the start of an era in which seeing is no longer believing, given its ability to generate video that misrepresents real individuals.
Sora's interface closely resembles popular video-sharing apps. Users can select videos based on mood and control how their likeness is used: they can allow their faces to appear in videos made by everyone, by a select group of friends, or by no one, and they can remove such videos later. To signal that content is AI-generated, Sora adds moving watermarks and embeds metadata identifying the videos as AI-made.
OpenAI has instituted guidelines to regulate content creation on Sora, barring videos that could facilitate deceit, fraud, scams, or impersonation. The company said it uses a combination of automation and human review to detect and address violations. However, NPR's initial exploration of the app revealed potential loopholes in its moderation system. For instance, users were able to generate videos promoting conspiracy theories, including a fabricated message from former President Richard Nixon claiming the moon landing was a hoax, alongside footage depicting astronaut Neil Armstrong on the lunar surface.
Further scrutiny indicated that Sora could be manipulated into producing videos on sensitive topics such as chemical and biological weapons, contradicting OpenAI's global usage policies. And while the app was designed with user safety in mind, it allowed a flood of content featuring trademarked brands and copyrighted material. One notable video depicted Ronald McDonald evading police in a hamburger-themed vehicle, alongside other clips featuring well-known characters from various franchises.
OpenAI acknowledged the instances of copyright infringement but defended its decision to grant users this level of creative freedom. Varun Shetty, the company's head of media partnerships, said OpenAI is open to cooperating with rights holders to ensure compliance and to address any takedown requests.
As OpenAI faces a lawsuit from The New York Times over copyright issues related to its large language models, the broader implications of an AI-driven social media landscape remain uncertain. Messing noted that past angst about deepfakes has not been borne out as feared. Nonetheless, he emphasized that the quality and accessibility of the content Sora produces signal a potential shift in which digital authenticity becomes increasingly difficult to ascertain.