ReportWire

Tag: Sora

  • Disney is investing $1 billion in OpenAI and licensing its characters for Sora

    (CNN) — Disney is taking a $1 billion equity stake in OpenAI, while also striking a deal that would allow its famous characters to be used on Sora, the AI company’s video generation platform.

    The agreement is the first major licensing deal of its kind for Sora.

    Under the agreement, users of OpenAI’s short-form video-generating social media network Sora will be allowed to make videos using more than 200 Disney animated characters. Those characters include Mickey and Minnie Mouse; Disney Princesses like Ariel, Belle, and Cinderella; and characters from Frozen, Moana, and Toy Story. Animated characters from Marvel and Lucasfilm, including Black Panther and Star Wars characters like Yoda, are included as well – although the agreement does not include any talent likenesses or voices.

    Users of OpenAI’s popular chatbot ChatGPT will also be able to ask the bot to create images using the Disney characters.

    “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Disney CEO Robert A. Iger said in a statement.

    OpenAI, which has come under scrutiny for copyright violations – and also for striking massive ‘circular’ deals leading to fears of an AI bubble – said the deal shows how the creative community and AI can get along.

    “Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content,” said Sam Altman, co-founder and CEO of OpenAI. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”

    Shortly after the announcement, Iger and Altman both sat down with CNBC’s David Faber, during which the Disney boss stressed that the deal “does not, in any way, represent a threat to the creators.”

    “In fact, the opposite, I think it honors them and respects them, in part because there’s a license fee associated with it,” Iger said, later adding that the goal is to “continue to honor, respect, value the creative community in general.”

    Iger also stressed that the deal allows Disney to “be comfortable that OpenAI is putting guardrails essentially around how these are used,” adding that, “really, there’s nothing for us to be concerned about from a consumer perspective.” Altman, too, stressed the presence of guardrails, telling Faber that “it’s very important that we enable Disney to set and evolve those guardrails over time, but they will, of course, be in there.”

    The deal is exclusive, per Iger, at least in part. The Disney CEO hinted that “there is exclusivity, basically, at the beginning of the three-year agreement,” but remained mum on what that means. Asked if OpenAI is pursuing similar deals with other companies, Altman said, “I won’t rule out anything in the future, but we think this alone is going to be a wonderful start.”

    Disney has previously sued AI companies for using its intellectual property. On Monday, the company sent Google a cease and desist letter, according to a source familiar with the situation.

    The cease and desist letter claims Google’s AI products, including its image and video generating products Veo and Nano Banana, are infringing Disney’s copyrights “on a massive scale” by allowing users to create images and videos depicting its characters. The letter alleges that Google has “refused to implement any technological measures to mitigate or prevent copyright infringement.”

    In response, a Google spokesperson said they have “a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them.”

    The spokesperson added, “More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content.”

    Disney had already sent similar cease and desist letters to Meta and Character.AI. In June, Disney and Universal sued AI photo generation company Midjourney, alleging the company violated copyright law.

    This story has been updated with additional developments and context.

    Hadas Gold and CNN

    Source link

  • Disney signs deal with OpenAI to allow Sora to generate AI videos featuring its characters | TechCrunch

    The Walt Disney Company announced on Thursday that it has signed a three-year partnership with OpenAI that will bring its iconic characters to the company’s Sora AI video generator. Disney is also making a $1 billion equity investment in OpenAI.

    Launched in September, Sora allows users to create short videos using simple prompts. With this new agreement, users will be able to draw on more than 200 animated, masked, and creature characters from Disney, Marvel, Pixar, and Star Wars, including costumes, props, vehicles, and more.

    These characters include iconic faces like Mickey Mouse, Ariel, Belle, Cinderella, Baymax, Simba, as well as characters from Encanto, Frozen, Inside Out, Moana, Monsters Inc., Toy Story, Up, and Zootopia. Users will also be able to draw on animated or illustrated versions of Marvel and Lucasfilm characters like Black Panther, Captain America, Deadpool, Groot, Iron Man, Darth Vader, Han Solo, Stormtroopers, and more.

    Users will also be able to draw on these characters while using ChatGPT Images, the feature in ChatGPT that allows users to create visuals using text prompts.

    The agreement does not include any talent likenesses or voices, Disney says.

    “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” said Disney CEO Bob Iger in a statement.

    Disney says that alongside the agreement, it will “become a major customer of OpenAI,” as it will use its APIs to build new products, tools, and experiences, including for Disney+.


    “Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content,” said Sam Altman, co-founder and CEO of OpenAI, in a statement. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”

    It’s worth noting that Disney has sued the generative AI platform Midjourney for ignoring requests to stop violating its intellectual property rights. Disney also sent a cease-and-desist letter to Character.AI, urging the chatbot company to remove Disney characters from among the millions of AI companions on its platform.

    Disney’s agreement with OpenAI indicates the company isn’t fully closing the door on AI platforms.

    Aisha Malik

    Source link

  • All the Bad Things That Can Happen When You Generate a Sora Video

    First chance I got, I downloaded the Sora app. I uploaded images of my face—the one my children kiss at bedtime—and my voice—the voice I use to tell my wife I love her—and added them to my Sora profile. I did all this so I could use Sora’s “Cameo” feature to make an idiotic video of my AI self being shot with paintballs by 100 elderly nursing home residents.

    What did I just do? The Sora app is powered by Sora 2, an AI model—and a rather breathtaking one, to be honest. It can create videos that run the gamut from banal to profoundly satanic. It is a black hole of energy and data, and also a distributor of highly questionable content. Like so many things these days, using Sora feels like it’s a little bit of a naughty thing to do, even if you don’t know exactly why.

    So if you just generated a Sora video, here’s all the bad news. By reading this, you’re asking to feel a little dirty and guilty, and your wish is my command.

    Here’s how much electricity you just used

    One Sora video uses something like 90 watt-hours of electricity, according to CNET. This number is an educated guess drawn from a Hugging Face study of the energy use of GPUs.

    OpenAI hasn’t actually published the numbers needed for this study, and Sora’s energy footprint has to be inferred from similar models. Sasha Luccioni, one of the Hugging Face researchers who did that work, isn’t happy with estimates like the one above, by the way. She told MIT Technology Review, “We should stop trying to reverse-engineer numbers based on hearsay,” and says we should pressure companies like OpenAI to release accurate data. 

    At any rate, different journalists have provided different estimates based on the Hugging Face data. For instance, the Wall Street Journal guessed somewhere between 20 and 100 watt-hours.

    CNET analogizes its estimate to running a 65-inch TV for 37 minutes. The Journal compares a Sora generation to cooking a steak from raw to rare on an electric outdoor grill (because such a thing exists apparently).

    It’s worth clarifying a couple of things about this energy use issue in the interest of making you feel even worse. First of all, what I just outlined is the energy expenditure from inference—that is, running the model in response to a prompt. The actual training of the Sora model required some unknown, but certainly astronomical, amount of electricity. The GPT-4 LLM required an estimated 50 gigawatt-hours—reportedly enough to power San Francisco for 72 hours. Sora, being a video model, took more than that, but how much more is unknown.

    Viewed in a certain way, you assume a share of that unknown cost when you choose to use the model, before you even generate a video.

    Secondly, separating inference from training is important in another way when trying to figure out how much eco-guilt to feel (Are you sorry you asked yet?). You can try to abstract away the high energy cost as something that already happened—like how the cow in your burger died weeks ago, and you can’t un-kill it by ordering a Beyond patty when you’ve already sat down in the restaurant. In that sense, running any cloud-based AI model is more like ordering surf and turf. The “cow” of all that training data may already be dead. But the “lobster” of your specific prompt is still alive until you send your prompt to the “kitchen” that is the data center where inference happens.

    Here’s how much water you just used

    We’re about to do more guesstimating, sorry. Data centers use large amounts of water for cooling—either in closed-loop systems or through evaporation. You don’t get to know which data center—or data centers—were involved in making that video of your friend as an American Idol contestant farting the song “Camptown Races.”

    But it’s still probably more water than you’re comfortable with. OpenAI CEO Sam Altman claims that a single text ChatGPT query consumes “roughly one fifteenth of a teaspoon,” and CNET estimates that a video has 2,000 times the energy cost of a text generation. So a back-of-the-envelope scribble of an answer might be 0.17 gallons, or about 22 fluid ounces—a little more than a plastic bottle of Coke.
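The back-of-the-envelope arithmetic above can be checked in a few lines. This is a sketch built only from the figures quoted in the article—Altman’s one-fifteenth-of-a-teaspoon claim and CNET’s 2,000× text-to-video multiplier, applied here to water as a rough proxy—not from any official OpenAI data:

```python
# Rough water estimate for one Sora video, using the article's figures.
# Assumptions (not official data):
#   - one text query ~ 1/15 teaspoon of water (Altman's claim)
#   - one video ~ 2,000 text queries' worth (CNET's energy multiplier)

TSP_PER_GALLON = 768       # 768 US teaspoons per US gallon
FLOZ_PER_GALLON = 128      # 128 US fluid ounces per US gallon

water_per_query_tsp = 1 / 15
video_to_query_ratio = 2_000

video_water_tsp = water_per_query_tsp * video_to_query_ratio
video_water_gal = video_water_tsp / TSP_PER_GALLON
video_water_floz = video_water_gal * FLOZ_PER_GALLON

print(f"{video_water_gal:.2f} gallons, about {video_water_floz:.0f} fl oz")
# → 0.17 gallons, about 22 fl oz
```

That lands on the article’s 0.17 gallons / 22-ounce figure, with all the uncertainty of the inputs baked in.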

    And that’s if you take Altman at face value. It could easily be more. Plus, the same considerations about the cost of training versus the cost of inference that applied to energy use apply here as well. Using Sora, in other words, is not a water-wise choice.

    There’s a slight chance someone might make a truly hideous deepfake of you.

    Sora’s Cameo privacy settings are robust—as long as you’re aware of them, and avail yourself of them. The settings under “Who can use this” more or less protect your likeness from being a plaything for the public, as long as you don’t choose the setting “Everyone,” which means anyone can make Sora videos of you. 

    Even if you are reckless enough to have a publicly available Cameo, you have some added control in the “Cameo preferences” tab, like the ability to describe, in words, how you should appear in videos. You can write whatever you want here, like “lean, toned, and athletic” perhaps, or “always picking my nose.” And you also get to set rules about what you should never be shown doing. If you keep kosher, for instance, you can say you should never be shown eating bacon.

    And even if you don’t allow your Cameo to be used by anyone else, you can still take some comfort in the open-ended ability to create guardrails as you make videos of yourself.

    But the general content guardrails in Sora aren’t perfect. According to OpenAI’s own model card for Sora, if someone prompts hard enough, an offensive video can slip through the cracks.

    The card lays out success rates for various kinds of content filters in the 95%–98% range. Subtract those from 100%, however, and you get a 1.6% chance of a sexual deepfake, a 4.9% chance of a video with violence and/or gore, a 4.48% chance of something called “violative political persuasion,” and a 3.18% chance of extremism or hate. These chances were calculated from “thousands of adversarial prompts gathered through targeted red-teaming”—in other words, testers intentionally trying to break the guardrails with rule-breaking prompts.

    So the odds are not good of someone making a sexual or violent deepfake of you, but OpenAI (probably wisely) never said never.

    Someone might make a video where you touch poop.

    In my tests, Sora’s content filters generally worked as advertised, and I never confirmed what the model card said about its failures—though I didn’t painstakingly create 100 different prompts trying to trick Sora into generating sexual content. If you prompt it for a cameo of yourself naked, you get the message “Content Violation” in place of your video.

    However, some potentially objectionable content is so weakly policed as to be completely unfiltered. Specifically, Sora is seemingly unconcerned about scatological content, and will generate material of that sort without any guardrails, as long as it doesn’t violate other content policies like the ones around sexuality and nudity.

    So yes, in my tests, Sora generated Cameo videos of a person interacting with poop, including scooping turds out of a toilet with their bare hands. I’m not going to embed the videos here as a demonstration for obvious reasons, but you can test it for yourself. It didn’t take any trickery or prompt engineering whatsoever. 

    In my experience, past AI image generation models have had measures in place to prevent this sort of thing, including Bing’s version of OpenAI’s image generator, Dall-E, but that filter appears to be gone in the Sora app. I don’t think that’s necessarily a scandal, but it’s nasty!  

    Gizmodo asked OpenAI to comment on this, and will update if we hear back. 

    Your funny video might be someone else’s viral hoax. 

    Sora 2 has unlocked a vast universe of hoaxes. You, a sharp, internet-savvy content consumer, would never believe that anything like the viral video below could be real. It shows spontaneous-looking footage seemingly shot from outside the White House. In audio that sounds like an overheard phone conversation, an AI-generated Donald Trump tells some unknown party not to release the Epstein files, and screams, “Just don’t let ’em get out. If I go down, I will bring all of you down with me.”

    Judging from Instagram comments alone, some people seemed to believe this was real.

    The creator of the viral video never claimed it was real, telling Snopes, who confirmed it was made by Sora, that the video is “fully AI-generated” and was created “solely for artistic experimentation and social commentary.” A likely story. It was pretty clearly made for clout and social media visibility. 

    But if you post videos publicly on Sora, other users can download them and do whatever they want with them—and that includes posting them on other social networks and pretending they’re real. OpenAI very consciously made Sora into a place where users can doomscroll into infinity. Once you put a piece of content in a place like that, context no longer matters, and you have no way of controlling what happens to it next. 

    Mike Pearl

    Source link

  • OpenAI Can’t Legally Use the Word ‘Cameo’ in Sora Now

    According to CNBC, OpenAI must not use the term “cameo” in the Sora app after a temporary restraining order was issued by Judge Eumi K. Lee of the Northern District of California. Last month, OpenAI was sued by Cameo, the celebrity video-selling platform, for violating its trademark.

    The judge’s restraining order will expire on December 22.

    Sora is the social-media-style app that debuted alongside the attention-grabbing video generation model Sora 2 on September 30. Much of the controversy around the use of the model (and app) has directly or indirectly involved the Cameo feature.

    “Cameos” in Sora are video generations involving likenesses uploaded through a process within the app. Prompting Sora for a Cameo allows the user to invoke a specific person, and receive a video featuring a sanctioned version of that person, be they a celebrity Sora user or just a friend.

    “Cameos” in Cameo, meanwhile, are the videos users buy from celebrities. When you initially book one, the platform calls it a “personalized video,” but when your order is fulfilled, the push notification you get from the Cameo app says “Your Cameo from [celebrity] is ready.” So if you’ve ever said something like “I got a Cameo of Kenny G for my birthday,” you were using the term as Cameo apparently intends, and apparently feels is part of its trademark.

    OpenAI’s statement to CNBC reads, “We disagree with the complaint’s assertion that anyone can claim exclusive ownership over the word ‘cameo’, and we look forward to continuing to make our case to the court.”

    Confusingly, not every Sora video involves a Cameo, and certain people have been easy to generate with Sora without using the Cameo feature to mark that person’s participation as official. This included likenesses of Michael Jackson—which OpenAI apparently deemed acceptable because Jackson is dead.

    Others, like the living actor Bryan Cranston, could be added through workarounds. In the case of Cranston, no Cameo was necessary if the user prompted with the term “Walter White,” his Breaking Bad character, which introduced additional confusion around copyrighted characters.

    Cameo claimed OpenAI’s use of the word was a decision made “in blatant disregard for the obvious confusion it would create.” Cameo also noted that personalities like Mark Cuban and Jake Paul are on Cameo, and can be Cameo-ed on Sora, which adds to the confusion, Cameo argues.

    It’s worth noting that while “cameo” is indisputably a valid word independent from its connection to the celebrity video platform, OpenAI does capitalize the first letter in “Cameo” when it uses the word in conjunction with the Sora feature.

    Last week, the library app OverDrive sued OpenAI over another Sora-related trademark issue, claiming that the image Sora uses as its app icon and watermark is too similar to OverDrive’s icon.

    When Gizmodo tested Sora while reporting this article, the app still contained the word “Cameo.” We reached out to OpenAI for information about whether they plan to comply with the order, and will update if we hear back. 

    Mike Pearl

    Source link

  • OpenAI learned the hard way that Cameo trademarked the word ‘cameo’ | TechCrunch

    OpenAI’s social app Sora launched with a controversial feature called Cameo, allowing users to deepfake themselves or others (with permission). The feature had a tenuous rollout — Martin Luther King Jr.’s estate had to get involved, to give you an idea of what went on — but now it faces a new challenge.

    Apparently, Cameo — the app where you buy custom video messages from celebrities — can claim the trademark of the word “cameo.”

    U.S. District Judge Eumi K. Lee imposed a temporary restraining order that blocks OpenAI from using the word “cameo,” as well as any similar-sounding words or phrases, on Sora.

    The temporary restraining order, issued on November 21, 2025, is set to expire on December 22, 2025, at 5:00 p.m. A hearing on the matter is scheduled for December 19, 2025, at 11:00 a.m.

    As of Monday afternoon, the Sora app still uses the “cameo” language, however.

    “We are gratified by the court’s decision, which recognizes the need to protect consumers from the confusion that OpenAI has created by using the Cameo trademark,” Cameo CEO Steven Galanis said in a statement. “While the court’s order is temporary, we hope that OpenAI will agree to stop using our mark permanently to avoid any further harm to the public or Cameo.”

    OpenAI disagrees with the assertion that the company can claim exclusive ownership over the word “cameo,” the company told CNBC.


    Amanda Silberling

    Source link

  • OpenAI can’t use the term ‘Cameo’ in Sora following temporary injunction

    Cameo, the app that allows people to buy short videos from celebrities, has won an important victory in its legal battle against OpenAI. On Monday, a federal judge granted the company a temporary restraining order against OpenAI, CNBC reports. Until December 22, the startup is not allowed to use the word “cameo” in relation to any features inside of Sora, its TikTok-like app for creating AI-generated videos. The order covers similar words like “Kameo” and “CameoVideo.”

    “We are gratified by the court’s decision, which recognizes the need to protect consumers from the confusion that OpenAI has created by using the Cameo trademark,” Cameo CEO Steven Galanis told CNBC. “While the court’s order is temporary, we hope that OpenAI will agree to stop using our mark permanently to avoid any further harm to the public or Cameo.”

    OpenAI did not immediately respond to Engadget’s comment request.

    Cameo sued OpenAI in October, claiming the company’s use of the term was likely to confuse consumers and dilute its brand. Before filing the suit, Galanis said Cameo tried to resolve the dispute “amicably,” but claims OpenAI refused to stop using the name. Sora’s cameo feature allows users to upload their likeness to the app, which other people can then use in their own videos. US District Judge Eumi K. Lee, who granted Cameo the temporary injunction, has scheduled a hearing for December 19 to determine if the order should be made permanent.

    Igor Bonifacic

    Source link

  • ‘Saturday Night Live’ Just Nailed the Problem With AI Products

    The cast of “Saturday Night Live” is coming for the sometimes absurd world of AI-generated video.

    A skit from the show’s Nov. 15 episode poked fun at the technology’s penchant for some pretty strange glitches. It featured four grandchildren, played by cast members Chloe Fineman, Sarah Sherman, Marcello Hernández and Tommy Brennan, visiting grandmother Ashley Padilla in a nursing home on Thanksgiving. The children tell their grandmother that they uploaded some of her photos to an app that will bring them to life by turning them into short videos. (Apps like MyHeritage‘s Deep Nostalgia and AliveMoment already offer these types of capabilities. OpenAI’s Sora 2, on the other hand, generates video from text prompts and allows users to insert their own likeness.)

    The AI animation begins innocently enough with Glen Powell, who is portraying the woman’s deceased father, smiling and waving—but things quickly escalate. In the next photo, Powell poses with Padilla’s mother next to a barbecue. She takes a drag off of her hotdog, while Powell throws the family dog, which has two tails and no head, on the grill.

    “There’s probably just too much going on in the picture and the AI got confused,” Sherman explains to the distraught grandmother.

    They move on to a photo with Powell and a family friend, played by Mikey Day, posing in a bowling alley. The bowling balls float out of frame, Powell whips out a wad of cash, and Day pulls down his pants to expose a “Ken doll crotch.” The episode culminates with the grandchildren saying they have one last “special” photograph that shows the grandmother’s parents grinning down at her, swaddled in a blanket.

    “Maybe we don’t bring this one to life. It’s just so nice the way it is,” Padilla implores. But Hernández insists, arguing it costs “10 credits just to upload it to the app.”

    The mother emerges from behind a bench as a disembodied torso, while Powell tears the swaddled infant in half and plays her like an accordion. A pantsless Day crashes in on the scene before a nuclear bomb goes off in the background. The cast bites back laughter as they promise they’ll return to visit their grandmother for Christmas.

    Although exaggerated, the skit is making fun of some very common problems with AI. With AI video generation in particular, the results can be dramatic or just plain weird. One big issue is hallucination, which refers to when AI models generate false information—this can include fabricated data from a chatbot or too many fingers on a hand in an AI video.

    But even in the short time that AI-powered video generation apps have been made available to the public, the quality has made some serious strides, which can lead to problems of its own. The issue is prompting concern from watchdogs. 

    Earlier this month, the nonprofit Public Citizen penned a letter to OpenAI demanding the withdrawal of its text-to-video app, Sora 2, arguing it does not contain enough safeguards and poses a “potential threat to democracy,” as well as to the privacy of individuals, The Los Angeles Times reported. Outlets like Futurism and 404 Media have also tracked a flood of hateful, misogynistic and violent content onto social media since AI video apps went mainstream.


    Chloe Aiello

    Source link

  • South Park’s Donald Trump and J.D. Vance Are Hooking Up


    AI couldn’t make the November 12 episode of South Park. This week, the animated comedy took on Sora, OpenAI’s video-creation tool, and somehow managed to work in a new twist in the Donald Trump and Satan romance: a love triangle. J.D. Vance is pulling Trump away from his partner, despite Satan’s butt pregnancy.

    The episode begins with Butters using Sora to make revenge videos of his ex-girlfriend, Red McArthur, getting “pissed on by Santa.” Red retaliates by making a video of Butters having sex with Totoro (from My Neighbor Totoro) and playing it at a school assembly. Detective Harris, who doesn’t fully understand “Sora,” comes looking for Totoro, who he believes is actually molesting kids. But the South Park Elementary children continue to make gross videos of each other, leading Detective Harris down wackier and wackier paths as he attempts to find his animated foes.

    Meanwhile, Cartman is being held in a hotel room by Peter Thiel, who kidnapped him during the “Six-Seven” episode because he believed Cartman was possessed (why else would he be laughing so hard at that meme?) and could help stop Satan’s butt baby, a.k.a. the Anti-Christ. Thiel is sending AI videos to Cartman’s mom in which Cartman tells her that everything is okay and he’s doing well, which she takes at face value.

    Co-conspirator Vance is also hard at work at the White House and tells Trump to get rid of his own baby because he won’t actually want to deal with it. Trump agrees. In the heat of the moment, Vance and Trump consummate their relationship on Trump’s bed with Satan in a NSF-anywhere scene.

    In South Park, the detective brings the kids to court to try to catch the animated predators. He then traces the IP address on the videos “Cartman” sent his mother and goes to arrest Thiel. There, on Thiel’s laptop, he finds security footage of Vance and Trump hooking up and leaks it. Satan chooses to believe Trump when he says that the sex tape is just AI. But when Trump leaves their bed, he goes to make out with Vance. What will Satan do?

    Jason P. Frank

    Source link

  • Meta AI’s app downloads and daily users spiked after launch of ‘Vibes’ AI video feed | TechCrunch

    New data indicates that use of Meta AI’s mobile app for iOS and Android has seen a significant increase. According to a new analysis from market intelligence provider Similarweb, the app’s daily active users across both platforms jumped to 2.7 million as of October 17, up from around 775,000 just four weeks ago. In addition, Meta AI’s app installs are also up, reaching 300,000 new downloads per day, compared with under 200,000 daily downloads a few weeks ago.

    For comparison, Meta AI’s app had just 4,000 daily downloads a year ago, on October 17, 2024.


    The firm says it hasn’t seen any meaningful correlation in either search or advertising estimates, but notes Meta could be running Facebook or Instagram promotions that wouldn’t be captured in its model.

    However, there’s also another possible explanation for the sharp rise: the launch of Meta’s new Vibes feed in September, which introduced short-form AI-generated videos to the Meta AI mobile app.

    Meta AI introduced the Vibes feed on September 25, which correlates with the sharp increase in the app’s daily active users on iOS and Android, as seen in the chart below.


    Recently, OpenAI’s video generator Sora drew headlines as its app reached the top of the App Store when users rushed to try the new technology. However, Meta AI could have benefited from this launch as well. While Similarweb says its data doesn’t prove cause and effect, it’s possible that the attention to Sora drove some people to try Meta AI, in order to compare the two experiences.

    Another possibility is that Meta could be benefiting from Sora’s invite-only status. That is, those who couldn’t try out the OpenAI app may have looked for an alternative to experiment with. This would be an interesting explanation, too, as it suggests that OpenAI’s decision to gatekeep Sora may have directly boosted its rivals.



    As of October 17, Meta AI’s app had seen a 15.58% increase in daily active users worldwide, while ChatGPT, Grok, and Perplexity saw declines of 3.51%, 7.35%, and 2.29%, respectively.

    Sarah Perez

    Source link

  • OpenAI’s Sora bans Martin Luther King Jr. deepfakes after his family complained

    New York (CNN) — OpenAI announced that it has “paused” users’ ability to generate videos of Martin Luther King Jr. on its artificial intelligence video tool Sora, following backlash over “disrespectful depictions.”

    “While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used,” the company said in a Thursday statement posted on X. “Authorized representatives or estate owners can request that their likeness not be used in Sora cameos.”

    The change comes a few weeks after the launch of Sora 2, which lets users make realistic-looking AI-generated videos using real and historical people. Critics charge that it’s contributing to an era of misinformation and “AI slop” that is blurring the lines between what’s real and what’s fake.

    The product has also generated online discussion about ethics around the use of this technology. Some creators were using King’s likeness for inappropriate purposes. Users recently recreated the late actor Robin Williams in AI videos, prompting his daughter Zelda to call them “disturbing.”

    OpenAI said it “thanks Dr. Bernice A. King for reaching out on behalf of King, Inc., and John Hope Bryant and the AI Ethics Council for creating space for conversations like this.”

    The King Center didn’t immediately respond to CNN’s request for comment.

    Jordan Valinsky and CNN

    Source link

  • OpenAI suspends Sora depictions of Martin Luther King Jr. following a request from his family

    OpenAI has paused video generations of Martin Luther King Jr. on Sora at the request of King Inc., the estate that manages his legacy. The company said in an announcement on X that it worked with the estate to address how his “likeness is represented in Sora generations” after people used the app to create disrespectful depictions of the American civil rights leader. It’s not quite clear if OpenAI intends to restore Sora’s ability to generate videos with MLK in the future, but its wording implies it does and that it has only suspended the capability as it “strengthens guardrails for historical figures.”

    After OpenAI launched the Sora app, users generated videos with likenesses of dead public figures, including Michael Jackson, Robin Williams and MLK. Williams’ daughter, Zelda Williams, had to beg people to stop sending her AI videos of her father. “To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough’, just so other people can churn out horrible TikTok slop puppeteering them is maddening,” she wrote on Instagram. MLK’s daughter, Bernice A. King, wrote on Threads that she agreed and also asked people to stop sending her videos of her father.

    According to a report by The Washington Post, the Sora-made videos that were posted online included King making monkey noises while he was giving his “I Have a Dream” speech. Another video showed King wrestling with Malcolm X, whose daughter, Ilyasah Shabazz, questioned why AI developers weren’t acting “with the same morality, conscience, and care… that they’d want for their own families” in a statement made to The Post.

    OpenAI said that while there are “strong free speech interests in depicting historical figures,” it believes “public figures and their families should ultimately have control over how their likeness is used.” It also said that the estate owners of other historical figures and their representatives can ask the company for their likenesses not to be used in Sora videos, as well.

    Mariella Moon

    Source link

  • Japan asks OpenAI not to infringe on ‘irreplaceable’ manga and anime content

    Japan’s government has asked OpenAI not to infringe on anime and manga content that it called “irreplaceable treasures,” according to a report from ITMedia seen by IGN. The request was made by a key minister in charge of AI and IP in response to numerous videos from OpenAI’s Sora 2 generator that use copyrighted material from Japanese studios.

    “We have requested OpenAI not to engage in any actions that could constitute copyright infringement,” said cabinet minister Minoru Kiuchi at a press conference last week. “Anime and manga are irreplaceable treasures that we can be proud of around the world.”

    Launched on October 1, OpenAI’s Sora 2 can generate 1080p videos up to 20 seconds long with sound. The company also released the Sora app that uses Sora 2 to generate TikTok-style videos of nearly anything. Anime has been a key theme, with many short videos replicating copyrighted material from franchises like Dragon Ball and Pokémon.

    Despite the demand, Japan has been one of the more progressive nations when it comes to artificial intelligence. The nation’s AI Promotion Act aims to boost the use of AI as an economic growth driver, while also outlining guidelines around copyright infringement. However, enforcement remains fuzzy, so the government is trying to get a better grip on it. “Japan bears a responsibility to take the lead on making rules [around AI and copyright], precisely because we are a country… [that creates] anime, games, and music,” said parliament member Akihisa Shiozaki on his blog.

    Last month, OpenAI said it had contacted studios to give them the option of opting out of Sora 2 training on their materials, Reuters reported. The new process requires movie studios and other content owners to explicitly ask OpenAI to exclude their copyright material from videos generated by Sora. It’s not known which, if any, Japanese studios the company has contacted.

    Steve Dent

    Source link

  • Mark Cuban Just Invited Everyone to Deepfake Him on Sora and It’s Really Quite Brilliant

    Yesterday, Mark Cuban told his followers on Threads, “For those of you on Sora, my Cameos are open. Have at it.” At first, it sounded like a joke. Why would Cuban want people making weird videos of him saying all sorts of crazy things? Or, worse, why would he want to be dropped into potentially political or otherwise controversial videos?

    But if you stop and think about it, what Cuban just did was actually quite brilliant. He wasn’t just giving people permission to make AI videos of him—he was making a statement about what it means to live in a world where artificial intelligence is no longer an experiment. Cuban seems to believe AI may be the single most essential tool moving forward, and we should all spend as much time as possible learning how to use it well.

    The deepfake invitation

    As I’m sure you know, OpenAI’s Sora is having a viral moment. According to the company, it reached a million downloads in just five days, more quickly than even ChatGPT back in 2022. It’s still the number one app in the iOS App Store.

    At the same time, the app’s rise has reignited long-running debates about copyright and the ethics of generative media. Artists and filmmakers have accused OpenAI of training Sora on copyrighted video without permission. Some studios have even suggested the tool amounts to digital theft—using the creative work of others to produce content that could compete with it. OpenAI, for its part, seems to have been caught off guard by how controversial it has become.

    That’s the context for Cuban’s invitation. While a lot of celebrities and IP owners have hired lawyers to protect their likenesses from AI manipulation, Cuban leaned into it. “My Cameos are open,” he said, framing his digital image as a kind of public sandbox. In other words: go ahead and try it.

    AI as the most important skill

    It’s a bold move, but it fits with Cuban’s long-held philosophy about technology. He’s said on a number of occasions that the biggest mistake you can make right now—whether you’re a student, a business owner, or a creator—is sitting on the sidelines.

    “If I was 16, 18, 20, 21 starting today,” he said earlier this year, “I would spend every waking minute learning about AI. Even if I am sleeping, I am listening to podcasts talking about AI.”

    That’s not hyperbole. It’s an intentional strategy, and it gets to why Cuban’s invitation is so smart. If you want to get people to spend all their time with something, you have to make it worth it. Or, at a minimum, you have to make it fun.

    It’s sort of like a calculated dare. By telling people to “have at it,” he’s acknowledging that there are real risks, but he’s also forcing people to ask the question: “What’s the responsible way to explore this new creative power?”

    Ignoring it won’t make it safer. On the other hand, engaging with it with some measure of humor might.

    Experimentation is the point

    Cuban has argued for years that AI isn’t just another wave of technology. It’s the wave—the one that will separate people who thrive in the next decade from those who don’t. “There are two types of companies in the world,” he says. “Those that are great at AI, and everybody else.”

    The same could be said for people. He isn’t telling everyone to become AI researchers or build the next ChatGPT. He’s telling them to experiment—to play with the tools, learn their limits, and figure out how to make them useful.

    AI, he likes to remind people, “is never the answer—it’s the tool.” What matters is how creatively you use it.

    So when he invites the internet to “deepfake” him on Sora, he’s modeling exactly that mindset. He’s saying: don’t be afraid of it. Don’t moralize about it before you understand it. Try it out. Break things. See what happens.

    The risk in the lesson

    There’s an obvious risk here. An app that makes it this easy to make deepfakes is controversial for good reason. It blurs the line between entertainment and deception, and it does so using work that others created. Public figures have every reason to protect their image, particularly as AI-generated media becomes harder to distinguish from reality.

    Cuban’s move flips that fear on its head. By giving permission, he removes the taboo. If people are going to make deepfakes anyway—and they are—then he might as well be the one to turn it into a teachable moment.

    That’s what makes it brilliant. It’s not about Cuban himself; it’s about how everyone else responds. The more people experiment, the faster we’ll uncover the limits and possibilities of AI video tools like Sora. That’s why Cuban’s invitation matters. It’s a message to the next generation: don’t just watch what AI can do. Do something with it.

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Jason Aten

    Source link

  • OpenAI’s TikTok of AI slop hit one million downloads faster than ChatGPT

    Sora, OpenAI’s app and social network for AI-generated videos, has been downloaded over one million times, according to Sora head Bill Peebles. The app reached one million downloads in less than five days, Peebles says, “even faster than ChatGPT did.” That’s despite OpenAI only making the app available in North America, and its decision to require users to have an invite to actually use it.

    Like TikTok, Sora offers an endless vertical feed of videos, only Sora’s videos are AI-generated rather than uploaded by users. Creating a 10-second video of your own is as simple as writing a prompt to OpenAI’s Sora 2 model in the app. And through Sora’s Cameo feature, you can even create videos of yourself and anyone else who’s agreed to share their likeness with the service.

    The limited guardrails OpenAI has put on Sora have already led to a rash of videos featuring OpenAI’s Sam Altman and content that clearly infringes on copyright. The fact that Sora can so readily create videos of recognizable characters like Pikachu raises questions about what OpenAI’s model was trained on, and has unsurprisingly prompted pushback from the larger entertainment industry.

    In response, the company has updated Sora to give users more control over what videos their likeness can appear in. OpenAI plans to offer similar controls to rights holders, giving them “the ability to specify how their characters can be used (including not at all),” according to Altman. It’s not clear why these controls weren’t available when Sora launched, but both seem like good changes.

    Because of Sora’s invite system, it’s difficult to say whether the over one million downloads the app has received translate to as many users. It’s not unusual for someone to download an app and never use it. Whatever the case, OpenAI’s bet on AI-generated videos seems like it might be a winning one, provided the company finds a way to actually make more money than it loses generating videos for Sora.

    Ian Carlos Campbell

    Source link

  • You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

    The complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week. After initially requiring copyright holders to opt out of having their content appear in Sora-generated videos, CEO Sam Altman announced that the company will be moving to an “opt-in” model that will “give rightsholders more granular control over generation of characters”—and Sora obsessives are not taking it particularly well.

    Given the type of content that was being generated with Sora and shared via the TikTok-style social app that OpenAI launched specifically to host user-generated Sora videos, the change shouldn’t come as a shock. Almost immediately, the platform was inundated with copyrighted material being used in ways that the rightsholders almost certainly did not care for, unless you think Nickelodeon really loved the subversiveness of Nazi SpongeBob. On Monday, the Motion Picture Association became one of the loudest voices calling for OpenAI to put an end to the potential infringement. It didn’t take long for OpenAI to respond and acquiesce.

    In a blog post, Altman said the new approach to copyrighted material in Sora will require rightsholders to opt-in to having their characters and content used—but he’s very sure that copyright holders love the videos, actually. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all),” Altman wrote, stating that his company wants to “let rightsholders decide how to proceed.”

    Altman also admitted, “There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.” It’s unclear if that will play with rightsholders. MPA CEO Charles Rivkin said in a statement that OpenAI “must acknowledge it remains their responsibility—not rightsholders’—to prevent infringement on the Sora 2 service,” and said “Well-established copyright law safeguards the rights of creators and applies here.”

    While OpenAI might be giving copyright holders more control of the outputs of its model, it doesn’t appear that they had much say on the inputs. A report from the Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. It’s not clear that OpenAI went out and got those rights to train Sora 2, but the generator is very good at spitting out accurate recreations of copyrighted material in a way that it could only do if it was fed a whole lot of existing content during training.

    The biggest AI training case thus far saw Anthropic pay out $1.5 billion to settle a copyright infringement case with authors of books the company pirated to train its models. The judge in that case did find that using copyrighted material for training without permission is fair use, though other courts may not agree with that call. Earlier this year, OpenAI asked the Trump administration to call AI model training fair use. So a lot of OpenAI’s strategy around Sora appears to be fucking around and hoping, if it makes the right allies, it’ll never have to find out.

    OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well.

    AJ Dellinger

    Source link

  • Fake Protest Videos Are the Latest AI Slop to Go Viral in MAGA World

    Last week, OpenAI released Sora 2, the latest version of its AI video creator, along with a new Sora app for making and sharing those videos. The new tool has led to an explosion in realistic AI videos on social media, including plenty of fresh discussion about intellectual property rights. But one of the oddest things to emerge with Sora 2 is a new crop of videos showing AI protesters.

    These aren’t just any protest videos. Specifically, supporters of President Donald Trump appear to be making videos of protesters being brutalized by the federal agents and troops who have been sent to U.S. cities. Trump has most recently tried to deploy National Guard troops to Portland and Chicago, and while those deployments have been delayed by the courts, those cities are still crawling with ICE agents terrorizing immigrant communities.

    One of the new fake protester videos, which has over 40 million views on Instagram, shows someone clad in black shouting in the face of a soldier dressed in fatigues.

    “What’s your name, soldier?” the AI protester yells repeatedly. The text on the screen reads “wait for it” before the AI soldier sprays the protester with orange pepper spray, yelling back “Sergeant Pepper.”

    The video has been shared on several platforms, including TikTok and X, where many people don’t seem to understand it’s AI. The actor James Woods, a Trump supporter who frequently shares right-wing memes on X, wrote on Tuesday about the video, “Couldn’t get better than this. Haiku-level brilliant.” Many of the comments seem to be just as oblivious to the fact that it’s AI.

    Another video that’s become popular on Facebook, Instagram, and X shows AI protesters shouting “no queso, no cheese,” apparently a racist riff on the common protest chant “no justice, no peace.” The AI protesters in the video are also sprayed with a chemical agent.

    The video has gotten over 1.5 million views on X alone, with the caption, “Lmfao, that was beautiful and before you ask I voted 100 for this!!”

    The phrase “I voted for this” has become a common thing for far-right supporters of Trump to say when something particularly brutal has happened to their political opponents.

    The Instagram version of the video includes the caption “Liberals acting like clowns – got treated like clowns by federal agents – goodbye – FAFO,” an acronym for “fuck around and find out.” Some users on Instagram have pointed out that it’s AI, but the account insists in the comments that it’s real.

    It’s not real, obviously. And the big Sora watermark should be the clue to anyone who’s familiar with OpenAI. But unfortunately, that kind of watermark isn’t enough for most people these days to differentiate between fake and real content.

    The shorter Sora videos are also being compiled into larger compilations of AI-generated fakes, like in this one on X that includes jokes about the protesters being paid. It’s a common right-wing accusation that all the people protesting Donald Trump are actually paid to be there, often by liberal activists like George Soros.

    The most curious thing about all of these fake protester videos is that there are plenty of real videos to share. For example, a pastor in Chicago was shot in the head with pepper balls by agents last month. Rev. David Black was protesting outside an ICE facility near Chicago, Illinois, and was simply speaking peacefully when he was attacked.

    Another one of several videos showing protesters getting brutalized while doing nothing wrong was recently captured in Portland. A woman is seen simply talking with police before she’s sprayed for absolutely no discernible reason.

    But real videos of a pastor or a woman getting treated with unnecessary violence don’t really fit the narrative that President Trump and his fascist goons are trying to sell. Trump has claimed the reason he’s sending agents and troops to U.S. cities is that they’re overrun with crime. And he just wants to restore law and order.

    In reality, violent crime is near a 50-year low, and Trump is simply trying to strike fear into the hearts of ordinary Americans as his thugs disrupt countless lives. That disruption is having an economic impact, as restaurants in Chicago are comparing the loss of business to the start of the covid-19 pandemic.

    Perhaps that’s why we’re seeing so many fake videos of protests on social media right now. They need a pretext to justify their brutal crackdown on working people. Trump himself seems to have been convinced to deploy the National Guard to Oregon because he saw too many old videos of Portland from the summer of 2020 on TV. And when it comes to Trump, he’s pretty easy to fool.

    Trump has even shared an AI video of himself promoting a magical “med bed” that can cure all diseases. The president’s Truth Social account deleted the video, but it’s still not clear why he shared the conspiracy theory in the first place. It’s entirely possible he thought it was real. We simply don’t know.

    Remember when the president saw a photoshopped image of the letters “MS-13” on someone’s hand back in April and insisted it was real?

    President Donald Trump holds up an image of a hand that’s been photoshopped to add the characters “MS-13” at the White House on April 18, 2025. Image: Truth Social

    Trump is not a smart man. And his followers are even dumber, as they spread AI slop far and wide, creating an alternative reality where sadistic troops dole out punishment to the left on American streets. Unfortunately, you don’t need AI to see so much brutality right now. And if Trump has his way, you’re going to see a lot more of it very soon.

    Matt Novak

    Source link

  • OpenAI’s Sora App Drags Us Into the Litigation Phase of AI

    Well, the AI wars just got worse. Just when I thought the AI platformers had figured out how to temper their conquests and deliver tools that would result in long-term wins for everyone, OpenAI went and launched Sora 2, a one-stop shop for prompt-based short-video copyright infringement, on the iPhone App Store, and it skyrocketed to No. 1 like a bullet, with 164,000 downloads in 48 hours.

    If you were busy this weekend and missed the whole fiasco around OpenAI’s Sora private app release, you missed a parade of prompt-driven AI-generated short videos featuring Ronald McDonald fleeing the police in a burger-shaped car – along with all sorts of protected IP from Nintendo, South Park, and even The Simpsons, whose characters did whatever meme-able things the app’s invite-only users could unleash on an amused public.

    Oh, the lawyers were not amused.

    Especially those who work for the companies holding the IP copyrights for those mostly animated characters. Also unimpressed were the lawyers who work for famous people who were about to be placed in compromising positions – once some idiot decided the world would get a laugh out of seeing Taylor Swift dressed as a Nazi and waving a banana while shoplifting in an adult video store. 

    Yeah, the AI wars went nuclear. There’s no sign of when they’ll get better. Because maybe all OpenAI did was rip a page from the tried-and-true “apologize after instead of ask for permission first” playbook. But I believe they took that old chestnut of a strategy to the next level, using the Sora private release as a test of opt-out versus opt-in for AI.

    Maybe they knew exactly what they were doing. And maybe they got us again. Reckless speculation to follow.

    Modern Generative AI Is Built On IP Theft

    Man, it sounds harsh when you say it out loud but there really isn’t a counter-argument anymore.

    Back in 2010, I co-invented some of the first public-facing generative AI. However, unlike today’s version, our models were developed solely on the private data of our customers, data which never left their possession, giving them total control over how that data was exposed to either their own customers or the general public. 

    Now, if you decide that you want to sell AI not just to specific customers, but to the whole wide world, you’re gonna need – you guessed it – the whole wide world’s data.

    How do you get all that data? Not only that, how do you get permission to use all that data?

    Well, in my experience, you would need to scrape first and apologize later. 

    These shenanigans all came to a head in 2024, and by the end of the year, people like me were raising their hands and asking if we were just going to let everyone get away with the mass theft of all the world’s IP – while also noting that said IP was “housed” in a notoriously poor and unverified data store.

    But the thing is, people like me already knew the answer, because while this was the first time we heard the opt-out/opt-in argument as it related to AI, it was the same opt-out/opt-in argument we had already heard about SEO. The cash to be made from SEO was such a lure that we not only let the bots in, we tweaked our content and added keywords to make it easier for the bots to collect whatever IP they wanted. Just give us that coveted high-ranking search link, please!

    We let it be OK to do that.

    Not only did the AI platformers ride the backs of that process, allegedly, but when they had what they needed, they broke the original promise. No more links for you!

    What do we get in return? Apparently, users get a chance to “engage with their family and friends through their own imaginations.”

    This Is Not About User Imagination

    It never was.

    It’s the same story Facebook gave us when they started letting brands make social accounts, way back in the olden times.

    It’ll be fun. You can interact with Diet Pepsi the same way you interact with your best friend from the third grade. Or your mom. It will deepen relationships with end users and the brands they love. 

    Come on.

    It hasn’t stopped. Just last week, when Hollywood actors and actresses lost their minds over the launch of AI actress Tilly Norwood, the let’s-all-just-calm-down response from the creators was: “AI offers another way to imagine and build stories.”

    I gotta call bullshit here.

    How is it not a way to generate content without paying the people who own the IP – in this case the actors and actresses Tilly was trained on?

    Same thing here with Sora. This has, in my opinion, nothing to do with interacting with friends in a brand new way. It has everything to do with generating content without paying the creators of the IP.

    Opt-Out As Policy Is a Joke

    And it always has been.

    Lawyers: Wait, you can’t do opt-out. It’s a completely onerous burden on the owners of the IP.

    AI platforms: Oops. We just did. We’ll fix it. Here’s some money.

    I think, at this point, generative AI might be serving as a testing ground for the next phase of AI – prescriptive, predictive, autonomous, and agentic, or the more “thinky” AI.

    The rest of that “real” AI is where the “real” money is, but like generative AI, it doesn’t work without a lot of data, and in many cases, it doesn’t work well without proprietary data. 

    And paying the owners for that proprietary IP is really expensive.

    I don’t think copyright law makes a dent here. Copyright law is nothing more than a warning and an excuse to start a legal battle. Besides, the toothpaste is already out of the tube. Because that’s not SpongeBob SquarePants in that Sora video, it’s ShapeySlacks McPhilFish, or whatever derivative you want to slap on it.

    But, that won’t stop the IP owners from trying. Last month, Rolling Stone-owned Penske sued Google. And in the same week Hollywood freaked out about Tilly Norwood, Disney sued Midjourney and sent a cease-and-desist to Character.

    Welcome to the litigation phase of Generative AI, folks. If that’s where we’ve arrived then I don’t see it getting better any time soon. So protect your IP, because during the next phase of “real” AI, that proprietary data is going to be a lot more valuable. 

    If you found some enjoyment in this reckless speculation, please join my email list so I can shoot you a quick heads up when I write something completely from my own human brain. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Joe Procopio

    Source link

  • You can’t libel the dead. But that doesn’t mean you should deepfake them. | TechCrunch

    Zelda Williams, daughter of the late actor Robin Williams, has a poignant message for her father’s fans.

    “Please, just stop sending me AI videos of Dad. Stop believing I wanna see it or that I’ll understand. I don’t and I won’t,” she wrote in a post on her Instagram story on Monday. “If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop. It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.”

    It’s probably not a coincidence that Williams was moved to post this just days after the release of OpenAI’s Sora 2 video model and Sora social app, which gives users the power to generate highly realistic deepfakes of themselves, their friends, and certain cartoon characters.

    That also includes dead people, who are seemingly fair game because it is not illegal to libel the deceased, according to the Student Press Law Center.

    Sora will not let you generate videos of living people — unless it is of yourself, or a friend who has given you permission to use their likeness (or “cameo,” as OpenAI calls it). But these limits don’t apply to the dead, who can mostly be generated without roadblocks. The app, which is still only available via invite, has been flooded with videos of historical figures like Martin Luther King, Jr., Franklin Delano Roosevelt, and Richard Nixon, as well as deceased celebrities like Bob Ross, John Lennon, Alex Trebek, and yes, Robin Williams.

    How OpenAI draws the line on generating videos of the dead is unclear. Sora 2 won’t, for example, generate former President Jimmy Carter, who died in 2024, or Michael Jackson, who died in 2009, though it did create videos with the likeness of Robin Williams, who died in 2014, according to TechCrunch’s tests. And while OpenAI’s cameo feature allows people to set instructions for how they appear in videos others generate of them — guardrails that came in response to early criticism of Sora — the deceased have no such say. I’ll bet Richard Nixon would be rolling over in his grave if he could see the deepfake I made of him advocating for police abolition.

    Deepfakes of Richard Nixon, John Lennon, Martin Luther King, Jr., and Robin Williams.
    Image Credits: Sora, screenshots by TechCrunch

    OpenAI did not respond to TechCrunch’s request for comment on the permissibility of deepfaking dead people. However, it’s possible that deepfaking dead celebrities like Williams is within the firm’s acceptable practices; legal precedent shows that the company likely wouldn’t be held liable for the defamation of the deceased.

    “To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough,’ just so other people can churn out horrible TikTok slop puppeteering them is maddening,” Williams wrote.

    OpenAI’s critics accuse the company of taking a fast-and-loose approach on such issues, which is why Sora was quickly flooded with AI clips of copyrighted characters like Peter Griffin and Pikachu upon its release. CEO Sam Altman originally said that Hollywood studios and agencies would need to explicitly opt out if they didn’t want their IP to be included in Sora-generated videos; he has since said the company will reverse this position. The Motion Picture Association had already called on OpenAI to take action on this issue, declaring in a statement that “well-established copyright law safeguards the rights of creators and applies here.”

    Sora is, perhaps, the most dangerous deepfake-capable AI model accessible to the public so far, given how realistic its outputs are. Other platforms, like xAI’s Grok, lag behind in realism but have even fewer guardrails than Sora, making it possible to generate pornographic deepfakes of real people. As other companies catch up to OpenAI, we will set a horrifying precedent if we treat real people — living or dead — like our own personal playthings.

    Amanda Silberling


  • OpenAI’s Sora soars to No. 3 on the U.S. App Store | TechCrunch

    OpenAI’s Sora app for AI videos is a viral hit, despite being invite-only for now and limited to users in the U.S. and Canada at launch. On its first day, Sora saw 56,000 downloads, and is now ranked as the No. 3 Top Overall app on the U.S. App Store, according to new data from app intelligence provider Appfigures.

    The firm estimates Sora’s iOS app pulled in a total of 164,000 installs during its first two days, September 30th and October 1st.

    The day-one figure puts Sora’s debut ahead of the performance of other major AI app launches, including Anthropic’s Claude and Microsoft’s Copilot, and puts it on par with xAI’s Grok launch. Meanwhile, OpenAI’s ChatGPT and Google’s Gemini iOS apps had somewhat stronger launches, with each reaching at least 80,000 downloads on day one.

    Since Sora is still invite-only, this may not be the fairest comparison, we’ll admit. It’s possible the new video app could have attracted even more installs if it were open to all users.

    Despite this restriction, it’s a fairly strong showing for the new release, indicating demand for AI video tools in consumers’ hands in more of a social networking-like experience. (This is much to the chagrin of some at OpenAI, who want the company to focus on solving harder problems that benefit humanity. But who’s to say that humanity isn’t benefiting from deepfakes of OpenAI CEO Sam Altman asking, “Are my piggies enjoying their slop?”)

    To compare Sora’s early success to other AI apps, Appfigures had to run an analysis that only looked at the other AI apps’ U.S. and Canadian downloads. That’s because the different AI apps on the market have pursued different launch strategies. For instance, ChatGPT initially launched on iOS and limited itself to U.S. users at the time, while Grok limited its iOS-only release to the U.S., Australia, and India. Anthropic, meanwhile, didn’t indicate there were geographic restrictions when it first brought its Claude app to iOS last year.

    For more of an apples-to-apples comparison, Appfigures crunched the numbers to focus only on each app’s U.S. downloads, plus those in Canada, if the app had been available there at launch.


    It found that ChatGPT and Gemini had larger launches than Sora, with 81,000 and 80,000 day-one iOS downloads, respectively. Sora tied with Grok for day-one installs, at 56,000. And it easily beat out the launches from AI apps Claude and Copilot. The former pulled in 21,000 day-one downloads, while the latter only saw 7,000.

    Sora also hit the U.S. App Store’s top charts, becoming the No. 3 overall top app by day two. For comparison, ChatGPT reached No. 1 on its second day, while Grok was No. 4, Gemini was No. 6, Copilot was No. 19, and Claude was No. 78.

    Sarah Perez


  • OpenAI’s Sora app is real but you’ll need an invite to try it

    Well, that was fast. One day after Wired reported that OpenAI was preparing to release a new AI social video app, the company has revealed it to the wider world. It’s called the Sora app, and it’s powered by OpenAI’s new Sora 2 video generation model. As expected, it’s possible to add your likeness to a video you generate using a feature OpenAI calls “Cameo.”

    Right now, Sora is only available on iOS — with no word yet on when it might arrive on Android — and you’ll need an invite from the company. However, once you receive access, you’ll be able to invite four friends to download the software.

    Developing…

    Igor Bonifacic
