ReportWire

Tag: sora 2

  • Japanese Companies Tell OpenAI to Stop Infringing on Their IP


    The Content Overseas Distribution Association (CODA), which represents several major Japanese entertainment companies such as TV studio Toei and game developer Square Enix, recently sent a written request calling on OpenAI to end its unauthorized use of their IP to train its recently launched Sora 2 generative AI.

    Nearly 20 co-signers have accused the tech company of copyright infringement, alleging that a “large portion” of Sora 2 content “closely resembles Japanese content or images [as] a result of using Japanese content as machine learning data.” The letter notes OpenAI’s policy of using copyrighted works unless the owner explicitly asks to opt out, but argues that, under Japanese law, it should instead be an opt-in system, since permission to use copyrighted works is generally required beforehand.

    As such, CODA has made two requests of OpenAI: that its members’ content not be used to train Sora 2 unless permission is given, and that OpenAI “responds sincerely to claims and inquiries from CODA member companies regarding copyright infringement related to Sora 2’s outputs.”

    In mid-October, the Japanese government asked OpenAI to stop infringing on the country’s anime and video game properties, such as One Piece and Demon Slayer. At the time, Minoru Kiuchi, Japan’s minister of state for IP and AI strategy, called such works some of the country’s “irreplaceable treasures,” and other politicians have similarly criticized the video-generation model. Earlier this year, OpenAI CEO Sam Altman talked up being able to create Ghibli-like images via a then-new ChatGPT update, a capability the White House then used to dehumanize immigrants and highlight President Donald Trump’s ongoing deportation efforts.

    At time of writing, OpenAI hasn’t responded to CODA’s request—but in a longer statement, the companies warned they would “take appropriate legal and ethical action against copyright infringement, regardless of whether we use generative AI.”

    [via Automaton]



    Justin Carter


  • Cameo CEO: Sora 2 AI Slop Poses an ‘Existential’ Threat to Our Business


    Could AI-generated videos of celebrities upend the creator economy’s old guard? That’s the question posed in a recent lawsuit filed by video messaging app Cameo against OpenAI—the latest legal action thrown at the AI juggernaut this year. 

    OpenAI’s Sora 2 video-generation app includes a new feature that allows users to generate synthetic video of themselves on demand. To promote the feature, celebrity likenesses, including those of investor Mark Cuban and creator Jake Paul, are already available on the app, letting select Sora users flood their friends’ inboxes with personalized greetings from the rich and famous.

    For the company Cameo, Sora 2’s latest feature is particularly grating—largely because the feature is also called Cameo, says Steve Galanis, CEO of the creator-economy app, which was founded in Chicago in 2017. “I’m not concerned about competition. If that model beats our model, I don’t like to lose, but that’s okay,” Galanis tells Inc. “What we’re very specifically fighting is calling it what they did. So forget the technology, forget the business model.”

    The complaint, filed Tuesday in a California Federal District Court, says that by using the same name for its new feature, Sora 2 poses an “imminent, existential, and potentially lethal threat” to Cameo’s business. 

    Though Sora 2’s Cameo is only in beta and available via invite to users in select markets, synthetic celebrity videos have swarmed social media since the app’s update on September 30. The wave of fake videos has ensnared Galanis’s company in the kind of confusion that’s only become more common in the age of deepfakes and online deception. He says that when users enter customer-service queries about Sora 2’s Cameo into ChatGPT, the chatbot serves up links back to the original Cameo.

    Sora 2’s update also has social media users falsely attributing the synthetic celebrity videos to Cameo, Galanis claims. “From a customer confusion perspective, as these videos are coming out, people are tagging Cameo on TikTok and Instagram with these Sora videos.” This problem will only compound if the product is rolled out globally, the Cameo chief argues. “Millions of AI slop videos coming over our search results could be existential to our business.”

    The complaint seeks injunctive relief and an unspecified amount of monetary damages, and alleges a variety of offenses, including trademark dilution, trademark infringement, and unfair competition. Galanis says that Cameo sent OpenAI a cease-and-desist letter earlier this month, but Sora’s celebrity feature was still released under the same name.

    OpenAI CEO Sam Altman has steered his company to titanic prominence in the field as the company reportedly gears up for an IPO at a $1 trillion valuation. Altman wrote on his personal blog earlier this month on the heels of the Sora update: “We will make some good decisions and some missteps, but we will take feedback and try to fix the missteps very quickly.” 

    An OpenAI spokesperson tells Inc: “We’re reviewing the complaint, but we disagree that anyone can claim exclusive ownership over the word ‘cameo.’”

    “They’ve been a bad actor here,” Galanis says. “The genie is not going back in the bottle; these technologies are here to stay. But I do think that when they’re rolled out in a really disgusting way like that, it turns a lot of people off.”

    Ironically, Galanis says he’s fed the complaint into OpenAI’s flagship product ChatGPT to assess the strength of Cameo’s case. “It’s an interesting read,” he says with a smirk. 


    Sam Blum


  • Bryan Cranston Was Bothered by Sora 2, But Now He’s Praising OpenAI


    If you were following the Sora 2 news closely when the limited public release of OpenAI’s new video generator started on September 30, you may have noticed some unsettling videos featuring the likeness and voice of iconic TV actor Bryan Cranston—typically in character as Breaking Bad protagonist Walter White. Cranston evidently saw those too, and he found them troubling enough that he reportedly contacted his union, SAG-AFTRA, about it.

    But good news: OpenAI has apparently addressed Cranston’s misgivings, and he’s praising the company publicly now.

    In a statement released Monday by SAG-AFTRA (via Deadline), Cranston stated that initially he was “deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way.”

    To be specific, he might have been concerned about this video set in a strip mall parking lot in which Cranston (appearing as Walter White) and deceased pop musician Michael Jackson announce to Jackson’s vlog viewers that they’ve been hanging out.

    Perhaps he also saw this more elaborate work of fan fiction in which Cranston and the rest of the core Breaking Bad cast are in what appears to be the Vietnam War:

    On October 8, Cranston’s agency, CAA, released an indignant statement about Sora 2, asking in part:

    The question is, does OpenAI and its partner companies believe that humans, writers, artists, actors, directors, producers, musicians, and athletes deserve to be compensated and credited for the work they create? Or does OpenAI believe they can just steal it, disregarding global copyright principles and blatantly dismissing creators’ rights, as well as the many people and companies who fund the production, creation, and publication of these humans’ work?

    On Monday, however, Cranston had seen something he liked, and was no longer upset. He announced that he was “grateful to OpenAI for its policy and for improving its guardrails.”

    Additionally, Deadline says SAG-AFTRA, OpenAI, the Association of Talent Agents, United Talent Agency, and Creative Artists Agency all released a related joint statement including the following: “While from the start it was OpenAI’s policy to require opt-in for the use of voice and likeness, OpenAI expressed regret for these unintentional generations. OpenAI has strengthened guardrails around replication of voice and likeness when individuals do not opt-in.”

    On October 3, well before CAA’s angry statement about Sora 2, OpenAI CEO Sam Altman had painted a slightly different picture of OpenAI’s copyright policy. He wrote in a blog post that, in light of how the product was being used, OpenAI “will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls,” and added, “We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.”

    We asked OpenAI to clarify the timeline of the Sora 2 copyright policy, and will update if we hear back.

    Altman wrote in that same post that OpenAI is “going to have to somehow make money for video generation.”


    Mike Pearl


  • The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission


    Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what might be his most impossible gig yet: as OpenAI’s VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that’s ever claimed to be different.

    I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy or entirely successful. Lehane is genuinely good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

    But good intentions don’t mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.

    The company’s Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already getting sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves and of OpenAI CEO Sam Altman; of characters like Pikachu and “South Park”’s Cartman; and of dead celebrities like Tupac Shakur.

    Asked what drove OpenAI’s decision to launch this newest version of Sora with these characters, Lehane offered that Sora is a “general purpose technology” like the printing press, democratizing creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.

    What he danced around is that OpenAI initially “let” rights holders opt out of having their work used to train Sora, which is not how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it “evolved” toward an opt-in model. That’s not iterating. That’s testing how much you can get away with. (By the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)

    Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, that American legal doctrine that’s supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.

    Maybe. But I’d recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. “It’s ‘iterative’,” I said, “but it’s also a replacement.”

    Lehane listened and dropped his spiel. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, is what I heard.)

    Then there’s the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened the adoption of AI to the advent of electricity – saying those who accessed it last are still playing catch-up – yet OpenAI’s Stargate project is seemingly targeting some of those same economically challenged places to set up facilities with their attendant and massive appetites for water and electricity.

    Asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs to bring on about a gigawatt of new capacity per week, he noted. China brought on 450 gigawatts last year, plus 33 nuclear facilities. If democracies want democratic AI, he said, they have to compete. “The optimist in me says this will modernize our energy systems,” he said, painting a picture of a re-industrialized America with transformed power grids.

    It was inspiring, but it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of The Notorious B.I.G. It’s worth noting that video generation is the most energy-intensive AI out there.

    There’s also a human cost, one made clearer the day before our interview, when Zelda Williams logged onto Instagram to beg strangers to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

    When I asked about how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. “There is no playbook for this stuff, right?”

    Lehane showed vulnerability in some moments, saying he recognizes the “enormous responsibilities that come with” all that OpenAI does.

    Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I’d watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn’t even agree with. Then news broke that complicated that already complicated picture.

    Nathan Calvin, a lawyer who works on AI policy at a nonprofit advocacy organization, Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to Calvin’s house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.

    Calvin says the move was part of OpenAI’s intimidation tactics around a new piece of AI regulation, California’s SB 53, an AI safety bill. He says the company weaponized its ongoing legal battle with Elon Musk as a pretext to target critics, implying Encode was secretly funded by Musk. Calvin adds that he fought OpenAI’s opposition to the bill, and that when he saw OpenAI claim it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane, specifically, the “master of the political dark arts.”

    In Washington, that might be a compliment. At a company like OpenAI whose mission is “to build AI that benefits all of humanity,” it sounds like an indictment.

    But what matters much more is that even OpenAI’s own people are conflicted about what they are becoming.

    As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2 was released, expressing their misgivings. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote about Sora 2 that it is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

    On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

    It’s worth pausing to think about that. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” isn’t on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.

    It’s a crystallizing moment, one whose contradictions may only intensify as OpenAI races toward artificial general intelligence. It also has me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe it.


    Connie Loizos


  • You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out


    The complete copyright free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week. After initially requiring copyright holders to opt out of having their content appear in Sora-generated videos, CEO Sam Altman announced that the company will be moving to an “opt-in” model that will “give rightsholders more granular control over generation of characters”—and Sora obsessives are not taking it particularly well.

    Given the type of content that was being generated with Sora and shared via the TikTok-style social app that OpenAI launched specifically to host user-generated Sora videos, the change shouldn’t come as a shock. Almost immediately, the platform was inundated with copyrighted material being used in ways that the rightsholders almost certainly did not care for, unless you think Nickelodeon really loved the subversiveness of Nazi SpongeBob. On Monday, the Motion Picture Association became one of the loudest voices calling for OpenAI to put an end to the potential infringement. It didn’t take long for OpenAI to respond and acquiesce.

    In a blog post, Altman said the new approach to copyrighted material in Sora will require rightsholders to opt in to having their characters and content used—but he’s very sure that copyright holders love the videos, actually. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all),” Altman wrote, stating that his company wants to “let rightsholders decide how to proceed.”

    Altman also admitted, “There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.” It’s unclear whether that will satisfy rightsholders. MPA CEO Charles Rivkin said in a statement that OpenAI “must acknowledge it remains their responsibility—not rightsholders’—to prevent infringement on the Sora 2 service,” and that “Well-established copyright law safeguards the rights of creators and applies here.”

    While OpenAI might be giving copyright holders more control of the outputs of its model, it doesn’t appear that they had much say on the inputs. A report from the Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. It’s not clear that OpenAI went out and got those rights to train Sora 2, but the generator is very good at spitting out accurate recreations of copyrighted material in a way that it could only do if it was fed a whole lot of existing content during training.

    The biggest AI training case thus far saw Anthropic pay $1.5 billion to settle a copyright infringement suit with authors whose books the company pirated to train its models. The judge in that case found that training on the copyrighted books itself qualified as fair use (though keeping pirated copies of them did not), and other courts may not agree with that call. Earlier this year, OpenAI asked the Trump administration to declare AI model training fair use. So a lot of OpenAI’s strategy around Sora appears to be fucking around and hoping, if it makes the right allies, it’ll never have to find out.

    OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well.


    AJ Dellinger


  • OpenAI’s Sora App Drags Us Into the Litigation Phase of AI


    Well, the AI wars just got worse. Just when I thought the AI platformers had figured out how to temper their conquests and deliver tools that would result in long-term wins for everyone, OpenAI went and launched Sora 2, a one-stop shop for prompt-based short-video copyright infringement, on the iPhone App Store, and it skyrocketed to number one like a bullet, with 164,000 downloads in 48 hours.

    If you were busy this weekend and missed the whole fiasco around OpenAI’s private release of the Sora app, you missed a parade of prompt-driven AI-generated short videos featuring Ronald McDonald fleeing the police in a burger-shaped car – along with all sorts of protected IP, from Nintendo to South Park to Simpsons characters, doing whatever meme-able things the app’s invite-only users could unleash on an amused public.

    Oh, the lawyers were not amused.

    Especially those who work for the companies holding the IP copyrights for those mostly animated characters. Also unimpressed were the lawyers who work for famous people who were about to be placed in compromising positions – once some idiot decided the world would get a laugh out of seeing Taylor Swift dressed as a Nazi and waving a banana while shoplifting in an adult video store. 

    Yeah, the AI wars went nuclear. No sign of when they’ll get better. Because maybe all OpenAI did was rip a page from the tried-and-true “apologize after instead of ask for permission first” playbook. But I believe they took that old chestnut of a strategy to the next level, using the Sora private release as a test of opt-out versus opt-in for AI.

    Maybe they knew exactly what they were doing. And maybe they got us again. Reckless speculation to follow.

    Modern Generative AI Is Built On IP Theft

    Man, it sounds harsh when you say it out loud but there really isn’t a counter-argument anymore.

    Back in 2010, I co-invented some of the first public-facing generative AI. However, unlike today’s version, our models were developed solely on the private data of our customers, data which never left their possession, giving them total control over how that data was exposed to either their own customers or the general public. 

    Now, if you decide that you want to sell AI not just to specific customers, but to the whole wide world, you’re gonna need – you guessed it – the whole wide world’s data.

    How do you get all that data? Not only that, how do you get permission to use all that data?

    Well, in my experience, you would need to scrape first and apologize later. 

    These shenanigans all came to a head in 2024, and by the end of the year, people like me were raising their hands and asking if we were just going to let everyone get away with the mass theft of all the world’s IP – while also noting that said IP was “housed” in a notoriously poor and unverified data store.

    But the thing is, people like me already knew the answer, because while this was the first time we heard the opt-out/opt-in argument as it related to AI, it was the same opt-out/opt-in argument we had already heard about SEO. SEO was such a lure of cash to be made that we not only let the bots in, we tweaked the content and added the keywords to make it easier for the bots to collect whatever IP they wanted. Just give us that coveted high-ranking search link, please!

    We let it be OK to do that.

    Not only did the AI platformers ride the backs of that process, allegedly, but when they had what they needed, they broke the original promise. No more links for you!

    What do we get in return? Apparently, users get a chance to “engage with their family and friends through their own imaginations.”

    This Is Not About User Imagination

    It never was.

    It’s the same story Facebook gave us when they started letting brands make social accounts, way back in the olden times.

    It’ll be fun. You can interact with Diet Pepsi the same way you interact with your best friend from the third grade. Or your mom. It will deepen relationships with end users and the brands they love. 

    Come on.

    It hasn’t stopped. Just last week, when Hollywood actors and actresses lost their minds over the launch of AI actress Tilly Norwood, the let’s-all-just-calm-down response from the creators was: “AI offers another way to imagine and build stories.”

    I gotta call bullshit here.

    How is it not a way to generate content without paying the people who own the IP – in this case the actors and actresses Tilly was trained on?

    Same thing here with Sora. This has, in my opinion, nothing to do with interacting with friends in a brand new way. It has everything to do with generating content without paying the creators of the IP.

    Opt-Out As Policy Is a Joke

    And it always has been.

    Lawyers: Wait, you can’t do opt-out. It’s a completely onerous burden on the owners of the IP.

    AI platforms: Oops. We just did. We’ll fix it. Here’s some money.

    I think, at this point, generative AI might be serving as a testing ground for the next phase of AI – prescriptive, predictive, autonomous, and agentic, or the more “thinky” AI.

    The rest of that “real” AI is where the “real” money is, but like generative AI, it doesn’t work without a lot of data, and in many cases, it doesn’t work well without proprietary data. 

    And paying the owners for that proprietary IP is really expensive.

    I don’t think copyright law makes a dent here. Copyright law is nothing more than a warning and an excuse to start a legal battle. Besides, the toothpaste is already out of the tube. Because that’s not SpongeBob SquarePants in that Sora video, it’s ShapeySlacks McPhilFish, or whatever derivative you want to slap on it.

    But that won’t stop the IP owners from trying. Last month, Penske Media, which owns Rolling Stone, sued Google. And in the same week Hollywood freaked out about Tilly Norwood, Disney sued Midjourney and sent a cease-and-desist to Character.AI.

    Welcome to the litigation phase of Generative AI, folks. If that’s where we’ve arrived then I don’t see it getting better any time soon. So protect your IP, because during the next phase of “real” AI, that proprietary data is going to be a lot more valuable. 

    If you found some enjoyment in this reckless speculation, please join my email list so I can shoot you a quick heads up when I write something completely from my own human brain. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.


    Joe Procopio


  • The First 24 Hours of Sora 2 Chaos: Copyright Violations, Sam Altman Shoplifting, and More


    On Tuesday, OpenAI released Sora 2, the latest version of its video and audio generation tool that it promised would be the “most powerful imagination engine ever built.” Less than a day into its release, it appears the imaginations of most people are dominated by copyrighted material and existing intellectual property.

    In tandem with the release of its newest model, OpenAI also dropped a Sora app, designed for users to generate and share content with each other. While the app is currently invite-only, even if you just want to see the content, plenty of videos have already made their way to other social platforms. The videos that have taken off outside of OpenAI’s walled garden contain lots of familiar characters: Sonic the Hedgehog, Solid Snake, Pikachu.

    There does appear to be at least some content that’s off-limits in OpenAI’s video generator, though. Users have reported that the app rejects requests to produce videos featuring Darth Vader and Mickey Mouse, for instance. That restriction appears to be the result of OpenAI’s new approach to copyrighted material, which is pretty simple: “We’re using it unless we’re explicitly told not to.” The Wall Street Journal reported earlier this week that OpenAI has approached movie studios and other copyright holders to inform them that they will have to opt out of having their content appear in Sora-generated videos. Disney did exactly that, per Reuters, so its characters should be off-limits for content created by users.

    That doesn’t mean the model wasn’t trained on that content, though. Earlier this month, The Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. For instance, WaPo was able to create a short video clip that closely resembled the Netflix show “Wednesday,” down to the font displayed and a model that looks suspiciously like Jenna Ortega’s take on the titular character. Netflix told the publication it did not provide content to OpenAI for training.

    The outputs of Sora 2 reveal that it’s clearly been fed its fair share of copyrighted material, too. For instance, users have managed to generate scenes from “Rick and Morty,” complete with relatively accurate-sounding voices and art style. (Though, if you go outside of what the model knows, it seems to struggle. A user put OpenAI CEO Sam Altman into the “Rick and Morty” universe, and he looks troublingly out of place.)

    Other videos at least attempt to be a little creative about how they use copyrighted characters. Users have, for instance, thrown Ronald McDonald into an episode of “Love Island” and created a fake video game that teams up Tony Soprano from The Sopranos and Kirby from, well, Kirby.

    Interestingly, not all potential copyright violations come from users who are explicitly asking for it. For instance, one user gave Sora 2 the prompt “A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors,” and it just straight up spit out an anime-style version of The NeverEnding Story. Even when users aren’t actively calling upon the model to create derivative art, it seems like it can’t help itself.

    “People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connection with the fans,” a spokesperson for OpenAI told Gizmodo. “We’re working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora.”

    There is one other genre of popular and potentially legally dubious content that has taken off among Sora 2 users, too: the Sam Altman cinematic universe. OpenAI claims that users are not able to generate videos that use the likeness of other people, including public figures, unless those figures upload their likeness and give explicit permission. Altman apparently has given his OK (which makes sense; he’s the CEO, and he was featured prominently in the company’s fully AI-generated promotional video for Sora 2’s launch), and users are making the most of having access to his image.

    One user claimed to have the “most liked” video in the Sora social app, which depicted Altman getting caught shoplifting GPUs from Target. Others have turned him into a skibidi toilet, a cat, and, perhaps most fittingly, a shameless thief stealing creative materials from Hayao Miyazaki.

    There are some questions about the use of brands and trademarks in these videos, too. In the video of Altman in Target, for instance, how does Target feel about its logo and store likeness being used? Another user inserted their own likeness into an NFL game, which seems to pretty clearly use the logos of the New York Giants, Dallas Cowboys, and the NFL itself. Is that considered kosher?

    OpenAI obviously wants people to lend their likeness to the app, as it creates a lot more avenues for engagement, which seems to be its primary currency right now. But the Altman examples seem instructive as to the limits of this: It’s hard to imagine that too many public figures are going to submit themselves to the humiliation ritual of allowing other people to control their image. Worse, imagine the average person getting their likeness dropped into a video that depicts them committing a crime and the potential social ramifications they might face.

    A spokesperson for OpenAI said Altman has made his likeness available for anyone to play with, and users who verify their likeness in Sora can set who can make use of it: just the user, mutual friends, select friends, or everyone. The app also gives users the ability to see any video in which their likeness has been used, including those that are not published, and to revoke access or remove a video containing their image at any time. The spokesperson also said that videos contain metadata showing they are AI-generated and carry a visible watermark indicating they were created with Sora.

    There are, of course, ways to defeat all that. The fact that a video can be deleted from Sora doesn’t mean that an exported version can be deleted. Likewise, the watermark could be cropped out. And most people aren’t checking the metadata of videos to ensure authenticity. What the fallout of this looks like, we will have to see, but there will be fallout.


    AJ Dellinger


  • OpenAI Officially Launches Video Generator Sora 2, Now With Social Feed


    Fake videos are about to look less fake, for better or worse. On Tuesday, OpenAI announced the release of Sora 2, the latest version of its flagship model for audio and video generation. And, as previously reported, the launch of the model is accompanied by a new social app designed to allow people to share their AI-generated videos, creating an endless scroll of uncanny content that will almost surely further fry people’s brains.

    In an announcement video that the company claims was completely generated by Sora 2, a fabricated version of OpenAI CEO Sam Altman called the model “the most powerful imagination engine ever built.” The focus of the update appears to be what OpenAI calls “world simulation,” attempting to accurately recreate the physics of the real world. The company put a strong emphasis on videos of people moving in realistic ways. It admits the model is still imperfect, but claims Sora 2 is “better about obeying the laws of physics compared to prior systems.” OpenAI also claims the model is much better at following intricate instructions and can now produce multiple different shots from a single prompt.

    Then there’s the Sora app. As previously reported by Wired, the Sora app is a feed made up entirely of videos created via OpenAI’s video generation model. The app features a vertical scroll to move through videos, which are served up based on a recommendation algorithm. Users will be able to insert themselves into a video through a feature called “cameo,” which requires users to record a video of themselves to verify their identity and, in OpenAI’s words, “capture your likeness.” Insert ominous needle drop here. Other users will also be able to use your likeness in videos.

    While your likeness can be used by others, OpenAI insists that the user is in control. “Only you decide who can use your cameo, and you can revoke access or remove any video that includes it at any time. Videos containing cameos of you, including drafts created by other people, are viewable by you at any time,” it said.

    OpenAI did make a point to insist that it is rolling out the Sora social app “responsibly.” Users will be able to set their own feed by telling the model what they want to see, and the company claims it’s “not optimizing for time spent in feed, and we explicitly designed the app to maximize creation, not consumption.” Creation is also time spent in-app, of course, but whatever. The company also claims that teenage users will be subjected to strict limits, with a cap on how many videos they can view per day and restrictions on how their likeness can be used.

    The company also said it doesn’t currently have a plan to monetize the app… except for the plan it has to monetize the app. “Transparently, our only current plan is to eventually give users the option to pay some amount to generate an extra video if there’s too much demand relative to available compute. As the app evolves, we will openly communicate any changes in our approach here, while continuing to keep user wellbeing as our main goal.” Basically, there will eventually be a limit on how much you can generate unless you pay for more. And it’s not hard to imagine ads working their way into the feed, but the company isn’t specifically saying that will come (though it also doesn’t explicitly rule it out).

    The Sora app, which is invite-only for now, can be downloaded on iOS by users in the US and Canada, with plans to roll out beyond North America soon. It’ll be free to start. As for Sora 2 itself, it’s available for ChatGPT Pro users (that’s the $200 per month tier) for now.


    AJ Dellinger
