ReportWire

Tag: intellectual property

  • Hollywood groups condemn ByteDance’s AI video generator, claim copyright infringement

    A new artificial intelligence video generator from Beijing-based ByteDance, the creator of TikTok, is drawing the ire of Hollywood organizations that say Seedance 2.0 “blatantly” violates copyright and uses the likeness of actors and others without permission.

    Seedance 2.0, which is only available in China for now, lets users generate high-quality AI videos from simple text prompts. The tool quickly drew condemnation from the movie and TV industry.

    The Motion Picture Association said Seedance 2.0 “has engaged in unauthorized use of U.S. copyrighted works on a massive scale.”

    “By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs. ByteDance should immediately cease its infringing activity,” Charles Rivkin, chairman and CEO of the MPA, said in a statement Tuesday.

    Screenwriter Rhett Reese, who wrote the “Deadpool” movies, said on X last week that “I hate to say it. It’s likely over for us.” His post was in response to Irish director Ruairí Robinson’s post of a Seedance 2.0 video that shows AI versions of Tom Cruise and Brad Pitt fighting in a post-apocalyptic wasteland.

    Actors union SAG-AFTRA said Friday it “stands with the studios in condemning the blatant infringement” enabled by Seedance 2.0.

    “The infringement includes the unauthorized use of our members’ voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood,” SAG-AFTRA said in a statement. “Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent. Responsible AI development demands responsibility, and that is nonexistent here.”

    ByteDance said in a statement Sunday that it respects intellectual property rights.

    “(We) have heard the concerns regarding Seedance 2.0. We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users,” the company said.

  • Minions hit Olympic ice: Spanish skater close to music approval

    MILAN — It appears as if those troublemaking Minions will be taking the Olympic ice after all.

    Spanish figure skater Tomas-Llorenc Guarino Sabate said after practice early Thursday that he has received the necessary approval for three of the four music cuts he needs to perform his short program. The only piece missing from his medley is “Freedom” by Pharrell Williams, and the American musician and producer has been sympathetic to his plight.

    “They are discussing it,” Sabate told The Associated Press and a few other reporters. “He seems to be OK, but there’s problems because he’s restricted by his label. A lot of technical stuff. But they are working to make it happen.”

    Sabate was optimistic enough to practice his Minions-themed program shortly after 7 a.m. local time inside a nearly empty Milano Ice Skating Arena. The program opens with peals of laughter from the characters before going into music from the film franchise.

    Sabate had performed the program all season, believing he had gone through the proper protocols in a system called ClicknClear to obtain the necessary permissions. But then on Friday, Universal Studios stepped in, asking for more details not only on the music being used but also the blue-and-yellow Minions-themed outfit that Sabate plans to wear.

    Suddenly, the prospect of performing the Minions program looked so dire that Sabate began practicing last year’s program, set to music by the Bee Gees. The big problem with that plan: He used the same music for his free skate this season.

    “Then people started sharing, reposting, sending so much support and love to me,” Sabate said. “The next thing I know, I wake up Tuesday with I don’t know how many messages. … And I think Tuesday night I had a message from people telling me Universal had changed their mind, and you have the rights to the first two pieces of music.”

    One of the two remaining pieces turned out to be by a Spanish artist, so Sabate reached out to him on social media. They spoke over the phone, and he was able to get approval. That left only the Pharrell Williams piece in question.

    The copyright problem is relatively new in figure skating. For years, music using lyrics was not allowed, and classical music and other standard fare was part of the public domain, meaning it could be used or modified freely and without permission.

    That changed in 2014, when the International Skating Union began to allow words. Fast-forward to the 2022 Beijing Olympics, and one of the indie artists who covered “House of the Rising Sun” objected to the use of their work by American pairs skaters Alexa Knierim and Brandon Frazier. The ensuing lawsuit prompted the ISU to develop systems to help skaters get proper permissions.

    The process remains confusing and full of pitfalls.

    In fact, Sabate isn’t the only one at the Milan Cortina Olympics affected by it.

    Two-time world medalist Loena Hendrickx of Belgium had been performing her short program to “Ashes” by Celine Dion from the film “Deadpool 2.” But after the European championships last month, her brother and coach, Jorik Hendrickx, and choreographer Adam Solya grew concerned that the music would not be approved for the Olympics, forcing them to change course.

    Hendrickx is now performing what is largely the same program to “I Surrender,” another song by Dion, which has the same feel as “Ashes.” She was able to obtain permission for that piece because it is part of ClicknClear’s catalogue of licenses.

    Other skaters also have had to make minor modifications to their Olympic programs over the past few weeks.

    “We don’t want athletes to be worried about the music,” ISU president Jae Youl Kim told AP recently. “It’s really complicated because sometimes one piece of music is owned by 16 different individuals and entities, different rights holders. So actually we are taking a different approach. We are talking directly with the major music labels: ‘Guys, these are young skaters. How can we find a solution that works for everybody?’ We’re still in discussions. But this is something that we are very seriously committed to.”

    ___

    AP Winter Olympics: https://apnews.com/hub/milan-cortina-2026-winter-olympics

  • NRA Sues Charity Arm After Alleged Takeover by LaPierre Allies

    WASHINGTON, Jan 6 (Reuters) – The National Rifle Association has sued its charity affiliate, alleging a “disgruntled faction of former NRA directors” has seized the affiliate to turn it into a competitor and misused nearly $160 million in NRA funds.

    The NRA’s lawsuit against the NRA Foundation, filed on Monday in Washington, D.C., federal court, said the foundation was taken over by allies of former CEO Wayne LaPierre in an attempt to sever it from the larger group.

    The lawsuit accused the foundation of breaching their contract, infringing trademarks and illegally diverting charitable assets. The NRA requested a court order blocking the foundation from misusing NRA money or trademarks.

    “This is a disappointing day, and it should not have come to this,” NRA CEO Doug Hamlin said in a statement. “A foundation established to support the National Rifle Association of America has taken actions that are adversarial at a time when the NRA is rebuilding and focused on its long-term mission.”

    Spokespeople for the foundation did not immediately respond to a request for comment on Tuesday.

    LaPierre, 76, led the NRA for more than three decades. A New York state jury found him liable in 2024 for mismanaging the group and costing it millions of dollars to support his lavish lifestyle. A state judge later banned LaPierre from serving as an NRA officer or director for 10 years.

    According to Monday’s lawsuit, NRA members seeking to reform the group obtained a board majority in 2025 over the “old guard” affiliated with LaPierre.

    The NRA’s complaint said that the foundation had been taken over by LaPierre allies seeking to “jettison the Foundation’s historic purpose of supporting NRA’s charitable programs and transform the Foundation into a vehicle for personal reprisal.”

    The group accused the foundation of “hijacking” its trademarks for fundraising efforts and repurposing money meant to support its charity work.

    (Reporting by Blake Brittain in Washington; Editing by David Bario and Alistair Bell)

    Copyright 2026 Thomson Reuters.

    [ad_2]

    Reuters

    Source link

  • Exclusive: AI for patent filings startup Ankar secures $20 million Series A round | Fortune

    Two former Palantir employees hoping to use AI to transform the process for filing and managing patents have secured $20 million in investment for their London-based startup, Ankar.

    The Series A funding round for Ankar was led by venture capital firm Atomico, with participation from Index Ventures, Norrsken, and Daphni. The company had announced a £3 million ($4 million) seed round in May that was led by Index, with support from Daphni and Motier Ventures.

    Ankar was founded by Tamar Gomez and Wiem Gharbi in 2024. The pair met while working at Palantir, where they both encountered the time-consuming process of trying to obtain patents for new technology. Gomez, who has a business background, worked as a development strategist for Palantir, while Gharbi, who is a data scientist by training, worked on machine learning applications. They took the name Ankar for their new company from the name of an omniscient and powerful knight found in pre-Islamic poetry. 

    “We are trying to turn IP that has been viewed as a cost center for a very long time into more of a strategic and competitive asset that we need today in a world that is becoming more and more competitive,” Gharbi, who is Ankar’s chief technology officer, told Fortune.

    The new funding for Ankar comes as intellectual property has become increasingly critical to corporate value. Intangible assets like IP now represent up to 90% of the value of S&P 500 companies, according to the World Intellectual Property Organization. Yet the systems for protecting those assets remain stubbornly outdated, according to Gomez and Gharbi, who say they witnessed how time-consuming and difficult it is to obtain a patent when they were working at Palantir.

    “To go from something that’s in the head of the inventor—an innovation—to something that is a bankable asset that can be leveraged by the company in the form of a patent took years, basically,” Gomez, who is Ankar’s CEO, said. “The tools to do so were incredibly legacy or just non-existent. It was like a hodgepodge of manual processes.”

    Patent attorneys can spend weeks searching multiple databases and reading patent filings to determine the extent to which, if any, prior patents might conflict with the new invention they are hoping to protect. Then it can take many more weeks to craft a patent application with the right arguments to overcome any objections from patent examiners. Securing a patent can take up to 24 months.

    Ankar wants to use large language models to streamline that process. Because these models can search for phrasing that has the same meaning, even if it doesn’t use the exact same keywords, they can quickly surface patent filings from databases that previously would have taken multiple searches and hours of reading to discover.

    The startup’s invention discovery tool searches across 150 million patent applications and 250 million scientific publications and produces reports assessing how “novel” an invention is and what claims have already been made by previously patented inventions that might be similar (what’s known in the patent world as “prior art”). The platform helps inventors harvest their ideas and guides patent attorneys through drafting applications, including spotting gaps in existing patents where claims for a new invention might get the most traction. It also supports patent lawyers when they have to respond to possible challenges from patent examiners, giving them a single view of the entire history of the application process.

    “Patent claims are basically the scope of protection for your invention—like, what are the most important pieces of my invention that I want to protect? [Ankar’s] tool can help suggest an initial set of claims and then help the patent attorney think through potential options for broadening these claims,” Gharbi said. “So it’s no longer about just helping you kind of generate words, because we think that the value of just generating words is going to decrease over time. It’s going to become more about like, how do I generate the best qualities of the scope of protection?”

    The company has secured some notable early customers, including global cosmetics giant L’Oréal and global law firm Vorys. Ankar says that so far its customers have reported an average 40% boost in productivity, with hundreds of hours shifted to high-value strategic work.

    Jean-Yves Legendre, competitive IP intelligence manager at L’Oréal, praised Ankar in a statement, saying that the startup “understood patents, spoke our language, and adapted to our needs.”

    Many global companies, particularly in automotive, electronics, and other R&D-heavy sectors, are redoubling efforts to protect their intellectual property, concerned that generative AI will make it easier for competitors to replicate product designs, architectures, and processes. At the same time, many companies are eager to record and protect their IP because they want to use it to train or fine-tune their own AI models to help boost productivity.

    Ankar plans to use the new funding to double its current 20-person headcount and expand its engineering, product, design, and go-to-market teams across Europe and the U.S.

    This story was originally featured on Fortune.com

    Jeremy Kahn

  • US Jury Says Apple Must Pay Masimo $634 Million in Smartwatch Patent Case

    (Reuters) – A federal jury in California said on Friday that Apple owes medical-monitoring technology company Masimo $634 million for infringing a patent covering blood-oxygen reading technology.

    The jury agreed with Masimo that the Apple Watch’s workout mode and heart rate notification features violated Masimo’s patent rights, a Masimo spokesperson confirmed.

    An Apple spokesperson said that the company disagrees with the verdict and will appeal. Masimo, in a statement, called the verdict “a significant win in our ongoing efforts to protect our innovations and intellectual property.”

    The California lawsuit is one branch of a contentious, multi-front patent fight between Apple and Irvine, California-based Masimo, which has accused Apple of hiring away its employees and stealing its pulse oximetry technology to use in Apple Watches.

    The dispute led a U.S. trade tribunal to block imports of Apple’s Series 9 and Ultra 2 smartwatches in 2023 after finding that Apple’s technology infringed Masimo’s patents. Apple removed blood-oxygen reading technology from its watches to avoid the ban and reintroduced an updated version of the technology in August with approval from U.S. Customs and Border Protection.

    The trade tribunal, the U.S. International Trade Commission, separately decided on Friday to hold a new proceeding to determine whether Apple’s updated watches should be subject to the ban.

    Masimo has an ongoing lawsuit against Customs over the decision. Apple has separately challenged the import ban at a federal appeals court.

    A California judge declared a mistrial in Masimo’s trade-secret case against Apple in 2023 after a jury failed to reach a unanimous verdict. Apple won a minimal $250 verdict against Masimo in Delaware last year over allegations that Masimo’s smartwatches infringe two Apple design patents.

    (Reporting by Blake Brittain in Washington; Editing by Rod Nickel)

    Copyright 2025 Thomson Reuters.

  • Denmark eyes new law to protect citizens from AI deepfakes

    COPENHAGEN, Denmark — In 2021, Danish video game live-streamer Marie Watson received an image of herself from an unknown Instagram account.

    She instantly recognized the holiday snap from her Instagram account, but something was different: Her clothing had been digitally removed to make her appear naked. It was a deepfake.

    “It overwhelmed me so much,” Watson recalled. “I just started bursting out in tears, because suddenly, I was there naked.”

    In the four years since her experience, deepfakes — highly realistic artificial intelligence-generated images, videos or audio of real people or events — have become not only easier to make worldwide but also look or sound exponentially more realistic. That’s thanks to technological advances and the proliferation of generative AI tools, including video generation tools from OpenAI and Google.

    These tools give millions of users the ability to easily spit out content, including for nefarious purposes that range from depicting celebrities such as Taylor Swift and Katy Perry without their consent to disrupting elections and humiliating teens and women.

    In response, Denmark is seeking to protect ordinary Danes, as well as performers and artists who might have their appearance or voice imitated and shared without their permission. A bill that’s expected to pass early next year would change copyright law by imposing a ban on the sharing of deepfakes to protect citizens’ personal characteristics — such as their appearance or voice — from being imitated and shared online without their consent.

    If enacted, Danish citizens would get the copyright over their own likeness. In theory, they then would be able to demand that online platforms take down content shared without their permission. The law would still allow for parodies and satire, though it’s unclear how that will be determined.

    Experts and officials say the Danish legislation would be among the most extensive steps yet taken by a government to combat misinformation through deepfakes.

    Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI, said that he applauds the Danish government for recognizing that the law needs to change.

    “Because right now, when people say ‘what can I do to protect myself from being deepfaked?’ the answer I have to give most of the time is: ‘There isn’t a huge amount you can do,’” he said, “without me basically saying, ‘scrub yourself from the internet entirely.’ Which isn’t really possible.”

    He added: “We can’t just pretend that this is business as usual for how we think about those key parts of our identity and our dignity.”

    U.S. President Donald Trump signed bipartisan legislation in May that makes it illegal to knowingly publish or threaten to publish intimate images without a person’s consent, including deepfakes. Last year, South Korea rolled out measures to curb deepfake porn, including harsher punishment and stepped-up regulations for social media platforms.

    Danish Culture Minister Jakob Engel-Schmidt said that the bill has broad support from lawmakers in Copenhagen, because such digital manipulations can stir doubts about reality and spread misinformation.

    “If you’re able to deepfake a politician without her or him being able to have that product taken down, that will undermine our democracy,” he told reporters during an AI and copyright conference in September.

    The law would apply only in Denmark, and is unlikely to involve fines or imprisonment for social media users. But big tech platforms that fail to remove deepfakes could face severe fines, Engel-Schmidt said.

    Ajder said Google-owned YouTube, for example, has a “very, very good system for getting the balance between copyright protection and freedom of creativity.”

    The platform’s efforts suggest that it recognizes “the scale of the challenge that is already here and how much deeper it’s going to become,” he added.

    Twitch, TikTok and Meta, which owns Facebook and Instagram, didn’t respond to requests for comment.

    Engel-Schmidt said that Denmark, the current holder of the European Union’s rotating presidency, had received interest in its proposed legislation from several other EU members, including France and Ireland.

    Intellectual property lawyer Jakob Plesner Mathiasen said that the legislation shows the widespread need to combat the online danger that’s now infused into every aspect of Danish life.

    “I think it definitely goes to say that the ministry wouldn’t make this bill, if there hadn’t been any occasion for it,” he said. “We’re seeing it with fake news, with government elections. We are seeing it with pornography, and we’re also seeing it also with famous people and also everyday people — like you and me.”

    The Danish Rights Alliance, which protects the rights of creative industries on the internet, supports the bill, because its director says that current copyright law doesn’t go far enough.

    Danish voice actor David Bateson, for example, was at a loss when AI voice clones were shared by thousands of users online. Bateson voiced a character in the popular “Hitman” video game, as well as Danish toymaker Lego’s English advertisements.

    “When we reported this to the online platforms, they say ‘OK, but which regulation are you referring to?’” said Maria Fredenslund, an attorney and the alliance’s director. “We couldn’t point to an exact regulation in Denmark.”

    Watson had heard about fellow influencers who found digitally altered images of themselves online, but never thought it might happen to her.

    Delving into a dark side of the web where faceless users sell and share deepfake imagery — often of women — she said she was shocked how easy it was to create such pictures using readily available online tools.

    “You could literally just search ‘deepfake generator’ on Google or ‘how to make a deepfake,’ and all these websites and generators would pop up,” the 28-year-old Watson said.

    She is glad her government is taking action, but she isn’t hopeful. She believes more pressure must be applied to social media platforms.

    “It shouldn’t be a thing that you can upload these types of pictures,” she said. “When it’s online, you’re done. You can’t do anything, it’s out of your control.”

    ___

    Stefanie Dazio in Berlin, Kelvin Chan in London, and Barbara Ortutay in San Francisco contributed to this report.

  • Stability AI largely wins UK court battle against Getty Images over copyright and trademark

    LONDON (AP) — Artificial intelligence company Stability AI mostly prevailed against Getty Images Tuesday in a British court battle over intellectual property.

    Seattle-based Getty had accused Stability AI of infringing its copyright and trademark by scraping 12 million images from its website, without permission, to train its popular image generator, Stable Diffusion.

    The closely followed case at Britain’s High Court was among the first in a wave of lawsuits involving generative AI as movie studios, authors and artists challenged tech companies’ use of their works to train AI chatbots.

    Tech companies have long argued that “fair use” or “fair dealing” legal doctrines in the United States and United Kingdom allow them to train their AI systems on large troves of writings or images. Tuesday’s ruling provides some clarity but still leaves big unanswered questions over copyright and AI, experts said.

    According to the judge’s written ruling, Getty narrowly won its argument that Stability had infringed its trademark, but lost the rest of its case.

    Both sides claimed victory.

    “This is a significant win for intellectual property owners,” Getty Images said in a statement.

    Shares of Getty dipped 3% before the opening bell in the U.S.

    Stability, based in London, said it was pleased with the ruling.

    “This final ruling ultimately resolves the copyright concerns that were the core issue,” Stability’s General Counsel Christian Dowell said.

    Getty had accused Stability of both primary and secondary copyright infringement.

    Legal experts said the first one involves the act of reproducing something without permission — similar to a dodgy factory churning out counterfeit Chanel handbags or pirated CDs — while the second involves importing those copies from another country.

    In this case, Getty said Stability’s use of its image library to train and develop Stable Diffusion’s AI model amounted to breach of primary copyright. Stability responded that the case doesn’t belong in the United Kingdom because the AI model’s training technically happened elsewhere, on computers run by U.S. tech giant Amazon.

    During the three-week trial in June, Getty dropped its primary copyright allegations, in a sign that it didn’t think they would succeed. But it still pursued the secondary infringement claims. Even if Stability’s AI training happened outside the U.K., Getty said offering the Stable Diffusion service to British users amounted to importing unlawful copies of its images into the country.

    Justice Joanna Smith rejected Getty’s claims, ruling that Stable Diffusion’s AI didn’t infringe copyright because it doesn’t “store or reproduce any Copyright Works (and has never done so).”

    Getty also sued for trademark infringement because its watermark appeared on some of the images generated by Stable Diffusion.

    The judge sided with Getty but added that the case only partially succeeded, and that her findings are “both historic and extremely limited in scope.”

    “While I have found instances of trademark infringement, I have been unable to determine that these were widespread,” she said.

    Experts said Getty’s move to drop part of its copyright case means AI training is still in legal limbo.

    “The decision leaves the U.K. without a meaningful verdict on the lawfulness of an AI model’s process of learning from copyright materials,” said Iain Connor, an intellectual property partner at law firm Michelmores.

    Smith said there was “very real societal importance” in deciding how to strike a balance between the creative and tech industries. But she added that the court can only rule on the “diminished” case that remained and couldn’t consider “issues that have been abandoned.”

    A Getty spokeswoman declined to say whether there would be an appeal.

    Getty is also pursuing a copyright infringement lawsuit in the United States against Stability. It originally sued in 2023 but refiled the case in a San Francisco federal court in August.

    The Getty lawsuits are among a slew of cases that highlight how the generative AI boom is fueling a clash between tech companies and creative industries.

    AI companies are now fighting more than 50 copyright lawsuits — so many that a tech industry lobby group has called on President Donald Trump for help stopping the court fights, saying they threaten AI innovation.

    Among the cases, Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit by authors, while a federal judge dismissed a similar lawsuit from 13 authors against Meta Platforms. Warner Bros. has sued Midjourney for copyright infringement, as have Disney and Universal in separate lawsuits, alleging that its image generator produces unauthorized copies of copyrighted characters.

    ___

    AP Technology Writer Matt O’Brien contributed to this report.


  • Stability AI largely wins court battle against Getty Images over copyright, trademark

    LONDON — Artificial intelligence company Stability AI mostly prevailed against Getty Images Tuesday in a British court battle over intellectual property.

    Seattle-based Getty Images, which owns an extensive online library of images and video, had filed suit against Stability AI in a widely watched case that went to trial at Britain’s High Court in June.

    The case was among a wave of lawsuits filed by movie studios, authors and artists challenging tech companies’ use of their works to train AI chatbots.

    According to a judge’s ruling released Tuesday, Getty narrowly won its argument that Stability had infringed its trademark, but lost its claim for secondary infringement of copyright.

    Both sides claimed victory.

    “This is a significant win for intellectual property owners,” Getty Images said in a statement.

    Shares of Getty dipped 3% before the opening bell in the U.S.

    Stability said it was pleased with the ruling.

    “This final ruling ultimately resolves the copyright concerns that were the core issue,” Stability General Counsel Christian Dowell said.

    Getty argued that the development of Stability’s AI image maker, called Stable Diffusion, was a “brazen infringement” of its library of images “on a staggering scale.”

    While Getty accused Stability of infringing both its copyright and trademark, the company dropped its primary copyright allegations during the trial, indicating that it didn’t think its arguments would succeed.

    Getty also sued for trademark infringement because its watermark appeared on some of the images generated by Stable Diffusion.

    Justice Joanna Smith said in her ruling that Getty’s trademark claims “succeed (in part)” but that her findings are “both historic and extremely limited in scope.”

    Stability argued that the case doesn’t belong in the United Kingdom because the AI model’s training technically happened elsewhere, on computers run by U.S. tech giant Amazon. It also argued that “only a tiny proportion” of the random outputs of its AI image-generator “look at all similar” to Getty’s works.

    Tech companies have long argued that “fair use” or “fair dealing” legal doctrines in the United States and United Kingdom allow them to train their AI systems on large troves of writings or images.

    Getty also pursued a claim of “secondary infringement” of copyright, saying that even if Stability’s AI training happened outside the U.K., offering the Stable Diffusion service to British users amounted to importing unlawful copies of its images into the country.

    Smith dismissed Getty’s argument, saying that Stable Diffusion’s AI didn’t infringe copyright because it doesn’t “store or reproduce any Copyright Works (and has never done so).”

    Getty is also pursuing a copyright infringement lawsuit in the United States against Stability. It originally sued in 2023 but refiled the case in a San Francisco federal court in August.

    The Getty lawsuits are among a slew of cases that highlight how the generative AI boom is fueling a clash between tech companies and creative industries.

    Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its Claude chatbot.

    Separately, a federal judge dismissed a lawsuit from a group of 13 authors who made similar accusations against Facebook owner Meta Platforms in training its AI system Llama.

    Warner Bros. has sued Midjourney for copyright infringement, alleging that its image generator enables subscribers to create AI-generated images and videos of copyrighted characters like Superman and Bugs Bunny.

    Disney and Universal also sued Midjourney earlier in a separate, joint copyright lawsuit, alleging the San Francisco-based startup pirated their libraries to generate and distribute unauthorized copies of famed characters like Darth Vader and the Minions.

    ___

    AP Technology Writer Matt O’Brien contributed to this report.


  • AI song generator Udio offers brief window for downloads after Universal settlement upsets users

    Artificial intelligence song generation platform Udio said it would give its frustrated users 48 hours starting Monday to download their songs before the company shifts to a new business model to comply with a legal settlement.

    The short reprieve comes after Udio on Wednesday said it had settled copyright infringement claims brought by Universal Music, a label with artists including Taylor Swift, Olivia Rodrigo, Drake and Kendrick Lamar.

    AI companies are now fighting so many copyright lawsuits that a tech industry lobby group, the Chamber of Progress, last week called on President Donald Trump to sign an executive order directing federal attorneys “to intervene in legal cases” to defend the industry’s practice of building generative AI tools by feeding them on copyrighted works.

    Citing more than 50 pending federal cases, the group asked for help stopping court fights leading to “potentially company-killing penalties” that threaten AI innovation. But artists have warned that AI tools built on their works also threaten their livelihoods.

    In the biggest settlement so far, AI company Anthropic agreed to pay $1.5 billion — or $3,000 per book — to settle claims from authors who alleged the company illegally pirated nearly half a million of their works to train its chatbot.

    Udio and Universal didn’t disclose the financial terms of their new music licensing agreements. They also said they will team up on a new streaming platform.

    As part of the agreement, Udio immediately stopped allowing people to download songs they’ve created, which sparked a backlash and apparent exodus among paying users.

    “We know the pain it causes to you,” Udio later said in a post on Reddit’s Udio forum, where users were venting about feeling betrayed by the platform’s surprise move and complained that it limited what they could do with their music.

    Udio said it still must stop downloads as it transitions to a new streaming platform next year. But over the weekend, it said it will give people 48 hours starting at 11 a.m. Eastern time Monday to keep their “past creations.”

    “Udio is a small company operating in an incredibly complex and evolving space, and we believe that partnering directly with artists and songwriters is the way forward,” said Udio’s post.

    The settlement deal was the music industry’s first since Universal, along with Sony Music Entertainment and Warner Records, sued Udio and another AI song generator, Suno, last year over copyright infringement.

    Udio and Suno pioneered AI song generation technology, which can spit out new songs based on prompts typed into a chatbot-style text box. Users, who don’t need musical talent, can merely request a tune in the style of, for example, classic rock, 1980s synth-pop or West Coast rap.

    Record labels have accused the platforms of exploiting the recorded works of artists without compensating them.

    In its lawsuit filed against Udio last year, Universal sought to show how specific AI-generated songs made on Udio closely resembled Universal-owned classics like Frank Sinatra’s “My Way,” The Temptations’ “My Girl,” ABBA’s “Dancing Queen” and holiday favorites like “Rockin’ Around the Christmas Tree” and “Jingle Bell Rock.”

    A musician-led group, the Artist Rights Alliance, said Friday that the Universal-Udio settlement represents a positive step in creating a “legitimate AI marketplace” but raised questions about whether independent artists, session musicians and songwriters will be sufficiently protected from AI practices that present an “existential threat” to their careers.

    “Licensing is the only version of AI’s future that doesn’t result in the mass destruction of art and culture,” the group said. “But this promise must be available to all music creators, not just to major corporate copyright holders.”


  • Smucker sues Trader Joe’s, saying its new PB&J sandwiches are too similar to Uncrustables

    The J.M. Smucker Co. is suing Trader Joe’s, alleging the grocery chain’s new frozen peanut butter and jelly sandwiches are too similar to Smucker’s Uncrustables in their design and packaging.

    In the lawsuit, which was filed Monday in federal court in Ohio, Smucker said the round, crustless sandwiches Trader Joe’s sells have the same pie-like crimp markings on their edges that Uncrustables do. Smucker said the design violates its trademarks.

    Smucker also asserted that the boxes Trader Joe’s PB&J sandwiches come in violate the Orrville, Ohio-based company’s trademarks because they are the same blue color it uses for the lettering on “Uncrustables” packages.

    Trader Joe’s boxes also show a sandwich with a bite mark taken out of it, which is similar to the Uncrustables design, Smucker said.

    “Smucker does not take issue with others in the marketplace selling prepackaged, frozen, thaw-and-eat crustless sandwiches. But it cannot allow others to use Smucker’s valuable intellectual property to make such sales,” the company said in its lawsuit.

    Smucker is seeking restitution from Trader Joe’s. It also wants a judge to require Trader Joe’s to deliver all products and packaging to Smucker to be destroyed.

    A message seeking comment was left Wednesday with Trader Joe’s, which is based in Monrovia, California.

    Michael Kelber, chair of the intellectual property group at Neal Gerber Eisenberg, a Chicago law firm, said Smucker’s registered trademarks will help bolster its argument. But Trader Joe’s might argue that the crimping on its sandwiches is simply functional and not something that can be trademarked, Kelber said.

    Trader Joe’s sandwiches also appear to be slightly more square than Uncrustables, so the company could argue that the shape isn’t the same, Kelber said.

    Uncrustables were invented by two friends who began producing them in 1996 in Fergus Falls, Minnesota. Smucker bought their company in 1998 and secured patents for a “sealed, crustless sandwich” in 1999.

    But it wasn’t easy to mass produce them. In the lawsuit, Smucker said it has spent more than $1 billion developing the Uncrustables brand over the last 20 years. Smucker spent years trying to perfect Uncrustables’ stretchy bread and developing new filling flavors like chocolate and hazelnut.

    Kelber said one of the biggest issues companies debate in cases like this one is whether the copycat product deceives consumers.

    Smucker claims that’s already happening with Trader Joe’s sandwiches. In the lawsuit, Smucker showed a social media photo of a person claiming that Trader Joe’s is contracting with Smucker to make the sandwiches under its own private label.

    This isn’t the first time Smucker has taken legal action to protect its Uncrustables brand. In 2022, it sent a cease and desist letter to a Minnesota company called Gallant Tiger, which was making upscale versions of crustless peanut butter and jelly sandwiches with crimped edges. Smucker said Wednesday that it hasn’t taken further action but continues to monitor Gallant Tiger.

    Smucker likely felt it had no choice but to sue this time around, Kelber said.

    “For the brand owner, what is the point of having this brand if I’m not going to enforce it?” Kelber said. “If they ignore Trader Joe’s, they are feeding that, and then the next person who does it they won’t have an argument.”

    Kelber said trademark cases often wind up being settled because neither company wants to go through an expensive trial.

    Smucker’s lawsuit comes a few months after a similar lawsuit filed against Aldi by Mondelez International, which claimed that Aldi’s store-brand cookies and crackers have packaging that is too similar to Mondelez brands like Chips Ahoy, Wheat Thins and Oreos.


  • French Chaos Delays Meeting on Future of European Fighter Jet

    BERLIN (Reuters) - A trilateral ministerial meeting on the future of France, Germany and Spain’s 100-billion-euro project to develop a European fighter jet has been postponed due to the political crisis in France, a German defence ministry spokesperson told Reuters.

    The defence ministers of the three countries had been scheduled to meet mid-October in a bid to resolve obstacles blocking the next phase in the development of the project, known as FCAS, the spokesperson said on Thursday evening.

    But France has been left with just a caretaker government after outgoing Prime Minister Sebastien Lecornu tendered his and his government’s resignation on Monday, hours after announcing the cabinet line-up. French President Emmanuel Macron is now searching for his sixth prime minister in under two years.

    “I confirm that the meeting is not taking place mid-October any more,” the spokesperson said. “We would like to schedule it as quickly as possible when there is a new French defense minister.”

    Macron’s office had no immediate comment.

    France’s Dassault Aviation, Airbus and Indra are involved in the scheme to start replacing French Rafale and German and Spanish Eurofighters with a sixth-generation fighter jet from 2040.

    But the project has been plagued by delays and rifts between the companies and governments over workshare and intellectual property rights.

    (Reporting by Andreas Rinke and Sabine Siebold in Berlin; Additional Reporting by Michel Rose in Paris and Aislinn Laing in Madrid; Writing by Sarah Marsh; Editing by Chris Reese and Deepa Babington)

    Copyright 2025 Thomson Reuters.


  • Judge approves $1.5 billion copyright settlement between AI company Anthropic and authors

    SAN FRANCISCO — A federal judge on Thursday approved a $1.5 billion settlement between artificial intelligence company Anthropic and authors who allege nearly half a million books had been illegally pirated to train chatbots.

    U.S. District Judge William Alsup issued the preliminary approval in San Francisco federal court Thursday after the two sides worked to address his concerns about the settlement, which will pay authors and publishers about $3,000 for each of the books covered by the agreement. It does not apply to future works.

    “This is a fair settlement,” Alsup said, though he added that distributing it to all parties will be “complicated.” About 465,000 books are on the list of works pirated by Anthropic, according to Justin Nelson, an attorney for the authors.

    “We have some of the best lawyers in America in this courtroom and if anyone can do it, you can,” Alsup said.

    The Association of American Publishers called the settlement a “major step in the right direction in holding AI developers accountable for reckless and unabashed infringement.”

    “Anthropic is hardly a special case when it comes to infringement. Every other major AI developer has trained their models on the backs of authors and publishers, and many have sourced those works from the most notorious infringing sites in the world,” said Maria A. Pallante, president and CEO of the publisher group.

    San Francisco-based Anthropic said it is pleased with the preliminary approval.

    “The decision will allow us to focus on developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems. As we’ve consistently maintained, the court’s landmark June ruling that AI training constitutes transformative fair use remains intact. This settlement simply resolves narrow claims about how certain materials were obtained,” said Aparna Sridhar, deputy general counsel at Anthropic.

    The Authors Guild, meanwhile, said the settlement “marks a milestone in authors’ fights against AI companies’ theft of their works. It sends a clear signal to AI companies that infringement of authors’ rights comes at a steep price and will undoubtedly push AI companies towards acquiring the books they want legally, through licensing.”

    A Monday filing sought to convince the judge that the parties have set up a system designed to provide robust notice to all authors and publishers covered by the agreement, ensuring they get their cut of the pot if they want to sign off on the settlement or opt out to protect their legal rights moving forward.

    They also tried to assure him that the author and publisher groups that cobbled the deal together are not doing any “back room” dealings that would hurt lesser-known authors.

    Alsup’s main concern centered on how the claims process will be handled in an effort to ensure everyone eligible knows about it so the authors don’t “get the shaft.” He had set a September 22 deadline for submitting a claims form for him to review before Thursday’s hearing to review the settlement again.

    The judge had raised worries about two big groups connected to the case — the Authors Guild and the Association of American Publishers — working “behind the scenes” in ways that could pressure some authors to accept the settlement without fully understanding it.

    Attorneys for the authors said in Monday’s filing they believe the settlement will result in a high claims rate, respects existing contracts and is “consistent with due process” and the court’s guidance.

    Alsup had dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites to help improve its Claude chatbot.

    Bestselling thriller novelist Andrea Bartz, who sued Anthropic with two other authors last year, said in a court declaration ahead of the hearing that she strongly supports the settlement and will work to explain its significance to fellow writers.

    “Together, authors and publishers are sending a message to AI companies: You are not above the law, and our intellectual property isn’t yours for the taking,” she wrote.

    Alsup also said in the courtroom Thursday that he plans to step down from the bench by the end of the year. President Bill Clinton nominated him for the federal bench in 1999.

    AP Technology Writer Matt O’Brien contributed to this story from Providence, Rhode Island.


  • Appeals court rules Trump doesn’t have the authority to fire Copyright Office director

    WASHINGTON — A divided appeals court ruled Wednesday that President Donald Trump doesn’t have the authority to unilaterally remove and replace the director of the U.S. Copyright Office.

    A three-judge panel from the U.S. Court of Appeals for the District of Columbia Circuit voted 2-1 to temporarily block Trump’s Republican administration from firing Shira Perlmutter as the register of copyrights, who advises Congress on copyright issues.

    Perlmutter claims Trump fired her in May because he disapproved of advice she gave to Congress in a report related to artificial intelligence. Perlmutter had received an email from the White House notifying her that “your position as the Register of Copyrights and Director at the U.S. Copyright Office is terminated effective immediately,” her office said.

    Circuit Judges Florence Pan and J. Michelle Childs concluded that Perlmutter’s purported firing was likely illegal.

    “The Executive’s alleged blatant interference with the work of a Legislative Branch official, as she performs statutorily authorized duties to advise Congress, strikes us as a violation of the separation of powers that is significantly different in kind and in degree from the cases that have come before,” Pan wrote in the majority opinion.

    Perlmutter’s position is considered part of the legislative branch of government. Her office is housed within the Library of Congress. Its director is chosen by the librarian of Congress, who is also a legislative branch employee but is nominated by the president and is subject to Senate confirmation.

    U.S. District Judge Timothy Kelly, a Trump nominee, ruled in May that Perlmutter failed to meet her legal burden to show how removing her from the position would cause her irreparable harm.

    Pan and Childs, who were nominated by President Joe Biden, a Democrat, concluded that Kelly abused his discretion and failed to weigh other factors favoring Perlmutter’s request for a preliminary injunction.

    “The President’s purported removal of the Legislative Branch’s chief advisor on copyright matters, based on the advice that she provided to Congress, is akin to the President trying to fire a federal judge’s law clerk,” Pan wrote.

    Judge Justin Walker, a Trump nominee, wrote a dissenting opinion in which he said the register of copyrights “exercises executive power in a host of ways.”

    “Recently, repeatedly, and unequivocally, the Supreme Court has stayed lower-court injunctions that barred the President from removing officers exercising executive power,” Walker wrote.

    Pan said it appears Perlmutter is still serving as register despite her purported removal.

    “And because she continues to serve as Register at the present time, ruling in her favor would not disrupt the work of the U.S. Copyright Office,” Pan wrote. “To the contrary, it is her removal that would be disruptive.”

    Perlmutter’s attorneys say she is a renowned copyright expert who also has served as register of copyrights since then-Librarian of Congress Carla Hayden appointed her to the job in October 2020.

    Trump appointed Deputy Attorney General Todd Blanche to replace Hayden at the Library of Congress. The White House fired Hayden amid criticism from conservatives that she was advancing a “woke” agenda.

    The appeals court’s ruling says Blanche’s appointment to serve as acting librarian of Congress was likely unlawful, as well, because the position is subject to Senate confirmation.


  • Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors

    Anthropic has reached a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what could have been a financially devastating outcome in court.

    The settlement agreement is expected to be finalized September 3, with more details to follow, according to a legal filing published on Tuesday. Lawyers for the plaintiffs did not immediately respond to requests for comment. Anthropic declined to comment.

    In 2024, three book writers, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, sued Anthropic, alleging that the startup illegally used their work to train its artificial intelligence models. In June, California district court judge William Alsup issued a summary judgment in Bartz v. Anthropic that largely sided with Anthropic, finding that the company’s usage of the books was “fair use” and thus legal.

    But the judge ruled that the manner in which Anthropic had acquired some of the works, by downloading them through so-called shadow libraries, including a notorious site called LibGen, constituted piracy. Alsup ruled that the book authors could still take Anthropic to trial in a class action for pirating their works; the legal showdown was slated to begin in December.

    Statutory damages for this kind of piracy start at $750 per infringed work, according to US copyright law. Because the library of books amassed by Anthropic was thought to contain approximately 7 million works, the AI company was potentially facing court-imposed penalties amounting to billions of dollars, possibly more than $1 trillion.

    “It’s a stunning turn of events, given how Anthropic was fighting tooth and nail in two courts in this case. And the company recently hired a new trial team,” says Edward Lee, a law professor at Santa Clara University who closely follows AI copyright litigation. “But they had few defenses at trial, given how Judge Alsup ruled. So Anthropic was staring at the risk of statutory damages in ‘doomsday’ amounts.”

    Most authors who may have been part of the class action were just starting to receive notice that they qualified to participate. The Authors Guild, a trade group representing professional writers, sent out a notice alerting authors that they might be eligible earlier this month, and lawyers for the plaintiffs were scheduled to submit a “list of affected works” to the court on September 1. This means that many of these writers were not privy to the negotiations that took place.

    “The big question is whether there is a significant revolt from within the author class after the settlement terms are unveiled,” says James Grimmelmann, a professor of digital and internet law at Cornell University. “That will be a very important barometer of where copyright owner sentiment stands.”

    Anthropic is still facing a number of other copyright-related legal challenges. One of the most high-profile disputes involves a group of major record labels, including Universal Music Group, which allege that the company illegally trained its AI programs on copyrighted lyrics. The plaintiffs recently filed to amend their case to allege that Anthropic had used the peer-to-peer file sharing service BitTorrent to download songs illegally.

    Settlements don’t set legal precedent, but the details of this case will likely be watched closely as dozens of other high-profile AI copyright cases continue to wind through the courts.

    Kate Knibbs

  • YouTube to begin testing a new AI-powered age verification system in the U.S.

    YouTube on Wednesday will begin testing a new age-verification system in the U.S. that relies on artificial intelligence to differentiate between adults and minors, based on the kinds of videos that they have been watching.

    The tests initially will only affect a sliver of YouTube’s audience in the U.S., but it will likely become more pervasive if the system works as well at guessing viewers’ ages as it does in other parts of the world. The system will only work when viewers are logged into their accounts, and it will make its age assessments regardless of the birth date a user might have entered upon signing up.

    If the system flags a logged-in viewer as being under 18, YouTube will impose the controls and restrictions the site already uses to prevent minors from watching videos or engaging in other behavior deemed inappropriate for their age.

    The safeguards include reminders to take a break from the screen, privacy warnings and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn’t show ads tailored to individual tastes if a viewer is under 18.

    If the system incorrectly flags a viewer as a minor, the mistake can be corrected by showing YouTube a government-issued identification card, a credit card or a selfie.

    “YouTube was one of the first platforms to offer experiences designed specifically for young people, and we’re proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,” James Beser, the video service’s director of product management, wrote in a blog post about the age-verification system.

    People will still be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content unless the viewer provides proof of age.

    Political pressure on websites to do a better job of verifying ages and shielding children from inappropriate content has been building since late June, when the U.S. Supreme Court upheld a Texas law aimed at preventing minors from watching pornography online.

    While some services, such as YouTube, have been stepping up their efforts to verify users’ ages, others have contended that the responsibility should primarily fall upon the two main smartphone app stores run by Apple and Google — a position that those two technology powerhouses have resisted.

    Some digital rights groups, such as the Electronic Frontier Foundation and the Center for Democracy & Technology, have raised concerns that age verification could infringe on personal privacy and violate First Amendment protections on free speech.

  • A Lawsuit Against Perplexity Calls Out Fake News Hallucinations

    Perplexity did not respond to requests for comment.

    In a statement emailed to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realize the potential of Artificial Intelligence,” the statement says. “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

    OpenAI is facing its own accusations of trademark dilution, though. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed that the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

    “Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI,” says NYT director of external communications Charlie Stadtlander. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this kind of misappropriation.”

    If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “immense difficulties” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

    “It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag says. In his view, the way language models operate by predicting words that sound correct in response to prompts is always a type of hallucination—sometimes it’s just more plausible-sounding than others.

    “We only call it a hallucination if it doesn’t match up with our reality, but the process is exactly the same whether we like the output or not.”

    Kate Knibbs

  • The world’s second Sphere will be built in the UAE capital after the first opened in Las Vegas

    DUBAI, United Arab Emirates (AP) — The world’s second Sphere is set to be built in the capital of the United Arab Emirates, following the opening of the first giant dome entertainment complex in Las Vegas.

    Abu Dhabi’s Department of Culture and Tourism and Sphere Entertainment Co. announced the plan late Tuesday to bring a Sphere to the Middle East.

    Under the deal, Abu Dhabi will pay a franchise fee to Sphere Entertainment to build the second location using its designs. Abu Dhabi’s government will pay to build the structure, as well as annual fees to Sphere Entertainment “for creative and artistic content.”

    The announcement offered no financing information, nor did it say where the Sphere would be built in the Emirati capital. Abu Dhabi’s government did not immediately respond to questions about the project Wednesday. Sphere declined to comment beyond the initial announcement.

    The massive $2.3 billion Las Vegas Sphere opened in 2023 as the gambling capital’s most expensive entertainment venue. A high-resolution LED screen wraps halfway around its 17,500-seat audience. The venue, the world’s largest spherical structure at 366 feet (111 meters) tall and 516 feet (157 meters) wide, has hosted concerts and sporting events.

    However, efforts to build a second Sphere abroad have been choppy. London Mayor Sadiq Khan rejected a plan to build one in the city’s east over multiple concerns last year, including light pollution.

    Abu Dhabi has been trying to differentiate itself as a travel destination from neighboring Dubai in the UAE, an energy-rich federation of seven sheikhdoms on the Arabian Peninsula.

    The UAE is also preparing to open the first casino in the country. While the only one currently under construction is in the emirate of Ras al-Khaimah, other sheikhdoms in the country are believed to be actively considering having their own.

    Sphere is the brainchild of James Dolan, the executive chair of Madison Square Garden and the owner of the New York Knicks and Rangers.

    Stock in Sphere closed more than 6% higher Tuesday on the New York Stock Exchange at $48.91 a share. That’s a major boost after Benchmark last month downgraded Sphere Entertainment to “sell,” citing concerns over the Sphere’s “scalability, high production costs and a potentially underwhelming profitability outlook.”

    Meanwhile, some projects in the UAE have been delayed for years or never built at all after being announced, particularly during economic downturns.

    Sphere Entertainment Co. has also made trademark filings in Japan, Oman and Qatar, though there are no announced plans for similar venues in those countries. Companies often protectively trademark their names in other markets without necessarily having business there.

  • An appeals court upholds a ruling that an online archive’s book sharing violated copyright law

    NEW YORK (AP) — An appeals court has upheld an earlier finding that the online Internet Archive violated copyright law by scanning and sharing digital books without the publishers’ permission.

    Four major publishers — Hachette Book Group, HarperCollins Publishers, John Wiley & Sons and Penguin Random House — had sued the Archive in 2020, alleging that it had illegally offered free copies of more than 100 books, including fiction by Toni Morrison and J.D. Salinger. The Archive had countered that it was protected by fair use law.

    In 2023, a judge for the U.S. District Court in Manhattan decided in the publishers’ favor and granted them a permanent injunction. On Wednesday, the U.S. Court of Appeals for the Second Circuit concurred, asking the question: Was the Internet Archive’s lending program, a “National Emergency Library” launched early in the pandemic, an example of fair use?

    “Applying the relevant provisions of the Copyright Act as well as binding Supreme Court and Second Circuit precedent, we conclude the answer is no,” the appeals court ruled.

    In a statement Wednesday, the president and CEO of the Association of American Publishers, Maria Pallante, called the decision a victory for the publishing community.

    “Today’s appellate decision upholds the rights of authors and publishers to license and be compensated for their books and other creative works and reminds us in no uncertain terms that infringement is both costly and antithetical to the public interest,” Pallante said.

    The Archive’s director of library services, Chris Freeland, called the ruling a disappointment.

    “We are reviewing the court’s opinion and will continue to defend the rights of libraries to own, lend, and preserve books,” he said in a statement.
