ReportWire

Tag: copyright

  • These notable works are officially in the public domain as 2026 arrives

New Year’s Day commemorates the passing of time and the start of a new chapter, so it is fitting that the same day also presents an opportunity to breathe new life into thousands of creative works nearly a century old. As of Jan. 1, 2026, characters like early Betty Boop and Nancy Drew, and a variety of popular movies, books and songs, have entered the public domain.

    They join a growing list of cultural icons that are no longer under copyright protection, including Popeye the Sailor Man and the “Steamboat Willie” version of Mickey Mouse.

    List of popular intellectual property entering the public domain in 2026

    The year 2026 marks the first time that copyrighted books, films, songs and art published in the ’30s enter the U.S. public domain. As of Jan. 1, protections have expired for published works from 1930 and sound recordings from 1925.

    Here are some of the most notable works that are now available for free use by anyone:

    • “The Murder at the Vicarage” by Agatha Christie, the first novel featuring elderly amateur detective Miss Marple.
    • “The Secret of the Old Clock” by Carolyn Keene, the first appearance of teen detective Nancy Drew, and three follow-ups.
    • “The Little Engine That Could” by Watty Piper.
    • Fleischer Studios’ “Dizzy Dishes,” the first cartoon in which Betty Boop appears.
    • Disney’s “The Chain Gang” and “The Picnic,” both depicting the earliest versions of Mickey’s dog Pluto.
    • The initial four months of “Blondie” comic strips by Chic Young, featuring the earliest iterations of the titular character and her then-boyfriend, Dagwood.
    • The film “All Quiet on the Western Front,” directed by Lewis Milestone, Best Picture winner at the 3rd Academy Awards.
    • “King of Jazz,” directed by John Murray Anderson, Bing Crosby’s first appearance in a feature film.
    • “Animal Crackers,” directed by Victor Heerman and starring the Marx Brothers.
    • “The Big Trail,” directed by Raoul Walsh, John Wayne’s first turn as leading man.
    • “But Not For Me,” music by George Gershwin, lyrics by Ira Gershwin.
    • “Georgia on My Mind,” music by Hoagy Carmichael, lyrics by Stuart Gorrell.
    • “Dream a Little Dream of Me,” music by Fabian Andre and Wilbur Schwandt, lyrics by Gus Kahn.
    • “Livin’ in the Sunlight, Lovin’ in the Moonlight,” music by Al Sherman, lyrics by Al Lewis.
    • Piet Mondrian’s painting, “Composition with Red, Blue, and Yellow.”

    The original Betty Boop, early Nancy Drew mysteries, and Mickey Mouse’s dog Pluto are among the creative works entering the public domain on Jan. 1, 2026.

    How the public domain works

    When a work’s copyright protections lapse, it lands in the public domain, allowing anyone to use and build upon it as they see fit for free and without needing permission.

    “Copyright gives rights to creators and their descendants that provide incentives to create,” Jennifer Jenkins, director of Duke University’s Center for the Study of the Public Domain, told CBS News’ Lee Cowan in 2024. “But the public domain really is the soil for future creativity.”

The U.S. Constitution’s intellectual property clause establishes that works can be protected for only a limited time, “to promote the progress of science and useful arts.” The Founding Fathers left it to Congress to sort out the specifics.

Generally, in the U.S., works published or registered before 1978 retain copyright protections for 95 years. For later works, protection usually spans the creator’s lifetime plus 70 years.
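    For a rough sense of the arithmetic behind these dates, here is a minimal Python sketch of the two rules as stated above. The function and its name are illustrative assumptions rather than anything taken from copyright law or this article, and it ignores the many real-world exceptions (renewal rules, works made for hire, the separate terms for pre-1972 sound recordings, and so on).

        # Illustrative sketch only: a simplified model of the two terms described above.
        def public_domain_year(publication_year, author_death_year=None):
            """Return the Jan. 1 year a work would enter the U.S. public domain
            under the simplified rules described in this article."""
            if publication_year < 1978:
                # Pre-1978 works: 95 years of protection, running through the end
                # of the 95th year, so the work is free on the following Jan. 1.
                return publication_year + 95 + 1
            # Later works: the creator's lifetime plus 70 years.
            if author_death_year is None:
                raise ValueError("author_death_year is needed for works from 1978 on")
            return author_death_year + 70 + 1

        # Works published in 1930 entered the public domain on Jan. 1, 2026.
        assert public_domain_year(1930) == 2026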

    “If copyright lasted forever, it would be very difficult for a lot of creators to make the works they want to make without worrying about being in the crosshairs of a copyright lawsuit,” Jenkins said.

Even when a work’s copyright has expired, members of the public can still be held legally liable in some instances. For example, while the original Betty Boop from 1930 is in the public domain, the modern version is not, so any reuse would need to steer clear of her newer characteristics to avoid infringement. Additionally, the character is covered by multiple trademarks, which further complicates its use.

    What’s entering the public domain in 2027?

    Copyrighted works from 1931 will see their protections expire in 2027. This includes Universal Pictures’ “Frankenstein” and “Dracula” films, Charlie Chaplin’s “City Lights,” Fritz Lang’s “M,” Herman Hupfeld’s jazz standard “As Time Goes By” and more.

  • The TRUMP AMERICA AI Act is every bit as bad as you would expect. Maybe worse.

    Sometimes you can tell a bill will be really bad just from its title. So it goes with The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act, from Sen. Marsha Blackburn (R–Tenn.). And, boy, does it deliver on that disaster of a name, managing to combine nearly every bad tech policy idea of the past half-decade—including gutting Section 230 and creating new requirements around the suppression of sexuality online—into one massive piece of Trump-branded legislation.

The bill’s title alone is asinine, even if we set aside the North Korea-meets-word-salad nature of it. Following the normal rules for forming acronyms, it would be the TRUMP AMIERICA (or perhaps AMIBERICA) AI Act, though Blackburn is throwing the rules to the wind and referring to it as the TRUMP AMERICA AI Act.

    If only the problems stopped there!

    Alas, Blackburn is serving up a cornucopia of proposals that could throttle free speech and free markets online. An anti-tech omnibus, if you will, sold as a simple AI regulatory scheme.

Techdirt’s Mike Masnick calls it a “massively destructive internet policy overhaul masquerading as AI legislation.” It “would change nearly every US government policy regarding how the internet works, tackling AI, Section 230, copyright, and a bunch of other nonsense all in one bill.”

    Masnick has a nice rundown of the bill’s myriad flaws, which include instituting a “duty of care” for AI developers to “prevent and mitigate foreseeable harm to users” (per Blackburn’s summary of the bill). This duty would be enforced by the Federal Trade Commission (FTC).

    “This is one of those things that I’m sure sounds good to folks, but as we’ve explained over and over again this kind of ‘duty of care’ is basically an anti-230 that would do real damage,” writes Masnick.

    It’s basically just an invitation for lawyers to sue any time anything bad happens and someone involved in the bad thing that happened somehow used an AI tool at some point.

    And then you have to go through a big expensive legal process to explain “no, this thing was not because of AI” or whatever. It’s just a massive invitation to sue everyone, meaning that in the end you have just a few giant companies providing AI because they’ll be the only ones who can afford the lawsuits.

    And just in case that didn’t allow for enough ways to attack AI companies, another section of the bill would enable “the U.S. Attorney General, state attorneys general, and private actors to file suit to hold AI system developers liable for harms caused by the AI system for defective design, failure to warn, express warranty, and unreasonably dangerous or defective product claims.”

    Blackburn—who was once a proponent of light-touch regulation when it came to the internet—has also worked elements of the Kids Online Safety Act (KOSA) into the TRUMP AMERICA AI Act.

It will require certain social media platforms, video games, streaming services, and messaging applications “to implement tools and safeguards to protect users and visitors under the age of 17 to protect children from sex trafficking, suicide, and other abuses,” per Blackburn’s summary. As with KOSA, this requirement is promoted in a way that sounds unobjectionable—admirable, even—but would, in effect, require companies to suppress massive amounts of content, weaken privacy protections, and more.

    “This section generally requires covered platforms to exercise reasonable care in the design and use of features that increase minors’ online activity to prevent and mitigate harm to minors (e.g., mental health disorders and severe harassment),” the summary says.

    Enterprising lawyers can easily argue that all sorts of things contribute to mental health issues in their young clients, enabling lawsuits over generally unobjectionable (or, at the very least, totally legal) speech and neutral platform features. The biggest tech companies may be able to fight these, but all but the behemoths would be forced to preemptively ban a bunch of speech in order to avoid potential lawsuits.

    Section 11 of Blackburn’s bill is promoted as combating “the consistent pattern of bias against conservative figures demonstrated by Big Tech and AI systems.” But, in practice, it could require AI systems to have a pro-conservative slant—at least as long as President Donald Trump or other Republicans are in power.

    The bill would set up “audits of high-risk AI systems to undergo regular bias evaluations to prevent discrimination based on protected characteristics, including political affiliation.”

Presumably, federal agencies would be tasked with conducting these audits, which could leave it up to political appointees—not exactly a notoriously unbiased bunch—to judge what does and doesn’t count as bias against a particular political group. How long before AI developers have to tailor their systems to spit out politically favorable results?

    The effect of this section could be somewhat blunted by the fact that it only applies to “high-risk” systems, which Blackburn’s summary describes as “those that could pose significant risks to health, safety, rights, or economic security, including those in education, employment, law enforcement, or critical infrastructure.” But without a more precise definition, it’s hard to say how this would shake out or what it would mean for the sorts of general AI systems used by consumers.

    During the heyday of federal antitrust hearings about Big Tech, the idea of ending “self-preferencing” got a lot of play. Self-preferencing refers to tech companies using their services to promote or favor their other services, and for some reason, lawmakers are convinced that it’s a scourge.

    But self-preferencing comes with a lot of perks for tech users, not just for the companies involved. It means that when you Google a particular place or business, Google will automatically place a map of this location near the top of the search results. It means that Amazon will perhaps show you more products eligible for free shipping with a Prime membership—something Prime members want!—than products where shipping costs extra. And so on.

The TRUMP AMERICA AI Act would stop “systemically important platforms”—defined as including, but perhaps not limited to, “platforms with subscribers or monthly active users in the United States not less than 34% of the population of the United States”—from engaging in “self-preferencing or steering users to products or services offered by the platform operator,” per Blackburn’s summary.

    In effect, it would make Big Tech less user-friendly in the name of protecting us from Big Tech.

    A line tucked near the bottom of Blackburn’s summary says that the bill would prevent “systemically important platforms from disseminating sexual material harmful to minors.”

    It’s cloaked in euphemistic language: “sexual material harmful to minors” sure sounds like something very bad, like it might be referring to child pornography or other forms of illegal imagery.

    But we’ve seen, in myriad state laws targeting material harmful to minors, that this term can be used very broadly, encompassing not just any and all pornographic photos and videos but also written erotica, literature that describes sexual relationships, stories centered on gay and transgender characters, and so on.

    A requirement that big tech platforms ban “sexual material harmful to minors” would almost certainly mean that they must filter out anything that could be considered porn and perhaps much more.

    One of the most worrying bits of the bill concerns Section 230 of the Communications Decency Act. Blackburn’s bill would “establish a ‘Bad Samaritan’ carve-out that would deny immunity from civil liability to platforms that purposefully facilitate or solicit third-party content that violates federal criminal law.”

    Of course, Section 230 is already inapplicable to violations of federal criminal law. A company can’t break federal law and claim that Section 230 lets them do it.

    So what’s the true aim here? I think Masnick frames the issue pretty well:

    Right now, 230 lets platforms get frivolous lawsuits dismissed quickly at the motion to dismiss stage. This change would force every platform to go through lengthy, expensive litigation to prove they weren’t “facilitating” (an incredibly vague term) or “soliciting” third-party content that violates federal criminal law.

    That’s gutting the main reason Section 230 exists. Instead of quick dismissals, you get discovery, depositions, and trials, all while someone argues that because your algorithm showed someone a post, you were “facilitating” whatever criminal content they claim to find.

    Slippery words like “facilitate” and “solicit” give authorities a lot of leeway to punish tech companies for activities we generally think of as non-criminal, free-market, or speech-facilitating activities.

    The bill would put into policy Trump’s desire to ban states from passing their own AI regulation. Earlier this month, the president issued an executive order seeking to stop states from passing certain sorts of AI regulation so the country could have, instead, a “national framework”—though the order can’t actually create said framework or outright ban states from passing their own laws. Congress can, however. And Blackburn’s bill would preempt state AI laws in several arenas.

    Blackburn’s summary also lists a huge array of other changes the TRUMP AMERICA AI Act would enact. Some of these summaries are relatively vague—for instance, Section 8 is merely described as “establish[ing] requirements for companies providing AI chatbot and companion services to protect kids.”

    One section would require “interoperability for systemically important platforms, which include platforms with subscribers or monthly active users in the United States not less than 34% of the population of the United States.” Interoperability is one of those ideas that may sound nice in theory but presents huge technical challenges and security risks.

    Several sections seem designed to upend copyright laws, by ignoring concepts like fair use, satire, and parody. There’s a bit that would create “a federal right for individuals to sue companies for using their data (personal, copyrighted) for AI training without explicit consent” and another that would “hold individuals or companies liable if they produce an unauthorized digital replica of an individual in a performance.” Yet another section would deem “derivative works generated, synthesized, or produced by an AI system without authorization as infringing works, which would be ineligible for copyright protection.”

The bill hasn’t even been formally introduced yet, let alone attracted official cosponsors, so it’s hard to say how Blackburn’s colleagues will treat it. But it seems clear that the measure’s title has been calculated to attract Trump’s endorsement, which could translate to a lot of Republican lawmakers falling in line, too.

    Blackburn’s announcement of the TRUMP AMERICA AI Act is also steeped in MAGA flattery and rhetoric. The bill would “codify President Trump’s executive order to create one rulebook for artificial intelligence,” it says.

    “I look forward to introducing the TRUMP AMERICA AI Act in the new year to create one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance,” said Blackburn.


    Patient “states he has a foreign body in his rectum that is vibrating. He states he was with a girl last night and doesn’t remember much.” Using data from the U.S. Consumer Product Safety Commission’s emergency room visits database, Defector has compiled a list of things people got stuck in their rectums and genitals in 2025.

    New York passes an immunity bill. The bill “provides immunity from prosecution for certain individuals engaged in prostitution who are victims of or witnesses to a crime and who report such crime or assist in the investigation or prosecution,” per the legislative summary. “This law recognizes that safety must be prioritized over punishment,” said Decriminalize Sex Work Legal Director Melissa Broudo. “It is a vital and common sense public safety measure that strengthens law enforcement’s ability to identify, investigate, and convict perpetrators of violence and trafficking.”

    Did China just ban sexting? “The Chinese government has banned the sharing of ‘obscene’ content in private online messages and increased the penalties for spreading pornographic material,” reports The Washington Post. “While the revision will target the dissemination of pornography and exploitative images,” the new regulation “may also mean that consensual sexting could also be dragged into China’s legal system.”

Lol: The URLs trumpkennedycenter.org and trumpkennedycenter.com are owned by comedy writer Toby Morton, who predicted the renaming of the D.C. performing arts institution (it will become “The Donald J. Trump and The John F. Kennedy Memorial Center for the Performing Arts”) and snapped up the web domains in advance.


    Washington, D.C. | 2017 (ENB/Reason)

    Elizabeth Nolan Brown

  • Disney is investing $1 billion in OpenAI and licensing its characters for Sora

(CNN) — Disney is taking a $1 billion equity stake in OpenAI, while also striking a deal that would allow its famous characters to be used on Sora, the AI company’s video generation platform.

The agreement is the first major licensing deal of its kind for Sora.

Under the agreement, users of Sora, OpenAI’s short-form video-generating social media network, will be allowed to make videos using more than 200 Disney animated characters. Those characters include Mickey and Minnie Mouse; Disney Princesses like Ariel, Belle, and Cinderella; and characters from Frozen, Moana, and Toy Story. Animated characters from Marvel and Lucasfilm, including Black Panther and Star Wars characters like Yoda, are included as well – although the agreement does not include any talent likenesses or voices.

    Users of OpenAI’s popular chatbot ChatGPT will also be able to ask the bot to create images using the Disney characters.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Disney CEO Robert A. Iger said in a statement.

    OpenAI, which has come under scrutiny for copyright violations – and also for striking massive ‘circular’ deals leading to fears of an AI bubble – said the deal shows how the creative community and AI can get along.

    “Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content,” said Sam Altman, co-founder and CEO of OpenAI. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”

    Shortly after the announcement, Iger and Altman both sat down with CNBC’s David Faber, during which the Disney boss stressed that the deal “does not, in any way, represent a threat to the creators.”

    “In fact, the opposite, I think it honors them and respects them, in part because there’s a license fee associated with it,” Iger said, later adding that the goal is to “continue to honor, respect, value the creative community in general.”

    Iger also stressed that the deal allows Disney to “be comfortable that OpenAI is putting guardrails essentially around how these are used,” adding that, “really, there’s nothing for us to be concerned about from a consumer perspective.” Altman, too, stressed the presence of guardrails, telling Faber that “it’s very important that we enable Disney to set and evolve those guardrails over time, but they will, of course, be in there.”

    The deal is exclusive, per Iger, at least in part. The Disney CEO hinted that “there is exclusivity, basically, at the beginning of the three-year agreement,” but remained mum on what that means. Asked if OpenAI is pursuing similar deals with other companies, Altman said, “I won’t rule out anything in the future, but we think this alone is going to be a wonderful start.”

Disney has previously sued AI companies for using its intellectual property. On Monday, the company sent Google a cease and desist letter, according to a source familiar with the situation.

The cease and desist letter claims Google’s AI products, including its image and video generating products Veo and Nano Banana, are infringing Disney’s copyrights “on a massive scale” by allowing users to create images and videos depicting Disney’s characters. The letter alleges that Google has “refused to implement any technological measures to mitigate or prevent copyright infringement.”

    In response, a Google spokesperson said they have “a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them.”

“More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content,” the spokesperson added.

    Disney had already sent similar cease and desist letters to Meta and Character.AI. In June, Disney and Universal sued AI photo generation company Midjourney, alleging the company violated copyright law.

    This story has been updated with additional developments and context.

    Hadas Gold and CNN

  • Apple Is Being Accused of Training Its AI Using Copyrighted Books

    Apple was hit with a lawsuit in California federal court by a pair of neuroscientists who say that the tech company misused thousands of copyrighted books to train its Apple Intelligence artificial intelligence model.

    Susana Martinez-Conde and Stephen Macknik, professors at SUNY Downstate Health Sciences University in Brooklyn, New York, told the court in a proposed class action on Thursday that Apple used illegal “shadow libraries” of pirated books to train Apple Intelligence.

    A separate group of authors sued Apple last month for allegedly misusing their work in AI training.

    Tech companies facing lawsuits

    The lawsuit is one of many high-stakes cases brought by copyright owners such as authors, news outlets, and music labels against tech companies, including OpenAI, Microsoft, and Meta Platforms, over the unauthorized use of their work in AI training. Anthropic agreed to pay $1.5 billion to settle a lawsuit from another group of authors over the training of its AI-powered chatbot Claude in August.

Spokespeople for Apple, as well as Martinez-Conde, Macknik, and their attorney, did not immediately respond to requests for comment on the new complaint on Friday.

    Apple Intelligence is a suite of AI-powered features integrated into iOS devices, including the iPhone and iPad. 

    “The day after Apple officially introduced Apple Intelligence, the company gained more than $200 billion in value: ‘the single most lucrative day in the history of the company,’” the lawsuit said.

    According to the complaint, Apple utilized datasets comprising thousands of pirated books as well as other copyright-infringing materials scraped from the internet to train its AI system.

    The lawsuit said that the pirated books included Martinez-Conde and Macknik’s “Champions of Illusion: The Science Behind Mind-Boggling Images and Mystifying Brain Puzzles” and “Sleights of Mind: What the Neuroscience of Magic Reveals About Our Everyday Deceptions.”

    The professors requested an unspecified amount of monetary damages and an order for Apple to stop misusing their copyrighted work.

    Reporting by Blake Brittain in Washington, Editing by Alexia Garamfalvi and Rod Nickel.

    Reuters

  • You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

    The complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week. After initially requiring copyright holders to opt out of having their content appear in Sora-generated videos, CEO Sam Altman announced that the company will be moving to an “opt-in” model that will “give rightsholders more granular control over generation of characters”—and Sora obsessives are not taking it particularly well.

    Given the type of content that was being generated with Sora and shared via the TikTok-style social app that OpenAI launched specifically to host user-generated Sora videos, the change shouldn’t come as a shock. Almost immediately, the platform was inundated with copyrighted material being used in ways that the rightsholders almost certainly did not care for, unless you think Nickelodeon really loved the subversiveness of Nazi SpongeBob. On Monday, the Motion Picture Association became one of the loudest voices calling for OpenAI to put an end to the potential infringement. It didn’t take long for OpenAI to respond and acquiesce.

In a blog post, Altman said the new approach to copyrighted material in Sora will require rightsholders to opt in to having their characters and content used—but he’s very sure that copyright holders love the videos, actually. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all),” Altman wrote, stating that his company wants to “let rightsholders decide how to proceed.”

Altman also admitted, “There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.” It’s unclear how that will play with rightsholders. MPA CEO Charles Rivkin said in a statement that OpenAI “must acknowledge it remains their responsibility—not rightsholders’—to prevent infringement on the Sora 2 service,” and said “Well-established copyright law safeguards the rights of creators and applies here.”

    While OpenAI might be giving copyright holders more control of the outputs of its model, it doesn’t appear that they had much say on the inputs. A report from the Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. It’s not clear that OpenAI went out and got those rights to train Sora 2, but the generator is very good at spitting out accurate recreations of copyrighted material in a way that it could only do if it was fed a whole lot of existing content during training.

The biggest AI training case thus far saw Anthropic pay out $1.5 billion to settle a copyright infringement case with authors of books the company pirated to train its models. The judge in that case did find that using copyrighted material for training without permission is fair use, though not the pirating of that material to acquire it, and other courts may not agree with that call. Earlier this year, OpenAI asked the Trump administration to call AI model training fair use. So a lot of OpenAI’s strategy around Sora appears to be fucking around and hoping, if it makes the right allies, it’ll never have to find out.

    OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well.

    AJ Dellinger

  • China’s Latest Digital Headache for American Corporations: ‘Export-Only’ Piracy

    Digital pirates in China are getting more sophisticated and are blocking their services domestically to avoid local law enforcement, and U.S. copyright holders would like to speak to the manager.

    The International Intellectual Property Alliance (IIPA), which represents various U.S. entertainment industries from Hollywood to gaming, is calling on China to do more to stop these operations, which have been dubbed ‘export-only’ piracy.

    The IIPA called out the practice and named notable offenders in a submission last week to the U.S. Trade Representative. The submission was part of the Trade Representative’s annual review of China’s compliance with World Trade Organization (WTO) obligations, TorrentFreak reports.

    “While significant piracy in China’s domestic market remains an enduring challenge, the exporting of pirated content, piracy services, and piracy devices (PDs) from China to foreign markets is a growing and equally troubling global trend,” the submission reads.

The report highlights several of the worst offenders, including the internet TV platform and piracy device exporter FlujoTV (formerly MagisTV), which targets Latin America; the app LokLok, serving Southeast Asia; and the website GIMY, popular in Taiwan.

    The IIPA underscored how pirates are shifting tactics and searching for new loopholes to exploit. Another example provided by the group was the reskinning of video games.

    “Instead of traditional methods that involve technical cracking of game software for complete duplication and distribution, game piracy in China is increasingly characterized by reskinning the original games with non-substantial revisions,” the report says. It added that the changes could be as simple as making slight adjustments to the games’ source code.

    Additionally, the IIPA’s comments paint a picture of China’s copyright enforcement as slow, inconsistent, and bureaucratic. For example, even after initial sanctions against violators, rights holders often have to file new complaints for repeat offenses. E-commerce platforms usually only have to delist specific items, rather than shutting down entire shops. And geo-blocked services can operate completely under the radar.

    “This allows China-based operations to evade enforcement action by simply geo-blocking their services from access within China or serving a different set of content to users accessing these services from within China,” the IIPA wrote.

    The group is now calling for specific reforms to address the issue, including more resources and better coordination for the National Copyright Administration of China (NCAC), simpler complaint procedures, and clearer rules for user-uploaded content platforms.

    They also want China to enforce its laws against all piracy operations run from the country, even if the services aren’t accessible locally, and to improve cross-border cooperation so geo-blocked piracy doesn’t slip through the cracks.

    Bruce Gil

  • AI’s Go-for-Broke Regulation Strategy

    In the AI world, everyone always seems to be going for broke. It’s AGI or bust — or as the gloomier title of a recent book has it, If Anyone Builds It, Everyone Dies. This rhetorical severity is backed up by big bets and bigger asks, hundreds of billions of dollars invested by companies that now say they’ll need trillions to build, essentially, the only companies that matter. To put it another way: They’re really going for it.

This is as clear in the scope of the infrastructure as it is in stories about the post-human singularity, but it’s happening somewhere else, too: in the quite human realm of law and regulation, where AI firms are making bids and demands that are, in their way, no less extreme. From The Wall Street Journal:

    OpenAI is planning to release a new version of its Sora video generator that creates videos featuring copyright material unless copyright holders opt out of having their work appear, according to people familiar with the matter …

    The opt-out process for the new version of Sora means that movie studios and other intellectual property owners would have to explicitly ask OpenAI not to include their copyright material in videos the tool creates.

    This is pretty close to the maximum possible bid OpenAI can make here, in terms of its relationship to copyright — a world in which rights holders must opt out of inclusion in OpenAI’s model is one in which OpenAI is all but asking to opt out of copyright as a concept. To arrive at such a proposal also seems to take for granted that a slew of extremely contentious legal and regulatory questions will be settled in OpenAI’s favor, particularly around the concept of “fair use.” AI firms are arguing in court — and via lobbyists, who are pointing to national-security concerns and the AI race with China — that they should be permitted not just to train on copyrighted data but to reproduce similar and competitive outputs. By default, according to this report, OpenAI’s future models will be able to produce images of a character like Nintendo’s Mario unless Nintendo takes action to opt out. Questions one might think would precede such a conversation — how did OpenAI’s model know about Mario in the first place? What sorts of media did it scrape and train on? — are here considered resolved or irrelevant.

    As many experts have already noted, various rights holders and their lawyers might not agree, and there are plenty of legal battles ahead (hence the simultaneous lobbying effort, to which the Trump administration seems at least somewhat sympathetic). But copyright isn’t the only area where OpenAI is making startlingly ambitious bids to alter the legal and regulatory landscape. In a deeply strange recent interview with Tucker Carlson, Sam Altman forced the conversation back around to an idea he and his company have been floating for a while now: AI “privilege.”

    If I could get one piece of policy passed right now relative to AI the thing I would most like, and this is intentional with some of the other things that we’ve talked about, is I’d like there to be a concept of AI privilege.

    When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information …

    We have decided that society has an interest in that being privileged and that we don’t, and that a subpoena can’t get, that the government can’t come asking your doctor for it or whatever. I think we should have the same concept for AI. I think when you talk to an AI about your medical history or your legal problems or asking for legal advice or any of these other things, I think the government owes a level of protection to its citizens there that is the same as you’d get if you’re talking to the human version of this.

Coming from anyone else, this could be construed as an interesting philosophical detour through questions of theoretical machine personhood, the effect of AI anthropomorphism on users’ expectations of privacy, and how to manage incriminating or embarrassing information revealed in the course of intimate interactions with a novel new sort of software. People already use chatbots for medical advice and legal consultation, and it’s interesting to think about how a company might offer or limit such services responsibly and without creating existential legal peril.

    Coming from Altman, though, it assumes an additional meaning: He would very much prefer that his company not be liable for potentially risky or damaging conversations that its software has with users. In other words, he’d like to operate a product that dispenses medical and legal advice while assuming as little liability for its outputs, or its users’ inputs, as possible — a mass-market product with the legal protections of a doctor, therapist, or lawyer but with as little responsibility as possible. There are genuinely interesting issues to work out here. But against the backdrop of numerous reports and lawsuits accusing chatbot makers of goading users into self-harm or triggering psychosis, it’s not hard to imagine why getting blanket protections might feel rather urgent right now.

    On both copyright and privacy, his vision is maximalist: not just total freedom for his company to operate as it pleases, but additional regulatory protections for it as well. It’s also probably aspirational — we don’t get to a copyright free-for-all without a lot of big fights, and a chatbot version of attorney-client privilege is the sort of thing that will likely arrive with a lot of qualifications and caveats. Still, each bid is characteristic of the industry and the moment it’s in. So long as they’re building something, they believe they might as well ask for everything.

    John Herrman

  • Kim Dotcom loses latest bid to avoid U.S. extradition on Megaupload charges

Wellington, New Zealand — A New Zealand court has rejected the latest bid by internet entrepreneur Kim Dotcom to halt his extradition to the United States on charges related to his file-sharing website Megaupload.

    Dotcom had asked the High Court to review the legality of an official’s August 2024 decision that he should be surrendered to the U.S. to face trial on charges of copyright infringement, money laundering and racketeering. It was the latest chapter in a protracted 13-year battle by the U.S. government to extradite the Finnish-German millionaire from New Zealand.

    The Megaupload founder had applied for what in New Zealand is called a judicial review, in which a judge is asked to evaluate whether an official’s decision was lawful.

    Internet mogul Kim Dotcom leaves with his girlfriend Elizabeth Donelly following his extradition appeal at the High Court in Auckland, New Zealand, in an Aug. 29, 2016 file photo.

    KATE DWEK/AFP/Getty


A judge on Wednesday dismissed Dotcom’s arguments that the decision to extradite him was politically motivated and that he would face grossly disproportionate treatment in the U.S. In a written ruling, Justice Christine Grice also rejected Dotcom’s claim that New Zealand’s police were wrong to charge his business partners, but not him, under domestic laws – which likely yielded lighter sentences than if the men had been tried in the U.S.

    The latest decision could be challenged in the Court of Appeal, where a deadline for filing is Oct. 8. It wasn’t immediately clear if Dotcom would do so.

    One of his lawyers, Ron Mansfield, told Radio New Zealand that Dotcom’s team had “much fight left in us as we seek to secure a fair outcome,” but he didn’t elaborate.

    Neither Dotcom nor Mansfield responded to a request for comment from The Associated Press on Thursday.

    New Zealand’s government hasn’t disclosed what will happen next in the extradition process or divulged an expected timeline for Dotcom to be surrendered to the United States.

The saga stretches back to January 2012, when New Zealand authorities, acting at the request of the FBI, arrested Dotcom and other company officers in a dramatic raid on his Auckland mansion. U.S. prosecutors said Megaupload raked in at least $175 million, mainly from people who used the site to illegally download songs, television shows and movies, before the FBI shut it down earlier that year.

    Lawyers for Dotcom and the others arrested argued that it was the users of the site, founded in 2005, who chose to pirate material, not its founders. But prosecutors said the men were the architects of a vast criminal enterprise, with the Department of Justice describing it as the largest criminal copyright case in U.S. history.

Dotcom has been free on bail in New Zealand since February 2012.

    Interviewed at his sprawling home by 60 Minutes in 2014, Dotcom told correspondent Bob Simon that he was inspired to seek his riches by the James Bond movies, “where, you know, some characters had private islands and super tankers converted into yachts and space stations and underwater homes. So, you know, I got inspired by that.”

    “But you’re not playing James Bond, you’re playing Dr. No,” suggested Simon.

    “That’s what everybody says,” replied the web entrepreneur.

    Dotcom and his business partners fought the FBI’s efforts to extradite them for years, including by challenging New Zealand law enforcement’s actions during the investigation and arrests. In 2021, however, New Zealand’s Supreme Court ruled that Dotcom and two other men could be surrendered.

    Under New Zealand law, it remained up to the country’s justice minister to decide if the extradition should proceed. The minister, Paul Goldsmith, ruled in August 2024 that it should.
       
    But by then, Dotcom was the only person whose fate remained in question. Two of his former business partners, Mathias Ortmann and Bram van der Kolk, pleaded guilty to charges against them in a New Zealand court in June 2023 and were sentenced to two and a half years in jail.

    In exchange, U.S. efforts to extradite them were dropped. Part of Dotcom’s latest legal bid challenged the police decision not to extend a plea deal under New Zealand laws to him, too.

    Grice rejected that, saying the choice to only charge Ortmann and van der Kolk in New Zealand was “a proper exercise of the Police’s discretion.” The jurist also dismissed Dotcom’s claim that Goldsmith’s extradition decision was politically motivated.

    Prosecutors earlier abandoned their extradition bid against a fourth Megaupload officer, Finn Batato, who was arrested in New Zealand. Batato returned to Germany, where he died from cancer in 2022.

    In November 2024, Dotcom said in a post on X that he had suffered a stroke. He wrote on X in July that he was making “good progress” in his recovery but still suffered from speech and memory impairments.

Goldsmith’s decision that Dotcom should be extradited was made before the stroke. But Grice said the minister had considered other “significant health conditions” Dotcom faced and wasn’t wrong to conclude that these shouldn’t prevent him from being extradited.

  • AI company Anthropic to pay authors $1.5 billion over pirated books used to train chatbots

    Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

    The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

    The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.

    “As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

    A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

    A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

    If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

    “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

    U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

    Anthropic said in a statement Friday that the settlement, if approved, “will resolve the plaintiffs’ remaining legacy claims.”

    “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, the company’s deputy general counsel.

    As part of the settlement, the company has also agreed to destroy the original book files it downloaded.

    Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

    Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

    Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the dataset.

    Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

    The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

    On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

    The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren’t registered with the U.S. Copyright Office.

    “On the one hand, it’s comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price,” said Thomas Heldrup, the group’s head of content protection and enforcement.

    On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules.

    “It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space,” Heldrup said.

    The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments.

    Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.

    The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court.

    “This indicates that maybe for other cases, it’s possible for creators and AI companies to reach settlements without having to essentially go for broke in court,” said Long, the legal analyst.

    The industry, including Anthropic, had largely praised Alsup’s June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”

    Comparing the AI model to “any reader aspiring to be a writer,” Alsup wrote that Anthropic “trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

    But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

    With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.

    [ad_2]

    Source link

  • Anthropic reaches $1.5 Billion settlement with authors in landmark copyright case

    [ad_1]

    Anthropic has agreed to a $1.5 billion settlement with authors in a landmark copyright case, marking one of the first and largest legal payouts of the AI era.

    The AI startup agreed to pay authors around $3,000 per book for roughly 500,000 works, after it was accused of downloading millions of pirated texts from shadow libraries to train its large language model, Claude. As part of the deal, Anthropic will also destroy data it was accused of illegally acquiring.

    The fast-growing AI startup announced earlier this week that it had just raised an additional $13 billion in new venture capital funding in a deal that valued the company at $183 billion. It has also said that it is currently on pace to generate at least $5 billion in revenues over the next 12 months. The settlement amounts to nearly a third of that figure or more than a tenth of the new funding Anthropic just received.
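    As a quick back-of-envelope check on those proportions (a purely illustrative calculation using the approximate figures reported above, not exact company numbers):

        # Rough sanity check of the reported settlement proportions (approximate figures).
        per_work_payment = 3_000        # dollars per covered book
        covered_works = 500_000         # approximate number of works in the class
        settlement = per_work_payment * covered_works          # = $1.5 billion

        projected_revenue = 5e9         # Anthropic's stated sales pace for the coming year
        new_funding = 13e9              # the latest funding round
        print(f"Settlement: ${settlement / 1e9:.1f}B")
        print(f"Share of projected revenue: {settlement / projected_revenue:.0%}")  # ~30%, nearly a third
        print(f"Share of new funding: {settlement / new_funding:.0%}")              # ~12%, more than a tenth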

    While the settlement does not establish a legal precedent, experts said it will likely serve as an anchor figure for the amount other major AI companies will need to pay if they hope to settle similar copyright infringement lawsuits. For instance, a number of authors are suing Meta for using their books without permission. As part of that lawsuit, Meta was forced to disclose internal company emails that suggest it knowingly used a library of pirated books called LibGen—which is one of the same libraries that Anthropic used. OpenAI and its partner Microsoft are also facing a number of copyright infringement cases, including one filed by the Authors Guild.

    Aparna Sridhar, deputy general counsel at Anthropic, told Fortune in a statement: “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic’s approach to training AI models constitutes fair use. Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

    A lawyer for the authors who sued Anthropic said the settlement would have far-reaching impacts.
    “This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” Justin Nelson, partner with Susman Godfrey LLP and co-lead plaintiffs’ counsel on Bartz et al. v. Anthropic PBC, said in a statement. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

    The case, which was originally set to go to trial in December, could have exposed Anthropic to damages of up to $1 trillion if the court found that the company willfully violated copyright law. Santa Clara law professor Ed Lee said that if Anthropic lost at trial, it faced “at least the potential for business-ending liability.” Anthropic essentially concurred with Lee’s conclusion, writing in a court filing that it felt “inordinate pressure” to settle the case given the size of the potential damages.

    The jeopardy Anthropic faced hinged on the means it had used to obtain the copyrighted books, rather than the fact that it had used the books to train AI without the explicit permission of the copyright holders. In June, U.S. District Court Judge William Alsup ruled that using copyrighted books to create an AI model constituted “fair use” for which no specific license was required.

    But Alsup then focused on the allegation that Anthropic had used digital libraries of pirated books for at least some of the data it fed its AI models, rather than purchasing copies of the books legally. The judge suggested in a decision allowing the case to go to trial that he was inclined to view this as copyright infringement no matter what Anthropic did with the pirated libraries.

    By settling the case, Anthropic has sidestepped an existential risk to its business. However, the settlement is significantly higher than some legal experts were predicting. The motion is now seeking preliminary approval of what’s claimed to be “the largest publicly reported copyright recovery in history.”

    James Grimmelmann, a law professor at Cornell Law School and Cornell Tech, called it a “modest settlement.”

    “It doesn’t try to resolve all of the copyright issues around generative AI. Instead, it’s focused on what Judge Alsup thought was the one egregiously wrongful thing that Anthropic did: download books in bulk from shadow libraries rather than buying copies and scanning them itself. The payment is substantial, but not so big as to threaten Anthropic’s viability or competitive position,” he told Fortune.

    He said that the settlement helps establish that AI companies need to acquire their training data legitimately, but does not answer other copyright questions facing AI companies, such as what they need to do to prevent their generative AI models from producing outputs that infringe copyright. In several cases still pending against AI companies—including a case The New York Times has filed against OpenAI and a case that movie studio Warner Brothers filed just this week against Midjourney, a firm that makes AI that can generate images and videos—the copyright holders allege the AI models produced outputs that were identical or substantially similar to copyrighted works.

    “The recent Warner Bros. suit against Midjourney, for example, focuses on how Midjourney can be used to produce images of DC superheroes and other copyrighted characters,” Grimmelmann said.

    While legal experts say the amount is manageable for a firm the size of Anthropic, Luke McDonagh, an associate professor of law at LSE, said the case may have a downstream impact on smaller AI companies if it does set a business precedent for similar claims.

    “The figure of $1.5 billion, as the overall amount of the settlement, indicates the kind of level that could resolve some of the other AI copyright cases. It could also point the way forward for licensing of copyright works for AI training,” he told Fortune. “This kind of sum—$3,000 per work—is manageable for a firm valued as highly as Anthropic and the other large AI firms. It may be less so for smaller firms.”

    A business precedent for other AI firms

    Cecilia Ziniti, a lawyer and founder of legal AI company GC AI, said the settlement was a “Napster to iTunes” moment for AI.

    “This settlement marks the beginning of a necessary evolution toward a legitimate, market-based licensing scheme for training data,” she said. She added the settlement could mark the “start of a more mature, sustainable ecosystem where creators are compensated, much like how the music industry adapted to digital distribution.”

    Ziniti also noted the size of the settlement may force the rest of the industry to get more serious about licensing copyrighted works.

    “The argument that it’s too difficult to track and pay for training data is a red herring because we have enough deals at this point to show it can be done,” she said, pointing to deals that news publications, including Axel Springer and Vox, have entered into with OpenAI. “This settlement will push other AI companies to the negotiating table and accelerate the creation of a true marketplace for data, likely involving API authentications and revenue-sharing models.”


    [ad_2]

    Beatrice Nolan

    Source link

  • Anthropic will pay a record-breaking $1.5 billion to settle copyright lawsuit with authors

    [ad_1]

    Anthropic will pay a record-breaking $1.5 billion to settle a class action piracy lawsuit brought by authors. The settlement is the largest-ever payout for a copyright case in the United States.

    The AI company behind the Claude chatbot reached a settlement in the case last week, but terms of the agreement weren’t disclosed at the time. Now, The New York Times reports that the authors involved in the case will get $3,000 per work, covering roughly 500,000 works.

    The settlement “is the first of its kind in the AI era,” Justin A. Nelson, the lawyer representing the authors, said in a statement. “This landmark settlement far surpasses any other known copyright recovery. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

    The case has been closely watched as top AI companies are increasingly facing legal scrutiny over their use of copyrighted works. In June, the judge in the case ruled that Anthropic’s use of copyrighted material for training its large language model was fair use, in a significant victory for the company. He did, however, rule that the authors and publishers could pursue piracy claims against the company since the books were downloaded illegally from sites like Library Genesis (also known as “LibGen”).

    As part of the settlement, Anthropic has also agreed to delete everything that was downloaded illegally and “said that it did not use any pirated works to build A.I. technologies that were publicly released,” according to The New York Times. The company has not admitted wrongdoing.

    “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic’s approach to training AI models constitutes fair use,” Anthropic’s Deputy General Counsel Aparna Sridhar said in a statement. “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

    [ad_2]

    Karissa Bell

    Source link

  • Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement

    [ad_1]

    Anthropic has agreed to pay at least $1.5 billion to settle a lawsuit brought by a group of book authors alleging copyright infringement, an estimated $3,000 per work. In a court motion on Friday, the plaintiffs emphasized that the terms of the settlement are “critical victories” and that going to trial would have been an “enormous” risk.

    This is the first class action settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. According to the settlement agreement, the class action will apply to approximately 500,000 works, but that number may go up once the list of pirated materials is finalized. For every additional work, the artificial intelligence company will pay an extra $3,000. Plaintiffs plan to deliver a final list of works to the court by October.

    “This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong,” says colead plaintiffs’ counsel Justin Nelson of Susman Godfrey LLP.

    Anthropic is not admitting any wrongdoing or liability. “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems,” Anthropic deputy general counsel Aparna Sridhar said in a statement.

    The lawsuit, which was originally filed in 2024 in the US District Court for the Northern District of California, was part of a larger ongoing wave of copyright litigation brought against tech companies over the data they used to train artificial intelligence programs. Authors Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber alleged that Anthropic trained its large language models on their work without permission, violating copyright law.

    This June, senior district judge William Alsup ruled that Anthropic’s AI training was shielded by the “fair use” doctrine, which allows unauthorized use of copyrighted works under certain conditions. It was a win for the tech company but came with a major caveat. As it gathered materials to train its AI tools, Anthropic had relied on a corpus of books pirated from so-called “shadow libraries,” including the notorious site LibGen, and Alsup determined that the authors should still be able to bring Anthropic to trial in a class action over pirating their work. (Anthropic maintains that it did not actually train its products on the pirated works, instead opting to purchase copies of books.)

    “Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again). Authors argue Anthropic should have paid for these pirated library copies. This order agrees,” Alsup wrote in his summary judgment.

    [ad_2]

    Kate Knibbs

    Source link

  • Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors

    [ad_1]

    Anthropic has reached a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what could have been a financially devastating outcome in court.

    The settlement agreement is expected to be finalized September 3, with more details to follow, according to a legal filing published on Tuesday. Lawyers for the plaintiffs did not immediately respond to requests for comment. Anthropic declined to comment.

    In 2024, three book writers, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, sued Anthropic, alleging that the startup illegally used their work to train its artificial intelligence models. In June, California district court judge William Alsup issued a summary judgment in Bartz v. Anthropic that largely sided with Anthropic, finding that the company’s usage of the books was “fair use” and thus legal.

    But the judge ruled that the manner in which Anthropic had acquired some of the works, by downloading them through so-called shadow libraries, including a notorious site called LibGen, constituted piracy. Alsup ruled that the book authors could still take Anthropic to trial in a class action for pirating their works; the legal showdown was slated to begin in December.

    Statutory damages for this kind of piracy start at $750 per infringed work, according to US copyright law. Because the library of books amassed by Anthropic was thought to contain approximately 7 million works, the AI company was potentially facing court-imposed penalties amounting to billions, possibly more than $1 trillion.
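    A rough, purely illustrative calculation shows how estimates that large arise: the $750 figure is the statutory minimum per work cited above, and US copyright law allows up to $150,000 per work for willful infringement, which is where the trillion-dollar ceiling comes from.

        # Illustrative statutory-damages exposure, not a legal estimate.
        works = 7_000_000               # approximate number of pirated works in Anthropic's library

        minimum_per_work = 750          # statutory floor per infringed work
        willful_max_per_work = 150_000  # statutory ceiling per work for willful infringement

        print(f"Floor: ${works * minimum_per_work / 1e9:.2f}B")                # ~$5.25B even at the minimum
        print(f"Willful ceiling: ${works * willful_max_per_work / 1e12:.2f}T") # ~$1.05T, the 'doomsday' scenario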

    “It’s a stunning turn of events, given how Anthropic was fighting tooth and nail in two courts in this case. And the company recently hired a new trial team,” says Edward Lee, a law professor at Santa Clara University who closely follows AI copyright litigation. “But they had few defenses at trial, given how Judge Alsup ruled. So Anthropic was staring at the risk of statutory damages in ‘doomsday’ amounts.”

    Most authors who may have been part of the class action were just starting to receive notice that they qualified to participate. The Authors Guild, a trade group representing professional writers, sent out a notice alerting authors that they might be eligible earlier this month, and lawyers for the plaintiffs were scheduled to submit a “list of affected works” to the court on September 1. This means that many of these writers were not privy to the negotiations that took place.

    “The big question is whether there is a significant revolt from within the author class after the settlement terms are unveiled,” says James Grimmelmann, a professor of digital and internet law at Cornell University. “That will be a very important barometer of where copyright owner sentiment stands.”

    Anthropic is still facing a number of other copyright-related legal challenges. One of the most high-profile disputes involves a group of major record labels, including Universal Music Group, which allege that the company illegally trained its AI programs on copyrighted lyrics. The plaintiffs recently filed to amend their case to allege that Anthropic had used the peer-to-peer file sharing service BitTorrent to download songs illegally.

    Settlements don’t set legal precedent, but the details of this case will likely be watched closely as dozens of other high-profile AI copyright cases continue to wind through the courts.

    [ad_2]

    Kate Knibbs

    Source link

  • The Video Game History Foundation’s fight for game preservation isn’t over


    [ad_1]

    Last week, the Video Game History Foundation (VGHF) released a statement expressing its regret that the US Copyright Office refused to grant an exemption to the Digital Millennium Copyright Act (DMCA) to help preserve rare video games. However, the VGHF continued by saying it won’t back down and will continue advocating for improved video game preservation.

    For some context, the VGHF had been a longtime supporter of the Software Preservation Network’s (SPN) petition for a DMCA exemption for the sake of preserving video games, especially for researchers who need access to them and can’t get it due to unavailability. Because the only currently legal route is to obtain a legitimate physical or digital copy of a game and play it on its corresponding console, researchers are encountering difficulties in progressing their studies. Piracy would be illegal, of course, which is why the SPN is fighting for an exemption. However, there are those who don’t see things this way.

    Despite not convincing the Entertainment Software Association (ESA) and the US Copyright Office, the VGHF doesn’t regret supporting the SPN’s petition for a DMCA exemption. Its goal, and that of several like-minded organizations, is to help preserve out-of-print and obscure video games for future generations to enjoy. The petition sought to allow researchers to access these games remotely from libraries and archives.

    The ESA pushed hard against the petition, refusing to allow any remote game access whatsoever. ESA members have even ignored calls for comment on the situation. As the VGHF says, researchers are now forced to use “extra-legal methods to access the vast majority of out-of-print video games that are otherwise unavailable.”

    Three years of fighting for a cause and not giving up shows that the VGHF remains committed to video game preservation. The organization ended its statement by calling on game industry members to support its cause.

    [ad_2]

    Jeremy Gan

    Source link

  • White Stripes sue Trump for using ‘Seven Nation Army’ without permission


    [ad_1]

    The White Stripes sued former President Donald Trump on Monday in a case that alleges he used their hit song “Seven Nation Army” without permission in a video posted to social media.

    The band has accused Trump and his presidential campaign of copyright infringement for playing the song’s iconic opening riff over a video of Trump boarding a plane for campaign stops in Michigan and Wisconsin last month.

    The Trump campaign did not immediately return an emailed request for comment.

    The lawsuit, filed in federal court in Manhattan, said the band was also objecting to Trump’s use of the song because members Jack White and Meg White “vehemently oppose the policies adopted and actions taken by Defendant Trump when he was President and those he has proposed for the second term he seeks.”

    Several prominent musicians have previously criticized Trump for using their songs at rallies. Last week, a federal judge in Atlanta ruled that Trump and his campaign must stop using the song “Hold On, I’m Coming” after a lawsuit from the estate of Isaac Hayes Jr.


    [ad_2]

    The Associated Press

    Source link

  • The Internet Archive Loses Its Appeal of a Major Copyright Case


    [ad_1]

    The Internet Archive has lost a major legal battle—in a decision that could have a significant impact on the future of internet history. Today, the US Court of Appeals for the Second Circuit ruled against the long-running digital archive, upholding an earlier ruling in Hachette v. Internet Archive that found that one of the Internet Archive’s book digitization projects violated copyright law.

    Notably, the appeals court’s ruling rejects the Internet Archive’s argument that its lending practices were shielded by the fair use doctrine, which permits certain unauthorized uses of copyrighted works, calling it “unpersuasive.”

    In March 2020, the Internet Archive, a San Francisco-based nonprofit, launched a program called the National Emergency Library, or NEL. Library closures caused by the pandemic had left students, researchers, and readers unable to access millions of books, and the Internet Archive has said it was responding to calls from regular people and other librarians to help those at home get access to the books they needed.

    The NEL was an offshoot of an ongoing digital lending project called the Open Library, in which the Internet Archive scans physical copies of library books and lets people check out the digital copies as though they’re regular reading material instead of ebooks. The Open Library lent the books to one person at a time—but the NEL removed this ratio rule, instead letting large numbers of people borrow each scanned book at once.

    The NEL was the subject of backlash soon after its launch, with some authors arguing that it was tantamount to piracy. In response, the Internet Archive within two months scuttled its emergency approach and reinstated the lending caps. But the damage was done. In June 2020, major publishing houses, including Hachette, HarperCollins, Penguin Random House, and Wiley, filed the lawsuit.

    In March 2023, the district court ruled in favor of the publishers. Judge John G. Koeltl found that the Internet Archive had created “derivative works,” arguing that there was “nothing transformative” about its copying and lending. After the initial ruling in Hachette v. Internet Archive, the parties negotiated terms—the details of which have not been disclosed—though the archive still filed an appeal.

    James Grimmelmann, a professor of digital and internet law at Cornell University, says the verdict is “not terribly surprising” in the context of how courts have recently interpreted fair use.

    The Internet Archive did eke out a Pyrrhic victory in the appeal. Although the Second Circuit sided with the district court’s initial ruling, it clarified that it did not view the Internet Archive as a commercial entity, instead emphasizing that it was clearly a nonprofit operation. Grimmelmann sees this as the right call: “I’m glad to see that the Second Circuit fixed that mistake.” (He signed an amicus brief in the appeal arguing that it was wrong to classify the use as commercial.)

    “Today’s appellate decision upholds the rights of authors and publishers to license and be compensated for their books and other creative works and reminds us in no uncertain terms that infringement is both costly and antithetical to the public interest,” Association of American Publishers president and CEO Maria A. Pallante said in a statement. “If there was any doubt, the Court makes clear that under fair use jurisprudence there is nothing transformative about converting entire works into new formats without permission or appropriating the value of derivative works that are a key part of the author’s copyright bundle.”

    [ad_2]

    Kate Knibbs

    Source link

  • Condé Nast Signs Deal With OpenAI


    [ad_1]

    Condé Nast and OpenAI have struck a multi-year deal that will allow the AI giant to use content from the media giant’s roster of properties—which includes the New Yorker, Vogue, Vanity Fair, Bon Appetit, and, yes, WIRED. The deal will allow OpenAI to surface stories from these outlets in both ChatGPT and the new SearchGPT prototype.

    “It’s crucial that we meet audiences where they are and embrace new technologies while also ensuring proper attribution and compensation for use of our intellectual property,” Condé Nast CEO Roger Lynch wrote in a company-wide email. Lynch pointed to ongoing turmoil within the publishing industry while discussing the deal, noting that technology companies have made it harder for publishers to make money, most recently with changes to traditional search.

    “Our partnership with OpenAI begins to make up for some of that revenue, allowing us to continue to protect and invest in our journalism and creative endeavors,” he wrote.

    Lynch testified before Congress earlier this year on how AI companies like OpenAI trained their models, speaking in favor of licensing. He has previously been a vocal opponent of AI companies using content without first seeking permission, describing said data as “stolen goods.” After WIRED reported earlier this year on the web-scraping practices of the AI search engine startup Perplexity, Condé Nast sent a cease-and-desist letter demanding that the company cease using its content.

    Specific terms of the partnership have not been disclosed. OpenAI declined to comment on the deal’s terms.

    As OpenAI noted in a blog post announcing the deal, Condé Nast isn’t the first media company to team up with a generative AI company. Publishers like The Atlantic, Axel Springer, and TIME have already struck deals, as have platforms like Reddit and Automattic, the owner of WordPress.com and Tumblr. Most major AI companies have traditionally gathered training data by scraping the internet without first licensing the copyrighted materials. This has resulted in a wave of lawsuits against the companies, including from other news outlets like The New York Times, arguing that the practice is unfair—and now, a continually growing wave of publishers choosing to cooperate with AI’s biggest players.

    Digital publishers rely on search engines and other platforms to drive readership to their stories. Changes to the algorithms that power Google Search or Facebook’s Feed can make or break media companies. As Google and other search engines move beyond traditional search and incorporate generative AI news summaries and other AI products into their offerings—and generative AI companies like OpenAI introduce their own search products—news outlets face a stark choice: If they do not allow these companies to scrape data, they risk making their work harder to find on the internet.

    This is a developing story. Check back for updates.

    [ad_2]

    Kate Knibbs

    Source link

  • Kim Dotcom, roguish face of 2010s online piracy, will finally be extradited to the US


    [ad_1]

    Kim Dotcom, the Megaupload founder and hard-partying face of early 2010s online piracy, is finally headed to the US. Reuters reports that New Zealand’s justice minister signed an extradition order on Thursday to end the entrepreneur’s nearly 13-year legal battle, paving the way for the German-born Dotcom to face charges from the US government.

    “I considered all of the information carefully, and have decided that Mr Dotcom should be surrendered to the U.S. to face trial,” Justice Minister Paul Goldsmith said in a statement. The decision came more than six years after a New Zealand court ruled Dotcom could be extradited to the US, paving the way for appeals that culminated in today’s decision.


    Once the 13th most visited site online, the file-hosting site Megaupload was a hotbed for pirated content. In early 2012, American authorities charged Dotcom and six others with racketeering, copyright infringement, money laundering and copyright distribution. The US indictment claimed Megaupload cost copyright holders $500 million in damages while making $175 million from ads and premium subscriptions.

    The raid on Dotcom’s Auckland mansion was dramatic fare among 2012’s relatively tame headlines. The New York Times reported at the time that when he saw the police, Dotcom barricaded himself inside, activated several electronic locks and waited in a safe room. When officers cut their way inside, they saw Dotcom standing near “a firearm that they said looked like a sawed-off shotgun.”


    Dotcom (born Kim Schmitz) had several brushes with the law before that. He at least claimed to have spent three months in a Munich jail in 1994 for “breaking into Pentagon computers and observing real-time satellite photos of Saddam Hussein’s palaces.” Soon after, he received a suspended two-year sentence for a scam involving stolen phone card numbers.

    In 2001, he was accused in the largest insider-trading case in German history. He reportedly fled Germany to escape those charges, was captured in Thailand, extradited (this week isn’t his first go-round) and convicted in 2002. At some point after that, he moved to New Zealand, holing up in a luxurious mansion.

    You can see that mansion — and a taste of his larger-than-life persona — in his music video “Good Life.”

    Goldsmith signed the extradition order on Thursday and followed standard practice in giving Dotcom “a short period of time to consider and take advice” on his decision.

    Dotcom, never one to mince words, posted a message on X that “the obedient US colony in the South Pacific just decided to extradite me for what users uploaded to Megaupload.”

    [ad_2]

    Will Shanklin

    Source link

  • AI startup argues scraping every song on the internet is ‘fair use’


    [ad_1]

    When most tech companies are challenged with a lawsuit, the expected defense is to deny wrongdoing: to give a reasonable explanation of why the business’s actions were not breaking any laws. Music AI startups Udio and Suno have gone for a different approach: admit to doing exactly what you were sued for.

    Udio and Suno were sued in June, with music labels Universal Music Group, Warner Music Group and Sony Music Group claiming they trained their AI models by scraping copyrighted materials from the Internet. In a court filing today, Suno acknowledged that its neural networks do in fact scrape copyrighted material: “It is no secret that the tens of millions of recordings that Suno’s model was trained on presumably included recordings whose rights are owned by the Plaintiffs in this case.” And that’s because its training data “includes essentially all music files of reasonable quality that are accessible on the open internet,” which likely include millions of illegal copies of songs.

    But the company is taking the line that its scraping falls under the umbrella of fair use. “It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product,” the statement reads. Its argument seems to be that since the AI-generated tracks it creates don’t include samples, illegally obtaining all of those tracks to train the AI model isn’t a problem.

    Calling the defendants’ actions “evading and misleading,” the RIAA, which initiated the lawsuit, had an unsurprisingly harsh response to the filing. “Their industrial scale infringement does not qualify as ‘fair use’. There’s nothing fair about stealing an artist’s life’s work, extracting its core value, and repackaging it to compete directly with the originals,” a spokesperson for the organization said. “Defendants had a ready lawful path to bring their products and tools to the market – obtain consent before using their work, as many of their competitors already have. That unfair competition is directly at issue in these cases.”

    Whatever the next phase of this litigation entails, prepare your popcorn. It should be wild.

    [ad_2]

    Anna Washenko

    Source link

  • His Galaxy Wolf Art Kept Getting Ripped Off. So He Sued—and Bought a Home


    [ad_1]

    “With every one shop that I got to take [items] down, another 10 popped up out of nowhere,” Jödicke says. “I almost wanted to give up on my art, because I felt so devastated that people would just take my work and profit out of it, and I didn’t see anything from it.”

    The widespread popularity of Where Light and Dark Meet only magnified this feeling, making it unclear where Jödicke should start. “Where infringing use is widespread, it may not be feasible to pursue every single infringement,” Eziefula says. “Especially if overseas from the artist’s home jurisdiction, nor worthwhile, where the damage caused is minimal.”

    Too often, however, the damage is significant—both in diverting income from artists and in diluting their brand, making them a more difficult proposition for potential clients. People often feel entitled to artwork they find online, and artists experience hostility when they try to assert their ownership of it. Yet, that entitlement is exactly what broke the dam for Jödicke and paved the way for him to fight back.

    In 2020, Jödicke caught a lucky break of sorts when Aaron Carter—pop singer and brother of the Backstreet Boys’ Nick—used one of the artist’s other pieces, titled Brotherhood, to promote his clothing line on Twitter (now X). The image, which shares the same vibe as Jödicke’s galaxy wolf, depicts two lions butting heads, one white and one black, as their manes curl in the shape of a heart. A frustrated Jödicke called Carter out on Twitter. Demands for credit and/or removal are often met with stony silence. On this occasion Jödicke received a response:

    “you should’ve taken it as a compliment dick a fan of MINE sent this to me,” Carter wrote alongside a repost of Jödicke’s tweet, according to an August 2020 court filing. “oh here they go again, the answer is No this image has been made public and im [sic] using it to promote my clothing line… guess I’ll see you in small claims court FUCKERY.”

    For the first time, thanks to Carter’s retort, Jödicke had options. The public nature of this exchange had IP lawyers lining up to represent him, and, after years of watching others make money from his art, Jödicke called Carter on his threat.

    After a year of court proceedings in US District Court in central California, Jödicke says he got a settlement in the low five figures for violation of his copyright. It was a revelatory moment. “I had never really had any kind of justice,” Jödicke says. “That really, really motivated me to seek further legal advice and see if I could do something against all the art theft.” (Carter died in 2022.)

    That was a singular infringement with an immediately identifiable infringer. Countering the widespread sale of his work on various pieces of merchandise would be a far more challenging task. His win against Carter, however, brought him to the attention of UK-based Edwin James IP. The firm approached Jödicke to offer its resources, specifically its specialism in stopping counterfeiters from domains where copyright law is more lax, like China.

    [ad_2]

    Geoffrey Bunting

    Source link