Poe, an AI chatbot platform owned by the question-and-answer site Quora and backed by a $75 million Andreessen Horowitz investment, is providing users with downloadable HTML files of articles published by paywalled journalistic outlets.
Prompting the service’s Assistant bot with the URL of this WIRED story about the AI-powered search service Perplexity plagiarizing one of our stories, for example, yields a detailed, 235-word summary and a 1-MB file containing an HTML capture of the entire article, which users can download from Poe’s servers directly from the chatbot.
WIRED was similarly able to retrieve articles from paywalled sites including The New York Times, Bloomberg Businessweek, The Atlantic, Forbes, Defector, and 404 Media in downloadable format simply by entering URLs into the Assistant bot’s interface. This appears to be just the latest example of the AI industry’s cavalier approach to intellectual property law, an approach that is rapidly undermining existing business models in fields like journalism and music.
“This is a significant copyright issue,” James Grimmelmann, professor of digital and information law at Cornell University, wrote in an email. “Because they made a copy on their own server, that’s prima facie copyright infringement.” (Quora disputes this, comparing Poe to a cloud storage service.)
When asked to summarize the content of a test website controlled by my colleague Dhruv Mehrotra, the bot did not return a summary but did return an HTML file. According to the website’s server logs, immediately after the Assistant bot was prompted to summarize the site, a server identifying itself as “Quora Bot” visited the site. It did not attempt to visit the site’s robots.txt page, suggesting that Poe and Quora ignore the Robots Exclusion Protocol, a widely accepted though not legally binding web standard.
A prominent media executive, whom WIRED granted anonymity to candidly discuss a legally sensitive matter his company is actively investigating, says that his publication also observed servers identifying themselves as Quora bots accessing its site immediately after giving Poe’s chatbot prompts about specific articles; these prompts, he says, yielded much or all of the text of these articles.
“Poe is a platform that lets users ask questions and have back-and-forth dialog with a variety of AI-powered bots provided by third parties,” Quora spokesperson Autumn Besselman wrote in an email. “We do not have or train our own AI models. Poe has a feature that enables a user to show the contents of a URL to a bot, but the bot will only see content that it is served by the domain. We would be happy to connect with your technical team to help them make sure your paywalled content isn’t served to people using Poe.”
“The file attachments on Poe are created at the direction of users and operate similarly to cloud storage services, ‘read it later’ services, and ‘web clipper’ products, which we believe are all consistent with copyright law,” Besselman wrote in response to an email asking follow-up questions. Andreessen Horowitz did not respond to a request for comment.
Amazon’s cloud division has launched an investigation into Perplexity AI. At issue is whether the AI search startup is violating Amazon Web Services rules by scraping websites that attempted to prevent it from doing so, WIRED has learned.
An AWS spokesperson, who talked to WIRED on the condition that they not be named, confirmed the company’s investigation of Perplexity. WIRED had previously found that the startup—which has backing from the Jeff Bezos family fund and Nvidia, and was recently valued at $3 billion—appears to rely on content from scraped websites that had forbidden access through the Robots Exclusion Protocol, a common web standard. While the Robots Exclusion Protocol is not legally binding, terms of service generally are.
The Robots Exclusion Protocol is a decades-old web standard that involves placing a plaintext file (like wired.com/robots.txt) on a domain to indicate which pages should not be accessed by automated bots and crawlers. While companies that use scrapers can choose to ignore this protocol, most have traditionally respected it. The Amazon spokesperson told WIRED that AWS customers must adhere to the robots.txt standard while crawling websites.
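The protocol is simple enough to check programmatically. As a rough illustration of how a well-behaved crawler consults a robots.txt file, here is a sketch using Python's standard library; the file contents and bot names below are hypothetical, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks one crawler entirely and keeps
# all other bots out of a /private/ section.
robots_txt = """\
User-agent: QuoraBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks permission before fetching each page.
print(parser.can_fetch("QuoraBot", "https://example.com/story"))      # False
print(parser.can_fetch("OtherBot", "https://example.com/story"))      # True
print(parser.can_fetch("OtherBot", "https://example.com/private/x"))  # False
```

Nothing technically stops a scraper from skipping this check, which is exactly the behavior at issue: compliance is voluntary.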
“AWS’s terms of service prohibit customers from using our services for any illegal activity, and our customers are responsible for complying with our terms and all applicable laws,” the spokesperson said in a statement.
Scrutiny of Perplexity’s practices follows a June 11 report from Forbes that accused the startup of stealing at least one of its articles. WIRED investigations confirmed the practice and found further evidence of scraping abuse and plagiarism by systems linked to Perplexity’s AI-powered search chatbot. Engineers for Condé Nast, WIRED’s parent company, block Perplexity’s crawler across all of its websites using a robots.txt file. But WIRED found the company had access to a server using an unpublished IP address—44.221.181.252—that visited Condé Nast properties hundreds of times in the past three months, apparently to scrape their content.
The machine associated with Perplexity appears to be engaged in widespread crawling of news websites that forbid bots from accessing their content. Spokespeople for The Guardian, Forbes, and The New York Times also say they detected the IP address on their servers multiple times.
WIRED traced the IP address to a virtual machine known as an Elastic Compute Cloud (EC2) instance hosted on AWS, which launched its investigation after we asked whether using AWS infrastructure to scrape websites that forbade it violated the company’s terms of service.
Last week, Perplexity CEO Aravind Srinivas responded to WIRED’s investigation first by saying the questions we posed to the company “reflect a deep and fundamental misunderstanding of how Perplexity and the Internet work.” Srinivas then told Fast Company that the secret IP address WIRED observed scraping Condé Nast websites and a test site we created was operated by a third-party company that performs web crawling and indexing services. He refused to name the company, citing a nondisclosure agreement. When asked if he would tell the third party to stop crawling WIRED, Srinivas replied, “It’s complicated.”
The music industry has officially declared war on Suno and Udio, two of the most prominent AI music generators. A group of music labels including Universal Music Group, Warner Music Group, and Sony Music Group filed lawsuits in US federal court on Monday morning alleging copyright infringement on a “massive scale.”
The plaintiffs seek damages of up to $150,000 per work infringed. The lawsuit against Suno was filed in Massachusetts, while the case against Udio’s parent company, Uncharted Inc., was filed in New York. Suno and Udio did not immediately respond to a request for comment.
“Unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent or pay set back the promise of genuinely innovative AI for us all,” Recording Industry Association of America chair and CEO Mitch Glazier said in a press release.
The companies have not publicly disclosed what they trained their generators on. Ed Newton-Rex, a former AI executive who now runs the ethical AI nonprofit Fairly Trained, has written extensively about his experiments with Suno and Udio; Newton-Rex found that he could generate music that “bears a striking resemblance to copyright songs.” In the complaints, the music labels state that they were independently able to prompt Suno into producing outputs that “match” copyrighted work from artists ranging from ABBA to Jason Derulo.
One example provided in the lawsuit describes how the labels generated songs extremely similar to Chuck Berry’s 1958 rock hit “Johnny B. Goode” in Suno by using prompts like “1950s rock and roll, rhythm & blues, 12 bar blues, rockabilly, energetic male vocalist, singer guitarist,” along with snippets of the song’s lyrics. One song almost exactly replicated the “Go, Johnny, go” chorus; the plaintiffs attached side-by-side transcriptions of the scores and argued that such overlap was only possible because Suno had trained on copyrighted work.
The Udio lawsuit offers similar examples, noting that the labels were able to generate a dozen outputs resembling Mariah Carey’s perennial hit “All I Want for Christmas Is You.” It also offers a side-by-side comparison of music and lyrics, and notes that Mariah Carey soundalikes generated by Udio have already caught the attention of the public.
RIAA chief legal officer Ken Doroshow says Suno and Udio are trying to conceal “the full scope of their infringement.” According to the complaint against Suno, the AI company did not deny that it used copyrighted materials in its training data when asked in prelitigation correspondence, but instead said that the training data is “confidential business information.”
“Our technology is transformative; it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists,” said Suno CEO Mikey Schulman in a statement. “We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook.”
“They’d only get in trouble if they summarized the story incorrectly and made it defamatory when it wasn’t before. That’s something that they actually would be at legal risk for, especially if they don’t credit the original source clearly enough and people can’t easily go to that source to check,” he says. “If Perplexity’s edits are what make the story defamatory, 230 doesn’t cover that, under a bunch of case law interpreting it.”
In one case WIRED observed, Perplexity’s chatbot did falsely claim, albeit while prominently linking to the original source, that WIRED had reported that a specific police officer in California had committed a crime. (“We have been very upfront that answers will not be accurate 100% of the time and may hallucinate,” Srinivas said in response to questions for the story we ran earlier this week, “but a core aspect of our mission is to continue improving on accuracy and the user experience.”)
“If you want to be formal,” says Grimmelmann, “I think this is a set of claims that would get past a motion to dismiss on a bunch of theories. Not saying it will win in the end, but if the facts bear out what Forbes and WIRED, the police officer—a bunch of possible plaintiffs—allege, they are the kinds of things that, if proven and other facts were bad for Perplexity, could lead to liability.”
Not all experts agree with Grimmelmann. Pam Samuelson, professor of law and information at UC Berkeley, writes in an email that copyright infringement is “about use of another’s expression in a way that undercuts the author’s ability to get appropriate remuneration for the value of the unauthorized use. One sentence verbatim is probably not infringement.”
Bhamati Viswanathan, a faculty fellow at New England Law, says she’s skeptical the summary passes a threshold of substantial similarity usually necessary for a successful infringement claim, though she doesn’t think that’s the end of the matter. “It certainly should not pass the sniff test,” she wrote in an email. “I would argue that it should be enough to get your case past the motion to dismiss threshold—particularly given all the signs you had of actual stuff being copied.”
In all, though, she argues that focusing on the narrow technical merits of such claims may not be the right way to think about things, as tech companies can adjust their practices to honor the letter of dated copyright laws while still grossly violating their purpose. She believes an entirely new legal framework may be necessary to correct for market distortions and promote the underlying aims of US intellectual property law, among them to allow people to financially benefit from original creative work like journalism so that they’ll be incentivized to produce it—with, in theory, benefits to society.
“There are, in my opinion, strong arguments to support the intuition that generative AI is predicated upon large scale copyright infringement,” she writes. “The opening ante question is, where do we go from there? And the greater question in the long run is, how do we ensure that creators and creative economies survive? Ironically, AI is teaching us that creativity is more valuable and in demand than ever. But even as we recognize this, we see the potential for undermining, and ultimately eviscerating, the ecosystems that enable creators to make a living from their work. That’s the conundrum we need to solve—not eventually, but now.”
Outside the skatepark in Prague, on a scrubby patch of grass, Bartoš leans back into his deck chair as he tries to impress on me that Pirates are not your regular stiff politicians. From the campaign launch unfolding behind us, that’s pretty obvious. Yes, there are long speeches and polite rounds of applause. But there are also gangs of shirtless skateboarders, a blue-haired rapper, rainbow banners showing our solar-powered future, and references to the online forums where party members can vote on new policies or demand new leadership.
He disagrees that the broadening of the Pirates’ focus has diluted its identity. “We cannot be a single issue party,” he insists. Instead, he compares the Pirates’ evolution to Europe’s Greens, which started as a grassroots movement built around a single issue: the environment. Now the Greens are applying their original values to everything from housing to energy, as they sit in coalition governments in Germany, Luxembourg, Ireland, and Austria. Although the Pirates “don’t preach” like the Greens, he says, “we’re doing the same journey they did a while ago.”
The Czech branch demonstrates the Pirates’ potential—how an internet-first ideology can be woven into national politics—but it is also a microcosm of the party’s problems. Like other Pirate parties before it, the Czech party suffers from internal bickering, factionalism, and claims of sexual harassment. Former campaign manager Šárka Václavíková has spoken publicly about her decision to leave the party and her police complaint against a fellow party member for what she describes as stalking and psychological abuse. Over Zoom from her new home in Italy, she says sexual harassment of women was systemic before she left last year—a claim the party strongly denies. “Isolated incidents can, of course, happen, just as in society or any other party. However, if we had any information about such incidents, we would take immediate action,” party spokesperson Lucie Švehlíková told WIRED.
But Václavíková says she’s also disappointed with the direction of the party as a whole. “There are two factions in the Pirate Party,” she declares. There are the centrists, the people who want to appeal to everyone and are disowning the party’s Pirate Bay roots in the process. Václavíková says she identified with the other faction, whom she calls “the real pirates.” “For us,” she says, “the ideology of transparent policy and privacy, and also human rights, are more important than just gaining more power for our own profit.”
So far, Bartoš has prevented these issues from tearing the party apart. Part of why he has lasted so long, surviving a series of leadership challenges (including from Gregorová), is that he can clearly describe what makes the Pirates’ outlook different. Across Europe, other Pirates are still struggling to define what a better future—with more technology, not less—would actually look like. When I sign into a Zoom call with Tommy Klein, political adviser to the Pirates in Luxembourg, he is sitting in front of a poster emblazoned with the phrase “Save Our Internet.” When I ask how exactly the internet needs saving, he replies without enthusiasm that the poster is old. “It’s from the 2018 election,” he says.
Under Bartoš, however, the Czech Pirates have found a way to articulate a utopian vision of a technology-infused future that means more than just reducing Big Tech’s influence on the European internet. Like the Pirate Bureau 20 years ago, the Czech Pirates also have a bus—really more of a camper van—that carries illustrations of their message. There is a sun, with rays resembling internet nodes. Wind turbines and solar farms grow out of rolling pink hills. Slogans like “Girl Power” and “Tolerance” hover over people doing peace signs and smiling through heart-shaped glasses. In Bartoš, the original Pirate vision for an alternative technology-enabled future still lingers. “I believe that we can save the planet and society through technology,” he declares from his deck chair. Whether that optimism is still applicable, 20 years later, is up to the voters to decide.
Last week, an AI Overview search result from Google used one of my WIRED articles in an unexpected way that makes me fearful for the future of journalism.
I was experimenting with AI Overviews, the company’s new generative AI feature designed to answer online queries. I asked it multiple questions about topics I’ve recently covered, so I wasn’t shocked to see my article linked, as a footnote, way at the bottom of the box containing the answer to my query. But I was caught off guard by how much the first paragraph of an AI Overview pulled directly from my writing.
The following screenshot on the left is from an interview I conducted with one of Anthropic’s product developers about tips for using the company’s Claude chatbot. The screenshot on the right is a portion of Google’s AI Overview that answered a question about using Anthropic’s chatbot. Reading the two paragraphs side by side, it feels reminiscent of a classroom cheater who copied an answer from my homework and barely even bothered to switch up the phrasing.
Reece Rogers via Google
Without the AI Overviews enabled, my article was often the featured snippet highlighted at the top of Google search results, offering a clear link for curious users to click on when they were looking for advice about using the Claude chatbot. During my initial tests of Google’s new search experience, the featured snippet with the article still appeared for relevant queries, but it was pushed beneath the AI Overview answer that pulled from my reporting and inserted aspects of it into a 10-item bulleted list.
In email exchanges and a phone call, a Google spokesperson acknowledged that the AI-generated summaries may use portions of writing directly from web pages, but they defended AI Overviews as conspicuously referencing back to the original sources. Well, in my case, the first paragraph of the answer is not directly attributed to me. Instead, my original article was one of six footnotes hyperlinked near the bottom of the result. With source links located so far down, it’s hard to imagine any publisher receiving significant traffic in this situation.
“AI Overviews will conceptually match information that appears in top web results, including those linked in the overview,” wrote a Google spokesperson in a statement to WIRED. “This information is not a replacement for web content, but designed to help people get a sense of what’s out there and click to learn more.” Looking at the word choice and overall structure of the AI Overview in question, I disagree with Google’s characterization that the result may be just a “conceptual match” of my writing. It goes further. Also, even if Google developers did not intend for this feature to be a replacement for the original work, AI Overviews provide direct answers to questions in a manner that buries attribution and reduces the incentive for users to click through to the source material.
“We see that links included in AI Overviews get more clicks than if the page had appeared as a traditional web listing for that query,” said the Google spokesperson. No data to support this claim was offered to WIRED, so it’s impossible to independently verify the impact of the AI feature on click-through rates. Also, it’s worth noting that the company compared AI Overview referral traffic to more traditional blue-link traffic from Google, not to articles chosen for a featured snippet, where the rates are likely much higher.
After I reached out to Google about the AI Overview result that pulled from my work, the experimental AI search result for this query stopped showing up, but Google still attempted to generate an answer above the featured snippet.
Reece Rogers via Google
While many AI lawsuits remain unresolved, one legal expert I spoke with who specializes in copyright law was skeptical that I could win any hypothetical litigation. “I think you would not have a strong case for copyright infringement,” says Janet Fries, an attorney at Faegre Drinker Biddle & Reath. “Copyright law, generally, is careful not to get in the way of useful things and helpful things.” Her analysis focused on the type of content in this specific example of original work: it is quite difficult, she explained, to make an infringement claim about instructional or fact-based writing, like my advice column, as opposed to more creative work, like poetry.
I’m definitely not the first person to suggest focusing on your intended audience when writing chatbot prompts, so I agree that the fact-based aspect of my writing does complicate the overall situation. It’s hard for me, though, to imagine a world where Google arrives at that exact paragraph about Claude’s chatbot in its AI Overview results without referencing my work first.
In the drawn-out contract battle between TikTok and Universal Music Group, a high-profile exemption has been made for Taylor Swift. A few of her songs became available again as TikTok sounds on Thursday, just a week before the release of Swift’s latest album, The Tortured Poets Department. It remains unclear what kind of arrangement was made for her official music to come back or how long it will remain on the social media platform.
Madeline Macrae, a Swift fan and TikTok creator, heard the news Thursday morning and immediately started searching TikTok and Google to confirm it wasn’t some hoax. “I’m really excited to have that catalog back, and I don’t have to rely on sped-up versions or edited versions,” she says. “I can just use her actual music.” Songs like “Cruel Summer,” “Cardigan,” and “Style (Taylor’s Version)” can now be used by content creators on the platform, as first reported by Variety.
In addition to being excited about using Swift songs in new videos, Macrae is grateful that the pop megastar’s music may be unmuted in her past TikTok videos. “I was going back and forth on deleting them or keeping them, because they look kind of silly muted,” she says. When UMG’s music was initially pulled from TikTok’s library in January, many creators were stunned to see their archive of past videos with certain songs go silent overnight.
Does this mean that The Tortured Poets Department album will be available to use for videos on TikTok? It’s uncertain, but Macrae is hopeful: “I think this move also just shows the power of Taylor Swift.” Billie Eilish, another major UMG artist, will soon be promoting her upcoming album, Hit Me Hard and Soft, due out in May, but Eilish fans will have to wait to see if her music also returns to TikTok before it drops.
Most UMG artists have been absent from TikTok for nearly 10 weeks, greatly shifting the user experience on the social media platform and opening the door for non-UMG artists, like Beyoncé, to go viral with TikTok’s algorithm.
It remains a mystery when the long-standing contract dispute between TikTok and UMG will come to a resolution. Because UMG is one of the biggest record companies in the world, its removal of songs from TikTok has affected the careers of many established artists as well as rising stars. Multiple artists expressed frustration about the move, often citing disrupted marketing plans or decreased audience reach. A spokesperson for UMG did not immediately respond to a request for comment.
No matter what eventually happens between the two companies, Swifties on TikTok are feeling grateful for her music’s return as they prepare for listening parties to celebrate the new album. “I already know my Friday night plans,” says Macrae. “Staying in with friends, drinking some wine, and just listening to this album.” Sounds like an evening of truly social media.
For the past few months, Morten Blichfeldt Andersen has spent many hours scouring OpenAI’s GPT Store. Since it launched in January, the marketplace for bespoke bots has filled up with a deep bench of useful and sometimes quirky AI tools. Cartoon generators spin up New Yorker–style illustrations and vivid anime stills. Programming and writing assistants offer shortcuts for crafting code and prose. There’s also a color analysis bot, a spider identifier, and a dating coach called RizzGPT. Yet Blichfeldt Andersen is hunting only for one very specific type of bot: those built on his employer’s copyright-protected textbooks without permission.
Blichfeldt Andersen is publishing director at Praxis, a Danish textbook purveyor. The company has been embracing AI and created its own custom chatbots. But it is currently engaged in a game of whack-a-mole in the GPT Store, and Blichfeldt Andersen is the man holding the mallet.
“I’ve been personally searching for infringements and reporting them,” Blichfeldt Andersen says. “They just keep coming up.” He suspects the culprits are primarily young people uploading material from textbooks to create custom bots to share with classmates—and that he has uncovered only a tiny fraction of the infringing bots in the GPT Store. “Tip of the iceberg,” Blichfeldt Andersen says.
It is easy to find bots in the GPT Store whose descriptions suggest they might be tapping copyrighted content in some way, as TechCrunch noted in a recent article claiming OpenAI’s store was overrun with “spam.” Using copyrighted material without permission is permissible in some contexts, but in others rightsholders can take legal action. WIRED found a GPT called Westeros Writer that claims to “write like George R.R. Martin,” the creator of Game of Thrones. Another, Voice of Atwood, claims to imitate the writer Margaret Atwood. Yet another, Write Like Stephen, is intended to emulate Stephen King.
When WIRED tried to trick the King bot into revealing the “system prompt” that tunes its responses, the output suggested it had access to King’s memoir On Writing. Write Like Stephen was able to reproduce passages from the book verbatim on demand, even noting which page the material came from. (WIRED could not make contact with the bot’s developer, because it did not provide an email address, phone number, or external social profile.)
OpenAI spokesperson Kayla Wood says it responds to takedown requests against GPTs made with copyrighted content but declined to answer WIRED’s questions about how frequently it fulfills such requests. She also says the company proactively looks for problem GPTs. “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies, including the use of content from third parties without necessary permission,” Wood says.
New Disputes
The GPT Store’s copyright problem could add to OpenAI’s existing legal headaches. The company is facing a number of high-profile lawsuits alleging copyright infringement, including one brought by The New York Times and several brought by different groups of fiction and nonfiction authors, including big names like George R.R. Martin.
Chatbots offered in OpenAI’s GPT Store are based on the same technology as its own ChatGPT but are created by outside developers for specific functions. To tailor their bot, a developer can upload extra information that it can tap to augment the knowledge baked into OpenAI’s technology. The process of consulting this additional information to respond to a person’s queries is called retrieval-augmented generation, or RAG. Blichfeldt Andersen is convinced that the RAG files behind the bots in the GPT Store are a hotbed of copyrighted materials uploaded without permission.
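Conceptually, RAG works by retrieving the uploaded passage most relevant to a user's question and folding it into the prompt sent to the underlying model. The toy sketch below substitutes simple word overlap for the embedding search real systems use, and every name in it is illustrative rather than drawn from OpenAI's actual implementation:

```python
# Minimal RAG sketch: pick the uploaded chunk that best matches the
# query, then prepend it as context to the model prompt.

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the query."""
    q = set(query.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

def build_prompt(query: str, chunks: list[str]) -> str:
    """Combine the retrieved context and the question into one prompt."""
    context = retrieve(query, chunks)
    return f"Context: {context}\n\nQuestion: {query}"

# Hypothetical uploaded material, standing in for a textbook's chunks.
chunks = [
    "Chapter 3 covers supply and demand curves.",
    "Chapter 7 explains monetary policy and interest rates.",
]
print(build_prompt("What does the book say about interest rates?", chunks))
```

The copyright question arises at the first step: whatever a developer uploads as `chunks` sits on the platform's servers and can be surfaced, sometimes verbatim, in the bot's answers.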
Each method is weaponized—almost always against women—to degrade, harass, or cause shame, among other harms. Julie Inman Grant, Australia’s eSafety commissioner, says her office is starting to see more deepfakes reported to its image-based abuse complaints scheme, alongside other AI-generated content, such as “synthetic” child sexual abuse and children using apps to create sexualized videos of their classmates. “We know it’s a really underreported form of abuse,” Inman Grant says.
As the number of videos on deepfake websites has grown, content creators—such as streamers and adult models—have turned to DMCA takedown requests to get them removed. The DMCA allows people who own the intellectual property of certain content to request it be removed from the websites directly or from search results. More than 8 billion takedown requests, covering everything from gaming to music, have been made to Google.
“The DMCA historically has been an important way for victims of image-based sexual abuse to get their content removed from the internet,” says Carrie Goldberg, a victims’ rights attorney. Goldberg says newer criminal laws and civil law procedures make it easier to get some image-based sexual abuse removed, but deepfakes complicate the situation. “While platforms tend to have no empathy for victims of privacy violations, they do respect copyright laws,” Goldberg says.
WIRED’s analysis of deepfake websites, which covered 14 sites, shows that Google has received DMCA takedown requests about all of them in the past few years. Many of the websites host only deepfake content and often focus on celebrities. The websites themselves include DMCA contact forms where people can directly request to have content removed, although they do not publish any statistics, and it is unclear how effective they are at responding to complaints. One website says it contains videos of “actresses, YouTubers, streamers, TV personas, and other types of public figures and celebrities.” It hosts hundreds of videos with “Taylor Swift” in the video title.
The vast majority of DMCA takedown requests linked to deepfake websites listed in Google’s data relate to two of the biggest sites. Neither responded to written questions sent by WIRED. For most of the 14 websites, more than 80 percent of complaints led to Google removing content. Some copyright takedown requests sent by individuals convey the distress the videos can cause. “It is done to demean and bully me,” one request says. “I take this very seriously and I will do anything and everything to get it taken down,” another says.
“It has such a huge impact on someone’s life,” says Yvette van Bekkum, the CEO of Orange Warriors, a firm that helps people remove leaked, stolen, or nonconsensually shared images online, including through DMCA requests. Van Bekkum says the organization is seeing an increase in deepfake content online, and victims face hurdles to come forward and ask that their content is removed. “Imagine going through a hiring process and people Google your name, and they find that kind of explicit content,” van Bekkum says.
Google spokesperson Ned Adriance says its DMCA process allows “rights holders” to protect their work online and the company has separate tools for dealing with deepfakes—including a separate form and removal process. “We have policies for nonconsensual deepfake pornography, so people can have this type of content that includes their likeness removed from search results,” Adriance says. “And we’re actively developing additional safeguards to help people who are affected.” Google says when it receives a high volume of valid copyright removals about a website, it uses those as a signal the site may not be providing high-quality content. The company also says it has created a system to remove duplicates of nonconsensual deepfake porn once it has removed one copy of it, and that it has recently updated its search results to limit the visibility for deepfakes when people aren’t searching for them.
Sarah Silverman’s lawsuit against OpenAI will advance with some of her legal team’s claims dismissed. The comedian sued OpenAI and Meta in July 2023, claiming they trained their AI models on her books and other work without consent. Bloomberg reported on Tuesday that the unfair competition portion of the lawsuit will proceed. Judge Martínez-Olguín gave the plaintiffs until March 13 to amend the suit.
US District Judge Araceli Martínez-Olguín threw out portions of the complaint from Silverman’s legal team Monday, including negligence, unjust enrichment, DMCA violations and accusations of vicarious infringement. The case’s principal claim remains intact. It alleges OpenAI directly infringed on copyrighted material by training LLMs on millions of books without permission.
OpenAI’s motion to dismiss, filed in August, didn’t tackle the case’s core copyright claims. Although the suit will proceed, the judge suggested the federal Copyright Act may preempt the suit’s remaining claims. “As OpenAI does not raise preemption, the Court does not consider it,” Martínez-Olguín wrote.
The result of Silverman’s OpenAI hearing is similar to one in San Francisco in November, when Silverman’s claims against Meta were also pared down to the core copyright infringement claims. In that session, US District Judge Vince Chhabria described some of the plaintiffs’ dismissed claims as “nonsensical.”
If you create original music, it’s always helpful to understand your rights and revenue sources.
The guide below will provide a high-level view of music publishing, and spotlight the most relevant points for songwriters.
Music publishing is the business of songs.
It’s the work of promoting and earning money from composition copyrights. NOT from recordings, but rather the song that underlies any recording. In fact, the term “publishing” comes from the days before recorded music existed, back when the owners of songs published sheet music and songbooks.
Of course today a recording is often the thing that delivers a song to the marketplace, and to our ears. Streams, downloads, usage in social video, radio plays, placements on TV and film, game soundtracks, and even physical formats like vinyl and CD — they’re all means of transmitting a recording. Which, in turn, contains the song. So there is often a relationship between recorded tracks and the underlying songwriting when it comes to generating revenue.
But music publishing specifically deals with the song side of that revenue equation.
What is a “song?”
That might sound like a silly question, but the answer is crucial to the monetization of compositions:
From the perspective of music copyright, a song consists of the melody and lyrics. If it’s an instrumental composition, it’s just the melody.
Chord progression? Nope.
Groove, tempo, or key? Nope.
Synth pads and drum patterns? Definitely not.
Chord progressions are considered a basic building block of music, similar to colors for a painter, or materials for an architect. So you can’t copyright chord sequences, only the melody and lyrics that tie those chords together.
Groove, tempo, and key are foundational aspects of an arrangement, but a “song” can be arranged in numerous ways. So arrangements aren’t songs.
Particular instruments like synth or drums? Effects and EQ? Nope. Those are production and arrangement choices, which can exist separately from the song.
It’s sometimes difficult to illustrate this point because so much music in the 21st century has blurred the lines between production and songwriting. For some artists, those two processes are the same creative act.
However, you could take any new chart-topping song and play it with an acoustic guitar around a campfire, or arrange it for an a cappella group, and the foundational elements are what remain — melody and lyrics. That’s the song.
Or imagine there’s no such thing as recorded music, and your music only exists as sheet music. The notes on the score, the words on the page. That’s the song.
The two most important kinds of musical copyright
As discussed in our guide to music copyright, there are two main forms of intellectual property rights that apply to musicians:
The composition copyright — These are rights related to the song. They are owned and controlled by the songwriter(s), unless those writers have a relationship with a music publisher to help them monetize the song and collect royalties.
The sound recording (or “master recording”) copyright — These are rights related to the ownership of a specific recording. Recordings are usually owned by labels, or the artists and producers who made the track.
If you’re an artist who records and writes your own music, you own BOTH rights listed above, as long as you haven’t signed away any rights or ownership to a label or publisher. If you record cover songs, you own the master recording but not the composition.
And again, music publishing is the business of making money from the composition copyright.
The 3 main kinds of music publishing royalties
Music publishing is a complicated business, so I’ll try to keep it simple.
There are — generally — three kinds of music publishing royalties you can earn from your composition copyright:
1. Mechanical Royalties
The name comes from a time when songs were mechanically reproduced in a physical format. And the publisher was paid a royalty for each of the units pressed. That’s still true for vinyl, CD, cassette, etc.
But mechanicals are also generated in the digital age, via streaming. And you can collect these royalties through certain rights societies or a collective such as the MLC.
In the world of music publishing, a stream is deemed a digital reproduction of the song whenever a listener actively chooses to play that song. “Choose” is perhaps loosely defined, since playlist listening can generate mechanicals. But it’s worth pointing out that non-interactive streams through digital radio services such as Pandora do NOT generate mechanical royalties.
Instead, they generate…
2. Performance Royalties
These royalties are paid for “public performances” of your song, including:
Radio play
Broadcast in public places such as a restaurant
Streams
Live performances at venues
As you can probably tell, “public performances” is an imprecise term, since you’re not expected to perform the song live in order to generate royalties (although live performances ARE included).
Nor are all performances or broadcasts “public,” since some people are streaming music with headphones on. Instead of “performance,” I tend to think about broadcast or projection of a song into a listening space.
You can collect Performance Royalties from a P.R.O. (performance rights organization) such as ASCAP, BMI, PRS, GEMA, etc.
3. Sync Licensing Royalties
When your music is licensed for use in TV, film, ads, or games, two separate rights are in play: the recording (owned by the artist or label), and the song (owned by the songwriter or publisher).
The fact that these are separate rights may illustrate why there are so many cover songs on TV shows. Because licensing the most famous version of a tune can be expensive.
If a music supervisor wants a Rolling Stones song, but doesn’t have the budget for the composition AND the original recording, well maybe they have the budget for the composition and an indie artist’s rendition.
Now, if you wrote and recorded your own song entirely, you obviously lose the advantage of being… the Rolling Stones. You don’t have a composition in high demand. But you have a different kind of advantage: Speed!
Because you own the rights to both the track and the song, you’re able to quickly grant permissions and all negotiations are streamlined. That’s why so much independent music ends up in modern media, not just because it conserves the production’s budget, but because productions move fast and music supervisors don’t want to wait around and have 10 different meetings, calls, or email threads going just to clear one song.
One last interesting note about sync licensing and music publishing: when you DO get an original song placed, you’ll receive an upfront fee. But certain usages such as TV broadcast will ALSO generate the performance royalties discussed above.
What kind of publishing arrangement is best?
Like many aspects of the music business, there’s a stratification of services around songwriter rights. From basic royalty collection for unknown indies, all the way up to powerhouse publishers leveraging the catalogs of chart-topping legends.
The level of publishing assistance you need depends upon your songs, of course. How much revenue are they already generating? What potential is there for future earnings?
Basic publishing support for new songwriters
When you’re first starting out, you’ll want to build a publishing rights foundation. This would include:
Affiliating with a Performance Rights Organization
Registering your songs with that PRO
Affiliating with the MLC or a similar organization to collect mechanical royalties (in some cases this may be your existing PRO, though in the USA, organizations like ASCAP and BMI do NOT collect mechanicals)
Exploring sync licensing opportunities, either on your own or by including your music in a pre-cleared licensing catalog
Publishing administration for emerging songwriters
Once your music starts to gain traction, it may be time to professionalize your songwriter rights in a more focused way.
A publishing administration service is able to act on your behalf to collect your publishing royalties and possibly seek new opportunities for your songs. They do not claim ownership of your songs, but are empowered on a shorter-term basis to act as your publisher.
However, be aware that the administration service will take an additional cut of your publishing royalties beyond what the PROs and mechanical collection societies keep.
An actual publishing deal
When your songs show enough potential, doors will open to explore a publishing deal.
These deals can be structured in various ways, but the simple explanation is that you give up some (or all) of your rights in exchange for more revenue opportunity. When a music publisher has shared or total ownership of your song, they’re incentivized to work harder for that song’s success.
To be clear, just because you’ve reached the level where a publishing deal is realistic doesn’t mean you NEED a traditional publishing deal. All careers are built differently. There are good deals and bad deals. And if you’ve already found success for your songwriting without a publisher, it’s possible to sustain that momentum without outside help.
However, if a great publisher is interested in your songs, it’s possible they could add fuel to the fire — so whatever ownership you sacrifice might be worth it for greater revenue, more exposure, and possible advances.
Like with all things, just understand the tradeoffs before you ink any deals!
Conclusion
Well there it is: A bird’s-eye view of music publishing.
(Plus a few moments where we zoomed in close for detail.)
Hopefully this article gives you a greater sense of your rights and revenue opportunities when it comes to original songwriting.
If you want to learn more about music rights that extend beyond the composition, check out our practical guide to music copyright.
Your music is more than just lyrics, melodies, hooks, and waveforms.
It’s intellectual property!
Do you record your own tracks? Do you write your own songs? Then YOU control your music copyright. (Assuming you haven’t signed certain rights away to a label or publisher).
The flow of money through the music business is mostly based on copyright. So understanding the power of music copyright can significantly impact your career, helping you control, protect, and monetize your original work.
In this guide, we’ll delve into:
The essentials of music copyright
The different kinds of music copyright
How to register the copyright for your music
And how to leverage your rights to generate income as an artist, songwriter, or producer
Let’s begin with some copyright basics.
Music copyright is a collection of rights associated with the ownership of intellectual property.
Copyright grants exclusive privileges to the creator(s) and rights-holder(s) of a particular work for a limited time.
Your rights include:
The right to reproduce that work
The right to be credited as the work’s creator (the “right of attribution”)
The right to approve or deny “derivative works”
The right to distribute the work
The right to perform the work publicly
The right to license the work
Because you are, presumably, a self-releasing musician who writes and records original music, it’s simple. You OWN the songs you write. And you OWN the tracks you record.
Those assets are often referred to as the “composition” and the “sound recording.” And your ownership of a song or track empowers you to exploit your copyright. Meaning, you can put your copyright to work.
As the owner of your copyright, you can:
Earn money from the usage of your music
Have control over how your music is used*
Collect damages in the event that someone uses your music unlawfully
Transfer rights to another entity via sale, licensing, or assigning
* You CANNOT prevent another artist from performing your song live or distributing a cover version of your song, assuming you’ve already commercially released that song. However, you must be paid the associated publishing royalties for those usages.
What are the types of music copyright?
There are two primary types of music copyright:
Composition Copyright — Related to the music and lyrics that underlie any particular recording or performance of a song. The business of making money from composition copyrights is called Music Publishing.
Sound Recording Copyright — Related to a specific recorded version of a composition. The business of making money from sound recording copyrights is often thought of as the domain of record labels, though the rise of independent music over the last 30 years has altered that assumption to some degree.
Composition copyrights are typically owned by songwriters and publishers. If you write original music and have never signed away those rights to a publisher or other entity, you own your songs!
Sound recording copyrights are usually owned by artists or labels. If you self-produced your music, or if you funded your own recording project and never signed away those rights to a label or other entity, you own your tracks!
What’s the difference between copyright, trademark, and patent?
Copyright, as discussed above, is a bundle of rights associated with the making and ownership of a sufficiently original creative work.
Trademark is a name, phrase, symbol, or logo closely associated with the provider of a good, service, or artwork. When it comes to music, your band name could be trademarked, whereas a song or recording has a copyright. In the USA, you can apply to obtain a registered trademark at https://www.uspto.gov/.
Patent is the protection of inventions and processes, as well as improvements to those processes. Patents are meant to forward the interests of technological innovation. If you came up with a new kind of musical instrument, for instance, you may be able to patent it.
When does your copyright come into effect?
Technically, you own your musical copyright the moment you capture the composition or recording in a fixed medium. This could be something as crude as a voice memo on your phone. Or typing your lyrics to a friend in an email. And in many countries, this is sufficient to fully establish your copyright. However, things are slightly different in the USA.
Copyright in the USA
Despite owning your copyright from the moment your music provably exists, it’s still advisable in the USA to register your copyright. This step helps you secure the most protection for your work.
You’ll want to register with the U.S. Copyright Office. Because registration is necessary before you can file a lawsuit against any entity who’s infringed upon your rights.
By registering your copyright early (preferably before your music is released publicly), you’ll have additional benefits in the event of intentional infringement, including statutory damages of up to $150,000 per work and attorney fees.
However, approval of a formal registration can take a while. So I don’t usually advise that people WAIT for approval before releasing music.
Does copyright protection extend internationally?
There is no such thing as comprehensive, global copyright protection. Each country has its own laws and practices.
But through international treaties such as the Berne Convention, partner nations can help to enforce one another’s copyright protections for citizens.
If creators outside the United States want full protection in the event of infringement that happens within the USA, they can register their copyright with the USCO.
How long does copyright protection last?
There are some exceptions to the rule, but in general, copyright lasts for the duration of the author’s life PLUS another 70 years. For songs or recordings with multiple creators, copyright protection expires 70 years after the death of the last-surviving author.
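The general rule above is simple enough to sketch as arithmetic. Here is a minimal illustration, assuming only the basic life-plus-70 case described in this section (real terms have exceptions, such as works made for hire, which this sketch ignores):

```python
def copyright_expiry_year(author_death_years):
    """Return the year in which copyright expires under the general
    life-plus-70 rule: 70 years after the death of the
    last-surviving author."""
    return max(author_death_years) + 70

# A song with two co-writers who died in 1990 and 2003:
# protection runs until 2003 + 70 = 2073.
```

So for a co-written song, only the last-surviving author’s death date matters; the earlier co-writer’s death has no effect on the term.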
The myth of “Poor Man’s Copyright”
The “poor man’s copyright” involves mailing a composition or recording to yourself via registered mail with a dated postmark. Then you leave the package sealed. If your copyright is infringed upon, you bring the unopened package before a judge.
Today this practice is essentially obsolete. While “poor man’s copyright” may provide some evidence that your work was created before the infringing work, the Supreme Court of the USA has ruled that you must register your copyright with the USCO before you can file an infringement lawsuit.
Why Register Your Copyright?
It’s been stated a few times above that you’ll receive additional protections from registering your copyright. But here’s a more detailed explanation of those benefits:
If you file your registration AFTER your music has been used unlawfully, you can’t bring a lawsuit against the infringer until the application is approved by the USCO. This can take between 3 and 9 months, assuming there are no other issues with your registration forms.
After approval you can file a lawsuit. And in the event that you win the case, you are only entitled to receive between $200 and $30,000 per work. In some instances, that won’t even cover your legal fees.
Contrast that with an early registration. You can file a lawsuit immediately in federal or small claims court, and no attorney is required. You also stand to receive up to $150,000 in damages for intentional copyright infringement. Plus attorney fees.
Go to the U.S. Copyright Office’s registration portal and submit the proper form from the options below:
The PA form — short for “performing arts” — can be used to register a composition (song and lyrics) or collection of songs.
The SR form — short for “sound recording” — can be used to register… a sound recording (the recorded track or album).
IMPORTANT: If the creators and owners are exactly the same for every song in a collection, you can use the SR form to register BOTH the sound recording(s) and the composition(s) at the same time.
How do you earn money from your copyright?
As the owner of your work, there are various avenues to generate income, including:
Performance Royalties
This is a form of music publishing royalty.
It’s owed to songwriters and publishers when their compositions are played on the radio, performed in public, and more.
To collect these royalties, you should affiliate yourself with your country’s performing rights organization (PRO), such as ASCAP, BMI, PRS, or GEMA.
Mechanical Royalties
This is another form of music publishing royalty, owed to songwriters and publishers when their compositions are streamed, downloaded, or mechanically reproduced in physical formats such as CD and vinyl.
Sound Recording Revenue
As the owner of your sound recordings, you control “master rights” and can get paid when your tracks are streamed, downloaded, synced in TV or film, and sold on CD or vinyl.
You can think of these royalties as the typical revenue sources of labels. Because, as a self-releasing artist, you are acting as your own label.
Neighbouring Rights Revenue
This is another type of royalty associated with the recording, NOT the composition. It’s called “neighbouring” rights because it neighbours the world of publishing (songwriter rights), but is tied instead to the usage of the track.
In many countries around the world, radio airplay generates revenue for the recording rights holder and artist. However, this is NOT the case in the USA, where only publishers earn money from terrestrial radio play.
But in the USA, as the owner of your sound recordings, or as the primary artist who performed on them, you CAN collect an additional kind of revenue similar to neighbouring rights when your tracks are played via digital radio, satellite radio, and other forms of non-interactive music streaming.
To collect neighbouring rights revenue, check out organizations such as SoundExchange (in the USA) or PPL (in the UK).
Sync Licensing Revenue
Explore licensing options for your compositions and sound recordings.
Cover Songs
If another artist wants to record one of your songs that you’ve already released, you CAN’T say no. It’s just one of those odd little caveats to copyright. But you ARE owed money… which we actually already mentioned above: Mechanical Royalties!
Samples
Unlike with cover songs, you DO have all the power when someone else wants to sample one of your recordings.
And you can say yes, no, or anything in between. You set the terms, and you should be compensated for both the track and the underlying composition being used.
Derivative Works
You also have the ability to permit or deny someone the right to incorporate your music into one of their new compositions. Similar to samples, you set the terms.
Conclusion
Copyright ownership is yours the moment you fix your original music in some form of permanent media. But now you also know there may be additional benefits to formal copyright registration, plus how to file that registration. You also know the various forms of music revenue you should be earning for the usage of your copyrighted material.
Armed with all of that, you should be better empowered to protect your work, manage your catalog, and profit from your music!
Hell hath no fury like a Bill Ackman scorned: For those just tuning in, let me catch you up on the Harvard/antisemitism/plagiarism scandal that just won’t end.
Back in December, three elite university presidents—including Harvard President Claudine Gay, University of Pennsylvania President Liz Magill, and MIT President Sally Kornbluth—were trotted before Congress to give testimonies related to their handling of antisemitic speech and pro-Palestine activism on campus. Rep. Elise Stefanik (R–N.Y.) raked them all over the coals, declaring their answers unsatisfactory and insensitive and full of legalese, and Magill soon resigned.
Harvard initially stood by Gay, but then a mostly conservative collection of journalists and activists—as well as some big donors, like hedge fund manager Bill Ackman—publicized her extensive track record of plagiarism. Gay resigned, but not before calling everyone racist. (She is a black woman, and she claims that that’s the real reason people tried to take her down.)
Now Business Insider has accused Ackman’s wife—Neri Oxman, an entrepreneur and former MIT professor—of plagiarism herself. Oxman, they say, “stole sentences and whole paragraphs from Wikipedia, other scholars and technical documents in her academic writing.” (As an aside: Oxman’s work is interesting. “Her team at the MIT Media Lab coaxed silkworms to build sculptures,” notes the article. Oxman “also made undulating structures out of natural materials like cellulose and chitin, the material found in shrimp cells.”)
Now, Ackman has basically sworn revenge: “There has been no due process,” wrote Ackman this morning on X. “Neri Oxman was given 90 minutes to respond to a 7,000-word plagiarism allegation before Business Insider published a piece saying she was a plagiarist.” For the record, it’s good to give sources sufficient time to respond, but that’s not quite a due process issue.
“This experience has inspired me to save all news organizations from the trouble of doing plagiarism reviews,” he declared, vowing to helpfully review the work of all Business Insider reporters and MIT faculty, after claiming that Insider‘s source is most likely inside MIT. (Side-by-side reviews for plagiarism are getting easier and faster to do in the era of artificial intelligence.)
Now Ackman’s allegiance to his wife is being alternately memed and criticized:
when he says “I love you,” but ackman said “would they accuse you of plagiarism, I would burn the institutions to the ground, and salt the earth where they once stood, let there be a thousand dark ages, and you, lordess of my castle laboratory, my dark queen for eternity”
This all started in early Oct. when Bill Ackman went ballistic and tried to ruin the lives of some 18 year old Harvard students, get them blacklisted over an Israel letter, and got upset university leaders didn’t help with this project, so he escalated — v pathetic news cycle.
Truly Shakespearean if Bill Ackman’s wife loses her academic career because her husband led a national charge against the very crime she committed many years ago.
On one hand, it’s fair to collectively groan Why do we have another goddamn Harvard-related news cycle? On the other, we’re in a weird moment for plagiarism and the related subject of intellectual property. If ChatGPT is the death knell for plenty of academic writing, maybe it’s replacing something that had already mostly withered and died.
The focus of the Harvard kerfuffle could have been the initial congressional testimony, and the speech double standards present on college campuses. Or it could’ve been the intellectual bankruptcy of DEI bureaucracy. Instead, it is becoming trench warfare over plagiarism, which seems like the dumbest possible way for this to all go.
Israel pummels Hezbollah: Israeli military spokesman Daniel Hagari says the Israel Defense Forces (IDF) have struck Iran-backed Hezbollah in Lebanon in retaliatory fire, killing at least seven fighters. The IDF claims that Hezbollah struck an Israeli military base on Saturday, most likely due to Israel’s killing of a senior Hamas leader inside Lebanon last week.
Though war has been raging between Israel and Hamas since October 7, when Hamas terrorists infiltrated Israel, killing 1,200 civilians—in some cases brutally raping and beheading the victims—many had hoped that other factions in the Middle East, particularly those backed by Iran, would not be drawn into the conflict. With the increased Israel-Hezbollah conflict, as well as Houthi activity snarling global shipping and provoking some U.S. military action, that’s not looking likely.
Scenes from New York:
Surfer politics, spotted in Rockaway.
(Liz Wolfe)
QUICK HITS
The Supreme Court will decide whether former President Donald Trump can be kept off ballots via the 14th Amendment, which includes a section barring officials who have “engaged in insurrection” from holding public office. Oral arguments will be held on February 8.
Congress returns this week and is supposed to pass some funding bills, as another shutdown deadline looms on January 19.
Please enjoy the absolute worst segment on the Claudine Gay scandal, involving the most Hilaria Baldwin–esque overpronunciation of the word Latino you could possibly imagine.
“Often, when an issue becomes polarized, you’ll see thermostatic effects in public opinion, as when Democrats became more liberal on immigration in response to Donald Trump’s histrionic attacks on immigrants,” writes Josh Barro on Very Serious. “But while liberal figures on campus like to talk about themselves as a vanguard in a fight against conservative know-nothings who would take down knowledge and expertise, there is no pro-college backlash among liberals that is apparent in the polls.”
Alaska Airlines Flight 1282 suffered a mid-air failure in which part of the fuselage blew off the plane. A few injuries were sustained, but all passengers survived following an emergency landing.
Tell me you don’t know what unrealized gains are without telling me you don’t know what unrealized gains are:
BREAKING: Billionaires and centimillionaires held $8.5 TRILLION in untaxed, unrealized capital gains in 2022.
Unrealized gains are the largest source of income for the ultra-rich—but they’re completely UNTAXED under our tax code.
BURBANK, CA—Announcing the Beauty And The Beast character was available for public use as of Jan. 1, 2024, Disney CEO Bob Iger confirmed Tuesday that the company was relinquishing the rights to LeFou decades before the film’s copyright expired. “Go ahead, put LeFou in whatever silly slasher films you like—we do not care for him, and we never have,” said Iger, who called upon DreamWorks, Warner Bros., or “whoever the fuck” to go ahead and use the character in whatever creative projects they like. “If you want to use LeFou, we won’t sue you. So go on. You have my word. Technically, the copyright isn’t until 2086, but we hate that little shit. Just promise you won’t try to make him look cool because he’s not cool—he fucking sucks.” At press time, Iger added that anyone who tried to touch Lumière would be fucking dead.
The copyright on Mickey Mouse expires today, meaning The Walt Disney Company no longer has the exclusive rights to the character. Does this mean you can put Mickey in your own cartoon? Not exactly.
Under current law, works released between 1924 and 1978 are copyrighted for 95 years. As a result, the thousands of works copyrighted in 1928 enter the public domain today, meaning anyone can use or reprint them without permission. That includes books like D. H. Lawrence’s Lady Chatterley’s Lover and films like Charlie Chaplin’s The Circus. But the most high-profile addition is Steamboat Willie, the animated short that marked the debuts of both Mickey and his longtime paramour, Minnie.
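The arithmetic behind today’s expirations can be sketched in a few lines. This is a minimal illustration assuming only the flat 95-year term for works published between 1924 and 1978 described above, with a work entering the public domain on January 1 after its term ends:

```python
def public_domain_year(publication_year):
    """For works published 1924-1978 (95-year copyright term),
    return the year whose January 1 puts the work in the
    public domain (the year after the term's final year)."""
    if not 1924 <= publication_year <= 1978:
        raise ValueError("the 95-year term covers works published 1924-1978")
    return publication_year + 95 + 1

# Steamboat Willie, published in 1928, is protected through the
# end of 2023 and enters the public domain on January 1, 2024.
```

The same arithmetic explains the rolling nature of Public Domain Day: each New Year’s Day, the works of one more publication year fall out of protection.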
The cartoon depicted Mickey Mouse working aboard a steamboat, making music, and vexing the boat’s captain, a large cat named Pete. The slapstick humor, anthropomorphized animals, and objects of later Disney works are present, although Mickey is much more mischievous—the antagonistic dynamic with a giant cat is more reminiscent of Tom & Jerry cartoons than the Mickey Mouse familiar to modern audiences.
The seven-minute film was revolutionary: It was the first cartoon to feature synchronized sound—rather than just a silent film with background music—and audiences loved it. Mickey Mouse spawned a franchise that over the following century would earn more than $80 billion and make Disney one of the most powerful media companies on the planet.
Losing out on its rodential cash cow would be a huge blow, and Disney jealously guarded its creation. When Steamboat Willie premiered in November 1928, U.S. law dictated that it would enter the public domain no later than 1984. But two different laws, one passed in 1976 and another in 1998, extended the maximum copyright term, each by twenty years. Each law passed after strenuous lobbying by Disney: The latter statute, the Copyright Term Extension Act, has been derisively referred to as the Mickey Mouse Protection Act.
Today’s expiration implies that Disney was either unable to secure another extension or unwilling to try. In recent years, Republican lawmakers have signaled their unwillingness to extend copyright law any further on Disney’s behalf. Sen. Josh Hawley (R–Mo.) even introduced the Copyright Clause Restoration Act of 2022, which would cap copyright terms at a maximum of 56 years—notably, the same term in effect when Walt Disney first released Steamboat Willie.
But this doesn’t mean that Mickey is completely free. The copyright that expires today only applies to Mickey Mouse as he first appeared: rat-like and mischievous, with pupil-less eyes and no gloves. All other interpretations, introduced later—including the magnanimous Mickey who greets visitors to Disney theme parks dressed in a bow tie and tails, with white gloves and human-like eyes and facial features—remain under lock and key.
“We will, of course, continue to protect our rights in the more modern versions of Mickey Mouse and other works that remain subject to copyright,” a Disney spokesperson told the Associated Press in a statement.
And while Mickey may lose copyright status, he will remain Disney’s exclusive trademark. According to Jennifer Jenkins, director of Duke University’s Center for the Study of the Public Domain, any new use of Mickey must ensure that it is unlikely to be mistaken for a Disney product. “There might be a risk of confusion if you use Mickey as a brand identifier on the kind of merchandise Disney sells,” Jenkins writes. “Consumers may also be confused if Mickey is used in an artistic work in a way that suggests it is a Disney production, for example by appearing as a logo at the beginning of an animation.”
On January 1, 2022, A.A. Milne’s Winnie-the-Pooh entered the public domain, bringing the characters with it. The following day, wireless company Mint Mobile released a commercial in which actor Ryan Reynolds reads a version of the story. That May, British director Rhys Frake-Waterfield released stills from his film Winnie-the-Pooh: Blood and Honey, a horror flick in which Pooh and his sidekick Piglet revert to a feral state and mow down coeds after their human companion Christopher Robin leaves for college.
Just as with Mickey, Frake-Waterfield could only use Milne’s characters as they were depicted in the original book: Pooh was first drawn in his iconic red shirt in 1932, meaning that version of Pooh is still under copyright protection. Characters introduced in later works, like the buoyant Tigger who debuted in 1928’s The House at Pooh Corner, also remained protected. (The House at Pooh Corner also falls into the public domain today, and Tigger is expected to be featured in Winnie-the-Pooh: Blood and Honey 2, premiering next month.)
What does all of this mean for Mickey Mouse? What does it matter if one particular version of a cartoon character enters the public domain?
Regardless of the artistic merit of a horror movie about Winnie-the-Pooh—and critics apparently found very little—the public domain is a boon for creative expression, allowing people to use established characters and works in new and inventive ways. Ironically, Steamboat Willie benefited significantly from the public domain. The cartoon made extensive use of the song “Turkey in the Straw,” a familiar tune with uncomfortable racist origins that dates back to the pre-Civil War era.
And “the Mickey character itself is based on such public domain fodder,” Jenkins writes. “His personality and antics drew from silent film stars such as Charlie Chaplin and Douglas Fairbanks,” as Walt Disney and animator Ub Iwerks acknowledged at the time. Even the cartoon’s title was a reference to the Buster Keaton film Steamboat Bill, Jr., released six months before Steamboat Willie. Since movie titles and personality traits are generally not copyrightable, all of this was fair game when Disney crafted Mickey Mouse.
Grand Theft Auto reveals are arguably among the biggest cultural events in all of gaming. It was no surprise, then, that hype for GTA VI blew through the roof as thousands of people patiently stared at a black screen, waiting for the official trailer to release. However, after someone leaked the trailer on Twitter, Rockstar made the decision to publish it early, which left livestreamers scrambling to Go Live as soon as possible to provide their reactions. Unfortunately, at least some of those reactions were hit with copyright strikes.
Grand Theft Auto 6 Comments: A Dramatic Reading
According to IGN, content creators reacting to the GTA VI video ran into some trouble. Streams across TikTok were muted, possibly because the trailer makes use of Tom Petty’s “Love Is a Long Road.” The song is copyrighted, after all, and most platforms have restrictions on copyrighted materials. Meanwhile, some streams on other platforms were taken down entirely. In one video, for instance, YouTuber TheProfessional details how his reaction video was hit with copyright strikes. Thankfully, after some time passed, most content was brought back.
It’s hard to specify how widespread the issue was given that it was temporary, but the strikes point to the chaotic flurry surrounding the trailer’s release. GTA VI has been in development for many years now, with copious leaks providing tons of information on the highly anticipated crime simulator. We’ve learned that the game will take place in Vice City and really bring the Florida energy, and will feature two protagonists in a Bonnie and Clyde kind of relationship. Kotaku readers also shared their many wants from the next Grand Theft Auto, and we’ve learned that it will skip PC when it launches sometime in 2025.
Big news this week from ClassIn, a leader in blended, hybrid, and remote learning solutions, which announced what it describes as a first-of-its-kind platform—bringing curriculum and content discovery, management, editing, and distribution into its planning and instructional platform. Since TeacherIn’s beta went live at the beginning of the year, it has gained over 110,000 users globally, and more than 25,000 courses have been created.
The platform’s new content discovery marketplace will also manage and distribute licenses and offer copyright protection for publishers by using in-house developed audio-visual encoders to prevent infringement. I had the chance to chat with Ted Mo Chen, Vice President of globalization at ClassIn, before the announcement about the particulars. Click below to listen and scroll down for more details about the service from the company along with a few other takeaways from the conversation:
Highlights from the conversation:
Collaboration and Efficiency: TeacherIn encourages educators to collaborate, share, and modify course materials, fostering a sense of community among teachers to improve content quality and customization.
The Death of the Textbook—This time it’s for real!: Ted discussed the evolving landscape of education, emphasizing the shift away from traditional textbooks in favor of more dynamic, multimedia, and interactive teaching materials.
AI in Education: While the platform is not AI-focused at launch, the company plans to incorporate artificial intelligence in the future to help teachers recommend pedagogical strategies and enhance the delivery of educational content.
Adapting to the Educational Ecosystem: The company’s platform is designed to cater to the specific needs of teachers and educators, aiming to address the limitations of generic video conferencing and note-taking platforms in the education sector.
More details from the release:
Built with collaborative curriculum and open publishing in mind, TeacherIn helps courseware creators collaborate to create high-quality materials by building upon each other’s curriculum in the cloud. While traditional document editors function on standalone files, courseware creators can now build an entire curriculum in ClassIn.
Over the past several years, educators and content providers have emphasized the benefits of digital curriculum over traditional instruction – citing flexibility, instruction personalization, better integration into LMS, the ability to measure curriculum usage, and cost savings. Yet educators have lacked a platform to discover and manage their digital curriculum effectively. None of the many tools and platforms available to educators allowed them to complete simple functions, such as tracking versions, collaborating on edits, and maintaining clear visibility into updates.
“ClassIn’s powerful platform manages so many elements of the teaching and learning process – from course planning to lesson planning to the delivery of engaging instruction to student assessment and class analytics, it made sense to add a platform for curriculum discovery and management,” said Sara Gu, Co-Founder and COO at ClassIn. “Now educators, publishers, and instructional designers have a platform to create and manage all their digital curriculum that integrates seamlessly with the rest of ClassIn’s comprehensive suite of capabilities.”
In an increasingly resource-constrained system, TeacherIn:
Provides a consolidated curriculum and content discovery platform for educators
Allows for easy course creation by district leaders and teachers
Makes managing digital curriculum seamless—from licenses to edits to pushing the most updated versions to teachers—with curriculum management that is cloud-based, collaborative, and easy for educators
Provides publishers with valuable usage analytics and makes it easy to manage access licenses – ensuring no copyright issues arise
Provides monetization opportunities for educators and content creators who make their materials available for discovery and purchase
Kevin is a forward-thinking media executive with more than 25 years of experience building brands and audiences online, in print, and face to face. He is an acclaimed writer, editor, and commentator covering the intersection of society and technology, especially education technology. You can reach Kevin at KevinHogan@eschoolnews.com
NEW YORK (AP) — John Grisham, Jodi Picoult and George R.R. Martin are among 17 authors suing OpenAI for “systematic theft on a mass scale,” the latest in a wave of legal action by writers concerned that artificial intelligence programs are using their copyrighted works without permission.
In papers filed Tuesday in federal court in New York, the authors alleged “flagrant and harmful infringements of plaintiffs’ registered copyrights” and called the ChatGPT program a “massive commercial enterprise” that is reliant upon “systematic theft on a mass scale.”
The suit was organized by the Authors Guild and also includes David Baldacci, Sylvia Day, Jonathan Franzen and Elin Hilderbrand among others.
“It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the U.S.,” Authors Guild CEO Mary Rasenberger said in a statement. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”
The lawsuit cites specific ChatGPT searches for each author, such as one for Martin alleging that the program generated “an infringing, unauthorized, and detailed outline for a prequel” to “A Game of Thrones” that was titled “A Dawn of Direwolves” and used “the same characters from Martin’s existing books in the series ‘A Song of Ice and Fire.’”
The press office for OpenAI did not immediately respond to requests for comment.
Earlier this month, a handful of authors that included Michael Chabon and David Henry Hwang sued OpenAI in San Francisco for “clear infringement of intellectual property.”
In August, OpenAI asked a federal judge in California to dismiss two similar lawsuits, one involving comedian Sarah Silverman and another from author Paul Tremblay. In a court filing, OpenAI said the claims “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”
Author objections to AI have helped lead Amazon.com, the country’s largest book retailer, to change its policies on e-books. The online giant is now asking writers who want to publish through its Kindle Direct Program to notify Amazon in advance that they are including AI-generated material. Amazon is also limiting authors to three new self-published books on Kindle Direct per day, an effort to restrict the proliferation of AI texts.
LONDON — Back in the spring, Britain was sounding pretty relaxed about the rise of AI. Then something changed.
The country’s artificial intelligence white paper — unveiled in March — dealt with the “existential risks” of the fledgling tech in just four words: high impact, low probability.
Less than six months later, Prime Minister Rishi Sunak seems newly troubled by runaway AI. He has announced an international AI Safety Summit, referred to “existential risk” in speeches, and set up an AI safety taskforce with big global aspirations.
Helping to drive this shift in focus is a chorus of AI Cassandras associated with a controversial ideology popular in Silicon Valley.
Known as “Effective Altruism,” the movement was conceived in the ancient colleges of Oxford University, bankrolled by the Silicon Valley elite, and is increasingly influential on the U.K.’s positioning on AI.
Not everyone’s convinced it’s the right approach, however, and there’s mounting concern Britain runs the risk of regulatory capture.
The race to ‘God-like AI’
Effective altruists claim that super-intelligent AI could one day destroy humanity, and advocate policy that’s focused on the distant future rather than the here-and-now. Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.
“The view is that the outcome of artificial super-intelligence will be binary,” says Émile P. Torres, philosopher and former EA, turned critic of the movement. “That if it’s not utopia, it’s annihilation.”
In the U.K., key government advisers sympathetic to the movement’s concerns, combined with Sunak’s close contact with leaders of the AI labs – which have longstanding ties to the movement – have helped push “existential risk” right up the U.K.’s policy agenda.
When ChatGPT-mania reached its zenith in April, tech investor Ian Hogarth penned a viral Financial Times article warning that the race to “God-like AI” “could usher in the obsolescence or destruction of the human race” – urging policymakers and AI developers to pump the brakes.
It echoed the influential “AI pause” letter calling for a moratorium on “giant AI experiments,” and, in combination with a later letter saying AI posed an extinction risk, helped fuel a frenzied media cycle that prompted Sunak to issue a statement claiming he was “looking very carefully” at this class of risks.
“These kinds of arguments around existential risk or the idea that AI would develop super-intelligence, that was very much on the fringes of credible discussion,” says Mhairi Aitken, an AI ethics researcher at the Alan Turing Institute. “That’s really dramatically shifted in the last six months.”
The EA community credited Hogarth’s FT article with telegraphing these ideas to a mainstream audience, and hailed his appointment as chair of the U.K.’s Foundation Model Taskforce as a significant moment.
Under Hogarth, who has previously invested in AI labs Anthropic, Faculty, Helsing, and AI safety firm Conjecture, the taskforce announced a new set of partners last week – a number of whom have ties to EA.
Three of the four partner organizations on the lineup are bankrolled by EA donors. The Centre for AI Safety is the organization behind the “AI extinction risk” letter (the “AI pause” letter was penned by another EA-linked organization, the Future of Life Institute). Its primary funding – to the tune of $5.2 million – comes from major EA donor organization, Open Philanthropy.
Another partner is Arc Evals, which “works on assessing whether cutting-edge AI systems could pose catastrophic risks to civilization.”
It’s a project of the Alignment Research Centre, an organization that has received $1.5 million from Open Philanthropy, $1.25 million from high-profile EA Sam Bankman-Fried’s FTX Foundation (which it promised to return after the implosion of his crypto empire), and $3.25 million from the Survival and Flourishing Fund, set up by Skype founder and prominent EA Jaan Tallinn. Arc Evals is advised by Open Philanthropy CEO Holden Karnofsky.
Finally, the Collective Intelligence Project, a body working on new governance models for transformative technology, began life with an FTX regrant, and a co-founder appealed to the EA community for funding and expertise this year.
Joining the taskforce as one of two researchers is Cambridge professor David Krueger, who has received a $1 million grant from Open Philanthropy to further his work to “reduce the risk of human extinction resulting from out-of-control AI systems.” He describes himself as “EA-adjacent.” One of the PhD students Krueger advises, Nitarshan Rajkumar, has been working with the British government’s Department for Science, Innovation and Technology (DSIT) as an AI policy adviser since April.
A range of national security figures, along with renowned computer scientist Yoshua Bengio, are also joining the taskforce as advisers.
Combined with its rebranding as a “Frontier AI Taskforce” which projects its gaze into the future of AI development, the announcements confirmed the ascendancy of existential risk on the U.K.’s AI agenda.
‘X-risk’
Hogarth told the FT that biosecurity risks – like AI systems designing novel viruses – and AI-powered cyber-attacks weigh heavily on his mind. The taskforce is intended to address these threats, and to help build safe and reliable “frontier” AI models.
“The focus of the Frontier AI Taskforce and the U.K.’s broader AI strategy extends to not only managing risk, but ensuring the technology’s benefits can be harnessed and its opportunities realized across society,” said a government spokesperson, who disputed the influence of EA on its AI policy.
But some researchers worry that the more prosaic threats posed by today’s AI models, like bias, data privacy, and copyright issues, have been downgraded. It’s “a really dangerous distraction from the discussions we need to be having around regulation of AI,” says Aitken. “It takes a lot of the focus away from the very real and ethical risks and harms that AI presents today.”
The EA movement’s links to Silicon Valley also prompt some to question its objectivity. The three most prominent AI labs, OpenAI, DeepMind and Anthropic, all boast EA connections – with traces of the movement variously imprinted on their ethos, ideology and wallets.
Tech mogul Elon Musk claims to be a fan of the closely related “longtermist” ideology, calling it a “close match” to his own. Musk recently hired Dan Hendrycks, director of Center for AI Safety, as an adviser to his new start-up, xAI, which is also doing its part to prevent the AI apocalypse.
To counter the threat, the EA movement is throwing its financial heft behind the field of AI safety. The head of Open Philanthropy, Holden Karnofsky, wrote a February blog post announcing a leave of absence to devote himself to the field, while an EA career advice center, 80,000 Hours, recommends “AI safety technical research” and “shaping future governance of AI” as the two top careers for EAs.
Trading in an insular jargon of “X-risk” (existential risks) and “p(doom)” (the probability of our impending annihilation), the AI-focused branch of effective altruism is fixated on issues like “alignment” – how closely AI models are attuned to humanity’s value systems – amid doom-laden warnings about “proliferation” – the unchecked propagation of dangerous AI.
Despite its popularity among a cohort of technologists, critics say the movement’s thinking lacks evidence and is alarmist. A vocal critic, former Googler Timnit Gebru, has denounced this “dangerous brand of AI safety,” noting that she’d seen the movement gain “alarming levels of influence” in Silicon Valley.
Meanwhile, the “strong intermingling” of EAs and companies building AI “has led…this branch of the community to be very subservient to the AI companies,” says Andrea Miotti, head of strategy and governance at AI safety firm Conjecture. He calls this a “real regulatory capture story.”
The pitch to industry
Citing the Center for AI Safety’s extinction risk letter, Hogarth called on AI specialists and safety researchers to join the taskforce’s efforts in June, noting that at “a pivotal moment, Rishi Sunak has stepped up and is playing a global leadership role.”
On stage at the Tony Blair Institute conference in July, Hogarth – perspiring in the midsummer heat but speaking with composed conviction – struck an optimistic note. “We want to build stuff that allows for the U.K. to really have the state capacity to, like, engineer the future here,” he said.
Although the taskforce was initially intended to build up sovereign AI capability, Hogarth’s arrival saw a new emphasis on AI safety. The U.K. government’s £100 million commitment is “the largest amount ever committed to this field by a nation state,” he tweeted.
The taskforce recruitment ad was shared on the Effective Altruism forum, and Hogarth’s appointment was announced in Effective Altruism UK’s July newsletter.
Hogarth is not the only one in government who appears to be sympathetic to the EA movement’s arguments. Matt Clifford, chair of the government R&D body ARIA, an adviser to the AI taskforce, and the AI sherpa for the safety summit, has urged EAs to jump aboard the government’s latest AI safety push.
“I would encourage any of you who care about AI safety to explore opportunities to join or be seconded into government, because there is just a huge gap of knowledge and context on both sides,” he said at the Effective Altruism Global conference in London in June.
“Most people engaged in policy are not familiar … with arguments that would be familiar to most people in this room about risk and safety,” he added, but cautioned that hyping apocalyptic risks was not typically an effective strategy when it came to dealing with policymakers.
Clifford said that ARIA would soon announce directors who will be in charge of grant-giving across different areas. “When you see them, you will see there is actually a pretty good overlap with some prominent EA cause areas,” he told the crowd.
A British government spokesperson said Clifford is “not part of the core Effective Altruism movement.”
Civil service ties
Influential civil servants also have EA ties. Supporting the work of the AI taskforce is Chiara Gerosa, who in addition to her government work is facilitating an introductory AI safety course “for a cohort of policy professionals” for BlueDot Impact, an organization funded by Effective Ventures, a philanthropic fund that supports EA causes.
The course “will get you up to speed on extreme risks from AI and governance approaches to mitigating these risks,” according to the website, which states alumni have gone on to work for the likes of OpenAI, GovAI, Anthropic, and DeepMind.
People close to the EA movement say that its disciples see the U.K.’s AI safety push as encouragement to get involved and help nudge policy along an EA trajectory.
EAs are “scrambling to be part of Rishi Sunak’s announced Foundation Model Taskforce and safety conference,” according to an AI safety researcher who asked not to be named as they didn’t want to risk jeopardizing EA connections.
“One said that while Rishi is not the ‘optimal’ candidate, at least he knows X-risk,” they said. “And that ‘we’ need political buy-in and policy.”
“The foundation model taskforce is really centring the voices of the private sector, of industry … and that in many cases overlaps with membership of the Effective Altruism movement,” says Aitken. “That to me, is very worrying … it should really be centring the voices of impacted communities, it should be centring the voices of civil society.”
Jack Stilgoe, policy co-lead of Responsible AI, a body funded by the U.K.’s R&D funding agency, is concerned about “the diversity of the taskforce.” “If the agenda of the taskforce somehow gets captured by a narrow range of interests, then that would be really, really bad,” he says, adding that the concept of alignment “offers a false solution to an imaginary problem.”
A spokesperson for Open Philanthropy, Michael Levine, disputed that the EA movement carried any water for AI firms. “Since before the current crop of AI labs existed, people inspired by effective altruism were calling out the threats of AI and the need for research and policies to reduce these risks; many of our grantees are now supporting strong regulation of AI over objections from industry players.”
From Oxford to Whitehall, via Silicon Valley
Birthed at Oxford University by rationalist utilitarian philosopher William MacAskill, EA began life as a technocratic preoccupation with how charitable donations could be optimized to wring out maximal benefit for causes like global poverty and animal welfare.
Over time, it fused with transhumanist and techno-utopian ideals popular in Silicon Valley, and a mutated version called “long-termism” that is fixated on ultra-long-term timeframes now dominates. MacAskill’s most recent book What We Owe the Future conceptualizes a million-year timeframe for humanity and advocates the colonization of space.
Oxford University remains an ideological hub for the movement, and has spawned a thriving network of think tanks and research institutes that lobby the government on long-term or existential risks, including the Centre for the Governance of AI (GovAI) and the Future of Humanity Institute at Oxford University.
Other EA-linked organizations include Cambridge University’s Centre for the Study of Existential Risk, which was co-founded by Tallinn and receives funding from his Survival and Flourishing Fund – which is also the primary funder of the Centre for Long Term Resilience, set up by former civil servants in 2020.
The think tanks tend to overlap with leading AI labs, both in terms of membership and policy positions. For example, the founder and former director of GovAI, Allan Dafoe, who remains chair of the advisory board, is also head of long-term AI strategy and governance at DeepMind.
“We are conscious that dual roles of this form warrant careful attention to conflicts of interest,” reads the GovAI website.
GovAI, OpenAI and Anthropic declined to offer comment for this piece. A Google DeepMind spokesperson said: “We are focused on advancing safe and responsible AI.”
The movement has been accruing political capital in the U.K. for some time, says Luke Kemp, a research affiliate at the Centre for the Study of Existential Risk who doesn’t identify as EA. “There’s definitely been a push to place people directly out of existential risk bodies into policymaking positions,” he says.
CLTR’s head of AI policy, Jess Whittlestone, is in the process of being seconded to DSIT on a one day a week basis to assist on AI policy leading up to the AI Safety Summit, according to a CLTR August update seen by POLITICO. In the interim, she is informally advising several policy teams across DSIT.
Meanwhile, Markus Anderljung, a former specialist adviser to the Cabinet Office, is now head of policy at GovAI.
Kemp says he has expressed reservations about existential risk organizations attempting to get staff members seconded to government. “We can’t be trusted as objective and fair regulators or scholars, if we have such deep connections to the bodies we’re trying to regulate,” he says.
“I share the concern about AI companies dominating regulatory discussions, and have been advocating for greater independent expert involvement in the summit to reduce risks of regulatory capture,” said CLTR’s Head of AI Policy, Dr Jess Whittlestone. “It is crucial for U.K. AI policy to be informed by diverse perspectives.”
Instead of the risks of existing foundation models like GPT-4, EA-linked groups and AI companies tend to talk up the “emergent” risks of frontier models — a forward-looking stance that nudges the regulatory horizon into the future.
This framing “is a way of suggesting that that’s why you need to have Big Tech in the room – because they are the ones developing these frontier models,” suggests Aitken.
At the frontier
Earlier in July, CLTR and GovAI collaborated on a paper about how to regulate so-called frontier models, alongside members of DeepMind, OpenAI, and Microsoft and academics. The paper explored the controversial idea of licensing the most powerful AI models, a proposal that’s been criticized for its potential to cement the dominance of leading AI firms.
CLTR presented the paper to No. 10 with the prime minister’s special advisers on AI and the director and deputy director of DSIT in attendance, according to the CLTR memo.
Such ideas appear to be resonating. In addition to announcing the “Frontier AI Taskforce”, the government said in September that the AI Summit would focus entirely on the regulation of “frontier AI.”
The British government disputes the idea that its AI policy is narrowly focused. “We have engaged extensively with stakeholders in creating our AI regulation white paper, and have received a broad and diverse range of views as part of the recently closed consultation process which we will respond to in due course,” said a spokesperson.
Spokespeople for CLTR and CSER said that both groups focus on risks across the spectrum, from near-term to long-term, while a CLTR spokesperson stressed that it’s an independent and non-partisan think tank.
Some say that it’s the external circumstances that have changed, rather than the effectiveness of the EA lobby. CSER professor Haydn Belfield, who identifies as an EA, says that existential risk think tanks have been petitioning the government for years – on issues like pandemic preparedness and nuclear risk in addition to AI.
Although the government appears more receptive to their overtures now, “I’m not sure we’ve gotten any better at it,” he says. “I just think the world’s gotten worse.”
Update: This story has been updated to clarify Luke Kemp’s job title.
Controlling the distribution of music — and thus making sure composers get paid for their labour and talent — has been a problem that dates back to the invention of the printing press.
In 1498, less than 50 years after Johannes Gutenberg revealed the printing press, a savvy entrepreneur named Ottaviano Petrucci received a patent from the Venetian Senate for publishing musical notation with one of these new-fangled machines, giving him a monopoly on sheet music. He controlled the copyright and publishing of all music. But then in 1516, Pope Leo X stripped away Petrucci’s power when it came to organ music and gave it all to Andrea Antico, someone who pleased the pontiff more.
This mess continued through the centuries. In England, Elizabeth I granted William Byrd and Thomas Tallis a patent on all music publishing, which not only covered all music created in the kingdom but also prohibited foreign vendors from peddling their music in England. The cherry on top was that Byrd and Tallis also owned the rights to the printing of blank music paper. In other words, if you were an English composer, you had to pay them before you wrote down a single note. Soon after, a French composer named Jean-Baptiste Lully managed to secure control over all operas performed in France and became one of the wealthiest people in the country.
It took a while for these royal-granted monopolies to be wiped out, leading to the Berne Convention of 1886, which set the first true international standards for who had the right to copy and distribute intellectual property with a focus on the rights of the creators and not the publishers. Those terms have been renegotiated a number of times in the last century-and-a-half. Meanwhile, technology marched on, adding new levels of complexity to protecting the rights of artists, especially in the digital age.
One area that’s blown up is allegations of copyright infringement by one musical artist against another. We’ve seen it with cases involving George Harrison and the Chiffons, Marvin Gaye and both Robin Thicke and Ed Sheeran, Chuck Berry and the Beach Boys, Sam Smith and Tom Petty, Vanilla Ice and both David Bowie and Queen, The Hollies and Radiohead, Spirit and Led Zeppelin, and dozens of others. These accusations of plagiarism — many completely unfounded, in my view — have sucked up an enormous amount of court time and money.
There’s a thriving industry of ambulance-chasing lawyers who “discover” that a newer song has certain sonic similarities to a song from the past. The composer of the older song is contacted and told that if they sign on, there could be a songwriting credit for them on the new song (meaning that they’ll get a stream of royalties) or at the very least some kind of out-of-court settlement. Dua Lipa is currently facing three such lawsuits, the latest being over an alleged unauthorized sample in her hit Levitating. It’s all very nutty, especially the current “dembow” case that seeks to upend the rhythmic foundations of music.
With so many competing interests, unclear statutes, differing interpretations between territories, gullible juries and advancing technology, protection of copyright is just as much a disaster as it was in the days of Petrucci and Antico.
Underpinning all this is a mathematical fact: There remain just 12 notes in the western scale and a finite number of pleasing ways to combine them. With 100,000 new songs being uploaded to streaming music services every day, unexpected and unintentional duplication is inevitable. And with AI-composed music quickly being adopted, the situation will get even worse.
Or will it? Probably, but there have been some interesting developments of late.
First, Judge Beryl Howell of the U.S. District Court for the District of Columbia ruled that any kind of art — including music — solely created by artificial intelligence cannot be subject to copyright. Why? Because “human authorship is an essential part of a valid copyright claim.” This is in line with some rules followed in Canada. Meanwhile, the people in charge of the Grammy Awards have new guidelines that say “only human creators” can win an award. “A work with no human authorship is not eligible in any category.” That may be, but they haven’t ruled out considering songs that feature a portion created by AI, so we’ll call that half a win for humans.
But Damien Riehl and Noah Rubin want to settle this once and for all. They’ve created an algorithm that can generate 300,000 eight-note melodies every second in order to create a database of 68 billion “songs.” Those melodies were then copyrighted and released online into the public domain, meaning that they’re usable by anyone. They claim that these files — which sit on a small hard drive — contain “every melody that’s ever existed and ever can exist…. No song is new. Noah and I have exhausted the data set. Noah and I have made all the music to be able to allow future songwriters to make all of their music.”
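The article doesn’t describe Riehl and Rubin’s actual code, but the brute-force idea behind their database is simple combinatorics, and a minimal sketch makes the scale of the claim concrete. Assuming melodies are drawn only from the 12 pitch classes of the chromatic scale, there are 12^8 (about 430 million) distinct eight-note sequences; the pair’s much larger 68-billion figure presumably reflects a wider pitch range or additional rhythmic variations than this toy version enumerates.

```python
from itertools import product, islice

PITCHES = range(12)  # the 12 chromatic pitch classes of the Western scale

def melodies(length=8):
    """Yield every possible melody of `length` notes over the 12 pitch classes,
    in lexicographic order. This is pure enumeration, not music generation."""
    return product(PITCHES, repeat=length)

# Total number of distinct 8-note melodies over 12 pitch classes:
total = 12 ** 8
print(total)  # 429981696

# Peek at the first three melodies without materializing the whole set:
first_three = list(islice(melodies(), 3))
print(first_three[0])  # (0, 0, 0, 0, 0, 0, 0, 0)
```

Even this reduced space is too large to audition by ear, which is why Riehl and Rubin wrote the melodies directly to disk as data rather than audio.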
Their point? That copyright law is completely broken and needs to be updated properly. Riehl outlined everything in a TEDx talk.
The Riehl/Rubin conjecture has yet to be tested in court, but it’s inevitable that it will be. I look forward to the outcome.