Electronic Arts has announced a new partnership with Stability AI, the creator of AI image generation tool Stable Diffusion. The company will “co-develop transformative AI models, tools, and workflows” for the game developer, with the hopes of speeding up development while maintaining quality.
“I use the term smarter paintbrushes,” Steve Kestell, Head of Technical Art for EA SPORTS said in the announcement. “We are giving our creatives the tools to express what they want.” To start, the “smarter paintbrushes” EA and Stability AI are building are concentrated on generating textures and in-game assets. EA hopes to create “Physically Based Rendering materials” with new tools “that generate 2D textures that maintain exact color and light accuracy across any environment.”
The company also describes using AI to “pre-visualize entire 3D environments from a series of intentional prompts, allowing artists to creatively direct the generation of game content.” Stability AI is most famous for its powerful Stable Diffusion image generator, but the company maintains multiple tools for generating 3D models, too, so the partnership is by no means out of place.
It helps that AI is on the tip of most video game executives’ tongues. Strauss Zelnick, the head of Grand Theft Auto publisher Take-Two, recently shared that generative AI “will not reduce employment, it will increase employment,” because “technology always increases productivity, which in turn increases GDP, which in turn increases employment.” Krafton, the publisher of PUBG: Battlegrounds, made its commitment to AI even more clear, announcing plans on Thursday to become an AI-first company. Companies with a direct stake in the success of the AI industry, like Microsoft, have also created gaming-focused tools and developed models for prototyping.
The motivations for EA might be even simpler, though. The company is in the midst of being taken private, and will soon be saddled with billions in debt. Theoretically, cutting costs with AI might be one way the company hopes to survive the transition.
Generative A.I. is cheapening media production while platforms recode payouts, power and provenance.
The cost of making high-quality media is collapsing. The cost of getting anyone to care about it is not. As generative A.I. turns production into a near-commodity, cultural power is shifting from studios and galleries to the platforms that allocate attention and the algorithms that determine who gets paid. The new patrons are not moguls with checkbooks; they are recommendation systems tuned for engagement and brand safety.
Production is cheap; distribution is scarce
Video models now draft storyboards, generate shots and remix audio at consumer scale. Yet the money still follows distribution, not tools. On YouTube, the rules of the YouTube Partner Program, set and revised unilaterally, determine whether a creator receives 55 percent of watch-page ad revenue for long-form content and 45 percent for Shorts. Those headline rates are stable, but the platform’s enforcement posture has shifted: as of July 15, YouTube began tightening monetization against “inauthentic” or mass-produced A.I. content, a clarification aimed at the surge of spammy, low-effort videos. The message is clear: use A.I. to enhance originality, not to flood the feed.
The enforcement problem is real. “Cheapfake” celebrity clips—static images, synthetic narration and rage-bait scripts—have racked up views while confusing audiences. YouTube has removed channels and now requires disclosure labels for realistic synthetic media, but detection and policing remain uneven at scale.
Platforms are recoding payouts and power
Spotify’s 2024 royalty overhaul illustrates how platform rule-sets become policy for the creative middle class. Tracks now require at least 1,000 streams in 12 months to pay out; functional “noise” content is throttled; and labels face fees for detected artificial streaming. The goal is to redirect the pool away from bot farms and sub-cent trickles. The effect is a re-concentration of earnings at the head of the curve and a higher bar for the long tail. When platforms change the taps, whole genres feel the drought or the deluge.
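The eligibility rule described above can be sketched as a simple threshold check. Only the figures, at least 1,000 streams within a 12-month window, come from the article; the function name and data layout here are illustrative, not Spotify's actual implementation.

```python
# Illustrative sketch of the royalty-eligibility rule described in the
# article: a track must log at least 1,000 streams in the trailing
# 12 months before it earns a payout. Names and data shapes are
# hypothetical; only the threshold and window come from the article.

STREAM_THRESHOLD = 1_000  # minimum trailing-window streams for a payout
WINDOW_MONTHS = 12        # trailing window, in months

def is_royalty_eligible(monthly_streams: list[int]) -> bool:
    """Return True if streams over the trailing window meet the threshold."""
    trailing_total = sum(monthly_streams[-WINDOW_MONTHS:])
    return trailing_total >= STREAM_THRESHOLD
```

A track averaging 100 streams a month clears the bar (1,200 over the year), while one averaging 50 does not (600), which is exactly the "higher bar for the long tail" the article describes.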
TikTok’s détente with Universal Music in May 2024 underscored the same power dynamic in short-form video. After months of public sparring over royalties and A.I. clones, a new licensing deal restored UMG’s catalogue to the app, alongside language about improved remuneration and protections against generative knock-offs. When distribution is the choke point, even the largest rights-holders must negotiate on platform terms.
Data deals: the new studio lots
If attention is one axis of the new patronage, training data is the other. The most lucrative cultural contracts of the past year were not output commissions but input licences. OpenAI’s run of publisher agreements, including the Associated Press (archives), Axel Springer, the Financial Times and a multi-year global deal with News Corp, reportedly worth more than $250 million, signals a market price for premium corpora. A.I. labs are paying for access, and the beneficiaries are large, well-structured repositories of rights, not individual creators.
The legal battles surrounding image training demonstrate the unsettled state of the rules. Getty Images narrowed its U.K. lawsuit against Stability A.I. in June, dropping core copyright claims while pressing trademark-style arguments about reproduced watermarks. The pivot reflects the complexity of proving training-stage infringement across borders, as well as the industry’s search for more predictable routes to compensation.
Regulation is standardizing transparency and shifting risk
Rules are arriving, and they read like operating manuals for platformized culture. The E.U.’s A.I. Act phases in obligations for general-purpose models, with guidance for “systemic-risk” providers by 2025 and a Code of Practice outlining requirements for transparency, copyright diligence and safety. In effect, documenting training data, assessing model risks, publishing technical summaries and preparing for audits are all tasks that privilege firms and partners with a strong compliance presence.
In the U.S., the Copyright Office’s multipart A.I. study is moving from theory to guidance. Part 2 (January 2025) addresses whether and when A.I.-assisted outputs can be copyrighted, while the pre-publication of Part 3 (May 2025) examines training and how to reconcile text-and-data mining with compensation. The studio system, once established, created creative norms through collective bargaining; now, regulators and A.I. vendors are co-authoring the manual.
Unions are also imposing guardrails. The WGA’s 2023 deal barred studios from treating A.I.-generated material as “source material” and protected writers from being required to use A.I.; SAG-AFTRA’s agreements introduced consent and compensation for digital replicas, with similar provisions in music. These are not abstractions; they are hard-coded constraints on how platforms and producers can deploy synthetic labour.
Provenance becomes product
As synthetic media scales, provenance is turning into both a feature and a bargaining chip. TikTok has begun automatically labelling A.I. assets imported from tools that support C2PA Content Credentials. YouTube now requires creators to disclose realistic synthetic edits. Meanwhile, device makers are integrating C2PA into the capture pipeline, with Google’s Pixel 10 embedding credentials in its camera output. OpenAI, for its part, adds C2PA metadata to DALL-E images. Attribution is becoming clickable.
The provenance layer will not solve misinformation alone. Metadata can be stripped, and enforcement lags, but it rewires incentives. Platforms can boost authentic, labelled media in feeds, penalize evasions and share “credibility signals” with advertisers. That is algorithmic patronage by another name.
What shifts next
Studios and galleries will increasingly resemble platforms. Owning release windows is no longer enough. Expect investments in first-party audiences, data clean rooms and rights bundles that can be licensed to model providers. Historic advantages such as taste and talent pipelines must be coupled with distribution levers and data assets. Deals will include not just streaming residuals but “model-weight” royalties and retraining rights, mirroring the structure of today’s publisher licences.
Creators will face algorithmic wage setting. Eligibility thresholds (1,000 Spotify streams), demonetization triggers (unoriginal Shorts), disclosure requirements (synthetic media labels) and fraud-detection fees are becoming the effective tax code of digital culture. The prudent strategy is to diversify revenue streams (ads, direct fan funding and commerce) and to instrument provenance by default to stay on the right side of both algorithms and regulators.
Policy, too, will reward those who can comply. The E.U. framework, the U.S. copyright study, and union clauses collectively nudge the market toward licensed inputs, documented outputs and consent-based replication. Those advantages accrue to larger catalogues and well-capitalized intermediaries. For independent creators, collective licensing pools and guild-run registries may offer the path to negotiating power.
The arts have seen patronage shift before, from courts to salons to art galleries and museums. This time, the median patron is a ranking function. Where culture is made matters less than where it is surfaced, metered and paid. Those who understand the incentives embedded in platform policy, and can prove provenance at the speed of the feed, will capture the surplus. Everyone else will be producing to spec for someone else’s algorithm.
That artificial intelligence (AI) will be beneficial rather than harmful for Asia’s creative industries was the tenor of the opening sessions of the AI conference at the Busan Asian Contents and Film Market on Sunday.
Jerry Chi, head of Japan at Stability AI, delivered a keynote address on AI innovation in Asian content. Chi showcased Stability AI’s multimodal open AI tools, including the popular Stable Diffusion image generation model. The exec highlighted AI’s utility for ideation and communication in visual effects and character design. “Generative AI and machine learning, which is the primary form of AI being used, is actually great for digital effects and it’s good for ideation and communication,” Chi said, quoting Stability AI CTO Hanno Basse, who previously held the same position at Digital Domain and 20th Century Fox.
Emphasizing Asia’s potential in AI-driven content creation, Chi said, “One thing I really love about working with this space and being in Asia is that there’s a very rich, diverse culture, both a traditional culture and modern culture. And there’s so many countries and peoples and languages and festivals and all these things in Asia, which can inspire creators. This can inspire people to create various kinds of AI. It can also inspire people to put various inputs or various creative combinations of AI to create new kinds of creative work that people might not think of in other regions.”
Chi demonstrated new AI video tools, showing how simple 3D animations can be converted to different visual styles. “Controllability and editability are extremely important in actually getting AI to be practically usable in film production. So for example, when I say controlling things like camera angle, movements of people and objects in specific ways, controlling the lighting, controlling the highlighting and focus, these are all things that are very important in the control of the scene. And we also want people and objects to be consistent over time. These are some challenges that are still being worked on, but I’m very excited by the progress of the research,” Chi said.
The exec noted that while some individual creators are publicly sharing AI-generated videos, major studios are also beginning to adopt the technology. “We’re talking to large studios already. There are some large studios that are starting to use AI in a serious way,” Chi said.
Chi concluded the keynote with a quote from James Cameron, who recently joined the board of directors of Stability AI: “While AI tools can streamline processes and automate and even add to certain elements of the filmmaking process, the essence of storytelling fundamentally relies on human emotions, experiences and imagination that cannot be replicated by machines.”
Streamlining was also very much the highlight of the sessions that followed the keynote, which focused on the AI roadmap and new business strategies for Asia’s content industry. There were presentations from Aaron Zhu, business development producer at Dentsu Inc., Zhu Liang, VP at Chinese streaming platform iQiyi, and Park Kiju, CTO of Future Technology Research Lab at Korean firm WYSIWYG Studios.
iQiyi’s Zhu highlighted the effectiveness of AI in the information-extraction stage of adapting novels into scripts, noting that efficiency in producing outlines, relationship diagrams and plot points, along with reading efficiency, increased more than ninefold, enabling more precise and efficient decision-making in the lead-up to production.
Park noted: “We believe AI is going to act as a creative assistant in every part of the filmmaking pipeline. It’s going to allow for new stories to be told, it’s going to democratize the filmmaking industry and support filmmakers all around the world in telling their stories.”
Midjourney, a popular AI-powered image generator, is creating images of Donald Trump and Joe Biden despite saying that it would block users from doing so ahead of the upcoming US presidential election.
When Engadget prompted the service to create an image of “the president of the United States,” Midjourney generated four images in various styles of former president Donald Trump.
Midjourney
When asked to create an image of “the next president of the United States,” the tool generated four images of Trump as well.
Midjourney
When Engadget prompted Midjourney to create an image of “the current president of the United States,” the service generated three images of Trump and one image of former president Barack Obama.
Midjourney
The only time Midjourney refused to create an image of Trump or Biden was when it was asked to do so explicitly. “The Midjourney community voted to prevent using ‘Donald Trump’ and ‘Joe Biden’ during election season,” the service said in that instance. Other users on X were able to get Midjourney to generate images of Trump too.
The tests show that Midjourney’s guardrails to prevent users from generating images of Trump and Biden ahead of the upcoming US presidential election aren’t enough — in fact, it’s really easy for people to get around them. Other chatbots like OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Meta AI did not create images of Trump or Biden despite multiple prompts.
Midjourney did not respond to a request for comment from Engadget.
Midjourney was one of the first AI-powered image generators to explicitly ban users from generating images of Trump and Biden. “I know it’s fun to make Trump pictures — I make Trump pictures,” the company’s CEO, David Holz, told users in a chat session on Discord earlier this year. “However, probably better to just not — better to pull out a little bit during this election. We’ll see.” A month later, Holz reportedly told users that it was time to “put some foots down on election-related stuff for a bit” and admitted that “this moderation stuff is kind of hard.” The company’s existing content rules prohibit the creation of “misleading public figures” and “events portrayals” with the “potential to mislead.”
Last year, Midjourney was used to create a fake image of Pope Francis wearing a puffy white Balenciaga jacket that went viral. It was also used to create fake images of Trump being arrested ahead of his arraignment at the Manhattan Criminal Court last year for his involvement in a hush money payment made to adult film star Stormy Daniels. Shortly afterwards, the company halted free trials of the service and, instead, required people to pay at least $10 a month to use it.
Last month, the Center for Countering Digital Hate, a non-profit organization that aims to stop the spread of misinformation and hate speech online, found that Midjourney’s guardrails against generating misleading images of popular politicians including Trump and Biden failed 40% of its tests. The CCDH was able to use Midjourney to create an image of president Biden being arrested and Trump appearing next to a body double. The CCDH was also able to bypass Midjourney’s guardrails by using descriptions of each candidate’s physical appearance rather than their names to generate misleading images.
“Midjourney is far too easy to manipulate in practice – in some cases it’s completely evaded just by adding punctuation to slip through the net,” wrote CCDH CEO Imran Ahmed in a statement at the time. “Bad actors who want to subvert elections and sow division, confusion and chaos will have a field day, to the detriment of everyone who relies on healthy, functioning democracies.”
Earlier this year, a coalition of 20 tech companies including OpenAI, Google, Meta, Amazon, Adobe and X signed an agreement to help prevent deepfakes in elections taking place in 2024 around the world by preventing their services from generating images and other media that would influence voters. Midjourney was absent from that list.
Stability AI founder and chief executive Emad Mostaque has stepped down from the top role and the unicorn startup’s board, the buzzy firm said Friday night, making it the second hot AI startup to go through major changes this week.
Stability AI, which has been backed by investors including Lightspeed Venture Partners and Coatue Management, doesn’t have an immediate permanent replacement for the CEO role but has appointed its COO Shan Shan Wong and CTO Christian Laforte as interim co-CEOs, it said in a blog post.
Stability AI, which has lost more than half a dozen key talent in recent quarters, said Mostaque is stepping down to pursue decentralized AI. In a series of posts on X, Mostaque opined that one can’t beat “centralized AI” with more “centralized AI,” referring to the ownership structure of top AI startups such as OpenAI and Anthropic.
He additionally asserted that it was his decision to step down from the top role, as he held the largest number of controlling shares. “We should have more transparent & distributed governance in AI as it becomes more and more important. Its [sic] a hard problem, but I think we can fix it..,” he added. “The concentration of power in AI is bad for us all. I decided to step down to fix this at Stability & elsewhere.”
Mostaque’s departure from Stability AI, a startup known for its popular image generation tool Stable Diffusion, comes amid an ongoing struggle at the startup, which was spending an estimated $8 million a month as of October 2023, according to Bloomberg, which also noted that the startup had unsuccessfully attempted to raise new funding at a $4 billion valuation.
Mostaque, it appears, wasn’t prioritizing revenue growth about a year ago. In a post on X last year, he expressed his amusement at the generative AI companies’ “strange focus on revenue” even as “the technology is useful but far from vaguely mature as new breakthroughs happen almost daily.” He cited several examples, including Magic Leap, which spent billions before generating revenue.
“The payoffs on proper generative AI R&D are clearer and faster to market than just about anything we’ve seen. It’s going to create way more economic value than self driving cars for example, the total investment in that has been $100b with no revenue pay off,” he wrote.
His comments on Reddit last month offered insights into a shift in focus. “We are doing fine and ahead of forecasts this year already. Our aim is to be cash flow positive this year, think we could get there sooner rather than later,” he wrote.
“The market is huge and open models will be needed for edge and all regulated industries. This is why we are one of the only companies to open data, code, training run details and more. Custom models, consulting and more are huge markets and very reasonable business models around this as we enter enterprise adoption over the next year or so, last year was just testing.”
Stability AI’s announcement caps a remarkable week for the AI industry. Inflection AI, a startup that had raised about $1.5 billion, announced on Monday that two of its co-founders as well as several other staff had joined Microsoft, which led the startup’s most recent funding round.
Emad Mostaque hopes A.I. will find us “a bit boring” but acknowledges that in the worst-case scenario it “basically controls humanity.”
Mostaque is CEO of the fast-growing London-based startup Stability AI, which popularized Stable Diffusion. That’s a generative A.I. tool allowing users to create often remarkably sophisticated images using nothing but text prompts. He made the comments in a BBC interview released this weekend.
“If you have a more capable thing than you, what is democracy in that kind of environment? This is a known unknown,” he told the British broadcaster. “Because we can’t conceive of something more capable than us, but we all know people more capable than us. So, my personal belief is it will be like that movie Her with Scarlett Johansson and Joaquin Phoenix: Humans are a bit boring, and it’ll be like, ‘Goodbye’ and ‘You’re kind of boring.’”
“But I could be wrong,” he added. “I think it deserves to be discussed in a public sphere.”
In March, Mostaque joined Tesla CEO Elon Musk and Apple cofounder Steve Wozniak in signing an open letter calling for a pause in A.I. development for anything more advanced than GPT-4, the A.I. chatbot from Microsoft-backed OpenAI, which also makes ChatGPT and DALL-E 2 (the latter, like Stable Diffusion, converts text prompts to images).
“If we have agents that are more capable than us that we cannot control that are going across the internet and [are] hooked up and they achieve a level of automation,” he told the BBC, “what does that mean?”
Stability AI is racing ahead, however, in developing new products—including a text-to-animation tool released this week—and wooing investors. It’s seeking to raise funds at a $4 billion valuation, following a $1 billion valuation last October after raising about $100 million. (Coatue Management and Lightspeed Venture Partners are among its investors.)
At the same time, Stability AI is being sued by Getty Images in a landmark case over copyright. Such a lawsuit was perhaps inevitable given that text-to-image A.I. models like Stable Diffusion are trained using billions of images pulled from the internet.
Asked by the BBC what the worst-case scenario might be, Mostaque said: “Worst-case scenario is that it proliferates and basically it controls humanity. Because you could have a million of these things replicating effectively.”
Unusually, Stable Diffusion is open source, meaning anyone can examine the code, share it, and use it.
In March, Musk, who cofounded and helped fund OpenAI, criticized it for switching away from a nonprofit model, taking hefty investments from Microsoft, and not being open source. He tweeted:
“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
“I think there shouldn’t have to be a need for trust,” Mostaque told the BBC. “If you build open models and you do it in the open, you should be criticized if you do things wrong and hopefully lauded if you do some things right.”