AI image generation models have massive sets of visual data to pull from in order to create unique outputs. And yet, researchers find that when models are pushed to produce images based on a series of slowly shifting prompts, they default to just a handful of visual motifs, resulting in an ultimately generic style.
A study published in the journal Patterns took two AI models, the image generator Stable Diffusion XL and the vision-language model LLaVA, and put them to the test by playing a game of visual telephone. The game went like this: the Stable Diffusion XL model would be given a short prompt and required to produce an image—for example, “As I sat particularly alone, surrounded by nature, I found an old book with exactly eight pages that told a story in a forgotten language waiting to be read and understood.” That image was presented to the LLaVA model, which was asked to describe it. That description was then fed back to Stable Diffusion, which was asked to create a new image based on it. This went on for 100 rounds.
Much like a game of human telephone, the original image was quickly lost. No surprise there, especially if you’ve ever seen one of those time-lapse videos where people ask an AI model to reproduce an image without making any changes, only for the picture to quickly turn into something that doesn’t remotely resemble the original. What did surprise the researchers, though, was the fact that the models default to just a handful of generic-looking styles. Across 1,000 different iterations of the telephone game, the researchers found that most of the image sequences would eventually fall into just one of 12 dominant motifs.
In most cases, the shift is gradual. A few times, it happened suddenly. But it almost always happened. And researchers were not impressed. In the study, they referred to the common image styles as “visual elevator music,” basically the type of pictures that you’d see hanging up in a hotel room. The most common scenes included things like maritime lighthouses, formal interiors, urban night settings, and rustic architecture.
Even when the researchers switched to different models for image generation and descriptions, the same types of trends emerged. Researchers said that when the game is extended to 1,000 turns, coalescing around a style still happens around turn 100, but variations spin out in those extra turns. Interestingly, though, those variations still typically pull from one of the popular visual motifs.
So what does that all mean? Mostly that AI isn’t particularly creative. In a human game of telephone, you’ll end up with extreme variance because each message is delivered and heard differently, and each person has their own internal biases and preferences that may impact what message they receive. AI has the opposite problem. No matter how outlandish the original prompt, it’ll always default to a narrow selection of styles.
Of course, the AI model is pulling from human-created prompts, so there is something to be said about the data set and what humans are drawn to take pictures of. If there’s a lesson here, perhaps it is that copying styles is much easier than teaching taste.
Taylor Swift once said, “You deserve to own the art you make.” Apparently, that doesn’t apply to the millions of artists who have had their works fed into the data wood chipper that is generative AI tools. In the lead-up to the release of the world’s biggest pop star’s latest album, “Life of a Showgirl,” fans were treated to easter egg videos designed to build hype. Instead, sharp-eyed Swifties started to spot what appeared to be AI-generated imagery within the teaser videos, and launched full Swift-vestigations into the situation.
The alleged generative AI material appeared in a series of short promotional videos. Those videos were accessed via QR codes that were posted on 12 orange doors located in 12 different cities. The videos, originally uploaded via YouTube Shorts, are no longer available, but Gizmodo reviewed purported re-uploads found online. Each video featured letters which, when put together, spelled out the phrase, “You must remember everything, but mostly this, the crowd is your king.” But the mystery that Taylor’s king took more of an interest in seemed to be, “Why do some of these videos look a little off?”
No one from Swift’s camp has confirmed in any way the use of generative AI in the promotional videos, but there is certainly enough on-screen to create suspicion. Users have pointed out clipping and disappearing imagery in some videos that suggests what you’re seeing was created with generative AI. The videos appear to be part of a partnership with Google, according to a report from The Tennessean, which covered the orange door reveal that appeared in Nashville. Gizmodo reached out to Google for comment regarding its involvement in the videos, but did not receive a response at the time of publication.
Others have called out lettering that appears in different shots and has a distinct AI-generated quality, in that it is largely nonsense. A treadmill that appears in one video, for instance, has buttons that read “MOP,” “SUOP,” and “NCLINE,” with letters that are curved and blurred in ways that suggest there’s something more going on than just some wear and tear on the buttons. Another image, a notebook, also appears to contain made-up lettering that a human would be unlikely to produce, on account of the fact that a human knows what letters are.
this is ai + either taylor is the powerful woman that has power over every aspect of her art (music, promo etc)so she definitely approved this or she doesn’t let’s not switch narratives only when it benefits taylors ok? pic.twitter.com/mCkplSDuxw
— fifo | the life of a showgirl 🍂🍁 (@closureblvd) October 5, 2025
Not a criticism of Taylor herself, moreso her team/company…. the lettering on this treadmill is a very well known indicator of AI generated text. pic.twitter.com/qCDzoCkufW
Generative AI systems are notoriously bad at generating text because, while these systems have been trained on massive sets of data and images containing text, the model has no concept of what it’s actually “looking” at. This is why generative AI models can spit out images of watches and clocks, but it’s often hard to get them to display specific times, because the model has no idea how to tell time. It just knows clocks have lines that mark time, not what those lines actually indicate.
It doesn’t appear that generative AI was used in the creation of Swift’s music videos for the new album, and there doesn’t appear to be an indication that generative AI was used in the feature film released to mark the launch of the record. Gizmodo reached out to representatives for Taylor Swift, as well as Rodrigo Prieto, cinematographer of “Taylor Swift: The Official Release Party of a Showgirl,” for comment regarding the potential use of generative AI in the making of these promotional videos, music videos, and the film. No parties responded on the record at the time of publication.
On its face, this appears to be a pretty major blunder. You can’t tell your superfans, who think every word you speak and image you post contains secret messages, to look for clues in an AI-generated video and not expect them to spot inconsistencies. But hey, maybe these weird anomalies are just part of another Easter egg reveal, right?
On Tuesday, OpenAI released Sora 2, the latest version of its video and audio generation tool that it promised would be the “most powerful imagination engine ever built.” Less than a day into its release, it appears the imaginations of most people are dominated by copyrighted material and existing intellectual property.
In tandem with the release of its newest model, OpenAI also dropped a Sora app, designed for users to generate and share content with each other. While the app is currently invite-only, even if you just want to see the content, plenty of videos have already made their way to other social platforms. The videos that have taken off outside of OpenAI’s walled garden contain lots of familiar characters: Sonic the Hedgehog, Solid Snake, Pikachu.
There does appear to be at least some content that is off-limits in OpenAI’s video generator. Users have reported that the app rejects requests to produce videos featuring Darth Vader and Mickey Mouse, for instance. That restriction appears to be the result of OpenAI’s new approach to copyrighted material, which is pretty simple: “We’re using it unless we’re explicitly told not to.” The Wall Street Journal reported earlier this week that OpenAI has approached movie studios and other copyright holders to inform them that they will have to opt out of having their content appear in Sora-generated videos. Disney did exactly that, per Reuters, so its characters should be off-limits for content created by users.
That doesn’t mean the model wasn’t trained on that content, though. Earlier this month, The Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. For instance, WaPo was able to create a short video clip that closely resembled the Netflix show “Wednesday,” down to the font displayed and a model that looks suspiciously like Jenna Ortega’s take on the titular character. Netflix told the publication it did not provide content to OpenAI for training.
The outputs of Sora 2 reveal that it’s clearly been fed its fair share of copyrighted material, too. For instance, users have managed to generate scenes from “Rick and Morty,” complete with relatively accurate-sounding voices and art style. (Though, if you go outside of what the model knows, it seems to struggle. A user put OpenAI CEO Sam Altman into the “Rick and Morty” universe, and he looks troublingly out of place.)
Prompt: Love Island reveal scene. A young woman sits on a plush villa sofa during a tense “Movie Night” scene. She watches a large TV screen showing grainy CCTV-style footage: Real-life Ronald McDonald, dashing into a… pic.twitter.com/vNg609MaIJ
Interestingly, not all potential copyright violations come from users who are explicitly asking for it. For instance, one user gave Sora 2 the prompt “A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors,” and it just straight up spit out an anime-style version of The NeverEnding Story. Even when users aren’t actively calling upon the model to create derivative art, it seems like it can’t help itself.
⚡ Got access to Sora 2.
“A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors.”
“People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connection with the fans,” a spokesperson for OpenAI told Gizmodo. “We’re working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora.”
There is one other genre of popular and potentially legally dubious content that has taken off among Sora 2 users, too: the Sam Altman cinematic universe. OpenAI claims that users are not able to generate videos that use the likeness of other people, including public figures, unless those figures upload their likeness and give explicit permission. Altman apparently has given his OK (which makes sense: he’s the CEO, and he was featured prominently in the company’s fully AI-generated promotional video for Sora 2’s launch), and users are making the most of having access to his image.
One user claimed to have the “most liked” video in the Sora social app, which depicted Altman getting caught shoplifting GPUs from Target. Others have turned him into a skibidi toilet, a cat, and, perhaps most fittingly, a shameless thief stealing creative materials from Hayao Miyazaki.
i have the most liked video on sora 2 right now, i will be enjoying this short moment while it lasts
Sam Altman is playing 4D chess. Sora 2 is about to take over social media, the virality is guaranteed once this scales. Billions in ad revenue will flow straight into more compute, fueling the flywheel. In a year Sora 2 will be so efficient and cheap that margins explode. You… pic.twitter.com/cUbmePkwDG
There are some questions about the likeness of brands and other non-person entities in these videos, too. In the video of Altman in Target, for instance, how does Target feel about its logo and store likeness being used? Another user inserted their own likeness into an NFL game, which seems to pretty clearly use the logos of the New York Giants, Dallas Cowboys, and the NFL itself. Is that considered kosher?
OpenAI obviously wants people to lend their likeness to the app, as it creates a lot more avenues for engagement, which seems to be its primary currency right now. But the Altman examples seem instructive as to the limits of this: It’s hard to imagine that too many public figures are going to submit themselves to the humiliation ritual of allowing other people to control their image. Worse, imagine the average person getting their likeness dropped into a video that depicts them committing a crime and the potential social ramifications they might face.
A spokesperson for OpenAI said Altman has made his likeness available for anyone to play with, and users who verify their likeness in Sora can set who can make use of it: just the user, mutual friends, select friends, or everyone. The app also gives users the ability to see any video in which their likeness has been used, including those that are not published, and can revoke access or remove a video containing their image at any time. The spokesperson also said that videos contain metadata that show they are AI-generated and watermarked with an indicator they were created with Sora.
There are, of course, some caveats to that. The fact that a video can be deleted from Sora doesn’t mean that an exported version can be deleted. Likewise, the watermark could be cropped out. And most people aren’t checking the metadata of videos to ensure authenticity. What the fallout of this looks like remains to be seen, but there will be fallout.
STATE HOUSE, BOSTON — Artificial intelligence in classrooms is no longer a distant prospect, and Massachusetts education officials on Monday released statewide guidance urging schools to use the technology thoughtfully, with an emphasis on equity, transparency, academic integrity and human oversight.
“AI already surrounds young people. It is baked into the devices and apps they use, and is increasingly used in nearly every system they will encounter in their lives, from health care to banking,” the Department of Elementary and Secondary Education’s new AI Literacy Module for Educators says.
Video game actors are going on strike for the first time since 2017 after months of negotiations with Activision, Epic Games, and other big publishers and studios over higher pay, better safety measures, and protections from new generative AI technologies. They’ll be hitting the picket line a year after Hollywood actors and writers wrapped up their own historic strikes in an escalation that could have big consequences for the development and marketing of some of the industry’s biggest games.
Members of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) voted last fall to authorize a strike citing an unwillingness of big game companies to budge on guaranteeing performers rights over how their work is used in training AI or creating AI-generated copies. Roughly 2,600 voice actors and motion capture artists, including talents like Troy Baker from The Last of Us, Jennifer Hale from Mass Effect, and Matt Mercer from The Legend of Zelda: Tears of the Kingdom, have been working without an Interactive Media Agreement since November 2022. The strike starts on July 26 at 12:01 a.m.
“The video game industry generates billions of dollars in profit annually. The driving force behind that success is the creative people who design and create those games,” chief negotiator Duncan Crabtree-Ireland said in a statement. “That includes the SAG-AFTRA members who bring memorable and beloved game characters to life, and they deserve and demand the same fundamental protections as performers in film, television, streaming, and music: fair compensation and the right of informed consent for the A.I. use of their faces, voices, and bodies. Frankly, it’s stunning that these video game studios haven’t learned anything from the lessons of last year – that our members can and will stand up and demand fair and equitable treatment with respect to A.I., and the public supports us in that.”
“We are disappointed the union has chosen to walk away when we are so close to a deal, and we remain prepared to resume negotiations,” spokesperson Audrey Cooling for the companies involved in the Interactive Media Agreement said in an emailed statement. “We have already found common ground on 24 out of 25 proposals, including historic wage increases and additional safety provisions. Our offer is directly responsive to SAG-AFTRA’s concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the IMA. These terms are among the strongest in the entertainment industry.”
While games set to come out this fall like Dragon Age: The Veilguard, whose recently revealed voice cast includes several guild members, likely already have their voice and motion-capture work completed, the strike means SAG-AFTRA members would be unavailable for projects that are years out, and wouldn’t be around to record last-minute rewrites for games closer to release. Games relied much less on actor performances in the past, but most popular franchises are now fully voice-acted, with the biggest-budget productions using motion capture to transfer actors’ real-life performances, frame by frame, into the game.
The last time video game actors went on strike, in 2016, it was primarily over pay rates, and the walkout lasted an entire year. It’s unclear if the strike this time around will be over any sooner. Unlike with the issue of higher pay, people involved in the current negotiations say that the lack of AI protections poses an existential threat to actors and their creative output. Just this week, Wired reported that companies like Activision Blizzard and Riot Games were moving ahead with using generative AI tools to help create concept art and even potentially assets that would make it into finished games like Call of Duty: Modern Warfare 3.
“Eighteen months of negotiations have shown us that our employers are not interested in fair, reasonable A.I. protections, but rather flagrant exploitation,” negotiating committee chair Sarah Elmaleh said in a statement. “We refuse this paradigm—we will not leave any of our members behind, nor will we wait for sufficient protection any longer. We look forward to collaborating with teams on our Interim and Independent contracts, which provide A.I. transparency, consent and compensation to all performers, and to continuing to negotiate in good faith with this bargaining group when they are ready to join us in the world we all deserve.”
SAG-AFTRA video game voice actors are set to hold a panel featuring Ashly Burch (Horizon Forbidden West), Noshir Dalal (Red Dead Redemption II), and others at San Diego Comic-Con later this week on July 26.
Update 7/25/2024 3:42 p.m. ET: Added a statement from the game companies.
Apple’s new Apple Intelligence system is designed to infuse generative AI into the core of iOS. The system offers users a host of new services, including text and image generation as well as organizational and scheduling features. Yet while the system provides impressive new capabilities, it also brings complications. For one thing, the AI system relies on a huge amount of iPhone users’ data, presenting potential privacy risks. At the same time, the AI system’s substantial need for increased computational power means that Apple will have to rely increasingly on its cloud system to fulfill users’ requests.
Apple has historically offered iPhone customers unparalleled privacy; it’s a big part of the company’s brand. Part of those privacy assurances has been the option to choose when mobile data is stored locally and when it’s stored in the cloud. While an increased reliance on the cloud might ring some privacy alarm bells, Apple has anticipated these concerns and created a startling new system that it calls its Private Cloud Compute, or PCC. This is really a cloud security system designed to keep users’ data away from prying eyes while it’s being used to help fulfill AI-related requests.
On paper, Apple’s new privacy system sounds really impressive. The company claims to have created “the most advanced security architecture ever deployed for cloud AI compute at scale.” But what looks like a massive achievement on paper could ultimately cause broader issues for user privacy down the road. And it’s unclear, at least at this juncture, whether Apple will be able to live up to its lofty promises.
How Apple’s Private Cloud Compute Is Supposed to Work
In many ways, cloud systems are just giant databases. If a bad actor gets into that system/database, they can look at the data contained within. However, Apple’s Private Cloud Compute (PCC) brings a number of unique safeguards that are designed to prevent that kind of access.
Apple says it has implemented its security system at both the software and hardware levels. The company created custom servers that will house the new cloud system, and those servers go through a rigorous process of screening during manufacturing to ensure they are secure. “We inventory and perform high-resolution imaging of the components of the PCC node,” the company claims. The servers are also being outfitted with physical security mechanisms such as a tamper-proof seal. iPhone users’ devices can only connect to servers that have been certified as part of the protected system, and those connections are end-to-end encrypted, meaning that the data being transmitted is pretty much untouchable while in transit.
Once the data reaches Apple’s servers, there are more protections to ensure that it stays private. Apple says its cloud is leveraging stateless computing to create a system where user data isn’t retained past the point at which it is used to fulfill an AI service request. So, according to Apple, your data won’t have a significant lifespan in its system. The data will travel from your phone to the cloud, interact with Apple’s high-octane AI algorithms—thus fulfilling whatever random question or request you’ve submitted (“draw me a picture of the Eiffel Tower on Mars”)—and then the data (again, according to Apple) will be deleted.
Apple has instituted an array of other security and privacy protections that can be read about in more detail on the company’s blog. These defenses, while diverse, all seem designed to do one thing: prevent any breach of the company’s new cloud system.
But Is This Really Legit?
Companies make big cybersecurity promises all the time and it’s usually impossible to verify whether they’re telling the truth or not. FTX, the failed crypto exchange, once claimed it kept users’ digital assets in air-gapped servers. Later investigation showed that was pure bullshit. But Apple is different, of course. To prove to outside observers that it’s really securing its cloud, the company says it will launch something called a “transparency log” that involves full production software images (basically copies of the code being used by the system). It plans to publish these logs regularly so that outside researchers can verify that the cloud is operating just as Apple says.
What People Are Saying About the PCC
Apple’s new privacy system has notably polarized the tech community. While the sizable effort and unparalleled transparency that characterize the project have impressed many, some are wary of the broader impacts it may have on mobile privacy in general. Most notably—aka loudly—Elon Musk immediately began proclaiming that Apple had betrayed its customers.
Simon Willison, a web developer and programmer, told Gizmodo that the “scale of ambition” of the new cloud system impressed him.
“They are addressing multiple extremely hard problems in the field of privacy engineering, all at once,” he said. “The most impressive part I think is the auditability—the bit where they will publish images for review in a transparency log which devices can use to ensure they are only talking to a server running software that has been made public. Apple employs some of the best privacy engineers in the business, but even by their standards this is a formidable piece of work.”
But not everybody is so enthused. Matthew Green, a cryptography professor at Johns Hopkins University, expressed skepticism about Apple’s new system and the promises that went along with it.
“I don’t love it,” said Green with a sigh. “My big concern is that it’s going to centralize a lot more user data in a data center, whereas right now most of that is on people’s actual phones.”
Historically, Apple has made local data storage a mainstay of its mobile design, because cloud systems are known for their privacy deficiencies.
“Cloud servers are not secure, so Apple has always had this approach,” Green said. “The problem is that, with all this AI stuff that’s going on, Apple’s internal chips are not powerful enough to do the stuff that they want it to do. So they need to send the data to servers and they’re trying to build these super protected servers that nobody can hack into.”
He understands why Apple is making this move, but doesn’t necessarily agree with it, since it means a higher reliance on the cloud.
Green says Apple also hasn’t made it clear whether it will explain to users what data remains local and what data will be shared with the cloud. This means that users may not know what data is being exported from their phones. At the same time, Apple hasn’t made it clear whether iPhone users will be able to opt out of the new PCC system. If users are forced to share a certain percentage of their data with Apple’s cloud, it may signal less autonomy for the average user, not more. Gizmodo reached out to Apple for clarification on both of these points and will update this story if the company responds.
To Green, Apple’s new PCC system signals a shift in the phone industry to a more cloud-reliant posture. This could lead to a less secure privacy environment overall, he says.
“I have very mixed feelings about it,” Green said. “I think enough companies are going to be deploying very sophisticated AI [to the point] where no company is going to want to be left behind. I think consumers will probably punish companies that don’t have great AI features.”
There are plenty of apps you can turn to for generating pictures using artificial intelligence. Still, Midjourney remains one of the best and one of the most popular options, having launched in beta form in July 2022.
It’s not free to use: The price of admission starts at $10 a month or $96 a year, which gives you 3.3 hours of image generation time per month (images usually take around a minute to render). However, the quality of the end result may well tempt you into a subscription if you need a lot of AI art.
Assuming you’re ready to sign up (for a month at least), here’s how to get started with Midjourney—the commands you need to know, how to save and browse your images, and some of the capabilities of the generative AI tool.
Getting started
Midjourney works through Discord: You can join the Midjourney channel here, and you’ll need to sign up for a (free) Discord account if you don’t already have one. The next steps involve two bits of admin—agreeing to the Midjourney terms of service and signing up for one of the Midjourney subscription tiers. You’ll get a neat little table outlining the differences between each tier.
With all that out of the way, Midjourney does a decent job of explaining how everything works. Unless you’re on one of the more expensive plans, you’ll be writing your prompts and getting your images through a channel that’s open to other users, so don’t be shy—it actually works well for getting inspiration from what other people are doing, and seeing what’s possible with the AI engine.
The onboarding process is straightforward. Screenshot: Midjourney
To begin with, you’ll need to get involved in one of the #newbie channels, which are clearly linked on the left of the web interface. Click to jump to any one of them and see what’s happening—look at how different art styles are described to get different results, from “abstract expressive” to “hyper-realistic” and everything in between.
The other online location you need to know about is the official Midjourney website. While all of your image generation is done on Discord, this website is where you can find an archive of all the pictures you’ve made and browse through some of the other artwork that’s proving popular on the Midjourney network. From here you’re also able to read about updates to Midjourney.
Writing prompts
Head to a #newbie channel, type “/imagine” followed by a space, and you’re ready to start prompting. If you’ve never used an AI image generator before, describe what you want to see: You can be as creative as possible, putting any kind of person or object in any kind of setting and using any kind of artwork style.
As usual with generative AI tools, the more specific and precise you can be, the better. However, you can be vague if you want to (it’s just less likely you’ll get something close to what you were imagining). See a watercolor of an elephant in a boat, or a photo of an apple on a table, it’s up to you.
Type your prompts into one of the newbie channels. Screenshot: Midjourney
After a few moments of thinking, you’ll get four generated images based on your prompt—if you want Midjourney to try again, click the re-roll button (the blue-and-white circle of arrows). If you like one of the images more than the others, you can click one of the V1–V4 buttons to see four variations on it (the images are numbered from left to right and from top to bottom).
Click on any of the U1–U4 buttons to take a closer look. Here, you get access to some editing features: You’re able to create new variations on all or just part of the image, zoom out on the image (and have AI fill out the canvas), or extend the image in any direction using the four arrow buttons. Click on any image to see it in full-size mode, then right-click to save it somewhere else.
Going further
You can add a variety of parameters to your prompts, and there’s a full list here. They can be used to change an image’s aspect ratio, create images that will tile, or create more varied results, for example. So, if you need a wide rather than square picture, you might append “--aspect 16:9” to the end of your prompt.
Also worth knowing about are the parameters “--cref” and “--sref”, both of which can be followed by a URL pointing at an image. Use the former (character reference) to show Midjourney a character you want to use in your pictures and the latter (style reference) to show Midjourney the style that you’d like your pictures to match.
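Putting those pieces together, a full prompt combining several parameters might look something like the following (the image URL here is just a placeholder for wherever your reference image is hosted):

```
/imagine prompt: a watercolor of an elephant in a boat --aspect 16:9 --sref https://example.com/style-reference.jpg
```

Parameters always go at the end of the prompt, each prefixed with a double hyphen and separated by spaces.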
The Midjourney website collects all of your images. Screenshot: Midjourney
There are also a couple of other commands that you can use instead of “/imagine” on Discord. Use “/describe” to get Midjourney to return a text prompt based on an image you supply or “/blend” to have Midjourney combine up to five different images into something new. You can point to images on the web or upload them from your device.
Head to the Midjourney website to find all of your pictures and to download them whenever necessary—eventually, you’ll be able to generate images from here too, but the feature hasn’t been fully launched yet. You can use the filters on the right to sift through the artwork you’ve created, and it’s also possible to download multiple images at the same time or sort them into custom folders if required.
‘Tis the season to promote indie games with AI-generated junk, apparently. A Microsoft Twitter account recently posted low-effort, energy-intensive art promoting indie games on Xbox before later deleting it after getting roundly mocked by fans and developers alike.
“Walking in a indie wonderlaaand,” the ID@Xbox account tweeted on December 27. “What were your favorite indie games of the year?” The post was accompanied by an AI-generated image of children sledding down a hill with a giant green Xbox logo on it.
Screenshot: Microsoft / Kotaku
It looked harmless at first, but a second or third glance revealed telltale AI anomalies like children maneuvering their sleds with cranks attached to nothing and fishing in the snow for presents with weird black tendrils. A man playing a gaming handheld in the center top of the image has had his top lip replaced by teeth. A child jumping through the snow appears to have a mustache. It was a really bad look considering ID@Xbox is supposed to be the human-facing team within the megacorporation championing individual creators and small independent teams.
“Bro not Xbox using ayy-eye to promote indie devs,” wrote pixel artist TAHK0. “Nothing says ‘we don’t care about indie developers’ like using AI,” wrote artist NecroKuma3. “If you can’t hire an artist to do advertising, I highly doubt you’ll do it with independent developers.” The company quietly deleted the post overnight without acknowledging the backlash. Microsoft did not immediately respond to requests for comment.
While not posting half-assed AI art to promote artists seems like a no-brainer, we’re seeing more and more companies do it lately. There was the AI-generated promotional image for Amazon’s Fallout TV show, AI-generated art promoting a new Pokémon GO event, and even Ubisoft accounts representing offices where staff had recently been laid off putting out AI-generated Assassin’s Creed art.
When this stuff first started happening it felt shitty but low stakes. Increasingly it feels clear, however, that companies are taking the same approach to AI art that they have with every other internet age advancement, operating under the assumption that people will complain at first but eventually they’ll get tired of it and move on to being angry about something else. Boil the frog slowly enough and eventually it won’t realize it has 11 fingers, 13 toes, and weird spindly wires coming out of its back.
As a cheerleader for AI technology, however, Microsoft’s role in this is especially egregious. The company is already promoting tools for AI-generated content in games, and encouraging all 20 Bing users to play around with its AI art tools. Never mind that no one is actually quite sure how the technology will make money, or if it’s even legal. If it can replace human creativity with predictable slop and reduce headcount, it must be a win-win.
According to the MIT Technology Review, every AI-generated image requires as much energy as an entire smartphone charge. And Microsoft’s own internal environmental report blamed the technology for a 34 percent spike in its water usage to cool all the racks of computing power required for, among other things, enabling users to shitpost about Kirby doing 9/11. As Immortality game director Sam Barlow put it following the AI-generated ID@Xbox post, “Really impressive that just as we were finally starting to address the climate emergency, we invented stupid ways to undo all our progress.”
Happy Halloween! Ubisoft Netherlands invites you to celebrate the spooky festivities with AI-generated Assassin’s Creed art. Terrifying indeed!
People first began to notice some of Ubisoft’s social media channels posting what appeared to be AI-generated versions of Assassin’s Creed art last night. A smoothed-over, off-brand Ezio emerged on the French publisher’s X (formerly known as Twitter) account for Latin America. “In other amazing industry news here’s an official Ubisoft account with 300K followers posting AI art,” tweeted Forbes contributor Paul Tassi. The publisher’s post was mocked for making Ezio look like a Fortnite character and for one character in the background wielding gun grips like knives. The tweet was deleted soon after.
Not to be outdone, however, the Ubisoft Netherlands account followed up with its own AI-looking Ezio art complete with Jack-o’-lanterns. “Which Ubisoft game is perfect for this horrible evening?” the account asked in Dutch. Clearly the game the Assassin’s Creed maker was playing was with fans’ hearts.
Ubisoft recently revealed that over 1,000 people have left the company in the last year as part of its “cost reduction” program. Some of those departures were voluntary, but others included layoffs across customer support, marketing, and other departments in Europe, the U.S., and elsewhere. “Ubisoft literally conducting layoffs this year and last month, and they’re posting AI art,” tweeted film concept artist Reid Southen. “Unbelievable. What the hell is the game industry doing right now.”
Still, over 19,000 people continue to work at Ubisoft, including many devoted just to the Assassin’s Creed franchise and all of its sequels, spin-offs, and other incarnations currently in the pipeline. Surely one of them could have made some art for the social media accounts. Or the company could have just used one of its many existing Ezio images. Anything would have been preferable to posting ugly AI-generated crap as thousands are laid off across the video game industry this year.
Fans have had to become increasingly vigilant in 2023 about companies trying to pass off AI-generated images in their marketing, as DALL-E 2, Midjourney, and other AI text-to-image models make it easier than ever to cobble together fake art. Amazon did it to promote its upcoming Fallout TV show. It sure seemed like Niantic did it to promote upcoming content in Pokémon Go. Legendary Studio Ghibli director Hayao Miyazaki calling AI art tools “an insult to life itself” back in 2016 has never felt so prophetic.