Gamers are a passionate bunch, and we’re no exception. These are the week’s most interesting perspectives on the wild, wonderful, and sometimes weird world of video game news.
Scott Pilgrim Takes Off, the new animated series based on Bryan Lee O’Malley’s graphic novels, is out on Netflix. The eight-episode series reunites the voice cast of the 2010 live-action movie Scott Pilgrim vs. the World and is a hilarious blend of the series’ quick wit and well-measured pop culture references. All of this sounds like a recipe for success, right? Well, it’s a little more complicated. Read More
Ubisoft’s new The Division game isn’t even out yet, as it’s still in beta testing and won’t launch officially until 2024. But after trying the beta, I already want one feature from the upcoming game to become standard in every video game I play in the future. Read More
OpenAI is the research organization behind ChatGPT, the AI-generated chatbot that took the internet by storm last year for its capacity to have really weird conversations with tech journalists. It’s at the center of Microsoft’s big bet on generative AI tools transforming the world, gaming, and more, and it’s now at risk of imploding after its CEO, Sam Altman, was mysteriously ousted by the OpenAI board of directors and Twitch co-founder Emmett Shear was desperately recruited to replace him. Here’s all you really need to know about OpenAI to appreciate what a clusterfuck the last few days have been. Read More
How much time has to pass before it becomes acceptable to remaster or even remake a game? 10 years? 15 years? What about three-ish years? Is that enough time between the original and the remaster? Well, that’s what’s happening early next year as Naughty Dog is remastering 2020’s The Last of Us Part II. Read More
Whenever a new blockbuster first-person shooter drops, gamers limber up so they can once again argue over how multiplayer matches get made and the algorithmic systems that determine who plays against whom and when. The recent release of Call of Duty: Modern Warfare III is no exception—not long after its multiplayer servers booted on November 10, players began flocking to Reddit, X (Twitter), and everywhere in between to complain about the quality (or perceived lack thereof) of Activision’s matchmaking. But, as with so many issues in the gaming industry, there’s a serious lack of nuance and true understanding at play here. Read More
Remember when it took us seven years to get a new The Last of Us game? Remember when there was even a question about whether or not we’d ever get a sequel to Naughty Dog’s post-apocalyptic action game because the ending was so intentionally ambiguous and thought-provoking?
Now, it seems we can’t go a year without being reminded that Sony thinks as many people should experience this series as possible, while folks associated with the HBO adaptation praise the game in ways that border on the absurd. With a remaster of The Last of Us Part II on the way, it feels like we’re reaching peak Last of Us fatigue. Read More
Video game engine provider Unity today announced two new machine-learning platforms, one of which in particular has developers and artists asking questions that, at time of publishing, the company has yet to answer.
Today we’re announcing two new AI products: Unity Muse, an expansive platform for AI-driven assistance during creation, and Unity Sentis, which allows you to embed neural networks in your builds to enable previously unimaginable real-time experiences.
Muse is essentially just ChatGPT but for Unity specifically, and purports to let users ask questions about coding and resources and get instant answers. Sentis, however, is more concerning, as it “enables you to embed an AI model in the Unity Runtime for your game or application, enhancing gameplay and other functionality directly on end-user platforms.”
just to jump on the train, which dataset y’all pull the art from???
Unity needs to be fully transparent about what ML models will be implemented, including the data they have been trained on. I don’t see any possible way ML, in current iterations, can be effective without training on countless ill gotten data.
REALLY concerning image generator stuff. What datasets?
Hi, what dataset was this trained on? Is this using artwork from artists without their permission? Animations? Materials? How was this AI trained?
You do realize that AI-created assets can’t be used commercially, so what was the rationale for adding this feature?
Which datasets were used in development of this? Did you negotiate & acquire all relevant licenses directly from copyright holders?
It’s a very specific question, and one that, at time of publishing, Unity has yet to answer, either on Twitter or on the company’s forums (I’ve emailed the company asking the question specifically, and will update if I hear back). Those familiar with the legal and copyright struggles around “AI” can find the outline of an answer in this post by Unity employee TreyK-47, though, when he says you can’t use the tech as it exists today “for a current commercial or external project.”
Note that while there are clear dangers to jobs and the quality of games inherent in this push, those dangers are for the future; for the now, this looks (and sounds) like dogshit.
Last year, AI-generated art finally broke through to the mainstream—but not without significant public controversy. The rampant art theft required to build an AI’s dataset and the resulting forgeries eventually led to a class action lawsuit against AI generators. Yet that hasn’t stopped developers from using the technology to generate images, narrative, music, and voice acting for their commercial video games. Some game developers see the technology as the future, but caution against over-selling its benefits and present capabilities.
AI has been making headlines lately for the wrong reasons. Netflix Japan was blasted by professional artists for using AI to make background art—while leaving the human painter uncredited. Around mid-February, gaming and anime voice actors spoke out about the “pirate” websites that hosted AI versions of their voices without their consent. AI seems to be everywhere. One procedurally generated game has already sold millions of copies.
The promise of user-generated gaming experiences
A few years ago, Ubisoft Toronto, known for games like Far Cry 6 and Tom Clancy’s Splinter Cell, was not only using AI in its development process—it created an entire design system that heavily relied on procedural generation. “In the future — potentially as soon as 2032 — the process of making digital nouns beautifully will be fully automated,” Ubisoft director Clint Hocking wrote in a Polygon op-ed that claimed that within a decade, players would use AI prompts to build their games. Think “a side-scroller where I am an ostrich in a tuxedo trying to escape a robot uprising,” as Hocking put it. This futuristic vision of games would work in the same way you might tell AI image generator Midjourney to produce new images based on text descriptions.
Despite the eyebrow-raising boldness of his claim, the industry has already seen some strides. Watch Dogs: Legion, Ubisoft’s open world action-adventure game, seemed impressive for what it was: A blockbuster title that randomly generated NPCs in every playthrough and promised to allow players to “play as anyone.” While reviewers did encounter “repetitive loops” in the quest system, Legion seemed like a solid first step in the future of procedurally generated gameplay.
“10 years [to create an AI game] is insane, as it takes 5 to 10 years to make a standard AAA game,” said Raj Patel, former product manager on Watch Dogs: Legion. He was wary of how designing non-linear games incurred an additional layer of labor-intensive complexity. He told Kotaku over messages that he didn’t think that AI games could be “wholly original, bespoke, [and] from scratch with the same quality” as existing AAA games. “There is certainly potential [in machine generated games], but Star Citizen has been in development for 10 years so far,” he said of a space sim MMO that boasts of procedurally generated planets. The game has raised nearly $400 million but, more than a decade after its 2012 announcement, has yet to be released.
If Ubisoft’s forays into NFTs and web3 are any indication, the company has been quick to jump on trends that sound buzzy to investors. But that doesn’t mean it has necessarily been pushing the technology forward.
Game designer and AI researcher Younès Rabii felt that integrating AI with these expensive processes was more about “hype” than a technological inevitability. “There’s always a 15 to 20 year gap between what academia has produced in terms of [AI] advances and what the industry actually uses,” Rabii told Kotaku over Zoom. They had strong feelings about how Watch Dogs: Legion seemed to fall short in being the public face of what AI games could be. “This is because it’s way too long to train [developers] to use [advanced AI]. It’s not worth the risk. It doesn’t bring enough money to the table.” Ubisoft told investors that the game’s predecessors have sold around ten million copies each, but never publicly released the sales data for Legion beyond its launch period. They felt that Ubisoft had taken the risk with Legion as a marketing hook. “It’s not that interesting… they have a series of simple nouns and properties, and they behave according to it.”
Reviewers seemed to agree with them. One critic noted that “there’s not much of a human element” to the Londoners in the game, and that they “don’t meaningfully interact with each other.” Another struggled with “repetitive” missions. Kotaku panned the campaign for being “empty and soulless,” but praised the more interesting DLC for ditching the procedurally generated recruitment altogether.
Hocking himself admitted in a Washington Post interview that “reinventing open world design” during Legion’s development had been “uncertain,” “difficult,” and “scary.” Being able to play as any character in the game was an idea that Ubisoft had never experimented with before. Human designers had to manually account for every single possibility that players could choose—no computer could understand how human players would emotionally respond to randomly generated scenarios. In the interview, Hocking was also much less optimistic about the possibility of creating a gameplay experience that didn’t feel entirely samey. “There isn’t infinite diversity,” Hocking said. “You’re still going to encounter, ‘Oh, yeah. I recognize that voice. I recognize that person. Or, this is one of the people who has the technician fighting style. They fight in a certain way, [similar to] that other person.’ But it still blurs the lines quite a bit.”
Artificial intelligence has always been a part of game development
Florence Smith Nicholls, story tech at the award-winning indie studio behind Mutazione, also had a more muted perspective on AI. They told Kotaku over video call that AI was already being used extensively in AAA development, like in Fortnite. “When people [say] it’s going to completely revolutionize gaming, it feels kind of similar to what we’ve had with discussions around NFTs and the blockchain.” They pointed to the chess playing program Deep Blue as an example of artificial intelligence in gaming.
Mostly, though, we’ve seen a wide range of applications for AI in games when it comes to automation–but how we define such a thing can get confusing for the average person. Because of popular generators such as Midjourney and ChatGPT, most people associate the term with neural networks that create text or images based on datasets scraped from the internet. Researchers, however, have much broader definitions of AI. “If you showed someone Google Maps in 1990 and showed that you could plot a route between any two points on the planet… that would be considered a hard AI problem,” said AI researcher Mike Cook. “Now people just think of that as something that your phone does. It’s the same thing in games. As [technology] becomes more normal, they no longer look like AI to us.”
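Cook’s Google Maps example is easy to make concrete: plotting a route between two points is shortest-path graph search, once a research problem and now a textbook algorithm. A minimal sketch in Python (the road network and travel times here are made up purely for illustration):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path by cumulative edge weight (textbook Dijkstra)."""
    # Priority queue holds (cost so far, node, path taken to reach it).
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no route exists

# A toy road network: node -> [(neighbor, travel time)]
roads = {
    "A": [("B", 5), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
cost, route = dijkstra(roads, "A", "D")  # A -> C -> B -> D, total cost 6
```

Production routing engines layer heuristics such as A* and live traffic data on top of this idea, but the core is the same search that, as Cook notes, nobody thinks of as “AI” anymore.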
“We talk about AI when it doesn’t work,” said Alan Zucconi, a director of AI game development at the University of London. “When it works, it’s invisible and seamless.” He acknowledged that artists and programmers don’t see eye-to-eye on the technology. “There is friction [with AI], especially for artists… Those same artists are using AI every day, they just don’t call it AI,” said Zucconi. “Tools like the select all regions tool in Photoshop, smudging colors… tools we take for granted are not seen as AI… so I find it very fascinating when people think that these are something new. It’s not.”
“The real utility [of AI] in the short term is helping with more discrete tasks in the process of producing work,” Patel wrote, recounting his experiences with working on Ubisoft games. “In one game, we had AI testing the open world… It would log the framerate and any clipping issues. The machines would be left running moving through the world and note areas where things had issues. That helped us find areas to check without having real people have to do that otherwise tedious work. Real people could focus on checking, verifying, and figuring out details.” Rather than risking whether or not a player might be able to tell if something was AI-generated, “[AI] let our QA staff not do the tedious parts and focus their time more efficiently on problem areas.”
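The pattern Patel describes—bots roaming a world and logging anomalies for humans to triage—can be sketched in a few lines. This is a hypothetical illustration only, not Ubisoft’s actual tooling: `soak_test`, `fake_probe`, and the metric names are invented, and a real harness would drive the game client rather than a toy probe function.

```python
import random

FPS_FLOOR = 30  # assumed minimum acceptable framerate

def soak_test(waypoints, sample_frame, steps=1000, seed=42):
    """Visit random waypoints, logging spots where metrics look wrong."""
    rng = random.Random(seed)  # seeded so a failing run is reproducible
    issues = []
    for _ in range(steps):
        pos = rng.choice(waypoints)
        frame = sample_frame(pos)  # e.g. {"fps": 58, "clipping": False}
        if frame["fps"] < FPS_FLOOR:
            issues.append((pos, "low framerate", frame["fps"]))
        if frame["clipping"]:
            issues.append((pos, "geometry clipping", None))
    return issues  # humans triage this list instead of walking the whole map

# Toy probe: pretend one map cell has a performance problem.
def fake_probe(pos):
    return {"fps": 12 if pos == (3, 7) else 60, "clipping": False}

report = soak_test([(0, 0), (3, 7), (9, 9)], fake_probe)
```

The payoff is exactly what Patel describes: the machine does the tedious traversal, and QA staff start from a list of flagged locations.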
Automated development often sounds incredibly sinister when coming out of the mouth of a gaming executive who doesn’t sound adequately troubled about the plight of crunching developers. But testing has been automated for years, and QA professionals are calling for studios to ditch fully manual testing. Despite the popular image of QA as low-skilled work, AI experience is often a necessary prerequisite to being a games tester, because automated testing is often a key aspect of a studio’s workflow. And it’s not just testing—automation is a shipped feature of AAA video games too.
Mike Cook is an AI researcher and game designer at King’s College London. He told Kotaku over a Zoom call that games such as Minecraft are procedurally generated by AI, and blockbuster games such as Assassin’s Creed make use of AI for certain mechanics. “When your character places their hands and legs in unusual places to climb up the side of a building, that’s not a handmade animation,” he said. “There’s an AI that’s helping figure out where your body’s limbs should go to make it look normal.” He noted that online matchmaking and improving connectivity were both aspects of games that were supported by AI.
Limitations and ethical challenges of AI and procedural generation
Despite the possibilities, Nicholls said that procedurally generated content was only really useful for “very specific tasks.” They cited examples such as changing the weather or generating foliage in Fortnite. AI would need to be able to handle several different tasks in order to be considered a game-changing force in development.
However, they had concerns about which developers would benefit from extensive automation. They pointed out that in the case of art outsourcing (the practice in which studios pay cheaper studios to create low-level assets), the “main” studios were doing more “intellectual work” such as design. They thought that AI could similarly create an underclass of artists whose work is less valued.
Sneha Deo, an AI ethicist from Microsoft, drew the connection more overtly. “I would say a lot of the undercutting of [tech labor] value that happens today is due to differences in the value of currency.” It’s cheaper to hire developers from a country with a less powerful currency, rather than paying developers from the U.S. or western Europe. She also attributed the devaluation of human labor to the last mile effect. “Humans trick themselves into thinking if a machine can do it, then the [labor] that the humans are adding to it isn’t as valuable because most of it is automated.” So even if AI created new ‘AI design’ jobs, those jobs might not necessarily pay a reasonable amount.
While he’s normally exuberant about the possibilities of machine learning, Zucconi seemed uncomfortable when asked about whether or not AI would devalue the labor of voice actors. When directly pressed about the possibility of paying actors for using their voices in AI (as Hocking raises in his op-ed), he said: “Licensing voices is probably going to happen. We’re very close to having that technology… I’m hopeful that this is a good future because it means that people can have more work opportunities.” The ability to commercially profit from one’s own “likeness” is enshrined in state publicity laws. Celebrities have been licensing their likeness to third parties for years—the most famous recent example being Donald Trump’s embarrassing foray into NFTs.
Jennifer Hale, voice actor for the female Shepard, tweeted that AI voices created without consent were “harming voice actors.” Screenshot: Electronic Arts
Despite his optimism, it seemed that professional voice actors felt differently. Voice actors for popular franchises such as Cowboy Bebop and Mass Effect spoke out against AI versions of their voices being falsified and used without consent. Some bad actors had even used AI-generated voices to dox people. It’s reminiscent of how, decades ago, Jet Li turned down a role in The Matrix because he was concerned about Warner Bros. reusing his motion-captured movements after he collected his last check.
“I think what matters is not any specific deal,” Cook said in regards to compensation and AI-generated art. “I don’t know if licenses are better than labor. What does matter is that the people who are actually doing this job are the ones that get to decide what should be happening,” he said. “And the problem is that in most of these creative jobs, the power dynamic isn’t there to allow people to have that voice.” He also noted that it was easy for artists to accidentally sign away their rights in perpetuity.
Unlike blockchain technology, developers can see clear benefits to adopting automation more broadly in game development. One indie developer told GamesIndustry.biz that AI development could help smaller studios stay competitive. Failure rates are incredibly high, especially for developers who don’t have massive AAA-sized budgets. No Man’s Sky used machine-generated content to create expansive worlds, only to have a disastrous launch–and it took five years for the game to eventually become a success story.
Deo saw AI as one method of bridging the resource gap between the global north and south. “What’s the rightness or wrongness around using these models to generate art or narrative or text if that’s not your strength? I think about game design as this collaborative process that favors people who already have strong networks,” she said over Zoom video. “[These people] can tap their friends or their networks to come in and do that manual work, [which] is democratized by the replacement of human labor by AI art.”
Deo acknowledged that AI art could undercut junior artists who were trying to break into the industry, but thought that it wasn’t an ethical quandary that should rest on independent creators. “It’s not a black and white thing. I think at larger studios, that’s a place where there’s an ethical issue of: ‘How does this undercut labor that’s already undervalued?’”
It was a convenient way to think about AI in a positive light. But AAA games like Fortnite have already taken “inspiration” from indie games such as Among Us. That was just for a game mode. It didn’t feel like a logical leap to think that big studios could borrow development methods too.
Could machine-generated games be fun?
And there’s another major stakeholder that’s critical to the success of AI games: the players. Right now, the average person still thinks that “human” and “machine” generated art have inherent differences. “There’s a sense of difficulty in knowing the authorship of certain artwork,” said Nicholls. While games are often attributed to leads in more public-facing roles, they are products of entire teams, and AI only complicates the idea of authorship, especially when generators such as Midjourney are raising legal and ethical questions about who owns the art the machine produces. “I wonder if now there’s more unease around AI because people fear that they won’t be able to tell if something is AI generated or not.” Before AI became a prominent image-making tool, it would be reasonable to assume that any painting had some kind of human element. Now, even Bungie community moderators struggle to differentiate between AI and human art.
But Cook thinks that these machines we call “video games” contain a complexity that can only be built by humans. “Maybe it’s possible for AI to generate games but the games that left an impact on us… they’re boundary breaking. Concept breaking. Those are things we can’t necessarily predict with enough data or computer power… If we wanted infinite Grand Theft Auto campaigns or Star Trek episodes, then they would start to feel samey.”
Nevertheless, games such as Minecraft and No Man’s Sky are immensely popular. Although the popular image of artificial intelligence is associated with perfection, that’s not what Cook thinks that gamers necessarily want.
“Players like to be surprised. They actually like it when the AI breaks…Some of the most memorable things that people pull out of these AI systems is when they’ve gone wrong a bit. But I think something that’s really important is that they like to be able to share and talk about these things,” he said. “Although Minecraft or Spelunky 2 has an infinite number of levels and worlds in it, that infinity isn’t really important. What’s important is the one world that you have, or the one thing that you shared with other people. So in the Valheim world, the Valheim world generator is not important. What’s important is the server that you built with your friends.”
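Cook’s point about sharing “the one world that you have” maps directly onto how seeded procedural generation works: the world is fully determined by a small seed, so sharing the seed is sharing the world. A toy illustration (not any real game’s generator):

```python
import random

def generate_world(seed, size=8):
    """Toy deterministic terrain: the same seed always yields the same map."""
    rng = random.Random(seed)
    tiles = "~.^"  # water, grass, mountain
    return ["".join(rng.choice(tiles) for _ in range(size)) for _ in range(size)]

# Two players who enter the same seed get identical worlds, which is why
# a seed string posted online is, in effect, the world itself.
assert generate_world("played-with-friends") == generate_world("played-with-friends")
```

The generator produces unboundedly many maps, but as Cook argues, that infinity matters less to players than the one particular seed a community rallies around.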