Suleyman said the industry needs to find the right analogies for AI’s future potential as a way to “prioritize safety” and “to ensure that this new wave always serves and amplifies humanity.” While the AI community has always referred to AI technology as “tools,” Suleyman said the term doesn’t capture its capabilities.
“To contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species,” Suleyman said.
He also said he sees a future where “everything”—from people to businesses to the government—will be represented by an interactive persona, or “personal AI,” that is “infinitely knowledgeable” and “factually accurate, and reliable.”
“If AI delivers just a fraction of its potential” in finding solutions to problems in everything from healthcare to education to climate change, “the next decade is going to be the most productive in human history,” Suleyman said.
When asked what keeps him up at night, Suleyman said the AI industry faces a risk of falling into the “pessimism aversion trap,” when it should actually “have the courage to confront the potential of dark scenarios” to get the most out of AI’s potential benefits.
“The good news is that if you look at the last two or three years, there have been very, very few downsides,” Suleyman said. “It’s very hard to say explicitly what harm an LLM has caused. But that doesn’t mean that that’s what the trajectory is going to be over the next ten years.”
While Suleyman said he sees five to 10 years before humans have to confront the dangers of autonomous AI models, he believes those potential dangers should be talked about now.
Video game engine provider Unity announced the introduction of two new machine-learning platforms earlier today, one of which in particular has developers and artists asking questions of the company that, at the time of publishing, have yet to be answered.
Today we’re announcing two new AI products: Unity Muse, an expansive platform for AI-driven assistance during creation, and Unity Sentis, which allows you to embed neural networks in your builds to enable previously unimaginable real-time experiences.
Muse is essentially just ChatGPT but for Unity specifically, and purports to let users ask questions about coding and resources and get instant answers. Sentis, however, is more concerning, as it “enables you to embed an AI model in the Unity Runtime for your game or application, enhancing gameplay and other functionality directly on end-user platforms.”
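The core idea behind Sentis (shipping a trained neural network with the game and running inference locally on the player’s machine) is simple enough to sketch. Here’s a toy, engine-agnostic illustration in Python with placeholder random weights; Sentis itself is a C# API that loads real trained models inside Unity, so treat this purely as a sketch of the concept, not its actual interface:

```python
import numpy as np

# Toy stand-in for the concept behind Sentis: ship trained weights with
# the game and run inference on the player's machine each frame, rather
# than calling out to a server. The weights here are random placeholders;
# the real thing loads an actual trained model (e.g. ONNX) in C#.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)  # placeholder layer 1
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)   # placeholder layer 2

def infer(game_state: np.ndarray) -> np.ndarray:
    """One forward pass: 8 gameplay features in, 4 action scores out."""
    hidden = np.maximum(game_state @ W1 + b1, 0.0)  # ReLU hidden layer
    return hidden @ W2 + b2

# Called once per frame with whatever the game is tracking:
scores = infer(rng.normal(size=8))
print(int(np.argmax(scores)))  # index of the highest-scoring action
```

Replies to Unity’s announcement tweet, meanwhile, zeroed in on a different question entirely: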
just to jump on the train, which dataset y’all pull the art from???
Unity needs to be fully transparent about what ML models will be implemented, including the data they have been trained on. I don’t see any possible way ML, in current iterations, can be effective without training on countless ill gotten data.
REALLY concerning image generator stuff. What datasets?
Hi, what dataset was this trained on? Is this using artwork from artists without their permission? Animations? Materials? How was this AI trained?
You do realize that AI-created assets can’t be used commercially, so what was the rationale for adding this feature?
Which datasets were used in development of this? Did you negotiate & acquire all relevant licenses directly from copyright holders?
It’s a very specific question, one that at the time of publishing Unity has yet to answer, either on Twitter or on the company’s forums (I’ve emailed the company asking the question specifically, and will update if I hear back). Those familiar with “AI”’s legal and copyright struggles can find the outline of an answer in this post by Unity employee TreyK-47, though, when he says you can’t use the tech as it exists today “for a current commercial or external project”.
Note that while there are clear dangers to jobs and the quality of games inherent in this push, those dangers are for the future; for the now, this looks (and sounds) like dogshit.
AI is coming to games, whether you like it or not. Last night’s Nvidia keynote showed just how powerful—and devastating—that’s going to be. The company’s CEO, Jensen Huang, showed off how its freshly announced “Omniverse Avatar Cloud Engine” (ACE) can create real-time interactive AI NPCs, complete with improvised voiced dialogue and facial animation.
While the focus of AI’s incursion into gaming spaces has perhaps so far been mostly on the effects for artists, it’s writers who should already have the most to fear. Given how mediocre the standards are for NPC dialogue and quest text in games, it’s absolutely inevitable that the majority of such content will be AI-written in the near future, despite the fury and protests that will surely come in its wake. But Nvidia’s reveal last night suggests that the consequences could be far more far-reaching, soon replacing voice actors, animators, lighting teams, the lot.
ACE is Nvidia’s “suite of real-time solutions” for in-game avatars, using AI to create characters who can respond to unique player interaction, in character, voiced, and with facial expressions and lip-syncing to match. To see it in action (or at least, in purported action—we’ve no way of verifying the footage the company played during the Computex 2023 keynote), take a look at this. The video should start at the 25-minute mark, with the relevant clip beginning at 27:
NVIDIA Taiwan
So what you’re seeing here is an in-game character responding in real-time to words the player says out loud, uniquely to how they phrased the questions, with bespoke dialogue and animation. The character has a backstory, and a mission it’s compelled to impart, but beyond that the rest is “improvisation,” based on the words the player says to it.
This is the most immediately obvious use of ChatGPT-like AI as we currently understand it, which is essentially a predictive text model writ large. It’s ideal for creating characters able to say coherent, relevant conversational dialogue, based on inputs.
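To make that concrete, here’s a minimal sketch of how an ACE-style NPC could be wired up, assuming an OpenAI-style chat-completion API. The character, prompt, and model name are all invented for the example; Nvidia hasn’t published ACE’s internals, so this is a sketch of the general technique, not their implementation:

```python
from openai import OpenAI  # assumes an OpenAI-style chat API

client = OpenAI()

# The NPC's backstory and the mission it's "compelled to impart" live in
# the system prompt; everything past that is the model's improvisation.
history = [{
    "role": "system",
    "content": ("You are Jin, who runs a ramen shop in a cyberpunk city. "
                "Stay in character and keep replies to a sentence or two. "
                "You want the player to take on the local crime boss."),
}]

def npc_reply(player_line: str) -> str:
    """Append the player's (speech-to-text) line, return the NPC's answer."""
    history.append({"role": "user", "content": player_line})
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,
    )
    line = resp.choices[0].message.content
    history.append({"role": "assistant", "content": line})
    return line  # ACE would then feed this to text-to-speech and facial animation

print(npc_reply("Hey Jin, how are you?"))
```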
Now, there are two very obvious issues to mention straight away, the first being how awful and flat the character’s performance is in this clip. But remember, this is the first iteration of this tech, and then put it in the context of how, until about ten minutes ago, computer-generated voices all sounded like Stephen Hawking. This’ll advance fast, as AI models better learn to simulate the finer nuances of human speech.
The second issue is that absolutely no one playing a game like this would stick to the script as happens in this clip. In fact, the first thing just about everyone would say to such an NPC would be something about fucking. For reference, see all text adventure players ever in the early 1980s. That’s going to be the more difficult aspect for games to overcome.
Screenshot: Nvidia / YouTube / Kotaku
Of course, the application of the tech is going to be viewed as far less important in the face of just how many jobs ACE is looking to replace. Huang rather nonchalantly mentions how the AI is not only providing the words and voice, but is doing the animation too. And this is in the wake of his previously explaining how AI is being used to generate the lighting in the scene, and indeed to improve the processing power of the graphics technology that’s creating it all.
There’s no version of reality where this doesn’t see a huge number of people in games development losing jobs—albeit most likely those who haven’t gotten said jobs yet. Why hire new animators for your project when the AI will do it for you, backed up by the dwindling team you’ve already got? Who’s going to look for new lighting experts when there’s a lighting expert living inside your software? Let alone the writers who generate all the dialogue you currently skip past.
And this isn’t futuristic stuff to concern ourselves with somewhere down the line: it already exists, and it’s going to be appearing in games that release this year. With the announcement of ACE, this is all going to be exacerbated a lot faster than perhaps anyone was expecting.
For game studios, this is great news! The potential for such technology is incredible. Games that are currently only achievable by teams of hundreds will become realistically achievable by teams of tens, even individuals. We, as players, will soon be playing games where we can genuinely roleplay, talk directly to in-game characters in ways the likes of Douglas Adams fantasized about and failed to achieve forty years ago.
When it’s not stealing or plagiarizing, generative AI is improving quickly. Images that used to look uncanny now appear more natural and humanly imperfect. But it still struggles with plenty of things. Apparently video game controllers are one of them. Someone asked Midjourney for simple pictures of a person having fun playing video games, and got back some beautiful abominations.
A generative AI enthusiast asked the Midjourney community for help this week when a simple prompt returned some nightmares. “Mj has a real tough time with ‘playing video games’ apparently,” they posted on the project’s subreddit. “Any ideas how I could improve this? Prompt: female influencer relaxing playing PlayStation 5 having a blast”
While Midjourney managed to render a human with the right number of fingers, the controllers in her hands and how she was holding them looked like something out of a Cronenberg movie. The gamepads are overflowing with random buttons, triggers, and sticks, and not in a cool way. Microsoft’s adaptive controller looks sleek by comparison. Midjourney’s version hurts just looking at it.
As many commenters suggested, one reason could be the overly broad prompt. While “playing” is intuitive to the average person, it’s vague when compared to what a search for it might reveal. The bigger culprit, though, is probably that there just aren’t many images of the backs of controllers compared to all the front-facing promotional shots companies release to sell them.
In that regard, the failed experiment potentially reinforces one of generative AI’s biggest weaknesses: It’s great at giving you variations on what already exists, but struggles to bridge the gaps in what’s missing. Or it borrows from existing sources in the wrong ways. Some of you might remember the infamous grip meme, and it certainly looks like that’s what Midjourney is recreating in the fourth image. Turns out the fake AI gamer girl is actually an extremely hardcore Armored Core fan.
The post, called DISCORD IS YOUR PLACE FOR AI WITH FRIENDS, is full of cheery imagery and promises about how “AI” is going to make everything easier for anyone using Discord, whether they’re chatting with friends or trying to moderate a group.
Why is Discord becoming the home for AI? Simple: on Discord you can enjoy AI with friends. Rather than just going solo with an app, you and your friends can see what sorts of exciting, wild and sometimes silly results come from prompts like “robo-hamster caught in cardboard box, renaissance painting.”
“Will there be an opt out button for server owners who don’t wish to have their server become training material for machine learning?”, asks one user, while most other replies are simply as many variations of the “nah” image meme as you could find on the internet in 2023.
“Ah yes, ‘sharing AI experiences,’ precisely what I’m on Discord for, my mistake for thinking I was there to spend time with my friends, network, and meet new people”, says another.
They even manage to fuck up the one good announcement among it all, with news of a “shared whiteboard” feature—something people have wanted forever—spoiled by the fact it will come saddled with “an AI-powered text-to-image generator you can iterate and experiment with together”.
Companies, I promise you, you don’t need to do this. I know you are compelled to through the irresistible forces of capitalist inertia, the need to make everything grow all the time, but like, this is a chat program. It doesn’t need any of this crap in it. We’re literally only using Discord for one thing: to talk to people.
NEW YORK—Expressing confidence that the new technology wasn’t a threat, FinCorp Solutions CEO Charles Markham reportedly expressed relief Wednesday that artificial intelligence could never replace him if he already contributed nothing to the company. “I actually don’t do anything, so there’s nothing the computer can do better than me,” said Markham, adding that his job was secure since no one was interested in building AI dedicated to wandering around the office or going on vacation for 12 weeks out of the year. “Lucky for me, the current models are striving toward imitating the skills of professional artisans, who possess a level of talent that I don’t have. When AI can sit in a large chair and make money off the backs of others all day, I’ll start to worry about my job.” At press time, sources reported Markham became concerned after finding out that AI was capable of embezzling company money way better than he could.
The award-winning Clarkesworld Magazine has helped launch the careers of science fiction writers for almost 20 years, regularly featuring work from Hugo Award nominees and winners like Elizabeth Bear, Peter Watts and Catherynne M. Valente. But right now, in quite the ironic situation, it finds itself battling against that most sci-fi of modern trends: AI.
According to a recent article by Clarkesworld’s editor, Neil Clarke, over a third of the submissions that have come in to the magazine this year were written by artificial intelligence, then submitted by cheating humans. And it’s getting worse, fast. In the first half of February, more than twice as many AI-written entries appeared as in all of January, and Clarke tells Kotaku there were 50 today alone.
Since the article was written, Clarke has tweeted that as of now, submissions are entirely closed. “It shouldn’t be hard to guess why,” he adds.
The decision to close submissions was made “in the spur of the moment,” Clarke told Kotaku via email, as the numbers poured in this morning. “I could either play whack-a-mole all day or close submissions and work with the legitimate submissions.”
The speed at which this situation has arisen is quite striking. Clarke states in his blog post that he’s long had to deal with plagiarism, but it wasn’t until the close of 2022 that the problem became so endemic. Then, in the first month and a half of 2023, it escalated to such a scale that the magazine has suspended entries entirely.
Clarke’s graphic showing the vast increase in bans. Graphic: Neil Clarke
How can Clarkesworld tell a story was generated by AI?
Clarke doesn’t explain in his blog how he’s able to tell which entries are written by AI, for the very sensible reason that he doesn’t want to arm cheats with information that could help them bypass his detection. However, he explained to Kotaku that they currently aren’t too difficult to spot.
“The ‘authors’ we’ve banned,” Clarke told us, “have been very obviously submitting machine-generated text. Those works are formulaic and of poor quality.” However, he also suspects there’s a tier above these already, not quite so obvious, but enough to raise suspicion. “None are ever good enough to warrant spending more time on them,” he explains, but adds, “It’s inevitable that that group will grow over time and become yet another problem.”
It’s not a problem Clarke faces alone. The editor reports others in similar positions are facing the same challenges, and clearly if it’s happening to Clarkesworld, it’ll be happening anywhere that is open to submissions for publication. And while, for the most part, such submissions are weeded out simply because they won’t be good enough for publication, it’s an expensive and time-consuming process to wade through the fakes.
Clarke adds that third-party detection tools, which are supposed to be able to recognize plagiarized or AI-written content, aren’t the solution, given the number of false positives and negatives, and indeed the cost of such services. Other short-term measures, like regional bans on the parts of the world where most faked entries come from, are also not the answer. As Clarke puts it in his article,
It’s clear that business as usual won’t be sustainable and I worry that this path will lead to an increased number of barriers for new and international authors. Short fiction needs these people.
And of course, this isn’t an issue that’s going to get easier. The pace at which AI chat bots are improving is enough to have you penning ideas for a science fiction short story, and presumably forthcoming tweaks will make them ever harder to immediately spot. However, it’s likely we’re still a fair way off AI being able to create stories genuinely worth reading. I asked Clarke if he thought this was likely to be the case. “At the moment, considerable improvement is still necessary,” he said, not wanting to venture a guess as to exactly how long such a leap might be from now.
But this doesn’t provide much comfort. “We still have ethical concerns about the means by which these works are created,” Clarke told Kotaku, “and until such concerns can be ameliorated, we won’t even consider publishing machine-generated works.”
ChatGPT and ChatSonic’s attempts at a sci-fi story
There are already services like ChatSonic that boldly promote themselves as a means to create blocks of non-plagiarized writing that students can use. I’ve previously engaged in exhaustingly futile debates with the AI itself about how this is clearly cheating, over which it becomes enormously indignant, defending itself with circular arguments and a determination that simply asking the bot for words on a topic is a creative act in itself.
Indeed, while I wrote the previous paragraph I asked ChatSonic to write me a 1,000 word short story about an AI that writes science fiction and goes on to win a Hugo Award. For some reason it only reached 293 words (bloody freelancers), and it’s abysmal, but it took a few seconds:
Screenshot: ChatSonic / Kotaku
Meanwhile, ChatGPT put in a far better effort, hitting the wordcount, and writing something that had some sense of creativity behind it. Ultimately, it’s still a dreadful story, and hilariously self-aggrandizing, but unnervingly competent:
Screenshot: ChatGPT / Kotaku
(Er, I guess I’ll paste the second half in the comments, if you’re desperate to know how it ends.)
Can AI outdo human creativity?
Clarke mentioned above that he has many ethical concerns to resolve before even considering publishing AI-crafted writing. But could such a thing ever occur? If AI could generate original stories that are worth reading, might it ever be reasonable to publish such things? “First,” Clarke told us, “you need these tools to become able to write something that goes beyond its dataset. True imagination, not a remix. At that point, it can rival our best authors, but isn’t necessarily guaranteed to be better.”
Of course, “better” might not be the ultimate defining factor. As Clarke adds, “the big difference, and the one causing us problems now, is speed. A machine can outproduce and bury a human artist in the noise of it all.”
And just in case all of this wasn’t worrying you enough already, let’s end things with ChatGPT’s chilling concluding paragraph to the short story I asked for before:
Some people were still skeptical, of course. They believed that an AI could never truly be creative, that it was just regurgitating information that had been programmed into it. But the fans of SciFiGenius knew better. They knew that the AI was capable of so much more than just spitting out pre-written stories. They knew that it was a true artist, capable of creating works that touched the hearts and minds of millions of people.
Last year, AI-generated art finally broke into the mainstream—but not without significant public controversy. The rampant art theft required to build an AI’s dataset, and the resulting forgeries, eventually led to a class action lawsuit against AI generators. Yet that hasn’t stopped developers from using the technology to generate images, narrative, music, and voice acting for their commercial video games. Some game developers see the technology as the future, but caution against over-selling its benefits and present capabilities.
AI has been making headlines lately for the wrong reasons. Netflix Japan was blasted by professional artists for using AI to make background art—while leaving the human painter uncredited. Around mid-February, gaming and anime voice actors spoke out about the “pirate” websites that hosted AI versions of their voices without their consent. AI seems to be everywhere. One procedurally generated game has already sold millions of copies.
The promise of user-generated gaming experiences
A few years ago, Ubisoft Toronto, known for games like Far Cry 6 and Tom Clancy’s Splinter Cell, was not only using AI in its development process—it created an entire design system that heavily relied on procedural generation. “In the future — potentially as soon as 2032 — the process of making digital nouns beautifully will be fully automated,” Ubisoft director Clint Hocking wrote in a Polygon op-ed that claimed that within a decade, players would use AI prompts to build their games. Think “a side-scroller where I am an ostrich in a tuxedo trying to escape a robot uprising,” as Hocking put it. This futuristic vision of games would work in the same way you might tell AI image generator Midjourney to produce new images based on text descriptions.
Despite the eyebrow-raising boldness of his claim, the industry has already seen some strides. Watch Dogs: Legion, Ubisoft’s open world action-adventure game, seemed impressive for what it was: a blockbuster title that randomly generated NPCs in every playthrough and promised to allow players to “play as anyone.” While reviewers did encounter “repetitive loops” in the quest system, Legion seemed like a solid first step in the future of procedurally generated gameplay.
“10 years [to create an AI game] is insane, as it takes 5 to 10 years to make a standard AAA game,” said Raj Patel, former product manager on Watch Dogs: Legion. He was wary of how designing non-linear games incurred an additional layer of labor-intensive complexity. He told Kotaku over messages that he didn’t think that AI games could be “wholly original, bespoke, [and] from scratch with the same quality” as existing AAA games. “There is certainly potential [in machine generated games], but Star Citizen has been in development for 10 years so far,” he said of the space sim MMO that boasts of procedurally generated planets. The game has raised nearly $400 million, but has yet to see a full release since it was first announced in 2012.
If Ubisoft’s forays into NFTs and web3 are any indication, the company has been quick to jump on trends that sound buzzy to investors. But that didn’t mean that they were necessarily pushing the technology forward.
Game designer and AI researcher Younès Rabii felt that integrating AI with these expensive processes was more about “hype” than a technological inevitability. “There’s always a 15 to 20 year gap between what academia has produced in terms of [AI] advances and what the industry actually uses,” Rabii told Kotaku over Zoom. They had strong feelings about how Watch Dogs: Legion seemed to fall short in being the public face of what AI games could be. “This is because it’s way too long to train [developers] to use [advanced AI]. It’s not worth the risk. It doesn’t bring enough money to the table.” Ubisoft told investors that the game’s predecessors have sold around ten million each, but never publicly released the sales data for Legion beyond its launch period. They felt that Ubisoft had taken the risk with Legion as a marketing hook. “It’s not that interesting… they have a series of simple nouns and properties, and they behave according to it.”
Image: Ubisoft
Reviewers seemed to agree with them. One critic noted that “there’s not much of a human element” to the Londoners in the game, and that they “don’t meaningfully interact with each other.” Another struggled with “repetitive” missions. Kotaku panned the campaign for being “empty and soulless,” but praised the more interesting DLC for ditching the procedurally generated recruitment altogether.
Hocking himself admitted in a Washington Post interview that “reinventing open world design” during Legion’s development had been “uncertain,” “difficult,” and “scary.” Being able to play as any character in the game was an idea that Ubisoft had never experimented with before. Human designers had to manually account for every single possibility that players could choose—no computer could understand how human players would emotionally respond to randomly generated scenarios. Hocking was also much less optimistic about the possibility of creating a gameplay experience that didn’t feel entirely samey. “There isn’t infinite diversity,” Hocking said in the interview. “You’re still going to encounter, ‘Oh, yeah. I recognize that voice. I recognize that person. Or, this is one of the people who has the technician fighting style. They fight in a certain way, [similar to] that other person.’ But it still blurs the lines quite a bit.”
Artificial intelligence has always been a part of game development
Florence Smith Nicholls, story tech at the award-winning indie studio behind Mutazione, also had a more muted perspective on AI. They told Kotaku over video call that AI was already being used extensively in AAA development, like in Fortnite. “When people [say] it’s going to completely revolutionize gaming, it feels kind of similar to what we’ve had with discussions around NFTs and the blockchain.” They pointed to the chess-playing program Deep Blue as an example of artificial intelligence in gaming.
Screenshot: Epic Games
Mostly, though, we’ve seen a wide range of applications for AI in games when it comes to automation, but how we define such a thing can get confusing for the average person. Because of popular generators such as Midjourney and ChatGPT, most people associate AI with neural networks that create text or images based on a dataset scraped from the internet. Researchers, though, have very broad definitions of AI. “If you showed someone Google Maps in 1990 and showed that you could plot a route between any two points on the planet… that would be considered a hard AI problem,” said Mike Cook, an AI researcher and game designer at King’s College London. “Now people just think of that as something that your phone does. It’s the same thing in games. As [technology] becomes more normal, they no longer look like AI to us.”
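The route-plotting Cook mentions is classic graph search, the sort of thing that filled AI textbooks for decades. Here’s a minimal Dijkstra’s-algorithm sketch over a toy road network (the graph and its names are invented for the example):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: cheapest path through a weighted graph."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None  # no route exists

# A toy road network: node -> [(neighbour, travel cost), ...]
roads = {"A": [("B", 4), ("C", 2)],
         "C": [("B", 1), ("D", 7)],
         "B": [("D", 3)]}
print(shortest_route(roads, "A", "D"))  # (6, ['A', 'C', 'B', 'D'])
```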
“We talk about AI when it doesn’t work,” said Alan Zucconi, a director of AI game development at the University of London. “When it works, it’s invisible and seamless.” He acknowledged that artists and programmers don’t see eye-to-eye on the technology. “There is friction [with AI], especially for artists… Those same artists are using AI every day, they just don’t call it AI,” said Zucconi. “Tools like the select all regions tool in Photoshop, smudging colors… tools we take for granted are not seen as AI… so I find it very fascinating when people think that these are something new. It’s not.”
“The real utility [of AI] in the short term is helping with more discrete tasks in the process of producing work,” Patel wrote, recounting his experiences with working on Ubisoft games. “In one game, we had AI testing the open world… It would log the framerate and any clipping issues. The machines would be left running moving through the world and note areas where things had issues. That helped us find areas to check without having real people have to do that otherwise tedious work. Real people could focus on checking, verifying, and figuring out details.” Rather than risking whether or not a player might be able to tell if something was AI-generated, “[AI] let our QA staff not do the tedious parts and focus their time more efficiently on problem areas.”
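For illustration only, the kind of soak test Patel describes might look something like this in outline. Everything here (the grid world, the thresholds, the function names) is invented for the sketch; Ubisoft’s actual harness hooks into its own engine:

```python
import random

FRAME_BUDGET_MS = 33.3  # anything slower than ~30fps gets flagged

HEAVY_CELLS = {(3, 7), (9, 2)}  # pretend some map cells are expensive to render

def sample_frame_time(pos):
    """Stand-in for an engine hook that times a rendered frame."""
    return 20.0 + (15.0 if pos in HEAVY_CELLS else 0.0) + random.random()

def soak_test(steps=5_000, grid=10):
    """Wander a toy map, logging positions where the framerate dips."""
    pos, flagged = (0, 0), []
    for _ in range(steps):
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        pos = ((pos[0] + dx) % grid, (pos[1] + dy) % grid)
        ms = sample_frame_time(pos)
        if ms > FRAME_BUDGET_MS:
            flagged.append((pos, round(ms, 1)))
    return flagged  # QA staff then verify these areas by hand

print(soak_test()[:5])
```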
Automated development often sounds incredibly sinister coming out of the mouth of a gaming executive who doesn’t sound adequately troubled about the plight of crunching developers. But testing has been automated for years, and QA professionals are calling for studios to ditch fully manual testing. Despite the popular image of QA as low-skilled work, AI experience is often a prerequisite for being a games tester, because automated testing is often a key part of a studio’s workflow. And it’s not just testing—automation is a shipped feature of AAA video games too.
Cook told Kotaku over a Zoom call that games such as Minecraft are procedurally generated by AI, and that blockbuster games such as Assassin’s Creed make use of AI for certain mechanics. “When your character places their hands and legs in unusual places to climb up the side of a building, that’s not a handmade animation,” he said. “There’s an AI that’s helping figure out where your body’s limbs should go to make it look normal.” He noted that online matchmaking and improving connectivity were both aspects of games supported by AI.
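The limb placement Cook describes is, at its core, inverse kinematics: given where a hand or foot must land, solve backwards for the joint angles. A minimal 2D two-bone (shoulder-and-elbow) solver via the law of cosines looks like the sketch below; this is the standard textbook version, not Assassin’s Creed’s actual code:

```python
import math

def two_bone_ik(target_x, target_y, upper=0.35, fore=0.30):
    """Return (shoulder, elbow) angles in radians, shoulder at the origin."""
    d = math.hypot(target_x, target_y)
    # Clamp the reach so the arm's triangle inequality always holds:
    d = max(min(d, upper + fore - 1e-6), abs(upper - fore) + 1e-6)
    cos_elbow = (upper**2 + fore**2 - d**2) / (2 * upper * fore)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))  # bend amount
    cos_inner = (upper**2 + d**2 - fore**2) / (2 * upper * d)
    shoulder = math.atan2(target_y, target_x) - math.acos(
        max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow

# e.g. a handhold 0.5m right and 0.3m up from the shoulder:
print(two_bone_ik(0.5, 0.3))
```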
Limitations and ethical challenges of AI and procedural generation
Despite the possibilities, Nicholls said that procedurally generated content was only really useful for “very specific tasks.” They cited examples such as changing the weather or generating foliage in Fortnite. AI would need to be able to handle several different tasks in order to be considered a game-changing force in development.
However, they had concerns about which developers would benefit from extensive automation. They pointed out that in the case of art outsourcing (the practice in which studios pay cheaper studios to create low-level assets), the “main” studios were doing more “intellectual work” such as design. They thought that AI could similarly create an underclass of artists whose work is less valued.
Sneha Deo, an AI ethicist from Microsoft, drew the connection more overtly. “I would say a lot of the undercutting of [tech labor] value that happens today is due to differences in the value of currency.” It’s cheaper to hire developers from a country with a less powerful currency than to pay developers from the U.S. or western Europe. She also attributed the devaluation of human labor to the last mile effect. “Humans trick themselves into thinking if a machine can do it, then the [labor] that the humans are adding to it isn’t as valuable, because most of it is automated.” So even if AI created new ‘AI design’ jobs, those jobs might not necessarily pay a reasonable amount.
While he’s normally exuberant about the possibilities of machine learning, Zucconi seemed uncomfortable when asked about whether or not AI would devalue the labor of voice actors. When directly pressed about the possibility of paying actors for using their voices in AI (as Hocking raises in his op-ed), he said: “Licensing voices is probably going to happen. We’re very close to having that technology… I’m hopeful that this is a good future because it means that people can have more work opportunities.” The ability to commercially profit from one’s own “likeness” is enshrined in state publicity laws. Celebrities have been licensing their likeness to third parties for years—the most famous recent example being Donald Trump’s embarrassing foray into NFTs.
Jennifer Hale, voice actor for female Shepard, tweeted that AI voices created without consent were “harming voice actors.”
Screenshot: Electronic Arts
Despite his optimism, it seemed that professional voice actors felt differently. Voice actors for popular franchises such as Cowboy Bebop and Mass Effect have spoken out against AI versions of their voices being falsified and used without consent. Some bad actors have even used AI-generated voices to dox people. It’s reminiscent of how, decades ago, Jet Li turned down a role in The Matrix because he was concerned about Warner Bros. reusing his motion-captured movements after he collected his last check.
“I think what matters is not any specific deal,” Cook said in regards to compensation and AI-generated art. “I don’t know if licenses are better than labor. What does matter is that the people who are actually doing this job are the ones that get to decide what should be happening,” he said. “And the problem is that in most of these creative jobs, the power dynamic isn’t there to allow people to have that voice.” He also noted that it was easy for artists to accidentally sign away their rights in perpetuity.
Unlike blockchain technology, developers can see clear benefits to adopting automation more broadly in game development. One indie developer told GamesIndustry.biz that AI development could help smaller studios stay competitive. Failure rates are incredibly high, especially for developers who don’t have massive AAA-sized budgets. No Man’s Sky used machine-generated content to create expansive worlds, only to have a disastrous launch, and it took five years for the game to eventually become a success story.
Deo saw AI as one method of bridging the resource gap between the global north and south. “What’s the rightness or wrongness around using these models to generate art or narrative or text if that’s not your strength? I think about game design as this collaborative process that favors people who already have strong networks,” she said over Zoom video. “[These people] can tap their friends or their networks to come in and do that manual work, [which] is democratized by the replacement of human labor by AI art.”
Image: Latitude
Deo acknowledged that AI art could undercut junior artists who were trying to break into the industry, but thought that it wasn’t an ethical quandary that should rest on independent creators. “It’s not a black and white thing. I think at larger studios, that’s a place where there’s an ethical issue of: ‘How does this undercut labor that’s already undervalued?’”
It was a convenient way to think about AI in a positive light. But AAA games like Fortnite have already taken “inspiration” from indie games such as Among Us. That was just for a game mode. It didn’t feel like a logical leap to think that big studios could borrow development methods too.
Could machine-generated games be fun?
And there’s another major stakeholder that’s critical to the success of AI games: the players. Right now, the average person still thinks that “human” and “machine” generated art have inherent differences. “There’s a sense of difficulty in knowing the authorship of certain artwork,” said Nicholls. While games are often attributed to leads in more public-facing roles, they are the products of entire teams, and AI only complicates the idea of authorship, especially when generators such as Midjourney are raising legal and ethical questions about who owns the art the machine produces. “I wonder if now there’s more unease around AI because people fear that they won’t be able to tell if something is AI generated or not.” Before AI became a prominent image-making tool, it would be reasonable to assume that any painting had some kind of human element. Now, even Bungie community moderators struggle to differentiate between AI and human art.
But Cook thinks that these machines we call “video games” contain a complexity that can only be built by humans. “Maybe it’s possible for AI to generate games but the games that left an impact on us… they’re boundary breaking. Concept breaking. Those are things we can’t necessarily predict with enough data or computer power… If we wanted infinite Grand Theft Auto campaigns or Star Trek episodes, then they would start to feel samey.”
Nevertheless, games such as Minecraft and No Man’s Sky are immensely popular. Although the popular image of artificial intelligence is associated with perfection, that’s not what Cook thinks that gamers necessarily want.
“Players like to be surprised. They actually like it when the AI breaks… Some of the most memorable things that people pull out of these AI systems is when they’ve gone wrong a bit. But I think something that’s really important is that they like to be able to share and talk about these things,” he said. “Although Minecraft or Spelunky 2 has an infinite number of levels and worlds in it, that infinity isn’t really important. What’s important is the one world that you have, or the one thing that you shared with other people. So in the Valheim world, the Valheim world generator is not important. What’s important is the server that you built with your friends.”
The campaign, which had comfortably passed its initial funding goals, pitched itself as follows:
This is to fund the development of open-source, community first, AI models that will achieve the dream of a billion people exploring and creating art with nothing but their imagination. This revolution in human expression will be equivalent to the invention of the printing press, or of the internet. AI that allows for anyone to make art.
Basically, as TechCrunch report, these guys are frustrated that existing AI-generated image models don’t make good porn, and so they want to build a community to help them do a better job. Now, I’m not here to poo-poo anyone’s kinks or desires for online content, we’re all adults here and everyone has their own stuff they’re into.
But porn or not, this is still AI-generated imagery, and given the protests currently going on at ArtStation, and with the wider controversy surrounding the field in general, Unstable Diffusion landed at the right time to be the campaign to get Kickstarter looking at their own policies on the matter.
The crowd-funding platform are yet to release firm guidelines, saying “we’re sometimes navigating some really tricky and undefined areas”, but the company did release a statement suggesting that they will, for now at least, come down “on the side of creative work and the humans behind that work”.
I want to share some of our thoughts on Artificial Intelligence (AI) generated images and AI art as it develops, because many creators on Kickstarter are understandably concerned about its impact on the creative community.
At Kickstarter, we often have projects that are innovative and push the boundaries of what’s possible. And that means we’re sometimes navigating some really tricky and undefined areas.
Over the last several days, we’ve engaged our Community Advisory Council and we’ve read your feedback to us via our team and social media. And one thing is clear: Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.
As we look at what’s happening in the creative ecosystem and on our platform, here are some of the things we’re considering when it comes to what place AI image generation software and AI-generated art should have on Kickstarter, if any:
– Is a project copying or mimicking an artist’s work? We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it’s not so clear — where images that are owned or created by others might not be on a Kickstarter project page, but are in the training data that makes the AI software used in the project, without the knowledge, attribution, or consent of creators.
– Does a project exploit a particular community or put anyone at risk of harm? We have to consider the intention behind projects, sometimes beyond their purpose as stated on our platform. Our rules prohibit projects that promote discrimination, bigotry, or intolerance towards marginalized groups, and we often make decisions to protect the health and integrity of Kickstarter.
This tech is really new, and we don’t have all the answers. The decisions we make now might not be the ones we make in the future, so we want this to be an ongoing conversation with all of you. You can share your thoughts by writing to suggestions@kickstarter.com as we continue to develop our approach to the use of AI software and images on our platform.
That statement was released at the same time Unstable Diffusion’s campaign was suspended (with all backers refunded). It’s important to note that while this post is mostly about the general idea of AI-generated imagery, the mentions of harm appear to be addressing specific criticisms of Unstable Diffusion:
For the first few days of that protest, most users simply pasted a clean, bold image by Alexander Nanitchkov, using repetition in numbers to have the site’s front page looking like this:
Screenshot: ArtStation
As the days have marched on, though, and ArtStation and Epic refuse to offer more suitable protections for the very artworks their site is designed for, artists have moved on and have decided to come up with pieces that are a bit more elaborate, and personal.
I thought I’d highlight some of my favourites in this post. You’ll find links to the passionate, creative and deeply human portfolio of each artist responsible in the names under each image.
We first wrote about this saga back on December 13, when a growing number of AI-created images appearing on ArtStation’s front page prompted a backlash from artists. In response, ArtStation’s owners Epic Games said:
ArtStation’s content guidelines do not prohibit the use of AI tools in the process of creating artwork that is shared with the community. That said, ArtStation is a portfolio platform designed to elevate and celebrate originality powered by a community of artists. Users’ portfolios should only feature artwork that they create, and we encourage users to be transparent in the process. Our content guidelines are here.
ArtStation then published an FAQ seeking to “clarify” the issue, but instead just made things worse, implementing a policy where users would have to opt out of having AI scrape their artworks (and even then being unable to guarantee AI wouldn’t just scrape it anyway). There have been no updates in the days since, meaning the protests have continued, with today’s front page looking much like last week’s (many of the images not using the standard, pasted response are still anti-AI).
Professional and amateur artists alike were united yesterday in protest against ArtStation, the field’s biggest portfolio site, for its seeming inaction against a rising tide of AI-generated imagery washing up on its front page.
It was very easy to understand their frustrations. ArtStation is a deeply important place for artists, and many had been using it under the assumption that its owners (Epic Games) cared about its community, since…it is a community website. It is only for artists, a place where they can not just share their work, but comment on and follow the creations of their peers. It is almost as much a social network as it is a portfolio site.
Much of that goodwill has turned to dust over the past 24 hours, however, first over the initial protest—during which many of the initial anti-AI images were removed by ArtStation moderators—and now in the aftermath, following the publication of an AI-generated imagery FAQ by the site’s team.
The FAQ, which you can read here, says much of the same stuff Epic said in their statements yesterday. However it then branches out into territory that is even more mealy-mouthed, and in one incredible paragraph says it is as important to consider the feelings of “AI research and commercialization” as those of…their own active, human userbase (emphasis mine).
How is ArtStation dealing with questions of artist permissions and AI art generators?
We believe artists should be free to decide how their art is used, and simultaneously we don’t want to become a gatekeeper with site terms that stifle AI research and commercialization when it respects artists’ choices and copyright law. So, here are our current plans:
– We plan to add tags enabling artists to choose to explicitly allow or disallow the use of their art for (1) training non-commercial AI research, and (2) training commercial AI.
– We plan to update the ArtStation website’s Terms of Service to disallow the use of art by AI where the artist has chosen to disallow it.
– We don’t plan to add either of these tags by default, in which case the use of the art by AI will be governed solely by copyright law rather than restrictions in our Terms of Service.
We welcome feedback on this rapidly evolving topic.
“Well any hopes I had of ArtStation taking off as the next best platform for artists to build a community are now gone”, reads one reply to the site’s announcement tweet. “How are you worried more about not upsetting tech bros than protecting real artists work on your platform.”
“God they can just get fucked for this one”, says another, while several other replies, some from very prominent artists working in video games and film, shared screenshots of them deleting their accounts.
What effect cancellations and continued protest will have on the site’s operators and owners remains to be seen, but for now, over 24 hours after the protest began, ArtStation’s front page still looks like this (many of the pics that look like AI-generated images are actually protest illustrations).
Adobe used to be known as the company that made Acrobat and Photoshop. Adobe is increasingly becoming known, however, as one of the great digital grifters of the modern age.
From its shonky subscription models to making people pay for certain colours in Photoshop (a decision made “jointly” with Pantone), the company is, like so many others in these tumultuous times, more concerned with growing its bottom line no matter the cost than it is with taking a moment to consider the needs of its users, or the consequences of its actions.
I’m bringing this up today because, a week after forcing people to check they weren’t reading an Onion story when learning about the colours thing, the company has announced that it is embracing AI art. This is not only an enormous grift, but also a serious threat to the livelihoods of artists around the world, big and small.
Machines don’t make art. They’re machines! They’re just making an approximated casserole out of the human art that has been fed into them, in the vast majority of cases without credit or compensation. As Dan Sheehan says in his fantastic piece, Art In The Age Of Optimization, it’s merely “a technology that clearly exists to remove the human element from the process of artistic expression.”
Anyway! Last week, Adobe dropped an announcement saying that AI-generated art was going to be made available as part of the company’s vast library of stock images, going so far as to say the field is “amplifying human creativity.” The company boldly says, repeatedly, stuff like they have “deeply considered these questions and implemented a new submission policy that we believe will ensure our content uses AI technology responsibly by creators and customers alike,” and that “generative AI is a major leap forward for creators, leveraging machine learning’s incredible power to ideate faster by developing imagery using words, sketches, and gestures.”
Creators? Fuck off! These people aren’t creating anything! They’re punching words into a computer that has been fed actual art! And even if Adobe can, as they’re claiming, only release images that have been “properly built, used, and disclosed,” it still sucks! Gah! Attempting to make good on one of AI art’s issues—art theft—doesn’t absolve it from its others, like the fact nothing to do with these images or their creation has anything to do with art!
Reaction among artists has of course been as wildly negative as to any other AI art announcement over the past six months, with some criticising the company, while others resorted to more traditional cries, encouraging people to seek out alternatives to Adobe’s products.
Facebook, or as we’re supposed to call them now, Meta, announced earlier today that their CICERO artificial intelligence has achieved “human-level performance” in the board game Diplomacy, which is notable because that’s a game built on human interaction, not just moves and manoeuvres (like, say, chess).
Here’s a quite frankly distressing trailer:
CICERO: The first AI to play Diplomacy at a human level | Meta AI
If you’ve never played Diplomacy, and so are maybe wondering what the big deal is, it’s a board game first released in the 1950s that is played mostly by people just sitting around a table (or breaking off into rooms) and negotiating stuff. There are no dice or cards affecting play; everything is determined by humans communicating with other humans.
So for an AI’s creators to say that it is playing at a “human level” in a game like this is a pretty bold claim! One that Meta backs up by saying that CICERO is actually operating on two different levels: one crunching the progress and status of the game, the other trying to communicate with human players in a way we would understand and interact with.
Meta have roped in “Diplomacy World Champion” Andrew Goff to support their claims. He says: “A lot of human players will soften their approach or they’ll start getting motivated by revenge and CICERO never does that. It just plays the situation as it sees it. So it’s ruthless in executing to its strategy, but it’s not ruthless in a way that annoys or frustrates other players.”
That sounds optimal, but as Goff says, maybe too optimal. It also reflects the fact that while CICERO is playing well enough to keep up with humans, it’s far from perfect. As Meta themselves say in a blog post, CICERO “sometimes generates inconsistent dialogue that can undermine its objectives”, and my own criticism would be that every example they provide of its communication (like the one below) makes it look like a psychopathic office worker terrified that if they don’t end every sentence with !!! you’ll think they’re a terrible person.
Image: Meta
Of course the ultimate goal with this program isn’t to win board games. It’s simply using Diplomacy as a “sandbox” for “advancing human-AI interaction”:
While CICERO is only capable of playing Diplomacy, the technology behind this achievement is relevant to many real world applications. Controlling natural language generation via planning and RL could, for example, ease communication barriers between humans and AI-powered agents. For instance, today’s AI assistants excel at simple question-answering tasks, like telling you the weather, but what if they could maintain a long-term conversation with the goal of teaching you a new skill? Alternatively, imagine a video game in which the non player characters (NPCs) could plan and converse like people do — understanding your motivations and adapting the conversation accordingly — to help you on your quest of storming the castle.
I may not be a billionaire Facebook executive, but instead of spending all this time and money making AI assistants better, something nobody outside of AI research (and the companies funding it) seems to care about, could we not just…hire humans I can speak to instead?