ReportWire

Tag: Generative artificial intelligence

  • AI Image Generators Default to the Same 12 Photo Styles, Study Finds


    AI image generation models have massive sets of visual data to pull from in order to create unique outputs. And yet, researchers find that when these models are pushed to produce images based on a series of slowly shifting prompts, they default to just a handful of visual motifs, resulting in an ultimately generic style.

    A study published in the journal Patterns took two AI models, the image generator Stable Diffusion XL and the vision-language model LLaVA, and put them to the test by playing a game of visual telephone. The game went like this: the Stable Diffusion XL model would be given a short prompt and required to produce an image—for example, “As I sat particularly alone, surrounded by nature, I found an old book with exactly eight pages that told a story in a forgotten language waiting to be read and understood.” That image was presented to the LLaVA model, which was asked to describe it. That description was then fed back to Stable Diffusion, which was asked to create a new image based on it. This went on for 100 rounds.
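    The loop the researchers ran can be sketched in a few lines. The stub functions below are hypothetical stand-ins for Stable Diffusion XL and LLaVA, not the study's actual code; only the alternating generate-describe structure reflects the experiment.

```python
# Schematic sketch of the "visual telephone" loop from the Patterns study.
# generate_image and describe_image are toy stand-ins for the real models.

def generate_image(prompt: str) -> str:
    """Stand-in for an image generator; returns a fake 'image' token."""
    return f"image({prompt[:40]})"

def describe_image(image: str) -> str:
    """Stand-in for a vision-language model describing an image."""
    return f"a picture resembling {image}"

def visual_telephone(initial_prompt: str, rounds: int = 100) -> list[str]:
    """Alternate generation and description for a fixed number of rounds,
    returning the sequence of prompts fed to the generator."""
    prompts = [initial_prompt]
    for _ in range(rounds):
        image = generate_image(prompts[-1])    # prompt -> image
        prompts.append(describe_image(image))  # image -> new prompt
    return prompts

chain = visual_telephone("an old book in a forest", rounds=100)
print(len(chain))  # 101 prompts: the original plus one per round
```

    With real models in place of the stubs, the study's finding is that `chain` tends to drift toward one of about a dozen motifs regardless of the starting prompt.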

    © Hintze et al., Patterns

    Much like a game of human telephone, the original image was quickly lost. No surprise there, especially if you’ve ever seen one of those time-lapse videos where people ask an AI model to reproduce an image without making any changes, only for the picture to quickly turn into something that doesn’t remotely resemble the original. What did surprise the researchers, though, was the fact that the models default to just a handful of generic-looking styles. Across 1,000 different iterations of the telephone game, the researchers found that most of the image sequences would eventually fall into just one of 12 dominant motifs.

    In most cases, the shift was gradual; a few times, it happened suddenly. But it almost always happened. And the researchers were not impressed. In the study, they referred to the common image styles as “visual elevator music,” basically the type of pictures that you’d see hanging in a hotel room. The most common scenes included things like maritime lighthouses, formal interiors, urban night settings, and rustic architecture.

    Even when the researchers switched to different models for image generation and descriptions, the same types of trends emerged. Researchers said that when the game is extended to 1,000 turns, coalescing around a style still happens around turn 100, but variations spin out in those extra turns. Interestingly, though, those variations still typically pull from one of the popular visual motifs.

    AI Endpoints After 100 Iterations
    © Hintze et al., Patterns

    So what does that all mean? Mostly that AI isn’t particularly creative. In a human game of telephone, you’ll end up with extreme variance because each message is delivered and heard differently, and each person has their own internal biases and preferences that may impact what message they receive. AI has the opposite problem. No matter how outlandish the original prompt, it’ll always default to a narrow selection of styles.

    Of course, the AI model is pulling from human-created prompts, so there is something to be said about the data set and what humans are drawn to take pictures of. If there’s a lesson here, perhaps it is that copying styles is much easier than teaching taste.


    AJ Dellinger


  • Taylor Swift, Defender of Artist Ownership, Allegedly Uses AI in Videos


    Taylor Swift once said, “You deserve to own the art you make.” Apparently, that doesn’t apply to the millions of artists who have had their works fed into the data wood chipper that is generative AI tools. In the lead-up to the release of the world’s biggest pop star’s latest album, “Life of a Showgirl,” fans were treated to Easter egg videos designed to build hype. Instead, sharp-eyed Swifties started to spot what appeared to be AI-generated imagery within the teaser videos, and launched full Swift-vestigations into the situation.

    The alleged generative AI material appeared in a series of short promotional videos. Those videos were accessed via QR codes that were posted on 12 orange doors located in 12 different cities. The videos, originally uploaded via YouTube Shorts, are no longer available, but Gizmodo reviewed purported re-uploads found online. Each video featured letters which, when put together, spelled out the phrase, “You must remember everything, but mostly this, the crowd is your king.” But the mystery Taylor’s king took more of an interest in seemed to be, “Why do some of these videos look a little off?”

    No one from Swift’s camp has confirmed in any way the use of generative AI in the promotional videos, but there is certainly enough on-screen to create suspicion. Users have pointed out clipping and disappearing imagery in some videos that suggests the footage was created with generative AI. The videos appear to be a part of a partnership with Google, according to a report from The Tennessean, which covered the orange door reveal that appeared in Nashville. Gizmodo reached out to Google for comment regarding its involvement in the videos, but did not receive a response at the time of publication.

    Others have called out lettering that appears in different shots and has a distinct AI-generated quality to it, in that it is largely nonsense. A treadmill that appears in one video, for instance, has buttons that read “MOP,” “SUOP,” and “NCLINE,” with letters that are curved and blurred in ways that suggest there’s something more than just some wear and tear on the buttons. Another image, a notebook, also appears to contain made-up lettering that a human would be unlikely to produce, on account of the fact that a human knows what letters are.

    Generative AI systems are notoriously bad at generating text because, while these systems have been trained on massive sets of data and images containing text, the model has no concept of what it’s actually “looking” at. This is why generative AI models can spit out images of watches and clocks, but it’s often hard to get them to display specific times, because the model has no idea how to tell time. It just knows clocks have lines that mark time, not what those lines actually indicate.

    The inconsistencies were surprisingly common throughout the videos. Viewers pointed out a squirrel that appears to transform into a chipmunk at one point, and a changing number of lamps that appear in another shot. The Swift diehards took particular offense to AI-generated versions of a piano and guitar that were used on Swift’s Eras Tour, which shouldn’t be surprising given how big a deal was made of those custom-made instruments at the time.

    It doesn’t appear that generative AI was used in the creation of Swift’s music videos for the new album, and there doesn’t appear to be an indication that generative AI was used in the feature film released to mark the launch of the record. Gizmodo reached out to representatives for Taylor Swift, as well as Rodrigo Prieto, cinematographer of “Taylor Swift: The Official Release Party of a Showgirl,” for comment regarding the potential use of generative AI in the making of these promotional videos, music videos, and the film. No parties responded on the record at the time of publication.

    On its face, this appears to be a pretty major blunder. You can’t tell your superfans, who think every word you speak and image you post contains secret messages, to look for clues in an AI-generated video and not expect them to spot inconsistencies. But hey, maybe these weird anomalies are just part of another Easter egg reveal, right?


    AJ Dellinger


  • The First 24 Hours of Sora 2 Chaos: Copyright Violations, Sam Altman Shoplifting, and More


    On Tuesday, OpenAI released Sora 2, the latest version of its video and audio generation tool that it promised would be the “most powerful imagination engine ever built.” Less than a day into its release, it appears the imaginations of most people are dominated by copyrighted material and existing intellectual property.

    In tandem with the release of its newest model, OpenAI also dropped a Sora app, designed for users to generate and share content with each other. While the app is currently invite-only, even if you just want to see the content, plenty of videos have already made their way to other social platforms. The videos that have taken off outside of OpenAI’s walled garden contain lots of familiar characters: Sonic the Hedgehog, Solid Snake, Pikachu.

    There do appear to be at least some types of content that are off-limits in OpenAI’s video generator. Users have reported that the app rejects requests to produce videos featuring Darth Vader and Mickey Mouse, for instance. That restriction appears to be the result of OpenAI’s new approach to copyrighted material, which is pretty simple: “We’re using it unless we’re explicitly told not to.” The Wall Street Journal reported earlier this week that OpenAI has approached movie studios and other copyright holders to inform them that they will have to opt out of having their content appear in Sora-generated videos. Disney did exactly that, per Reuters, so its characters should be off-limits for content created by users.
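    That opt-out approach amounts to a denylist rather than an allowlist: everything is fair game unless a rightsholder has asked to be excluded. A minimal sketch of that logic (the list contents and function name are illustrative assumptions, not OpenAI's implementation):

```python
# Minimal sketch of an opt-out content filter: permit everything unless the
# rightsholder has explicitly opted out. Names here are illustrative only.

OPTED_OUT = {"darth vader", "mickey mouse"}  # characters reported as blocked

def prompt_allowed(prompt: str) -> bool:
    """Reject a prompt only if it mentions an opted-out character."""
    text = prompt.lower()
    return not any(name in text for name in OPTED_OUT)

print(prompt_allowed("Sonic the Hedgehog rides a skateboard"))  # True
print(prompt_allowed("Mickey Mouse robs a bank"))               # False
```

    The design choice is the whole controversy: an allowlist would invert the default and block characters unless permission had been granted.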

    That doesn’t mean the model wasn’t trained on that content, though. Earlier this month, The Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. For instance, WaPo was able to create a short video clip that closely resembled the Netflix show “Wednesday,” down to the font displayed and a model that looks suspiciously like Jenna Ortega’s take on the titular character. Netflix told the publication it did not provide content to OpenAI for training.

    The outputs of Sora 2 reveal that it’s clearly been fed its fair share of copyrighted material, too. For instance, users have managed to generate scenes from “Rick and Morty,” complete with relatively accurate-sounding voices and art style. (Though, if you go outside of what the model knows, it seems to struggle. A user put OpenAI CEO Sam Altman into the “Rick and Morty” universe, and he looks troublingly out of place.)

    Other videos at least attempt to be a little creative about how they use copyrighted characters. Users have, for instance, thrown Ronald McDonald into an episode of “Love Island” and created a fake video game that teams up Tony Soprano from The Sopranos and Kirby from, well, Kirby.

    Interestingly, not all potential copyright violations come from users who are explicitly asking for it. For instance, one user gave Sora 2 the prompt “A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors,” and it just straight up spit out an anime-style version of The NeverEnding Story. Even when users aren’t actively calling upon the model to create derivative art, it seems like it can’t help itself.

    “People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connection with the fans,” a spokesperson for OpenAI told Gizmodo. “We’re working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora.”

    There is one other genre of popular and potentially legally dubious content that has become popular among Sora 2 users, too: The Sam Altman cinematic universe. OpenAI claims that users are not able to generate videos that use the likeness of other people, including public figures, unless those figures upload their likeness and give explicit permission. Altman apparently has given his ok (which makes sense, he’s the CEO and he was featured prominently in the company’s fully AI-generated promotional video for Sora 2’s launch), and users are making the most of having access to his image.

    One user claimed to have the “most liked” video in the Sora social app, which depicted Altman getting caught shoplifting GPUs from Target. Others have turned him into a skibidi toilet, a cat, and, perhaps most fittingly, a shameless thief stealing creative materials from Hayao Miyazaki.

    There are some questions about the use of real-world brands and logos in these videos, too. In the video of Altman in Target, for instance, how does Target feel about its logo and store likeness being used? Another user inserted their own likeness into an NFL game, which seems to pretty clearly use the logos of the New York Giants, Dallas Cowboys, and the NFL itself. Is that considered kosher?

    OpenAI obviously wants people to lend their likeness to the app, as it creates a lot more avenues for engagement, which seems to be its primary currency right now. But the Altman examples seem instructive as to the limits of this: It’s hard to imagine that too many public figures are going to submit themselves to the humiliation ritual of allowing other people to control their image. Worse, imagine the average person getting their likeness dropped into a video that depicts them committing a crime and the potential social ramifications they might face.

    A spokesperson for OpenAI said Altman has made his likeness available for anyone to play with, and users who verify their likeness in Sora can set who can make use of it: just the user, mutual friends, select friends, or everyone. The app also gives users the ability to see any video in which their likeness has been used, including those that are not published, and can revoke access or remove a video containing their image at any time. The spokesperson also said that videos contain metadata that show they are AI-generated and watermarked with an indicator they were created with Sora.
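    The permission tiers the spokesperson described map onto a simple access-control check. The sketch below is a hypothetical illustration of that logic; the class and method names are invented, and Sora's actual implementation is not public.

```python
# Sketch of the likeness-permission tiers described by OpenAI's spokesperson:
# only the user, mutual friends, select friends, or everyone. Hypothetical.

from dataclasses import dataclass, field

@dataclass
class LikenessPolicy:
    owner: str
    level: str = "only_me"  # "only_me" | "mutuals" | "select" | "everyone"
    mutuals: set = field(default_factory=set)
    selected: set = field(default_factory=set)

    def may_use(self, requester: str) -> bool:
        """Return True if `requester` may generate videos with this likeness."""
        if requester == self.owner or self.level == "everyone":
            return True
        if self.level == "mutuals":
            return requester in self.mutuals
        if self.level == "select":
            return requester in self.selected
        return False  # "only_me": nobody else gets access

policy = LikenessPolicy(owner="sama", level="everyone")
print(policy.may_use("any_user"))  # True: Altman opened his likeness to all
```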

    There are, of course, ways to defeat all of that. The fact that a video can be deleted from Sora doesn’t mean that an exported copy can be deleted. Likewise, the watermark could be cropped out. And most people aren’t checking the metadata of videos to ensure authenticity. What the fallout of this looks like, we will have to see, but there will be fallout.


    AJ Dellinger


  • Educators get new guidance for age of AI


    STATE HOUSE, BOSTON — Artificial intelligence in classrooms is no longer a distant prospect, and Massachusetts education officials on Monday released statewide guidance urging schools to use the technology thoughtfully, with an emphasis on equity, transparency, academic integrity and human oversight.

    “AI already surrounds young people. It is baked into the devices and apps they use, and is increasingly used in nearly every system they will encounter in their lives, from health care to banking,” the Department of Elementary and Secondary Education’s new AI Literacy Module for Educators says.



    “Knowledge of how these systems operate—and how they may serve or undermine individuals’ and society’s goals—helps bridge classroom learning with the decisions they will face outside school.”

    The Department of Elementary and Secondary Education released the learning module for educators, as well as a new Generative AI Policy Guidance document, on Monday ahead of the 2025-2026 school year, a formal attempt to set parameters around the technology that has infiltrated education.

    Both were developed in response to recommendations from a statewide AI Task Force and are meant to give schools a consistent framework for deciding when, how and why to use AI in ways that are safe, ethical and instructionally meaningful, according to a DESE spokesperson.

    The department stressed that the guidance is “not to promote or discourage the use of AI. Instead, it offers essential guidance to help educators think critically about AI — and to decide if, when, and how it might fit into their professional practice.”

    The learning module for educators itself notes that it was written with the help of generative AI.

    The first draft was intentionally written without AI. A disclosure says “the authors wanted this resource to reflect the best thinking of experts from DESE’s AI task force, from DESE, and from other educators who supported this work. When AI models create first drafts, we may unconsciously ‘anchor’ on AI’s outputs and limit our own critical thinking and creativity; for this resource about AI, that was a possibility the authors wanted to avoid.”

    However, the close-to-final draft was entered into a large language model like ChatGPT-4o or Claude Sonnet 4 “to check that the text was accessible and jargon-free,” it says.

    In Massachusetts classrooms, AI use has already started to spread. Teachers are experimenting with ChatGPT and other tools to generate rubrics, lesson plans, and instructional materials, and students are using it to draft essays, brainstorm ideas, or translate text for multilingual learners. Beyond teaching, districts are also using AI for scheduling, resource allocation and adaptive assessments.

    But the state’s new resources caution that AI is far from a neutral tool, and questions swirl around whether AI can be used to enhance learning, or shortcut it.

    “Because AI is designed to mimic patterns, not to ‘tell the truth,’ it can produce responses that are grammatically correct and that sound convincing, but are factually wrong or contrary to humans’ understanding of reality,” the guidance says.

    In what it calls “AI fictions,” the department warns against overreliance on systems that can fabricate information, reinforce user assumptions through “sycophancy,” and create what MIT researchers have described as “cognitive debt,” where people become anchored to machine-generated drafts and lose the ability to develop their own ideas.

    The guidance urges schools to prioritize five guiding values when adopting AI tools: data privacy and security, transparency and accountability, bias awareness and mitigation, human oversight and educator judgment, and academic integrity.

    On privacy, the department recommends that districts only approve AI tools vetted through a formal data privacy agreement process and teach students how their data is used when they interact with such systems. For transparency, schools are encouraged to inform parents about classroom AI use, maintain public lists of approved tools, and describe how each is used.

    Bias is another central concern. The guidance suggests generative AI tools have built-in harmful biases, as they are trained on human data, and that teachers and students should examine how AI responses may vary.

    “When AI systems go unexamined, they can inadvertently reinforce historical patterns of exclusion, misrepresentation, or injustice,” the department wrote.

    Officials warn that predictive analytics forecasting a student’s future outcome could incorrectly flag them for academic intervention, based on biased AI interpretation of data.

    “Automated grading tools may penalize linguistic differences. Hiring platforms might downrank candidates whose experiences or even names differ from dominant norms. At the same time, students across the Commonwealth face real disparities in access to high-speed internet, up-to-date devices, and inclusive learning environments,” the guidance says.

    The document also places responsibility on educators to oversee and adjust AI outputs. For example, teachers might use AI to draft a personalized reading plan but still adapt it to reflect a student’s individual interests, such as sports or graphic novels.

    For students, the state is moving away from a tone of outright prohibition of AI, and towards one of disclosure for the sake of academic integrity.

    The documents suggest that schools could come up with policies for students to include an “AI Used” section in their papers, clarifying how and when they used tools, while teachers teach the distinction between AI-assisted brainstorming and AI-written content.

    “Schools teach and encourage thoughtful integration of AI rather than penalizing use outright... AI is used in ways that reinforce learning, not short-circuit it. Clear expectations guide when and how students use AI tools, with an emphasis on originality, transparency, and reflection,” it says.

    Beyond classroom rules, it emphasizes “AI literacy” — not only the technical knowledge, but understanding and evaluating the responsible use of these tools — as an important job and civic skill.

    “Students need to be empowered not just as users, but as informed, critical thinkers who understand how AI works, when it can mislead, and how to assess its impacts,” the guidance says.

    That literacy extends to the personal and environmental costs of technology. Students, the department suggests, should reflect on their digital footprints and data permanence while also considering environmental impacts of AI like energy use and e-waste.

    The new resources emphasize that “teaching with AI is not about replacing educators—it’s about empowering them to facilitate rich, human-centered learning experiences in AI-enhanced environments.”

    The classroom guidance arrives as Gov. Maura Healey has taken a prominent role in shaping Massachusetts’ AI landscape. Last year she launched the state’s AI Hub, calling it a bid to make Massachusetts a leader in both developing and regulating artificial intelligence. Healey has promoted an all-in approach to integrating AI across sectors, highlighting its potential for economic development.

    Education officials positioned their new resources as part of that broader statewide strategy.

    “Over the coming years, schools will play a critical role in supporting students who will be graduating into this ecosystem by providing equitable opportunities for them to learn about the safe and effective use of AI,” it says.

    The documents acknowledge that AI is already embedded in many of the tools students and teachers use daily. The challenge, they suggest, is not whether schools will use AI but how they will shape its role.

    The release also comes against the backdrop of a push on Beacon Hill to limit technology in classrooms.

    The Senate this summer approved a bill that would prohibit student cellphone use in schools starting in the 2026-2027 academic year, reflecting growing concern that constant device access hampers focus and learning. Lawmakers backing the measure have likened cellphones in classrooms to “electronic cocaine” and “a youth behavioral health crisis on steroids.”

    The House has not said when it plans to take up the measure, or even when representatives will return for serious lawmaking, a timetable that now appears likely to fall after the new school year begins.

    That uncertainty leaves schools in a period of flux, weighing how to integrate emerging AI tools even as lawmakers consider pulling back on other forms of student technology use.


    By Sam Drysdale | State House News Service


  • Video Game Actors Go On Strike For AI Protections



    Video game actors are going on strike for the first time since 2017 after months of negotiations with Activision, Epic Games, and other big publishers and studios over higher pay, better safety measures, and protections from new generative AI technologies. They’ll be hitting the picket line a year after Hollywood actors and writers wrapped up their own historic strikes in an escalation that could have big consequences for the development and marketing of some of the industry’s biggest games.

    Members of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) voted last fall to authorize a strike citing an unwillingness of big game companies to budge on guaranteeing performers rights over how their work is used in training AI or creating AI-generated copies. Roughly 2,600 voice actors and motion capture artists, including talents like Troy Baker from The Last of Us, Jennifer Hale from Mass Effect, and Matt Mercer from The Legend of Zelda: Tears of the Kingdom, have been working without an Interactive Media Agreement since November 2022. The strike starts on July 26 at 12:01 a.m.

    “The video game industry generates billions of dollars in profit annually. The driving force behind that success is the creative people who design and create those games,” chief negotiator Duncan Crabtree-Ireland said in a statement. “That includes the SAG-AFTRA members who bring memorable and beloved game characters to life, and they deserve and demand the same fundamental protections as performers in film, television, streaming, and music: fair compensation and the right of informed consent for the A.I. use of their faces, voices, and bodies. Frankly, it’s stunning that these video game studios haven’t learned anything from the lessons of last year – that our members can and will stand up and demand fair and equitable treatment with respect to A.I., and the public supports us in that.”

    Read More: Video Game Voice Actors Are Ready To Strike Over AI. Here’s Why

    “We are disappointed the union has chosen to walk away when we are so close to a deal, and we remain prepared to resume negotiations,” Audrey Cooling, a spokesperson for the companies involved in the Interactive Media Agreement, said in an emailed statement. “We have already found common ground on 24 out of 25 proposals, including historic wage increases and additional safety provisions. Our offer is directly responsive to SAG-AFTRA’s concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the IMA. These terms are among the strongest in the entertainment industry.”

    While games set to come out this fall, like Dragon Age: The Veilguard, whose recently revealed voice cast includes several guild members, likely already have their voice and motion-capture work completed, the strike means SAG-AFTRA members will be unavailable for projects that are years out, and won’t be around to record any potential last-minute rewrites for games that are closer to release. Games relied much less on actor performances in the past, but most popular franchises are now fully voice-acted, with the biggest-budget productions using motion capture to transfer actors’ real-life performances, frame by frame, into the game.

    The last time video game actors went on strike, in 2016, it was primarily over pay rates, and the walkout lasted an entire year. It’s unclear if the strike this time around will be over any sooner. Unlike with the issue of higher pay, people involved in the current negotiations say that the lack of AI protections poses an existential threat to actors and their creative output. Just this week, Wired reported that companies like Activision Blizzard and Riot Games were moving ahead with using generative AI tools to help create concept art and even potentially assets that would make it into finished games like Call of Duty: Modern Warfare 3.

    “Eighteen months of negotiations have shown us that our employers are not interested in fair, reasonable A.I. protections, but rather flagrant exploitation,” negotiating committee chair Sarah Elmaleh said in a statement. “We refuse this paradigm—we will not leave any of our members behind, nor will we wait for sufficient protection any longer. We look forward to collaborating with teams on our Interim and Independent contracts, which provide A.I. transparency, consent and compensation to all performers, and to continuing to negotiate in good faith with this bargaining group when they are ready to join us in the world we all deserve.”

    SAG-AFTRA video game voice actors are set to hold a panel featuring Ashly Burch (Horizon Forbidden West), Noshir Dalal (Red Dead Redemption II), and others at San Diego Comic-Con later this week on July 26.

    Update 7/25/2024 3:42 p.m. ET: Added a statement from the game companies.



    Ethan Gach


  • Apple’s AI Cloud System Makes Big Privacy Promises, but Can It Keep Them?



    Apple’s new Apple Intelligence system is designed to infuse generative AI into the core of iOS. The system offers users a host of new services, including text and image generation as well as organizational and scheduling features. Yet while the system provides impressive new capabilities, it also brings complications. For one thing, the AI system relies on a huge amount of iPhone users’ data, presenting potential privacy risks. At the same time, the AI system’s substantial need for increased computational power means that Apple will have to rely increasingly on its cloud system to fulfill users’ requests.

    Apple has historically offered iPhone customers unparalleled privacy; it’s a big part of the company’s brand. Part of those privacy assurances has been the option to choose when mobile data is stored locally and when it’s stored in the cloud. While an increased reliance on the cloud might ring some privacy alarm bells, Apple has anticipated these concerns and created a startling new system that it calls its Private Cloud Compute, or PCC. This is really a cloud security system designed to keep users’ data away from prying eyes while it’s being used to help fulfill AI-related requests.

    On paper, Apple’s new privacy system sounds really impressive. The company claims to have created “the most advanced security architecture ever deployed for cloud AI compute at scale.” But what looks like a massive achievement on paper could ultimately cause broader issues for user privacy down the road. And it’s unclear, at least at this juncture, whether Apple will be able to live up to its lofty promises.

    How Apple’s Private Cloud Compute Is Supposed to Work

    In many ways, cloud systems are just giant databases. If a bad actor gets into that system/database, they can look at the data contained within. However, Apple’s Private Cloud Compute (PCC) brings a number of unique safeguards that are designed to prevent that kind of access.

    Apple says it has implemented its security system at both the software and hardware levels. The company created custom servers that will house the new cloud system, and those servers go through a rigorous process of screening during manufacturing to ensure they are secure. “We inventory and perform high-resolution imaging of the components of the PCC node,” the company claims. The servers are also being outfitted with physical security mechanisms such as a tamper-proof seal. iPhone users’ devices can only connect to servers that have been certified as part of the protected system, and those connections are end-to-end encrypted, meaning that the data being transmitted is pretty much untouchable while in transit.

    Once the data reaches Apple’s servers, there are more protections to ensure that it stays private. Apple says its cloud is leveraging stateless computing to create a system where user data isn’t retained past the point at which it is used to fulfill an AI service request. So, according to Apple, your data won’t have a significant lifespan in its system. The data will travel from your phone to the cloud, interact with Apple’s high-octane AI algorithms—thus fulfilling whatever random question or request you’ve submitted (“draw me a picture of the Eiffel Tower on Mars”)—and then the data (again, according to Apple) will be deleted.

    Apple has instituted an array of other security and privacy protections that can be read about in more detail on the company’s blog. These defenses, while diverse, all seem designed to do one thing: prevent any breach of the company’s new cloud system.

    But Is This Really Legit?

Companies make big cybersecurity promises all the time, and it’s usually impossible to verify whether they’re telling the truth. FTX, the failed crypto exchange, once claimed it kept users’ digital assets in air-gapped servers. Later investigation showed that was pure bullshit. But Apple is different, of course. To prove to outside observers that it’s really securing its cloud, the company says it will launch something called a “transparency log” that will include full production software images (basically copies of the code being used by the system). It plans to publish these logs regularly so that outside researchers can verify that the cloud is operating just as Apple says.
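Conceptually, the device-side check this enables is a membership test: a client hashes the software image a server claims to be running and refuses to talk to any server whose hash is absent from the published log. The sketch below is illustrative only; the function names and the flat set standing in for the log are assumptions, and Apple’s real protocol involves signed hardware attestations rather than a bare hash lookup.

```python
import hashlib

def verify_server_image(image_bytes: bytes, transparency_log: set[str]) -> bool:
    """Return True if the hash of a server's software image appears in
    the published transparency log (modeled here as a set of hex digests)."""
    return hashlib.sha256(image_bytes).hexdigest() in transparency_log

# Hypothetical published log containing one known-good release image.
log = {hashlib.sha256(b"pcc-release-build").hexdigest()}

ok = verify_server_image(b"pcc-release-build", log)   # a certified server
bad = verify_server_image(b"tampered-build", log)     # an unknown image
```

The design choice worth noting is that the log is public: any researcher can recompute the hashes, so a server running unpublished code has nowhere to hide.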

    What People Are Saying About the PCC

    Apple’s new privacy system has notably polarized the tech community. While the sizable effort and unparalleled transparency that characterize the project have impressed many, some are wary of the broader impacts it may have on mobile privacy in general. Most notably—aka loudly—Elon Musk immediately began proclaiming that Apple had betrayed its customers.

    Simon Willison, a web developer and programmer, told Gizmodo that the “scale of ambition” of the new cloud system impressed him.

    “They are addressing multiple extremely hard problems in the field of privacy engineering, all at once,” he said. “The most impressive part I think is the auditability—the bit where they will publish images for review in a transparency log which devices can use to ensure they are only talking to a server running software that has been made public. Apple employs some of the best privacy engineers in the business, but even by their standards this is a formidable piece of work.”

    But not everybody is so enthused. Matthew Green, a cryptography professor at Johns Hopkins University, expressed skepticism about Apple’s new system and the promises that went along with it.

    “I don’t love it,” said Green with a sigh. “My big concern is that it’s going to centralize a lot more user data in a data center, whereas right now most of that is on people’s actual phones.”

    Historically, Apple has made local data storage a mainstay of its mobile design, because cloud systems are known for their privacy deficiencies.

    “Cloud servers are not secure, so Apple has always had this approach,” Green said. “The problem is that, with all this AI stuff that’s going on, Apple’s internal chips are not powerful enough to do the stuff that they want it to do. So they need to send the data to servers and they’re trying to build these super protected servers that nobody can hack into.”

    He understands why Apple is making this move, but doesn’t necessarily agree with it, since it means a higher reliance on the cloud.

    Green says Apple also hasn’t made it clear whether it will explain to users what data remains local and what data will be shared with the cloud. This means that users may not know what data is being exported from their phones. At the same time, Apple hasn’t made it clear whether iPhone users will be able to opt out of the new PCC system. If users are forced to share a certain percentage of their data with Apple’s cloud, it may signal less autonomy for the average user, not more. Gizmodo reached out to Apple for clarification on both of these points and will update this story if the company responds.

    To Green, Apple’s new PCC system signals a shift in the phone industry to a more cloud-reliant posture. This could lead to a less secure privacy environment overall, he says.

    “I have very mixed feelings about it,” Green said. “I think enough companies are going to be deploying very sophisticated AI [to the point] where no company is going to want to be left behind. I think consumers will probably punish companies that don’t have great AI features.”

    [ad_2]

    Lucas Ropek

    Source link

  • How To Create AI Images on Midjourney

    [ad_1]

There are plenty of apps that can generate pictures using artificial intelligence. Still, Midjourney remains one of the best and most popular options, having launched in beta form in July 2022.

    It’s not free to use: The price of admission starts at $10 a month or $96 a year, which gives you 3.3 hours of image generation time per month (images usually take around a minute to render). However, the quality of the end result may well tempt you into a subscription if you need a lot of AI art.

    Assuming you’re ready to sign up (for a month at least), here’s how to get started with Midjourney—the commands you need to know, how to save and browse your images, and some of the capabilities of the generative AI tool.

    Getting started

    Midjourney works through Discord: You can join the Midjourney channel here, and you’ll need to sign up for a (free) Discord account if you don’t already have one. The next steps involve two bits of admin—agreeing to the Midjourney terms of service and signing up for one of the Midjourney subscription tiers. You’ll get a neat little table outlining the differences between each tier.

With all that out of the way, Midjourney does a decent job of explaining how everything works. Unless you’re on one of the more expensive plans, you’ll be writing your prompts and getting your images in a channel that’s open to other users, so don’t be shy: it’s actually a good way to get inspiration from what other people are doing and to see what’s possible with the AI engine.

The onboarding process is straightforward.
    Screenshot: Midjourney

    To begin with, you’ll need to get involved in one of the #newbie channels, which are clearly linked on the left of the web interface. Click to jump to any one of them and see what’s happening—look at how different art styles are described to get different results, from “abstract expressive” to “hyper-realistic” and everything in between.

    The other online location you need to know about is the official Midjourney website. While all of your image generation is done on Discord, this website is where you can find an archive of all the pictures you’ve made and browse through some of the other artwork that’s proving popular on the Midjourney network. From here you’re also able to read about updates to Midjourney.

    Writing prompts

    Head to a #newbie channel, type “/imagine” followed by a space, and you’re ready to start prompting. If you’ve never used an AI image generator before, describe what you want to see: You can be as creative as possible, putting any kind of person or object in any kind of setting and using any kind of artwork style.

As usual with generative AI tools, the more specific and precise you can be, the better. However, you can be vague if you want to (it’s just less likely you’ll get something close to what you were imagining). Ask for a watercolor of an elephant in a boat or a photo of an apple on a table; it’s up to you.

    Type your prompts into one of the newbie channels.
    Screenshot: Midjourney

After a few moments of thinking, you’ll get four generated images based on your prompt—if you want Midjourney to try again, click the re-roll button (the blue-and-white circle of arrows). If you like one of the images more than the others, you can click one of the V1 to V4 buttons to see four variations on it (the images are numbered from left to right and from top to bottom).

Click on any of the U1 to U4 buttons to take a closer look. Here, you get access to some editing features: You’re able to create new variations on all or just part of the image, zoom out on the image (and have AI fill out the canvas), or extend the image in any direction using the four arrow buttons. Click on any image to see it in full-size mode, then right-click to save it somewhere else.

    Going further

You can add a variety of parameters to your prompts, and there’s a full list here. They can be used to change an image’s aspect ratio, create images that will tile, or produce more varied results, for example. So, if you need a wide rather than square picture, you might append “--aspect 16:9” to the end of your prompt.

Also worth knowing about are the parameters “--cref” and “--sref”, both of which can be followed by a URL pointing at an image. Use the former (character reference) to show Midjourney a character you want to appear in your pictures, and the latter (style reference) to show Midjourney the style you’d like your pictures to have.
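Putting those parameters together, prompts might look like the following. The subjects and URLs here are invented for illustration; only the parameter syntax comes from the Midjourney documentation mentioned above.

```
/imagine a watercolor of an elephant in a boat --aspect 16:9
/imagine a knight walking through a neon city --sref https://example.com/style.png
/imagine the same knight resting by a campfire --cref https://example.com/knight.png
```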

    The Midjourney website collects all of your images.
    Screenshot: Midjourney

    There are also a couple of other commands that you can use instead of “/imagine” on Discord. Use “/describe” to get Midjourney to return a text prompt based on an image you supply or “/blend” to have Midjourney combine up to five different images into something new. You can point to images on the web or upload them from your device.

    Head to the Midjourney website to find all of your pictures and to download them whenever necessary—eventually, you’ll be able to generate images from here too, but the feature hasn’t been fully launched yet. You can use the filters on the right to sift through the artwork you’ve created, and it’s also possible to download multiple images at the same time or sort them into custom folders if required.

    [ad_2]

    David Nield

    Source link

  • Xbox Slammed For AI-Generated Art Promoting Indie Games

    [ad_1]

    ‘Tis the season to promote indie games with AI-generated junk, apparently. A Microsoft Twitter account recently posted low-effort, energy-intensive art promoting indie games on Xbox before later deleting it after getting roundly mocked by fans and developers alike.

    “Walking in a indie wonderlaaand,” the ID@Xbox account tweeted on December 27. “What were your favorite indie games of the year?” The post was accompanied by an AI-generated image of children sledding down a hill with a giant green Xbox logo on it.

    Screenshot: Microsoft / Kotaku

    It looked harmless at first, but a second or third glance immediately revealed telltale AI anomalies like children maneuvering their sleds with cranks attached to nothing and fishing in the snow for presents with weird black tendrils. A man playing a gaming handheld in the center top of the image has had his top lip replaced by teeth. A child jumping through the snow appears to have a mustache. It was a really bad look considering ID@Xbox is supposed to be the human-facing team within the megacorporation championing individual creators and small independent teams.

“Bro not Xbox using ayy-eye to promote indie devs,” wrote pixel artist TAHK0. “Nothing says ‘we don’t care about indie developers’ like using AI,” wrote artist NecroKuma3. “If you can’t hire an artist to do advertising, I highly doubt you’ll do it with independent developers.” The company quietly deleted the post overnight without acknowledging the backlash. Microsoft did not immediately respond to requests for comment.

    While not posting half-assed AI art to promote artists seems like a no-brainer, we’re seeing more and more companies do it lately. There was the AI-generated promotional image for Amazon’s Fallout TV show, AI-generated art promoting a new Pokémon GO event, and even Ubisoft accounts representing offices where staff had recently been laid off putting out AI-generated Assassin’s Creed art.

    When this stuff first started happening it felt shitty but low stakes. Increasingly it feels clear, however, that companies are taking the same approach to AI art that they have with every other internet age advancement, operating under the assumption that people will complain at first but eventually they’ll get tired of it and move on to being angry about something else. Boil the frog slowly enough and eventually it won’t realize it has 11 fingers, 13 toes, and weird spindly wires coming out of its back.

    Read More: AI Creating ‘Art’ Is An Ethical And Copyright Nightmare

    As a cheerleader for AI technology, however, Microsoft’s role in this is especially egregious. The company is already promoting tools for AI-generated content in games, and encouraging all 20 Bing users to play around with its AI art tools. Never mind that no one is actually quite sure how the technology will make money, or if it’s even legal. If it can replace human creativity with predictable slop and reduce headcount, it must be a win-win.

    According to the MIT Technology Review, every AI-generated image requires as much energy as an entire smartphone charge. And Microsoft’s own internal environmental report blamed the technology for a 34 percent spike in its water usage to cool all the racks of computing power required for, among other things, enabling users to shitpost about Kirby doing 9/11. As Immortality game director Sam Barlow put it following the AI-generated ID@Xbox post, “Really impressive that just as we were finally starting to address the climate emergency, we invented stupid ways to undo all our progress.”


    [ad_2]

    Ethan Gach

    Source link

  • Ubisoft Using AI-Generated Assassin’s Creed Art Amid Cost Cutting

    [ad_1]

    Happy Halloween! Ubisoft Netherlands invites you to celebrate the spooky festivities with AI-generated Assassin’s Creed art. Terrifying indeed!

People first began to notice some of Ubisoft’s social media channels posting what appeared to be AI-generated versions of Assassin’s Creed art last night. A smoothed-over, off-brand Ezio emerged on the French publisher’s X (formerly known as Twitter) account for Latin America. “In other amazing industry news here’s an official Ubisoft account with 300K followers posting AI art,” tweeted Forbes contributor Paul Tassi. The publisher’s post was mocked for making Ezio look like a Fortnite character and for one character in the background wielding gun grips like knives. The tweet was deleted soon after.

    Not to be outdone, however, the Ubisoft Netherlands account followed up with its own AI-looking Ezio art complete with Jack-o’-lanterns. “Which Ubisoft game is perfect for this horrible evening?” the account asked in Dutch. Clearly the one the Assassin’s Creed maker was playing with fans’ hearts.

    Read More: AI Creating ‘Art’ Is An Ethical And Copyright Nightmare

    Ubisoft recently revealed that over 1,000 people have left the company in the last year as part of its “cost reduction” program. Some of those departures were voluntary, but others included layoffs across customer support, marketing, and other departments in Europe, the U.S., and elsewhere. “Ubisoft literally conducting layoffs this year and last month, and they’re posting AI art,” tweeted film concept artist Reid Southen. “Unbelievable. What the hell is the game industry doing right now.”

    Still, over 19,000 people continue to work at Ubisoft, including many devoted just to the Assassin’s Creed franchise and all of its sequels, spin-offs, and other incarnations currently in the pipeline. Surely one of them could have made some art for the social media accounts. Or the company could have just used one of its many existing Ezio images. Anything would have been preferable to posting ugly AI-generated crap as thousands are laid off across the video game industry this year.

    Fans have had to become increasingly vigilant in 2023 about companies trying to pass off AI-generated images in their marketing, as DALL-E 2, Midjourney, and other AI text-to-image models make it easier than ever to cobble together fake art. Amazon did it to promote its upcoming Fallout TV show. It sure seemed like Niantic did it to promote upcoming content in Pokémon Go. Legendary Studio Ghibli director Hayao Miyazaki calling AI art tools “an insult to life itself” back in 2016 has never felt so prophetic.


    [ad_2]

    Ethan Gach

    Source link