ReportWire

Tag: AI Slop

  • Why People Think AI Fight Between Tom Cruise & Brad Pitt Was A Scam


    On February 10, filmmaker Ruairí Robinson made a bold claim on X. “This was a 2 line prompt in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk,” he wrote. The post was accompanied by a video of Tom Cruise and Brad Pitt performing a fight scene together. The clip gained attention for its apparent sophistication: it appeared well-choreographed, competently shot, and appropriately lit, all elements that other AI video tools have struggled to convincingly replicate. If Robinson’s claim were true, this would be a significant leap forward in AI video technology, the kind of thing AI hype-men have been shilling for years and which has, until now (if Robinson is to be believed), turned out to be nothing more than snake oil. There’s just one problem: It’s probably still snake oil.

    Aron Peterson, a writer and software developer who has also worked in film production, post-production, and visual effects, published a post on his blog, Shokunin Studio, questioning Robinson’s story. “The claims being made immediately rubbed me up the wrong way,” Peterson wrote. “Other demos of the Seedance model had the usual errors we have come to expect from AI video generators [but this one didn’t].” In particular, Peterson explained, “AI video generators are really bad at simulating realistic camera moves, especially handheld shaky cam,” but in the Cruise/Pitt video, “we can see the camera movement.”

    So Peterson started researching Seedance 2.0, the tool Robinson used to create the video: a new AI model from TikTok developer ByteDance that’s already doing large-scale copyright infringement. Peterson “hopped over to Seedance’s website and it only took 10 seconds to find green screen footage of two stuntmen performing the same fight choreography we see in the Cruise vs Pitt scene,” he said. He also posted a comparison of the two videos on YouTube.


    “Was the input really just a 2 line prompt or was it actually 2 lines, green screen video footage, and face references too?” Peterson asked. “The evidence appears to show that stuntmen were filmed from several angles, that a clip had to be generated for every angle, and then finally all clips were stitched together for marketing.” Peterson’s evidence implies that the Cruise/Pitt fight scene wasn’t entirely AI generated; instead, it was probably just face replacement and background creation laid on top of footage that already existed. As TV writer David Slack put it on Bluesky, “In other words, like most AI hype — it was a con.”


    Jen Lennon


  • NFL-Related Accounts on Facebook Are Posting Some of the Most Shameless AI Slop Yet


    If you haven’t checked your Facebook account in a while, fear not: the spam accounts are still doing very well. Now they have eerie, ever-advancing AI slop in their arsenal, and, lately, football fans to prey on.

    There’s a group of accounts on Facebook claiming to be fan accounts for various National Football League teams. But a quick scroll through these pages, each sporting a couple thousand followers, reveals misinformation paired with a series of seemingly AI-generated photos. Judging by the comment sections and the number of likes some of these photos get, people fully believe what the pages post.

    “After His Desire To Return To The Steelers Was Not Fulfilled, Instead Of Reacting With Anger Or Resentment, The Former Player Chose To Retire And Join The Pittsburgh Police Department To ‘Wear Pittsburgh Colors Once Again,’” a Pittsburgh Steelers fan account with 11,000 followers claimed in a post earlier this week. The post does not name the so-called player but is accompanied by a seemingly AI-generated image of former wide receiver Adam Thielen in a police uniform. Thielen recently announced his retirement and briefly played for the Steelers late last year. He has not shared any plans to join Pittsburgh law enforcement.

    Another such account, a Denver Broncos fan account with more than 6,000 followers called “Wild Horse Warriors,” found a victim not in a player but in Broncos reporter Cody Roark. A post with an AI-generated image of Roark holding a child claimed that he had passed away following a domestic violence incident and left behind a 5-year-old child. Not only is Roark alive and well, he doesn’t even have kids.

    “Usually you see that happen to, like, high-profile celebrities,” Roark told The Denver Post. “For that to happen to me was just really weird.”

    The account was created just this past November and has since been shut down by Meta after The Denver Post reached out for comment. In its two-month existence, the account reportedly disseminated a slew of misinformation posts about Broncos players as well, including a false claim that wide receiver Courtland Sutton refused to wear an LGBTQ+ solidarity armband during a game. But even though “Wild Horse Warriors” is now a thing of the past, similar accounts continue to proliferate on Facebook. One such account, called “Broncos Stampede Crew,” made the same LGBTQ+ armband claim about Broncos quarterback Bo Nix. The phone number attached to that account appears to be based in Vietnam.

    What do these accounts have to gain from fake AI-generated news about football players? While it’s not certain how these specific accounts operate, the pattern fits a playbook long used by Facebook spam accounts. Each post by these fake fan accounts links out to an article on a website with a name like “ESPNS” or “NCC News” that pretends to be a reputable news organization.

    “Spam Pages largely leveraged the attention they obtained from viewers to drive them to off-Facebook domains, likely in an effort to garner ad revenue,” Harvard researchers wrote in a study from 2024. These websites are usually “heavily ad-laden content farm domains—some of which themselves appeared to consist of primarily AI-composed text.”

    Other pages may be using these fake shock-value clickbait stories to accumulate an audience and good standing with the algorithm before completely changing the purpose of the page.

    “It could be that these were nefarious pages that were trying to build an audience and would later pivot to trying to sell goods or link to ad-laden websites or maybe even change their topics to something political altogether,” Georgetown researcher Josh Goldstein told NPR in a 2024 interview about AI spam accounts on Facebook.


    Ece Yildirim


  • New Google TV Update Is a Serious Bid to Get You to Watch AI Outputs from Your Couch


    Google TV, the operating system mainly serving the successor devices to Google’s defunct Chromecast line of products, is far from ubiquitous compared with the overwhelmingly more popular Roku operating system and Samsung’s Tizen. But for what it’s worth, GTV is the one trying the hardest to shoehorn AI into the user experience. An upcoming change, announced Monday at CES, will bring image and video generation via Google Gemini’s Nano Banana text-to-image model family to your TV.

    Like anything announced at CES, the implied promise is that people will want to use this, and the suite of features being described here is, I have to admit, intriguing. 

    There are some AI assistant features mentioned in this announcement, but the advantage Google TV has over most smart TV operating systems is that it’s connected to your Google account, and the most interesting new change leans on that: Gemini will be able to search your Google Photos library and apply the Nano Banana features you may have already futzed around with on your smartphone, this time from the comfort of your couch. That means adding uncanny effects to your family photos via the Photos Remix feature and, according to Google’s press release about the update, the ability to “transform memories into cinematic immersive slideshows.”

    This next ability is listed separately in Google’s press release, even though it sounds a bit like the first: “Use Nano Banana and Veo to reimagine your personal photos or create original media directly on your TV.” 

    As photos accompanying the announcement make clear, much of what’s on offer here is designed to, well, get TV viewers to watch a slop generator.  

    In one image, Google AI Premium users are invited to create videos. Another shows the actual video creation interface, which has what look like Pixar-style animated sample videos with suggested prompts like, “Fluff fish swimming on coral reefs made with squishy yarn.” There’s a popup at the bottom of this menu with the text, “Describe your video…” Below that is instructional text about pressing and holding the mic button on your remote to talk.

    It all paints a picture of an activity you’re meant to enjoy in your living room: the “generate videos of our family members” game, perhaps. But the window dressing is more wholesome and kid-oriented than Sora’s more brainrot-forward approach to user-generated video.

    Anecdotally, most people I know who tried Sora had their curiosity slaked after a few days on the app, and don’t really revisit it. I can see that being a problem with generating custom videos on Google TV as well. But there is, at the very least, something novel about messing around with AI while curled up with the dog and a bowl of popcorn. 


    Google’s release says these features will come to certain TCL devices first, and will expand to the rest of the Google TV universe “over the coming months.” 


    Mike Pearl
