ReportWire

Think you can spot a fake video? Sora 2 is putting that to the test.

RALEIGH, N.C. — A new wave of ultra-realistic AI videos is sweeping social media after the release of Sora 2, a text-to-video generator from the makers of ChatGPT that can turn a few words into lifelike, cinematic scenes.

What You Need To Know

  • Sora 2, which previously required an invite code, is temporarily open to everyone
  • Most videos generated in Sora 2 include a visible watermark
  • Cybersecurity company DeepStrike reports deepfake files increased from 500,000 in 2023 to 8 million in 2025

The technology has sparked both awe and anxiety. For some people it represents a new creative frontier, but as videos become more convincing, even experts admit the line between real and fake is getting harder to see.

“It’s getting better and better, and the tells are different because there are so many different AI models,” said Madeline Salazar, a content creator who’s worked in the entertainment industry for the last 10 years. “You have to be on the lookout for all sorts of things. It’s hard to catch.”

Salazar has built a large following on social media, teaching technology in fun, relatable ways. Her “AI or Real” series challenges her audience to guess whether what they are seeing was filmed or generated. She hopes the videos, which have generated millions of views, make people a little more curious about what they scroll past every day.

The new tells of fake videos

The old giveaways, like six-fingered hands, blurred teeth, or limbs that bend in impossible ways, are no longer as reliable as they used to be. Salazar says the newest AI models get those details right, so people have to look for more subtle clues.

“I saw somebody post about a video from a gym and the weights are uneven on the side,” said Salazar.

She says textures and fine details are often the biggest hints. Foam in a latte may appear to ripple or dance. Hair strands or fine lines can shift slightly from frame to frame. Even objects that should stay perfectly still, like lamps or walls, can drift a little because the model is still learning how to process pixels.

“The way that these AI models process pixels is not 100% accurate yet,” said Salazar. “I bet in a month or two it’ll be gone. But for now, that is something you can look out for.”

AI can also struggle with complex structures, especially ones with repeating patterns, tight angles, or intersecting lines. Playground equipment, buildings and architectural features may bend, warp, or fail to line up the way they should in real life. Those distortions, she says, are often easier to spot once you know to look for them.

Salazar adds that some creators are intentionally fooling people by generating fake security-camera or bodycam footage because viewers already expect those videos to be lower quality.

“One big trend going around is AI-generated security camera footage,” she said. “You already expect the footage to be grainy. So these AI-generated security camera videos are created to fool people.”

Context clues matter most

Sometimes the biggest giveaway is not in the image itself, but in the details surrounding it.

“When I tell people what to look out for, one big thing is context,” Salazar said. “Is that account posting a lot of similar videos? Is there a watermark all over it? What is their track record?”

Her advice applied to a viral image that circulated earlier this year, claiming to show trash washing into homes along the Outer Banks. A closer inspection revealed rooflines that did not meet correctly and windows placed in odd locations. Looking further into the source, the account that posted the image had a feed full of other AI-generated content. Taken together, those clues strongly suggested the photo was not real, even though many people in the comment section believed it.

The dark side of AI pranks

While many people are turning to AI videos for fun and entertainment, the technology has also fueled pranks with real-world consequences. In one trend that spread widely, people generated fake images and videos of a homeless intruder inside their homes and sent them to family members to provoke a reaction.

In multiple cases, families believed the images were real and called 911, prompting actual police responses. Law enforcement agencies in several states have warned that these AI-generated intruder hoaxes can divert resources from real emergencies and potentially lead to dangerous situations. In October, two juveniles in Ohio were criminally charged in connection with one of the incidents.

Salazar believes cases like those are part of the reason why public opinion around AI has soured.

“There’s this whole anti-AI rhetoric forming because of that,” she said. “But as a producer, I could have misinformed you five years ago with no AI. It’s not the technology doing the misinforming. It’s people behind the videos who have bad intentions.”

A creative upside

Despite the risks, Salazar sees the positives. She believes AI tools can level the playing field for independent creators and smaller production houses, giving them access to tools to create content that previously would have required far bigger budgets.

“Now we have the advantage to level up our media for relatively cheap,” Salazar said.

A digital reality check

As AI gets closer to mimicking reality, Salazar says it may push all of us to slow down, stay more skeptical, and really question what we see. She believes this moment could help rebuild habits that may have been lost in the digital age.

“We’ve always been taught since we were children, ‘Don’t believe everything you hear. Don’t believe everything you see on the internet,’” she said. “Maybe AI is bringing a reset where we can look at everything with a critical eye again and not be so passive in what we believe online.”

Rob Wu