Security camera company Ring has launched a new public tool to help people determine whether a given video has been edited in some way, including with generative AI technology. And while the tool has some limitations, it’s a step in the right direction, and one that all video platforms should be working on to help us determine what’s real in the AI age.
Users can visit the Ring Verify landing page and upload any Ring video that they’re wondering about. The company describes its system as working like a “security seal on a package.” If even a second has been edited out or the video has been cropped, the “seal breaks,” as it were.
“Ring Verify works for all Ring videos, no matter which Ring device recorded them,” the company said in a blog post announcing the program. “There’s nothing to set up—it’s automatically included with every video that was downloaded from December 2025 moving forward. Whether you’re receiving footage from a neighbor, reviewing a video for a claim, or checking that a shared video is the real deal, you can now verify it’s authentic Ring footage that hasn’t been tampered with.”
A spokesperson for Ring told Gizmodo that the feature “was built using C2PA (Coalition for Content Provenance and Authenticity) protocol, which aims to prove the content authentically came from a given source (Ring), and does operate using a metadata signature.” That signature only works to tell whether something is definitively authentic, and users can’t necessarily call anything that isn’t verified “fake.” But it’s a helpful way to quickly check if a video shared with you has been tampered with.
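The tamper-detection idea behind a metadata signature can be sketched in a few lines. This is a simplified illustration, not Ring’s actual implementation: C2PA uses certificate-based public-key signing embedded in the file’s manifest, while the toy version below uses a keyed hash (HMAC) and a hypothetical secret key. The principle is the same either way: the signature matches only the exact bytes that were signed, so any edit, however small, “breaks the seal.”

```python
import hashlib
import hmac

# Hypothetical secret key held by the device maker. Real C2PA signing
# uses X.509 certificates and public-key cryptography instead.
SIGNING_KEY = b"hypothetical-device-key"

def sign_video(video_bytes: bytes) -> bytes:
    """Produce a signature (the 'seal') over the exact video bytes."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).digest()

def verify_video(video_bytes: bytes, signature: bytes) -> bool:
    """The seal matches only if not a single byte has changed."""
    expected = hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

original = b"...original footage bytes..."
seal = sign_video(original)

print(verify_video(original, seal))         # True: untouched footage
print(verify_video(original + b"x", seal))  # False: any edit breaks the seal
```

Note what this can and cannot tell you, which mirrors the limitation described above: a matching seal proves the bytes are unchanged since signing, but a broken seal only proves *something* changed, not that the footage is AI-generated.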
As The Verge points out, this new tool doesn’t really help with the most common potential use case. If you’re wondering whether that home security camera footage you’re seeing on TikTok or Instagram is real, this isn’t going to tell you. And that’s a shame, because security camera footage is some of the most difficult AI-generated footage to parse when it comes to authenticity.
The common fisheye warp of a security camera, or the nighttime pixelation expected from home cameras, is often used to hide the telltale signs that a given video has been manipulated. But if you upload a video you found on TikTok or Instagram, it’s likely to have been edited in some way (in length or aspect ratio), which means the Verify tool will tell you it’s been altered. Those alterations don’t necessarily mean it’s AI, however.
Google has a digital watermark program called SynthID that recently became accessible to all users on Gemini. Upload an image to Gemini, and it will be able to tell you whether the image was created using Google’s AI generator tools. But, again, the capabilities there are limited. Just because an image is missing the invisible watermark doesn’t mean that it’s “real.” It just means Google didn’t help create it.
These tools are admittedly imperfect, but at least they’re something for the time being. Because AI-generated images are getting scary good, and everyone now has to be on their toes. AI fakes aren’t going away anytime soon. And you really can’t blindly believe anything you see on the internet anymore.
Matt Novak