Imagine this scenario. It is 2030; deepfakes and AI-generated content are everywhere, and you are a member of a new profession: a reality notary. From your desktop, clients ask you to verify the authenticity of photos, videos, emails, contracts, screenshots, audio recordings, text-message threads, social media posts and biometric records. People arrive desperate to protect their money, their reputations and their sanity, but also their freedom.
All four are at stake on a rainy Monday when an elderly woman tells you that her son has been accused of murder. She carries the evidence against him: a USB drive containing surveillance footage of the shooting. It is sealed in a plastic bag stapled to an affidavit, which explains that the drive contains evidence the prosecution intends to use. At the bottom is a string of numbers and letters: a cryptographic hash.
The sterile laboratory
Your first step is not to watch the video; that would be like walking through a crime scene. Instead, you connect the drive to an offline computer with a write blocker, a hardware device that prevents data from being written back to the drive. It’s like bringing evidence into a sterile laboratory. This computer is where you hash the file. Cryptographic hashing, a digital forensics integrity check, has an “avalanche effect”: any small change (a removed pixel or an audio adjustment) produces an entirely different code. If you open the drive without this protection, your computer could quietly modify metadata, information about the file, and you would not know whether the file you received is the same one the prosecution intends to present. When you hash the video, you get the same string of numbers and letters printed on the affidavit.
Then you create a copy and hash it, checking that the codes match, and lock the original in a secure archive. You move the copy to a forensics workstation, where you watch the video: what appears to be security camera footage showing the woman’s adult son approaching a man in an alley, raising a gun and firing a shot. The video is compelling because it is boring: no cinematic angles, no dramatic lighting. You’ve seen it before: it recently began circulating online, weeks after the murder. The affidavit states the exact time the police downloaded it from a social platform.
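A minimal sketch of that integrity check in Python, assuming SHA-256 as the hash algorithm (the affidavit never names one) and using hypothetical file paths:

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so a large video never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths; the affidavit value is the string of numbers and
# letters printed at the bottom of the document.
affidavit_hash = "..."  # copied from the affidavit
original = sha256_file("/evidence/original/alley_footage.mp4")
working_copy = sha256_file("/evidence/working/alley_footage.mp4")

print("original matches affidavit:", original == affidavit_hash.lower())
print("working copy matches original:", working_copy == original)
```

Thanks to the avalanche effect, a single altered byte anywhere in the copy would change essentially every character of the hexadecimal digest.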
Looking at the grainy images, you remember why you do this work. You were still in college in the mid-2020s, when deepfakes went from novelty to big business. Verification companies reported a tenfold increase in deepfakes between 2022 and 2023, and face-swap attacks rose by more than 700 percent in just six months. By 2024 a deepfake fraud attempt was happening every five minutes. You had friends whose bank accounts were emptied, and your grandparents wired thousands of dollars to a virtual kidnapping scammer after receiving altered photos of your cousin while she was traveling through Europe. You entered this profession because you saw how one fabrication could ruin a life.
Fingerprints
The next step in the video analysis is a provenance check. In 2021 the Coalition for Content Provenance and Authenticity (C2PA) was founded to develop a standard for tracking the history of a file. C2PA Content Credentials work like a passport, collecting stamps as the file travels around the world. If the video carries them, you can trace its creation and modifications. But adoption has been slow, and Content Credentials are often stripped as files circulate online. In a 2025 Washington Post test, journalists attached Content Credentials to an AI-generated video, but every major platform they uploaded it to removed the data.
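One way to run that provenance check is to shell out to the Content Authenticity Initiative’s open-source c2patool; the sketch below assumes the tool is installed and that its default invocation prints the manifest store as JSON, and the file path is hypothetical:

```python
import json
import subprocess

def read_content_credentials(path: str):
    """Return the file's C2PA manifest store as a dict, or None if the file
    carries no Content Credentials (or c2patool cannot parse it)."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

manifest = read_content_credentials("/evidence/working/alley_footage.mp4")
print("Content Credentials found" if manifest else "No Content Credentials survived the upload")
```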
Then you open the file’s metadata, although it rarely survives online transfers. The timestamps do not match the time of the murder; they were reset at some point (all are now listed as midnight), and the device field is empty. The software tag tells you the file was last saved by the kind of video encoder commonly used by social platforms. There is no sign that the clip came directly from a surveillance system.
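Reading that container-level metadata can be as simple as a call to ffprobe, which ships with FFmpeg; the path is again hypothetical, and which tags actually appear depends on how the file was last encoded:

```python
import json
import subprocess

def container_metadata(path: str) -> dict:
    """Dump format- and stream-level metadata (timestamps, encoder tag,
    device fields) as JSON using ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

meta = container_metadata("/evidence/working/alley_footage.mp4")
tags = meta.get("format", {}).get("tags", {})
print("creation_time:", tags.get("creation_time", "<missing>"))
print("encoder:      ", tags.get("encoder", "<missing>"))
```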
When you look up the public court records in the homicide case, you learn that the owner of the property with the security camera was slow to respond to the police request. The surveillance system was set to overwrite its data every 72 hours, and by the time police accessed it, the footage was gone. That is why it caused a sensation when the video surfaced anonymously online, showing the murder from the exact angle of that security camera.
The physics of deception
You begin the kind of Internet research that investigators call open-source intelligence, or OSINT. You ask an AI agent to search for an earlier copy of the video. After eight minutes it delivers results: a copy posted two hours before the police download, carrying partial Content Credentials that indicate the recording was made with a phone.
The reason you find C2PA data at all is that companies such as Truepic and Qualcomm have developed ways for phones and cameras to cryptographically sign content at the point of capture. What is clear now is that the video is not a direct export from a security camera; it passed through a phone.
You watch it again, looking for physics that doesn’t make sense. The slowed-down frames scroll like a flip-book. You study the shadows, the lines of an alleyway door. Then, at the edge of a wall, there is a flash of light that shouldn’t be there. It is not the flicker of a light bulb but a rhythmic shimmer. Someone filmed a screen.
Flicker is the signature of two clocks out of sync. A phone camera’s rolling shutter scans the scene line by line, top to bottom, dozens of times per second, while a screen refreshes in cycles: 60, 90 or 120 times per second. When a phone records a screen, the mismatch can show up as a shimmer that drifts across the image. But that still doesn’t tell you whether the recorded screen showed the truth. Someone might simply have filmed the original surveillance monitor to save the footage before it was overwritten. To prove a deepfake, you have to look deeper.
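Rough, illustrative arithmetic for that mismatch; the frame rate, readout time and refresh rate below are typical values, not measurements from the case file:

```python
camera_fps = 29.97          # common phone video frame rate
readout_time_s = 0.016      # time the rolling shutter takes to scan one frame, top to bottom
screen_refresh_hz = 60.0    # typical monitor refresh rate

# Each refresh cycle that sweeps past during a frame's readout shows up as a
# horizontal band of uneven brightness.
bands_per_frame = screen_refresh_hz * readout_time_s

# The bands drift at roughly the "beat" between the refresh rate and the
# nearest multiple of the frame rate, producing the rhythmic shimmer.
beat_hz = min(abs(screen_refresh_hz - k * camera_fps) for k in range(1, 8))

print(f"~{bands_per_frame:.1f} band(s) per frame, drifting at ~{beat_hz:.2f} Hz")
```

When the two clocks are exact multiples of each other, the bands freeze in place; any slight mismatch makes them crawl, which is the shimmer at the edge of the wall.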
Artifacts of the forgery
Now you’re looking for watermarks, invisible statistical patterns inside the image. SynthID, for example, is Google DeepMind’s watermark for AI content generated with Google tools. Your software finds traces of what might be a watermark, but nothing conclusive. Cropping, compressing or filming a screen can damage watermarks, leaving only remnants, like erased words still faintly visible on paper. This doesn’t mean AI generated the entire scene; it suggests an AI system may have modified the footage before the screen was filmed.
Then you run the video through a deepfake detector such as Reality Defender. The scan flags anomalies around the shooter’s face. You split the video into still images and use the InVID-WeVerify plugin to run reverse image searches on the accused son’s face, to see whether it appeared in another context. Nothing turns up.
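A hedged sketch of the frame-splitting step, assuming FFmpeg is installed; the paths and the one-frame-per-second sampling rate are arbitrary choices, and the reverse image searches themselves happen afterward in the browser or through the InVID-WeVerify plugin:

```python
import subprocess

# Extract one still per second of video into numbered PNG files.
subprocess.run(
    ["ffmpeg", "-i", "/evidence/working/alley_footage.mp4",
     "-vf", "fps=1",
     "/evidence/frames/frame_%04d.png"],
    check=True,
)
```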
The drive holds other evidence, including more recent footage from the same camera. The brickwork lines up with the video: the alley itself is real, not a fabricated scene.
You return to the shooter’s face. The alley lighting is harsh, producing distinct grain. His jacket, his hands and the wall behind him carry coarse digital noise, but his face does not. It is slightly smoother, as if from a cleaner source.
Security cameras give moving objects a distinctive blur, and their images are heavily compressed. The shooter has that blurry, blocky quality, except for his face. You watch the video again, zoomed in on the face alone. The jawline trembles slightly, as if two layers are not quite aligned.
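One crude way to put numbers on that mismatch, sketched with OpenCV; the frame path and the two pixel boxes are hypothetical, and a real examination would rely on purpose-built forensic noise analysis rather than this single statistic:

```python
import cv2

def noise_score(gray_patch) -> float:
    """Rough noise estimate: variance of the Laplacian (high-frequency) residual.
    Resynthesized or over-smoothed regions tend to score lower than their surroundings."""
    return float(cv2.Laplacian(gray_patch, cv2.CV_64F).var())

frame = cv2.imread("/evidence/frames/frame_0042.png")   # hypothetical still
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

face_patch = gray[120:200, 300:370]   # hypothetical box around the shooter's face
wall_patch = gray[120:200, 100:170]   # hypothetical box on the wall behind him

print("face noise:", noise_score(face_patch))
print("wall noise:", noise_score(wall_patch))
# A face patch that is markedly smoother than everything around it is
# consistent with a spliced-in layer, though it is not proof by itself.
```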
The final calculation
You return to the moment when the shooter appears. He raises the weapon in his left hand. You call the woman. She tells you that her son is right-handed and sends you videos of him playing sports when he was a teenager.
Finally, you go to the alley. Building maintenance records indicate the camera is mounted 12 feet high. You measure its height and downward angle and use basic trigonometry to calculate the shooter’s height: three inches taller than the woman’s son.
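The height estimate reduces to two right triangles sharing the camera as a vertex. A sketch under simple assumptions: flat ground, the 12-foot mounting height from the maintenance records, and hypothetical depression angles standing in for the on-site measurements:

```python
import math

camera_height_ft = 12.0     # from the building maintenance records
angle_to_feet_deg = 31.0    # depression angle (below horizontal) to the shooter's feet; hypothetical
angle_to_head_deg = 16.7    # depression angle to the top of his head; hypothetical

# Ground distance from the base of the camera to where the shooter stands.
distance_ft = camera_height_ft / math.tan(math.radians(angle_to_feet_deg))

# Subtract how far below the camera the top of his head sits.
shooter_height_ft = camera_height_ft - distance_ft * math.tan(math.radians(angle_to_head_deg))

print(f"estimated shooter height ~ {shooter_height_ft:.1f} ft")  # ~6.0 ft with these angles
```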
The video now makes sense: it was made by cloning the son’s face, using an AI generator to superimpose it on the shooter, and then filming the screen with a phone to strip away the generator’s watermark. Cleverly, whoever did this chose a phone that would attach Content Credentials, so viewers would see a cryptographically signed statement that the clip was recorded on that phone and that no changes were declared after capture. In doing so, the video’s maker essentially forged a certificate of authenticity for a lie.
The notarized document you send to the public defender will not read like a thriller but like a laboratory report. In 2030 the “reality notary” is no longer science fiction; it is the person whose services we use to ensure that people and institutions are what they appear to be.