A group of Cornell computer scientists has unveiled what they believe can be a new tool in the fight against AI-generated video, deepfakes, and doctored clips.
The watermarking technique, called "noise-coded illumination," hides authentication data in light itself to help investigators identify doctored videos. The work, by Peter Michael, Zekun Hao, Serge Belongie, and Assistant Professor Abe Davis, was published June 27 in ACM Transactions on Graphics and will be presented by Michael on August 10.
The system adds a barely perceptible flicker to light sources in a scene. Cameras record this seemingly random pattern even though viewers cannot detect it, and each lamp or screen carries its own unique flicker code.
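As a rough illustration of the encoding idea (not the researchers' actual implementation), the sketch below generates a pseudorandom, low-amplitude flicker code and applies it to a simulated lamp's brightness. All names and parameter values here are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical sketch: embed a pseudorandom, low-amplitude flicker code
# into a light source's brightness over time. Values are assumed, not
# taken from the published system.
FPS = 240            # assumed control rate of the light
DURATION_S = 10      # length of the code sequence in seconds
AMPLITUDE = 0.02     # ~2% brightness variation, small enough to go unnoticed

rng = np.random.default_rng(seed=42)                     # seed acts as this lamp's unique key
code = rng.choice([-1.0, 1.0], size=FPS * DURATION_S)    # pseudorandom +/- flicker code

base_brightness = 1.0
lamp_signal = base_brightness * (1.0 + AMPLITUDE * code)  # modulated light output

print(lamp_signal[:10])
```

Each light source would carry a different code (a different key), so footage lit by several sources would contain several independent watermarks.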
For example, imagine a press conference filmed in the White House briefing room. The studio lights would be programmed to flicker with a unique code. If a viral clip of that press conference later circulates showing an inflammatory statement, investigators can run it through a decoder and, by testing whether the recorded light matches the expected code, determine whether the footage was doctored.
"Each watermark carries a low-fidelity, time-stamped version of the unedited video under slightly different lighting. We call these code videos," said Abe Davis, assistant professor of computer science at Cornell. "When someone manipulates a video, the manipulated parts begin to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate a fake video with AI, the resulting code videos just look like random variations."
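Conceptually, verification amounts to checking whether each region of the footage still carries the expected light code. The toy example below (hypothetical parameters, not the authors' decoder) correlates simulated pixel brightness traces against a known code: an authentic trace keeps a clear correlation, while a doctored or AI-regenerated one loses it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2400
code = rng.choice([-1.0, 1.0], size=n)        # known lamp code, shared with verifiers

# Simulated per-pixel brightness over time: one authentic, one regenerated.
scene = 0.5 + 0.05 * rng.standard_normal(n)                          # ordinary scene variation
authentic = scene * (1.0 + 0.02 * code)                              # still carries the flicker code
doctored = scene * (1.0 + 0.02 * rng.choice([-1.0, 1.0], size=n))    # code is lost in regeneration

def code_strength(trace, code):
    """Normalized correlation between a pixel's brightness trace and the known code."""
    t = trace - trace.mean()
    return float(np.dot(t, code) / (np.linalg.norm(t) * np.linalg.norm(code)))

print("authentic pixel:", round(code_strength(authentic, code), 3))  # noticeably positive
print("doctored pixel: ", round(code_strength(doctored, code), 3))   # close to zero
```

In practice the recovered signal would be assembled into the low-fidelity "code videos" Davis describes, so that regions that contradict the code can be localized rather than just flagged.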
Although the researchers acknowledge that rapid motion and strong sunlight can limit the technique's usefulness, they believe it is well suited to settings such as conference-room presentations, television interviews, and lecture-hall speeches.


