I found an interesting article at https://viterbischool.usc.edu/news/2019/06/deep-fakes-researchers-develop-forensic-techniques-to-identify-tampered-video… . USC ISI researchers have developed a way to detect videos that are fake or have been manipulated in some fashion. Here is an excerpt:
While previous methods of detecting deep fakes often relied on frame-by-frame analysis of various aspects of a video, these prior methods, the USC ISI authors contend, are computationally heavy, take more time, and leave greater room for error. The newer tool developed by ISI, which was tested on over 1,000 videos, is less computationally intensive. It therefore has the potential to scale and to automatically detect fakes uploaded across millions of profiles on Facebook and other social media platforms in near real time.
This effort, led by principal investigator Wael Abd-Almageed, a computer vision, facial recognition and biometrics expert, looks at a piece of video content as a whole. The researchers used artificial intelligence to look for inconsistencies in the images through time, not just on a frame-by-frame basis. This is a key distinction, says Abd-Almageed, because sometimes the manipulation cannot be detected at the individual frame level and only shows up as inconsistencies in facial motion over time.
To develop this first forensic tool, the USC ISI researchers used a two-step process. First, they input hundreds of examples of verified videos of a person and laid each video on top of one another. Then, using a deep learning algorithm known as a convolutional neural network, the researchers identified features and patterns in the person’s face, paying specific attention to how the eyes close or how the mouth moves. Once they had a model of an individual’s face and its characteristic movements, they could build a tool that compares a newly input video against the parameters of the previous models to determine whether a piece of content falls outside the norm and is therefore not authentic. One can imagine this working in the same way a biometric reader recognises a face, retina scan or fingerprint.
“If you think deep fakes as they are now is a problem–think again. Deep fakes as they are now are just the tip of the iceberg and manipulated video using artificial intelligence methods will become a major source of misinformation,” Abd-Almageed says. One can imagine a world where everyone guards their video assets as much as they guard their bank PIN.
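To make that two-step idea a bit more concrete (and to start thinking about what this could look like in code), here is a minimal Python/PyTorch sketch. To be clear, this is purely my own reading of the article, not the ISI tool: the FrameEncoder network, the motion_signature and authenticity_score helpers, the 64x64 face crops and the embedding-delta statistics are all assumptions on my part.

```python
# Minimal sketch of the idea as I understand it from the article, NOT the ISI code.
# Assumptions (mine): faces are already detected and cropped to 64x64 tensors, a small
# CNN embeds each frame, and "inconsistency" is scored as the distance between a new
# video's temporal-motion statistics and a reference built from verified videos of the
# same person. In a real setup the encoder would be trained first (e.g. on face data);
# here it is random, so only the structure of the pipeline is illustrated.

import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Tiny stand-in for the convolutional network that embeds each face crop."""

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, emb_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, 3, H, W) -> per-frame embeddings (T, emb_dim)
        x = self.features(frames).flatten(1)
        return self.fc(x)


def motion_signature(encoder: nn.Module, frames: torch.Tensor) -> torch.Tensor:
    """Describe how the face moves over time via frame-to-frame embedding deltas,
    rather than judging any single frame on its own."""
    with torch.no_grad():
        emb = encoder(frames)                 # (T, D)
    deltas = emb[1:] - emb[:-1]               # temporal differences, (T-1, D)
    return torch.cat([deltas.mean(0), deltas.std(0)])  # (2D,) signature


def build_reference(encoder, verified_videos):
    """Step 1: average the motion signatures of many verified videos of one person."""
    sigs = torch.stack([motion_signature(encoder, v) for v in verified_videos])
    return sigs.mean(0), sigs.std(0) + 1e-6


def authenticity_score(encoder, reference, new_video):
    """Step 2: score a new video against the learned norm.
    Lower means 'more like the verified footage'; a high score flags the clip."""
    mean, std = reference
    sig = motion_signature(encoder, new_video)
    return ((sig - mean) / std).abs().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = FrameEncoder()
    # Random tensors standing in for cropped face frames: 5 verified clips, 30 frames each.
    verified = [torch.rand(30, 3, 64, 64) for _ in range(5)]
    reference = build_reference(encoder, verified)
    suspect = torch.rand(30, 3, 64, 64)
    print("score:", authenticity_score(encoder, reference, suspect))
```

The embedding deltas are my attempt at the temporal angle described above: a single frame may look perfectly fine, but the way the eyes and mouth move from frame to frame can still fall outside the norm learned from the verified footage.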
This got me thinking that the BB-AI might be able to do the same. So, is anyone interested? Comment below...
Clem