The world is grappling with deepfakes, that is, AI-manipulated photos and videos. A recent app called DeepNude, which used AI to 'undress' women, stirred huge controversy and was soon taken down by its creator.
We're entering a new era in which deepfakes will be common, and unless means are invented to control them, they will become a menace. Along these lines, researchers at the University of California have developed a deep neural network that can spot deepfake photos.
A deep neural network is a system, loosely modeled on the human brain, that learns to spot patterns in raw data. A team of researchers led by Prof. Amit K. Roy-Chowdhury fed the network a set of images containing both manipulated and unmanipulated photos.
The researchers knew which photos were morphed and which weren't. To train the network, they highlighted the pixels along the boundaries of the elements digitally inserted into each photo. Deepfake photos are known to have smoother pixels in the parts that are artificially added.
Most of the time, such morphing cannot be detected by the naked eye, but a computer that examines a photo pixel by pixel can exploit this smoothness to spot deepfakes.
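To make the smoothness idea concrete, here is a minimal sketch (not the researchers' actual model) of a pixel-by-pixel heuristic: it computes the local intensity variance around each pixel of a grayscale image, so an unnaturally smooth pasted-in region stands out as a patch of near-zero variance. The function name, window size, and toy image are all illustrative assumptions.

```python
import numpy as np

def local_smoothness_map(gray, win=3):
    """Local variance of pixel intensities in a win x win window.
    Low variance means a smoother region. Illustrative heuristic only,
    not the detector described in the article."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

# Toy example: a noisy image with an unnaturally smooth patch "pasted" in.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
img[8:16, 8:16] = 128.0  # perfectly smooth inserted region

smooth = local_smoothness_map(img)
# The pasted patch has far lower local variance than its noisy surroundings.
print(smooth[12, 12] < smooth[2, 2])
```

A real detector would learn far subtler boundary statistics from labeled training data, but the principle is the same: per-pixel measurements expose artifacts the eye misses.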
When the neural network was then shown images outside the dataset it had been trained on, it was able to spot deepfakes "most of the time."
The results were promising, but the neural network currently works only on photos. The researchers are working out a way to apply it to deepfake videos as well.
However, this cannot be called a complete fix for deepfakes, as the neural network is not 100% accurate.
According to Prof. Roy-Chowdhury, “If you want to look at everything that’s on the internet, a human can’t do it on the one hand, and an automated system probably can’t do it reliably. So it has to be a mix of the two.”
Nonetheless, this development is a silver lining: a tool within our grasp for detecting deepfake photos. For more information, you can read more about the deep neural network here.