If you’re worried about the malevolent potential of deepfake video, you’re not alone – so is Facebook. The company has launched a project to sniff out deepfake videos, and it’s pledging more than $10m to the cause. It has pulled in a range of partners including Microsoft for help.
Deepfakes are videos that use AI to superimpose one person’s face on another. They work using generative adversarial networks (GANs), which pit two neural networks against each other. One network, the generator, focuses on producing a lifelike image. The other, the discriminator, checks the generator’s output against real-life images. If it finds inconsistencies, the generator has another go. This keeps happening until the discriminator can no longer tell the fakes from the real thing.
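To make the back-and-forth concrete, here is a deliberately simplified cartoon of that adversarial loop in Python. It is not a real GAN – there are no neural networks or gradients, and all the numbers are invented – but it shows the feedback dynamic: a “generator” producing numbers, a “discriminator” learning what real data looks like, and each improving in response to the other.

```python
import random

random.seed(0)

real_mean = 10.0   # real data: numbers clustered around 10
gen_mean = 0.0     # the generator's starting guess, far from reality
disc_center = 0.0  # the discriminator's model of where real data lives
WIDTH = 2.0        # samples within WIDTH of disc_center look "real" to it

for step in range(1000):
    real = real_mean + random.gauss(0, 1)
    fake = gen_mean + random.gauss(0, 1)
    # Discriminator: refine its idea of what real data looks like.
    disc_center += 0.05 * (real - disc_center)
    # Generator: if the discriminator flagged the sample as fake,
    # shift output toward the region the discriminator accepts.
    if abs(fake - disc_center) > WIDTH:
        gen_mean += 0.05 * (disc_center - gen_mean)

print(round(gen_mean, 1))  # ends up near the real mean of 10
```

In a real GAN both players are neural networks and the updates are gradient steps, but the equilibrium is the same: training stops improving once the discriminator can no longer separate fake from real.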
This leads to some highly convincing pictures and videos. Deepfake AI has produced fake porn videos, fake profile pictures, and (for demonstration purposes) fake presidential statements. They’re also getting easier to create. For an example of just how good it’s getting, watch this video that seamlessly morphs Bill Hader into Tom Cruise and Seth Rogen.
That video is great entertainment, but now imagine a fake clip spreading like wildfire on Facebook in which Trump says he’s bombing Venezuela. Or one where the CEO of a US blue-chip says that it’s pulling out of China and taking a massive earnings hit, tanking its stock. That’s not so funny.
No wonder, then, that the social media giant has finally decided to take a stand against the technology. Its DeepFake Detection Challenge will, as the name suggests, help people detect deepfakes.
AI models need lots of training data, so to create AI that spots deepfakes, Facebook has to come up with its own dataset of fakes. It will take unmodified, non-deepfake videos, and then use a variety of AI techniques to tamper with them. It will make this entire dataset available to researchers, who can use it to train AI algorithms that spot deepfakes.
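A dataset built this way is essentially a collection of clips labelled real or fake, with the fakes recording which manipulation produced them. The sketch below shows one plausible way to organise such labels; every path and technique name here is invented for illustration, not taken from Facebook’s actual dataset.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Clip:
    path: str
    label: str                   # "real" or "fake"
    manipulation: Optional[str]  # technique applied; None for real clips

def build_dataset(real_paths, manipulations, seed=0):
    """Pair every unmodified clip with a tampered, labelled counterpart."""
    rng = random.Random(seed)
    dataset = [Clip(p, "real", None) for p in real_paths]
    for p in real_paths:
        # In the actual challenge each fake is produced by running a
        # face-manipulation model on the clip; here we only record labels.
        dataset.append(Clip(p + ".faked.mp4", "fake",
                            rng.choice(manipulations)))
    return dataset

clips = build_dataset(["clip_001.mp4", "clip_002.mp4"],
                      ["faceswap", "reenactment"])
print(sum(c.label == "fake" for c in clips))  # 2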
Facebook has some heavyweight help. Along with Microsoft, it’s working with the Partnership on AI, and academics from Cornell Tech, MIT, Oxford University, UC Berkeley, the University of Maryland, College Park, and the University at Albany. These partners will create tests to see how effective each researcher’s detection model is.
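One natural way to score entries in a contest like this is binary log-loss: each model outputs a probability that a clip is fake, and the metric rewards confident, correct predictions while heavily penalising confident mistakes. This is an assumption about the scoring, not Facebook’s published rule, but it illustrates how detection models can be compared.

```python
import math

def log_loss(labels, probs, eps=1e-15):
    """Average binary cross-entropy; labels are 1 (fake) or 0 (real)."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

labels = [1, 0, 1, 0]             # ground truth for four test clips
confident = [0.9, 0.1, 0.8, 0.2]  # a model that mostly gets it right
guessing = [0.5, 0.5, 0.5, 0.5]   # a model that just flips a coin
print(log_loss(labels, confident) < log_loss(labels, guessing))  # True
```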
One impressive aspect of all this is the way that Facebook is generating and handling the dataset. Perhaps wary of the privacy implications of simply scraping its own user data, the company is making every effort to do it right. It is working with an agency that is hiring actors, who will sign waivers agreeing to let researchers use their images. Facebook will also share the dataset only with entrants to the contest, so that black hats can’t use it to create better deepfakes.
This dataset will hopefully help to advance existing research on deepfake detection. In June 2019, researchers at the University of Southern California’s Information Sciences Institute created a model that detects the subtle inconsistencies in facial motion that deepfakes introduce. Researchers at the University at Albany, meanwhile, look for a lack of blinking (the subjects of deepfake videos apparently blink far less often than real people do).
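The blink heuristic is simple enough to sketch. Assuming a per-frame “eye openness” score (in practice this would come from a facial-landmark detector; here it’s just a synthetic list), one can count blinks and flag clips whose blink rate is implausibly low. The thresholds are illustrative, not taken from the Albany research.

```python
def count_blinks(openness, threshold=0.2):
    """Count downward crossings of the openness threshold (one per blink)."""
    blinks, below = 0, False
    for score in openness:
        if score < threshold and not below:
            blinks += 1
            below = True
        elif score >= threshold:
            below = False
    return blinks

def looks_fake(openness, fps=30, min_blinks_per_min=4):
    """Flag a clip if its subject blinks far less often than a real person."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / minutes < min_blinks_per_min

# A one-minute clip at 30 fps where the eyes never close (a common
# deepfake artifact) is flagged; one with periodic blinks is not.
always_open = [0.8] * 1800
blinking = [0.05 if i % 90 < 3 else 0.8 for i in range(1800)]
print(looks_fake(always_open), looks_fake(blinking))  # True False
```

Real detectors are, of course, far more sophisticated – and as generators learn to add blinking, any single cue like this stops being reliable on its own.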
This is a much-needed step forward, because just as with the GANs themselves, we can expect the two AI factions to compete in a kind of arms race: one side creating increasingly convincing videos that could be used for malicious ends, the other building AI to detect them. In that scenario, the people trying to detect fakes need all the help they can get.