How to mitigate the impact of deepfakes

With deepfakes becoming more and more common — and more and more convincing — how can you protect your business?

Deepfakes are just one unfortunate product of recent developments in the field of artificial intelligence. Fake media generated by machine-learning algorithms has gained a lot of traction in recent years. Alyssa Miller’s talk at RSA Conference 2020, titled Losing our reality, offers some insight into why it’s time to consider deepfakes a threat (election year aside) and what your business can actually do to mitigate the impact if it’s attacked in such a way.

How deepfakes are made

The most common approach to creating a deepfake is to use a system called a GAN, or generative adversarial network. A GAN consists of two deep neural networks competing against each other. To prepare, both networks are trained on real images. Then the adversarial part begins: one network generates images (hence the name generative), and the other tries to determine whether each image is genuine or fake (that network is called discriminative).

The generative network is then told whether its fakes passed for real and learns from the result. At the same time, the discriminative network learns how to improve its accuracy. With each cycle, both networks get better.

Fast forward, say, a million training cycles: The generative neural network has learned how to generate fake images that an equally advanced neural network cannot distinguish from real ones.
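For the technically curious, the adversarial cycle described above fits in a surprisingly small amount of code. Below is a deliberately simplified sketch of one GAN training step in PyTorch; the tiny architectures, image size, and optimizer settings are illustrative assumptions and bear no resemblance to the heavyweight models behind real deepfakes.

    import torch
    import torch.nn as nn

    latent_dim, image_dim = 64, 28 * 28  # toy sizes, for illustration only

    # Generative network: turns random noise into an image-shaped vector.
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )
    # Discriminative network: scores an image as real (1) or fake (0).
    discriminator = nn.Sequential(
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def training_step(real_images):
        """One adversarial cycle. real_images: (batch, image_dim) in [-1, 1]."""
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1. The discriminator learns to tell real images from generated ones.
        fakes = generator(torch.randn(batch, latent_dim)).detach()
        d_loss = (loss_fn(discriminator(real_images), real_labels) +
                  loss_fn(discriminator(fakes), fake_labels))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # 2. The generator learns to fool the discriminator.
        g_loss = loss_fn(
            discriminator(generator(torch.randn(batch, latent_dim))),
            real_labels)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

Run training_step in a loop over batches of real photos, and each network’s improvement forces the other to improve as well.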

This method actually has many useful applications: depending on the training data, the generative network learns to produce particular kinds of images.

Of course, for deepfakes, the algorithm is trained on real photos of certain people, resulting in a network that can generate an infinite number of convincing (but fake) photos of the person, ready to be integrated into a video. Similar methods can generate fake audio, and scammers are probably using deepfake audio already.

How convincing deepfakes have become

Early deepfake videos looked ridiculous, but the technology has evolved to the point where such media is frighteningly convincing. One of the most notable examples from 2018 was a fake Barack Obama talking about, well, deepfakes (plus the occasional insult aimed at the current US president). In mid-2019, a short video appeared of a fake Mark Zuckerberg being curiously honest about the current state of privacy.

To understand how good the technology has become, simply watch the video below. Impressionist Jim Meskimen created it in collaboration with deepfake artist Sham00k. The former was responsible for the voices, and the latter applied the faces of some 20 celebrities to the video using deepfake software. The result is truly fascinating.

As Sham00k says in the description of his behind-the-scenes video, “the full video took just over 250 hours of work, 1,200 hours of footage, 300,000 images and close to 1 terabyte of data to create.” In other words, making such a video is no small feat. But weighed against the effect such convincing disinformation could have on markets, or, say, elections, the required investment starts to look frighteningly small.

For that reason, almost at the same time the abovementioned video was published, California outlawed political deepfake videos during election season. However, problems remain. For starters, deepfake videos can be a form of expression, like political satire, so an outright ban sits uneasily with freedom of speech.

The second problem is both technical and practical: How exactly are you supposed to tell a deepfake video from a real one?

How to detect deepfakes

Machine learning is all the rage among researchers worldwide, and the deepfake problem looks interesting and challenging enough to tempt many of them to jump in. For that reason, quite a few research projects have focused on how to use image analysis to detect deepfakes.

For example, a paper published in June 2018 describes how analyzing eye blinks can aid in detecting deepfake videos. The idea is that photos of any given person blinking are relatively scarce, so the neural networks have little blinking footage to train on. Indeed, people in deepfake videos at the time the paper was published blinked far less often than real people do; viewers found the discrepancy hard to spot, but computer analysis caught it.
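To give a flavor of how such analysis can work, here is a minimal sketch of the eye-aspect-ratio (EAR) heuristic that blink-detection pipelines commonly build on. It assumes six (x, y) landmarks per eye, as produced by a facial-landmark detector such as dlib’s 68-point model; the threshold is illustrative, and the paper’s actual method is more sophisticated.

    import numpy as np

    def eye_aspect_ratio(eye):
        """EAR for one eye. eye: (6, 2) array of landmarks ordered around the
        eye as in dlib's 68-point model (corner, top, top, corner, bottom,
        bottom)."""
        v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid distance 1
        v2 = np.linalg.norm(eye[2] - eye[4])  # vertical lid distance 2
        h = np.linalg.norm(eye[0] - eye[3])   # horizontal corner distance
        return (v1 + v2) / (2.0 * h)          # drops toward 0 as the eye closes

    EAR_THRESHOLD = 0.2  # illustrative; tuned per dataset in practice

    def count_blinks(ear_per_frame):
        """Count closed-to-open transitions in a sequence of per-frame EARs."""
        ears = np.asarray(ear_per_frame)
        closed = ears < EAR_THRESHOLD
        return int(np.sum(closed[:-1] & ~closed[1:]))

A subject who blinks drastically less often than the human average over a long enough clip is then worth a closer look.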

Two papers submitted in November 2018 suggested looking for face-warping artifacts and inconsistent head poses. Another, from 2019, described a sophisticated technique that analyzes the facial expressions and movements typical of an individual’s speaking pattern.
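A standard building block for the head-pose approach is estimating pose from 2D facial landmarks, as in the rough OpenCV sketch below. The 3D face-model points and the pinhole-camera shortcut are generic approximations, not the cited paper’s exact setup; the detection idea is to estimate pose from different subsets of landmarks and flag frames where the results disagree.

    import cv2
    import numpy as np

    # Rough 3D reference points of a generic face model, in arbitrary units:
    # nose tip, chin, left/right eye outer corners, left/right mouth corners.
    MODEL_POINTS = np.array([
        (0.0, 0.0, 0.0),
        (0.0, -330.0, -65.0),
        (-225.0, 170.0, -135.0),
        (225.0, 170.0, -135.0),
        (-150.0, -150.0, -125.0),
        (150.0, -150.0, -125.0),
    ], dtype=np.float64)

    def head_pose(image_points, frame_height, frame_width):
        """Estimate head rotation from six 2D landmarks (same order as above).
        image_points: (6, 2) array of pixel coordinates."""
        focal = frame_width  # crude pinhole-camera approximation
        camera_matrix = np.array([
            [focal, 0.0, frame_width / 2],
            [0.0, focal, frame_height / 2],
            [0.0, 0.0, 1.0],
        ], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(
            MODEL_POINTS, np.asarray(image_points, dtype=np.float64),
            camera_matrix, None)
        return rvec  # rotation vector; compare across landmark subsets or frames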

However, as Miller points out, those methods are unlikely to succeed in the long run. What such research really does is provide feedback to deepfake creators, helping them improve their discriminative neural networks, in turn leading to better training of generative networks and further improving deepfakes.

Using corporate communications to mitigate deepfake threats

Given the abovementioned issues, no purely technological solution to the deepfake problem is going to be very effective at this point. But other options exist. Specifically, you can mitigate the threat with effective communications. You’ll need to monitor information related to your company and be ready to control the narrative should you face a disinformation outbreak.

The following list summarizes Alyssa Miller’s suggestions for preparing your company to face the deepfake threat (by the way, the same methods can be useful for dealing with other types of PR mishaps as well):

  • Minimize channels for company communications;
  • Drive consistent information distribution;
  • Develop a disinformation response plan (treat disinformation attacks as security incidents);
  • Organize a centralized monitoring and reporting function;
  • Encourage responsible legislation and private sector fact verification;
  • Monitor development of detection and prevention countermeasures.