
Deepfakes: Humanity’s Zero-Day Attack

“Deepfakes” are pictures, videos, and audio clips that convincingly imitate a real person’s appearance or voice for malicious purposes. Deepfake attacks have been used to impersonate politicians and other authorities as part of espionage, fraud, and propaganda campaigns. With that in mind, how do we respond to the different elements of a deepfake attack: advanced social engineering, identity theft, and propaganda?

How it happens

Most people are already aware that photos are easy to edit with programs like Photoshop. Videos, however, are harder to fake, and require pretty advanced skills in computer animation and special effects to make passable counterfeits by hand. But advancements in machine learning during this decade have made that work easier.

Using generative adversarial networks (GANs), photos or videos of a target’s face can be analyzed to create a digital model. An actor can puppeteer the model, much as Snapchat filters let users puppeteer quirky face masks. With a talented voice actor (warning: strong language) or a text-to-speech synthesizer GAN trained to copy samples of the target’s authentic speech, a complete audio and video representation of the target can be made to say and do whatever the attacker wants.
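To make the adversarial idea concrete, here is a deliberately tiny sketch of the GAN training loop, shrunk to one dimension so it runs in milliseconds. The “real data” is drawn from a normal distribution; the generator is an affine map and the discriminator a logistic regression, with gradients derived by hand. All names and hyperparameters here are illustrative; real deepfake pipelines use deep convolutional networks, not scalars.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 1.0, 0.0   # generator g(z) = w*z + b
a, c = 0.0, 0.0   # discriminator d(x) = sigmoid(a*x + c)
lr = 0.05

for step in range(5000):
    z = rng.normal(size=64)                  # noise fed to the generator
    fake = w * z + b                         # generated samples
    real = rng.normal(4.0, 0.5, size=64)     # "authentic" samples

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    # (hand-derived gradients of the binary cross-entropy loss).
    dr, df = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    df = sigmoid(a * fake + c)
    w -= lr * np.mean((df - 1.0) * a * z)
    b -= lr * np.mean((df - 1.0) * a)

# After training, the generator's output mean (which equals b, since the
# noise z has mean zero) sits near the real data's mean of 4.0: the
# generator has learned to imitate its target.
print(f"generator output mean ~= {b:.2f}")
```

The same tug-of-war, scaled up to images, is what produces a face model: the discriminator keeps finding tells, and the generator keeps erasing them.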

This may sound like a lot of work, but it’s frighteningly easy to create deepfakes good enough to fool at least some people. Even very primitive fake media (sometimes called “shallow fakes”) can reach a large audience and have a significant impact. And it doesn’t take much more skill to create fake media that almost nobody can distinguish from authentic footage. The most crucial constraints on producing convincing deepfakes have nothing to do with the skill of the attacker. All one needs is an ample supply of pictures, videos, and voice samples, and enough computing power and time for the GAN to build and refine the model. All the hard work of constructing a convincing face model is done automatically.

Can we reliably detect deepfakes?

While human perception (that is, your eyes, ears, and brain) is fairly easy to fool with these techniques, researchers have already developed systems that distinguish authentic images from fake ones. However, such “discriminator networks” are the very basis of GAN frameworks. Researchers warn that each new advancement in deepfake detection can be used to refine existing deepfake frameworks if attackers get access to the detector programs.

In 2019, Google released a large batch of high quality deepfakes made with various methods, and the source material used to construct them, to aid researchers in searching for better detection methods. But the development of detector programs has not kept pace with the advancements in deepfake construction. New methods announced this year make it possible to puppeteer a face model with passable results using only a single photo of the target.

Who is at risk of deepfake attacks?

We have already seen that public figures such as politicians and celebrities are at significant risk of being maliciously imitated, but researchers warn that the advent of deepfake malware puts a much broader range of targets at risk. Company executives have been imitated in attempts to fool other employees into taking action that helps the threat actor steal money or data. If attackers haven’t already done so, in the near future we should expect to see Remote Access Trojans (RATs), which can remotely control targets’ computers, gathering source material for deepfake attacks by recording users through their webcams.

High-ranking executives are not the only ones at risk. Anyone might have their identity stolen using this technique as long as it remains profitable. The cost of creating deepfake media is negligible, and the time to learn how to do it is relatively short. As people volunteer so much data online, there’s no shortage of source material or potential targets. Meanwhile, the prospective profits are enormous. According to the United States FTC, 3.2 million instances of identity theft were reported last year, totaling losses of about $1.9 billion.

If you publicly post photos of yourself on Twitter, Instagram, Facebook, etc., anyone can easily find enough material to spoof your face. If you host livestream videos or have a bad habit of accepting phone calls from unknown numbers, your voice can be recorded and used to produce deepfake audio. If an attacker also knows your email address or other personal information often collected and breached en masse, it is possible that you could be imitated with deepfake media to trick your coworkers, friends, relatives, and neighbors into handing over money, company assets, or intellectual property.

How can I protect myself from deepfake attacks?

Here are the most important and easiest ways to protect yourself.

  • Prevent your face and voice from being recorded without your knowledge. Any employee who could compromise enterprise security by being spied on should cover their webcam when not in use, reject or screen phone calls from unknown numbers, and practice vigilant Internet hygiene.
  • All employees should avoid spyware by practicing good Internet hygiene and avoid falling for scams by attending security trainings. IT leaders should verify enterprise integrity with breach and attack simulation to prevent deepfake malware from reaching employees in the first place.
  • All employees should get comfortable with verifying each other’s identities. As social engineering attacks become more powerful and persuasive, security professionals should brief their organizations about getting ahead of the curve with multifactor identity verification. Employees need to get comfortable with new protocols regarding the handling of sensitive information.
  • Be cautious when posing in photos that will be shared publicly without your control. Model this behavior yourself and encourage others to do the same; when a friend asks to take a photo with you, ask your friend if they intend to share the photo and with whom. If they are posting it on social media, ask what privacy setting they will use. If they’re going to post the photo publicly, think twice before agreeing to be in the photo.
  • If you must have publicly visible photos online, such as for maintaining brand presence, keep the number of public photos to a minimum. While it takes dozens or even hundreds of photos to make a fully controllable 3D model of a face with flawless results, remember that passable results are possible using only a single photo.
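One lightweight way to implement the multifactor identity verification mentioned above is an out-of-band challenge: before acting on a sensitive request, send a one-time code over a second, trusted channel and require the requester to read it back. The sketch below is illustrative only; the function names and the choice of channel are assumptions, not a standard protocol.

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a one-time code to send over a *second*, trusted channel
    (not the channel the request arrived on -- that one may be spoofed)."""
    return secrets.token_hex(4)

def verify_response(issued: str, echoed: str) -> bool:
    """Check the code the requester read back, using a constant-time
    comparison to avoid leaking information through timing."""
    return hmac.compare_digest(issued, echoed)

# Usage: a "CEO" calls asking for a wire transfer. Before complying,
# the employee texts a code to the CEO's known phone number and asks
# the caller to repeat it. A deepfaked voice alone cannot answer.
code = issue_challenge()
print(verify_response(code, code))        # genuine requester
print(verify_response(code, "xxxxxxxx"))  # impostor guessing
```

The security comes from the second channel, not the code itself: an attacker who can fake a voice on a phone call usually cannot also intercept the target’s known phone number or chat account.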

This is where the easy advice ends. Now the harder work begins.

Change your privacy settings on all your social media accounts to make all posts default to friends-only. Make private or delete old posts and photos you no longer wish to keep online.

Reviewing all your past posts for privacy settings and informational content may seem like a lot of work, but there is a way to make it easier. Many social media platforms have an anniversary feature that shows you posts from any number of years ago. Turn on this feature and use it every day, checking the privacy settings of each post it surfaces to make sure they align with your present-day concern for privacy. By the end of a year, you’ll have reviewed every post on your profile page. Do this continuously, and you’ll always be confident in the integrity of your social media content and reduce your risk of falling victim to a deepfake attack.

If you want to investigate other aspects of your social posting, you can download all your data and pick through it on your own terms.
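Once you have the export, a short script can triage it faster than clicking through the UI. The JSON layout below is hypothetical (each platform uses its own schema), but the idea of filtering for public posts, prioritizing those with photos, carries over.

```python
import json

# Hypothetical export format -- real platforms each use their own schema.
export = json.loads("""
[
  {"id": 1, "text": "beach day!", "privacy": "public",  "has_photo": true},
  {"id": 2, "text": "dinner",     "privacy": "friends", "has_photo": true},
  {"id": 3, "text": "hot take",   "privacy": "public",  "has_photo": false}
]
""")

# Flag anything public, listing posts with photos first, since photos
# are the raw material for face-model construction.
flagged = sorted(
    (p for p in export if p["privacy"] == "public"),
    key=lambda p: not p["has_photo"],
)
for post in flagged:
    print(f"post {post['id']}: public, photo={post['has_photo']}")
```

On a real export you would replace the inline JSON with the file the platform gives you, but the filtering logic stays the same.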

Bad advice that needs to be corrected

If you want to publicly post photos of yourself frequently, you might be tempted to use one of many experimental methods for fooling computer vision systems, such as adding procedurally generated noise to your photos. These techniques have stopped systems from recognizing human faces in the past while leaving the image legible to humans. This kind of obfuscation can prevent your photos from being collected by bots that rely on image classification programs to find source material, but it will not protect you from an attacker who already knows your identity. It only stops your photos from being collected automatically. Additionally, photos that remain human-readable are more likely to produce deepfakes that remain human-foolable. Some artifacts in modified images produce even worse artifacts in the corresponding deepfakes, but as mentioned previously, even primitive results can fool some people. It’s too easy for uncritical audiences to write off slight imperfections as video compression artifacts.

What’s more, these obfuscation methods aren’t future-proof. I treated a photo of my own with some previously published methods of obfuscation to test how they stand up today. For speed and ease of use in this demonstration, I used Google’s image classification system embedded in its reverse image search function. Here I’m simply testing whether automated photo collection by facial recognition is possible, not whether the photo in question would make a passable one-photo deepfake.

Methods I tested included adding multiple types of procedural noise and fractal patterns, image masks specifically designed by researchers to fool facial recognition systems, color distortion tricks, and more. The results of my tests were quite surprising. The only tactic that prevented recognition of my face was also the simplest: obscuring my face with a prop. I’m amazed that none of the methods I found recommended online actually worked.

[Screenshot: Google reverse image search result]

To reiterate, every high-tech method I tested, including methods from reputable academic research, failed this very basic test within two years of publication. I hope it is abundantly clear that you should not rely on emerging research to defeat facial recognition and prevent deepfake attacks. Even though major deepfake attacks have mostly targeted high-profile figures, we expect their frequency to increase as the technology improves and becomes more accessible. You may believe you will never fall victim to a deepfake attack; even so, it’s important to proceed with caution and ensure that your personal data remains protected. Continuously assessing your security infrastructure for weaknesses, training your employees to spot red flags, and maintaining good Internet hygiene are a few ways to lessen your chances of being affected by deepfake attacks.


Do you want to learn more about cybersecurity? Please subscribe to our newsletter.