May 11, 2020

All’s Clear for Deepfakes? Think Again.

By Hany Farid, Robert Chesney, Danielle Citron

The verdict was in, and it was a comforting one: Deepfakes are the “dog that never barked.” So said Keir Giles, a Russia specialist with the Conflict Studies Research Centre in the United Kingdom. Giles reasoned that the threat posed by deepfakes has become so entrenched in the public’s imagination that no one would be fooled should they appear. Simply put, deepfakes “no longer have the power to shock.” Tim Hwang agreed but for different reasons, some technical, some practical. Hwang asserted that the more deepfakes are made, the better machine learning becomes at detecting them. Better still, the major platforms are marshaling their efforts to remove deepfakes, leaving them “relegated to sites with too few users to have a major effect.”

We disagree with each of these claims. Deepfakes have indeed been “barking,” though so far their bite has most often been felt in ways that many of us never see. Deepfakes have in fact taken a serious toll on people’s lives, especially the lives of women. As is often the case with early uses of digital technologies, women are the canaries in the coal mine. According to Deeptrace Labs, of the approximately 15,000 deepfake videos appearing online, 96 percent are deepfake sex videos, and 99 percent of those involve women’s faces inserted into porn without consent.

Even for those who have heard a great deal about the potential harms from deepfakes, the capacity to be shocked remains strong. Consider the fate that befell journalist and human rights activist Rana Ayyub. When a deepfake sex video appeared in April 2018 purporting to show Ayyub in a sex act that never occurred, the video spread like wildfire. Within 48 hours, the video appeared on more than half of the cellphones in India. Ayyub’s Facebook profile and Twitter account were overrun with death and rape threats. Posters disclosed her home address and claimed that she was available for anonymous sex. For weeks, Ayyub could hardly eat or speak. She was terrified to leave her house lest strangers make good on their threats. For months, she stopped writing, her life’s work. That is shocking by any measure.

Is this really any different from the threat posed by familiar, lower-tech forms of fraud? Yes. Human cognition predisposes us to be persuaded by visual and audio evidence, especially when the video or audio in question is of such quality that our eyes and ears cannot readily detect that something artificial is at work. Video and audio have a powerful impact on people. We credit them as true on the assumption that we can believe what our eyes and ears are telling us. The more salacious and negative the deepfake, moreover, the more inclined we are to pass it on. Researchers have found that online hoaxes spread 10 times faster than accurate stories. And if a deepfake aligns with our viewpoints, then we are still more likely to believe it. Making matters worse, the technologies associated with creating deepfakes are likely to diffuse rapidly in the years ahead, bringing the capability within realistic reach of an ever-widening circle of users—and abusers.

Growing awareness of the deepfake threat is itself potentially harmful. It increases the chances that people will fall prey to a phenomenon that two of us (Chesney and Citron) call the Liar’s Dividend. Instead of being “fooled” by deepfakes, people may grow to distrust all video and audio recordings. Truth decay is a boon to the morally corrupt. Liars can escape accountability for wrongdoing and dismiss real evidence of their mischief by saying it is “just a deepfake.” Politicians have already tried to leverage the Liar’s Dividend. When the Access Hollywood tape was released in 2016, for example, then-candidate Trump struggled to defend his words: “I don’t even wait. And when you’re a star, they let you do it. You can do anything. ... Grab them by the pussy. You can do anything.” A year later, however, President Trump tried to cast doubt on the recording by saying that it was fake or manipulated. The president later made a similar claim in trying to distance himself from his own comments on the firing of FBI Director James Comey during an interview with NBC’s Lester Holt. Such attempts will find a more receptive audience in the future, as awareness grows that it is possible to make fake videos and audio recordings that cannot be detected as fakes solely by our eyes and ears.

Can technology really save us, as Hwang suggested, by spawning reliable detection tools? As an initial matter, we disagree with the claim that detection tools will inevitably win the day in this cat-and-mouse game. We certainly are not there yet with respect to detection of deepfakes that are created with generative adversarial networks, and it is not clear that we should be optimistic about reaching that point. Decades of experience with the arms races involving spam, malware, viruses and photo fakery have taught us that playing defense is difficult and that adversaries can be highly motivated and innovative, constantly finding ways to penetrate defenses. Even if capable detection technologies emerge, moreover, it is not assured that they will prove scalable, diffusible and affordable to the extent needed to have a dramatic impact on the deepfake threat.

To be sure, deepfakes are not the exclusive way to spread lies. Cheapfakes (lower-tech methods to edit video or audio in misleading ways) have long been profoundly effective in spreading misinformation. On May 23, 2019, a deceptively edited video of Nancy Pelosi that made her seem drunk or incapacitated began circulating. The president and his allies gleefully tweeted it to millions even though the Washington Post had already debunked the video as fakery. But it does not follow that we have nothing to fear from deepfakes. If readily debunked cheapfakes have proved so capable, it follows that we can expect wider and deeper harms from fakes that are more believable at first glance and harder to detect on closer inspection.

The fact that major platforms like Facebook and Twitter have banned some manner of digital forgeries is laudable, but it does not mean that deepfakes will inevitably be “relegated to sites with too few users to have a major effect.” Those bans do not implement themselves immediately or perfectly. Fakes have to be detected, and then they have to be judged fraudulent in a particular way that contravenes the policy (something that is not always self-evident even when modification is detected). Even then, much of the harm is already done. Because the half-life of social media posts is measured in hours, the harm from deepfakes in the form of nonconsensual porn, misinformation or fraud will come fast and furious, even with well-intended and well-executed policies. The dry, debunking truth will never quite catch up with the sizzling lie. So, yes, deepfakes will continue to “percolate in shady areas,” as Hwang suggested, but their greatest impact through high-volume platforms likely lies ahead of us, not behind us.

Now is not the time to sit back and claim victory over deepfakes or to suggest that concern about them is overblown. The coronavirus has underscored the deadly impact of believable falsehoods, and the election of a lifetime looms ahead. More than ever, we need to be able to trust what our eyes and ears are telling us.


Originally published as “All’s Clear for Deepfakes? Think Again.” by Lawfare on May 11, 2020. Reprinted with the author’s permission.

Hany Farid is a professor at UC Berkeley in the School of Information and the Department of Electrical Engineering and Computer Sciences (EECS). He specializes in digital forensics.
