From PBS News Hour
Why ‘deepfake’ videos are becoming more difficult to detect
By Miles O'Brien
Sophisticated and deceptive altered videos known as “deepfakes” are causing alarm in the digital realm. The highly realistic manipulated videos are the subject of a House Intelligence Committee hearing on Thursday. As Miles O’Brien reports, the accelerating speed of computers and advances in machine learning make deepfakes ever more difficult to detect, amid growing fears of their weaponization...
Hany Farid:
The nightmare situation is that there's a video of President Trump saying, "I have launched nuclear weapons against North Korea." And somebody hacks his Twitter account, and that goes viral, and, in 30 seconds, we have global nuclear meltdown.
Do I think it's likely? No. But it's not a zero probability, and that should scare the bejesus out of you, right? Because the fact that that is not impossible is really worrisome.
Miles O’Brien:
Farid is most worried about deepfakes rearing their ugly heads during the 2020 election. So he and his team are carefully learning the candidates' patterns of speech and how they correlate with gestures as a way to spot deepfakes.
Hany Farid:
We do that, of course, by analyzing hundreds of hours of video of individuals.
We're focused on building models for all of the major party candidates, so that we can upload a video to our system, analyze it by comparing it to previous interviews, and then ask: what is the probability that this is consistent with everything we have seen before?
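Farid does not describe the model's internals here, but the general idea of scoring a new clip against a statistical profile built from hours of authentic footage can be sketched. Below is a minimal, hypothetical illustration in Python/NumPy, not the published system: per-frame feature vectors (standing in for measured speech and gesture cues) are summarized as a multivariate Gaussian, and a suspect clip is scored by its average Mahalanobis distance to that profile. All function names and the synthetic data are assumptions for illustration.

```python
# Illustrative sketch (not Farid's actual system): model a speaker's
# talking style as a multivariate Gaussian over per-frame features,
# then score new footage by how well it fits that reference model.
import numpy as np

rng = np.random.default_rng(0)

def fit_style_model(features):
    """features: (n_frames, n_dims) array of, e.g., facial-action-unit
    and head-pose measurements extracted from authentic interviews."""
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize for invertibility
    return mean, np.linalg.inv(cov)

def consistency_score(features, mean, cov_inv):
    """Average Mahalanobis distance of new frames to the reference model.
    Higher values mean the mannerisms are less consistent with the
    footage the model was built from."""
    diff = features - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return float(np.sqrt(d2).mean())

# Stand-in data: authentic frames, plus a "suspect" clip whose
# feature statistics are shifted relative to the reference footage.
authentic = rng.normal(0.0, 1.0, size=(5000, 8))
suspect = rng.normal(0.8, 1.3, size=(300, 8))

mean, cov_inv = fit_style_model(authentic)
print("authentic clip score:", consistency_score(authentic[:300], mean, cov_inv))
print("suspect clip score:  ", consistency_score(suspect, mean, cov_inv))
```

In this toy version, the suspect clip scores noticeably higher than held-out authentic frames; a real system would extract far richer, person-specific features and calibrate a probability from such scores.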
Watch the segment, listen to the audio, or read the full transcript...
Hany Farid is a professor at UC Berkeley in the School of Information, with a joint appointment in EECS.