Keeping Deepfakes Out of Court May Take Shared Effort
By Jule Pattison-Gordon
As artificial intelligence evolves and digital fakery becomes both more pervasive and more convincing, court officials are already thinking about how to stop it from unduly influencing legal proceedings.
No solution will be foolproof, but experts say the time has come to start building guardrails and considering countermeasures. Members of the judicial and tech communities alike are sounding the alarm about the possibility — and probability — that deepfaked evidence will soon show up in court. If juries fall for fabrications, they would base decisions on falsehoods and unfairly harm litigants. And authentic images and videos could be mistakenly dismissed as fakes, causing similar harm...
Litigants or courts can ask digital forensic analysts to weigh in, but such work, while helpful, can be expensive and time-consuming, said Hany Farid, a digital forensics and misinformation expert at the University of California, Berkeley School of Information.
Digital forensic analysts can look for visible and invisible clues in an image. Extra fingers and distorted eyes are hallmarks of obvious deepfakes, but there are subtler signs...