From Scientific American
Defend Yourself against AI Impostor Scams with a Safe Word
By Ben Guarino
The most common fraud in the U.S. over the past year was the impostor scam. More than 856,000 instances, collectively draining $2.7 billion nationwide, were reported to the Federal Trade Commission in 2023. First, swindlers fake familiarity or authority—maybe by stealing the identity of a friend or relative, or by claiming to be a bank representative or a federal agent. Then, in that guise, they call, text or e-mail you and attempt to take your money.
And now artificial intelligence has larded these scams with an additional layer of duplicity: inexpensive voice-cloning services that an impersonator can easily abuse to make deceptive—and astonishingly convincing—phone calls in another person’s voice. These AI tools digest speech samples (perhaps snatched from videos posted online or from a supposedly “wrong number” phone call) and generate audio replicas of the stolen voice that can be manipulated to say basically anything...
Using a verbal password or code phrase may simply be the most straightforward way to combat AI voice scams. “I like the code word idea because it is simple and, assuming the callers have the clarity of mind to remember to ask, nontrivial to subvert,” says Hany Farid, a professor at the University of California, Berkeley, who has studied audio deepfakes. “Right now there is no other obvious way to know that the person you are talking to is who they say they are.” Farid and his wife have a code word. His pro tip: “Ask each other what the code is every once in a while—because unlike a [computer] password, we don’t use the code word very often, so it is easy to forget...”
Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley. He specializes in digital forensics.