From WIRED
Why Hollywood Really Fears Generative AI
By Will Bedingfield
The future of Hollywood looks a lot like Deepfake Ryan Reynolds selling you a Tesla. In a video, since removed but widely shared on Twitter, the actor is bespectacled in thick black frames, his mouth moving independently from his face, hawking electric vehicles: “How much do you think it would cost to own a car that’s this fucking awesome?”
On the verisimilitude scale, the video, which originally circulated last month, registered as blatantly unreal. Then its creator, financial advice YouTuber Kevin Paffrath, revealed he had made it as a ploy to attract the gaze of Elon Musk. (Which it did: the Tesla CEO replied to Paffrath’s tweet with a “nice.”) Elsewhere on Twitter, people beseeched Reynolds to sue. Instead, his production company responded with a similarly janky video in which a gray-looking Musk endorsed gin made by Aviation, a company Reynolds co-owns. That video has also since been deleted...
But will people care if what they’re watching was made by an AI trained on human scripts and performances? When the day comes that ChatGPT and other LLMs can produce filmable scenes based on simple prompts, unprotected writers’ rooms for police procedurals or sitcoms would likely shrink. Voice actors, particularly those not already famous for on-camera performances, are also in real danger. “Voice cloning is essentially now a solved problem,” says Hany Farid, a professor at the University of California, Berkeley, who specializes in analyzing deepfakes...
Hany Farid is a professor at the UC Berkeley School of Information and EECS. He specializes in digital forensics.