A segment by a news outlet in Japan revived rumors that Vladimir Putin uses body doubles for public appearances, and that generative AI is being used to make photos of those doubles look more like the real Russian president. The persistent and unsubstantiated claim circulated for months before resurfacing in Japan, according to a Daily Mail report. Japanese researchers reportedly analyzed video footage and photos of Putin at different events and suggested that at least three people have portrayed him. Although the Kremlin has promptly dismissed such claims, the story illustrates a side effect of the rapid advances in AI: increasingly convincing deepfakes.
An AI deepfake is synthetic media, typically video or images, in which algorithms swap faces or other features to make the fake appear real. Deepfakes have been recognized as a dangerous new tool for political manipulation, and detecting them has become a fast-growing industry, with OpenAI, a leader in generative AI, claiming it can identify a deepfake with 99% accuracy. But researchers say doing so is not easy, and it will only get harder.
Launched in 1994, Digimarc provides digital watermarking that helps identify and protect digital assets and copyright ownership. Watermarks are one of the more commonly suggested ways to identify deepfakes, and companies like Google are exploring the approach. During Google Cloud Next in San Francisco, Alphabet and Google CEO Sundar Pichai unveiled a watermarking feature in the Google Vertex AI platform.
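Conceptually, an invisible watermark hides a machine-readable signal inside pixel data so that detection tools can later verify an image's origin. A minimal sketch of the simplest version of this idea, least-significant-bit (LSB) embedding, is shown below; the function names are invented for illustration, and production schemes from Digimarc or Google are proprietary and far more robust to cropping, compression, and tampering:

```python
# Illustrative LSB watermarking sketch, not any vendor's actual algorithm.

def embed_watermark(pixels, mark_bits):
    """Hide a bit string in the least significant bit of each pixel value."""
    out = list(pixels)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read the hidden bits back out of the first `length` pixels."""
    return [p & 1 for p in pixels[:length]]

# Toy 8-pixel grayscale "image" and a 4-bit mark.
image = [200, 13, 77, 254, 9, 128, 64, 33]
mark = [1, 0, 1, 1]
stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, 4) == mark
# Each pixel changes by at most 1, so the mark is invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

The sketch also shows the weakness McCormack describes: a generator that never calls anything like `embed_watermark` produces unmarked images, so without ubiquitous adoption the absence of a watermark proves nothing.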
As a strategy, however, watermarking has a number of weaknesses. McCormack said it won't be enough to stop AI deepfakes. "The problem with solely doing generative AI with watermarking is not every generative AI engine is going to adopt it," he said. "The problem with any system of authenticity is unless you have ubiquity, just marking it one way doesn't do anything."
Because AI image generators pull from freely available images on the internet and social media, other experts have advocated embedding code that degrades the image inside the generator, or planting a "poison pill" by mislabeling the data, so that when it is fed into an AI model, the model cannot create the desired image and could even collapse under the stress. AI deepfakes have already spoofed U.S. President Joe Biden, Pope Francis, and former President Donald Trump. With deepfake technology improving by the day, AI detectors are left playing Whack-a-Mole in a race to keep up.
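The "poison pill" idea can be sketched in miniature: deliberately corrupt the labels on a slice of scraped training data so a model that ingests it learns the wrong associations. Everything below (the function name, the flip rate, the cat/dog labels) is invented for illustration; real poisoning tools typically perturb the pixels themselves in imperceptible ways and are far more sophisticated:

```python
# Toy sketch of label poisoning; not any real tool's method.
import random

def poison_dataset(samples, flip_rate=0.3, seed=0):
    """Flip a fraction of labels so scraped data misleads a model."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in samples:
        if rng.random() < flip_rate:
            label = "cat" if label == "dog" else "dog"  # mislabel on purpose
        poisoned.append((features, label))
    return poisoned

# Ten toy (features, label) pairs; poisoning leaves features untouched
# but silently swaps some labels.
clean = [([0.1 * i], "dog" if i % 2 else "cat") for i in range(10)]
dirty = poison_dataset(clean)
flipped = sum(1 for a, b in zip(clean, dirty) if a[1] != b[1])
print(f"{flipped} of {len(clean)} labels flipped")
```

A model trained on the poisoned set would associate some "dog" features with "cat," which is the collapse-inducing effect the experts describe, scaled down to a toy example.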
Adding to the dangers of AI deepfakes is the rise in AI-generated child sexual abuse material (CSAM). According to a report released in October by the UK internet watchdog the Internet Watch Foundation, AI-generated CSAM is spreading rapidly online via open-source AI models that are free to use and distribute. The IWF warned that deepfake pornography has advanced to the point where telling AI-generated images of children apart from images of real children has become increasingly difficult, leaving law enforcement pursuing online phantoms instead of actual abuse victims.
Another issue McCormack raised is that many of the AI models on the market are open source and can be used, modified, and shared without restriction. "If you're putting a security technique into an open-source system," McCormack said, "you're putting a lock there, and then right next to it, you're putting the blueprint for how to pick the lock, so it doesn't really add a lot of value." Sexton said he is optimistic about one thing when it comes to image-generation models: as larger models like Stable Diffusion, Midjourney, and DALL-E improve while retaining strong guardrails, older models that allow local creation of CSAM will fall into disuse.
While government leaders are a prime target for AI deepfakes, more and more Hollywood celebrities are finding their likenesses used in online scams and advertisements featuring AI deepfakes. Looking to stop the unauthorized use of her image, Hollywood actor Scarlett Johansson said she is pursuing legal action against AI company Lisa AI, which used an AI-generated image of the Avengers star in an ad. Last month, YouTube star MrBeast alerted his followers to an online scam using his likeness, and even Tom Hanks was the victim of an AI deepfake campaign that used his likeness to promote, of all things, a dental plan.
In September, Pope Francis, the subject of many AI deepfakes, made the technology the centerpiece of a holiday sermon for World Peace Day. The religious leader called for open dialogue on AI and what it means for humanity. "The remarkable advances made in the field of artificial intelligence are having a rapidly increasing impact on human activity, personal and social life, politics and the economy," Francis said.
As AI detection tools get better, McCormack said, AI deepfake generators will also get better, leading to another front of the AI arms race that began with the launch of ChatGPT last year. “Generative AI didn’t create the deepfake problem or the disinformation problem,” McCormack said. “It just democratized it.”
You can read more about this topic here: Decrypt: The Vladimir Putin Body Double Rumor Won’t Die—And AI Makes It Seem Plausible