OpenAI, a pioneer in the field of generative AI, is stepping up to the challenge of detecting deepfake imagery amid a rising tide of misleading content spreading on social media. At the recent Wall Street Journal Tech Live conference in Laguna Beach, California, the company’s chief technology officer, Mira Murati, unveiled a new deepfake detector boasting “99% reliability” in determining whether a picture was produced using AI. AI-generated images range from light-hearted creations to deceptive fakes capable of causing financial havoc. While the tool’s release date remains under wraps, its announcement has stirred significant interest.
In January 2023, the company unveiled a text classifier that purportedly distinguished human writing from machine-generated text produced by models like ChatGPT. But by July, OpenAI quietly shut down the tool, posting an update citing its low rate of accuracy. If Murati’s claim holds, this would be a significant moment for the industry, as current methods of detecting AI-generated images are not typically automated.
OpenAI is not only working on detecting harmful AI images; it is also setting guardrails to censor its own model, even beyond what is publicly stated in its content guidelines. Other companies such as DeepMedia, Microsoft and Adobe are also rolling up their sleeves to develop AI watermarking systems, though these are not foolproof.
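To give a rough sense of why watermarking is hard to make foolproof, here is a minimal, purely illustrative sketch of an invisible watermark hidden in an image's least significant bits. This is a toy example of my own, not how DeepMedia, Microsoft, Adobe, or OpenAI actually implement their systems; production schemes rely on far more robust techniques.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Hide a bit string in the least significant bit of the first len(bits) pixels."""
    flat = pixels.flatten().copy()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then set it to the watermark bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> list:
    """Read the bit string back out of the least significant bits."""
    return [int(v) & 1 for v in pixels.flatten()[:length]]

# Toy 4x4 grayscale "image" and an 8-bit provenance tag (hypothetical values).
image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]

stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, len(mark)) == mark  # detectable while the bits survive

# The fragility: any lossy step (JPEG re-encoding, resizing, a screenshot) rewrites
# low-order pixel values and silently destroys this naive mark.
degraded = (stamped.astype(np.int16)
            + np.random.randint(-2, 3, stamped.shape)).clip(0, 255).astype(np.uint8)
print(extract_watermark(degraded, len(mark)))  # likely no longer matches `mark`
```

The point of the sketch is the last few lines: a mark that survives only as long as the exact pixel values do is trivially erased by ordinary image handling, which is one reason watermark-based provenance is treated as a complement to detection tools rather than a complete answer.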
Humans, however, are not infallible either. Lasting solutions will require tech leaders, lawmakers and the public to work together to navigate this complex new frontier. #AITools #GenerativeAI #DeepfakeDetection #AIWatermarking
You can read more about this topic here: Decrypt: From ‘Low Rate of Accuracy’ to 99% Success: Can OpenAI’s New Tool Detect Deepfakes?