The UK-based Internet Watch Foundation (IWF) is warning of the rapid spread of AI-generated child sexual abuse material (CSAM). In a recent report, the organization found 20,254 AI-generated CSAM images on a single dark web forum in just one month and warned that such material could “overwhelm” the internet. The IWF is now tracking AI-generated CSAM depicting real victims of sexual abuse, as well as manipulated pictures of celebrities and famous children. It is calling for international collaboration to fight the scourge of CSAM and urging the UK prime minister to make it a priority on the agenda of the global AI Safety Summit in November. AI developers are encouraged to take measures such as prohibiting the use of their tools for creating child abuse material and de-indexing related models. Microsoft President Brad Smith has suggested using know-your-customer (KYC) policies, modeled on those employed by financial institutions, to help identify criminals using AI models, while the State of Louisiana has passed a law increasing the penalty for the sale and possession of AI-generated child pornography. The US Department of Justice has also updated its Citizen’s Guide to US Federal Law on Child Pornography page, emphasizing that images of child pornography are not protected under the First Amendment and are illegal under federal law.
This is concerning news: as AI image generators become more advanced, it becomes easier for criminals to create realistic depictions of real people. The IWF is calling for a multi-tiered approach to combat the abuse of AI, including changes to relevant laws, updated law enforcement training, and regulatory oversight of AI models.
This is an ongoing problem that requires countries to work together to ensure legislation is fit for purpose and to prioritize the removal of child abuse material from AI models.
You can read more about this topic here: Decrypt: AI-Generated Child Abuse Material Could ‘Overwhelm’ the Internet, UK Group Warns