Meta Platforms Inc., the parent company of Facebook, has updated its policies to bar the use of its new generative AI tools in political advertising and in heavily regulated areas such as employment and credit services. The move reflects growing concern about AI's potential to spread false information, especially as political campaigns intensify. The company has explicitly prohibited the use of these tools for creating content on sensitive subjects, including elections and health products.

Google has adopted a similarly cautious stance, excluding political keywords from AI-generated ads and mandating transparency for election-related advertisements that feature synthetic content. Meanwhile, TikTok and Snapchat have opted out of political ads altogether, and X (formerly Twitter) has not ventured into AI-powered ad tools. Meta's policy chief, Nick Clegg, has emphasized the need for updated rules to govern the intersection of AI and politics. Meta has also taken measures to prevent realistic AI depictions of public figures and to watermark AI-generated content, further tightening controls over AI-generated misinformation in the public sphere. President Biden has likewise issued a 26-point executive order seeking to rein in AI development stateside and abroad.

This is welcome news: it shows that tech giants are taking steps to ensure the accuracy of AI-generated content and to prevent the spread of false information. #AIPolicy #AIAds #GenerativeAI #AISafety

You can read more about this topic here: Decrypt: Meta to Block Political, Financial Advertisers From Using AI: Report
