Meta, the parent company of Instagram and Facebook, announced on Wednesday that it will require political advertisers worldwide to disclose their use of artificial intelligence in their ads. The initiative is part of a broader effort to combat “deepfakes” and other digitally altered deceptive content.
Political Advertisers Will Reveal AI-Generated Deepfakes
The new rule is scheduled to take effect next year, ahead of the 2024 US election and other upcoming elections around the world. According to a company blog post, the policy applies to any political or social-issue advertisement on Facebook or Instagram that uses digital tools to create non-existent individuals, distort the depiction of real events, or show a person saying or doing things they did not actually say or do.
Minor AI applications in ads, like image cropping or color correction, that have no significant bearing on the ad’s message or claims are exempt from the disclosure rule.
This announcement follows Meta’s decision to limit political advertisers from utilizing the company’s AI advertising tools for tasks such as background generation, text recommendations, or music selection in video content.
Tech Companies Tackle Political Ad Transparency and AI Usage
Microsoft took a similar step on Tuesday, introducing a tool that will be available to political campaigns for free in the spring. The tool can apply a “watermark” to campaign content, allowing viewers to confirm its authenticity.
Microsoft President Brad Smith explained in a blog post: “These credentials become part of the content’s history and travel with it, creating a permanent record and context wherever it’s published. When a user encounters an image or video that contains Content Credentials, they can learn about its creator and origin by clicking on an embedded pin that reveals the asset’s history.”
The effort to restrict politicians’ utilization of AI in advertisements is a response to concerns voiced by civil society organizations and policymakers. They’ve highlighted the potential threats to democracy posed by the uncontrolled use of AI-generated content in political discussions. Many have expressed concerns that AI could amplify the spread of disinformation by both foreign and domestic entities, a danger that has grown due to recent reductions in content moderation teams within the industry.
Facebook Will Reject Ads That Do Not Meet Terms
The move also marks a rare instance of Meta policing political discourse. The platform has faced criticism for permitting politicians to spread false information in campaign advertisements and for exempting politicians’ statements from third-party fact-checking. Mark Zuckerberg, the company’s CEO, has previously argued that politicians should be free to make inaccurate statements, leaving viewers and voters to decide how to hold them accountable.
However, the decisions to require that political advertisers disclose their AI usage and to restrict Meta’s AI tools for political ads suggest there are limits to how much leeway Zuckerberg is willing to grant politicians when it comes to new technology.
Meta stated in its blog post on Wednesday: “If we find that an advertiser fails to disclose as required, we will reject the ad, and persistent failure to disclose may lead to penalties against the advertiser.”