Meta recently announced several generative AI tools to aid advertisers on Facebook and Instagram.
Outsiders feared that these tools would enable more false advertising content – especially related to political campaigns.
Political ad spending has skyrocketed in the past decade and is expected to top $2 billion for the 2024 US elections.
Now Meta has banned the use of these AI advertising tools in political campaigns, a move intended to curb the creation of misleading ads with generative AI.
Generative AI can imitate voices and likenesses, making it easy for campaigns to put words in the mouths of political adversaries and create confusion.
In addition, Meta announced that political advertisers must disclose whether AI was used in any way to create or alter ads that were not developed with Meta's own advertising tools.
Meta has received bad press in the past about fostering misinformation through its platform.
The D.C. Attorney General is still investigating Meta’s internal handling of misinformation surrounding the COVID-19 vaccine from several years ago.
After Meta CEO Mark Zuckerberg testified before Congress in 2021 about the proliferation of misinformation, the company was accused of rolling back some of the safety mechanisms it used to police content.
The company continues to struggle with this issue – most recently over pro-Hamas content, which it eventually began taking down after the EU threatened to sue it again.
This latest controversy over the use of AI will only continue the debate about posts, advertisers, and how to tell the difference between fact and misinformation.