Meta spent years figuring out how to handle political advertising across Facebook and Instagram. It put systems into place and developed policies for what types of political ads were and were not allowed on its platforms.
But that was before the rise of consumer artificial intelligence.
On Wednesday, Meta introduced a new policy to grapple with A.I.’s effects on political advertising. The Silicon Valley company said that starting next year, it would require political advertisers around the world to disclose when they had used third-party A.I. software in political or social issue ads to synthetically depict people and events.
Meta added that it would bar advertisers from using its own A.I.-assisted software to create political or social issue ads, as well as ads related to housing, employment, credit, health, pharmaceuticals or financial services. Those advertisers would be able to use third-party A.I. tools such as the image generators DALL-E and Midjourney, but with disclosures.
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of generative A.I. in ads that relate to potentially sensitive topics in regulated industries,” the company said.
Meta is reckoning with a wave of A.I. tools that the public has embraced over the past year. As consumers have flocked to ChatGPT, Google Bard, Midjourney and other “generative A.I.” products, big tech companies such as Meta have had to rethink how to handle a new era of manipulated or outright false imagery, video and audio.
Political advertising has long been a contentious issue for Meta. In 2016, Facebook was criticized for a lack of oversight after Russians used the social network’s ads to sow discontent among Americans. Since then, Mark Zuckerberg, Meta’s founder and chief executive, has spent billions of dollars working to tamp down disinformation and misinformation on the company’s platforms and has hired independent contractors to closely monitor political ads that go through the system.
The company has also declined to bar politicians from lying in ads on its platforms, a stance Mr. Zuckerberg has defended on the grounds of free speech and public discourse. Meta has similarly been reluctant to limit the speech of elected officials. Nick Clegg, Meta's president of global affairs, has called for regulatory guidance on such issues rather than having tech companies determine the rules.
Those who run political ads on Meta are currently required to complete an authorization process and include a “paid for by” disclaimer on the ads, which are stored in the company’s public Ad Library for seven years so journalists and academics can study them.
When Meta’s new A.I. policy goes into effect next year, political campaigns and marketers will be asked to disclose whether they used A.I. tools to alter the ads. If they have and the ad is accepted, the company will run it with the information that it was created with A.I. tools. Meta said it would not require advertisers to disclose alterations that were “inconsequential or immaterial to the claim, assertion or issue raised,” such as photo retouching and image cropping.
Political and social issue ads that appear to have used A.I. to alter images, video or audio without disclosing it will be rejected, the company said. Organizations that repeatedly try to submit such ads without disclosures will be penalized, it added, though it did not specify what the penalties might be. The company has long had third-party fact-checking partners review, rate and potentially remove ads designed to spread misinformation.
By barring advertisers from using the company’s own A.I.-assisted software to create political or social issue ads, Meta may be able to prevent headaches or litigation related to its advertising technology.
In 2022, the Justice Department sued the company for allowing advertisers to discriminate against Facebook users based on their race, gender, religion and other characteristics. The company settled the lawsuit, agreeing to alter its ad technology and pay a penalty of $115,054.