Meta, in an update to its artificial intelligence (AI) policy, is mandating the disclosure of AI use in political or social advertisements.
The development comes ahead of Australia's federal elections and is aimed at containing electoral misinformation. The new rules will apply to ads containing photorealistic images, video, or audio that has been altered using AI, or that show a real person who has been digitally manipulated.
Ads that depict a person or event that does not exist, doctored visuals of a real event, or a misleading image or audio recording of an event will also require disclosure.
Last week, Meta announced partnerships with notable news agencies, including Agence France-Presse (AFP) and the Australian Associated Press (AAP), to fact-check content across its platforms ahead of the Australian federal elections. The agencies will review content for Meta and help identify falsehoods.
Meta said it would remove any deepfake content that violates its policies, or label it as "altered" and rank it lower in users' feeds to reduce its distribution. People sharing AI-generated content will also be asked to disclose it.
“For content that doesn’t violate our policies, we still believe it’s important for people to know when photorealistic content they’re seeing has been created using AI,” Cheryl Seeto, Meta’s Head of Policy in Australia, said.