Google will require political advertisements to clearly inform viewers if they feature content generated by artificial intelligence (AI), the company has announced in an update to its political content policy.
The development arrives roughly a year ahead of the next US presidential election, scheduled for November 2024. However, the revised policy is also poised to have a significant impact on electoral processes in other markets, including India, Africa, and the European Union (EU). The policy covers both Google and YouTube (both owned by Alphabet) and applies to video, audio, and image content in promotional political material.
Under the updated policy, such ads must carry a disclaimer that clearly discloses the use of AI-generated content to viewers. The new rule, which takes effect next month (mid-November), aims to bring greater transparency to paid political content. It arrives at a time when the potential abuse of generative AI tools is raising complex challenges, particularly around sensitive events such as elections.
“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” a Google spokesperson said. The revised policy would further support responsible political advertising and provide voters with the information they need to make informed decisions, according to the company.
In the US, highly persuasive AI-generated imagery in some presidential campaign ads has already begun to make waves within political circles. Google will therefore mandate labels for synthetic content that could mislead viewers online. Labels will be required “if a person is saying or doing something they didn’t say or do” in the ads, but not if the AI alterations amount to routine edits such as resizing and color correction.
With generative AI tools progressing rapidly, digital safety and technology experts have raised concerns about a potential rise in misinformation during upcoming election cycles around the world. Leading tech corporations such as Meta and Alphabet already face regulatory scrutiny over inadequate content moderation on their social media platforms, failures that have led to real-world harm in developing markets.
The risk of misinformation and disinformation on online platforms, which often results in violence, harassment, and bullying, is rising sharply as advanced artificial intelligence tools become more accessible and sophisticated. The convincing imagery and audio manipulation these tools afford could significantly affect electoral processes in countries that lack the political and market power to hold Big Tech corporations to account.
Low digital literacy in these markets further compounds the problem, as large segments of the online population easily fall prey to coordinated disinformation campaigns.