Meta, the owner of Instagram and Facebook, is setting up a team to prevent voters from being misled by deceptive content generated through artificial intelligence (AI) in the upcoming European Union (EU) elections.
This year, more than 60 countries will go to the polls, with technology-related challenges posing growing threats to the integrity of electoral processes. With misleading AI-generated content, including deepfakes, synthetic audio, and realistic imagery, proliferating online, pressure has mounted on tech conglomerates to step up measures to counter misinformation and disinformation on their social media platforms.
Meta, which has been under intense regulatory scrutiny for its questionable data privacy practices for the past two years, has announced the formation of an “Elections Operation Center” to deal with deceptive AI during the EU Parliament elections scheduled for June. The Facebook owner’s approach to managing content across its platforms during the elections includes countering misinformation, influence operations, and risks stemming from the abuse of AI technologies, according to a statement published by the company this week.
“Ahead of the elections period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections because we recognize that speed is especially important during breaking news events,” says Meta. The tech giant will use keyword detection to group related content, making it easier for its fact-checkers to identify and investigate content that violates its guidelines. Meta is also creating what it says is a new research tool, called “Meta Content Library”, with a “powerful search capability” to support digital investigators in their work.
According to Meta, it does not allow advertisements that contain debunked content, or ads that target the EU and discourage people from participating in the elections. Ads that question the legitimacy of electoral processes, make premature claims of election victory, or cast doubt on the legitimacy of electoral methods, processes, and outcomes are also prohibited. The firm is collaborating with the European Fact-Checking Standards Network (EFCSN) to train fact-checkers on investigating AI-generated and digitally altered media. It is also developing a media literacy campaign to raise public awareness of misleading content.
To tackle influence operations — which Meta defines as “coordinated efforts to manipulate or corrupt public debate for a strategic goal” — the firm has developed specialised global teams to counter coordinated inauthentic behaviour. “This is a highly adversarial space where deceptive campaigns we take down continue to try to come back and evade detection by us and other platforms, which is why we continuously take action as we find further violating activity,” says Meta. “In preparation, we conducted a session to focus on threats specifically associated with the EU Parliament elections.”
Meta applies its Community Standards and Ads Standards to all types of content, it says, including material generated through AI, and will take action against AI-generated media that violates its policies. Meta says it believes it is important for users to be aware when content has been created using AI, even if that content does not violate the company’s guidelines. “We already label photorealistic images created using Meta AI, and we are building tools to label AI generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads.”
Full statement here