OpenAI, creator of the widely successful artificial intelligence (AI) chatbot ChatGPT, has revealed that its platform is being used by malicious actors to undermine elections globally.
In a report released last week, OpenAI states that it has “disrupted more than 20 operations and deceptive networks from around the world” that attempted to weaponise ChatGPT and other OpenAI products to undermine electoral processes.
The malicious activities ranged from debugging malware and writing misleading articles for websites to generating content that was used by fake accounts on social media, according to OpenAI.
The AI creator says several deceptive networks posted content about elections in India, the United States (US), Rwanda, and the European Union (EU). These coordinated campaigns, however, were unable to attract “viral engagement” or build “sustained audiences”.
In addition, OpenAI claims it encountered a covert operation that generated comments about political affairs in the US, Poland, Germany, Italy, as well as the European Parliament elections in France.
A majority of social media posts that the company identified as being generated from OpenAI models received few or no likes, shares, or comments. The company did, however, identify some instances when real people replied to these posts.
The 54-page report arrives less than a month before the US goes to the polls. The year 2024 is witnessing more than 40 states holding general elections, involving billions of people. Besides mis- and disinformation, the dramatic rise in deceptive material such as AI-facilitated deepfakes has fuelled serious concerns over election integrity.
Moreover, the weaponisation of ChatGPT has led to an alarming rise in misleading written posts, along with accompanying hashtags, deployed in coordinated disinformation campaigns on social media platforms by political parties and their supporters alike.
In the lead-up to and during the general elections held on February 8, 2024, in Pakistan, Facter — a digital investigation cell operated by Media Matters for Democracy (MMfD) — debunked a significant volume of AI-generated videos. A majority of these videos depicted electoral candidates announcing a boycott of the elections or joining hands with their rivals. Deceptive images of ballot boxes and polling stations surfaced, too.