OpenAI, maker of the highly successful artificial intelligence (AI) chatbot ChatGPT, has announced it is rolling out tools to combat disinformation ahead of a series of elections around the world this year.
The year 2024 is set to see elections in some of the world’s largest and most politically sensitive democracies, including Pakistan. Leading tech companies, such as Facebook’s parent organisation Meta, have announced protective measures aimed at curbing a potential surge in electoral misinformation and disinformation around these votes.
Since the November 2022 launch of ChatGPT, an AI chatbot capable of producing large volumes of text on demand, concerns have mounted about the potential spread of textual and visual misinformation during democratic processes.
OpenAI, which is already the target of major copyright lawsuits, is also facing a slew of questions about how it will manage its AI products during elections. In response to these concerns, the company has released a statement outlining its approach to countering misinformation.
“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” says OpenAI. “We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges.”
In its update, OpenAI places particular emphasis on what it calls “abuse”. “We work to anticipate and prevent relevant abuse—such as misleading ‘deepfakes’, scaled influence operations, or chatbots impersonating candidates.”
OpenAI says it is developing tools to help identify whether an image used in a political campaign was generated with DALL·E 3, its image-generation application. “Our internal testing has shown promising early results, even where images have been subject to common types of modifications,” the firm says.
“We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.” As for ChatGPT, users will have the option to report potential violations.
The chatbot will also give users access to real-time news reporting from around the world, complete with links and attribution.
With the rise of accessible, easy-to-use AI technologies, particularly image- and video-generating applications, experts have warned of a possible increase in voter manipulation in upcoming elections around the world, including in the United States and India, where instances of AI-generated media have already made headlines.
“Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust,” OpenAI says, adding that it will share more details in the coming weeks.
AI experts have also reported being approached by political representatives seeking commissioned AI-generated content for use in political campaigns.