AI Tools Like ChatGPT Used in Election Disruption, OpenAI Reports
OpenAI has issued a stark warning about the misuse of its AI models, including ChatGPT and DALL-E, by foreign threat actors seeking to influence elections worldwide. In a detailed report, the company disclosed that it had already disrupted more than 20 such operations, with more anticipated as the U.S. presidential race approaches.
Global democratic processes at risk
The report’s findings come amid what experts are calling the largest-ever exercise of democracy, with over 50 countries scheduled to hold elections this year. Concerns are mounting over the role of generative AI in spreading disinformation and manipulating public opinion, prompting tech companies to ramp up their defenses against election interference.
Manipulative AI tactics exposed
According to OpenAI, malicious actors have been leveraging ChatGPT to generate fake personas, craft articles, and engage in deceptive social media tactics. “Multi-stage efforts to analyze and reply to social media posts” were also among the tactics identified, underscoring the increasingly sophisticated strategies used in these covert operations.
“In this year of global elections, we know it is particularly important to build robust, multi-layered defenses against state-linked cyber actors and covert influence operations that may attempt to use our models in furtherance of deceptive campaigns,” stated OpenAI’s latest report. The company revealed that it has successfully disrupted over 20 such operations since the beginning of 2024.
OpenAI’s report pointed to state-affiliated groups from countries including China, Iran, and Russia as key players in these campaigns. These groups allegedly used AI to create and spread disinformation targeting regions such as West Africa and the UK.
Case study: Russia-linked operation in the UK and West Africa
One significant case involved a “Russia-origin threat actor” generating English and French content aimed at influencing audiences in West Africa and the UK. The operation reportedly built fake news websites posing as legitimate media outlets and established “information partnerships” with local entities, including a church in Yorkshire, a school in Wales, and an association of chambers of commerce in California.
“This operation used our models to generate short comments, long-form articles, and images. The long-form articles in English and French were then posted on a cluster of websites that posed as news outlets in Africa and the UK,” stated the report.
OpenAI’s push for election integrity
OpenAI CEO Sam Altman has voiced his concerns about AI’s potential to compromise election integrity. Last year, he testified before Congress, warning of the threat posed by generative AI to democracy. The company remains focused on countering the misuse of its technology to safeguard global democratic processes.
As the 2024 election season unfolds, OpenAI’s report serves as a critical reminder of the challenges posed by AI in the fight against disinformation and election manipulation.