As India awaits the outcome of the 2024 Lok Sabha elections, a recent report from OpenAI reveals that actors from Russia, China, Iran, and Israel have leveraged its AI models to spread false information online.
According to the report, five operations were involved – two from Russia, one from China, one from Iran, and one run by an Israeli political campaign management firm known as STOIC. These operations used AI-generated content, including text and images, in their efforts.
One of these operations, run by STOIC and nicknamed Zero Zeno by OpenAI, the maker of ChatGPT, aimed to influence the 2024 Indian elections. However, these campaigns failed to reach a significant audience or drive engagement.
The report indicates that none of these networks achieved substantial real engagement, with OpenAI rating their impact at no higher than level 2 on a six-point “breakout scale” used to assess the effectiveness of influence operations.
Responding to OpenAI’s findings, Minister of State for Electronics and IT Rajeev Chandrasekhar labelled this a “dangerous threat” to democracy and criticised OpenAI for not alerting the public earlier, given that the threat was first detected in May.
Earlier reports from the Microsoft Threat Analysis Center (MTAC) had also highlighted China’s attempts to use AI-generated content to influence elections in several countries, including India, the US, and South Korea. Similar tactics were reportedly tested during Taiwan’s presidential election.
Even as these revelations emerge, domestic parties such as the BJP and the Congress have acknowledged using AI tools for campaigning. Political strategists say AI has become a significant asset, with the BJP leading in the use of GenAI for electoral purposes, while the Congress has made only minimal use of it.
In a conversation with AIM on the status of AI integration in the 2024 elections, independent political campaigner and strategist Sagar Vishnoi pointed out that the BJP leads the way in using AI to translate its messaging into multiple languages.