Asad Mirza

The rapid progress of AI, and generative AI in particular, has opened new frontiers for humanity to scale new heights in various fields. But the darker side of generative AI may be exploited for divisive purposes in the global electoral battles of 2024.

The manner in which information technology has outpaced even our wildest dreams is manifest in the fast-paced growth of AI in practically all fields of human existence.

AI-generated content can now impersonate anyone with alarming accuracy, and that has set alarm bells ringing. Coupled with the prospect of this technology being misused for unlawful activities, it is a frightening development.

One domain that could be affected the most is politics: the technology could be deployed by political parties to influence voters or malign their opponents.

This technological advancement is known as the deepfake. While developers could argue that their technology is evolving and mistakes are inevitable, the rapid rise of deepfakes, particularly those used for sexual harassment, fraud, and political manipulation, poses an existential threat to democratic processes and public discourse.

Deepfakes can be used for non-consensual exploitation, financial fraud, political disinformation and the dissemination of falsehoods. They endanger not only the integrity of individuals but also the very foundations of democratic societies.

Further, they corrode trust, manipulate public sentiment, and have the potential to incite widespread chaos and violence. From 2019 to 2023, the total number of deepfakes surged by 550%, as revealed in the 2023 State of Deepfakes report issued by Home Security Heroes, a US-based organisation.

With elections slated in roughly half of the world's countries, the potential for deepfakes to sow discord and undermine trust in institutions is more significant than ever. Various efforts are underway to combat this threat. Initiatives like the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections", announced at the recently held Munich Security Conference (MSC), represent a commendable effort to confront the challenges presented by deepfakes in electoral processes.

By uniting leading tech companies, the accord presents a unified stance against malicious actors, showcasing a shared determination to address the issue. Beyond mere detection and removal, the agreement encompasses educational initiatives, transparency measures, and origin tracing, laying the groundwork for comprehensive and enduring solutions.

However, the accord’s focus on the 2024 elections may overlook the ongoing evolution of the deepfake threat, potentially necessitating adjustments beyond the specified timeframe.

The efficacy of the accord also hinges on its ability to keep pace with the latest advancements. While the accord establishes guiding principles, it lacks concrete mechanisms for enforcement. Ensuring accountability among participating companies is essential for meaningful progress. Relying on self-regulation from tech companies raises concerns about potential biases in implementation, underscoring the need for transparent and impartial oversight.

The era of the deepfake campaign has already begun. And as generative AI gathers ubiquity and sophistication, the fraying of social cohesion throughout the West in recent years may soon feel quaint by comparison. Rather than stoke outrage, tribalism, and conspiratorial thinking among voters, these new digital tools might soon breed something arguably much worse: apathy. 

Indian Scenario

Deepfakes that spread misinformation online, powered by rapidly evolving AI technology, are particularly lethal for countries like India.

The concern stems from a series of recent deepfake incidents involving top Indian film stars and public figures. These prompted the government to work out a "clear, actionable plan" in collaboration with social media platforms, artificial intelligence companies and industry bodies to tackle the issue.

Reportedly, PM Narendra Modi has described deepfakes as one of the biggest threats facing the country, and warned people to be careful with new technology amid a rise in AI-generated videos and pictures.

Meanwhile, the Global Risks Report released by the World Economic Forum in January 2024 warns that as polarisation grows and technological risks remain unchecked, "truth" will come under pressure. Misinformation and disinformation emerge as the most severe global risk anticipated over the next two years, with foreign and domestic actors alike leveraging them to further widen societal and political divides.

The report further says that as close to three billion people head to the electoral polls across several economies – including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and the United States – over the next two years, the widespread use of misinformation and disinformation, and tools to disseminate it, may undermine the legitimacy of newly elected governments. 

The resulting unrest could range from violent protests and hate crimes to civil confrontation and terrorism. Misinformation and disinformation are a new entrant at the top of the report's 10 leading risks this year. No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called "synthetic" content, from sophisticated voice cloning to counterfeit websites.

Moreover, in Western countries in particular, an even deeper governance issue may be at play. AI may also accelerate a decades-long erosion of civic engagement and social capital, particularly in liberal democracies, where citizens tend to be more secular and self-oriented and have smaller family networks than in other societies, such as those in the East.

It is predicted that generative AI will deepen these social fissures by allowing more and more individuals to bypass the complex human interactions, exchanges and contests of ideas that form the bedrock of democracy. Against this background, generative AI might simply echo your thoughts and tell you what you want to hear, eroding human interaction, understanding and judgement.

This highlights the darker side of generative AI in the years to come. Citizens' ideals, inputs and actions on local, national and international issues might be distorted, creating a false picture in their minds and forcing them to opt out of normal civic life altogether. It is a scenario akin to humans being controlled by machines, except that in a democracy it will be political leaders, not machines, who control the electorate through false narratives.

Thus, it may portend the end of genuine human deliberation at all levels and in all fields, as manipulative leaders could use generative AI to manufacture sympathy for themselves among the wider electorate and influence the outcome of electoral battles in a thoroughly corrupt manner.

(Asad Mirza is a Delhi-based senior political and international affairs commentator.)