As India prepares for its next general elections, AI has moved from a theoretical concern to an immediate reality. Deepfake videos of political leaders already circulate during state elections, while AI-powered tools enhance voter registration verification. With 970 million eligible voters, the world’s largest democracy faces a dual challenge: harnessing AI’s transformative potential while guarding against its capacity for manipulation.
India’s electoral rolls contain nearly a billion entries, with approximately 80 million additions, deletions, and corrections processed annually. Machine learning could transform this mammoth task. Traditional name-matching algorithms struggle with this linguistic diversity, in which “Mohammad” appears in 15 different spellings and “Kumar” surfaces in millions of records. AI models trained on Indian name patterns can identify potential duplicates even when spellings vary wildly or transliterations differ between Hindi and English. During Bihar’s 2024 Special Intensive Revision, manual verification took weeks; AI could reduce this to days while flagging anomalies like multiple registrations from single addresses.
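The kind of fuzzy matching described above can be sketched in a few lines. This is a toy illustration, not any actual Election Commission pipeline: the variant table and similarity threshold are invented for the example, and a production system would rely on far richer transliteration models.

```python
from difflib import SequenceMatcher

# Hypothetical table mapping common transliteration variants to one form.
VARIANTS = {
    "md": "mohammad",
    "mohammed": "mohammad",
    "mohamad": "mohammad",
}

def normalise(name: str) -> str:
    """Lowercase, strip punctuation, and map known spelling variants."""
    tokens = name.lower().replace(".", " ").split()
    return " ".join(VARIANTS.get(t, t) for t in tokens)

def likely_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    """Flag two roll entries whose normalised names are near-identical."""
    ratio = SequenceMatcher(None, normalise(a), normalise(b)).ratio()
    return ratio >= threshold
```

Here `likely_duplicate("Md. Irfan Khan", "Mohammed Irfan Khan")` returns True even though the raw strings differ substantially, because both normalise to the same canonical form.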
Computer vision can detect when identical photographs appear in multiple voter ID applications under different names — a common fraud technique. AI can optimise booth management across the roughly 1,500 polling stations in each parliamentary constituency. AI analysis of historical turnout patterns, demographic clustering, and geographic accessibility could balance loads months in advance. Campaign finance transparency offers another frontier. AI can cross-reference declared expenses against market rates, flagging when candidates claim to have spent Rs 50,000 on rallies that clearly cost Rs 5 lakh. Computer vision analysing rally footage can estimate crowd sizes and infrastructure costs independently, catching discrepancies before campaigns end.
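Duplicate-photograph detection of the kind mentioned above typically rests on perceptual hashing: images that survive re-compression or mild brightness changes still hash to nearly identical fingerprints. Below is a minimal average-hash sketch operating on a grayscale pixel grid; real systems decode actual image files with imaging libraries, so everything here is illustrative.

```python
def average_hash(pixels):
    """Fingerprint a grayscale grid: '1' where a pixel is brighter than
    the image average, '0' otherwise. Near-duplicates differ in few bits."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming(h1: str, h2: str) -> int:
    """Count differing bits; small distances suggest the same photo."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because the hash compares each pixel to the image's own average, uniformly brightening a photo leaves the fingerprint unchanged, which is what lets the same face be caught across multiple doctored applications.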
However, there is a flip side: the same generative tools can produce convincing deepfakes cheaply and at scale. Unlike broadcast media manipulation, AI-generated content can be hyper-personalised to inflame tensions. AI supercharges microtargeting beyond Cambridge Analytica’s ambitions. Machine learning analyses social media behaviour, app usage patterns, and location data to identify psychological vulnerabilities. Instead of one misleading post, AI generates thousands of variants tailored to individual recipients.
Bot networks create artificial consensus, making fringe views appear mainstream. When 500 AI-generated accounts in a WhatsApp group all denounce a candidate, real members assume this reflects genuine community sentiment. Machine learning can even optimise traditional fraud — identifying which booths offer maximum impact with minimum risk, or calculating exactly which booth manipulations would be statistically unlikely to be caught in random VVPAT sampling.
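The VVPAT calculation cuts both ways: the same arithmetic that tells an attacker which manipulations would evade random sampling tells auditors how large a sample must be. A sketch of the underlying hypergeometric probability, with illustrative booth counts:

```python
from math import comb

def detection_probability(total: int, manipulated: int, sampled: int) -> float:
    """Probability that a uniform random sample of `sampled` booths
    contains at least one of `manipulated` tampered booths."""
    return 1 - comb(total - manipulated, sampled) / comb(total, sampled)

# With ~1,500 booths in a constituency, sampling only 5 catches 10
# tampered booths just a few per cent of the time -- an argument for
# larger audit samples.
```

Raising the sample size from 5 to 50 booths multiplies the detection probability severalfold, which is precisely the lever that closes the loophole the attacker's calculation exploits.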
Effective deepfake detection requires multi-modal analysis. AI-generated videos often show lighting inconsistencies or facial geometry irregularities invisible to human eyes but detectable algorithmically. Synthetic voices lack subtle breathing patterns present in authentic recordings. Metadata examination reveals manipulation traces.
Bot network detection examines social media activity patterns — accounts created in bulk with similar registration times, posting rhythms inconsistent with human behaviour, and suspicious content similarity. AI might flag 50,000 accounts revealing that 90 per cent were created within three days, all using AI-generated profile pictures, posting during hours when genuine users sleep.
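The bulk-registration signature described above reduces to a sliding-window check over account-creation dates. The thresholds here are invented for illustration; real platforms combine many such signals.

```python
def flag_bulk_creation(created_dates, window_days=3, share=0.9):
    """Flag a cohort of accounts if `share` of them were created within
    any `window_days` span -- a common bulk-registration signature."""
    dates = sorted(created_dates)
    n = len(dates)
    for i in range(n):
        j = i
        # Grow the window while creation dates stay within `window_days`.
        while j < n and (dates[j] - dates[i]).days < window_days:
            j += 1
        if (j - i) / n >= share:
            return True
    return False
```

A cohort in which 90 per cent of accounts were created on the same day trips the flag; accounts registered organically over months do not.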
Statistical anomaly detection in results can identify implausible patterns — booths where candidates receive exactly 100 per cent of votes, constituencies where turnout spikes in the final hours without corresponding queue observations. This directs investigators toward suspicious patterns worth examining.
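One standard way to operationalise such screening is a robust outlier test on booth-level turnout, built on the median absolute deviation so that the very anomalies being hunted cannot inflate the yardstick. The turnout figures below are synthetic:

```python
from statistics import median

def turnout_outliers(turnouts, threshold=3.5):
    """Return indices of booths whose turnout is anomalous under a
    modified z-score based on the median absolute deviation (MAD)."""
    med = median(turnouts)
    mad = median(abs(t - med) for t in turnouts)
    if mad == 0:
        return []  # all booths identical; nothing to flag
    return [i for i, t in enumerate(turnouts)
            if 0.6745 * abs(t - med) / mad > threshold]
```

A constituency where most booths report turnout in the low sixties but one reports 100 per cent yields exactly that one index, directing investigators rather than accusing anyone.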
India’s election law predates social media, let alone AI. The Representation of the People Act, 1951 doesn’t address liability for AI-generated defamation or prosecution of bot-network operators. When deepfakes originate from anonymous accounts and bot networks operate from overseas servers, attribution becomes nearly impossible.
Speed compounds the problem. During the critical 48 hours before polling, coordinated AI-driven disinformation can flood swing constituencies faster than fact-checkers can respond. AI systems often carry biases from their training data. Voter verification tools, for instance, might flag minority names as suspicious more frequently.
Immediate action requires establishing an AI Task Force within the Election Commission combining electoral administrators, data scientists, and cybersecurity specialists to monitor threats in real time and develop detection capabilities. Political parties should be required to cryptographically sign official communications, allowing voters to verify authenticity. Unsigned content claiming party origin should automatically be treated as unverified. Pilot programmes deploying deepfake detection and bot identification in select state elections could refine systems before nationwide deployment.
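The signing proposal can be illustrated with a toy sign-and-verify round trip. HMAC is used below only to keep the sketch dependency-free; because HMAC is symmetric, a real deployment would need asymmetric signatures (such as Ed25519) so that any voter can verify a message against the party's published public key without holding its secret.

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> str:
    """Tag a message with a keyed digest (toy stand-in for a signature)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes) -> bool:
    """Constant-time check that the tag matches; any tampering breaks it."""
    return hmac.compare_digest(sign(message, key), signature)
```

A message altered by even one character fails verification, which is what lets unsigned or doctored content claiming party origin be treated automatically as unverified.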
Medium-term reforms must update the Representation of the People Act to define AI-generated content, mandate disclosure requirements, create liability frameworks for deepfakes, and set penalties for deploying AI tools for electoral manipulation. Long-term transformation requires redesigning electoral architecture on the assumption that AI permeates all aspects — voter registration with built-in duplicate detection, campaign finance with continuous algorithmic oversight, and result verification incorporating statistical anomaly detection as standard protocol.
Beyond technical solutions lies a philosophical challenge: What does democratic participation mean when AI mediates between citizens and candidates? When every voter receives personalised messages designed by algorithms exploiting their psychological vulnerabilities, is informed consent possible? Yet, rejecting AI entirely is neither possible nor desirable. The efficiency gains are real. The ability to detect fraud at scale could strengthen electoral integrity. The opportunity to reach millions in their own languages represents democratic inclusion.
The challenge is developing an electoral ecosystem where AI serves democratic values rather than undermining them. This requires human-centred design where final decisions rest with humans accountable to democratic institutions. It demands transparency so citizens understand how AI shapes electoral processes.
India has navigated previous electoral transformations successfully. The AI challenge is more complex but not insurmountable. It requires the same combination of technical excellence, institutional integrity, and public engagement that has made Indian elections the world’s largest democratic exercise.
The alternative — ignoring AI while others deploy it — would leave the Election Commission fighting sophisticated manipulation technology with inadequate tools. With appropriate safeguards, continuous vigilance, and commitment to democratic values, artificial intelligence can become democracy’s ally rather than its adversary in India’s ongoing electoral story.
The writer is former chief election commissioner of India and the author of An Undocumented Wonder: The Making of the Great Indian Election
