On February 10, the Ministry of Electronics and Information Technology (MeitY) notified amendments to the IT Intermediary Rules, formally bringing AI-generated content under regulatory oversight. Effective February 20, these rules mandate clear labelling of synthetically generated information, embed permanent metadata for traceability, and compress takedown timelines to as little as three hours. The intent is laudable — protecting citizens from deepfakes, election manipulation, and non-consensual intimate imagery. The challenge lies in the execution.
India has witnessed the dark side of unregulated AI content. Deepfake videos have targeted celebrities and ordinary citizens alike, fabricated political speeches have threatened electoral integrity, and synthetic child abuse material has proliferated unchecked. By explicitly defining “synthetically generated information” as content artificially created or modified to appear authentic, the government addresses real harms. The rules cover everything from AI-manipulated videos to algorithmically altered audio, bringing platforms that enable such content squarely within India’s compliance framework.
The architecture is comprehensive. Platforms must prominently label AI content so users can immediately identify its synthetic nature. Where technically feasible, they must embed permanent metadata with unique identifiers to trace content back to its source. Crucially, these labels cannot be removed or suppressed. Significant social media intermediaries face heightened obligations — they must require user declarations before content is published and deploy automated tools to verify those declarations. For prohibited content involving child abuse, non-consensual imagery, or deceptive impersonation, platforms must act swiftly with account suspensions, content removal, and mandatory reporting to law enforcement.
These objectives have merit. Users have a right to know when content is synthetic. Victims need rapid remedies. Democracy requires distinguishing authentic political discourse from fabricated statements. The government has also shown pragmatism by excluding routine edits like colour correction and noise reduction from the synthetic content definition, responding to industry concerns. The removal of an earlier proposal requiring AI labels to occupy 10 percent of screen space reflects welcome flexibility.
However, good intentions collide with practical realities when we examine implementation. The three-hour takedown window, while reflecting urgent concerns about harmful content, may be technically infeasible for many platforms. Such compressed timelines create a “take-down-first, question-later atmosphere”. This isn’t merely inconvenient — it risks constitutional violations. Automated over-removal to avoid liability could amount to prior restraint on speech, potentially violating Article 19(1)(a). When penalties are severe and deadlines tight, platforms moderate conservatively, inevitably catching lawful content in their nets.
The technical challenges are equally daunting. Current AI detection systems struggle with sophisticated deepfakes. Requiring platforms to embed permanent metadata sounds straightforward until you consider cross-platform sharing, screenshots, and re-uploads. Will metadata survive these journeys? The rules prohibit tampering with labels, but enforcing this across billions of daily content pieces strains credibility. Moreover, distinguishing malicious disinformation from satire requires understanding context and intent — determinations that remain difficult for automated systems at scale.
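The fragility described above can be sketched in a few lines. The toy scheme below ties a provenance record to a file’s exact bytes — a deliberately simplified stand-in for embedded-metadata approaches, not any platform’s actual implementation — and shows that the link breaks the moment content is screenshotted or re-encoded, since those operations produce different bytes. All names and byte strings here are illustrative assumptions.

```python
# Illustrative sketch only: provenance metadata keyed to a file's exact
# bytes cannot survive re-encoding, screenshots, or re-uploads.
import hashlib


def provenance_tag(content: bytes, origin: str) -> dict:
    """Create a traceability record bound to these exact bytes."""
    return {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}


def verify(content: bytes, tag: dict) -> bool:
    """Check whether the content still matches its provenance record."""
    return hashlib.sha256(content).hexdigest() == tag["sha256"]


original = b"...synthetic image bytes..."          # placeholder content
tag = provenance_tag(original, "hypothetical-gen-ai-tool/v1")

print(verify(original, tag))                        # True: bytes unchanged
screenshot = original + b"\x00"                     # any re-encode alters bytes
print(verify(screenshot, tag))                      # False: provenance link lost
```

Real schemes (cryptographic manifests, watermarking) are more robust than a bare hash, but the underlying problem the rules gloss over is the same: once content is visually copied rather than forwarded, the embedded trail can vanish.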
The burden falls disproportionately across the ecosystem. Large platforms like Meta and Google possess resources for expanded compliance teams and sophisticated automated tools. But smaller start-ups, regional platforms, and emerging AI companies face potentially prohibitive compliance costs. This could create significant barriers to entry, stifling the innovation ecosystem India seeks to nurture.
The rules also create complex questions around user behaviour and platform responsibility. How do platforms verify user declarations when users themselves may not understand what constitutes synthetic content? An amateur creator using AI-assisted filters for colour grading might genuinely believe they’re simply editing, not creating synthetic content. The line between enhancement and generation can blur even for experts. Requiring platforms to police this distinction before publication transforms them from neutral intermediaries into active gatekeepers — a fundamental shift in their role.
Enforcement asymmetry presents another concern. While the government has updated legal references to align with the Bharatiya Nyaya Sanhita and other new criminal codes, practical questions remain: Who gets prosecuted and how? Will enforcement target individual users who mislabel content, or platforms failing to catch violations? The clarification that platforms won’t lose safe harbour protection when removing synthetic content in good faith is reassuring, but doesn’t resolve the fundamental tension between being a neutral intermediary and actively policing content creation.
Perhaps most concerning, requiring platforms to deploy automated tools to prevent violations of “any law for the time being in force” creates an impossible compliance burden. India’s legal landscape encompasses defamation, electoral laws, communal harmony provisions, and content-specific regulations across states. How can automated systems parse such nuances? This risks either technological overreach or selective enforcement — neither serves justice.
Looking internationally, India isn’t alone in grappling with these challenges. The European Union’s AI Act and various US state-level deepfake laws demonstrate democracies everywhere are struggling to balance innovation with safety. What distinguishes India’s approach is both its comprehensiveness and aggressive timeline. Other jurisdictions have given stakeholders more time to develop technical capabilities and compliance frameworks.
The path forward requires calibration, not retreat. The government should consider phased implementation, starting with the most harmful categories — child abuse material, non-consensual intimate imagery, and election-related deepfakes — before expanding to broader synthetic content. The three-hour takedown window should be reserved for genuinely urgent cases, with reasonable timelines for other violations. Technical standards for metadata embedding and labelling should be developed collaboratively with industry, ensuring actual implementability across platforms and content types.
User education is equally critical. Labels only work if people understand them. Quarterly warnings about penalties, as mandated, are a start, but comprehensive digital literacy programmes would better equip citizens to navigate an AI-saturated information environment.
The amendments represent India’s entry into regulating synthetic reality — a necessary response to genuine harms grounded in legitimate concerns about democratic integrity, personal dignity, and public safety. But demanding near-instantaneous compliance with technically ambitious requirements risks creating a framework that looks impressive on paper but proves unworkable in practice — or achieves compliance through over-censorship rather than careful calibration.
As these rules take effect next week, both government and platforms should approach implementation with flexibility and good faith. The goal isn’t perfect enforcement immediately, but building a regulatory ecosystem that evolves alongside technology, protecting citizens without strangling innovation. That balance is worth striving for, even if it takes longer than three hours to achieve.
The writer is a defence and tech policy adviser and author of the forthcoming book The Digital Decades on 30 years of the internet in India
