In an ambitious attempt to regulate synthetically generated content, commonly linked to deepfakes and AI-manipulated media, the Union government recently introduced the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (henceforth “the 2026 Rules”). Notified on February 10, the 2026 IT Rules amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The 2026 Rules attempt to address misinformation, impersonation, and gender-based abuse caused by deepfakes or digitally altered content using AI. The Rules impose accelerated takedown obligations on intermediaries and, in their current state, run the risk of weakening the constitutional guarantees of free speech while fostering over-compliance and regulatory uncertainty.
The core of the amendment is the introduction of the term “synthetically generated information”. Sub-clause “(wa)” under Rule 2(a)(ii) defines the term to mean audio, visual, or audio-visual information generated or altered artificially in a manner that makes it appear real. The scope of this definition is the first problem. It states that certain information shall not be considered “synthetically generated information” if it arises from actions undertaken in “good faith”.
However, the Rules do not define what is meant by good faith. For instance, satire, parody, and remixes are forms of expression that may fall outside the ambit of good faith or be mistakenly classified as “synthetically generated information”. Such over-broad drafting leads to vagueness in implementation, a danger the Supreme Court cautioned against in Shreya Singhal v. Union of India (2015).
The 2026 Rules also shorten the compliance timelines for intermediaries. Most importantly, intermediaries are now required to comply with a takedown notice within 3 hours, as against the earlier period of 36 hours. Where a grievance is reported, they must remove content within 36 hours, down from the earlier 72.
Although a timely response by an intermediary in taking down content is critical, such drastically reduced periods push intermediaries to delete content pre-emptively, without meaningful assessment, resulting in de facto censorship. The new system also requires intermediaries to make quasi-judicial decisions about the authenticity and legality of content, inviting over-blocking and discrimination — precisely the problem the Supreme Court highlighted in Shreya Singhal while reading down Section 79(3)(b) of the IT Act, 2000, and specifying that intermediaries must act only upon receiving court orders or appropriate government notification.
The enforcement mechanism of the 2026 Rules warrants closer scrutiny. The Rules rely on user complaints and platform detection, but provide no independent verification mechanism to distinguish malicious deepfakes from innocuous synthetic media generated in good faith. For example, Rules 3(3)(a)(i)(II) and 3(3)(a)(i)(IV) are ambiguous: they prohibit false content or depictions that are likely to deceive, yet define neither falsity, nor deception, nor intent. This was precisely the issue in the Fact Check Unit case decided by the Bombay High Court on September 26, 2024.
There, the High Court struck down the amendment establishing a Fact Check Unit owing to the vagueness of the terms “fake, false or misleading” and their impact on freedom of speech. Ambiguity in such concepts compromises the fundamental right to free speech and must be tested against the reasonable restrictions set out in the Constitution.
This is not to downplay the real problems of deepfake media. Non-consensual AI imagery, election disinformation, and impersonation scams do require strict regulation, and in this regard, the amendment is a welcome change. However, speech restrictions must be narrowly tailored, proportionate, and directly related to the grounds enumerated in Article 19(2) of the Constitution. The 2026 Rules, by allowing widespread and swift blocking based on questionable standards of authenticity, risk blurring this line.
A more balanced approach would clearly articulate the concept of synthetic deception, establish a rigorous transparency regime, and insist on judicial or independent oversight. Without such safeguards, the 2026 Rules may curb deepfakes while pushing us towards a far more insidious threat: the normalisation of opaque, privatised control over online speech.
The writers teach at Jindal Global Law School
