A 31-year-old woman in Assam downloaded a loan app and, as part of routine verification, uploaded her ID and photograph. Within days, the harassment and extortion began. When she refused further payment, her photograph was manipulated using AI tools to create synthetic nude images. Her phone number was circulated alongside these images, triggering degrading and threatening calls. By the time she contacted Meri Trustline, the online safety helpline run by RATI Foundation, her distress came not only from the threats but from how people around her reacted to the images.
“Even if they knew it was fake,” she said, “it still made me feel dirty.”
Her testimony captures what the newly notified IT Rules amendment on synthetically generated content misses. For survivors of AI-generated non-consensual intimate imagery, the deepest harm is not confusion about the authenticity of the media used to target them. In this, as well as other cases documented in Make It Real, a joint study by Tattle Civic Tech and RATI Foundation on ‘Mapping AI-Facilitated Gender Harm’, the images involved in perpetuating harm were easily identifiable as fabricated. Survivors knew it. Viewers knew it. Yet the damage landed.
The amendment places weight on realism and detectability, but for victims, the core violation is simple. The image of their body was manipulated without their consent. The authenticity is secondary to the harm.
A Technical Framework for a Human Problem
The new amendment is structured around “process” — how synthetic content is detected, labelled, tagged and quickly removed. It approaches the issue as a compliance and classification challenge rather than an issue of bodily and emotional harm.
Provenance markers and labels can help show that manipulation has occurred. But labels do not neutralise stigma, watermarks do not restore dignity, and traceability does not reduce the emotional and social burden on victims.
India already has a reasonably functional takedown framework for severe online harms, at least for significant social media intermediaries (platforms serving more than 50 lakh users in India). The bigger friction today lies elsewhere: in reporting access, platform responsiveness and survivor support.
Reporting remains difficult for many users. Grievance and appellate complaint forms are complex, available largely in English and OTP-gated. For vulnerable users, this is a real barrier. These digital hurdles sit alongside long-recognised challenges in reporting gender-based violence through physical police stations, including hesitation, stigma, procedural delays and uneven handling of sensitive material.
The regulatory focus remains centred on the file, not the affected person.
The complexity of assessing good faith notwithstanding, harm does not depend on an edit being classified as routine or synthetic. In practice, many degrading manipulations of images are simple: A cropped frame, a suggestive caption, a meme overlay, a staged variation. As the recent trends on Grok made clear, while some images are explicitly altered, others are adjusted just enough to humiliate or sexualise. In both cases, consent is broken, and the distress to the victim is real.
When regulation draws technical boundaries, perpetrators learn to operate just below them. Abusers are quick to adapt to thresholds. Subtle manipulations often survive moderation longer than explicit fabrication. That is exactly where many victims fall through the cracks.
Speed at Any Cost
The final amendment cuts platform response timelines sharply, requiring action within three hours for government takedown orders and as little as two hours for certain high-priority harm complaints, while general grievance resolution windows are reduced to seven days. Faster takedown can be critical in serious abuse cases. But the amendment’s compressed timelines introduce a trade-off that deserves scrutiny.
Human review takes time. Understanding context takes time. When deadlines tighten sharply, platforms might defer to automated moderation. And all automated systems make errors: they misread context, miscode abuse, remove safe content while retaining offending material, and under-remove harmful posts.
Sexual-image abuse cases often require careful, case-by-case judgement. In one instance reported to the Meri Trustline helpline, a complainant sought the removal of a woman’s consensual, glamour-style modelling portfolio by presenting it as harmful content. Because the case was reviewed with the benefit of time, an effort could be made to verify the facts directly with the woman concerned. The misuse of the reporting channel was identified, and the request was rejected. Under compressed decision timelines, such content risks being wrongly removed. Speed without verification can easily turn safety mechanisms into censorship tools.
When speed becomes the dominant compliance metric, platforms predictably shift investment toward automated filters and away from trained moderators. The result is a paradox: Moderation becomes stricter at the margins but weaker against adaptive abuse.
Fast systems are not always better systems. They are often less accurate. Faster removal helps, but only if accuracy and survivor-sensitive review are not sacrificed in the process.
Consent Is the Missing Centre
Sexualised image abuse, AI-generated or otherwise, sits within the continuum of gender-based violence. Yet the amendment focuses on deception and misrepresentation while remaining largely silent on consent and violation, the foundations of gender-violence jurisprudence.
This creates a practical loophole. Victims whose images are manipulated in a manner that is humiliating or suggestive, but not technically explicit, may struggle to secure action. Without a consent-centred trigger, harm is easily misclassified, and response misdirected.
Uneven Burden, Uneven Protection
The strictest obligations fall on large social media platforms. Yet abusive content frequently spreads through smaller sites, mirror services and fringe platforms.
This creates uneven protection. Major platforms are tightly regulated. Smaller bad-faith actors may evade scrutiny. Smaller good-faith platforms may struggle to comply. Harm moves across the ecosystem, but regulatory pressure does not.
Law Exists but Capacity Lags
Most AI-enabled sexual abuse behaviours are already punishable under Indian law. The amendment subtly recognises this when it asks intermediaries to comply with existing laws pertaining to POCSO. Existing provisions in the BNS (Bharatiya Nyaya Sanhita) cover obscenity, voyeurism, impersonation and privacy violations. The persistent gap is in enforcement capacity.
Even in non-synthetic image abuse cases, justice is slow. Investigative and judicial systems are still adapting to digital harm. Additional technical rules cannot substitute for institutional readiness and survivor-sensitive procedure.
Instead of producing more rules for platforms to signal compliance with, we need mechanisms to ensure the enforcement of existing laws and rules. Some of this involves the difficult work of upskilling police to understand and handle digital evidence. When it comes to platforms, we need to protect and enable independent audits and spot checks of platforms, along with proper redressal of user reports. While regulators have started checking and penalising platforms for using deceptive design, we have not seen similar penalties for failing to respond to users' complaints, even when the reported content violates platforms' own community guidelines. A victim-centred framework must focus on the outcome instead of the process.
Siddharth P and Tarunima Prabhakar are with RATI Foundation and Tattle Civic Technologies, respectively
