Can new IT rules stop the deepfake epidemic?

Chandan Kumar Nath 

(chandankumarnath7236@gmail.com)

 

 

In the shadowy corridors of the digital age, a new form of violation has emerged, one that requires no physical proximity, no coercion of the flesh, and, terrifyingly, no consent. It is the phenomenon where artificial intelligence is weaponised to strip the clothes off digital subjects, transforming benign, public images into explicit, non-consensual intimate imagery (NCII). This is not merely the creation of pornography; it is the algorithmic conquest of dignity. The observation that the “use of AI can enhance nudity” is not a testament to technological progress but a chilling warning about the erosion of privacy. It is against this dystopian backdrop that the Supreme Court of India’s persistent alarm bells have finally crystallised into executive action, culminating in the watershed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified this February.

To understand the gravity of the new legal framework, one must first dissect the specific malaise it seeks to cure. The phrase ‘AI can enhance nudity’ refers to the capability of Generative Adversarial Networks (GANs) and diffusion models to perform “image-to-image” translation. In plain terms, these tools can take a high-resolution photograph of a clothed individual – a politician, a journalist, or a college student – and “enhance” it by predicting and rendering realistic nudity where none existed. This is a violation of the most intimate order. Unlike traditional Photoshop, which required skill and time, AI democratises this abuse, allowing for the mass production of humiliation. The victim is forced to prove that the “reality” millions are viewing is, in fact, a digital lie.

For months, the Supreme Court of India has been the institutional voice of reason amidst this chaos. The judiciary, led by the Chief Justice, who himself became the subject of a deepfake video, has repeatedly flagged the inadequacy of existing laws. The Court’s observations have been clear: the right to privacy, enshrined in the Puttaswamy judgement, includes the right to one’s digital likeness. The Court effectively signalled that the “safe harbour” immunity enjoyed by social media platforms could not be a shield for hosting what is essentially digital sexual assault. It is this judicial pressure that forced the government’s hand, leading to the stringent amendments notified on February 10, 2026.

The centrepiece of this new regulatory regime is the mandatory labelling of “Synthetically Generated Information” (SGI). The logic is rooted in the right to know. If AI can blur the lines between truth and fabrication, the law must force a distinction. The new rules mandate that any content created or modified by AI to appear authentic must carry a clear, irremovable label. This is not just a watermark; it is a digital scarlet letter intended to break the illusion of reality.
By forcing platforms to label AI-generated nudity or hyper-realistic fakes, the law attempts to equip the viewer with immediate scepticism. If a user sees a compromising video of a public figure, the “AI-Generated” tag serves as a cognitive stop sign, potentially mitigating the reputational damage and the viral spread of misinformation. Labelling is a necessary first step, but it is not a panacea. The trauma of NCII lies not just in the deception but in the depiction. A labelled deepfake of a woman is still a humiliating image. It still objectifies and violates. This is where the second pillar of the new order comes into play: the “3-hour takedown” rule. By slashing the compliance window from 36 hours to just three hours for flagged deepfakes and NCII, the regulations acknowledge the “viral velocity” of modern media. In the context of the internet, 36 hours is an eternity – enough time for a video to be downloaded, mirrored, and archived on the dark web forever. Three hours is a sprint. It forces platforms to abandon their reliance on sluggish, human-moderated ticket systems in favour of proactive, algorithmic detection.

Critically, the Supreme Court’s influence is visible in the shift of the burden of proof. Previously, the onus was largely on the victim to spot the fake and plead with the platform. The new framework, inspired by the Court’s “duty of care” principles, shifts this burden to the intermediaries. The requirement to embed unremovable metadata, digital fingerprints that trace the origin of the content, is a game-changer for accountability. It means that the creator of a deepfake can no longer hide easily behind anonymity. It creates a chain of custody for digital files, turning the “enhanced” nude from a ghostly weapon into a traceable piece of evidence.

Yet, as we applaud these regulatory strides, we must remain vigilant about the “cat and mouse” nature of AI. The technology used to “enhance nudity” is evolving faster than the ink can dry on any gazette notification. We are already seeing “undressing” apps that run locally on devices, bypassing platform filters entirely. The Supreme Court’s intervention was crucial because it framed the question not as a technical issue but as a fundamental rights issue. The Court recognised that dignity in the 21st century is digital. If the law cannot protect the virtual body, it fails to protect the person.

There is also a nuanced danger of “label fatigue.” If every second image on the internet bears an AI label, from harmless memes to weather reports, the public may become desensitised to the warning, rendering it useless when it appears on malicious content. The implementation of this order will require a delicate balance. The labels must be prominent enough to warn, but the enforcement must be targeted enough to punish. The distinction made in the rules between “routine editing” (like colour correction) and “substantive modification” (like deepfakes) is a welcome attempt to thread this needle, ensuring that photographers aren’t criminalised while predators are targeted.

In conclusion, the February 2026 notification, born of the Supreme Court’s anxiety over the “deepfake epidemic”, represents a maturation of India’s digital legal framework. It accepts the terrifying premise that AI can “enhance” nudity and conjure violation out of thin air. By mandating labels, enforcing rapid takedowns, and piercing the veil of anonymity with metadata, the state is finally building a firewall around the digital citizen. But let us be under no illusions; laws can scrub the internet, but they cannot easily scrub the minds of those who have seen the unseen. The fight against AI-generated exploitation is not just legal; it is cultural. We need a society that views the consumption of such “enhanced” imagery not as entertainment, but as complicity in a crime. Until then, these new rules represent our best and only line of defence.

The Sentinel - of this Land, for its People
www.sentinelassam.com