Pallab Bhattacharyya
(Pallab Bhattacharyya is a former director-general of police, Special Branch, and former Chairman, APSC. Views expressed are his own. He can be reached at pallab1959@hotmail.com)
On 22 October 2025, the Government of India introduced significant amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The Ministry of Electronics and Information Technology (MeitY) acted after witnessing the alarming proliferation of deepfakes and AI-generated misinformation that began to distort democratic discourse and public trust. A viral video falsely showing the Prime Minister announcing new Rs 2000 notes, another featuring fabricated apologies by national leaders for an imaginary “Operation Sindoor”, and fake endorsements of financial scams were not isolated incidents—they signalled the dawn of a dangerous digital era. The amendments were therefore designed to counteract this rising menace of synthetic media that can defame individuals, manipulate elections, and threaten national security.
The 2025 amendment mandates intermediaries to identify, label, and swiftly remove synthetically generated content. Platforms must act within 36 hours of notification and ensure that AI-altered visuals or voices are clearly marked as such. These new obligations reflect the government’s determination to preserve authenticity in an online world increasingly shaped by algorithms and artificial intelligence. Yet, these measures are not arbitrary controls—they emerge from a long and evolving journey of India’s digital governance, beginning with the IT (Intermediaries Guidelines) Rules of 2011 and maturing into the 2021 Rules that now stand amended in 2025.
The 2011 Rules, notified under the Information Technology Act, 2000, were drafted at a time when social media was still in its infancy in India. Their scope was limited to intermediaries such as hosting providers and service platforms, whose primary duty was to publish policies and remove unlawful content upon receiving notice. Grievance redressal mechanisms were weak; complaints could take up to 30 days to be resolved. The digital ecosystem was simpler: fake news had not yet become a mass phenomenon, and AI-generated deepfakes did not exist. Consequently, the 2011 framework became inadequate as the internet transformed from an information highway into a powerful social and political force.
By contrast, the 2021 Rules—and their subsequent amendments through 2022, 2023, and 2025—represent a far more comprehensive and accountable framework. They expanded the scope beyond mere intermediaries to include social media platforms, online gaming intermediaries, and digital media publishers. Grievance officers must now acknowledge complaints within 24 hours and resolve them within 15 days, and users can appeal decisions before a newly constituted Grievance Appellate Committee. The rules demand greater transparency, including monthly compliance reports and prior notice to users before content takedown. The concept of due diligence has evolved from a perfunctory legal requirement into a robust system of multilingual communication, user harm prevention, and algorithmic accountability. Most crucially, the 2025 amendment formally introduces obligations concerning AI-generated and synthetic content—an area that did not even exist when the first rules were framed.
The transformation from 2011 to 2025 also reflects a philosophical shift. The early regulatory model viewed intermediaries as passive carriers of content, protected by “safe harbour” provisions under Section 79 of the IT Act. The new regime views them as active participants with responsibilities to maintain integrity in digital communication. Safe harbour now stands conditional—lost if intermediaries fail to observe due diligence or defy lawful orders. This change mirrors the evolution of India’s constitutional and judicial understanding of free speech: rights must coexist with duties, and liberty must be balanced by accountability.
The amendments also derive legitimacy from constitutional principles. Articles 14, 19, and 21 of the Constitution ensure equality before the law, freedom of expression, and the right to life and personal liberty. The Supreme Court’s rulings—from Shreya Singhal v. Union of India (2015) to Justice K.S. Puttaswamy v. Union of India (2017)—have emphasized that any restriction on digital speech must be reasonable, transparent, and procedurally fair. The IT Rules, by introducing defined grievance procedures and avenues for appeal, align regulatory oversight with these constitutional safeguards. They aim not to silence dissent but to prevent deception—a distinction central to the health of a democracy.
The latest 2025 amendment arrives at a time when countries worldwide are grappling with the same challenges. The European Union’s Digital Services Act (2022) mandates algorithmic transparency and annual risk audits. The UK’s Online Safety Act (2023) compels platforms to prevent illegal or harmful content, especially targeting the protection of minors. Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA) provides fact-checking powers while ensuring user appeal mechanisms. Australia’s eSafety model introduces a centralised regulatory authority for swift removal of abusive material. India’s new rules borrow from these models while retaining a distinctly democratic and pluralistic ethos: emphasizing multilingual accessibility, user education, and proportional regulation rather than blanket censorship.
Critics, however, have voiced concerns about the short window for public consultation—only fifteen days between 22 October and 6 November 2025. Civil society groups and digital rights advocates argue that such an important regulatory shift demands deeper debate and scrutiny. Their apprehension is understandable, for regulation in a democracy must always evolve through dialogue, not decree. The government’s challenge, therefore, lies in achieving equilibrium between swift action against digital deception and safeguarding the freedom of speech that lies at the heart of India’s constitutional promise.
The practical benefits of these evolving IT rules are undeniable. By imposing accountability on intermediaries, the rules discourage anonymous hate speech and fake campaigns. They empower citizens through faster grievance redressal and multilingual accessibility, thereby reinforcing the inclusivity of the digital sphere. They also help create a more predictable environment for businesses by defining clear compliance standards. For a nation as diverse as India, these measures strengthen the delicate balance between technological innovation and social harmony.
Yet, as technology marches forward, no law can remain static. Deepfakes today may give way to more sophisticated generative manipulations tomorrow. The ethical frontiers of artificial intelligence, virtual reality, and quantum communication will pose fresh dilemmas. Therefore, India’s digital governance must remain dynamic, adaptive, and consultative. Just as the 2011 rules evolved into the 2021 and 2025 frameworks, future amendments must continue to reflect both technological realities and democratic values. In the end, an open, safe, and trusted internet is not merely a regulatory goal—it is a democratic necessity. India’s pluralistic character and constitutional vibrancy will be best served by laws that evolve with time, protecting not only freedom of expression but also the authenticity of truth itself.