

The central government's decision to mandate prominent labelling of artificial intelligence (AI)-generated content, and to set a narrow window of two to three hours for social media platforms to take down unlawful AI-generated content, is aimed at preventing the misuse of generative AI and ensuring information integrity. Implementing the amended rules will not be without challenges: how users respond to AI content labels is still evolving, and it is not known whether they will take notice of a label or consume the content uncritically regardless of the disclosure. Rising incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods, depicting individuals in acts or statements they never made, which explains the urgency behind notifying the amended rules. The Ministry of Electronics and Information Technology (MeitY), in its 'Background Note' on the amendments, sounded the alarm that such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, which will come into force on February 20, for the first time define 'synthetically generated information' as audio, visual or audio-visual information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that makes such information appear real, authentic or true, and that depicts or portrays any individual or event in a manner that is, or is likely to be, perceived as indistinguishable from a natural person or real-world event. The definition removes ambiguity over what legally counts as synthetic content, which can then be subjected to the notified rules on AI labelling and takedown action. Under the new rules, audio, visual or audio-visual information shall not be deemed 'synthetically generated information' where it arises from routine or good-faith editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription or compression that does not materially alter, distort or misrepresent the substance, context or meaning of the underlying information. Similarly, the routine or good-faith creation, preparation, formatting, presentation or design of documents, presentations, educational or training materials, and research outputs, including the use of illustrative, hypothetical, draft, template-based or conceptual content, will not be treated as synthetic content where such creation or presentation does not result in the creation or generation of any false document or false electronic record.

The rules mandate the deployment of reasonable and appropriate technical measures, including automated tools, by social media intermediaries to prevent any user from creating, generating, modifying, altering, publishing, transmitting, sharing or disseminating such unlawful synthetically generated information. While large social media intermediaries may have the capacity to make the required investment, smaller platforms, especially start-ups, may find it difficult to comply due to a dearth of infrastructure, resources and manpower.
Once the rules come into effect, social media platforms will be required not just to make it mandatory for their users to declare whether content is AI-generated but also to deploy automated tools to verify those declarations. The possibility of mislabelling due to technical glitches or limitations in the tools cannot be ruled out, and this remains a critical challenge under the new AI labelling rules. The central government has claimed the new rules will ensure visible labelling, metadata traceability and transparency for all public-facing AI-generated media; protect intermediaries acting in good faith; empower users to distinguish authentic from synthetic information, thereby building public trust; and support India's broader vision of an open, safe, trusted and accountable internet while balancing user rights to free expression and innovation.

Independent bodies like the Internet Freedom Foundation (IFF), however, have flagged concern over a new rule that mandates disclosure of violating users' identities directly to complainants who claim to be victims. The IFF alleges that this rule "bypasses judicial oversight and creates serious risks of harassment, doxing, and vigilante action, especially against marginalized users and dissenting voices." It further alleges that vague articulation, such as synthetically generated information that "results in the creation, generation, modification or alteration of any false document or false electronic record", runs the risk of criminalising legitimate uses of AI in document preparation, research outputs and creative works. It has also cautioned that wording like "falsely depicting or portraying" persons or events "in a manner that is likely to deceive" is so broad it could cover satire, parody, political commentary and artistic expression. Addressing these concerns will be crucial to building public trust and confidence in the fair enforcement of rules meant to curb harmful deepfake videos, audio and other synthetic information that deceive unsuspecting digital users. Building awareness among users about how to detect deepfakes and synthetic content is equally important to strengthening digital safety.