
The rapid development of Artificial Intelligence (“AI”) and generative technologies has significantly reshaped the digital ecosystem. Tools that generate realistic images, videos, audio and other forms of synthetic media have created novel regulatory challenges for governments worldwide.
In India, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”) were framed under the Information Technology Act, 2000 (“IT Act”) to regulate intermediaries and prescribe due diligence obligations for social media and other online platforms.
However, the increasing misuse of AI tools to create deepfakes, impersonation content and manipulated media exposed gaps in this framework. To address these risks, the Government notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“IT Amendment Rules, 2026”), which focus on “Synthetically Generated Information” (“SGI”) and strengthen the accountability of intermediaries, particularly Significant Social Media Intermediaries (“SSMIs”).
The amendments were preceded by several developments, including draft proposals to regulate synthetically generated information[1] under the IT Rules, 2021; a public interest litigation before the Punjab and Haryana High Court seeking regulation of AI‑generated content and deepfakes[2]; and a surge in legal disputes concerning deepfakes and SGI‑based impersonation, particularly involving celebrities and public figures.
This article analyses the regulatory framework prior to the 2026 amendments, the concept of SGI, the enhanced obligations imposed on intermediaries, and the broader legal implications for platform liability and digital rights.
Regulatory Framework Prior to the 2026 Amendment
The IT Rules, 2021 established a comprehensive due diligence framework for intermediaries operating digital platforms in India. The primary objective of these rules was to enhance transparency, accountability and user safety in the digital environment.
Rule 3 of the IT Rules, 2021 prescribes baseline due diligence obligations for all intermediaries, including publishing rules, regulations and user agreements; informing users not to host, display, upload, modify, publish, transmit or share unlawful information; removing or disabling access to unlawful content upon receiving actual knowledge through court orders or government or agency notifications.
Rule 4 imposes additional obligations on SSMIs, such as appointment of a Chief Compliance Officer, Nodal Contact Person and Resident Grievance Officer; ensuring coordination with law‑enforcement authorities and effective handling of user grievances.
The Rules also established a grievance redressal mechanism requiring intermediaries to acknowledge complaints within 24 hours and resolve them within 15 days.
Despite these safeguards, the framework largely operated on a reactive notice‑and‑takedown model, with regulatory action triggered only after harmful content surfaced on the platform.
Challenges Under the Earlier Framework
Rapid advances in generative AI revealed several structural limitations in the original IT Rules.
First, the widespread availability of AI tools enabled creation of highly realistic deepfakes that could impersonate individuals, fabricate events and fuel misinformation at scale. Such content posed serious risks to privacy, reputation, electoral integrity and public order.
Second, the 36‑hour window previously available to intermediaries to remove unlawful content after acquiring actual knowledge often proved inadequate in the context of virality, where deepfake content could be replicated and re‑shared within minutes.
Third, the Rules did not define AI‑generated or synthetic content, leaving intermediaries and regulators uncertain about when manipulated content crossed into unlawful or regulated territory.
Finally, traceability requirements for messaging platforms, particularly obligations to identify the “first originator” of certain messages, attracted criticism on the ground that they could undermine end‑to‑end encryption and chill lawful anonymous speech. These concerns prompted calls for a more proactive, SGI‑specific framework capable of addressing emerging harms associated with generative AI.
Synthetically Generated Information (SGI)
The cornerstone of the 2026 amendments is the formal introduction of “Synthetically Generated Information”.
Rule 2(1)(wa) defines SGI as audio, visual or audio‑visual information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears authentic to a reasonable person, while realistically depicting persons, events or scenes that have not occurred.
Recognition of SGI directly tackles risks associated with deepfake technologies and AI‑generated media, including impersonation, identity fraud, misinformation campaigns, reputational damage and non‑consensual intimate imagery.
By explicitly bringing SGI within the meaning of “information” under the IT Rules, the amendments clarify that harmful synthetic media can attract the same regulatory consequences as other forms of unlawful or harmful content.
Amendments to Rule 3: Strengthening Due Diligence Obligations
The 2026 amendments significantly reinforce the due diligence obligations under Rule 3.
Intermediaries are now required to enhance user awareness regarding the misuse of their platforms for generating, modifying or distributing synthetic content.
In particular, intermediaries providing computer resources that enable the creation or dissemination of SGI are subject to enhanced disclosure obligations concerning the risks associated with AI‑generated content. These measures signal a shift from a purely reactive model to a more proactive due diligence standard for intermediaries.
Amendments to Rule 4: Additional Obligations for Significant Social Media Intermediaries
Rule 4, which governs SSMIs, has been expanded to embed SGI‑specific safeguards.
SSMIs must now implement mechanisms requiring users to declare whether uploaded or shared content constitutes SGI.
The amendments impose further SGI‑specific obligations on SSMIs.
Platforms may also be required to embed metadata or technical identifiers that record the provenance of SGI, including unique identifiers and information regarding the intermediary’s computer resource used to generate or alter the content.
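The Rules speak of “metadata or technical identifiers” recording the provenance of SGI but do not prescribe any particular schema or format. Purely as an illustration of what such a provenance record might look like in practice, the sketch below builds a hypothetical record containing a unique identifier, a cryptographic hash binding the record to the exact content, and the name of the generating tool; every field name here is an assumption, not a requirement of the Rules.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, tool_name: str) -> dict:
    """Build an illustrative SGI provenance record.

    The field names are hypothetical: the 2026 amendments require
    'metadata or technical identifiers' and 'unique identifiers'
    but prescribe no schema.
    """
    return {
        "sgi": True,                         # declaration that the content is synthetic
        "unique_id": str(uuid.uuid4()),      # unique identifier for this artefact
        # Hash of the bytes, binding the record to this exact content:
        "content_sha256": hashlib.sha256(content).hexdigest(),
        # The intermediary's computer resource used to generate/alter it:
        "generating_resource": tool_name,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

record = make_provenance_record(b"<synthetic video bytes>", "example-genai-tool/1.0")
print(json.dumps(record, indent=2))
```

In a real deployment such a record would more plausibly be embedded in the media file itself (for instance via an industry provenance standard such as C2PA) rather than kept as a detached JSON document; the sketch only conveys the kind of information the Rules contemplate.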
Failure by an SSMI to take reasonable steps against unlawful SGI can be treated as a failure to exercise due diligence, with potential consequences for safe‑harbour protection under Section 79 of the IT Act and exposure to vicarious liability for user‑generated content.
Revised Compliance Timelines
The amendment also introduces stricter timelines for responding to harmful content and user complaints.
Where an intermediary receives actual knowledge through a court order or government notice regarding unlawful information, it must remove or disable access to such content within three hours. This represents a significant reduction from the earlier 36-hour timeframe.
Similarly, complaints relating to certain categories of harmful content such as impersonation, morphed images, nudity or sexually explicit material must be addressed within two hours of receiving the complaint. The general grievance resolution period has also been reduced from fifteen days to seven days.
These accelerated timelines aim to reduce the speed at which harmful content spreads and to ensure quicker protection for affected individuals.
Exceptions and Legitimate Uses
While the amendments adopt a stringent approach to malicious synthetic media, they also recognise that not all SGI is harmful or unlawful. Certain legitimate uses of synthetic technologies are expressly preserved or treated as outside the mischief of the SGI definition where they lack intent to mislead or create false records, such as artistic and creative expression, educational material and accessibility‑oriented applications in which the synthetic nature of the content is clearly disclosed.
These carve‑outs attempt to balance innovation, free expression and accessibility with necessary safeguards against misuse of SGI for deception, harassment or other unlawful purposes.
Against this backdrop, the accompanying illustration of “New Media India” above visually captures the dual character of synthetically generated information as both a tool of innovation and a subject of regulation.
On one side, an urban creator in a contemporary Mumbai studio uses immersive interfaces to re‑imagine traditional Kathakali performance in three‑dimensional form, symbolising how AI tools can legitimately preserve and reinterpret cultural expression.
On the other, a rural content‑creator records video on a basic smartphone while their speech is instantaneously rendered into multiple Indian scripts, reflecting the democratising potential of SGI to expand linguistic reach and participation in the digital economy.
The faint “AI‑generated” watermark embedded in the composition deliberately echoes the 2026 IT Amendment Rules’ emphasis on provenance, transparency and labelling, underscoring that such creative, clearly disclosed uses of synthetic media fall outside the mischief of deceptive deepfakes and align with the carve‑outs for artistic, educational and accessibility‑oriented applications.
Authors: Ms. Aayushi Singh (Adv.), Sr. Partner; Mr. Anurag Nahata (Adv.), Jr. Associate
Legum Solis in association with GRATA International
Advocates and Corporate Law Consultants, New Delhi
______________________________
[1] https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf
[2] https://timesofindia.indiatimes.com/city/chandigarh/hc-notice-to-centre-others-on-plea-for-regulation-of-ai-generated-content/articleshow/129112789.cms