
The Shrinking Safe Harbour: Reading the Latest IT Amendments

by Aditya Bharadwaj (Chanakya National Law University, Patna) and Aarav Kumar (Gujarat National Law University, Gandhinagar)

Introduction 

With the rise of AI-generated content, deepfakes, and online safety concerns, the Ministry of Electronics and Information Technology (MeitY) has introduced changes to the regulation of intermediaries that redefine intermediary accountability. Although framed as essential for protecting users, these changes significantly overhaul the existing principles of intermediary liability. The Synthetic Content Amendment to the IT Rules, 2021, notified on 22nd October 2025, reinforces the due diligence obligations of Social Media Intermediaries (SMIs) and Significant Social Media Intermediaries (SSMIs), aligning the statutory mandates with the rapid expansion of AI and the proliferation of AI-generated content. Additionally, the Standard Operating Procedure (SoP) released by MeitY on 11th November 2025 provides operational guidelines for victims, law enforcement agencies, and intermediaries to address the resurfacing of Non-Consensual Intimate Imagery (NCII), with highlights including a 24-hour content takedown mandate and crawler-based detection. Together, these changes establish strict liability for intermediaries by elevating their responsibility from reactive takedown to proactive governance, blurring the line between due diligence (under Rule 3 of the IT Rules, 2021) and direct liability. Intermediaries are now expected to monitor and regulate generated content, effectively creating presumed-fault liability under the newly added Rule 4(1A). This weakens the protection intermediaries presently enjoy under Section 79 of the IT Act, 2000. This shift towards stricter accountability for platforms needs to be assessed for its likely effects on intermediaries and users.


This blog examines the principle underlying safe harbour protection, analyses the changes from a legal-constitutional perspective, and assesses their effect on intermediary liability.


The Doctrinal Baseline

The safe harbour doctrine under Section 79 of the IT Act rests on the assumption that intermediaries exist to host content, not to judge or control it. The statutory exemption was never a reward; it was designed to protect the architecture of a platform that merely carries, stores, or transmits third-party information from the legal consequences that can arise out of the content it hosts. The three mandatory conditions for availing the exemption are no initiation, no selection, and no modification of the content, conditions the courts have reinforced over the years. The apex court in Shreya Singhal drew a clear line between actual knowledge and proactive monitoring, acknowledging that requiring intermediaries to scan for and identify unlawful content defeats the very purpose of the safe harbour. The safe harbour has thus operated as a constitutional compromise that balances free speech, innovation, and privacy while preventing the state from using these platforms as surveillance tools.


This framework assigns intermediaries a particular role: reactive responsibility, meaning that their obligations arise only after a notice or complaint. The IT Rules, 2021 add a due diligence layer, but even that layer contemplates takedown only upon notice, not through constant surveillance. The existing 24-hour removal of NCII content is likewise justified because it operates only in exceptional circumstances and is initiated by victims, not as the result of any systemic monitoring.


To evaluate the impact of the 2025 amendment, the purpose for which the safe harbour was created cannot be neglected: preventing intermediaries from becoming tools of gatekeeping. Any change to this baseline risks rewriting the core balance upon which intermediary liability rests.


Expanding Intermediary Responsibility 

The 2025 amendment, called the “Synthetic Content Amendment”, introduces changes to the IT Rules, 2021, particularly affecting platforms that enable AI content generation. Under the newly added Rule 3(3)(a), such platforms are required to watermark AI-generated or “synthetic” content: for visual displays such as images and videos, the label must cover 10% of the surface area, and for audio content, the initial 10% of its duration. Removal or modification of such labels is prohibited under Rule 3(3)(b).
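The Rules fix the quantum of labelling (10% of the visual surface, or the initial 10% of audio) but not the method. Purely as an illustration, and assuming that a simple horizontal band whose height is one-tenth of the image satisfies the coverage requirement (the file paths and label text below are hypothetical), a labelling step in Python might look like this:

    # Illustrative sketch only: overlays a visible label covering roughly 10%
    # of an image's surface area. The Rules do not prescribe any tooling or
    # layout; the band placement, label text, and file paths are assumptions.
    from PIL import Image, ImageDraw

    def apply_synthetic_label(src_path: str, dst_path: str,
                              text: str = "SYNTHETICALLY GENERATED") -> None:
        img = Image.open(src_path).convert("RGB")
        w, h = img.size
        band_height = max(1, int(0.10 * h))   # full-width band = 10% of the area
        draw = ImageDraw.Draw(img)
        draw.rectangle([(0, h - band_height), (w, h)], fill=(0, 0, 0))
        draw.text((10, h - band_height + band_height // 4), text, fill=(255, 255, 255))
        img.save(dst_path)

    apply_synthetic_label("generated.png", "generated_labelled.png")

A full-width band whose height is one-tenth of the image's height covers exactly 10% of its area, which is why that geometry is used in this sketch.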

Similarly, under the newly added Rule 4(1A)(a), SSMIs are now required to collect user declarations, before any information is displayed, uploaded, or published, affirming whether that information is synthetically generated. Moreover, SSMIs must verify the accuracy of such declarations by deploying “reasonable” and “appropriate” technical measures. The proviso to the provision establishes presumed-fault liability: if the intermediary becomes aware that these rules have been violated and allows or promotes the content, or takes no action, this will be treated as a failure of due diligence, even where there was no intention or only a genuine mistake. Though intermediary liability is not a novel concept, these changes burden intermediaries with a duty to verify user declarations ex ante, even though such declarations can be context-dependent and contestable. By imposing this duty of verification, the rules effectively impute awareness to the intermediary, and failure attracts penalty irrespective of intent or good faith. Thus, the provision and its proviso, read together, make intermediaries liable stricto sensu, burdening them with uncompromising adherence.


The SoP released by MeitY, aimed at strengthening removal and prevention mechanisms against NCII on online platforms, makes the pertinent procedures clear and victim-centric and thus contributes to the effective implementation of Rule 3(2)(b) of the IT Rules, 2021. This is especially relevant given the increasing incidence of non-consensual sharing of morphed and intimate images.


The SoP gives victims several avenues for redressal of their grievances: One Stop Centres (OSCs), intermediaries (through in-app reporting and grievance officers), the National Cybercrime Reporting Portal (NCRP), and law enforcement agencies such as local police stations. Intermediaries must act within a strict 24-hour timeline of receiving a complaint by removing or disabling access to the content. SSMIs are required to use hash-matching and crawler-based methods to ensure that such content does not resurface. In addition, search engines must de-index links providing access to flagged content, and domain registrars must block hosting entities.
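Neither the Rules nor the SoP prescribes a particular technology for preventing resurfacing. The sketch below is purely illustrative: the registry and function names are hypothetical, and it uses a plain cryptographic hash for brevity, whereas real deployments rely on robust perceptual hashing so that minor edits do not defeat the match.

    # Minimal, assumption-laden sketch of hash-matching to detect re-uploads of
    # previously flagged content; not an account of any platform's actual system.
    import hashlib

    flagged_hashes: set[str] = set()   # hypothetical registry of taken-down content

    def fingerprint(content: bytes) -> str:
        """Hash of the exact byte stream (real systems use perceptual hashes)."""
        return hashlib.sha256(content).hexdigest()

    def register_takedown(content: bytes) -> None:
        """Record the hash of content removed after a valid complaint."""
        flagged_hashes.add(fingerprint(content))

    def is_resurfacing(upload: bytes) -> bool:
        """Check a new upload against the registry before it goes live."""
        return fingerprint(upload) in flagged_hashes

Even in this simplified form, the mechanism requires every upload to be screened before publication, which illustrates the shift from notice-based takedown to continuous, pre-emptive monitoring discussed below.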


Collectively, these changes convert “due diligence” obligations into a continuous surveillance (or proactive) model. Whereas the safe harbour currently protects the passive conduit, these changes create a content-policing model that makes the related technical infrastructure a necessity. Intermediaries, earlier shielded from being treated as publishers, are now required to adopt a network-wide compliance model.


The legislature’s timely response to the growing menace of deepfakes is commendable; instances of sharing morphed images are increasing daily. However, without clear standards governing how these responsibilities are to be discharged, the amendments erode the doctrinal limits of intermediary liability and make platforms adjudicators without procedural safeguards.


Constitutional and Administrative Law Tensions in the Emerging Liability Model

After the amendments, the role of intermediaries has shifted from merely hosting content to monitoring it, creating several constitutional and administrative fault lines. Constant surveillance has been institutionalised through the amendment’s labelling and metadata requirements and user declarations, combined with the SoP’s crawler and 24-hour takedown mechanisms.


Firstly, the amendment directly contravenes the doctrine by mandating proactive monitoring by intermediaries, whereas Indian jurisprudence previously drew a clear line between takedown on notice from a victim or the government and state-directed surveillance. This new settlement of constant monitoring risks collapsing that distinction and promotes state-sponsored pre-censorship.

Secondly, administrative law issues arise because, although the SoP is presented as operational guidance, it imposes substantive obligations (including deadlines) akin to delegated legislation. This calls into question the legality principle and raises concerns of ultra vires rule-making, particularly when “deemed failure” presumptions shift the burden of proof to platforms.


Lastly, while immediate intervention is necessary in cases of NCII, deepfakes, and hate speech to prevent imminent harm, imposing such obligations without clear statutory limits risks chilling free speech and disproportionately burdening smaller intermediaries. In summary, well-meaning safety precautions here clash with fundamental standards of lawfulness, free speech, and administrative justice, a conflict that calls for more precise primary legislation rather than an ever-expanding set of technical responsibilities.


Conclusion  

Together, these regulatory changes remodel the concepts of intermediary liability and safe harbour under Section 79 of the IT Act, 2000. Platforms are now responsible not only for the content they host but also for the reappearance of flagged content.


The phrase “deemed to have failed” in the proviso to the newly added Rule 4(1A) signals a proactive undertone, weakening the safe harbour. Compliance with these rules ultimately tends towards censorship, incentivising over-removal and content suppression.


This goes against the constitutional principle of non-arbitrariness that the courts have, over time, carefully read into Article 14. It is also antithetical to Shreya Singhal v Union of India, in which the Supreme Court categorically held that online intermediaries would not be liable for hosting content unless they received an order from a court or a government authority.

Thus, India must not establish a censorship regime centred on unnecessarily proactive monitoring. Deterrence principles and rules must instead be risk-based, anchored in constitutional safeguards and practicality.

 


 
 
 
