Data Processors… Be Enlightened, Empowered and Emancipated.

After the notification of the DPDPA Rules on November 13, 2025, a new awareness is flowing through the industry on the need to be compliant with DPDPA 2023. The potential fine of up to Rs 250 crore, and possibly beyond across multiple breaches, is motivating companies to recognize the financial risks and take steps to mitigate them.

One school of thought is that penalties under DPDPA 2023 apply only to “Data Fiduciaries” and not to “Data Processors”. Hence, those who classify themselves as “Data Processors” think they need not be compliant with DPDPA 2023.

This is, however, a huge fallacy.

One reason to call this a fallacy is that every organization which is not a one-person entity and has “Employees” is a Data Fiduciary to the extent it processes “Employee Information”. Employment also includes “Recruitment”, where personal data of non-employees is processed and sub-contractors are hired for background verification. There is also disclosure of employee information to statutory authorities on the “legitimate use” basis, handling of personal information of ex-employees after their termination, and processing of information about employees’ families for various welfare measures including insurance.

As a result of this requirement, there is no organization (other than one-person entities) which escapes the need to comply with DPDPA 2023 or face the penalty risks. The Indian DPB may not be as irrational as the GDPR authorities, who impose fines even on individuals who process personal data for their own safety (refer to the Tesla car owner case discussed below). However, it is a fact that every entity with employees is exposed to DPDPA risk and has to take steps in documenting a Risk Assessment and a “Compliance by Design” program.

Some small entities in the SME category may handle sensitive assignments, such as servicing a defence establishment or other establishments of national importance, and the information of their employees can be considered “Sensitive” to a certain extent.

In view of this, FDPPI belongs to the school of thought that every organization in India which is processing data in some form or the other is potentially a data fiduciary and needs to be compliant with DPDPA 2023.

FDPPI has already introduced frameworks such as “DGPSI-Lite”, “DGPSI-Full” and “DGPSI-AI” to address the requirements of data fiduciaries.

There is, however, one class of manufacturers serving the B2B market who deal only with employee data and business contact data. There is also a class of organizations providing sub-contracted HR services (e.g., background verification, pre-recruitment medical examinations, pre-recruitment aptitude tests) who often manage “platforms” licensed to the recruiter for operation and remain in the background. Most of them consider themselves “Data Processors” today.

Further, every organization has different processes associated with personal data, in which some divisions of a data fiduciary directly handle data processing contracts for third-party data fiduciaries as if they were separate companies. In such cases, “Governance of Risk” suggests that division-wise (process-wise) risks may differ, and strategies to segregate such divisions as “Data Processors” instead of “Data Fiduciaries” or “Joint Data Fiduciaries” need to be explored. Similarly, in the case of platform service providers and SaaS providers, there may be some contracts in which an entity is only a data processor and some in which it is a joint data fiduciary.

Additionally, there are entities which process Indian data along with data from the EU; they need to be compliant with DPDPA 2023 as an organization while also being GDPR compliant as a Data Processor, a Data Controller or a Joint Data Controller. If they have signed Standard Contractual Clauses, they would have voluntarily accepted liability under GDPR.

Considering these different types of organizations in the market, FDPPI has customized its DGPSI Framework to bring more focus to compliance and to simplify it to some extent based on the activity of the entity.

Accordingly, the DGPSI framework has now become a “Family of Frameworks” with multiple frameworks for multiple types of organizations.

For example, DGPSI-Full with DGPSI-AI would be the core framework for data fiduciaries who use AI and need to cover compliance with related laws such as ITA 2000, while DGPSI-Lite would be a simplified, DPDPA 2023-only compliance framework.

DGPSI-GDPR is a framework which addresses the requirements of a GDPR processing division, where the organization in India processes EU data as a Controller or Joint Controller.

Additionally, DGPSI-HR focuses on organizations which do not handle B2C business and whose only data principals are their employees.

Further, DGPSI-Data Processor is a framework meant primarily for Data Processors who service Data Fiduciaries in India. It is for processors who need to be compliant with DPDPA 2023 and want to present themselves as organizations which are aware of their responsibilities, empathize with the data fiduciary, and voluntarily undertake responsibility as if they were “Deemed Data Fiduciaries”.

Entities who comply with this framework voluntarily are, in a way, “Enlightened”, “Empowered” and “Emancipated”. They possess a strategic competitive edge over other processors who may be competing for business with the Data Fiduciary.

If the Data Fiduciary factors in the DPDPA risk as part of its business risk, it would prefer to work with such enlightened, empowered and emancipated data processors, and may even be willing to pay a premium for their services.

FDPPI therefore recommends that every organization in India, big or small, whether it considers itself today a Data Fiduciary or a Data Processor, explore being compliant with DPDPA under a relevant DGPSI framework.

By understanding the needs of different entities and introducing appropriate frameworks of compliance under the DGPSI umbrella, FDPPI is proving that DGPSI is a framework which can be called the “Vishwa Guru” of compliance frameworks. When the members of FDPPI extend DGPSI-GDPR to other jurisdictions and develop DGPSI-Singapore, DGPSI-California and the like, the DGPSI family will encompass the global data protection compliance regime.

This may take a decade but  is definitely the vision of DGPSI.

Naavi


Beware of the moves of Donald Trump on Stable Coins

In recent days, the rise in gold prices in India has attracted attention to the reasons behind the unprecedented rise in the prices of gold and silver. It is clear that many governments, including India, China and Russia, are buying gold in anticipation of a global change in the financial system.

Mr Donald Trump is now in partnership with Pakistan in a cryptocurrency project, and it could be part of a global scam in the making.

The USA, which carries an international debt of nearly $37 trillion, is trying to get recognition for Bitcoin and Stable Coins so that their prices can be jacked up and US dollar debts can be converted into cryptocurrency-denominated debts at an advantage. For example, if Stable Coins get a value of US $2, then the US debt will come down from 37 trillion dollars to 18.5 trillion Stable Coins. If Stable Coins can be acquired today in exchange for a jacked-up Bitcoin, US debts re-denominated in Stable Coins could effectively shrink below the current US $37 trillion.
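To make the arithmetic concrete, here is a minimal sketch in Python of the re-denomination effect the paragraph describes; the $2-per-coin valuation is the post’s hypothetical scenario, not a market figure.

```python
# Illustrative arithmetic only: the $2-per-stablecoin valuation is a
# hypothetical scenario from the post, not a market figure.
US_DEBT_USD = 37e12  # approximate US national debt in dollars


def debt_in_stablecoins(debt_usd: float, usd_per_coin: float) -> float:
    """Number of stablecoins needed to denominate a dollar debt."""
    return debt_usd / usd_per_coin


print(debt_in_stablecoins(US_DEBT_USD, 1.0) / 1e12)  # 37.0 trillion coins at a $1 peg
print(debt_in_stablecoins(US_DEBT_USD, 2.0) / 1e12)  # 18.5 trillion coins at $2 per coin
```

The nominal dollar obligation is unchanged; only the unit of account shrinks, which is precisely the re-denomination effect described above.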

Mr Trump appears to be working on this: promote Stable Coins at one Stable Coin to one US dollar today, but later value the Stable Coin against a basket that includes Bitcoin, thereby inflating the Stable Coin value to more than USD 1 per Stable Coin.

India currently has a wavering stance on Bitcoin, and it could succumb to this game of Mr Trump’s and agree to an international monetary system in which Stable Coins become a reserve currency as an alternative to US dollars, forcibly converting all current USD reserves to Stable Coin denominations. Subsequently, when the Stable Coin valuation gets aligned with the inflated Bitcoin value, the debt of 37 trillion will shrink.

It appears that already 63% of Stable Coin holdings are related to criminal activities, and soon, like Bitcoin, the entire stock of Stable Coins will be tainted as “Laundered Money”.

I hope India tightens its regulatory ban on cryptocurrencies and supports a gold-backed currency system (with a possible addition of silver to the basket).

(Views from Financial experts welcome)

Naavi


GDPR implementation can sometimes be crazy

Recently there was an interesting decision of the Austrian supervisory authority imposing a fine of Euro 600 on the owner of a Tesla car. The car owner had installed seven cameras which could film while the car was parked, to capture possible threats. The argument of the supervisory authority was that the cameras could film people who were not threats and that the data subjects were not informed about the filming.

This decision indicates that the “security” of the individual was considered subordinate to the principle of “Privacy”. Secondly, it did not matter that the car owner had no way to filter the recording to only those persons who were considered threats and delete the footage of those who were not.

There is no doubt that this decision is one of those crazy decisions for which GDPR supervisory authorities are known. However, the new Digital Omnibus Proposal could change things here, since the owner of the cameras has no means of identifying the persons whose pictures have been captured, and hence the data may not be considered “Personal Data”.

If the persons in the footage are identified, it would be through an additional process of matching the faces with facial recognition software, and whosoever uses that process should be the one liable for the infringement of privacy and for obtaining consent. The car owner, who has recorded the video and does not distribute it or sell it for exploitation, should be free from liability.

Further, if the data captured by the cameras is overwritten automatically and referred to only when there is a security incident, then the captures are automatically deleted within a reasonable time, and hence there should be no violation of privacy principles.
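As a minimal sketch of the auto-overwrite retention described above (the class and the capacity figure are illustrative assumptions, not Tesla’s actual implementation), a fixed-capacity buffer discards the oldest footage automatically:

```python
from collections import deque


class SentryBuffer:
    """Fixed-capacity clip store: the oldest clips are overwritten automatically,
    so footage persists only for a bounded window unless exported for an incident.
    Illustrative only; not Tesla's actual implementation."""

    def __init__(self, max_clips: int = 100):
        self.clips = deque(maxlen=max_clips)  # appending past capacity drops the oldest

    def record(self, clip: bytes) -> None:
        self.clips.append(clip)

    def export_for_incident(self) -> list:
        # Referred to only when there is a security incident.
        return list(self.clips)
```

The design choice matters to the argument: because retention is bounded by capacity, deletion happens by default rather than by the owner’s discretion.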

Further, the car owner could argue that it is Tesla which has perhaps failed to provide appropriate guidelines for car users on how to handle the captures without violating GDPR. Tesla should perhaps indemnify the car owner.

One more point to debate is that if the car is parked in a public place, the captures would be of the public space. Hence, anybody who appears in front of this camera would also be considered as being in a public place. It is our view that when a person enters a “public space”, he is voluntarily exposing himself to the public and should not engage in any activity which he would later expect privacy law to protect.

Further, to consider an individual car owner trying to protect his property as a Data Controller, and to impose on him the liabilities of GDPR compliance, is simply crazy. By this standard, all “dashboard cameras” and “reverse parking cameras” also violate GDPR, because anybody can come in front of such cameras.

The decision is unacceptable and should be considered an aberration.

The case opens up many academic points for debate. Comments are welcome.

On the lighter side, the potential market for GDPR compliance training is now open to all individuals, who may be considered “Data Controllers” whenever they use their mobiles to take pictures in public or install CCTV cameras anywhere!

It is alarming to see that there have been 210 decisions from different supervisory authorities since 2020 in which GDPR authorities have fined individuals. This requires a debate of its own.

Naavi

Ref: https://www.enforcementtracker.com/ETid-2975


Governing AI-Generated Content: Intermediary Compliance, Free Speech, and Regulatory Prudence

Mr. M. G. Kodandaram, IRS, Assistant Director (Retd.), Advocate and Consultant, decodes the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025

I. A Constitutional Moment in India’s Digital Governance

The Ministry of Electronics and Information Technology (MeitY) notified the ‘Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025[1]’, bringing into effect, from 15 November 2025, a carefully crafted amendment to Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”). Issued under Section 87 of the Information Technology Act, 2000 (“IT Act”), the amendment recalibrates the procedural architecture governing the takedown of unlawful online content by intermediaries. This moment is significant not for expanding State power, but for disciplining its exercise in constitutionally sensitive domains.

At first glance, the amendment appears incremental. It neither expands the categories of prohibited content nor alters the substantive grounds on which speech may be restricted. But beneath this lies a profound constitutional intervention. By precisely defining how an intermediary may acquire “actual knowledge” under Section 79(3)(b) of the IT Act, the amendment restores procedural discipline, reinforces executive accountability, and re-anchors India’s intermediary liability regime in the jurisprudential logic of Shreya Singhal v. Union of India [2] (2015).

It is interesting to note that this constitutionally grounded reform unfolds alongside a parallel and far more disruptive regulatory initiative: the proposed amendments addressing “synthetically generated information”[3] and deepfakes, particularly through a new Rule 4(1A). These draft proposals, still under consultation, seek to impose proactive verification and labelling obligations on Significant Social Media Intermediaries (“SSMIs”), thereby fundamentally altering the intermediary’s role from neutral conduit to active arbiter of authenticity. This divergence reveals two competing regulatory philosophies operating simultaneously within India’s digital governance framework.

While the notified 2025 amendment to Rule 3(1)(d) reflects a constitutionally grounded maturation of India’s intermediary liability framework, the parallel draft proposals on synthetic content threaten to unsettle the delicate balance between free speech, technological innovation, and regulatory accountability. Against this backdrop, the article traces the evolution of intermediary jurisprudence in India, analyses the constitutional logic underpinning the 2025 amendment, and compares India’s approach to AI-generated content with international regulatory models.

II. Genesis of Intermediary Liability in India

The IT Act, 2000 was enacted at a time when intermediaries were largely perceived as passive facilitators of electronic communication. Section 79 embodied this understanding by providing a conditional “safe harbour” from liability for third-party content, modelled on notice-based liability regimes rather than prior restraint. The legislative intent was clear: intermediaries should not be compelled to pre-emptively police user speech, as such an obligation would be incompatible with both scale and constitutional free expression under Article 19(1)(a).

However, this immunity was never absolute. Section 79(2) subjected safe harbour to due diligence obligations, while Section 79(3)(b) withdrew protection where the intermediary failed to act upon receiving “actual knowledge” that its platform was being used to commit an unlawful act.

The first attempt to operationalise this framework came through the IT (Intermediary Guidelines) Rules, 2011. These rules, however, suffered from vagueness and overbreadth, effectively delegating censorship decisions to private platforms. The lack of procedural clarity created strong incentives for over-removal of content, prompting widespread criticism from civil society and constitutional scholars.

The constitutional reckoning arrived in 2015. In Shreya Singhal v. Union of India, (MANU/SC/0329/2015) the Supreme Court struck down Section 66A of the IT Act and, more importantly for intermediary law, read down Section 79(3)(b). The Court held that “actual knowledge” could arise only through a court order or a notification by an appropriate government agency, and not through private complaints or subjective assessments by intermediaries. This interpretation was a deliberate constitutional choice, designed to prevent intermediaries from becoming private adjudicators of legality and to mitigate chilling effects on speech.

The IT Rules, 2021 marked a second wave of digital regulation. They significantly expanded due diligence obligations, introduced a three-tier grievance redressal mechanism, and extended regulatory oversight to digital news publishers and OTT platforms. Subsequent amendments in 2022 and 2023 tightened compliance timelines and reporting obligations.

However, Rule 3(1)(d), the provision governing takedown of unlawful content, continued to attract constitutional concern, particularly in relation to procedural opacity and executive discretion. Its reference to “notification by the appropriate Government” lacked clarity on the rank of issuing officers, the requirement of reasons, and the existence of internal review. In practice, this opacity risked reviving the very private censorship dynamics that Shreya Singhal sought to dismantle. It is against this backdrop that the 2025 amendment assumes particular significance.

III. The 2025 Amendment to Rule 3(1)(d)

The substituted Rule 3(1)(d) reads as follows: (d) an intermediary, on whose computer resource the information which is used to commit an unlawful act which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force is hosted, displayed, published, transmitted or stored shall, upon receiving the actual knowledge under clause (b) of sub-section (3) of section 79 of the Act on such information, remove or disable access to such information within thirty-six hours of the receipt of such actual knowledge, and such actual knowledge shall arise only in the following manner, namely:—

(i) by an order of a court of competent jurisdiction; or
(ii) a reasoned intimation, in writing,—
(I) issued by an officer authorised for the purpose of issuing such intimation by the Appropriate Government or its agency, being not below the rank of Joint Secretary or an officer equivalent in rank or, where an officer at such rank is not appointed, a Director or an officer equivalent in rank, to the Government of India or to the State Government, as the case may be, and, where so authorised, acting through a single corresponding officer in its authorised agency, where such agency is so appointed:

Provided that where such intimation is to be issued by the police administration, the authorised officer shall not be below the rank of Deputy Inspector General of Police, especially authorised by the Appropriate Government in this behalf:

Provided further that all such intimations shall be subject to periodic review by an officer not below the rank of the Secretary of the concerned Appropriate Government once in every month to ensure that such intimations are necessary, proportionate, and consistent with clause (b) of sub-section (3) of section 79 of the Act and this clause;

(II) clearly specifying the legal basis and statutory provision invoked, the nature of the unlawful act, and the specific uniform resource locator, identifier or other electronic location of the information, data or communication link required to be removed or disabled;”.

The above substituted Rule 3(1)(d) mandates that an intermediary must remove or disable access to information used to commit an unlawful act within thirty-six hours of receiving “actual knowledge” under Section 79(3)(b). The amendment operationalises “actual knowledge” through a closed and verifiable administrative design. Crucially, it exhaustively defines the modes through which such knowledge may arise.

Actual knowledge may arise through:
(a) an order of a court of competent jurisdiction; or
(b) a reasoned intimation in writing issued by a duly authorised government officer, subject to stringent safeguards.

These safeguards include:
(i) issuance by an officer not below the rank of Joint Secretary (or Director where such rank does not exist);
(ii) in the case of police authorities, issuance by an officer not below the rank of Deputy Inspector General of Police, specially empowered;
(iii) specification of the legal basis, statutory provision invoked, nature of the unlawful act, and precise URL or electronic identifier; and
(iv) mandatory monthly review by an officer not below the rank of Secretary to ensure necessity, proportionality, and consistency with Section 79(3)(b).

This architecture replaces vague executive notifications with a structured, reviewable, and senior-authorised process, restoring procedural discipline to content takedown.
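To illustrate how closed the trigger set now is, here is a minimal sketch (a hypothetical data model, not a statutory tool) of how an intermediary’s compliance pipeline might check whether an incoming notice can constitute “actual knowledge” under the amended rule. Rank handling is simplified to exact matches, whereas the rule speaks of officers “not below” the stated ranks:

```python
from dataclasses import dataclass
from typing import Optional

# Simplified: the rule says "not below the rank of"; a real system would
# compare positions in the official order of precedence, not exact strings.
AUTHORISED_RANKS = {"Joint Secretary", "Director"}  # Director only where no JS-rank officer exists
POLICE_MIN_RANK = "Deputy Inspector General of Police"


@dataclass
class Notice:
    source: str                              # "court_order" or "government_intimation"
    issuer_rank: Optional[str] = None
    from_police: bool = False
    legal_basis: Optional[str] = None        # statutory provision invoked
    unlawful_act: Optional[str] = None       # nature of the unlawful act
    url_or_identifier: Optional[str] = None  # precise URL or electronic identifier


def constitutes_actual_knowledge(n: Notice) -> bool:
    """Check a notice against the amended Rule 3(1)(d) triggers (illustrative)."""
    if n.source == "court_order":
        return True
    if n.source == "government_intimation":
        rank_ok = (
            n.issuer_rank == POLICE_MIN_RANK
            if n.from_police
            else n.issuer_rank in AUTHORISED_RANKS
        )
        reasoned = all([n.legal_basis, n.unlawful_act, n.url_or_identifier])
        return rank_ok and reasoned
    return False  # private complaints or informal requests do not qualify
```

Anything failing these checks simply never starts the thirty-six-hour clock, which is the procedural discipline the amendment restores.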

IV. Transparency, Proportionality, and Constitutional Fidelity

From a constitutional perspective, the 2025 amendment is best understood as a reaffirmation of Shreya Singhal rather than a departure from it. The amendment reflects what may be described as procedural proportionality rather than substantive expansion.

Senior-level authorisation ensures political and administrative accountability. Reasoned intimations grounded in identifiable statutory provisions introduce legality and precision. The monthly review mechanism embeds proportionality within executive decision-making itself, acting as a safeguard against bureaucratic inertia and mission creep.

Importantly, the amendment does not expand the substantive grounds of censorship. It merely disciplines the process through which existing legal prohibitions are enforced, strengthening both the legitimacy and durability of State action.

V. Practical Implications for Intermediaries and Users

For the State, the amendment bolsters enforcement credibility. By aligning takedown powers with constitutional safeguards, it insulates regulatory action from judicial invalidation and enhances public trust in digital governance.

For intermediaries, the amendment provides long-overdue clarity. Compliance obligations are now tethered to clearly identifiable triggers, reducing uncertainty and litigation risk. While the thirty-six-hour timeline remains demanding, intermediaries now know precisely when the clock begins to run.
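Since the trigger is now precisely defined, the compliance clock is a fixed offset from receipt of actual knowledge; a trivial illustration (the timestamp is made up):

```python
from datetime import datetime, timedelta, timezone


def removal_deadline(actual_knowledge_at: datetime) -> datetime:
    # Rule 3(1)(d): remove or disable access within thirty-six hours
    # of receiving actual knowledge.
    return actual_knowledge_at + timedelta(hours=36)


received = datetime(2025, 11, 15, 10, 0, tzinfo=timezone.utc)
print(removal_deadline(received))  # 2025-11-16 22:00:00+00:00
```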

For users, the amendment enhances procedural fairness. Content takedown decisions are embedded within a traceable administrative process, reducing the risk of arbitrary or excessive interference with lawful speech.

VI. Regulating Synthetically Generated Information

The rapid evolution of generative Artificial Intelligence (AI) has fundamentally transformed the digital information ecosystem. Technologies capable of producing highly realistic synthetic audio, visual, and textual content, often indistinguishable from authentic material, have expanded creative and commercial possibilities, while simultaneously intensifying risks of misinformation, impersonation, fraud, electoral manipulation, and erosion of public trust. It is against this backdrop that the Central Government proposed to notify the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, exercising powers under Section 87 of the Information Technology Act, 2000. The amendments represent a significant regulatory intervention aimed at addressing emerging AI-driven harms while preserving the foundational architecture of intermediary liability and safe harbour protection.

A defining feature of the 2025 Amendment Rules is the introduction of a statutory definition of “synthetically generated information.” By inserting clause (wa) in Rule 2(1), the Rules define such information as content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that reasonably appears to be authentic or true. The definition is deliberately broad and technology-neutral, ensuring regulatory durability amid rapidly evolving AI tools and techniques. Crucially, the focus is not on artificiality per se, but on the reasonable appearance of authenticity—thereby centring regulatory concern on deception, user harm, and misuse rather than benign or clearly fictional digital content.

To eliminate interpretational ambiguity, the Amendment Rules introduce sub-rule (1A) to Rule 2, clarifying that references to “information” in the context of unlawful acts under the IT Rules, 2021, including Rules 3 and 4, shall include synthetically generated information. This clarification is doctrinally significant. It ensures that AI-generated or manipulated content is not treated as a regulatory exception but is fully subsumed within the existing intermediary governance framework governing unlawful content, notice-and-takedown obligations, and enhanced due diligence requirements. By embedding synthetic content within the established statutory lexicon, the amendment avoids creating a parallel or fragmented regulatory regime.

At the level of intermediary protection, the 2025 amendments incorporate an important safeguard through a proviso to Rule 3(1)(b). This proviso clarifies that the removal or disabling of access to information, including synthetically generated information, undertaken in good faith, whether pursuant to user grievances or reasonable content moderation efforts, shall not be construed as a violation of the conditions for safe harbour under Section 79(2) of the IT Act. This provision reflects regulatory prudence, recognising that fear of losing statutory immunity can otherwise chill proactive content moderation. By explicitly protecting good-faith action, the Rules encourage responsible intermediary behaviour without diluting the safe harbour framework.

A notable innovation is the insertion of sub-rule (3) in Rule 3, which introduces targeted due diligence obligations for intermediaries that provide computer resources enabling the creation or modification of synthetically generated information. Such intermediaries are now required to ensure that every instance of synthetic content is clearly labelled or embedded with a permanent, unique metadata identifier. The Rules prescribe minimum visibility standards: in visual content, the label must cover at least ten percent of the display area, while in audio content, the disclosure must be audible during the initial ten percent of its duration. The prohibition on enabling the removal, suppression, or alteration of such identifiers reinforces the integrity and enforceability of the transparency mechanism. This approach reflects a regulatory preference for traceability and user awareness over outright prohibition.
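As a rough illustration of the prescribed minimums (the pixel and seconds units are assumptions for the sketch; the Rules speak of display area and duration, not of any particular unit):

```python
def min_label_area_px(width_px: int, height_px: int) -> int:
    """Visual content: the label must cover at least 10% of the display area."""
    return int(0.10 * width_px * height_px)


def min_audio_disclosure_s(duration_s: float) -> float:
    """Audio content: the disclosure must be audible during the initial 10% of duration."""
    return 0.10 * duration_s


print(min_label_area_px(1920, 1080))   # 207360 of 2073600 pixels
print(min_audio_disclosure_s(120.0))   # the first 12.0 seconds of a 2-minute clip
```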

Enhanced obligations are imposed on Significant Social Media Intermediaries (SSMIs) through the insertion of Rule 4(1A). Under this provision, SSMIs must obtain a declaration from users regarding whether uploaded content is synthetically generated. Beyond reliance on self-declaration, intermediaries are also required to deploy reasonable and proportionate technical measures—including automated tools—to verify the accuracy of such disclosures, having regard to the nature, format, and source of the content. Where content is identified as synthetic, the intermediary must ensure prominent labelling prior to its publication or display. Importantly, the amendments introduce a compliance-linked accountability mechanism: an intermediary that knowingly permits, promotes, or fails to act upon non-compliant synthetic content is deemed to have failed to exercise due diligence, thereby risking loss of safe harbour protection.
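A minimal sketch of the declaration-plus-verification flow Rule 4(1A) contemplates, assuming a hypothetical upload pipeline; the classifier stub stands in for the “reasonable and proportionate technical measures”, whose real-world reliability the article questions below:

```python
from dataclasses import dataclass


@dataclass
class Upload:
    content: bytes
    user_declared_synthetic: bool  # declaration obtained from the user


def automated_check_flags_synthetic(content: bytes) -> bool:
    # Stub for the automated verification tools the draft rule contemplates.
    # Real deepfake detection is imperfect, which is the article's core concern.
    return False


def process_upload(u: Upload) -> dict:
    is_synthetic = u.user_declared_synthetic or automated_check_flags_synthetic(u.content)
    return {
        # Where content is identified as synthetic, prominent labelling
        # must precede publication or display.
        "label_prominently": is_synthetic,
        "publish": True,
    }


print(process_upload(Upload(b"...", user_declared_synthetic=True)))
```

The liability asymmetry the article later highlights lives in that stub: an under-detecting check risks loss of safe harbour, while over-flagging carries little cost to the platform.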

VII. Accompanying Explanatory Notes

The Explanatory Note[4] accompanying the proposed amendments provides critical insight into the Government’s regulatory rationale. Anchored in the objective of ensuring an “Open, Safe, Trusted and Accountable Internet,” the Note identifies the proliferation of highly realistic AI-generated content—particularly deepfakes—as a systemic threat capable of inflicting multidimensional harm. These harms include non-consensual intimate imagery, financial fraud, impersonation, large-scale misinformation, electoral interference, and a broader erosion of trust in digital ecosystems. Recognising that synthetic content increasingly blurs the line between truth and fabrication, the Note justifies the need for strengthened intermediary due diligence, especially for platforms with significant reach and influence.

The Explanatory Note clarifies that synthetically generated information squarely falls within the ambit of “information” used to commit unlawful acts under existing provisions, including Rules 3(1)(b) and 3(1)(d), thereby aligning AI-generated harms with established notice-and-takedown and lawful order-based mechanisms. At the same time, it signals a decisive policy shift toward anticipatory regulation. Unlike the reactive, order-driven obligations under Rule 3(1)(d), the proposed framework for synthetic content is proactive, continuous, and technology-dependent. By mandating labelling, metadata embedding, user declarations, and verification measures, the State seeks to embed transparency and accountability directly into platform governance structures.

Nevertheless, the Explanatory Note also reflects an attempt to balance enhanced accountability with intermediary protection. It expressly safeguards good-faith removal of harmful synthetic content under Section 79(2) of the IT Act, thereby acknowledging constitutional concerns surrounding over-censorship and chilling effects on free expression. This balance underscores the regulatory intent to recalibrate, rather than dismantle, intermediary liability in response to generative AI.

Collectively, the 2025 Amendment Rules represent a calibrated and constitutionally conscious response to the challenges posed by AI-generated and synthetic content. Rather than imposing blanket prohibitions or content-based censorship, the framework prioritises transparency, traceability, and informed user choice, while remaining anchored in the safe harbour principles of the IT Act. By integrating synthetic content regulation within the existing intermediary governance architecture, the amendments seek to preserve innovation and free expression while addressing demonstrable harms. As generative technologies continue to evolve, the 2025 framework provides a foundational legal architecture: one that signals a shift toward anticipatory governance, yet remains attentive to constitutional limits and the need for regulatory restraint.

VIII. Safe Harbour Under Strain

At the same time, the Note reveals a decisive policy shift toward anticipatory regulation, signalling the State’s intention to move beyond reactive enforcement and to embed continuous transparency and verification obligations within platform governance structures, thereby recalibrating the contours of intermediary liability in response to the perceived risks posed by generative artificial intelligence.

Section 79 was designed to ensure intermediaries are not compelled to police content proactively. The draft synthetic content rules risk reintroducing constructive knowledge through the back door. By mandating verification tools, the law presumes detection capacity that does not yet reliably exist.

Deepfake detection technologies remain imperfect. The regulatory asymmetry is stark: intermediaries face little risk for over-removal but significant liability for under-detection. The rational response is over-censorship. This regulatory asymmetry, rather than malicious intent, threatens the continued viability of intermediary neutrality.

IX. Enduring Relevance of Shreya Singhal

The Supreme Court in Shreya Singhal was acutely conscious of chilling effects. The draft synthetic content rules risk recreating this environment through algorithmic enforcement. While a proviso protects intermediaries removing synthetic content, the real risk lies in loss of safe harbour for failure to detect, skewing incentives toward suppression of lawful speech.

The European Union’s AI Act, adopted in 2024, offers a useful contrast in regulatory design rather than substantive objectives. Article 50 imposes transparency obligations on deployers of AI systems, not intermediaries. The EU model preserves intermediary safe harbour, recognises technical limits, and adopts a risk-based approach with exemptions for artistic and satirical expression.

Notably, the 2025 amendment to Rule 3(1)(d) demonstrates that India already possesses a constitutionally sound mechanism to address unlawful content, including harmful deepfakes. The central regulatory question is not whether to regulate AI-generated harm, but how. Targeted orders, criminal law, civil remedies, and public investment in AI forensics offer more precise responses than continuous platform monitoring.

X. Choosing the Future of India’s Digital Constitution

The 2025 amendment to Rule 3(1)(d) reflects measured, transparent, and accountable digital governance. By restoring procedural discipline to content takedown and aligning executive action with constitutional safeguards, it reaffirms the intermediary’s role as a neutral conduit rather than an adjudicator of legality. The amendment demonstrates that India already possesses a constitutionally sound mechanism to address unlawful online content, including harmful manifestations of AI-generated material, through targeted orders, clearly defined authority, and built-in proportionality review.

The parallel push toward proactive verification of synthetically generated content, however, threatens to unsettle this carefully restored balance. By imposing continuous, technology-dependent obligations on intermediaries, particularly Significant Social Media Intermediaries, the draft framework risks transforming platforms from facilitators of speech into instruments of anticipatory regulation. This shift carries significant implications for free expression, innovation, and intermediary neutrality, especially in light of the technical limitations of deepfake detection and the asymmetric liability incentives that favour over-removal.

India thus stands at a constitutional crossroads: between preserving intermediaries as neutral conduits of speech, subject to clearly triggered and reviewable takedown obligations, and recasting them as active monitors responsible for verifying authenticity at scale. The regulatory choices made in navigating AI-generated content will shape not merely platform governance, but the contours of India’s digital constitutional order. Whether the future lies in procedural restraint anchored in Shreya Singhal, or in expansive anticipatory regulation driven by technological anxiety, will determine how free speech, accountability, and innovation coexist in India’s democratic digital ecosystem.

Mr. M. G. Kodandaram, IRS.

References

[1] https://www.meity.gov.in/static/uploads/2025/10/708f6a344c74249c2e1bbb6890342f80.pdf

[2] https://indiankanoon.org/doc/110813550/

[3] https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf

[4] https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf


Non-EU Data Processors under the radar of GDPR Supervisory Authorities for fines

It appears that EU GDPR authorities are now engaged in a global data warfare, extending GDPR fines to non-EU data processors.

In a recent case, CNIL, the French authority, has imposed a fine of 1 million Euros on a SaaS provider.

Naavi has several times addressed the issue of such fines on Indian data processors and the need for the Indian Government to provide a protective shield. This has been ignored by MeitY all along. Perhaps it needs to be addressed once again.

In the instant case (see details here), on December 11, 2025, CNIL sanctioned Mobius Solutions Ltd, an Israeli company acting as a sub-contractor, with a fine of 1 million Euros for a data leak.

The violation was “failure to delete data at the end of the contractual relationship”.

Mobius Solutions Ltd retained a copy of the data of more than 46 million Deezer users after the end of their contractual relationship, despite its obligation to delete all such data at the end of the contract. The company was also found to have used client data to improve its own services. Further, the company had failed to maintain the required register of processing activities.

Unfortunately, the data leaked onto the dark web, prompting CNIL to act.

In November 2022, CNIL had been notified about the data breach by the Controller. Data from 12.7 to 21.6 million EU users (including 9.8 million in France)—including names, ages, email addresses, and listening habits—had been posted on the dark web. The platform identified its former subcontractor, which had provided personalized advertising services, as the source of the breach. The CNIL conducted checks in 2023 and 2024, followed by an investigation in 2025, which uncovered multiple GDPR violations by the subcontractor.

In this context, it is important to note that for Indian data processors handling GDPR-covered data, FDPPI has released DGPSI-GDPR as a framework of compliance. Hopefully this will assist Indian companies in mitigating GDPR risks.

It may however be noted that the EU approach to GDPR compliance has been predatory, and the cross-border transfer conditions are not always legally compatible with local laws. Hence the risk can be mitigated but not fully eliminated. Even so, mitigation is better than ignoring compliance.

Also Refer:

Fox Rothschild

Global Policy Watch


Cyber Law College/FDPPI upgrade the online courses

We are pleased to inform that the course content of Module I, Module G and the complete C.DPO.DA. courses conducted by Cyber Law College under the FDPPI certification scheme has been upgraded to the latest versions.

Accordingly, all registrations from 1st January 2026 will be eligible for the additional videos.

The updating process is currently in progress. Kindly send an e-mail to Naavi if necessary.

Cyber Law College is also introducing a separate training program for GDPR specialization, which will include:

  1. GDPR: the law
  2. GDPR Member State laws
  3. GDPR case studies
  4. GDPR Digital Omnibus Proposal
  5. ISO 27701:2025 for GDPR

This program will be called “Master in GDPR Compliance” and should be useful for all DPOs currently working in the GDPR domain.

This new course will be launched in January 2026.

Naavi

Also Refer: CNIL Fines Non-EU Data Processor
