Non-EU Data Processors under the Radar of GDPR Supervisory Authorities for Fines

It appears that EU GDPR supervisory authorities are now engaged in a global data warfare, extending GDPR fines to non-EU data processors.

In a recent case, CNIL, the French supervisory authority, fined a SaaS provider 1 million euros.

Naavi has several times addressed the issue of such fines on Indian data processors and the need for the Indian Government to provide a protective shield. This has been ignored by MeitY all along. Perhaps it needs to be addressed once again.

In the instant case (see details here), on December 11, 2025, CNIL sanctioned Mobius Solutions Ltd, a subcontractor and an Israeli company, with a fine of 1 million euros for a data leak.

The violation was a “failure to delete data at the end of the contractual relationship”.

MOBIUS SOLUTIONS LTD retained a copy of the data of more than 46 million DEEZER users after the end of their contractual relationship, despite its obligation to delete all such data at the end of the contract. The company was also found to have used client data to improve its own services. Further, the company had failed to maintain the required register of processing activities.

Unfortunately, the data leaked onto the dark web, prompting CNIL to act.

In November 2022, CNIL had been notified of the data breach by the controller. Data of 12.7 to 21.6 million EU users (including 9.8 million in France), covering names, ages, email addresses, and listening habits, had been posted on the dark web. The platform identified its former subcontractor, which had provided personalized advertising services, as the source of the breach. CNIL conducted checks in 2023 and 2024, followed by an investigation in 2025, which uncovered multiple GDPR violations by the subcontractor.

In this context, it is important to note that, for Indian data processors handling GDPR-covered data, FDPPI has released DGPSI-GDPR as a compliance framework. Hopefully this will assist Indian companies in mitigating GDPR risks.

It may however be noted that the EU approach to GDPR compliance has been predatory, and the cross-border transfer conditions do not sit easily with local laws. Hence the risk can be mitigated but not fully eliminated. Even so, mitigation is better than ignoring compliance.

Also Refer:

Fox Rothschild

Global Policy Watch

Posted in Privacy | Leave a comment

Cyber Law College/FDPPI Upgrade the Online Courses

We are pleased to inform that the course content of Module I, Module G, and the complete C.DPO.DA. courses conducted by Cyber Law College under the FDPPI certification scheme has been upgraded to the latest versions.

Accordingly, all registrations from 1st January 2026 will be eligible for the additional videos.

The updating process is currently in progress. Kindly send an e-mail to Naavi if necessary.

Cyber Law College is also introducing a separate training program for GDPR specialization which will include

  1. GDPR, the law
  2. GDPR Member State laws
  3. GDPR case studies
  4. GDPR Digital Omnibus Proposal
  5. ISO 27701:2025 for GDPR

This program will be called “Master in GDPR Compliance” and should be useful for all DPOs currently working in the GDPR domain.

This new course will be launched in January 2026.

Naavi

Also Refer: CNIL Fines Non-EU Data Processor

Posted in Privacy | Leave a comment

Queries on DGPSI-AI explained

The DGPSI-AI is a framework conceived for use by deployers of AI who are “Data Fiduciaries” under DPDPA 2023.

An interesting set of observations has been received recently from a professional regarding the framework. We welcome the comments as an opportunity to improve the framework over time. In the meantime, let us have an academic debate to understand the concerns expressed and respond to them.

The observer made the following four observations as concerns related to the DGPSI-AI framework.

1. AI’s definition is strange and too broad. Lots of ordinary software has adaptive behavior (rules engines, auto-tuning systems, recommender heuristics, control systems). If you stretch “modify its own behavior”, you’ll start classifying non-AI automation as AI. Plus, within the AI spectrum, only ML models may have self-learning capabilities; linear, statistical, and decision-tree models do not.

2. “AI risk = data protection risk = fiduciary risk” is legally and conceptually incorrect. The DPDP Act governs personal data processing, not AI behavior as such. Many of the AI risks cited (hallucination, deception, emergent behavior, hypnosis theory) are safety/reliability/ethics risks, not privacy risks.

3. “Unknown risk = significant risk” is a logical fallacy. Unknown ≠ high. An unknown risk can be negligible, bounded, or mitigated through controls. Risk management is about estimating and bounding uncertainty.

4. Explainability is treated as a legal obligation, not a contextual requirement. This is overstated. DPDP requires notice, not model explainability.

I would like to provide my personal response to these observations, as follows:

1. AI Definition

DGPSI has recommended adopting a definition of AI that reflects an ability for automated change of the execution code based on the end results of the software, without the intervention of a human to create the modified version.

The “rules engine”, “auto-tuning systems” and other such systems that may be part of ordinary software are characterized by the existence of code written for a given context and situation. If a decision rule fails, the software may either crash or fall back to a default behaviour. The outcome is therefore not driven by any self-learning of the software; it is pre-programmed by a human being. Such software may have a higher degree of automation than most software but need not be considered AI in the strict sense.

Therefore, if there is any model where the output is pre-determined, it can be excluded from the definition of AI by a DGPSI-AI auditor with suitable documentation.
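To make the distinction concrete, here is a minimal, hypothetical Python sketch (the function and class names are illustrative only and not part of the DGPSI-AI framework) contrasting a pre-programmed rules engine with a component that modifies its own behaviour based on feedback:

```python
def rules_engine(age):
    """Pre-programmed: every outcome, including the fallback,
    was written by a human in advance and never changes at runtime."""
    if age < 18:
        return "minor"
    return "adult"  # fixed default behaviour

class AdaptiveRecommender:
    """Self-modifying: its behaviour drifts with feedback,
    with no human creating the modified version."""
    def __init__(self):
        self.weight = 0.5  # internal parameter the system tunes itself

    def recommend(self, score):
        return score * self.weight

    def learn(self, feedback):
        # Execution behaviour changes based on end results alone --
        # the characteristic the DGPSI-AI definition targets.
        self.weight += 0.1 * feedback
```

The rules engine returns the same output for the same input forever, so an auditor could document it as outside the AI definition; the adaptive component's output for the same input changes after each `learn()` call, which is the self-modifying behaviour the definition captures.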

Where the model self-corrects and, over a period of time, transforms itself like a metamorphosis into a new state without human intervention, the risk is that further outputs may start exhibiting more and more hallucinations or unpredictable outcomes. The output data, which may become input data for further use, may get so poisoned that the difference between reality and artificial creation vanishes. Hence such behaviour is classified as AI.
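The feedback-poisoning concern above can be illustrated with a small, hypothetical Python sketch (the function name and the 5% distortion figure are illustrative assumptions), in which a model re-ingests its own slightly distorted output each generation:

```python
def feedback_loop(value, bias=0.05, rounds=10):
    """Each round, the model's output carries a small systematic
    distortion and then becomes the next round's input, so the
    error compounds instead of averaging out."""
    history = [value]
    for _ in range(rounds):
        value = value * (1 + bias)  # 5% distortion per generation
        history.append(value)
    return history

drift = feedback_loop(1.0)
# drift[0] is the original signal; drift[-1] has moved roughly 63% away
```

Even a tiny per-generation distortion grows geometrically once outputs are recycled as inputs, which is why the framework treats such self-transforming behaviour as a distinct risk category.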

In actual practice, we tend to use the term “AI” loosely to refer to any software with a higher degree of autonomy. Such software can be excluded from this definition. The model implementation specification MIS-AI-1 in the framework states as follows:

“The deployer of an AI software in the capacity of a Data Fiduciary shall document a Risk Assessment of the Software obtaining a confirmation from the vendor that the software can be classified as ‘AI’ based on whether the software leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as DPIA for the AI process”

This implementation specification, which requires documentation for the purpose of compliance, may perhaps address the concern expressed.

2. AI Risk and Privacy Risk

The DGPSI-AI framework is presented in the specific context of the responsibility of a “Data Fiduciary” processing “Personal Data”.

Since non-compliance with DPDPA carries a financial risk of Rs 250 crore or more, it is considered prudent for the data fiduciary to treat AI behavioural risks as risks that can lead to non-compliance.

In the context of our usage, hallucination, rogue behaviour, and similar issues that are termed “safety” or “ethics” issues in AI are treated as “unauthorized processing of personal data” and hence become risks that may result in hefty fines. A data fiduciary cannot justify to the Data Protection Board that an error must be excused merely because it occurred while using AI.

Hence AI risks become privacy risks, or DPDPA non-compliance risks.

3. Unknown Risk:

The behaviour of AI is by design meant to be creative and is therefore unpredictable. All the risks associated with the algorithm are not known even to the developer himself. They therefore have to be classified as “Unknown Risks” by the deployer.

We accept that an unknown risk can turn out to be negligible. But we come to know of that only after the risk becomes known. A fiduciary cannot assume that the risk, when determined, will be negligible. To determine whether it is a “Significant Data Fiduciary” or not, it should be able to justify that the risk is negligible ab initio. This is provided for in the framework by MIS-AI-3, which suggests:

“Where the data fiduciary in its prudent evaluation considers that the sensitivity of the “Unknown Risk” in the given process is not likely to cause significant harm to the data principals, it shall create an “AI-Deviation Justification Document” and opt not to implement the “Significant Data Fiduciary” obligations solely by reason of using AI in the process.”

This provides the possibility of “absorbing” the “Unknown Risk” irrespective of its significance, including setting aside the need to classify the deployer as a “Significant Data Fiduciary”.

Hence there is an in-built flexibility that addresses the concern.

4. Explainability

The term “Explainability” may be used by the AI industry in a particular manner. DGPSI-AI also applies the term to the legal obligation of a data fiduciary to give a clear, transparent privacy notice.

A “Notice” from a “Fiduciary” needs to be clear, understandable, and transparent to the data principal, and hence there is a duty for the Data Fiduciary to understand the AI algorithm itself.

It may not be necessary to share the explainability document of the AI developer with the data principal in the privacy notice. But the Data Fiduciary should have reasonable assurance that the algorithm does not cause any harm to the data principal and that its decisions are reasonably understood by the Data Fiduciary.

Towards this objective, MIS-AI-6 states:

“The deployer shall collect an authenticated “Explainability” document from the developer as part of the licensing contract indicating the manner in which the AI functions in the processing of personal data and the likely harm it may cause to the data principals.”

I suppose this reasonably answers the concerns expressed. Further debate is welcome.

Naavi

Posted in Privacy | Leave a comment

Vinod Sreedharan puts a creative touch to DGPSI-AI

Mr Vinod Sreedharan is an AI expert with a creative bent of mind. He has applied his creative thoughts to give a visual imagery touch to “Taming of DPDPA and AI with DGPSI-AI”.

The complete document above is available here in PDF format.

Posted in Privacy | Leave a comment

Corrigendum to DPDPA Rules

MeitY has released some corrections to the DPDPA Rules through a Gazette notification.

It essentially consists of some typo corrections.

Necessary corrections are being made at dpdpa.in and to the rules posted on the DPDPA Rules page.

Naavi

Posted in Privacy | Leave a comment

Karnataka Hate Speech Bill is unconstitutional and Ultra-vires ITA 2000

The Government of Karnataka has recently passed a bill titled “The Karnataka Hate Speech and Hate Crimes (Prevention) Bill, 2025” (LA Bill No. 79 of 2025). It is currently pending the Governor’s assent.

The first thing we note in this bill is that it covers both “Speech” and “Crime” and includes “Electronic Form”.

Regulating speech is subject to the restrictions in the Constitution. A law that directly curtails speech is ultra vires Article 19 of the Constitution.

Legislation on “Electronic Documents” falls under the Information Technology Act, 2000 (ITA 2000), and the power of a State Government to legislate for the use of electronic documents is restricted to Section 90 of ITA 2000. It does not extend to creating new cyber crimes under the power of the State.

Section 6 of the Bill provides the power to block hate crime materials; where the material is in electronic form, this directly conflicts with Sections 69 and 69A of the Information Technology Act, 2000 (ITA 2000) and the rules made thereunder, which vest such powers subject to several restrictions.

For ease of understanding, we quote Section 90 of ITA 2000 here

Section 90: Power of State Government to make rules

(1) The State Government may, by notification in the Official Gazette, make rules to carry out the provisions of this Act.

(2) In particular, and without prejudice to the generality of the foregoing power, such rules may provide for all or any of the following matters, namely –

(a) the electronic form in which filing, issue, grant, receipt or payment shall be effected under sub-section (1) of section 6;
(b) for matters specified in sub-section (2) of section 6;

(3) Every rule made by the State Government under this section shall be laid, as soon as may be after it is made, before each House of the State Legislature where it consists of two Houses, or where such Legislature consists of one House, before that House.

It is clear that the power of the State Government to legislate on the use of electronic documents for governance is restricted and does not extend to defining new cyber crimes and prescribing punishments.

Such a provision is considered void ab initio.

Hence, including the “electronic” form of communication under the Bill renders the Bill illegal and liable to be struck down.

The world of electronic documents constitutes what is generally described as “Cyber Space”, which is recognized as an independent area of activity. Countries such as the USA recognize “Cyber” as a separate command for their defence forces precisely because it represents an extension of the geographical boundaries of a sovereign country, similar to the sea and the air space.

Hence the law of cyberspace does not fall under the Concurrent List and is the sole domain of the Central Government to legislate. This legislation, and all similar legislations so far in multiple States, are therefore unconstitutional and need to be struck down.

A suitable petition before a Constitutional Bench of the Supreme Court needs to be considered to decide the status of “Cyber Space” and the rights of a sovereign entity to draw cyber boundaries and legislate for crimes therein.

Currently, ITA 2000 has already established such boundaries and the conditions under which extra-territorial jurisdiction can be extended to foreign territories. Such powers under Section 75 cannot be left to be exercised by a State Government, and hence the Karnataka Hate Bill cannot include “Electronic Documents” as instruments by which offences under the Act may be committed.

Hence there is a need to omit “Electronic Documents” from the provisions of the Karnataka Hate Bill under Sections 2(i) and 2(iv).

Hence the part of the Bill that curtails speech, particularly in electronic form, is considered unconstitutional and ultra vires the Indian Constitution as well as ITA 2000.

Additionally, the punishment envisaged under the Bill even for a first-time offender is “imprisonment of not less than one year which may extend to seven years”, and for subsequent offences it can extend up to 10 years. The offence is “cognizable”, “non-bailable”, and “triable by the Judicial Magistrate First Class”.

Hence the offence is graded as “heinous” and can be grossly abused by the police. It can therefore have a “chilling effect”, as the Supreme Court described in the Shreya Singhal case.

There is therefore an urgent need for the Bill to be withdrawn by the Assembly, failing which rejected by the Governor, failing which struck down by the appropriate courts.

This issue, being a serious constitutional matter, has to be taken up by some public-spirited law firm and fought in the Karnataka High Court and the Supreme Court.

I hope such people take note.

Comments are welcome.

Copy of the Bill

Refer:

https://www.youtube.com/watch?v=3YmexzlaPko

Naavi

Posted in Privacy | Leave a comment