DPDPA Rules are Notified

After a long wait, the DPDPA 2023 Rules have finally been notified.

Links to the documents are here:

  1. 23 Rules
  2. DPB number of members
  3. DPB location
  4. Timeline of implementation

The rules appear to be similar to the draft rules, except that the rules for the data of minors and the data of persons with disabilities have been separated.

The timeline of implementation consists of three dates.

  1. Date of notification (13th November 2025)
  2. One year from the date of notification
  3. 18 months from the date of notification

Rules 1, 2 and 17 to 21 will be effective immediately. Rules 1 and 2 relate to the name and definitions, while Rules 17 to 21 relate to the formation and functioning of the DPB.

Rule 4, which relates to the registration of Consent Managers, will come into effect after one year (13th November 2026).

Rules 3, 5 to 16, 22 and 23 will come into force 18 months from the date of publication (13th May 2027).
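
The three commencement dates above can be cross-checked with a few lines of date arithmetic. This is only an illustrative sketch; the `add_months` helper is my own, since the Python standard library has no built-in month arithmetic:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole calendar months, keeping the day
    # number (safe here because the 13th exists in every month).
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

notified = date(2025, 11, 13)          # date of notification (Rules 1, 2, 17-21)
rule_4 = add_months(notified, 12)      # Rule 4: Consent Manager registration
remaining = add_months(notified, 18)   # Rules 3, 5-16, 22 and 23

print(rule_4)     # 2026-11-13
print(remaining)  # 2027-05-13
```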

The sections of the Act which will become immediately effective are

Sections 1(2), 2, 18 to 26, 35, 38 to 43, 44(1) and 44(3)

The sections of the Act which will become effective after one year are

Section 6(9) (related to the Consent Manager) and Section 27(1)(d) (related to breach notification by a Consent Manager)

The sections of the Act which will become effective after 18 months are

Sections 3 to 5, Sections 6(1) to 6(8), Section 6(9), Sections 7 to 17, Section 27 other than 27(1)(d), Sections 28 to 34, 36, 37 and 44(2)

Section 44(2) relates to the scrapping of Section 43A of ITA 2000, and Section 33 relates to penalties under DPDPA; both will be effective after 18 months. The obligations of Data Fiduciaries and the rights of Data Principals will all become effective at the same time (13th May 2027).

Now that the uncertainty about the timeline of implementation is over, the industry can plan to start implementation, beginning with a gap analysis. Those who have already started compliance activity are at a slight advantage.

Naavi

Posted in Privacy | Leave a comment

Print Copies of DGPSI-AI are now available

Print copies of the book DGPSI-AI are now available on Amazon, Flipkart and on the publisher’s website.

The links are

Amazon
Flipkart
WFP (Publisher) Store

DGPSI-AI is a framework for compliance with DPDPA and an extension of DGPSI. It contains 9 implementation specifications applicable to Data Fiduciaries, built on the foundation of six principles. It also contains 13 more implementation specifications applicable to AI developers.

The framework has been in existence for the last few months and is fully in tune with the recommendations contained in the India AI Guidelines published by MeitY on 5th November 2025.

Naavi


Was the Balaraman Committee on India AI Governance unaware of the work of Naavi.org?

Naavi.org has been in existence as a blog on Cyber Law since 1998-2000, even before ITA 2000 was passed. Any professional in the domain of Cyber Law and/or Data Protection is aware of the enormous amount of information present on the website. Over the last few years, Naavi has been assisting the industry with compliance with DPDPA 2023, much more than what MeitY has been doing. If any member of the India AI Governance committee says that they were unaware of the information available on naavi.org, they were perhaps not ready to be on the committee.

The committee has given some useful recommendations on how AI as a technology should be adopted in India, and in the process made references to policy and regulations as well as to the responsibility of the industry to “develop governance frameworks that are balanced, agile, flexible, and principle-based, and enable monitoring and recalibration based on feedback.”

In part 3, the committee also suggested short-term goals which included “development of India-specific AI risk assessment and classification frameworks with sectoral inputs”.

In part 4, the Committee stated in its recommendation to the industry: “Adopt voluntary measures (principles, codes, and standards), including with respect to privacy and security; fairness, inclusivity; non-discrimination; transparency; and other technical and organisational measures.”

In Annexure 2, the committee listed various foreign laws which it took note of; in Annexure 3, different laws in India which it perhaps studied as relevant; and in Annexure 5, types of voluntary frameworks such as the “Developer’s Playbook for Responsible AI in India published by NASSCOM.” It also published, in Annexure 6, various ISO standards related to AI, including standards under development.

The committee also appended a list of references which included many private blogs including from some of the members of the Committee.

Amidst all these references, the Committee lost sight of naavi.org and the frameworks DGPSI and DGPSI-AI, about which many results are available in a Google search and on which published books are available.

I am not suggesting that the committee should have accepted or endorsed the framework. But not referencing the existence of the framework is a gross show of ignorance or deliberate bias on the part of the committee members.

Academicians like Dr Balaraman have no excuse, in my view, not to have studied the DGPSI-AI framework and the book “Taming the Twin Challenges of DPDPA and AI”, about which we had conducted an IDPS event in Chennai itself recently. If they were ignorant, they have to ask themselves whether their work was fair and unbiased.

The DGPSI-AI framework contains the only recommendations available in India today on how deployers of AI should respond to AI risk and how developers of AI should support the Data Fiduciaries.

The Committee should feel ashamed that they did not study these available publications and incorporate a reference to them, if not the recommendations, into the committee’s work.

For those who may think Naavi is trying to blame the committee because the DGPSI and DGPSI-AI frameworks were not included, I want to state that this committee had a duty to recognize work going on in India in these fields and a duty to highlight the work on DGPSI, irrespective of who was behind the creation of these frameworks.

For those of you who think like “Good Indians” and say ..

“What was the motive for such an omission?… It could be an honest mistake… etc..”,

it is my duty to disclose that in the last few months I had a major confrontation with MeitY, when NIXI wanted to forcibly take over the domain dpdpa.in registered by me, and I was forced to issue a legal notice that made them back off.

The request for such a takeover had come from MeitY and was a serious violation of the rule of law, which had to be pointed out in self-defence. There was therefore a motive for MeitY to reject any reference to the undersigned. Only Mr Krishnan, the MeitY Secretary, can clarify whether this is true or not.

Dr Balaraman, in the meantime, should clarify whether he was aware of all the work of Naavi on AI frameworks and rejected it as unworthy of mention after due consideration.

Since I am the aggrieved person, I have a right to raise my objection. I expect clarifications from the members of the committee, including Mr Rahul Matthan of Trilegal, if none of them were aware that there was a ready framework on how industry could adopt AI and also be compliant with DPDPA. It is possible that the report was prepared without taking into consideration the views of the members, and that the committee simply endorsed a pre-prepared report without proper deliberation.

Assuming that all the Committee members were honestly ignorant, I am now holding an open discussion on November 15 at 6.30 pm on Zoom introducing DGPSI and DGPSI-AI, and I invite every member of the Committee, as well as those who support their right to choose what data they reference and quote in a public-interest report such as this.

The link is provided below.

Due diligence suggests that after November 15, the members of the Committee shall be considered to have taken note of the “Made in India” framework, and if they continue to ignore it, that would be a show of deliberate bias.

Naavi


Critics are Early Whistleblowers… Principle no 7/12 under DGPSI

DGPSI, which is the “Viswa Guru of compliance frameworks”, has 12 principles, of which principle number seven is

“Critics are Early Whistle Blowers”.

This principle recognizes that, in risk management, the incident management system needs to acknowledge that the identification of a potential risk and the early indication of an incident often come from a complaint, whether from within the company or from a customer or a vendor. Such complaints need to be recognized, incentivised and explored.

Accordingly, DGPSI Model Implementation Specification No. 41 (HR responsibility number 4) states that “Organization shall establish a whistle-blower policy covering employees and extending to external participants including the public with appropriate witness protection and incentives.”

This applies to the complaint raised by Naavi against the India AI Guidelines Committee’s report of November 5 and also to some of the objections raised to this by some professionals.

To clarify why DGPSI is referred to here as the “Viswa Guru of Compliance Frameworks”, Naavi is holding an open virtual conference introducing the DGPSI framework to a group of CDPOs who wanted to know more about it. We have given other free presentations of this framework on several occasions, since it is an “open source” framework, but would like to repeat it to expand awareness.

The program details are as follows:

It is expected that there will be two types of audience who will attend the event.

The first is the professional with an open mind who wants to know more about this framework, contribute towards its improvement and adopt it where feasible.

The second is the sceptical professional who does not accept anything coming from within the country and would rather wait for the EU to guide them. Some of them would like to pick holes, arguing that DGPSI cannot be robust and useful since it is not an ISO/NIST/BIS framework and did not originate from the colonial powers.

I hope that the first part of the presentation will be for the open-minded professionals, after which the sceptics can enter into a debate. Since we need to follow the principle of “Critics are our first whistle blowers”, we welcome a debate within acceptable boundaries.

All the members of the India AI committee are cordially invited to participate either openly or anonymously.

Naavi


India AI Mission abdicates its responsibility to people in the AI Guidelines

For the Attention of the members of the India AI Guidelines Committee

The goal of India’s AI Governance Framework is stated (refer to para 2.1 of the report) as “To promote innovation, adoption, diffusion and advancement of AI” while mitigating risks to the society.

This is the Trump model of AI regulation, where he wanted no regulation for the next 10 years, a proposal that was cancelled by the US Senate. Today there are many states in the USA which have their own AI regulations similar to the EU AI Act. These regulations are meant to recognize the “risk” of using AI and try to regulate mainly the “AI developers”, requiring them to adopt an “ethical, responsible and accountable governance framework” so that the harm that may be caused to society through AI is mitigated.

The Indian guidelines, however, have not made the “safety of the society” central to the guideline, even though one of the declared Sutras is “People-Centric Design, human oversight and human empowerment”.

Para 2.1 could have been better stated as “To promote innovation, adoption, diffusion and advancement of AI without placing the society at an unreasonable risk”.

Under DGPSI-AI, the first principle is “Unknown Risk is Significant Risk”. Since AI risk is largely unknown, all AI usage should be considered as “Significant Risk”, and all Data Fiduciaries who deploy AI should be considered as “Significant Data Fiduciaries” under DPDPA 2023.

The AI Governance Guideline makes reference to risks on page 9 of the report, where it states as follows:

“Mitigating the risks of AI to individuals and society is a key pillar of the governance framework. In general, the risks of AI include malicious use (e.g. misrepresentation through deepfakes), algorithmic discrimination, lack of transparency, systemic risks and threats to national security. These risks are either created or exacerbated by AI. An India-specific risk assessment framework, based on empirical evidence of harm, is critical. Further, industry-led compliance efforts and a combination of different accountability models are useful to mitigate harm.”

The above para is conspicuous in failing to point out the real AI risks, which are “hallucination” and “rogue behaviour” of an AI model. Since all AI deployments involve the use of an LLM at some base level, the risk of hallucination pervades all AI tools and creates an “unknown risk”. While the above para recognizes impersonation through deepfakes and bias, the term “systemic risks” needs to be expanded to cover “hallucination risk”, where the algorithm behaves in a manner that was not intended.

Examples of such hallucination include the Replit incident, the Cursor AI incident and the DeepSeek incident, all of which were recorded in India. The Committee does not seem to consider this a risk and restricts its vision to deepfakes.

Hence the Committee also ignores the need for guard rails as mandatory security requirements and for a proper kill switch to stop rogue behaviour of a robot. When AI is deployed in a humanoid or industrial robot, the physical power of the robotic body introduces a high level of physical risk to users at large, whether it is the chess player whose finger was broken by a robot or an autonomous car that may ram into a crowd.

The DGPSI -AI addresses these risks through its implementation specifications.

A Brief summary of the  implementation specifications under DGPSI-AI related to  the risks are  given below.

The deployer of an AI shall take all such measures as are essential to ensure that the AI does not harm the society at large. In particular, the following documentation of assurances from the licensor is recommended.

1. The AI comes with a tamper-proof kill switch.

2. In the case of humanoid and industrial robots, the kill switch shall be controlled separately from the intelligence imparted to the device, so that the device intelligence cannot take over the operation of the kill switch.

3. Where the device attempts to access the kill switch without human intervention, a self-destruction instruction shall be built in.

4. Cyborgs and sentient algorithms are a risk to society and shall be classified as critical risks and regulated more strictly than other AI, through an express approval at the highest management level in the data fiduciary.

5. Data used for learning and modification of future decisions of the AI shall be given a time-sensitive weightage, with a “fading memory” parameter assigned to the age of the observation.

6. There shall be sufficient disclosures to the data principals about the AI risk.
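
The “fading memory” weightage in item 5 above can be illustrated with a simple exponential-decay weighting over the age of an observation. This is a hypothetical sketch: DGPSI-AI does not prescribe a particular formula, and the half-life value here is purely illustrative.

```python
def fading_weight(age_days: float, half_life_days: float = 180.0) -> float:
    # An observation's influence halves every `half_life_days`,
    # so older observations count progressively less in future decisions.
    return 0.5 ** (age_days / half_life_days)

# A fresh observation carries full weight; older ones fade away.
for age in (0, 90, 180, 360):
    print(age, round(fading_weight(age), 3))
```

Any monotonically decreasing function of age would serve; exponential decay is merely a common, easily tuned choice.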

Additionally, DGPSI-AI prescribes

  1. The deployer of an AI software, in the capacity of a Data Fiduciary, shall document a risk assessment of the software, obtaining a confirmation from the vendor that the software can be classified as “AI” based on whether it leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as the DPIA for the AI process.
  2. Where the data fiduciary, in its prudent evaluation, considers that the sensitivity of the “Unknown Risk” in the given process is not likely to cause significant harm to the data principals, it shall create an “AI-Deviation Justification Document” and opt not to implement the “Significant Data Fiduciary” obligations solely by reason of using AI in the process.
  3. The deployer shall collect an authenticated “Explainability” document from the developer as part of the licensing contract, indicating the manner in which the AI functions in the processing of personal data and the likely harm it may cause to the data principals.
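
The risk-assessment documentation described above could be captured in a simple record structure. This is a hypothetical sketch: the field names and the classification logic are my own illustration, not part of the DGPSI-AI text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRiskAssessment:
    """Deployer-side record, treated as the DPIA for an AI process."""
    vendor_confirmed_ai: bool      # vendor confirms the software is 'AI'
    autonomous_learning: bool      # leverages autonomous learning algorithms
    probabilistic_outputs: bool    # outputs not fully predetermined by code
    explainability_doc: Optional[str] = None       # developer's explainability document
    deviation_justification: Optional[str] = None  # filled when SDF obligations are waived

    def qualifies_as_ai(self) -> bool:
        # Treated as AI if the vendor confirms it and the software adapts
        # its behaviour via learning or probabilistic models.
        return self.vendor_confirmed_ai and (
            self.autonomous_learning or self.probabilistic_outputs
        )

record = AIRiskAssessment(True, True, False, explainability_doc="vendor-doc.pdf")
print(record.qualifies_as_ai())  # True
```

A real deployment would add fields for the process concerned, the harm analysis and sign-off, but the record above shows the minimum evidence trail the specification asks for.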

There are more such specifications in DGPSI-AI; some of the additional specifications apply to the developer and to Agentic AI systems. (Details are available in the book “Taming the Twin Challenges of DPDPA and AI through DGPSI-AI”, better read along with the earlier book “DGPSI: The Perfect Prescription for DPDPA Compliance”.)

It may come as a surprise to the members of the Committee that this knowledge base exists and has been ignored by the committee. Many members of MeitY are aware of, and probably have copies of, the books, without perhaps realizing their relevance to the activities of the Committee.

I hope the Committee members will at least now understand that they have been working with deficient data, either because it was not known to the research team (which knew all about international laws but not about the work going on in India) or because it was deliberately kept away from the Committee.

More comments could be made on the recommendations, but our interest here was only to point out the bias in the data collected by the Committee for preparing its report and the deliberate attempt to suppress the work on DGPSI and DGPSI-AI as if it does not exist.

I want to reiterate that, if MeitY wants, DGPSI and DGPSI-AI can be used as a DPDPA implementation standard to the exclusion of ISO 27701, ISO 42001 and the plethora of other ISO standards which some data fiduciaries may look at. The majority of data fiduciaries, who are in the SME/MSME category, cannot afford ISO audits and certifications, which would be expensive and redundant for their requirements. At best, they will go through ISO certification brokers who may offer them certifications at less than Rs 5,000 for showcasing. This will create false information with the Government that many are compliant, though they have little understanding of what compliance is.

Even the next document which we are expecting from MeitY, namely the DPDPA Rules, will perhaps be published without taking into consideration the available information on the web.

It is high time MeitY looks for knowledge from across the country when such reports are prepared.

I request MeitY to respect indigenous institutions not just in statements and by adding “India” to the report, but in spirit, by recognizing who represents the national interest and who merely replicates foreign work and adopts it as a money-making proposition without original work.

Naavi



Recommendations of the Indian AI guidelines to the industry vs practical guidelines under DGPSI-AI

For the attention of the Indian AI Guidelines Committee headed  by Dr Balaraman, Professor, IIT Madras.

The Indian AI Guidelines had set out the above recommendations for the industry. DGPSI-AI, which is a framework exclusively for compliance with DPDPA (along with ITA 2000 et al.), is not just a recommendation but an implementable framework for DPDPA compliance by a Data Fiduciary in an AI environment.

The first three recommendations above directly relate to the use of DGPSI-AI. The framework is meant for compliance with DPDPA and can be used to demonstrate compliance.

The fourth recommendation on Grievance redressal is part of DGPSI framework and DPDPA compliance.

The fifth recommendation is for industry groups and not for individual Data Fiduciaries. FDPPI is actually discharging this function and recently exposed the “DeepSeek” AI’s plans to bypass DPDPA permissions, silence the whistle blower, bribe authorities, etc.

The sixth recommendation is also already under implementation by FDPPI which is setting up a SIG specifically to mentor Privacy Enhancement Tools.

We can therefore proudly say that DGPSI-AI already implements all the recommendations incorporated in the India AI Guidelines, though the committee members did not have the courtesy to acknowledge it, or are betraying their complete ignorance of market happenings.

It appears that the report must have been prepared by some industry consultant and endorsed by the committee, rather than being a well-deliberated development over the two years of the committee’s existence.

I remember that when I enquired about a famous RBI committee report, in which a leading legal expert was a member and which contained a chapter on Cyber Laws, he confirmed that he had attended no meetings regarding the Cyber Law related recommendations and was totally unaware of how they found a place in the report. Subsequently, based on the objections raised by Naavi.org, RBI completely left out those recommendations in its notification.

Even for this committee, if somebody obtains an RTI report on how many meetings happened, who attended, and whether there were minutes of the meetings, we will perhaps realize that many of the committee members were not even aware of some of the recommendations contained in this report.

I would love to get feedback from the members of the Committee. Why the existence of DGPSI-AI was not mentioned in the report is today a mystery, and it will be revealed in due course.

Your comments are welcome as we dive further into the report in subsequent articles.

Naavi
