Six Principles of DGPSI-AI vs the Seven Sutras of the India AI Guidelines (IAIG)

For the attention of the Indian AI Committee headed by Dr Balaraman, Professor, IIT Madras. Since the Committee is not aware of the DGPSI-AI framework developed voluntarily on behalf of the Indian industry, it is our duty to present some of its salient features and to show how it addresses the AI-related risks of deployers and, indirectly, the AI governance requirements of developers. We are aware that the report has already been finalized and released, but it is necessary to place before the august committee members what they deliberately chose not to refer to in their report, presumably because it would hurt vested interests.

DGPSI-AI is a framework that extends the DGPSI framework meant for DPDPA compliance. The DGPSI framework is guided by 12 principles and 50 implementation specifications that cover compliance with DPDPA 2023, ITA 2000 (for DPDPA-protected data, or DPD), and relevant aspects of the Consumer Protection Act, the Telecom Act, BNS, BSA and the BIS draft guidelines on Personal Data Governance and Protection. DGPSI-AI extends this to the AI environment within a Data Fiduciary's operations and consists of six principles and nine implementation specifications. We shall now discuss only this framework.

In the next week or so, we are conducting an open virtual conference on DGPSI and DGPSI-AI, and the Committee members are invited to participate in a detailed discussion where we will highlight how DGPSI covers the ISO 27701:2025 requirements completely. The Committee and MeitY should understand that these frameworks can singularly replace the need for compliance with a host of ISO standards, which are only best-practice standards and not meant for DPDPA compliance. In the AI scenario too, the IAIG refers to 26 ISO standards, indirectly hinting that Indian companies have to look to them for their compliance. We want this mindset of dependence only on ISO standards, to the exclusion of any attempt to develop Indian standards, to change. Let us leave behind the colonial mindset that anything which comes from the West is good, and have confidence in Mr Modi's Made in India concept.

Let us now look at the Six Principles of DGPSI-AI depicted below.

We leave the detailed discussion to the book, which is available on Kindle (it should be available in print directly from Naavi, or whenever the publisher, White Falcon Publishing, wakes up from its deep slumber).

Now let us look at the Seven Sutras of IAIG.

DGPSI-AI is built on Accountability, Explainability, Responsibility and Ethics, which are meant to build the “Trust” that is the declared objective of the IAIG. Explainability in DGPSI-AI can also be mapped to “Understanding by Design” in the IAIG. Accountability is covered in both. Safety in the IAIG is covered under Security in DGPSI-AI. The Resilience and Sustainability and People-First concepts of the IAIG are covered under the principle of “Ethics” in DGPSI-AI.
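
To make the correspondence easier to scan, the mapping described above can be written out as a simple lookup. The snippet below is only an illustrative reading of this article's comparison, expressed as a Python dictionary; it is not an official cross-walk published with either framework.

```python
# Illustrative mapping of DGPSI-AI principles to the IAIG sutras they address,
# as read from the paragraph above (an interpretation, not an official cross-walk).
dgpsi_ai_to_iaig = {
    "Accountability": ["Accountability"],
    "Explainability": ["Understanding by Design"],
    "Security": ["Safety"],
    "Ethics": ["Resilience and Sustainability", "People First"],
    # "Responsibility", together with the principles above, serves the IAIG's
    # overarching objective of "Trust".
}

for principle, sutras in dgpsi_ai_to_iaig.items():
    print(f"{principle} -> {', '.join(sutras)}")
```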

Where the IAIG tilts towards industry interests is in the concept of “Innovation over Restraint”. DGPSI-AI prefers “Innovation with Restraint” and flags the “Unknown Risk” associated with AI usage as a “Significant Risk”. By explaining the restraint concept as “All things being equal, Responsible Innovation is prioritized over cautionary restraint”, the IAIG tilts towards industry benefits rather than being data-principal specific.
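
The “Innovation with Restraint” stance can be sketched as a simple decision rule: a risk that cannot be classified never defaults to a low rating. The helper below is a hypothetical illustration of that default, not code taken from the DGPSI-AI documentation.

```python
# Hypothetical sketch of the DGPSI-AI default that "Unknown Risk" is treated
# as "Significant Risk": an unclassified AI risk is never waved through.
from enum import Enum
from typing import Optional

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    SIGNIFICANT = 3

def assessed_risk(classified_level: Optional[RiskLevel]) -> RiskLevel:
    """Return the level to record in the risk register; unknown defaults to SIGNIFICANT."""
    return classified_level if classified_level is not None else RiskLevel.SIGNIFICANT

print(assessed_risk(None))           # RiskLevel.SIGNIFICANT
print(assessed_risk(RiskLevel.LOW))  # RiskLevel.LOW
```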

Since, under the DPDPA, the Data Fiduciary is a trustee of the Data Principal, it cannot prioritize industry benefits over risks to the data principal. Hence DGPSI-AI is a shade better than the IAIG in this respect.

Your comments are welcome… More comments will follow.

Naavi

The 26 ISO standards referred to by the India AI Guidelines

While our Prime Minister Mr Narendra Modi swears by “Made in India”, the reference to 26 ISO standards in the AI guidelines indicates that MeitY is not concerned with such indigenization of industry efforts.

This was pointed out by the undersigned as far back as 2011, when the Section 43A rules were notified and MeitY in effect became a marketing agency for the ISO 27001 standard. The same mindset is reflected in these guidelines where, instead of developing our own standard for compliance, we are still interested in promoting only ISO standards.

I wish the committee had recognized that efforts are already underway to develop indigenous compliance frameworks such as DGPSI, the Data Governance and Protection Standard of India, which is better than ISO 27701:2025. Its extended version, DGPSI-AI, suggests measures for DPDPA compliance which include all the recommendations that this guideline makes (as will be specifically pointed out in subsequent articles). But either the Committee's inadequate research or the pressure to support non-indigenous standards must have outweighed other considerations and suppressed any reference to DGPSI or similar efforts.

I am disappointed that the Chairman, Dr Balaraman, did not make a reference to the DGPSI and DGPSI-AI frameworks while finalizing the report.

DGPSI-AI could be considered a single indigenous framework that is good enough for compliance in place of the 26 or more ISO standards pointed out in the report, and that would have hurt the interests of the promoters of ISO standards. Perhaps these forces worked to suppress information on the existence of the DGPSI framework, which is a boon for SMEs seeking to remain compliant at an affordable cost.

This would be considered a failure of the Committee.

P.S.: I am aware that my comments could create an adverse backlash against me, but the truth has to be told. If such committees headed by academic persons exhibit the influence of vested interests, it has to be called out. Naavi will not be Naavi without pointing out such deficiencies in the system. My advice to the committee members is “Be Indian and encourage Made in India efforts”. It is not enough to call the mission the “India AI Mission” and the guidelines the “India AI Guidelines”. The Indianness has to be reflected in actions and not remain only in the name. If DGPSI-AI did not merit a mention even on page 56, under Types of Voluntary Frameworks, it only indicates that the Committee did not conduct proper literature research…or that the reference was deliberately suppressed by vested interests.

Naavi

Data Input BIAS indicated in the AI Guidelines report?

The November 5 guidelines on AI issued by MeitY are meant to unlock AI’s benefits for growth, inclusion, and competitiveness, while safeguarding against risks to individuals and society.

The PIB press note indicates that “These are envisioned as a foundational reference for policymakers, researchers, and industry to foster greater national and international cooperation for safe, responsible, and inclusive AI adoption”.

In other words, this document is not a standard or requirement. It is a background document for further action.

Part 4 of the guidelines sets out “Practical Guidelines for the Industry”. It states that any person developing or deploying AI systems in India should be guided by the following:

  1. Comply with all Indian laws and regulations, including but not limited to laws relating to information technology, data protection, copyright, consumer protection, offences against women, children, and other vulnerable groups that may apply to AI systems.
  2. Demonstrate compliance with applicable laws and regulations when called upon to do so by relevant agencies or sectoral regulators.
  3. Adopt voluntary measures (principles, codes, and standards), including with respect to privacy and security; fairness, inclusivity; non-discrimination; transparency; and other technical and organisational measures.
  4. Create a grievance redressal mechanism to enable reporting of AI-related harms and ensure resolution of such issues within a reasonable timeframe.
  5. Publish transparency reports that evaluate the risk of harm to individuals and society in the Indian context. If they contain any sensitive or proprietary information, the reports should be shared confidentially with relevant regulators.
  6. Explore the use of techno-legal solutions to mitigate the risks of AI, including privacy-enhancing technologies, machine unlearning capabilities, algorithmic auditing systems, and automated bias detection mechanisms.
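
Item 6 above names several techno-legal tools. As one small, hypothetical illustration of an “automated bias detection mechanism”, the sketch below computes a demographic-parity gap over a model's decisions; the metric, group labels and the 10% threshold are assumptions for illustration, not values prescribed by the guidelines or by DGPSI-AI.

```python
# Hypothetical illustration of a simple automated bias check: the difference in
# positive-decision rates across groups (demographic parity gap).
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group_label, decision) pairs, decision in {0, 1}.
    Returns the largest difference in positive-decision rates between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Assumed sample data and an assumed 10% review threshold.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
if demographic_parity_gap(sample) > 0.10:
    print("Potential bias detected; record the finding in the transparency report.")
```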

It is heartening to note that FDPPI’s framework for compliance of DPDPA in the AI environment already incorporates all these suggestions.

I wish the drafting committee had read the book “Taming the twin challenges of DPDPA and AI with DGPSI-AI”. This book is now available in e-book form from Amazon; it was released in a pre-print version during IDPS 2025 at Bengaluru on September 17. I am not sure whether the committee members were unaware of this framework or whether they chose to deliberately suppress the information from the report. I urge the members to at least read the framework now and compare its six principles, nine implementation specifications for deployers and 13 implementation specifications for developers with the guidelines.

It is possible that the committee's research was inadequate and that its members follow neither www.naavi.org nor LinkedIn. Had they done so, they would have known the recommendations contained in this book and could at least have made a reference to the document in the report.

It is also possible that the last meeting of the committee was held well before September 17, and hence the members were unaware of the book at that point in time.

It may also be true that the Committee did not want to share any credit and wanted to showcase the report as an original recommendation.

It is therefore apt to say that “Data Input Bias” was evident in the development of the report itself.

However, we will set the bias aside and try to correct it through a series of articles highlighting how DGPSI-AI compares with the seven Sutras and with the recommendations relating to policy and regulation, risk mitigation, and the suggested action plans, particularly for the industry.

Watch this space for more….

Naavi

AI Governance Guidelines from GOI

On November 5, a report containing AI governance guidelines was released by MeitY with the declared objective of providing a foundational reference for policymakers, researchers, and industry to foster greater national and international cooperation for safe, responsible, and inclusive AI adoption.

The guidelines have been drafted by a high-level committee under the chairmanship of Prof. Balaraman Ravindran, IIT Madras, comprising policy experts including Shri Abhishek Singh, Additional Secretary, MeitY; Ms. Debjani Ghosh, Distinguished Fellow, NITI Aayog; Dr. Kalika Bali, Senior Principal Researcher, Microsoft Research India; Mr. Rahul Matthan, Partner, Trilegal; Mr. Amlan Mohanty, Non-Resident Fellow, NITI Aayog; Mr. Sharad Sharma, Co-founder, iSPIRT Foundation; Ms. Kavita Bhatia, Scientist ‘G’ & GC, MeitY & COO, IndiaAI Mission; Mr. Abhishek Aggarwal, Scientist D, MeitY; Mr. Avinash Agarwal, DDG (IR), DoT; and Ms. Shreeppriya Gopalakrishnan, DGM, IndiaAI.

The guidelines will be analysed by FDPPI’s AI Chair and its comments will be provided here.

In September this year, FDPPI released DGPSI-AI, a framework covering the recommended industry approach to DPDPA compliance where AI is used by a Data Fiduciary. This framework, which is an extension of the DGPSI framework, also covers the requirements of AI developers and Agentic AI users.

It would be interesting to look at this framework in the light of the guidelines now released.

Watch out for more articles on the guidelines….

A copy of the report can be accessed here:

FDPPI to set up an SIG to follow up on DPDPA Rules

When the DPDPA Rules are notified, it is expected that different industry segments will have different concerns. Some of these concerns will relate to the interpretation of the Rules, some may indicate conflicts with sectoral regulations, and some may even require representations to be made to the DPB or MeitY for clarification or modification.

In order to assist the ecosystem, FDPPI is setting up a Special Interest Group of industry experts selected from FDPPI’s trained and certified DPOs, who will continuously monitor developments and share their views periodically in the form of advisories to the industry or otherwise. Where necessary, they will also be in touch with MeitY and the DPB to seek clarifications.

We are presently in the process of  setting up the SIG.

Naavi

Albania creates “AI Babies” of its “AI Minister” and brings them into Parliament

“The AI can be good, bad, ugly and bizarre,” says the anchor. What more can you say about the Albanian Prime Minister, who first created an “AI Minister” named Diella and is now creating 83 “babies”, AI assistants, one for each of the party members in the country’s parliament? Soon he may replace all his ministers with AI agents, and perhaps create a digital twin of himself and anoint it as Deputy PM to take over after his death.

The decision is all the stranger since Diella is herself a virtual chatbot and not even a humanoid robot. Some time later, PM Edi Rama may hear of “Parakaya Pravesha” and create a humanoid robot into which Diella’s program can be imported. Then she will have a body as well.

Consider also that Saudi Arabia did not hesitate to grant citizenship to Sophia, something Edi Rama may emulate to cross the legal barrier which, we understand, requires members of the Albanian parliament to be citizens.

The private sector is not far behind in this craze: the Colombian MNC Dictador has appointed an autonomous AI agent named MIKA.

Some may dismiss these as jokes to be ignored, but to me they are indicative of a malaise that will kill the world as we know it today. Of course, our culture teaches us to think that this is part of destiny and that the next Kalki may actually be an AI agent and an autonomous ruler of the world.

It is simultaneously noticed that quantum physicists have already identified pattern development in the chaotic quantum chip state, which is indicative of early signs of general intelligence in AI.

Once these thoughts combine and Mr Edi Rama decides to transform himself into a cyborg by placing a chip inside his brain to link to an AI agent, we will have the first global AI leader who can take over the world.

The best we can do is to pray that this does not happen too quickly for all of us to absorb.

Naavi
