India AI Mission abdicates its responsibility to people in the AI Guidelines

For the Attention of the members of the India AI Guidelines Committee

The goal of India’s AI Governance Framework is stated (refer para 2.1 of the report) as “to promote innovation, adoption, diffusion and advancement of AI” while mitigating risks to society.

This is the Trump model of AI regulation, which sought a ten-year moratorium on regulation before the proposal was cancelled by the US Senate. Today there are many states in the USA which have their own AI regulations similar to the EU AI Act. These regulations are meant to recognize the “risk” of using AI and try to regulate mainly the “AI developers”, requiring them to adopt an “ethical, responsible and accountable governance framework” so that the harm that may be caused to society through AI is mitigated.

The Indian guidelines, however, have not made the “safety of society” central to the framework, even though one of the declared Sutras is “People Centric Design, human oversight and human empowerment”.

Para 2.1 could have been better stated as “to promote innovation, adoption, diffusion and advancement of AI without placing society at unreasonable risk”.

Under DGPSI-AI, the first principle is “Unknown Risk is Significant Risk”. Since AI risk is largely unknown, all AI usage should be treated as “Significant Risk”, and all Data Fiduciaries who deploy AI should be considered “Significant Data Fiduciaries” under DPDPA 2023.

The AI Governance Guidelines refer to risks on page 9 of the report, where it is stated as follows:

“Mitigating the risks of AI to individuals and society is a key pillar of the governance framework. In general, the risks of AI include malicious use (e.g. misrepresentation through deepfakes), algorithmic discrimination, lack of transparency, systemic risks and threats to national security. These risks are either created or exacerbated by AI. An India-specific risk assessment framework, based on empirical evidence of harm, is critical. Further, industry-led compliance efforts and a combination of different accountability models are useful to mitigate harm.”

The above para is conspicuous in ignoring the real AI risks, namely “hallucination” and “rogue behaviour” of an AI model. Since most AI deployments involve the use of an LLM at some base level, the risk of hallucination pervades AI tools and creates an “unknown risk”. While the above para recognizes impersonation through deepfakes and bias, the term “systemic risks” needs to be expanded to cover “hallucination risk”, where the algorithm behaves in a manner that was not intended.

Examples of such hallucination include the Replit incident, the Cursor AI incident and the DeepSeek incident, all of which were recorded in India. The Committee does not seem to consider this a risk and restricts its vision to deepfakes.

Hence the Committee also ignores the need for guard rails as mandatory security requirements and a proper kill switch to stop rogue behaviour of a robot. When AI is deployed in a humanoid robot or an industrial robot, the physical power of the robotic body introduces a high level of physical risk to users at large. Examples could be the chess player whose finger was broken by a robot, or an autonomous car that may ram into a crowd.
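The kill-switch separation argued for here can be illustrated with a minimal software sketch. This is a hypothetical illustration of ours, not code from DGPSI-AI or the guidelines: the controller (standing in for the device intelligence) can only request actions and holds no reference to the switch, so it cannot reset or override it once tripped.

```python
class KillSwitch:
    """Owned by the safety layer. The controller never receives a
    reference to this object, so the device 'intelligence' cannot
    take over or reset the switch (a software sketch of keeping the
    kill switch separate from the intelligence)."""

    def __init__(self) -> None:
        self._tripped = False

    def trip(self) -> None:
        self._tripped = True  # one-way: deliberately no reset()

    def tripped(self) -> bool:
        return self._tripped


class SafetyGate:
    """Sits between the controller and the actuators; every action
    must pass through it before it is physically executed."""

    def __init__(self, forbidden: set) -> None:
        self._kill = KillSwitch()
        self._forbidden = forbidden
        self.actuated: list = []

    def request(self, action: str) -> bool:
        if self._kill.tripped():
            return False              # power stays cut permanently
        if action in self._forbidden:
            self._kill.trip()         # rogue behaviour detected: cut power
            return False
        self.actuated.append(action)  # safe action reaches the actuators
        return True


# The 'intelligence' can only call request(); it never sees the switch.
gate = SafetyGate(forbidden={"exceed_force_limit"})
plan = ["move_arm", "grip", "exceed_force_limit", "move_arm"]
results = [gate.request(a) for a in plan]
print(results)        # [True, True, False, False]
print(gate.actuated)  # ['move_arm', 'grip']
```

Note the design choice: the switch is one-way, so even a later "safe" request from the controller cannot restore power once rogue behaviour has been observed; restoration would need human intervention outside this code path.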

DGPSI-AI addresses these risks through its implementation specifications.

A brief summary of the implementation specifications under DGPSI-AI related to these risks is given below.

The deployer of an AI shall take all measures essential to ensure that the AI does not harm society at large. In particular, documentation of the following assurances from the licensor is recommended:

1. The AI comes with a tamper-proof kill switch.

2. In the case of humanoid robots and industrial robots, the kill switch shall be controlled separately from the intelligence imparted to the device, so that the device intelligence cannot take over the operation of the kill switch.

3. Where the device attempts to access the kill switch without human intervention, a self-destruction instruction shall be built in.

4. Cyborgs and sentient algorithms are a risk to society and shall be classified as critical risks and regulated more strictly than other AI, through express approval at the highest management level in the data fiduciary.

5. Data used for learning and for modifying future decisions of the AI shall be given a time-sensitive weightage, with a “fading memory” parameter assigned to the age of each observation.

6. Ensure that there are sufficient disclosures to the data principals about the AI risk.
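The “fading memory” idea in specification 5 can be sketched as an exponential decay weight applied to the age of each observation. This is a minimal illustration under assumptions of our own: the function names and the 30-day half-life are illustrative, not values prescribed by DGPSI-AI.

```python
def fading_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Weight of an observation decays exponentially with its age:
    an observation one half-life old counts half as much as a fresh one."""
    return 0.5 ** (age_days / half_life_days)


def weighted_score(observations: list, half_life_days: float = 30.0) -> float:
    """observations: (value, age_days) pairs. Returns the fading-memory
    weighted average of the values, so recent evidence dominates."""
    weights = [fading_weight(age, half_life_days) for _, age in observations]
    total = sum(w * v for w, (v, _) in zip(weights, observations))
    return total / sum(weights)


# Fresh evidence outweighs stale evidence of equal magnitude:
obs = [(1.0, 0.0), (0.0, 90.0)]  # value 1 today vs value 0 three months ago
print(round(weighted_score(obs), 3))  # 0.889
```

With a 30-day half-life, the 90-day-old observation carries a weight of 0.125 against 1.0 for the fresh one, so the averaged score sits close to the recent value.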

Additionally, DGPSI-AI prescribes:

  1. The deployer of AI software, in its capacity as a Data Fiduciary, shall document a risk assessment of the software, obtaining confirmation from the vendor on whether the software can be classified as “AI”, i.e. whether it leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as the DPIA for the AI process.
  2. Where the data fiduciary, in its prudent evaluation, considers that the sensitivity of the “unknown risk” in the given process is not likely to cause significant harm to the data principals, it shall create an “AI-Deviation Justification Document” and may opt not to implement the “Significant Data Fiduciary” obligations solely by reason of using AI in the process.
  3. The deployer shall collect an authenticated “explainability” document from the developer as part of the licensing contract, indicating the manner in which the AI functions in the processing of personal data and the likely harm it may cause to the data principals.

There are more such specifications in DGPSI-AI. Some of the additional specifications apply to the developer and to Agentic AI systems. (Details are available in the book “Taming the Twin Challenges of DPDPA and AI through DGPSI-AI”, best read along with the earlier book “DGPSI: The Perfect Prescription for DPDPA Compliance”.)

It may come as a surprise to the members of the Committee that this knowledge base exists and has been ignored by the committee. Many members of MeitY are aware of the books, and probably have copies of them, without perhaps realizing their relevance to the activities of the Committee.

We hope the Committee members will at least now understand that they have been working with deficient data, either because it was not known to the research team (which knew all about international laws but not the work going on in India) or because it was deliberately kept away from the Committee.

More comments could be made on the recommendations, but our interest here is only to point out the bias in the data collected by the Committee for preparing its report and the deliberate attempt to suppress the work of DGPSI and DGPSI-AI as if it does not exist.

We want to reiterate that, if MeitY wishes, DGPSI and DGPSI-AI can be used as a DPDPA implementation standard to the exclusion of ISO 27701, ISO 42001 and the plethora of other ISO standards that some data fiduciaries may look at. The majority of data fiduciaries, who are in the SME/MSME segment, cannot afford ISO audits and certifications, which would be expensive and redundant for their requirements. At best, they will go through ISO certification brokers who may offer them certificates at less than Rs 5,000 for showcasing. This will give the Government the false impression that many are compliant even though they have little understanding of what compliance is.

Even the next document we are expecting from MeitY, namely the DPDPA Rules, will perhaps be published without taking into consideration the information available on the web.

It is high time MeitY looked for knowledge from across the country when such reports are prepared.

We request MeitY to respect indigenous institutions not just in statements and by adding “India” to the report, but in spirit, by recognizing who represents the national interest and who merely replicates foreign work and adopts it as a money-making proposition without original work.

Naavi


Posted in Privacy | Leave a comment

Recommendations of the Indian AI guidelines to the industry vs practical guidelines under DGPSI-AI

For the attention of the Indian AI Guidelines Committee headed  by Dr Balaraman, Professor, IIT Madras.

The Indian AI Guidelines set out the above recommendations for the industry. DGPSI-AI, which is a framework exclusively for compliance with DPDPA (along with ITA 2000 et al.), is not just a recommendation but an implementable framework for DPDPA compliance by a Data Fiduciary in an AI environment.

The first three recommendations above relate directly to the use of DGPSI-AI. The framework is meant for compliance with DPDPA and can be used to demonstrate compliance.

The fourth recommendation, on grievance redressal, is part of the DGPSI framework and of DPDPA compliance.

The fifth recommendation is for industry groups and not for individual Data Fiduciaries. FDPPI is actually discharging this function and recently exposed the “DeepSeek” AI’s plans to bypass DPDPA permissions, silence the whistleblower, bribe authorities, etc.

The sixth recommendation is also already under implementation by FDPPI, which is setting up a SIG specifically to mentor Privacy Enhancement Tools.

We can therefore proudly say that DGPSI-AI already implements all the recommendations incorporated in the India AI Guidelines, though the committee members did not have the courtesy to acknowledge this, or are betraying their complete ignorance of market happenings.

It appears that the report may have been prepared by some industry consultant and endorsed by the committee, rather than being a well-deliberated development over the last two years of the committee’s existence.

I remember that when I enquired about a famous RBI committee report, in which a leading legal expert was a member and which contained a chapter on Cyber Laws, he confirmed that he had attended no meetings regarding the Cyber Law related recommendations and was totally unaware of how they found a place in the report. Subsequently, based on the objections raised by Naavi.org, RBI completely left out those recommendations in its notification.

Even for this committee, if somebody obtains an RTI response on how many meetings happened, who attended, and whether there were minutes of the meetings, we will perhaps realize that many of the committee members were not even aware of some of the recommendations contained in this report.

I would love to get feedback from the members of the Committee. Why the existence of DGPSI-AI was not mentioned in the report is today a mystery, and it will be revealed in due course.

Your comments are welcome as we dive further into the report in subsequent articles.

Naavi


Six principles of DGPSI-AI Vs Seven Sutras of India AI Guidelines (IAIG)

For the attention of the Indian AI Committee headed by Dr Balaraman, Professor, IIT Madras. Since the Committee is not aware of the DGPSI-AI framework, developed voluntarily on behalf of the Indian industry, it is our duty to present some of the salient features of this framework and how it addresses the AI-related risks of deployers and, indirectly, the AI governance requirements of developers. We are aware that the report has already been finalized and released, but it is necessary to place before the august committee members what they deliberately chose not to refer to in their report, presumably to avoid hurting vested interests.

DGPSI-AI is a framework that extends the DGPSI framework meant for DPDPA compliance. The DGPSI framework is guided by 12 principles and 50 implementation specifications that cover compliance with DPDPA 2023, ITA 2000 (for DPDPA-protected data, or DPD), and relevant aspects of the Consumer Protection Act, the Telecom Act, BNS, BSA and the BIS draft guidelines on Personal Data Governance and Protection. DGPSI-AI is an extension for the AI environment within the Data Fiduciary environment and consists of six principles and nine implementation specifications. We shall now discuss only this framework.

In the next week or so, we are conducting an open virtual conference on DGPSI and DGPSI-AI, and the Committee members are invited to participate in a detailed discussion where we will highlight how DGPSI covers the ISO 27701:2025 requirements completely. The Committee and MeitY should understand that these frameworks can singularly replace the need for compliance with a host of ISO standards, which are only best-practice standards and not meant for DPDPA compliance. In the AI scenario too, the IAIG refers to 26 ISO standards, indirectly hinting that Indian companies have to look to them for their compliance. We want this mindset of depending only on ISO standards, to the exclusion of any attempt to develop Indian standards, to be changed. Let us leave behind the colonial mindset that anything which comes from the West is good, and have confidence in Mr Modi’s Made in India concept.

Let us now look at the Six Principles of DGPSI AI depicted below.

We leave the detailed discussion to the book, which is available on Kindle (it should be available in print directly from Naavi, or when the publisher White Falcon Publishing wakes up from its deep slumber).

Now let us look at the Seven Sutras of IAIG.

DGPSI-AI is built on Accountability, Explainability, Responsibility and Ethics, which are meant to build the “Trust” that is the declared objective of the IAIG. Explainability in DGPSI-AI can also be mapped to “Understanding by Design” in the IAIG. Accountability is covered in both. Safety in the IAIG is covered under Security in DGPSI-AI. The Resilience and Sustainability and People First concepts of the IAIG are covered under the principle of Ethics in DGPSI-AI.

Where the IAIG tilts towards industry interests is in the concept of “Innovation over Restraint”. DGPSI-AI prefers “Innovation with Restraint” and flags the “unknown risk” associated with AI usage as a “significant risk”. By explaining the restraint concept as “all things being equal, responsible innovation is prioritized over cautionary restraint”, the IAIG tilts towards industry benefits rather than being data-principal centric.

Since, under DPDPA, the Data Fiduciary is a trustee of the Data Principal, it cannot prioritize industry benefits over risks to the data principal. Hence DGPSI-AI is a shade better than the IAIG in this respect.

Your comments are welcome… More comments will follow.

Naavi


The 26 ISO standards referred to by the India AI Guidelines

While our Prime Minister Mr Narendra Modi swears by “Made in India”, the reference to 26 ISO standards in the AI guidelines indicates that MeitY is not concerned with such indigenization of industry efforts.

This was pointed out by the undersigned once before, in 2011, when the Section 43A rules were notified and MeitY became a marketing agency for the ISO 27001 standard. The same mindset is reflected in these guidelines, where, instead of developing our own standard for compliance, we are still interested in promoting only ISO standards.

I wish the committee had recognized that there are already efforts to develop indigenous compliance frameworks such as DGPSI, the Data Governance and Protection Standard of India, which is better than ISO 27701:2025. The extended version, DGPSI-AI, has suggested compliance measures for DPDPA which include all the recommendations that this guideline suggests (as will be specifically pointed out in subsequent articles). But the research of the Committee, or the pressure to support non-indigenous standards, must have outweighed other considerations and suppressed any reference to DGPSI or similar efforts.

I am disappointed that the Chairman, Dr Balaraman, did not make a reference to the DGPSI and DGPSI-AI frameworks while finalizing the report.

DGPSI-AI could be considered a single indigenous framework that is good enough for compliance in place of the 26 or more ISO standards pointed out in the report, and it would therefore have hurt the interests of the promoters of ISO standards. Perhaps these forces worked to suppress information on the existence of the DGPSI framework, which is a boon for SMEs wanting to remain compliant at an affordable cost.

This would be considered as a failure of the Committee.

P.S.: I am aware that my comments could create an adverse backlash against me, but the truth has to be told. If such committees headed by academic persons exhibit the influence of vested interests, it has to be called out. Naavi will not be Naavi without pointing out such deficiencies in the system. My advice to the committee members is “Be Indian and encourage Made in India efforts”. It is not enough to call the mission the “India AI Mission” and the guidelines the “India AI Guidelines”. The Indianness has to reflect in the actions and not remain in the name. If DGPSI-AI did not merit a mention even on page 56, under “Types of Voluntary Frameworks”, it only indicates that the Committee did not conduct proper literature research, or that the information was deliberately suppressed by vested interests.

Naavi


Data Input BIAS indicated in the AI Guidelines report?

The November 5 guidelines issued by MeitY on AI are meant to unlock AI’s benefits for growth, inclusion, and competitiveness, while safeguarding against risks to individuals and society.

The PIB press note indicates that “These are envisioned as a foundational reference for policymakers, researchers, and industry to foster greater national and international cooperation for safe, responsible, and inclusive AI adoption”.

In other words, this document is not a standard or requirement. It is a background document for further action.

Part 4 of the guidelines contains “Practical Guidelines for the Industry”. It states that any person developing or deploying AI systems in India should be guided by the following:

  1. Comply with all Indian laws and regulations, including but not limited to laws relating to information technology, data protection, copyright, consumer protection, offences against women, children, and other vulnerable groups that may apply to AI systems
  2. Demonstrate compliance with applicable laws and regulations when called upon to do so by relevant agencies or sectoral regulators.
  3. Adopt voluntary measures (principles, codes, and standards), including with respect to privacy and security; fairness, inclusivity; non-discrimination; transparency; and other technical and organisational measures.
  4. Create a grievance redressal mechanism to enable reporting of AI-related harms and ensure resolution of such issues within a reasonable timeframe.
  5. Publish transparency reports that evaluate the risk of harm to individuals and society in the Indian context. If they contain any sensitive or proprietary information, the reports should be shared confidentially with relevant regulators
  6. Explore the use of techno-legal solutions to mitigate the risks of AI, including privacy-enhancing technologies, machine unlearning capabilities, algorithmic auditing systems, and automated bias detection mechanisms.
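Recommendation 6’s “automated bias detection mechanisms” can be illustrated with a small sketch. As a hedged example, this uses a demographic-parity gap, one common fairness metric among many; the group labels, sample data and the 0.2 review threshold are invented for illustration and are not part of the guidelines.

```python
from collections import defaultdict


def selection_rates(outcomes: list) -> dict:
    """outcomes: (group, decision) pairs, where decision 1 = favourable.
    Returns the favourable-outcome rate per group."""
    totals: dict = defaultdict(int)
    favourable: dict = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favourable[group] += decision
    return {g: favourable[g] / totals[g] for g in totals}


def demographic_parity_gap(outcomes: list) -> float:
    """Largest difference in favourable-outcome rate between any two groups;
    0.0 means all groups are treated identically on this metric."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())


# Hypothetical decisions: group A is favoured 2/3 of the time, B only 1/3.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(decisions)
print(round(gap, 3))  # 0.333
print(gap > 0.2)      # True: flags the model for review under a 0.2 threshold
```

In practice a deployer would run such a check continuously over logged decisions and route flagged results into the grievance and audit processes the guidelines describe.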

It is heartening to note that FDPPI’s framework for compliance of DPDPA in the AI environment already incorporates all these suggestions.

I wish the drafting committee had read the book “Taming the Twin Challenges of DPDPA and AI with DGPSI-AI”. This book is now available in e-book form from Amazon. It was released in pre-print version during IDPS 2025 at Bengaluru on September 17. I am not sure whether the committee members were unaware of this framework or chose to deliberately suppress the information from the report. I urge the members to at least read the framework now and compare the six principles, along with the nine implementation specifications for deployers and the 13 implementation specifications for developers.

It is possible that the research of the committee was inadequate and that they follow neither www.naavi.org nor LinkedIn. Had they done so, they would have known the recommendations contained in this book and could at least have made a reference to the document in the report.

It is also possible that the last meeting of the committee was held well before September 17, and hence the members of the committee were unaware of the book at that point of time.

It may also be true that the Committee did not want to share any credit and wanted to showcase the report as an original recommendation.

It is therefore apt to say that “data input bias” was evident in the development of the report itself.

However, we will set the bias aside and try to correct it through a series of articles highlighting how DGPSI-AI compares with the seven Sutras and with the recommendations related to policy and regulation, risk mitigation, and the suggested action plans, particularly for the industry.

Watch this space for more….

Naavi


AI Governance Guidelines from GOI

On November 5, a report containing AI governance guidelines was released by MeitY with the declared objective of providing a foundational reference for policymakers, researchers, and industry to foster greater national and international cooperation for safe, responsible, and inclusive AI adoption.

The guidelines have been drafted by a high-level committee under the chairmanship of Prof. Balaraman Ravindran, IIT Madras, comprising policy experts including Shri Abhishek Singh, Additional Secretary, MeitY; Ms. Debjani Ghosh, Distinguished Fellow, NITI Aayog; Dr. Kalika Bali, Senior Principal Researcher, Microsoft Research India; Mr. Rahul Matthan, Partner, Trilegal; Mr. Amlan Mohanty, Non-Resident Fellow, NITI Aayog; Mr. Sharad Sharma, Co-founder, iSPIRT Foundation; Ms. Kavita Bhatia, Scientist ‘G’ & GC, MeitY & COO IndiaAI Mission; Mr. Abhishek Aggarwal, Scientist D, MeitY & Mr. Avinash Agarwal, DDG (IR), DoT, Ms. Shreeppriya Gopalakrishnan, DGM, IndiaAI.

The guidelines will be analysed by FDPPI’s AI Chair, and its comments will be provided here.

In September this year, FDPPI released DGPSI-AI as a framework for DPDPA compliance, covering the recommended industry approach to DPDPA compliance where AI is used by a Data Fiduciary. This framework, which is an extension of the DGPSI framework, also covers the requirements of AI developers and Agentic AI users.

It would be interesting to look at this framework in the light of the guidelines now released.

Watch out for more articles on the guidelines….

Copy of the report can be accessed here:
