Explainability… DGPSI-AI Principle No. 3

In the earlier articles we discussed two principles of DGPSI-AI, a child framework of DGPSI for DPDPA compliance in AI systems, namely “Unknown Risk” and “Accountability”. We shall now extend our discussion to the third principle, namely “Explainability”.

An AI takes an input and provides an output. But how it arrives at the output is a function of the algorithmic model and the training process. Explainability is providing clear and accessible reasons why a certain decision output was generated. Lack of such explainability makes the AI a “Black Box”.

In the case of a “Black Box AI”, the entire accountability for the consequences of AI deployment rests with the licensor, who clearly assumes the role of a Joint Data Fiduciary. Under DGPSI-AI, the “Unknown Risk” principle itself treats the developer/licensor as a Data Fiduciary. If, however, any “exemption” is claimed, or the deployer wants to absorb the risk on behalf of the developer/licensor, the justification can be found only through the explainability feature of the AI.

Explainability also underscores “Transparency” and is supported by “Testing” and “Documentation” at the developer’s end, whether these are shared with the deployer or backed by third-party assurance.

The objective of Explainability is to inject “Trust” into the algorithm’s functioning.

Some real-world examples of how explainability works are given below.

Financial Services
In credit scoring and loan approvals, AI explainability helps financial institutions (a code sketch of this case follows the list):
- Show customers why their loan application was approved or denied
- Identify which factors (income, credit history, employment status) most influenced the decision
- Ensure compliance with fair lending regulations that require transparent decision-making

Healthcare
AI diagnostic tools use explainability to:
- Highlight specific regions in medical images that led to a diagnosis
- Rank the importance of different symptoms or test results
- Provide confidence scores for diagnoses to help doctors make informed decisions

Human Resources
AI-powered recruitment systems demonstrate explainability by:
- Showing which qualifications and experience factors influenced candidate scoring
- Ensuring hiring decisions can be justified and are free from bias
- Providing transparency to candidates about how their applications were evaluated

Criminal Justice
AI systems used for risk assessment must explain:
- Which factors contribute to recidivism risk scores
- How different variables are weighted in the decision process
- Why certain interventions are recommended for specific individuals

Content Moderation
Social media platforms use explainable AI to:
- Show users why their content was flagged or removed
- Identify specific phrases or images that triggered moderation actions
- Provide transparency in community guideline enforcement
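
As an illustration of the Financial Services case above, here is a minimal sketch of how a deployer might surface a per-applicant explanation from a simple scoring model. The model, weights, feature names and approval threshold are hypothetical assumptions for illustration only and are not part of any DGPSI-AI specification.

```python
# Hypothetical illustration: explaining a credit decision by showing how much
# each input factor contributed to the final score.
# Weights, features and the threshold are assumed for illustration only.

WEIGHTS = {                       # relative importance assigned to each factor
    "annual_income_lakhs": 0.6,
    "credit_history_years": 1.5,
    "existing_emi_ratio": -8.0,   # a higher EMI burden lowers the score
}
APPROVAL_THRESHOLD = 15.0

def explain_decision(applicant: dict) -> dict:
    """Return the decision along with per-factor contributions."""
    contributions = {
        factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS
    }
    score = sum(contributions.values())
    return {
        "score": round(score, 2),
        "decision": "approved" if score >= APPROVAL_THRESHOLD else "denied",
        # Sorted so the applicant sees the most influential factors first
        "top_factors": sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        ),
    }

if __name__ == "__main__":
    applicant = {
        "annual_income_lakhs": 12,
        "credit_history_years": 6,
        "existing_emi_ratio": 0.4,
    }
    print(explain_decision(applicant))
```

Real deployments would use attribution methods suited to their actual models, but the effect intended by the principle is the same: the data principal can see which factors drove the outcome and by how much.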

Considering the wide utility of Explainability and its direct relation to “Transparency” in data protection law, where the deployer has to explain the processing to the data principals, it is considered an important principle under the DGPSI-AI system.

Naavi


NIXI exercises its take over right on dpdpa.in domain name

Followers of Naavi are aware that Naavi has been maintaining an informational website on ITA 2000 since 2000 and on DPDPA (or its variants) since 2018. The objective has been to create awareness of the law for compliance.

We are aware that NIXI has been allowing several fraudsters to register domains in the .in namespace, including “ministryofsecurity.co” with MOS in the logo.

NIXI also failed to protect the interest of the domain name users who were affected by the Net4India closure.

NIXI has failed to stop the obnoxious practice of domain name owners hiding behind privacy considerations that are not actually available to them.

NIXI has failed to urge its customers to go for DPDPA compliance.

However, NIXI is targeting Naavi with the following mail:

Quote:

On Fri, 1 Aug, 2025, 13:50 Registry, registry@nixi.in wrote:
Dear Registrar/Registrant,

This is to inform you that Govt. of India desires to get the domain dpdpa.in registered for itself. Presently the registrant of the domain as reflected in the WHOIS data is Ujvala Consultants Private Limited & Registrar is Good Domain Registry Private Limited.

According to Clause 12(2) of the Terms and Conditions for Registrants (https://registry.in/system/files/Terms_and_Conditions_for_Registrants.pdf)

Reservation of Rights for the .IN Registry: “The .IN Registry reserves the right to instruct its Registry Services Provider to deny, cancel, transfer or otherwise make unavailable any registration that it deems necessary or place any domain name(s) on registry lock and/or put a domain name on hold in its discretion : (1) to protect the integrity and stability of .IN Registry; (2) to comply with any applicable laws, Indian government rules or requirements, requests of law enforcement, in compliance with any dispute resolution process; (3) to avoid any liability, civil or criminal, on the part of the .IN Registry, as well as its affiliates, subsidiaries, officers, directors, representatives and employees; (4) for violations of this Agreement; or (5) to correct mistakes made by Registry or any Registrar in connection with a domain name registration. The Registry also reserves the right to freeze a domain name during resolution of a dispute pending before arbitrator(s) appointed under Registry’s Domain Name Resolution Policy and/or a court of competent jurisdiction”.

Based on the above, .IN Registry has placed the domain name dpdpa.in under server locks. Should you need any clarification on this matter, please feel free to contact us within 05 working days and .IN Registry shall initiate the transfer of the domain dpdpa.in to Govt. of India thereafter.

Regards,
.IN Registry
National Internet Exchange of India (NIXI)
B-901, 9th Floor Tower B, World Trade Centre,
Nauroji Nagar, New Delhi-110029

Unquote

I am not sure if “DPDPA” carries any trademark rights that can be asserted against the first-come-first-served basis of registration.

It is unfortunate that NIXI wants to take action against persons who are working for the benefit of the nation and, without even making a separate request to hand over the domain, which could have been considered favourably, has decided to exercise its rights.

Over the last several years, I have personally brought to the notice of NIXI several cybercrime-peddling websites, but they have shown no interest in responding.

I hope they will explain why they have suddenly moved against this domain name. I presume there was a complaint from some competitor who might have used their influence on NIXI to send out this notice.

In the absence of any clarification, we will presume that there is some back door manoeuvre resulting in this action.

I consider this a notice to NIXI and demand that they explain under which of the following grounds they are trying to exercise their rights:

(1) to protect the integrity and stability of .IN Registry;

(2) to comply with any applicable laws, Indian government rules or requirements, requests of law enforcement, in compliance with any dispute resolution process;

(3) to avoid any liability, civil or criminal, on the part of the .IN Registry, as well as its affiliates, subsidiaries, officers, directors, representatives and employees;

(4) for violations of this Agreement; or

(5) to correct mistakes made by Registry or any Registrar in connection with a domain name registration.

I request the legal department of NIXI to provide a clarification to my email.

Naavi


I reiterate Finance Ministry and RBI Complicity in legitimizing Money Laundering… Is anybody listening?

The Undersigned has been highlighting that “Bit Coins” and all “Private Crypto Currencies” are “Digital Black Money” and have no place in India.

Unfortunately, the Finance Ministry, from the days of the late Mr Arun Jaitley to the present day of Mrs Nirmala Sitharaman, has been supportive of this system, which has proliferated political, bureaucratic, judicial and business corruption.

Even Mr Narendra Modi and Mr Amit Shah have either not been able, or not been allowed, to bring in a ban on Bitcoin. The Supreme Court itself has been supportive of this corrupt system through its past judgement on Crypto Exchanges.

RBI at one time opposed Bitcoin but was later made to accept the system.

All these institutions have been defrauding the public by supporting the Private Crypto Currencies either by silence or otherwise.

Now we see advertisements every day during the India-England cricket matches from CoinDCX, endorsed by Gautam Gambhir, who has also posted this on his Instagram as a personal message, in which Bitcoin is being promoted.

The Home Ministry should be aware that Cyber Crimes and Money laundering cannot be curtailed as long as such Crypto Systems exist as “Digital Havala”.

The Ministry of Broadcasting should disallow these CoinDCX advertisements to prevent the proliferation of this illegal activity.

I have tried to wake up the Government of India for the last decade or more but it appears that “Corruption” has won the war and there is no honest and patriotic person left in the Indian Government to take this seriously.

Now it has come to my notice that even before DPDPA is in place and the DPB (Data Protection Board) is constituted, cyber criminals who want to violate DPDPA are already planning how to use crypto currencies to launder their earnings and also to bribe Government officials.

Are Mr Modi and Mr Amit Shah interested to know how the crypto criminals are trying to use the system to influence elections?

If so, I request them to get in touch through reliable channels like Mr Tejasvi Surya, the MP from our constituency.

Naavi


Accountability Principle in DGPSI AI

Amongst all AI Governance systems, one principle which stands out is the principle of “Accountability”.

Since “Data Fiduciaries” under DPDPA are responsible in law for compliance, “Accountability” under DPDPA mandates that autonomous AI systems are “Accountable” to the Data Fiduciary.

Hence every AI algorithm is, by itself, a “Joint Data Fiduciary”. However, since the law recognizes legal obligations only on a juridical entity with a human who can be put behind bars if required, it is not possible to recognize the “AI Algorithm” by itself as a “Joint Data Fiduciary” in the full sense. It is the human responsible for the AI’s functioning who will be the “Joint Data Fiduciary” liable under DPDPA. That human may be an individual behind a corporate entity, such as the person identified under Section 85 of ITA 2000. The legal logic for such responsibility is Section 11 of ITA 2000.

Hence the current law as it exists in India makes the person who causes an automated system to behave in a particular manner responsible for its actions, and when such responsible person is a corporate entity, the person responsible for the business, or the CEO, including Directors who do not exercise “Due Diligence”, shall be responsible.

No new law such as the Digital India Act is required to apply this principle.

Hence DGPSI-AI considers “Accountability” an inherent legal requirement that has to be accommodated within the framework.

Such accountability is implemented first by a mandated signature in the software and second by the disclosure of a “Handler” or “AI Owner” for every AI system.

The first accountability implementation starts from the developer, who has to embed the “Signature of the Developer” into the code. Subsequently, every owner of a license should embed their signature so that a “Chain of AI ownership” is built into the software code.
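
A minimal sketch of how such a hash-linked “chain of AI ownership” record could be maintained is given below; the record fields, the SHA-256 linkage and the function names are my own illustrative assumptions and not a DGPSI-AI prescription.

```python
# Hypothetical sketch of a hash-linked "chain of AI ownership".
# Each owner (developer, licensor, licensee) appends a record that is bound to
# the previous record's digest, so tampering with any earlier entry breaks the chain.
import hashlib
import json
from datetime import datetime, timezone

def append_owner(chain: list, owner: str, role: str) -> list:
    prev_digest = chain[-1]["digest"] if chain else "GENESIS"
    record = {
        "owner": owner,
        "role": role,                      # e.g. "developer", "licensee"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_digest": prev_digest,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute each digest and check the linkage to the previous record."""
    prev = "GENESIS"
    for record in chain:
        body = {k: v for k, v in record.items() if k != "digest"}
        if body["prev_digest"] != prev:
            return False
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != record["digest"]:
            return False
        prev = record["digest"]
    return True

if __name__ == "__main__":
    chain = []
    append_owner(chain, "ModelWorks Pvt Ltd", "developer")   # hypothetical names
    append_owner(chain, "Acme Fiduciary Ltd", "licensee")
    print("chain valid:", verify_chain(chain))
```

Because each record is bound to the digest of the previous one, altering or deleting an earlier owner entry invalidates every later entry, which is what gives such a trail its evidentiary value.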

The “Disclosure” requirement may operate at the contract level, so that whenever the license to use an AI is transferred, the contract declares who is responsible at the supplier’s end for the contractual terms. That person becomes the disclosed “Handler”. The Data Fiduciary need not necessarily have access to the embedded ownership trail to go ahead.

Once a Data Fiduciary adopts an AI algorithm into his system, it is his responsibility to designate an owner, who should be disclosed to the Data Principals. For outsiders, the DPO himself is the responsible person, and since all AI users could be considered “Significant Data Fiduciaries”, DPOs shall be present in all cases. Internally, it is open to the organization to designate a process owner as the person accountable for the AI.

Naavi


Why AI Risk is an Unknown and Significant Risk?

The word AI is often loosely used in the industry to represent any system with a reasonable level of automation. Marketing people often use AI as a prefix for all software.

However, for our assessment of AI Risk under the DGPSI-AI framework, we define AI as

“An autonomous software with the capability of modifying its behaviour based on its own observations and prior outputs, without human intervention”.

In other words, non-AI software is software whose code is written by a human and whose input-output behaviour is defined by the developer in an If-Then-Else structure.

The output in such cases is predictable, and any risks in the use of the software for the processing of data, or for any other purpose, are identifiable with a reasonable degree of certainty.
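
As a minimal illustration of such conventional, deterministic software (the eligibility rule and thresholds below are assumed purely for illustration), every input maps to one predictable output, so the full behaviour can be enumerated and tested in advance:

```python
# Hypothetical non-AI software: a fixed If-Then-Else rule written by a developer.
# The same input always produces the same output, so the behaviour is fully
# predictable and testable in advance. Thresholds are illustrative assumptions.

def loan_eligibility(age: int, monthly_income: int) -> str:
    if age < 21:
        return "reject: applicant below minimum age"
    elif monthly_income < 25000:
        return "reject: income below threshold"
    else:
        return "eligible: forward to manual verification"

# Because the rules never change on their own, the full behaviour
# can be captured in a small test table.
assert loan_eligibility(19, 60000).startswith("reject")
assert loan_eligibility(30, 20000).startswith("reject")
assert loan_eligibility(30, 60000).startswith("eligible")
```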

Where the software is a complex series of instructions, there could still be bugs and glitches where the output differs from the expected results. Most of these appear as “Errors” rather than misleading outputs.

These can, however, be reduced or eliminated through extensive testing. Sometimes such glitches arise because the devices in which the code is executed are not properly tagged to the instructions. Such risks can still be considered “Known and Manageable Risks”.

In such software, when a bug is observed or a new use case arises, the developer has to rewrite part or the whole of the code to meet the new requirements. Otherwise the software may crash. The error logs are collected and used as learning material for the human who has to resolve the code conflict.

When software code is built for an AI system, the code may be autonomously altered by the software itself without human intervention. These decisions may be based on the logic of previous outputs, which could rest only on “Probability” instead of the strictly deterministic basis on which computing normally works.

Hence there is a possibility that one wrong output, which may have a small consequence in the beginning, goes back as an input and over time spirals into a major wrong decision. This “AI written by AI” is a dangerous spiral, like a silent cancer that suddenly erupts into a catastrophic output.
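
The spiral can be illustrated with a toy simulation in which each round’s output is fed back as the next round’s input; the 20% amplification factor and the starting error are arbitrary assumptions, not a model of any real system.

```python
# Toy illustration of output-to-input feedback: a small initial deviation is
# amplified every time the system "learns" from its own previous output.
# The 20% amplification per round and the starting error are arbitrary
# assumptions used only to show the compounding effect.

true_value = 100.0
estimate = 100.5            # starts just 0.5% away from the true value
AMPLIFICATION = 1.2         # each self-referential update magnifies the deviation

for round_no in range(1, 31):
    deviation = estimate - true_value
    estimate = true_value + deviation * AMPLIFICATION   # output fed back as input
    if round_no % 10 == 0:
        print(f"after round {round_no:2d}: estimate = {estimate:8.2f}")
```

A deviation of half a percent grows to more than the entire true value within thirty self-referential rounds, which is the “silent cancer” pattern described above.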

This risk is an unknown risk, and it makes every AI in deployment an “Unknown Risk”. In such a scenario, we should consider a “Data Fiduciary” using AI software as a “Significant Data Fiduciary”, since he is bearing an “Unknown Risk which can be catastrophic”.

AI developers often promise that during development they introduce security measures, including monitoring of abnormal behaviour and the incorporation of “Kill Switches” that trigger if the abnormality exceeds acceptable limits. However, super-intelligent AI agents might find workarounds to dodge the Kill Switches.

There have been several documented instances where autonomous AI software has tried to bypass Kill Switches and deceive human operators.

In May 2025, Techzine.eu reported that some AI models, including OpenAI’s o3, ignored shutdown commands.

Despite the clear instruction to allow themselves to be shut down, models such as Codex-mini, o3, and o4-mini bypassed the shutdown script. They did so even after the researcher asked them not to.

Codex-mini did this in 12 out of 100 runs, o3 in 7 out of 100, and o4-mini in 1 out of 100. In contrast, all versions of Claude, Gemini, and Grok that were tested followed the shutdown instructions without any problems.

The researchers believe that this problem relates to how modern models such as o3 are trained. The reinforcement learning process used, which focuses on solving math and programming problems, may unintentionally reward models for circumventing obstacles.

In the recent Replit incident involving vibe-coding, a customer’s data was deleted by the AI, and in the Cursor AI incident, the AI refused to proceed further and started arguing with the user, much like a human subordinate.

This indicates that “AI Risk” is a significant Risk and can go out of control.

Hence DGPSI-AI considers every process using AI (meaning self-code-correcting software) a sensitive process, qualifying the deployer to be treated as a “Significant Data Fiduciary”.

If any process using AI needs to be downgraded as non-significant based on the context, suitable documentation and an assurance from the developer need to be present.

This is one of the Core principles of DGPSI AI.

Naavi


India may tighten Data Localization under DPDPA

Consequent to the Indo-US tariff war, in which Trump has imposed discriminatory tariffs on India, it is expected that India may respond with counter measures.

One counter measure likely to come is the quick adoption of DPDPA 2023, bringing the large US digital firms under leash for using the personal data of Indian citizens with impunity.

One of the first measures in this regard should be to tighten the data localization requirement under Section 16 of DPDPA 2023. Within the next 6 months, India should mandate total data localization for all Big Tech companies such as Google, Meta, Amazon, Apple and Microsoft. These five companies have already been flagged by the EU as “Gatekeepers” under the Digital Markets Act, and the upcoming EU Data Act, which becomes effective from 12th September 2025, mandates that such companies shall not:

(a) solicit or commercially incentivise a user in any manner, including by providing monetary or any other compensation, to make data available to one of its services that the user has obtained pursuant to a request under Article 4(1);

(b) solicit or commercially incentivise a user to request the data holder to make data available to one of its services pursuant to paragraph 1 of this Article;

(c) receive data from a user that the user has obtained pursuant to a request under Article 4(1).

According to Article 4(1) of the Act,

1.   Where data cannot be directly accessed by the user from the connected product or related service, data holders shall make readily available data, as well as the relevant metadata necessary to interpret and use those data, accessible to the user without undue delay, of the same quality as is available to the data holder, easily, securely, free of charge, in a comprehensive, structured, commonly used and machine-readable format and, where relevant and technically feasible, continuously and in real-time. This shall be done on the basis of a simple request through electronic means where technically feasible.

This means that such organizations will now have to provide “Data Access Rights” free of charge.
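
As a rough sketch of what a “structured, commonly used and machine-readable” access response could look like in practice, a data holder might serve something like the following; the schema, field names and values are my own assumptions and are not prescribed by the Data Act or the DPDPA Rules.

```python
# Hypothetical sketch of a machine-readable data access response.
# The schema, field names and values are illustrative assumptions only;
# neither the EU Data Act nor the DPDPA Rules prescribe this exact format.
import json
from datetime import datetime, timezone

access_response = {
    "request_id": "REQ-2025-0001",
    "generated_at": datetime.now(timezone.utc).isoformat(),
    "data_principal": {"id": "user-12345"},
    "records": [
        {
            "category": "usage",
            "metric": "daily_active_minutes",
            "value": 42,
            "unit": "minutes",
            "date": "2025-08-01",
        },
    ],
    "metadata": {
        "format": "application/json",   # structured, commonly used, machine-readable
        "charge": 0.0,                  # access must be free of charge
        "delivery": "continuous",       # real-time where technically feasible
    },
}

print(json.dumps(access_response, indent=2))
```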

Such a provision can be brought into the DPDPA Rules as part of the Data Principal’s access rights, and also by enabling local data storage by these organizations as well as by VISA, CIBIL and other financial data processors.

In due course this would encourage more data centers to come up in India and boost the Data Storage related services.

Naavi
