“Unknown Risk” is “Significant Risk”

Data Fiduciaries who are deploying AI products for Personal Data Processing need to take note that Rule 12 of the DPDPA Rules expects that

“(3) A Significant Data Fiduciary shall observe due diligence to verify that algorithmic software deployed by it for hosting, display, uploading, modification, publishing, transmission, storage, updating or sharing of personal data processed by it are not likely to pose a risk to the rights of Data Principals.”

While some data fiduciaries may take comfort in the fact that this relates only to “Significant Data Fiduciaries” and not others, the determination of which data fiduciary is a “Significant Data Fiduciary” may itself require an assessment of the “Sensitivity” of the processing and the harm likely to be caused to the data principal.

The Officer of MeitY designated for this purpose may declare certain classes of data fiduciaries, or specific data fiduciaries, as “Significant Data Fiduciaries”. However, a data fiduciary that assumes it is exempt merely because the designated official has not declared its category as “Significant Data Fiduciaries” may not be fully correct.

The need to assess the risk of processing still lies with the data fiduciary, since it is a “Fiduciary” and not merely a “Controller”. It is the responsibility of every data fiduciary to carry out a self-evaluation of its processes and document why it is not a significant data fiduciary.

In this context, deployers of AI face a unique challenge. If they are using an open-source AI, it is their responsibility to understand the risk and declare whether there is a high risk to a data principal. If, however, they are unaware of the algorithm’s code, they need to depend on the provider of the algorithm.

Due diligence in this regard means that the data fiduciary obtains an assurance, along with an indemnity, from the provider and includes it in the contract. Alternatively, the provider should be designated a “Joint Data Fiduciary” so that the responsibility for compliance also falls on the provider.

In the context of proprietary algorithms, since the deployer is unaware of how the algorithm processes the personal data, the risk is not quantifiable. In such a case, the data fiduciary should presume that the “Unknown Risk” could be a high risk, and the processing therefore renders it a “Significant Data Fiduciary”.

In other words, deployers of all proprietary AI algorithms need to be automatically tagged as “Significant Data Fiduciaries”. If the use of AI is ubiquitous, then a large number of Data Fiduciaries will be Significant Data Fiduciaries.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.