Towards AI standardization in India

We started a discussion on AI standardization in these columns some time back, with a brief review of the ethical standards suggested by various international bodies as well as the EU-AI Act.

In India, we have a tendency to be “Followers” rather than “Leaders”. Hence we look up to the EU or the US to guide us on everything, including the development of a standard for AI. Naavi.org has always believed that while we can take guidance from all parts of the world, we should not hesitate to develop our own indigenous standards. It is this principle that guided Naavi and FDPPI in developing DGPSI, the Digital Governance and Protection Standard of India, which addresses the compliance requirements of the DPDPA, ITA 2000 and the draft BIS standard on Data Governance to provide a comprehensive Indian standard for personal data protection.

One of the objectives of the DGPSI approach has been to simplify the standard's requirements so that they are easy for users to comprehend, while keeping the standard flexible enough to be adapted to different risk situations.

AI-DTS, a part of DGPSI, has already explored the feasibility of bringing in a framework for AI users and developers that would provide a base for regulation.

The very first goal of AI-DTS, which is a measure of the data trust score of an AI algorithm, is to bring “Accountability” to AI development. It is one of the beliefs behind AI-DTS that once we make the actions of an AI accountable to a legal entity (the developer or the user), most of the adverse consequences that may arise from the “unethical development or use” of AI can be addressed under normal laws.
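
To make the idea concrete, here is a purely illustrative sketch of how a data trust score could be computed as a weighted combination of assessment components, with accountability given the largest weight. The component names and weights are hypothetical assumptions for illustration; the actual AI-DTS scoring methodology is not described in this article.

```python
# Purely illustrative sketch of a "data trust score" style computation.
# Component names and weights are hypothetical, not the AI-DTS methodology.

HYPOTHETICAL_WEIGHTS = {
    "accountability": 0.4,  # is a legal entity (developer/user) identifiable?
    "transparency": 0.3,    # is the development/use of the AI documented?
    "compliance": 0.3,      # are DPDPA/ITA 2000 obligations being met?
}


def data_trust_score(assessments: dict) -> float:
    """Combine per-component assessments (each 0.0 to 1.0) into a 0-100 score."""
    return 100 * sum(
        HYPOTHETICAL_WEIGHTS[name] * value for name, value in assessments.items()
    )


print(data_trust_score({"accountability": 1.0, "transparency": 0.5, "compliance": 0.8}))
# -> 79.0
```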

“Standardization” is an attempt to provide a detailed “Check list”, which amounts to defining “Due Diligence”. The “Check list” cannot, however, override the law of the land. Hence, without changing the law itself, standardization cannot override the benefit of bringing in “Accountability”.

Accountability is the first step not only for regulation but also for standardization, since the applicability of the standard has to be directed at a defined system.

Hence any standardization attempt has to start with “Accountability”, and Accountability requires the “Registration of an AI Developer”.

In regulatory mechanisms and licensing formalities, “registration requires the designation of an authority”. In standardization, registration could be a self-regulatory mechanism, led even by NGOs such as FDPPI. Hence, without waiting for a law to be passed, a regulatory authority to be set up, or a penalty mechanism to be implemented, standardization can start with voluntary movements led by interested NGOs.

It is exactly with these thoughts that FDPPI started the DGPSI movement, along with a compliance certification mechanism, for DPDPA compliance. DGPSI has thereby become the only DPDPA compliance tool today, ahead of ISO 27701 or any other standard.

Similarly, AI-DTS has the potential to become a self-regulatory tool, and FDPPI could take the lead.

Under DGPSI, AI-DTS has started its activity by focusing first on “Accountability”, under which every AI developer shall voluntarily declare ownership in its code and ensure that the licensee, as well as the chain of sub-licensees, is embedded into the source code. A sketch of what such an embedded declaration could look like is given below.
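
The following is a minimal sketch of such an embedded declaration. The field names, registration identifier and entity names are hypothetical illustrations, not taken from any published AI-DTS specification.

```python
# Illustrative accountability declaration embedded in an AI system's source.
# All names and identifiers below are hypothetical placeholders.

AI_ACCOUNTABILITY = {
    "developer": "Example AI Labs Pvt Ltd",    # legal entity accountable for the algorithm
    "registration_id": "FDPPI-AIDTS-0001",     # hypothetical self-regulatory registration number
    "licensee": "Example Deployer Ltd",        # first-level licensee of the algorithm
    "sub_licensees": [                         # chain of sub-licensees, in order of grant
        "Example Integrator LLP",
        "Example End-User Services Ltd",
    ],
    "declared_on": "2024-06-01",
}


def accountability_chain() -> list:
    """Return the full chain of accountable entities, developer first."""
    return [
        AI_ACCOUNTABILITY["developer"],
        AI_ACCOUNTABILITY["licensee"],
        *AI_ACCOUNTABILITY["sub_licensees"],
    ]


if __name__ == "__main__":
    print(" -> ".join(accountability_chain()))
```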

However, before implementing any regulation or standard, one needs to identify its applicability. Hence it is essential to define the “Regulated System” and the “Regulated Entity”.

In the context of personal data protection, AI-DTS adopted the “Data Fiduciary” or “Data Processor” as the regulated entity, since these are already defined in the DPDPA. Further, using the provisions of Section 11 of ITA 2000, the AI developer was also considered a Data Fiduciary, leaving only the identification of the data fiduciary as the open question for enforcement. Hence, embedding the identity of the developer was the only missing requirement to enable AI regulation in India.

However, a definition of the regulated system was also essential, and this was explained earlier in these columns. (Refer here) The definition was linked to the graded ability of the system to alter the source code of the algorithm without human intervention. This approach classified a piece of software as “AI” depending on its ability to re-code itself without a human in the loop.

The EU-AI Act approach was slightly different, since it linked the definition to “Risk”, and “Risk” required an assessment of the “Harm” to the ultimate users.

The DGPSI approach was simpler: tag software as AI based on its ability to change its behaviour by observing the output of its own algorithms. A minimal sketch of such a graded tagging scheme is given below.
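
As a minimal sketch, the graded definition could be expressed along the following lines. The grade labels and the tagging threshold are hypothetical illustrations, not taken from the DGPSI documentation.

```python
# Illustrative grading of software by its ability to alter its own behaviour.
# Grade names and the threshold are hypothetical, not from DGPSI/AI-DTS.

from enum import IntEnum


class SelfModificationGrade(IntEnum):
    """Graded ability of a system to alter its own algorithm (illustrative)."""
    STATIC = 0          # behaviour is fixed; any change needs a human to re-code it
    SELF_ADJUSTING = 1  # changes behaviour by observing the output of its own algorithms
    SELF_MODIFYING = 2  # can alter its own source code without human intervention


def is_tagged_as_ai(grade: SelfModificationGrade) -> bool:
    """Under this illustrative scheme, any self-adjusting software is tagged as AI."""
    return grade >= SelfModificationGrade.SELF_ADJUSTING


print(is_tagged_as_ai(SelfModificationGrade.SELF_ADJUSTING))  # True
print(is_tagged_as_ai(SelfModificationGrade.STATIC))          # False
```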

It appears that the Bureau of Indian Standards (BIS) has now started a debate on developing an Indian standard for AI and is trying to gather industry responses. We welcome this initiative.

FDPPI/Naavi, however, urges BIS to focus on a proper definition of AI and on Accountability as the foundational pillars of the standard, and to avoid reproducing an AIMS (Artificial Intelligence Management System) on the lines of ISO 42001. The approach of ISO 42001 has been to create a standard for an AIMS as if it were different from an ISMS.

While it is commercially attractive to have one more certifiable standard, it is not a great idea from the perspective of the implementing entity, which would then have to maintain an ISMS certification and an AIMS certification separately.

Hence we need to think differently when BIS starts looking at an AI standard for India.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.