Is there a strategic need for segregation of Ethics while defining AI Standards?

In India we are today discussing both the regulation of AI and the standardization of AI at the same time. Just as the EU AI Act is a regulation while ISO 42001 is a standard, BIS is discussing AI standardization while ITA 2000 and DPDPA 2023 already represent the regulation.

The Bureau of Indian Standards (BIS), under one of its sectional committees (LITD 30), has arranged a webinar on 21st June 2024 on the topic “Artificial Intelligence-Standardization Landscape”, which may present the current status of the standardization initiatives related to AI in India.

The objective of standardization is set out as ensuring “safe, secure, and ethical development and deployment of AI systems”, and the webinar is meant to “sensitize the stakeholders”. The webinar is a very brief event and is likely to have only the limited objective of announcing the initiative.

It may be observed that the objective of standardization is meant to include “ethical” development and deployment as part of technical standardization. This could mean that the standard may wade into the domain of regulation.

Currently there are a few standard documents, such as IS 38507:2022 on “Governance implications of the use of AI by organizations” and IS 24368:2022 on “Artificial Intelligence-Overview of Ethical and Societal Concerns”, which address AI standardization in India from a governance point of view. There are at least 10 other standards on different technical aspects of AI.

At the same time, there is already ITA 2000, which regulates automated systems (including what may now be defined as Artificial Intelligence systems), and DPDPA 2023, which regulates automated decision making in the domain of personal information processing. The systems used by Data Fiduciaries and Data Processors under DPDPA 2023 will certainly include AI systems, and hence any published standards will affect the activities of the Data Fiduciaries and Processors.

Hence any new initiative in standardization needs to ensure that, where its specifications overlap with these regulations, they do not clash with the legal requirements covered under them.

Industry very often confuses “Regulatory Compliance” with “Adherence to Standards”. Since customers of organizations often refer to “industry best practices” and specify the need to adhere to standards, Indian companies tend to prioritize standardization over regulatory compliance.

It is important that the users of the standards appreciate that compliance with law is mandatory while compliance with standards is a business decision. Compliance with law should therefore always come as the first priority, and standardization is only a step towards compliance.

Organizations like BIS have a responsibility to ensure that all their standardization documents record that standardization is not a replacement for regulation but is subordinate to it. Non-conformance with law could lead to penalties, whereas non-conformance with standards is a matter of business negotiation.

In the past, there has been an awkward attempt by vested interests to influence law-making so that rules are drafted in such a manner that the industry is deliberately misled into believing that “the standard is also part of the law”. There has also been an attempt by standardization organizations to mislead the market into wrongly believing that “adhering to a standard is deemed compliance with law”.

Naavi has been forced to call out such attempts in the past and may do so again if required.

This conflicting situation between standardization and regulation is unwarranted. It should be avoided by keeping “safe and secure development and deployment” of AI, along with building compatibility of usage across multiple technology platforms and sectors, as the objective of standardization, and leaving “ethical development and deployment” as the responsibility of regulation.

It is my belief that this approach of segregating objectives between BIS and MeitY (Cyber Law and Data Protection Division) would also ensure that standards remain more generic than they are today. If not, in future there will be a need for one AI standard for fintech, another for healthcare, another for hospitality and so on, leading to a proliferation of standards, which is the bane of ISO standards.

The fact that ISO standards run into the thousands is not a matter to be proud of. It is an indication of how complicated compliance with ISO standards has become from the perspective of the industry, even though auditors and educators are happy to have multiple business opportunities. The situation is more acute in the domain of IT, since “data” is the building block of the IT industry and one type of data has hundreds of applications; any attempt to standardize processing will therefore require hundreds of ways of processing. Generic standards are therefore essential to make standardization in IT easily acceptable to multiple types of data processors.

Naavi has consistently tried to address this issue by introducing a “Unified Framework of Compliance” to reduce the burden of compliance on the industry. “Compliance without Pain” is the motto followed by Naavi and has been ingrained in frameworks like DGPSI.

When I look at the composition of the technical committees of BIS which draft IT-related standards, I get a feeling that there is a shortfall in the members’ exposure to ITA 2000 and DPDPA 2023. This could result in the final versions of the standard missing legal issues. There is also representation of too many industry sectors, which could result in the final draft trying to accommodate sector-specific requirements and vested interests rather than remaining neutral and generic. I hope this will be suitably taken care of.

I hope these comments are considered by BIS in the right spirit as they go forward with the standardization of AI.

I look forward to a healthy and constructive debate on these comments.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.