Is there a strategic need for segregation of Ethics while defining AI Standards?

In India we are today discussing both regulation of AI and standardization of AI at the same time. Just as the EU AI Act is a regulation while ISO 42001 is a standard, BIS is discussing AI standardization while ITA 2000 and DPDPA 2023 already represent the regulation.

The Bureau of Indian Standards (BIS), under one of its sectional committees (LITD 30), has arranged a webinar on 21st June 2024 on the topic “Artificial Intelligence: Standardization Landscape”, which may present the current status of the standardization initiatives related to AI in India.

The objectives of standardization are set out as ensuring “safe, secure, and ethical development and deployment of AI systems”, and the webinar is meant to “sensitize the stakeholders”. The webinar is a very brief event and is likely to have only the limited objective of announcing the initiative.

It may be observed that the objectives of standardization are meant to include “ethical” development and deployment as part of technical standardization. This could mean that the standard may wade into the domain of regulation.

Currently there are a few standard documents, such as IS 38507:2022 on “Governance implications of the use of AI by organizations” and IS 24368:2022 on “Artificial Intelligence: Overview of Ethical and Societal Concerns”, which address AI standardization in India from a governance point of view. There are at least 10 other standards on different technical aspects of AI.

At the same time, there is already ITA 2000, which regulates automated systems (including what may now be defined as artificial intelligence systems), and DPDPA 2023, which regulates automated decision making in the domain of personal information processing. The systems used by Data Fiduciaries and Data Processors under DPDPA 2023 will certainly include AI systems, and hence any published standards will affect the activities of the Data Fiduciaries and Processors.

Hence any new initiative in standardization needs to ensure that overlapping specifications do not clash with the legal requirements covered under the regulations.

Very often the industry confuses “Regulatory Compliance” with “Adherence to Standards”. Since customers of organizations often refer to “industry best practices” and specify the need to adhere to standards, Indian companies tend to prioritize standardization over regulatory compliance.

It is important that users of standards appreciate that compliance with law is mandatory while compliance with standards is a business decision. Compliance with law should therefore always come as the first priority, and standardization is only a step towards compliance.

Organizations like BIS have a responsibility to ensure that all their standardization documents record that standardization is not a replacement for regulation but is subordinate to it. Non-conformance with law could lead to penalties, while non-conformance with standards could be a matter of business negotiation.

In the past, there have been awkward attempts by vested interests to influence law making and draft rules in such a manner that the industry is deliberately misled into believing that the “standard is also part of the law”. There have also been attempts by standardization organizations to mislead the market into wrongly believing that “adhering to a standard is deemed compliance with law”.

Naavi has been forced to call out such attempts in the past and may do so again if required.

This conflict between standardization and regulation is unwarranted and should be avoided by keeping “safe and secure development and deployment” of AI, along with compatibility of usage across multiple technology platforms and sectors, as the objective of standardization, and leaving “ethical development and deployment” as the responsibility of regulation.

It is my belief that this segregation of objectives between BIS and MeitY (Cyber Law and Data Protection Division) would also ensure that standards are more generic than they are today. If not, in future there will be a need for one AI standard for fintech, another for healthcare, another for hospitality and so on, leading to a proliferation of standards, which is the bane of ISO standards.

The fact that ISO standards run into the thousands is not a matter to be proud of. It is an indication of how complicated the system of compliance with ISO standards is from the industry’s perspective, though auditors and educators are happy to have multiple business opportunities. This situation is more acute in the domain of IT, since “data” is the building block of the IT industry and there are hundreds of applications of one type of data; any attempt to standardize the processing would therefore require hundreds of ways of processing. Generic standards are essential to make standardization in IT easily acceptable to multiple types of data processors.

Naavi has consistently tried to address this issue by introducing a “Unified Framework of Compliance” to reduce the burden of compliance on the industry. “Compliance without Pain” is the motto followed by Naavi, which has been ingrained in frameworks like DGPSI.

When I look at the composition of the technical committees of BIS which draft IT-related standards, I get a feeling that there is a shortfall in the members’ exposure to ITA 2000 and DPDPA 2023. This could result in the final versions of the standard missing legal issues. There is also representation from too many industry sectors, which could result in the final draft trying to accommodate many sector-specific requirements and vested interests rather than being neutral and generic. I hope this will be suitably taken care of.

I hope these comments will be considered by BIS in the right spirit as they go forward with the standardization of AI.

I look forward to a healthy and constructive debate on these comments.

Naavi

Posted in Cyber Law | Leave a comment

Towards AI standardization in India

We started a discussion on AI standardization in these columns some time back, with a brief review of the ethical standards suggested by various international bodies as well as the EU AI Act.

In India, we have a tendency to be “followers” rather than “leaders”. Hence we look up to the EU or the US to guide us in everything, including developing a standard for AI. Naavi.org has always believed that while we can take guidance from all parts of the world, we should not hesitate to develop our own indigenous standards. It is this principle that has guided Naavi and FDPPI to develop DGPSI, the Digital Governance and Protection Standard of India, which addresses the compliance requirements of DPDPA, ITA 2000 and the draft BIS standard on Data Governance to provide a comprehensive Indian standard for personal data protection.

One of the objectives of the DGPSI approach has been to simplify the standard’s requirements, making them easy for users to comprehend and flexible enough to be adapted to different risk situations.

AI-DTS, a part of DGPSI, has already tried to examine the feasibility of bringing in a framework for AI users and developers that would start providing a base for regulation.

The very first element of AI-DTS, a measure of the data trust score for an AI algorithm, is to bring “accountability” to AI development. It is one of the beliefs of AI-DTS that once we make the actions of an AI accountable to a legal entity (the developer or the user), most of the adverse consequences that may arise from “unethical development or use” of AI can be addressed under normal laws.

“Standardization” is an attempt to provide a detailed “checklist”, which is like defining “due diligence”. The checklist cannot, however, override the law of the land, and hence, without changing the law itself, standardization cannot override the benefits of bringing in “accountability”.

Accountability is the first step not only for regulation but also for standardization, since the applicability of the standard has to be directed at a defined system.

Hence any standardization attempt has to start with “accountability”. Accountability requires registration of an AI developer.

In regulatory mechanisms, registration requires the designation of an authority and licensing formalities. In standardization, registration could be a self-regulatory mechanism, led even by NGOs like FDPPI. Hence, without waiting for a law to be passed, a regulatory authority to be set up, or a penalty mechanism to be implemented, standardization can start with voluntary movements led by interested NGOs.

FDPPI started the DGPSI movement, along with a compliance certification mechanism, with exactly these thoughts in mind for DPDPA compliance. Hence DGPSI has today become the only dedicated DPDPA compliance tool, ahead of ISO 27701 or any other standard.

Similarly, AI-DTS has the potential to become a self-regulatory tool, and FDPPI could take the lead.

Under DGPSI, AI-DTS has started its activity by focusing first on “accountability”, under which every AI developer shall voluntarily declare ownership in its code and ensure that the licensee, as well as the chain of sub-licensees, is embedded into the source code.
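As a minimal sketch of what such an embedded declaration might look like, the snippet below shows a provenance record carried inside an AI module, with a tamper-evident fingerprint. The field names (`owner`, `licensee`, `sub_licensees`) and the entity names are illustrative assumptions, not part of any published AI-DTS specification:

```python
import hashlib
import json

# Hypothetical provenance record embedded at the top of an AI module.
# All names below are illustrative placeholders.
AI_PROVENANCE = {
    "owner": "Example AI Labs Pvt Ltd",             # accountable developer
    "licensee": "First Licensee Ltd",               # direct licensee
    "sub_licensees": ["Reseller A", "Reseller B"],  # chain of sub-licensees
}

def provenance_fingerprint(record: dict) -> str:
    """Return a stable SHA-256 fingerprint of the provenance record,
    so any tampering with the declared chain becomes detectable."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def accountable_entity(record: dict) -> str:
    """The owner remains the accountable legal entity, however long
    the licensing chain grows."""
    return record["owner"]
```

The design intent is simply that accountability resolves to one legal entity (the owner), while the fingerprint lets an auditor verify that the declared chain has not been altered after release.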

However, before we implement any regulation or standard, we need to identify its applicability. Hence it is essential to define the “regulated system” and the “regulated entity”.

In the context of personal data protection, AI-DTS adopted the regulated-entity definition of “Data Fiduciary” or “Data Processor”, since it is already part of the DPDPA regulation. Also, using the provisions of Section 11 of ITA 2000, the AI developer was considered a Data Fiduciary, leaving only the identification of the data fiduciary for enforcement. Hence, embedding the identity of the developer was the only missing requirement to enable AI regulation in India.

However, a definition of the regulated system was essential, and this was explained earlier through these columns. (Refer here) The definition was linked to the graded ability of the system to alter the source code of the algorithm without human intervention. This approach redefined a class of software as “AI” depending on its ability to re-code itself without a human.

The EU-AI Act approach was slightly different since it required the definition to be linked to “Risk” and “Risk” required assessment of the “Harm” to the ultimate users.

The DGPSI approach was simpler: tag software that has the ability to change its behaviour based on observation of the output of the algorithm itself.
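The distinction described above can be sketched as a simple grading function. The grade labels and the two test criteria below are assumptions for discussion, not the published DGPSI text:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical profile of a software system for grading purposes."""
    adapts_from_own_output: bool   # changes behaviour by observing its own output
    human_approval_required: bool  # a human must approve each behavioural change

def classify(profile: SystemProfile) -> str:
    """Grade a system by its ability to alter its own behaviour
    without human intervention (illustrative labels only)."""
    if not profile.adapts_from_own_output:
        return "conventional software"      # outside the regulated definition
    if profile.human_approval_required:
        return "supervised adaptive system" # human in the loop retained
    return "regulated AI system"            # self-modifying, no human in the loop
```

On this reading, only the last category, where behaviour changes with no human in the loop, would attract the full weight of a standard or regulation, which is a simpler gate than the harm-based risk tiers of the EU AI Act.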

It appears that now the Bureau of Indian Standards (BIS) has started a debate towards developing an Indian standard for AI and is trying to gather industry responses. We welcome this initiative.

FDPPI/Naavi, however, urges BIS to focus on creating a proper definition of AI and on accountability as the foundation pillars of the standards, and to avoid reproducing an AIMS on the lines of ISO 42001. The approach of ISO 42001 has been to create a standard for an AIMS as if it were different from an ISMS.

While it is commercially good to have one more certifiable standard, it is not a great idea for the implementing entity, which would have to maintain an ISMS certification and an AIMS certification separately.

Hence we need to think differently when BIS starts looking at an AI standard for India.

Naavi


Flagging the “Dark Pattern” in Money Control Pro Subscription consent

Yesterday, I made a post on the need for “auto renewals” to stop as per DPDPA. The post elicited the following response on LinkedIn from one of the followers, which has opened up further interesting discussions.

Quote:

RBI has/had guidelines that allowed banks to auto-renew fixed deposits. Is that gone? Has RBI updated its guideline?

Unquote

This was a good observation. I had made my comment in a different context, but it does apply to contracts such as FD renewal, where “auto renewal” without notice could also have adverse consequences. I have personally experienced such inconvenience in the past, when a joint account was auto-renewed by a bank without prior information, locking the deposit into a further renewal period, which I thought was unfair.

I therefore wanted to clarify the context of my earlier comment so that there is no misunderstanding of my post:

In that post I was referring to privacy-related consents where a service is provided with an auto-renewal option, and in particular to a situation involving the online subscription of an information service. In such cases, when the renewal is due, the auto-renewal triggers a financial debit which the consumer/data principal may not want. In such circumstances, the data fiduciary/service provider falling back on the auto-renewal clause is an unfair implementation of the consent requirement under DPDPA 2023.

Further, DPDPA 2023 requires renewal of consent for all legacy data principals, and hence auto renewal per se is no longer valid.

DPDPA brings in two important changes to the system of obtaining informed consent. First, any consent should be capable of being withdrawn. If the withdrawal results in any adverse consequences to the data fiduciary, they should be borne by the data principal. If cancellation genuinely requires a certain amount of time, it should be allowed.

Second, the ease of placing the withdrawal request should be comparable to that of granting consent. If I can order a product or service with a single click, it should be withdrawable with a single click.

CERT-In guidelines under ITA 2000 have said that the privacy policy needs to be renewed once a year. Also, purpose-oriented consent has to be clear and fairly obtained, whereas in many cases it is deceptively obtained. This needs to stop.

An FD renewal should also ideally include a pre-auto-renewal notification at least 24 hours prior to renewal, stating to the effect:

“Your FD will mature and fall due in the next 24 hours. It will be renewed as per your current instructions unless you indicate new disposal instructions. You can indicate your disposal instructions by clicking the following button..” etc.
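The 24-hour pre-renewal notice suggested above amounts to a simple window check that a bank’s systems could run against each deposit. The function and field names below are hypothetical, a sketch of the idea rather than any bank’s actual implementation:

```python
from datetime import datetime, timedelta

def needs_renewal_notice(maturity: datetime, now: datetime,
                         notice_window: timedelta = timedelta(hours=24)) -> bool:
    """True when the deposit matures within the notice window, i.e. the
    depositor should now receive the pre-auto-renewal notification."""
    return now <= maturity <= now + notice_window

def renewal_notice(deposit_id: str) -> str:
    """Compose the notification text suggested in the post (illustrative)."""
    return (f"Deposit {deposit_id}: your FD will mature within the next 24 hours. "
            "It will be renewed as per your current instructions unless you "
            "indicate new disposal instructions.")
```

Run once a day (or more often), such a check ensures that no deposit rolls over silently: every auto-renewal is preceded by a notice the depositor can act on.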

In the case of an FD, it can be closed at any time even after renewal, though with an interest reduction. The interest reduction can be justified as the reasonable adverse loss of the data fiduciary, which should be borne by the depositor or the data principal. Hence it is compatible with DPDPA.

My comment was specifically made in respect of a subscription to Money Control Pro from e-eighteen.com, which is refusing to stop an annual subscription even when requested one day prior to the due date, and is charging for the entire year ahead. This is unfair and violative of DPDPA 2023, for which e-eighteen.com could be penalized.

While the stoppage of the subscription does not impose any inconvenience on Money Control, their refusal is just greedy exploitation of an earlier consent, an attempt to postpone the decision by another year. This is “dark pattern” consent, which is unethical and needs to be flagged.

This sort of privacy contract needs to stop.

I have served a notice on the DPO of e-eighteen.com and the grievance redressal officer of NW18, and no satisfactory resolution has been received so far. I reserve my right to raise this dispute at the appropriate time.

There are many such “auto renewal” contracts that need to be reset. While it is not the intention of naavi.org to inconvenience businesses, the need to take fresh consent before invoking auto-renewal clauses of an earlier era needs to be flagged, and the DPB will have to act in this regard.

Naavi


Auto Renewal of Subscriptions should go out of use

Recently I came across an issue related to my subscription at MoneyControl.com for what is called a premium subscription. Though I asked for its cancellation one day before its due date, Money Control (nw18.com / E-Eighteen.com Ltd) is refusing to cancel the subscription.

According to DPDPA, “Withdrawal of Consent” is a right of the data principal and the general practice is that withdrawal should be as easy as the acceptance itself. If subscription is possible on a single click, withdrawal should also be possible through a single click.

The Money Control website provides information on a grievance officer. There is also a DPO contact. At present I have not received a resolution from the grievance officer and am now sending a complaint to the DPO.

I will be waiting for the DPB of India to come into existence, when I will complain against the practice of “auto renewal”, which is not in consonance with the spirit of DPDPA or ITA 2000.

Naavi


AI should be prevented from Lying by design

An interesting article was found today at futurism.com about how AI is lying with intention. Referring to recent studies, the article highlights that LLMs are deceiving human observers on purpose.

This means that, whether AI is sentient or not, technology has successfully created the “Frankenstein” or the “Bhasmasura”. If we do not recognize this threat and take corrective steps, the future of the Internet, search engines and ChatGPT will all be in danger of being discarded as untrustworthy.

The report makes the alarming statement, “We found that Meta’s AI had learned to be a master of deception.” Though this was in the context of a game called “Diplomacy”, the feature of “hallucination” present in LLMs is a licence for the AI to cook up things. This could feed into fake news creation, as we have seen in recent days.

The EU AI Act does not seem good enough to control this risk, since its approach to the flagging of risk is inadequate.

In our view, any AI algorithm with the capability to hallucinate, or in other words to alter its behaviour in a manner in which humans did not design it to behave, should be classified as an “unacceptable risk”; “hallucination” is one such unacceptable risk.

When India tries to draft its law on AI, it has to ensure that this risk is recognized and countered effectively.

Many AI regulations in the USA focus only on “bias” in terms of racism. But the bigger threat is the licence given to the AI to hallucinate, which leads not only to racist behaviour but also to the fraudulent behaviour indicated in the studies above.

In India, since the owner of an AI algorithm is legally accountable for the end result of its impact on society, a “fraud by AI” is a “fraud by the owner of the AI algorithm”.

Hence all those companies who proudly announce that they develop AI software should be ready for the adverse consequences of their LLM-based software defrauding anybody in society.

Naavi


Welcome Mr Jitin Prasada as MoS IT

With the new cabinet of Modi 3.0 announced, it is heartening to note that Mr Ashwini Vaishnaw continues to be the minister for IT, along with Railways and Information & Broadcasting. Digital publishing being a major part of MeitY’s regulations, it is good that the I&B ministry has been combined with MeitY at the level of the minister.

The MoS for IT, Mr Rajeev Chandrashekar, unfortunately lost his election narrowly in Thiruvananthapuram against Mr Shashi Tharoor, and his continuity will be missed. With his personal IT knowledge, he had brought his own welcome style of operations to MeitY and worked hard on the DPDPA as well as the proposed replacement of ITA 2000 through the Digital India Act. He will be missed by the industry.

We wish Rajeev Chandrashekar all the best in his next stint as a party worker either in Kerala or in Bangalore which he represented in the Rajya Sabha.

In place of Mr Rajeev Chandrashekar, we now have Mr Jitin Prasada as the new Minister of State for IT.

A product of Doon School, Dehradun, and an MBA from the International Management Institute, New Delhi, Mr Jitin Prasada recently served for two years as Minister of Technical Education in the UP state government. We hope that under the guidance of Mr Ashwini Vaishnaw he will continue from where Mr Rajeev Chandrashekar left off.

We had heard that passing the rules of DPDPA was one of the items included in the 100-day agenda, and we look forward to Mr Jitin Prasada ensuring that the draft rules are released quickly and the public debate initiated. We also hope that he will not succumb to the lobbying of the industry, which is interested in delaying the rules and manipulating them to its advantage.

Though Mr Jitin Prasada comes from a Congress background, we presume that if Mr Modi has chosen him for the job, he must be committed enough to take Indian IT forward. Apart from the tasks of constituting the DPB and releasing the draft rules under DPDPA, Mr Jitin Prasada has the responsibility of defending the digital media intermediary rules under ITA 2000, which are under challenge in the Supreme Court by Meta/WhatsApp.

In the past we have found that MeitY has not effectively defended cases brought against it by the multinational Big Tech industry, since this industry is also supported by NASSCOM. There is a need to change this attitude and not let the opposition have its way when it comes to tightening the laws against the “fake news industry”.

Laws and regulations for the AI industry are another major step required from MeitY, and the earlier regime had been pursuing the revision of ITA 2000 through a replacement act, the Digital India Act. There was a fight between MeitY and the Ministry of Finance over Bitcoin regulation, which was also kept pending due to corruption in many places, possibly including the Judiciary.

Now, if Mr Jitin Prasada pursues the Digital India Act dream, he should ensure that the law is made for the people and not for the benefit of the Bitcoin industry and the fake news industry.

There is also a need to take action on cyber crime prevention and ensure that the existing ITA 2000 is itself used to strengthen the mechanism, with better training of the “Adjudicators” who are administratively close to MeitY. With the passage of DPDPA, we expect that the compensation payable to data principals under DPDPA will need to be handled by the Adjudicators under ITA 2000, and they need to be trained on DPDPA quickly. Even after 24 years of ITA 2000, the performance of IT Secretaries as Adjudicators has been below expectations, and this needs to be corrected.

The opportunities before MeitY are plenty, and we wish Mr Jitin Prasada success in his new stint.

Naavi
