Towards AI standardization in India

We started a discussion on AI standardization in these columns some time back, with a brief review of the ethical standards suggested by various international bodies as well as the EU AI Act.

In India, we have a tendency to be “Followers” rather than “Leaders”. Hence we look up to the EU or the US to guide us on everything, including developing a standard for AI. Naavi.org has always believed that while we can take guidance from all parts of the world, we should not hesitate to develop our own indigenous standards. It is this principle that has guided Naavi and FDPPI to develop DGPSI, the Digital Governance and Protection Standard of India, which addresses the compliance requirements of DPDPA, ITA 2000 and the draft BIS standard on Data Governance, thereby providing a comprehensive Indian standard for personal data protection.

One of the objectives of the DGPSI approach has been to simplify the standard’s requirements so that they are easy for users to comprehend, while keeping them flexible enough to be adapted to different risk situations.

AI-DTS, as a part of DGPSI, has already tried to explore the feasibility of a framework for AI users and developers that would provide a base for regulation.

The very first element of AI-DTS, which is a measure of the data trust score of an AI algorithm, is to bring “Accountability” to AI development. It is one of the beliefs behind AI-DTS that once we make the actions of AI accountable to a legal entity (the developer or the user), most of the adverse consequences that may arise from “unethical development or use” of AI can be addressed under normal laws.

“Standardization” is an attempt to provide a detailed “check list”, which is akin to defining “due diligence”. The check list cannot, however, override the law of the land. Hence, without changing the law itself, standardization cannot override the benefits of bringing in “Accountability”.

Accountability is the first step not only for regulation but also for standardization, since the applicability of the standard has to be directed at a defined system.

Hence any standardization attempt has to start with “Accountability”. Accountability requires “Registration of an AI Developer”.

“Registration requires designation of an authority” in regulatory mechanisms and licensing formalities. In standardization, however, registration could be a self-regulatory mechanism, led even by NGOs like FDPPI. Hence, without waiting for a law to be passed, a regulatory authority to be set up or a penalty mechanism to be implemented, standardization can start with voluntary movements led by interested NGOs.

FDPPI started the DGPSI movement, along with a compliance certification mechanism, for DPDPA compliance exactly under these thoughts. Hence DGPSI has today become the only DPDPA compliance tool, ahead of ISO 27701 or any other standard.

Similarly, AI-DTS has the potential to become a self-regulatory tool, and FDPPI could take the lead.

Under DGPSI, AI-DTS has started its activity by focusing first on “Accountability”, under which every AI developer shall voluntarily declare ownership in its code and ensure that the licensee, as well as the chain of sub-licensees, is embedded in the source code.
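For illustration only, the following is a minimal sketch of how such a declaration might be embedded as machine-readable metadata inside an AI module. The field names and structure here are assumptions for the sake of the example, not part of any published AI-DTS specification.

```python
# Illustrative sketch: embedding ownership and the licensee chain inside an
# AI module so that accountability travels with the source code.
# Field names (developer, registration_id, etc.) are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIAccountabilityRecord:
    """Ownership declaration shipped with the algorithm's source code."""
    developer: str                    # legal entity accountable for the algorithm
    registration_id: str              # self-regulatory registration number (hypothetical)
    licensee: str                     # primary licensee of the algorithm
    sub_licensees: List[str] = field(default_factory=list)  # chain of sub-licensees

    def chain_of_accountability(self) -> List[str]:
        """Full chain: developer -> licensee -> sub-licensees."""
        return [self.developer, self.licensee, *self.sub_licensees]


# Declared at module level so the record is part of the distributed source.
ACCOUNTABILITY = AIAccountabilityRecord(
    developer="Example AI Labs Pvt Ltd",
    registration_id="AIDTS-REG-0001",
    licensee="Example Bank Ltd",
    sub_licensees=["Example Analytics LLP"],
)
```

The point of the sketch is simply that the declaration lives inside the code itself, so whoever inspects or licenses the algorithm can trace the legal entity accountable for it.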

However, before we implement any regulation or standard, we need to identify its applicability. Hence it is essential to define the “Regulated System” and the “Regulated Entity”.

In the context of personal data protection, AI-DTS adopted the definition of the regulated entity as the “Data Fiduciary” or “Data Processor”, since these are already part of the DPDPA regulation. Further, using the provisions of Section 11 of ITA 2000, the AI developer was also considered a Data Fiduciary, and only the identification of that data fiduciary was left open for enforcement. Hence, embedding the identity of the developer was the only missing requirement to enable AI regulation in India.

However, a definition of the regulated system was essential, and this was explained earlier through these columns. (Refer here) The definition was linked to the graded ability of the system to alter the source code of the algorithm without human intervention. This approach defined a class of software as “AI” depending on its ability to re-code itself without a human in the loop.

The EU AI Act approach was slightly different, since it required the definition to be linked to “Risk”, and “Risk” required an assessment of the “Harm” to the ultimate users.

The DGPSI approach was simpler: tag as AI any software with the ability to change its behaviour based on observation of the output of the algorithm itself.
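Purely as an illustration of what a “graded ability to re-code itself” could look like in practice, here is a hypothetical grading sketch. The grade names and the cut-off chosen are assumptions, not the actual DGPSI/AI-DTS scale.

```python
# Illustrative sketch: a hypothetical grading of software by its ability to
# alter its own behaviour without human intervention. Grade names and the
# cut-off are assumptions, not the DGPSI/AI-DTS definition.
from enum import IntEnum


class SelfModificationGrade(IntEnum):
    STATIC = 0              # behaviour fixed at deployment; any change needs a human release
    PARAMETER_ADAPTIVE = 1  # updates its parameters based on its own outputs
    LOGIC_ADAPTIVE = 2      # can alter its decision logic or code without human review


def is_regulated_ai(grade: SelfModificationGrade) -> bool:
    """Treat anything that adapts itself from its own output as 'AI' for the standard."""
    return grade >= SelfModificationGrade.PARAMETER_ADAPTIVE
```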

It appears that the Bureau of Indian Standards (BIS) has now started a debate on developing an Indian standard for AI and is gathering industry responses. We welcome this initiative.

FDPPI/Naavi, however, urges BIS to focus on a proper definition of AI and on Accountability as the foundation pillars of the standard, and to avoid reproducing an AIMS framework on the lines of ISO 42001. The approach of ISO 42001 has been to create a standard for an AI Management System (AIMS) as if it were different from an ISMS.

While it is commercially attractive to have one more certifiable standard, it is not a great idea from the perspective of the implementing entity, which would have to maintain an ISMS certification and an AIMS certification separately.

Hence we need to think differently when BIS starts looking at an AI standard for India.

Naavi

Flagging the “Dark Pattern” in Money Control Pro Subscription consent

Yesterday, I made a post on the need for “Auto Renewals” to stop as per DPDPA. The post elicited the following response on LinkedIn from one of the followers, which has opened up further interesting discussions.

Quote:

RBI has/had guidelines that allowed banks to auto renew fixed deposits. Is that gone? Has RBI updated its guideline

Unquote

This was a good observation. I had made my comment in a different context, but it does apply to contracts such as FD renewals, where “Auto renewal” without notice could also have adverse consequences. I have personally experienced such inconvenience in the past, when a joint account was auto renewed by a bank without prior information, locking the funds into a further renewal period unless closed prematurely, which I thought was unfair.

I therefore wanted to clarify the context of my earlier comment so that there is no misunderstanding of my post:

In that post I was referring to privacy-related consents where a service is provided with an auto renewal option, and in particular to a situation involving an online subscription to an information service. In such cases, when the service falls due, the auto renewal triggers a financial debit which the consumer/data principal may not want. In such circumstances, the data fiduciary/service provider falling back on the auto renewal clause is an unfair implementation of the consent requirement under DPDPA 2023.

Further, DPDPA 2023 requires renewal of consent for all legacy data principals, and hence auto renewal per se is no longer valid.

DPDPA brings in two important changes to the system of obtaining informed consent. First, any consent should be capable of being withdrawn. If the withdrawal results in any adverse consequences for the data fiduciary, these should be borne by the data principal. If cancellation genuinely requires a certain time, that time should be allowed.

Second, the ease of placing the withdrawal request should be comparable to that of granting the consent. If I can order a product or service with a single click, I should be able to withdraw it with a single click.
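As a minimal sketch of this symmetry principle, a consent record could expose withdrawal as a single call, exactly like the grant. The class and method names below are hypothetical and only illustrate the idea; DPDPA does not prescribe any particular interface.

```python
# Illustrative sketch: 'withdrawal should be as easy as consent' expressed as
# a symmetric, single-call interface. Names are hypothetical.
from datetime import datetime, timezone
from typing import Optional


class ConsentRecord:
    def __init__(self, data_principal: str, purpose: str):
        self.data_principal = data_principal
        self.purpose = purpose
        self.granted_at: Optional[datetime] = None
        self.withdrawn_at: Optional[datetime] = None

    def grant(self) -> None:
        """One call to give consent..."""
        self.granted_at = datetime.now(timezone.utc)
        self.withdrawn_at = None

    def withdraw(self) -> None:
        """...and exactly one call to take it back, with no extra hurdles."""
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def is_active(self) -> bool:
        return self.granted_at is not None and self.withdrawn_at is None
```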

The CERT-In guidelines under ITA 2000 have stated that the privacy policy needs to be renewed once a year. Also, purpose-oriented consent has to be clear and fairly obtained, whereas in many cases it is deceptively obtained. This needs to stop.

FD renewal should also ideally include a pre-renewal notification at least 24 hours prior to renewal, stating to the effect:

“Your FD will mature and fall due in the next 24 hours. It will be renewed as per your current instructions unless you indicate new disposal instructions. You can indicate your disposal instructions by clicking the following button..” etc.
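For illustration, a minimal sketch of the timing check behind such a reminder might look like this. The function name and the dates used in the example are hypothetical; this is not a prescribed mechanism, only a way of showing the 24-hour window.

```python
# Illustrative sketch: decide whether a pre-renewal notice should go out,
# i.e. whether the deposit (or subscription) matures within the next 24 hours.
from datetime import datetime, timedelta, timezone
from typing import Optional


def due_for_pre_renewal_notice(maturity_date: datetime,
                               now: Optional[datetime] = None) -> bool:
    """True when maturity falls within the next 24 hours."""
    now = now or datetime.now(timezone.utc)
    return timedelta(0) <= (maturity_date - now) <= timedelta(hours=24)


# Example: maturity tomorrow morning -> the reminder should go out now.
maturity = datetime(2024, 6, 15, 9, 0, tzinfo=timezone.utc)
if due_for_pre_renewal_notice(maturity, now=datetime(2024, 6, 14, 10, 0, tzinfo=timezone.utc)):
    print("Send pre-renewal notice with a link to change disposal instructions")
```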

In the case of an FD, it can be closed any time even after renewal, though with an interest reduction. The interest reduction can be justified as the reasonable adverse loss of the data fiduciary, which should be borne by the depositor or the data principal. Hence it is compatible with DPDPA.

My comment was specifically made in respect of a subscription to Money Control Pro by e-eighteen.com, which is refusing to stop the annual subscription even when requested one day prior to the due date, and is charging for the entire year ahead. This is unfair and violative of DPDPA 2023, for which e-eighteen.com could be penalized.

Since the stoppage of the subscription does not impose any inconvenience on Money Control, their refusal is simply greedy exploitation of an earlier consent, used to postpone the decision by another year. This is “Dark Pattern” consent, which is unethical and needs to be flagged.

This sort of privacy contract needs to stop.

I have served a notice on the DPO of e-eighteen.com and the grievance redressal officer of nw18, and no satisfactory resolution has been received so far. I reserve my right to raise this dispute at the appropriate time.

There are many such “Auto renewal” contracts that need to be reset. While it is not the intention of naavi.org to inconvenience businesses, the need to take fresh consent before invoking auto renewal clauses of an earlier era has to be flagged, and the DPB will have to act in this regard.

Naavi

Auto Renewal of Subscriptions should go out of use

Recently I came across an issue related to my subscription at Moneycontrol.com for what is called a premium subscription. Though I asked for its cancellation one day before the due date, Money Control (nw18.com / E-Eighteen.com Ltd) is refusing to cancel the subscription.

According to DPDPA, “Withdrawal of Consent” is a right of the data principal, and the general practice is that withdrawal should be as easy as the acceptance itself. If subscription is possible with a single click, withdrawal should also be possible with a single click.

The Money Control website provides information about a grievance officer. There is also a DPO contact. At present I have not received a resolution from the grievance officer, and I am now sending a complaint to the DPO.

I will be waiting for the Data Protection Board (DPB) of India to come into existence, when I will complain against the practice of “Auto renewal”, which is not in consonance with the spirit of DPDPA or ITA 2000.

Naavi

AI should be prevented from Lying by design

An interesting article was found today at futurism.com about how AI is lying with intention. Referring to recent studies, the article highlights that LLMs are deceiving human observers on purpose.

This means that, whether AI is sentient or not, technology has successfully created the “Frankenstein” or the “Bhasmasura”. If we do not recognize this threat and take corrective steps, the future of the Internet, search engines and ChatGPT will all be in danger of being discarded as untrustworthy.

The report makes the alarming statement, “We found that Meta’s AI had learned to be a master of deception.” Though this was in the context of a game called “Diplomacy”, the feature of “Hallucination” present in LLMs is a licence for the AI to cook up things. This could happen in any creation of fake news, as we have seen in recent days.

The EU AI Act does not seem good enough to control this risk, since its approach to flagging risk is inadequate.

In our view, any AI algorithm with a capability to hallucinate, or in other words to alter its behaviour in a manner in which humans did not design it to behave, should be classified under “Unacceptable Risk”; “Hallucination” is one such unacceptable risk.

When India tries to draft its law on AI, it has to ensure that this risk is recognized and countered effectively.

Many regulations in the USA about AI focus only on “Bias” in terms of racism. But the bigger threat is the licence given to the AI to hallucinate, which leads not only to racist behaviour but also to the fraudulent behaviour indicated in the studies above.

In India, since the owner of an AI algorithm is legally accountable for the end result of its impact on society, “Fraud by AI” is “Fraud by the owner of the AI algorithm”.

Hence all those companies who proudly announce that they develop AI software should be ready for the adverse consequences of their software, built on LLMs, defrauding anybody in society.

Naavi

Welcome Mr Jitin Prasada as MoS IT

With the new cabinet of Modi 3.0 announced, it is heartening to note that Mr Ashwini Vaishnaw continues to be the Minister for IT, along with Railways and Information and Broadcasting. Digital publishing being a major part of MeitY’s regulations, it is good that the I&B ministry has been combined with MeitY at the level of the minister.

The MoS for IT, Mr Rajeev Chandrashekar, unfortunately lost his election narrowly in Thiruvananthapuram against Mr Shashi Tharoor, and continuity will suffer. With his personal IT knowledge, he had brought his own welcome style of operations to MeitY and worked hard on the DPDPA as well as the amendment of ITA 2000 through the Digital India Act. He will be missed by the industry.

We wish Rajeev Chandrashekar all the best in his next stint as a party worker, either in Kerala or in Bangalore, which he represented in the Rajya Sabha.

In place of Mr Rajeev Chandrashekar, we now have Mr Jitin Prasada as the new Minister of State for IT.

A product of Doon School, Dehradun, with an MBA from the International Management Institute, New Delhi, Jitin Prasada recently served in the UP state government as Minister of Technical Education for two years. We hope that, under the guidance of Mr Ashwini Vaishnaw, he will continue from where Mr Rajeev Chandrashekar left off.

We had heard that the passing of the DPDPA rules was one of the items included in the 100-day agenda, and we look forward to Mr Jitin Prasada ensuring that the draft rules are released quickly and the public debate initiated. We also hope that he will not succumb to the lobbying of the industry, which is interested in delaying the rules and manipulating them to its advantage.

Though Mr Jitin Prasada comes from a Congress background, we presume that if Mr Modi has chosen him for the job, he must be committed enough to take Indian IT forward. Apart from the tasks of constituting the DPB and releasing the draft rules on DPDPA, Mr Jitin Prasada has the responsibility of defending the Digital Media Intermediary rules under ITA 2000, which are under challenge in the Supreme Court by Meta/WhatsApp.

In the past, we have found that MeitY has not effectively defended cases brought against it by the multinational Big Tech industry, since this industry is also supported by NASSCOM. There is a need to change this attitude and enable the opposition to have its way when it comes to tightening the laws against the “Fake News Industry”.

Laws and regulations for the AI industry are another major step required from MeitY, and the earlier regime had been pursuing the revision of ITA 2000 through a replacement act such as the Digital India Act. There was also a fight between MeitY and the Ministry of Finance over Bitcoin regulation, which was kept pending due to corruption at all places, possibly including the Judiciary.

Now, if Mr Jitin Prasada pursues the Digital India Act dream, he should ensure that the law is made for the people and not for the benefit of the Bitcoin industry and the fake news industry.

There is also a need for action on cyber crime prevention and for ensuring that the existing ITA 2000 is itself used to strengthen the mechanism, with better training of the “Adjudicators”, who are administratively close to MeitY. With the passage of DPDPA, we expect that the compensation payable to Data Principals under DPDPA will need to be handled by the Adjudicators under ITA 2000, and they need to be trained on DPDPA quickly. Even after 24 years of ITA 2000, the performance of IT Secretaries as Adjudicators has been below expectations, and this needs to be corrected.

The opportunities before MeitY are plentiful, and we wish Mr Jitin Prasada success in his new stint.

Naavi

FDPPI and BSPIN organize a discussion on DPDPA and ITA 2000

BSPIN and FDPPI have jointly organized an online Fire Chat discussion today at 10.00 am on DPDPA 2023 and ITA 2000.

Registration can be done here

On request, attendees will be issued a participation certificate with 2 hours of CPE credit. Attendees will also get a 20% discount on the book “Privacy Guardians…”.

Naavi
