AI industry needs to adopt Discipline

After the recent press reports that “Intermediaries” would be required to seek MeitY’s permission before deploying Generative AI solutions on public platforms, there was a spate of knee-jerk reactions from the industry.

One start-up founder reacted:

“I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF. We were training multimodal low cost pest and disease model, and so excited about it. This is terrible and demotivating after working 4 years full time bringing AI to this domain in India”

For one such positive contribution, we can show many negative contributions of AI. What will the start-up community say about “Gemini” having been deployed without proper tuning? Is it not the responsibility of the Government to put a check on the “irresponsible use of AI”?

Before jumping in to criticise the Government, one should seek clarification. MeitY has now issued a clarification about its advisory: it is applicable only to “Significant Intermediaries” such as Google, Instagram, etc.

So, the industry can relax: prior permission is not required for every use of AI in its activities. But if you want to place a “Frankenstein”, or a potential Frankenstein, before an unsuspecting public to be misled, there is a need for regulation.

Naavi.org has always advocated “Accountability” for all AI before even talking of Responsible AI, AI Ethics, Transparency, Explainability, Bias control, etc.

I repeat: every AI developer and AI deployer should identify themselves in any output generated by the AI. For example, if I have created an AI application X using an underlying algorithm Y created by a company Z, my disclosure in the output should be

“This is generated by X on algorithm Y created by Z”.
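As a toy sketch, such a disclosure could be attached programmatically by the deploying application. The function names and the separator below are purely illustrative assumptions, not any prescribed format:

```python
def build_disclosure(app: str, algorithm: str, creator: str) -> str:
    """Build the provenance disclosure line described above."""
    return f"This is generated by {app} on algorithm {algorithm} created by {creator}"


def tag_output(generated_text: str, app: str, algorithm: str, creator: str) -> str:
    """Append the disclosure line to a piece of AI-generated text."""
    return f"{generated_text}\n---\n{build_disclosure(app, algorithm, creator)}"
```

For example, `tag_output("Pest advisory ...", "X", "Y", "Z")` would append the exact disclosure line quoted above to the generated text.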

What Meity has suggested is a move in this direction.

Naavi.org and FDPPI have suggested another innovative method for regulation, which MeitY or even an NGO like FDPPI can implement: the “Registration of an AI before release”. In this system, Z will register his algorithm Y (ownership only, not the code) with the registrar and claim ownership, like a Copyright/Patent. When the algorithm is licensed to X, the registration system can be invoked by either X or Z to record the user. The disclosure can then be just a registration number, which can be steganographically embedded in the output, such as a deep fake video.
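As a minimal illustration of how a registration number could be hidden in media output, the sketch below uses a naive least-significant-bit (LSB) scheme over raw pixel bytes. This is not the actual Ujvala/FDPPI design: a real deployment would need a watermark robust against re-encoding and cropping, and the registration number format is an assumption.

```python
def embed_tag(pixels: bytearray, tag: str) -> bytearray:
    """Hide `tag` (preceded by a one-byte length) in the LSBs of `pixels`."""
    data = bytes([len(tag)]) + tag.encode("ascii")
    # Flatten the payload into individual bits, least significant bit first
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the least significant bit
    return out


def extract_tag(pixels: bytearray) -> str:
    """Recover the hidden tag by reading the LSBs back."""
    def read_byte(offset: int) -> int:
        return sum((pixels[offset + i] & 1) << i for i in range(8))

    length = read_byte(0)
    return bytes(read_byte(8 + 8 * j) for j in range(length)).decode("ascii")
```

Since only the lowest bit of each pixel byte changes, the embedded registration number is invisible to a casual viewer but recoverable by a verifier or auditor.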

This is the measure we suggested as the Ujvala innovation a few days back in these columns.

If AI is not regulated and accountability fixed, I anticipate that during this election time there will be a toolkit, funded by George Soros, to create chaos on social media. We may have to shut down X, WhatsApp, Instagram and YouTube temporarily unless they put in place the necessary controls. All these organizations are already considered “Significant Social Media Intermediaries” under ITA 2000 and “Significant Data Fiduciaries” under DPDPA 2023, and there is a legal framework for imposing discipline.

Genuine AI start-ups need to realize that they have a responsibility not to start a fight with the Government over such regulatory measures.

Naavi

Posted in Cyber Law | Leave a comment

Chakshu Portal launched for reporting spam calls

In a welcome move, the Government of India has introduced a new mechanism for reporting spam calls. The portal “Chakshu” has been opened as part of the Sanchar Sathi website. Chakshu is meant to be used by citizens to report suspected fraudulent communication; users can report numbers, messages and phishing attempts.

The Sanchar Sathi website also provides the following citizen-centric services:

  1. Block your lost or stolen mobile
  2. Know your Mobile Connections (to know how many SIM cards exist in the name of a person)
  3. IMEI verification
  4. Report incoming international call with Indian number
  5. Know your wireline ISP

Naavi


AI Sandbox required to prevent a new Toolkit of Fake News

India is a fertile ground for the misuse of AI through fake news creation and distribution. This is expected to grow many times over in the next few months, and there could be an international toolkit under development to use deep fake videos to disturb the electoral democracy of India.

The Government is hesitating to notify the DPDPA rules, which could bring agencies involved in the distribution of online news under rein.

In this phase we can expect AI-created deep fakes to proliferate through X, WhatsApp, Instagram and YouTube.

Simultaneously this will make any information on the Internet unreliable.

The challenge is therefore to identify what is to be accepted as credible information when it is presented online.

Apart from a notification that can be issued under ITA 2000 without any further change of law, we urge that, as part of the ethical use of AI, the following measures be initiated.

It is essential that every responsible AI developer incorporate code in the software so that a signature of the original developer and the licensee is embedded into any created image, video or text through a steganographic inscription which cannot be altered or destroyed.

MeitY’s attention is drawn to the need to notify this control under ITA 2000 immediately, before it is too late.

This “Genuinity Tag” should be embedded in all AI output and taken note of by genuine users and AI auditors as a necessary measure for AI-related compliance.

A regulatory agency or an NGO should take the responsibility to “register” genuine AI and issue certificates of reliability assurance as part of the AI algorithm audit.
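One way a certificate-backed genuinity tag could work is a keyed signature binding a registration number to the generated content. The sketch below uses HMAC-SHA256, under the assumption that the registrar issues each registered developer a secret key; the function names and tag format are illustrative, not the actual proposed system.

```python
import hashlib
import hmac


def make_genuinity_tag(reg_number: str, content: bytes, secret_key: bytes) -> str:
    """Return 'reg_number:mac' where the MAC authenticates the content."""
    mac = hmac.new(secret_key, reg_number.encode() + content, hashlib.sha256)
    return f"{reg_number}:{mac.hexdigest()}"


def verify_genuinity_tag(tag: str, content: bytes, secret_key: bytes) -> bool:
    """Recompute the MAC for the content and compare in constant time."""
    reg_number, _, mac_hex = tag.partition(":")
    expected = hmac.new(secret_key, reg_number.encode() + content, hashlib.sha256)
    return hmac.compare_digest(mac_hex, expected.hexdigest())
```

An auditor holding the registrar-issued key can then verify both who generated a piece of content and that the content has not been altered since the tag was issued.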

Ujvala Consultants Pvt Ltd is in the process of developing such a registration system leading to an AI-DTS evaluation.

(Watch out for more)

Naavi


Regulatory Sandbox of RBI and DPDPA

Yesterday, RBI also released a document, “Enabling Framework for Regulatory Sandbox”, which inter alia attracted the interest of data protection professionals because a reference was made to DPDPA.

RBI is a sectoral regulator, and how its regulations may overlap with DPDPA is being closely watched.

Under Section 16(2) of DPDPA, which deals with cross-border transfer of personal data, it is stated that…

“Nothing contained in this section shall restrict the applicability of any law for the time being in force in India that provides for a higher degree of protection for or restriction on transfer of personal data by a Data Fiduciary outside India in relation to any personal data or Data Fiduciary or class thereof”.

Since RBI already has some stricter regulations regarding transfer of data by its Regulated Entities (REs), which may be both personal and non-personal, it is understood that those regulations will remain.

Under Section 17(1)(b), certain provisions of Chapter II, Chapter III and Section 16 are not applicable to the processing “of personal data by … or any other body in India which is entrusted by law with the performance of any … regulatory or supervisory function, where such processing is necessary for the performance of such function”.

However, under the new framework for a Regulatory Sandbox for the Fintech industry, once a sandbox scheme is approved by RBI, the Fintech’s regulatory compliance will be supported through some relaxations by RBI.

However, the sandbox entity must process all the data in its possession or under its control with regard to Regulatory Sandbox testing in accordance with the provisions of the Digital Personal Data Protection Act, 2023. In this regard, the sandbox entity should have appropriate technical and organisational measures to ensure effective compliance with the provisions of the Act and the rules made thereunder. Further, the sandbox entity should ensure adequate safeguards to prevent any personal data breach.

In the event such start-ups are notified by MeitY under DPDPA, Section 5, Sections 8(3) and 8(7), and Sections 10 and 11 of the DPDPA may be exempted.

Sec 5 is “Notice”; Sec 8(3) is accuracy and updating where the data is used for disclosure or automated decision making; Sec 8(7) is data retention and erasure; Sec 10 is “Significant Data Fiduciary”; and Sec 11 is the right to access.

A start-up working inside an RBI sandbox and notified by MeitY will have the benefit of both the Section 17(3) exemptions listed above and the RBI exemptions provided under the notification.

The RBI notification reiterates that RBI will manage the Fintech regulations and MeitY will administer the DPDPA regulations. There is no other special impact of the RBI regulation on DPDPA.

There is, however, one observation: the RBI notification is currently applicable and recognizes the existence of DPDPA, though the Act is yet to be notified for effect. In a way, RBI is validating the effectiveness of DPDPA even today.

Naavi


RBI also refers to Climate Change Impact on Financial Risk

Yesterday, RBI issued a draft “Disclosure framework on Climate-related Financial Risks, 2024” applicable to Regulated Entities (REs). Comments/feedback, if any, may be sent by e-mail with the subject line “Comments on Disclosure framework on Climate-related Financial Risks, 2024”, by April 30, 2024.

The proposed policies will have a cascading impact on loan customers and hence are of interest to the industry as well.

The disclosures for REs will cover Governance, Strategy, Risk Management, and Metrics and Targets. Governance, Strategy and Risk Management may be rolled out from FY 2025-26 onwards for banks and the top layer of NBFCs; Metrics and Targets may be rolled out in the following year. For urban cooperative banks the roll-out may be deferred by an additional year, and for others the dates are yet to be announced.

Since the risks of banks and NBFCs are related to those of their customers, the REs will have to collect information from, and impose norms on, their customers in terms of not only Governance, Strategy and Risk Management, but also Metrics.

The discussion on AI and climate change appears relevant in this context, since customers who are users of AI may be required to disclose information to investors, who in turn have to submit the consolidated information to RBI.

Naavi


Climate Change Impact on ISO 42001

(Refer article in News18.com)

It is observed that sometime in 2023, ISO adopted the idea that standard developers should incorporate and demonstrate their concern for climate change while arriving at standards; ISO Guide 84:2020 had also been released on the subject. It appears that this requirement is now being added mechanically to all standards without justifying its relevance.

Accordingly, a standard like ISO 42001, meant as a requirements standard for an Artificial Intelligence Management System (AIMS), adds in clause 4.1 (Understanding the organization and its context) the component: “The organization shall determine whether climate change is a relevant issue”.

A company implementing or developing an AI system, looking at this document for guidance and possible certification, would wonder what its use of an AI algorithm has to do with “Climate Change”.

While we consider that this clause has crept into the standard through the blind implementation of a norm, without considering the proportionality of its impact, we still open this requirement up for debate in the context of some recent revelations about the climatic impact of AI systems, particularly those using LLMs.

LLMs are the first AI systems adopted by most companies, and hence the climatic impact of LLMs becomes a relevant consideration for ISO 42001 certification.

In the context of cryptocurrencies, we discussed how the energy requirements of Bitcoin/cryptocurrency mining could be detrimental to society (refer to the article: “Mr Piyush Goyal and Mr R K Singh… Do you know how much energy goes into Bitcoins?”). A similar concern has now surfaced over the consequential use of scarce water resources in the development of LLMs.

For example it is stated that

“A single LLM interaction may consume as much power as leaving a low-brightness LED lightbulb on for one hour.”—Alex de Vries, VU Amsterdam

If you go through the Business Today article “Every time you talk to ChatGPT it drinks 500ml of water; here’s why”, the information is scary. It states that, according to researchers, OpenAI’s ChatGPT consumes 500 ml of water for every 5 to 50 prompts it answers.

In India, discussions have taken place on water consumption by companies like Pepsi or Coca-Cola, but the dimension of water and energy consumption by AI systems, both for development and usage, makes one sit back and think whether there is a need to decelerate the growth of data centers to conserve water and energy resources.

An article published by the Associated Press recently, quoting 2022 information, suggested that Microsoft’s data center water use increased by 34% from 2021 to 2022. The company slurped up more than 1.7 billion gallons, or 6.4 billion liters, of water in that year, said to be enough to fill more than 2,500 Olympic-sized swimming pools. It was a similar story with Google, which reported a 20% spike in its water consumption over the same timeframe. It is anybody’s guess what the situation would be in 2024, with ChatGPT 4/5 and Bard/Gemini in use.
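The quoted figures can be sanity-checked with simple arithmetic. The Olympic-pool volume of roughly 2.5 million liters is our own assumption (a 50 m x 25 m x 2 m pool); the other numbers are as reported:

```python
LITERS_PER_OLYMPIC_POOL = 2_500_000   # assumed: 50 m x 25 m x 2 m pool
microsoft_2022_liters = 6.4e9         # ~1.7 billion gallons, as reported

# Number of Olympic pools Microsoft's reported water use would fill
pools = microsoft_2022_liters / LITERS_PER_OLYMPIC_POOL  # roughly 2,560

# "500 ml per 5 to 50 prompts" expressed as a per-prompt range
ml_per_prompt_low = 500 / 50    # 10 ml per prompt at the optimistic end
ml_per_prompt_high = 500 / 5    # 100 ml per prompt at the pessimistic end
```

The pool figure works out to about 2,560, consistent with the “more than 2,500” claim, and the per-prompt water cost ranges from roughly 10 ml to 100 ml.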

The time has come for ISO 42001 auditors (Ed: an ISO 42001 audit may perhaps need to be carried out like ISO 27701, along with ISO 27001) to ask their auditee organizations whether it is possible to ignore the climatic impact of the use of AI when an AIMS audit is undertaken.

The current discussions on regulation of AI normally revolve around job loss, degradation of human cognition, explainability, accountability, bias control, etc., but not very much around climate impact or related issues such as carbon footprint. The EU AI Act may require that “High Risk AI Systems” report their energy consumption, resource use and other impacts throughout their life cycle.

India also has to incorporate this aspect in its proposed AI regulation. A Yale University report mentions that in Chile and Uruguay, protests have erupted over planned data centers that would tap drinking-water reservoirs.

There was a time when the Indian Government would run TV ads saying “Stop the tap when you are shaving”. Now the new-generation ad may be “Don’t make a query in ChatGPT if you do not need it”. Probably water conservation should become part of the IT industry’s responsibilities.

We do not know if the recent drinking water shortage in Bengaluru city has any origin in the increased use of AI!

Let us keep this issue on the radar….

Reference Articles:

https://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669

https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions#:~:text=Those%20will%20include%20standards%20for,electricity%20consumed%20by%20its%20calculations.
