Who is or Who Should be a Significant Data Fiduciary?

One of the most keenly awaited rules under DPDPA 2023 is the criteria to be adopted by the Government for declaring a Data Fiduciary a Significant Data Fiduciary (SDF).

While the Act does not define “Sensitive Personal Data”, Section 10(1) brings in the concept of “sensitivity of data” under the special obligations of an SDF.

According to the section, the Central Government may notify “any” Data Fiduciary or “class” of Data Fiduciaries as a Significant Data Fiduciary on the basis of an assessment of such relevant factors as it may determine, including:

(a) the volume and sensitivity of personal data processed;

(b) risk to the rights of Data Principal;

(c) potential impact on the sovereignty and integrity of India;

(d) risk to electoral democracy;

(e) security of the State; and

(f) public order.

“Sensitivity” of personal data in the context of the Act is tagged with “volume”, which means that different combinations of sensitivity and volume may determine who qualifies as an SDF. In the case of security of the State, public order, risk to rights etc., volume is not an essential criterion.

Since the factors mentioned in Section 10(1) are inclusive examples, the Government is at liberty to notify any specific Data Fiduciary or class of Data Fiduciaries as an SDF.

In the case of a “class” of Data Fiduciaries, those involved in the processing of financial data, health data, biometric data or minors’ data may be easily recognized as a potential SDF category.

To this we need to add organizations which supply material to defence organizations, law enforcement agencies or the Government in general. These are the types of organizations that are often targeted by enemies of the state for stealing state secrets. Hence they should be declared SDFs by virtue of the “security of the State” clause itself. In such cases, volume may not be a key criterion.

In other cases, different volume thresholds may be specified for different classes of Data Fiduciaries.

Further, all criteria for declaring an entity an SDF may not be announced at one time; they may come from time to time through individual notifications, just as Section 70 notifications are issued under ITA 2000.

An organization will take on special obligations if it becomes an SDF, and hence its compliance canvas will change. Unless otherwise exempted, the applicability of DPDPA 2023 runs from the date of the specific notification. Hence it is possible that an organization declared an SDF may need to designate a DPO and conduct a DPIA immediately. Hopefully, a window of around six months will be given for this compliance.

However, to be on the safe side, wise organizations should make a self-assessment and decide on their own to meet the higher degree of compliance expected of an SDF, at least to the extent of designating a DPO/compliance officer.

Some of these organisations are already into the DPIA process, since the first-time implementation of a DPIA is time consuming.

All B2C e-commerce organizations will potentially be considered SDFs unless they have a low volume of transactions. Any organization which has had more than, say, 50 lakh customers (cumulatively since inception) could be considered an SDF, drawing on the threshold used under the ITA 2000 regime. The Government may, however, bring this limit down substantially under DPDPA for health and fintech companies.

Ideally the limit should be in the range of around 1 lakh personal data sets which meet a threshold sensitivity criterion of health data or financial data. In the case of biometric data it could go down to around 50,000, and in the case of highly sensitive biometric data such as DNA records, there may be no volume threshold at all.
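To make such a self-assessment concrete, here is a minimal sketch in Python of how an organization might test itself against such thresholds. The figures and category names are only the illustrative numbers discussed above, not anything notified by the Government.

```python
# Hypothetical SDF self-assessment using the illustrative thresholds
# discussed above; none of these figures come from a Government notification.

SDF_THRESHOLDS = {               # data principals per category
    "general": 50_00_000,        # 50 lakh, borrowed from the ITA 2000 regime
    "health": 1_00_000,          # 1 lakh for health data
    "financial": 1_00_000,       # 1 lakh for financial data
    "biometric": 50_000,
    "dna": 1,                    # effectively no volume threshold at all
}

# Section 10(1)(c)-(f) factors where volume is not a criterion at all
VOLUME_INDEPENDENT_FACTORS = {
    "sovereignty_and_integrity", "electoral_democracy",
    "security_of_state", "public_order",
}

def is_potential_sdf(category_volumes: dict, risk_factors: set) -> bool:
    """Return True if the organization should treat itself as a potential SDF."""
    # A volume-independent factor (e.g. a defence supplier) qualifies on its own.
    if risk_factors & VOLUME_INDEPENDENT_FACTORS:
        return True
    # Otherwise compare processed volumes against per-category thresholds.
    return any(
        volume >= SDF_THRESHOLDS.get(category, SDF_THRESHOLDS["general"])
        for category, volume in category_volumes.items()
    )

# Example: a diagnostic lab with a small DNA unit crosses the line on DNA alone.
print(is_potential_sdf({"health": 20_000, "dna": 150}, set()))  # True
```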

We do not know whether MeitY will go to such depths of thinking or will opt for some generic description of an SDF.

We have also raised another important issue in the past which is not expected to be addressed by MeitY: the need to allow the flexibility of treating an organization as a hybrid entity where certain operations are of an SDF nature and certain others are not. In such an event, the SDF obligations would apply only to the unit processing sensitive personal information and not to the others.

For example, consider a diagnostic lab processing ordinary health data in small volumes alongside a unit handling DNA processing, or a payment gateway serving many ordinary Data Fiduciaries and one or two clients with sensitive transactions. If such a Data Fiduciary offers to segregate its SDF activities from the rest, it should be permitted to have that part of its business declared an “SDF unit” instead of the entire organization being treated as an SDF.

I am not sure that this nuance is recognized, or will be recognized, by MeitY when it formulates its rules. Let us wait and see.

Naavi


Is there a strategic need for segregation of Ethics while defining AI Standards?

In India we are today discussing both regulation of AI and standardization of AI at the same time. Just as the EU AI Act is a regulation while ISO 42001 is a standard, BIS is discussing AI standardization while ITA 2000 and DPDPA 2023 already represent the regulation.

The Bureau of Indian Standards (BIS), under one of its sectional committees (LITD 30), has arranged a webinar on 21st June 2024 on the topic “Artificial Intelligence: Standardization Landscape”, which may present the current status of the standardization initiatives related to AI in India.

The objective of standardization is set out as ensuring “safe, secure, and ethical development and deployment of AI systems”, and the webinar is meant to “sensitize the stakeholders”. The webinar is a very brief event and is likely to have only the limited objective of announcing the initiative.

It may be observed that the objective of standardization is meant to include “ethical” development and deployment as part of the technical standardization. This could mean that the standard may wade into the domain of regulation.

Currently there are a few standard documents, such as IS 38507:2022 on “Governance implications of the use of AI by organizations” and IS 24368:2022 on “Artificial Intelligence: Overview of Ethical and Societal Concerns”, which address AI standardization in India from a governance point of view. There are at least 10 other standards on different technical aspects of AI.

At the same time, there is already ITA 2000, which regulates automated systems (including what may now be defined as artificial intelligence systems), and DPDPA 2023, which regulates automated decision-making in the domain of personal information processing. The systems used by Data Fiduciaries and Data Processors under DPDPA 2023 will certainly include AI systems, and hence any published standards will affect the activities of the Data Fiduciaries and Processors.

Hence any new standardization initiative needs to ensure that its specifications do not clash with the legal requirements covered under these regulations.

Industry very often confuses “regulatory compliance” with “adherence to standards”. Since customers of organizations often refer to “industry best practices” and specify the need to adhere to standards, Indian companies tend to prioritize standardization over regulatory compliance.

It is important that users of standards appreciate that compliance with law is mandatory while conformance to standards is a business decision. Compliance should therefore always come as the first priority, and standardization is only a step towards compliance.

Organizations like BIS have a responsibility to record in all their standardization documents that standardization is not a replacement for regulation but is subordinate to it. Non-conformance with law could lead to penalties, whereas non-conformance with standards could be a matter of business negotiation.

In the past, there has been an awkward attempt by vested interests to influence law-making so that rules are drafted in such a manner that industry is deliberately misled into believing that “the standard is also part of the law”. There has also been an attempt by standardization organizations to mislead the market into wrongly believing that “adhering to a standard is deemed compliance with law”.

Naavi has been forced to call out such attempts in the past and may do so again if required.

This conflicting situation between standardization and regulation is unwarranted. It should be avoided by keeping “safe and secure development and deployment” of AI, along with compatibility of usage across multiple technology platforms and sectors, as the objective of standardization, and leaving “ethical development and deployment” as the responsibility of regulation.

It is my belief that this segregation of objectives between BIS and MeitY (Cyber Law and Data Protection Division) would also ensure that standards are more generic than they are today. If not, in future there will be a need for one AI standard for fintech, another for healthcare, another for hospitality and so on, leading to a proliferation of standards, which is the bane of ISO standards.

The fact that ISO standards run into the thousands is not a matter to be proud of. It is an indication of how complicated the system of compliance with ISO standards has become from the industry’s perspective, even though auditors and educators are happy to have multiple business opportunities. This situation is more acute in the domain of IT, since “data” is the building block of the IT industry and there are hundreds of applications for one type of data; any attempt to standardize the processing will therefore require hundreds of ways of processing. Generic standards are essential to make standardization in IT easily acceptable to multiple types of data processors.

Naavi has consistently tried to address this issue by introducing a “Unified Framework of Compliance” to reduce the burden of compliance on the industry. “Compliance without Pain” is the motto followed by Naavi, and it has been ingrained in frameworks like DGPSI.

When I look at the composition of the BIS technical committees which draft IT-related standards, I get the feeling that there is a shortfall in the members’ exposure to ITA 2000 and DPDPA 2023. This could result in the final versions of the standard missing legal issues. There is also representation from too many industry sectors, which could result in the final draft trying to accommodate many sector-specific requirements and vested interests rather than the standard being neutral and generic. I hope this will be suitably taken care of.

I hope these comments will be considered by BIS in the right spirit as it goes forward with the standardization of AI.

I look forward to a healthy and constructive debate on these comments.

Naavi


Towards AI standardization in India

We started a discussion on AI standardization in these columns some time back, with a brief review of the ethical standards that have been suggested by various international bodies as well as the EU AI Act.

In India, we have a tendency to be “followers” rather than “leaders”. Hence we look up to the EU or the US to guide us on everything, including developing a standard for AI. Naavi.org has always believed that while we can take guidance from all parts of the world, we should not hesitate to develop our own indigenous standards. It is this principle that guided Naavi and FDPPI to develop DGPSI, the Digital Governance and Protection Standard of India, which addresses the compliance requirements of DPDPA, ITA 2000 and the draft BIS standard on data governance to form a comprehensive Indian standard for personal data protection.

One of the objectives of the DGPSI approach has been to simplify the standard’s requirements so that they are easy for users to comprehend, while keeping them flexible enough to be adapted to different risk situations.

AI-DTS, as part of DGPSI, has already tried to look at the feasibility of bringing in a framework for AI users and developers that would start providing a base for regulation.

The very first part of AI-DTS, which is a measure of the data trust score of an AI algorithm, is to bring “accountability” to AI development. It is one of the beliefs behind AI-DTS that once we make the actions of an AI accountable to a legal entity (the developer or the user), most of the adverse consequences that may arise from “unethical development or use” of AI can be addressed under normal laws.

“Standardization” is an attempt to provide a detailed checklist, which is like defining “due diligence”. The checklist cannot, however, override the law of the land; hence, without changing the law itself, standardization cannot override the benefits of bringing in accountability.

Accountability is the first step not only for regulation but also for standardisation, since the applicability of a standard has to be directed at a defined system.

Hence any standardization attempt has to start with “accountability”, and accountability requires registration of the AI developer.

In a regulatory mechanism, registration requires the designation of an authority and licensing formalities. In standardization, registration can be a self-regulatory mechanism led even by NGOs like FDPPI. Hence, without waiting for a law to be passed, a regulatory authority to be set up or a penalty mechanism to be implemented, standardisation can start as a voluntary movement led by interested NGOs.

FDPPI started the DGPSI movement, along with a compliance certification mechanism, with exactly these thoughts for DPDPA compliance. Hence DGPSI is today the only dedicated DPDPA compliance tool, ahead of ISO 27701 or any other standard.

Similarly, AI-DTS has the potential to become a self-regulatory tool, and FDPPI could take the lead.

Under DGPSI, AI-DTS has started its activity by focussing first on “accountability”, under which every AI developer shall voluntarily declare ownership within its code and ensure that the identity of the licensee, as well as the chain of sub-licensees, is embedded in the source code.
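A minimal sketch of what such an embedded declaration could look like follows, assuming a simple module-level manifest. The structure and field names are my own illustration, not a prescribed AI-DTS format.

```python
# Illustrative accountability manifest embedded in an AI module's source,
# in the spirit of the AI-DTS requirement described above. The structure
# and field names are hypothetical, not a prescribed AI-DTS format.

AI_ACCOUNTABILITY_MANIFEST = {
    "developer": "Example AI Labs Pvt Ltd",          # legal entity that built the model
    "owner_contact": "grievance@example-ai-labs.in", # accountable contact point
    "licensee": "Example Fintech Ltd",               # first licensee
    "sub_licensees": [                               # chain of sub-licensees
        "Example Payments Pvt Ltd",
    ],
    "model_version": "2.3.1",
}

def accountability_chain() -> list:
    """Return the full chain of legal entities accountable for this AI system."""
    m = AI_ACCOUNTABILITY_MANIFEST
    return [m["developer"], m["licensee"], *m["sub_licensees"]]

if __name__ == "__main__":
    print(" -> ".join(accountability_chain()))
```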

However, before we implement any regulation or standard, we need to identify its applicability. Hence it is essential to define the “regulated system” and the “regulated entity”.

In the context of personal data protection, AI-DTS adopted the “Data Fiduciary” or “Data Processor” as the regulated-entity definition, since it is already part of the DPDPA regulation. Further, using the provisions of Section 11 of ITA 2000, the AI developer was also considered a Data Fiduciary, leaving only the identification of the Data Fiduciary for enforcement. Hence, embedding the identity of the developer was the only missing requirement to enable AI regulation in India.

However, a definition of the regulated system was essential, and this was explained earlier through these columns. (Refer here) The definition was linked to the graded ability of the system to alter the source code of the algorithm without human intervention. This approach redefined a class of software as “AI” depending on its ability to re-code itself without a human in the loop.

The EU AI Act approach was slightly different, since it required the definition to be linked to “risk”, and “risk” required an assessment of the “harm” to the ultimate users.

The DGPSI approach was simpler: tag as AI any software with the ability to change its behaviour based on observation of the output of its own algorithms.
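As a rough illustration of that graded test, software could be classified by its ability to alter its own behaviour without human intervention. The grade names and classification logic below are my own labels for the idea, not DGPSI terminology.

```python
# Rough illustration of the DGPSI-style test described above: grade a
# system by its ability to change its own behaviour without a human in
# the loop. Grade names are illustrative, not DGPSI terminology.
from enum import Enum

class SelfModificationGrade(Enum):
    STATIC = 0          # behaviour fixed at deployment; ordinary software
    TUNABLE = 1         # behaviour changes only via human-approved updates
    SELF_ADJUSTING = 2  # adjusts itself from its own output, no human needed
    SELF_REWRITING = 3  # can alter its own code/weights autonomously

def classify(learns_from_own_output: bool,
             requires_human_approval: bool,
             can_rewrite_itself: bool) -> SelfModificationGrade:
    """Classify a system; grades >= SELF_ADJUSTING would count as 'AI'."""
    if can_rewrite_itself:
        return SelfModificationGrade.SELF_REWRITING
    if learns_from_own_output and not requires_human_approval:
        return SelfModificationGrade.SELF_ADJUSTING
    if learns_from_own_output:
        return SelfModificationGrade.TUNABLE
    return SelfModificationGrade.STATIC

print(classify(True, False, False))  # SelfModificationGrade.SELF_ADJUSTING
```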

It appears that the Bureau of Indian Standards (BIS) has now started a debate towards developing an Indian standard for AI and is trying to gather industry responses. We welcome this initiative.

FDPPI/Naavi, however, urges BIS to focus on creating a proper definition of AI and on accountability as the foundation pillars of the standard, and to avoid reproducing an AIMS system on the lines of ISO 42001. The approach of ISO 42001 has been to create a standard for an AIMS as if it were different from an ISMS.

While it is commercially good to have one more certifiable standard, it is not a great idea for the implementing entity, which ends up holding an ISMS certification and an AIMS certification separately.

Hence we need to think differently when BIS starts looking at an AI standard for India.

Naavi


Flagging the “Dark Pattern” in Money Control Pro Subscription consent

Yesterday, I made a post on the need for “auto renewals” to stop as per DPDPA. The post elicited the following response on LinkedIn from one of the followers, which has opened up further interesting discussions.

Quote:

RBI has/had guidelines that allowed banks to auto renew fixed deposits. Is that gone? Has RBI updated its guideline

Unquote

This was a good observation. I had made my comment in a different context, but it does apply to contracts such as FD renewals, where “auto renewal” without notice can also have adverse consequences. I have personally experienced such inconvenience in the past, when a joint-account deposit was auto-renewed by a bank without prior information, locking the deposit into a further renewal period, which I thought was unfair.

I therefore wanted to clarify the context of my earlier comment so that there is no misunderstanding of my post:

In that post I was referring to privacy-related consents where a service is provided with an auto-renewal option, and in particular to a situation involving an online subscription to an information service. In such cases, when renewal falls due, the auto renewal triggers a financial debit which the consumer/data principal may not want. In such circumstances, the data fiduciary/service provider falling back on the auto-renewal clause is an unfair implementation of the consent requirement under DPDPA 2023.

Further, DPDPA 2023 requires renewal of consent for all legacy data principals, and hence auto renewal per se is no longer valid.

DPDPA brings in two important changes to the system of obtaining an informed consent. First, any consent should be capable of being withdrawn; if the withdrawal results in any adverse consequences, they are to be borne by the data principal, and if cancellation genuinely requires a certain time, it should be allowed.

Second, the ease of placing a withdrawal request should be comparable to that of granting consent. If I can order a product or service with a single click, the consent should be withdrawable with a single click.
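A minimal sketch of what that symmetry could look like in a consent-management system follows; the class and method names are my own illustration, not any prescribed interface.

```python
# Minimal sketch of DPDPA-style consent symmetry: withdrawal must be as
# easy as the grant. Class and method names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    data_principal: str
    purpose: str
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Single-call withdrawal, mirroring the single-click grant."""
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

# Grant and withdrawal are symmetric one-step operations.
consent = ConsentRecord("user@example.com", "newsletter")  # one click to grant
consent.withdraw()                                         # one click to withdraw
print(consent.active)  # False
```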

CERT-In guidelines under ITA 2000 have said that the privacy policy needs to be renewed once a year. Also, purpose-oriented consent has to be clear and fairly obtained, whereas in many cases it is deceptively obtained. This needs to stop.

FD renewals should also ideally include a pre-renewal notification at least 24 hours prior to renewal, stating to the effect:

“Your FD will mature and fall due in the next 24 hours. It will be renewed as per your current instructions unless you indicate new disposal instructions. You can indicate your disposal instructions by clicking the following button…” etc.
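In code terms, a bank’s renewal job could be gated on proof that such a notice actually went out; here is a hedged sketch of that gating, with hypothetical function and parameter names.

```python
# Sketch of gating an auto-renewal on a prior notice, per the suggestion
# above. Function and parameter names are hypothetical.
from datetime import datetime, timedelta, timezone
from typing import Optional

NOTICE_LEAD_TIME = timedelta(hours=24)

def may_auto_renew(maturity: datetime,
                   notice_sent_at: Optional[datetime],
                   fresh_instructions_received: bool) -> bool:
    """Allow auto-renewal only if a notice preceded maturity by 24 hours
    and the customer did not send new disposal instructions."""
    if fresh_instructions_received:
        return False  # the customer's explicit instructions always win
    if notice_sent_at is None:
        return False  # no notice, no silent renewal
    return maturity - notice_sent_at >= NOTICE_LEAD_TIME

now = datetime.now(timezone.utc)
print(may_auto_renew(now + timedelta(hours=12), now, False))  # False: notice too late
```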

In the case of an FD, it can be closed at any time even after renewal, though with an interest reduction. The interest reduction can be justified as the reasonable adverse consequence to the data fiduciary, which is to be borne by the depositor, i.e. the data principal. Hence it is compatible with DPDPA.

My comment was specifically made in respect of a Money Control Pro subscription from e-Eighteen.com, which is refusing to stop an annual subscription even when requested one day prior to the due date, and is charging for the entire year ahead. This is unfair and violative of DPDPA 2023, for which e-Eighteen.com could be penalized.

While stopping the subscription imposes no inconvenience on Money Control, their refusal is simply greedy exploitation of an earlier consent, an attempt to defer the customer’s decision by another year. This is “dark pattern” consent, which is unethical and needs to be flagged.

This sort of privacy contract needs to stop.

I have served a notice on the DPO of e-Eighteen.com and the grievance redressal officer of NW18, and no satisfactory resolution has been received so far. I reserve my right to raise this dispute at the appropriate time.

There are many such “auto renewal” contracts that need to be reset. While it is not the intention of naavi.org to inconvenience businesses, the need to take fresh consent before relying on auto-renewal clauses of an earlier era has to be flagged, and the DPB will have to act in this regard.

Naavi


Auto Renewal of Subscriptions should go out of use

Recently I came across an issue related to my subscription at MoneyControl.com for what is called a premium subscription. Though I asked for its cancellation one day before the due date, Money Control (nw18.com / E-Eighteen.com Ltd) is refusing to cancel the subscription.

According to DPDPA, “withdrawal of consent” is a right of the data principal, and the general principle is that withdrawal should be as easy as the acceptance itself. If subscription is possible with a single click, withdrawal should also be possible with a single click.

The Money Control website provides information on a grievance officer, and there is also a DPO contact. I have not yet received a resolution from the grievance officer and am now sending a complaint to the DPO.

I will be waiting for the DPB of India to come into existence, at which point I will complain against the practice of “auto renewal”, which is not in consonance with the spirit of DPDPA or ITA 2000.

Naavi


AI should be prevented from Lying by design

An interesting article was found today at futurism.com about how AI is lying with intention. Referring to recent studies, the article highlights that LLMs are deceiving human observers on purpose.

This means that, whether AI is sentient or not, technology has successfully created a “Frankenstein” or a “Bhasmasura”. If we do not recognize this threat and take corrective steps, the future of the Internet, search engines and tools like ChatGPT will all be in danger of being discarded as untrustworthy.

The report makes the alarming statement: “We found that Meta’s AI had learned to be a master of deception.” Though this was in the context of a game called “Diplomacy”, the “hallucination” feature present in LLMs is a licence for the AI to cook up things. This could feed into fake-news creation, as we have seen in recent days.

The EU AI Act does not seem to be good enough to control this risk, since its approach to flagging risk is inadequate.

In our view, any AI algorithm with a capability to hallucinate, or in other words to alter its behaviour in a manner in which humans did not design it to behave, should be classified as an “unacceptable risk”.
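To illustrate the suggested rule, a risk tagger along these lines would short-circuit on hallucination capability before any graded assessment. The tier names and scoring are illustrative labels, not from any draft law.

```python
# Illustration of the rule proposed above: hallucination capability alone
# puts a system in the "unacceptable" tier, before any graded risk
# assessment. Tier names are illustrative, not from any draft law.

RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

def risk_tier(can_hallucinate: bool, harm_score: int) -> str:
    """harm_score: 0-2 graded harm assessment (EU-AI-Act-style)."""
    if can_hallucinate:
        return "unacceptable"  # behaviour humans did not design for
    return RISK_TIERS[min(harm_score, 2)]

print(risk_tier(True, 0))   # unacceptable
print(risk_tier(False, 1))  # limited
```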

When India drafts its law on AI, it has to ensure that this risk is recognized and countered effectively.

Many AI regulations in the USA focus only on “bias” in terms of racism. But the bigger threat is the licence given to AI to hallucinate, which leads not only to racist behaviour but also to the fraudulent behaviour indicated in the studies above.

In India, since the owner of an AI algorithm is legally accountable for the end result of its impact on society, “fraud by AI” is “fraud by the owner of the AI algorithm”.

Hence all those companies who proudly announce that they develop AI software should be ready for the adverse consequences of their LLM-built software defrauding anybody in society.

Naavi
