DPDPA Rules: How will legacy data consent be handled?

According to DPDPA 2023, consent has to be obtained even for applicable personal data collected by a Data Fiduciary before the commencement of the Act, as per the notification. Identifying such data and issuing notices to the relevant data principals is therefore one of the key activities of data fiduciaries.

For this purpose, the proposed rules are expected to indicate the following:

Notice to inform of Processing done where the Data Principal has given consent before commencement of Act:

(1) Where a Data Principal has given her consent for the processing of her personal data before the commencement of the Act, the data fiduciary shall, as soon as it is reasonably practicable, give to the Data Principal a notice, in the following manner, namely:-

(a) The notice shall be made in like manner as is provided for a notice to seek consent and shall be understandable independently of any other information that has been made available by such data fiduciary; and

(b) The notice shall inform, in clear and plain language, the details necessary to enable her to exercise the Rights of the Data Principal, including-

(i) Such minimum details as are required in respect of a Notice to seek consent; and

(ii) description of the goods or services (including the offering of any service) that were provided or the users that were enabled, as a result of such processing

(2) A Data Fiduciary may use a Consent Artifact for the purpose of giving the notice to inform of processing done.

The rule is silent about how the Data Fiduciary has to handle situations where the notice cannot be given for lack of contact information, where the notice is returned undelivered, or where the recipient is silent on whether the processing can continue.

Under DGPSI, we prescribe that appropriate measures be built into the Consent Artifact itself to meet these contingencies.

It would be interesting to see how other frameworks (if any) address this issue.

Naavi

Posted in Cyber Law | Leave a comment

Will a Copy of draft Notice be part of the rules?

In one of the versions of the draft DPDPA rules under circulation, the Government is expected to provide a template notice for consent.

Accordingly, a model notice is expected to be part of the notification.

It is suggested that this model notice could be part of a "Consent Artifact" as per rule 3(4), and it is therefore likely to be adopted mutatis mutandis by data fiduciaries and used for automation of consent. This could lead to inadequate consent and should be subject to some human oversight. The format given as a "model" may also need to be fine-tuned by users.

One observation is that this model does not follow the principle of "Purpose Segregation", in the sense that it suggests one notice and one consent for multiple purposes such as "registration", "receiving of payments", etc. It does not take into account the needs of a data principal who only wants to register today but is not ordering anything or making any payments.

The notice however suggests the segregation of data elements with different retention requirements as has been the suggestion of DGPSI. This needs to be factored into the consent management system.

The model notice suggests a hyperlinked form for withdrawal of consent and for filing a grievance with the Data Fiduciary as well as the DPBI and for saving a copy of the notice.

The model suggests notification of the right to "Nominate", ignoring the provision of ITA 2000 [Section 1(4)].

The model form suggests "Erasure" as a right without clarifying that it is "subject to other legal requirements to preserve the data", which is mentioned in the rule.

The rule's lack of integration with ITA 2000 as it exists today shows up here.

Under the DGPSI framework, we also recommend adding one line reminding the data principal of her "Duties"; this is missing in the model notice.

It is apparent that the model notice is designed as a web form and has to end with a "click", such as an "I accept" button, converting the notice into a consent contract. The need for proper authentication of the consent has to be addressed by the Data Fiduciary; rule 3 makes no mention of how the notice is to be authenticated.

Regarding erasure, rule 3(5) is ambiguous, as it states:

“The Data Fiduciary shall maintain every notice relating to processing of personal data on the basis of consent given by the Data Principal till the expiry of such period, beyond the date of erasure of such personal data, as may be applicable by law to limitation on the institution of any suit, filing of any appeal or making of any application in relation to such personal data”.

It should be noted that the consent, along with the data collected under the consent, needs to be retained both for the legal rights of the data fiduciary and for legal obligations under laws like ITA 2000, where some information has to be kept for 6 months or 5 years. It would not suffice to preserve only the notice; the data too has to be preserved. The rules as available miss this point.
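The retention logic being argued for can be illustrated with a small sketch. The periods used here are placeholders only; the actual limitation periods and ITA 2000 retention durations must be taken from the applicable law for each category of data.

```python
from datetime import date, timedelta

def retention_expiry(erasure_date: date,
                     limitation_years: int = 3,
                     statutory_days: int = 180) -> date:
    """Return the date until which the notice, the consent AND the underlying
    data should be preserved: the later of the limitation period (for suits,
    appeals or applications) and any statutory retention obligation (e.g.
    under ITA 2000). Both default periods are illustrative placeholders."""
    limitation_end = erasure_date + timedelta(days=365 * limitation_years)
    statutory_end = erasure_date + timedelta(days=statutory_days)
    return max(limitation_end, statutory_end)
```

The point of the `max()` is that erasure from live systems does not end the preservation duty; whichever legal clock runs longer governs.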

Under rule 3(1), it is stated as follows:

3. Notice to seek consent of Data Principal: (1) Every request for consent made to the data principal shall be accompanied or preceded by a notice given by the Data Fiduciary to such Data Principal, in the following manner, namely:-

(a) The notice shall be so made that it is –

(i) an electronic record or document presented independently of any other information that is or may be made available by such data fiduciary;

(ii) understandable independently of any other information that is or may be made available by such data fiduciary

(iii) storable by the data fiduciary independently of the personal data to which such notice pertains; and

(iv) easily storable or preservable by the data principal for future reference; and

(b) The notice shall inform, in clear and plain language, the details necessary to enable her to give specific and informed consent for the processing of her personal data, which shall include, at the minimum,

(i) an itemised description of such personal data

(ii) the specific purpose of such processing

(iii) a declaration that only such personal data is proposed to be processed as is necessary for the purpose

(iv) a description of the goods or services (including the offering of any service) to be provided, or the uses to be enabled, as a result of such processing:

(v) the specific duration or point in time till which such personal data shall be processed

(vi) a list of the Rights of the Data Principal

(vii) the particular communication link for accessing the website or app, or both, of such data fiduciary using which such data principal may withdraw her consent, exercise the rights of the data principal or make a complaint to the Board, and a description of other means, if any, using which she may so withdraw, exercise such rights or make a complaint.

It is clear from the above that the notice and consent are expected to be obtained in electronic form. The possible legal conflict with ITA 2000 regarding the validity of digitally signed electronic contracts, or the cancellation of a mandate on the death of an individual in the context of nomination, has been ignored, as was expected.
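Since the notice is electronic, the minimum contents of rule 3(1)(b) lend themselves to a mechanical checklist. The sketch below is illustrative: the keys are my own shorthand labels for items (i) to (vii), not terms used in the rule.

```python
# Shorthand labels for the minimum details under rule 3(1)(b);
# the keys are my own naming for items (i)-(vii), not rule terminology.
REQUIRED_NOTICE_FIELDS = {
    "itemised_data_description",      # (i)
    "specific_purpose",               # (ii)
    "necessity_declaration",          # (iii)
    "goods_services_description",     # (iv)
    "processing_duration",            # (v)
    "rights_list",                    # (vi)
    "withdrawal_and_complaint_link",  # (vii)
}

def missing_notice_fields(notice: dict) -> set:
    """Return the rule 3(1)(b) items that are absent or empty in a draft
    electronic notice, so it can be blocked before being served."""
    return {f for f in REQUIRED_NOTICE_FIELDS if not notice.get(f)}
```

A consent management system could run such a check on every generated notice and refuse to present any notice with missing items, giving a technical floor for "specific and informed" consent.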

Though a Data Fiduciary which is a State has the right to use the "Legitimate use" basis for processing personal data in situations like the provision of a subsidy, benefit or service, rule 3(2) mentions the need for notice and consent. This could introduce a needless conflict between "Consent" and "Legitimate use" as two different bases for establishing the legality of processing.

In summary, the rule regarding "Notice and Consent" will continue to offer some implementation challenges which need to be addressed by Data Fiduciaries. It is notable that these have already been anticipated and factored into the DGPSI framework in its detailed implementation manual.

More discussions will follow….

Naavi


Consent Manager and Account Aggregator

When the rules under DPDPA are released, apart from the definition of Significant Data Fiduciary, industry will be keenly looking at the rules related to the "Consent Manager".

This is one area where Naavi may have divergent views from one section of professionals who think that the current Account Aggregator (AA) scheme under DEPA, as used by RBI, is good enough to be adapted to DPDPA. Obviously, the 14 licensed Account Aggregators would be happy to be presented with an additional opportunity to expand their current business.

The system of AAs is currently built as an "Intermediary" under ITA 2000, subject to the provisions of Section 79 of ITA 2000. These AAs hold the consent of individuals to "fetch and share" their personal information from a set of approved "Financial Information Providers" (FIPs) to a set of "Financial Information Users" (FIUs) through a technical exchange process that can be triggered by the requester. The system operates through an AA platform. The platform is a data routing platform and should not give the AA any access to the data. Data should flow directly from the FIP to the FIU, and the role of the AA is only to open the gate when the request is made, after ensuring that it has the permission of the individual subscriber to its service.
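This "gate opening" role can be sketched as follows. The sketch is purely illustrative: real AA implementations follow the published DEPA/ReBIT technical specifications, and the encryption between FIP and FIU is abstracted here as opaque bytes the AA never inspects; the function names and the HMAC-based consent check are my own stand-ins.

```python
import hashlib
import hmac

# Stand-in for the AA's consent registry key; in a real system the consent
# artifact would be digitally signed, not HMAC'd with a demo secret.
CONSENT_KEY = b"shared-secret-for-demo"

def sign_consent(principal_id: str, fip: str, fiu: str) -> str:
    """Simulate the consent artifact record held by the AA for a
    principal/FIP/FIU combination."""
    msg = f"{principal_id}|{fip}|{fiu}".encode()
    return hmac.new(CONSENT_KEY, msg, hashlib.sha256).hexdigest()

def aa_route(ciphertext: bytes, principal_id: str, fip: str, fiu: str,
             consent_sig: str) -> bytes:
    """The AA opens the gate only if a valid consent exists; it forwards the
    ciphertext unchanged and cannot decrypt it (it holds no data key)."""
    expected = sign_consent(principal_id, fip, fiu)
    if not hmac.compare_digest(expected, consent_sig):
        raise PermissionError("no valid consent on record")
    return ciphertext  # opaque to the AA; only the FIU can decrypt
```

The design point is that the AA's code path never touches a decryption key, which is what keeps its role that of an intermediary rather than a data fiduciary.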

Under DPDPA, the Consent Manager is a Data Fiduciary licensed under DPDPA. Hence current AAs who want to act as Consent Managers need to obtain an additional licence under DPDPA. Many agencies are already trying to assist organizations through this registration process.

If any of the licensed AAs register themselves as a "Consent Manager-Data Fiduciary", it would amount to a diversification of their current business and may therefore, in principle, violate the terms of their licence. Whether this is permitted under RBI's current AA registration is not clear.

Since AAs will now also come under DPDPA, unless they declare, and obtain a "Conformity Assessment Certificate" confirming, that they have no access to identifiable personal information, they will be subject to all the compliance requirements of a Significant Data Fiduciary.

They will therefore be subject to “Continuing Consent” for existing data principals as per DPDPA unless they are exempted.

If however the AAs have established systems as envisaged under the AA scheme without any deviation, they may claim to be exempt from DPDPA provisions since they may not process identifiable personal data.

This could, however, become a point of contention in future if a data breach exposes the stream of data flowing through the system to a hacker attack. If the FIP and FIU use their own encrypted network, as they are supposed to, with an approved digital signature system, then the responsibilities of the AA remain those of an intermediary and do not extend to those of a data fiduciary.

I am not fully aware of how the different AAs have structured their IT architecture, and hence I request those of you having the information to share the data security features of the AA system. In particular, please confirm whether there is a digital-signature-based data encryption system between the FIP and the FIU.

I look forward to clarification from any of you who are aware.

Naavi


Who is or Who Should be a Significant Data Fiduciary?

One of the keenly awaited rules under DPDPA 2023 is the criteria to be adopted by the Government for declaring a Data Fiduciary a Significant Data Fiduciary (SDF).

While the Act does not define “Sensitive Personal Data”, Section 10(1) brings in the concept of “Sensitivity of data” under the special obligations of a SDF.

According to the section, the Central Government may notify "any" data fiduciary or "class" of data fiduciaries as a Significant Data Fiduciary on the basis of an assessment of such relevant factors as it may determine, including:

(a) the volume and sensitivity of personal data processed;

(b) risk to the rights of Data Principal;

(c) potential impact on the sovereignty and integrity of India;

(d) risk to electoral democracy;

(e) security of the State; and

(f) public order.

"Sensitivity" of personal data in the context of the Act is tagged with "volume", which means that different combinations of sensitivity and volume may determine the definition of an SDF. In the case of security of the State, public order, risk to rights, etc., volume is not an essential criterion.

Since the factors mentioned in Section 10(1) are inclusive examples, the Government is at liberty to notify any specific data fiduciary or class of data fiduciaries as an SDF.

In the case of a "class" of data fiduciaries, those involved in the processing of financial data, health data, biometric data or minors' data may be easily recognized as a potential SDF category.

To this we need to add organizations which supply material to defence organizations, law enforcement agencies or the Government in general. These are the types of organizations that are often targeted by enemies of the state for stealing state secrets. Hence they should be declared SDFs by virtue of the "Security of the State" clause itself. In such cases, volume may not be a key criterion.

In other cases, different volume limits may be specified for different classes of data fiduciaries.

Further, all criteria for declaring an entity an SDF may not be announced at one time; they may come from time to time through individual notifications, just as Section 70 notifications are made under ITA 2000.

An organization will have special obligations if it becomes an SDF, and hence its compliance canvas will change. Unless otherwise exempted, applicability is from the date of the specific notification. Hence it is possible that an organization declared an SDF may need to designate a DPO and conduct a DPIA immediately. Hopefully, a period of around 6 months may be given for this compliance.

However, to be on the safe side, wise organizations should make a self-assessment and decide themselves to meet the higher degree of compliance of an SDF, at least to the extent of designating a DPO/compliance officer.

Some of these organisations are already into the DPIA process, as the first-time implementation of a DPIA is time consuming.

All B2C e-commerce organizations will potentially be considered SDFs unless they have a low volume of transactions. Any organization which has had more than, say, 50 lakh customers till now (cumulatively since inception) could be considered an SDF by virtue of the ITA 2000 definition. The Government may, however, bring this limit down substantially under DPDPA for health and fintech companies.

Ideally, the limit should be in the range of around 1 lakh personal data sets meeting a threshold sensitivity criterion of health or financial data. In the case of biometric data it could go down to around 50,000, and in the case of highly sensitive biometric data such as DNA records, there may be no limit at all.
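The graded thresholds suggested above can be expressed as a simple decision table. To be clear, the numbers below are only the suggestions made in this post, not anything notified by the Government, and the category names are my own labels.

```python
# Suggested (not notified) volume thresholds per data category, counted in
# personal data sets held; these figures are this post's suggestions only.
SDF_THRESHOLDS = {
    "health": 100_000,    # ~1 lakh
    "finance": 100_000,   # ~1 lakh
    "biometric": 50_000,
    "dna": 0,             # highly sensitive: effectively no volume floor
}

def is_potential_sdf(category: str, volume: int) -> bool:
    """Apply the suggested sensitivity-and-volume test: a fiduciary is a
    potential SDF if its volume in a sensitive category exceeds the
    suggested threshold for that category."""
    threshold = SDF_THRESHOLDS.get(category)
    if threshold is None:
        return False  # category not in the suggested sensitive list
    return volume > threshold
```

Note that the "security of the State" and "public order" grounds discussed earlier sit outside such a table altogether, since they are not volume-driven.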

We do not know whether MeitY will go to such depths of thinking or will opt for some generic description of SDF.

We have also raised another important issue in the past, which is likewise not expected to be addressed by MeitY: the need to allow flexibility to treat an organization as a hybrid entity where certain operations are of an SDF nature and others are not. In such an event, the SDF obligations can be applied only to the unit processing sensitive personal information and not to the others.

For example, if a diagnostic lab processing small volumes of normal health data has a unit handling DNA processing, or a payment gateway serves many ordinary data fiduciaries along with one or two clients with sensitive transactions, then, if the data fiduciary offers to segregate its SDF activities from the rest, it should be permitted to treat that part of its business as a declared "SDF Unit" instead of the entire organization being treated as an SDF.

I am not sure whether this nuance is recognized, or will be recognized, by MeitY when it formulates its rules. Let us wait and see.

Naavi


Is there a strategic need for segregation of Ethics while defining AI Standards?

In India we are today discussing both regulation of AI and standardization of AI at the same time. Just as the EU AI Act is a regulation while ISO 42001 is a standard, BIS is discussing AI standardization while ITA 2000 and DPDPA 2023 already represent the regulation.

The Bureau of Indian Standards (BIS), under one of its sectional committees (LITD 30), has arranged a webinar on 21st June 2024 on the topic "Artificial Intelligence: Standardization Landscape", which may present the current status of standardization initiatives related to AI in India.

The objectives of standardization are set out as ensuring "safe, secure, and ethical development and deployment of AI systems", and the webinar is meant to "sensitize the stakeholders". The webinar is a very brief event and is likely to have only the limited objective of announcing the initiative.

It may be observed that the objectives of standardization include "ethical" development and deployment as part of technical standardization. This could mean that the standard may wade into the domain of regulation.

Currently there are a few standards, such as IS 38507:2022 on "Governance implications of the use of AI by organizations" and IS 24368:2022 on "Artificial Intelligence: Overview of Ethical and Societal Concerns", which address AI standardization in India from a governance point of view. There are at least 10 other standards on different technical aspects of AI.

At the same time, there is already ITA 2000, which regulates automated systems (including what may now be defined as Artificial Intelligence systems), and DPDPA 2023, which regulates automated decision-making in the domain of personal information processing. The systems used by Data Fiduciaries and Data Processors under DPDPA 2023 will certainly include AI systems, and hence any published standards will affect the activities of Data Fiduciaries and Processors.

Hence any new standardization initiative needs to ensure that overlapping specifications do not clash with the requirements of these regulations.

Industry very often confuses "regulatory compliance" with "adherence to standards". Since customers of organizations often refer to "industry best practices" and specify the need to adhere to standards, Indian companies tend to prioritize standardization over regulatory compliance.

It is important that users of the standards appreciate that compliance with law is mandatory while compliance with standards is a business decision. Compliance should therefore always come as the first priority, with standardization only a step towards compliance.

Organizations like BIS have a responsibility to ensure that all their standardization documents record that standardization is not a replacement for regulation but is subordinate to it. Non-conformance with law could lead to penalties, whereas non-conformance with standards is a matter of business negotiation.

In the past, there has been an awkward attempt by vested interests to influence law-making so that rules are drafted in a manner that deliberately misleads the industry into believing that "the standard is also part of the law". There has also been an attempt by standardization organizations to mislead the market into wrongly believing that "adhering to a standard is deemed compliance with the law".

Naavi has been forced to call out such attempts in the past and may do so again if required.

This conflict between standardization and regulation is unwarranted. It can be avoided by keeping "safe and secure development and deployment" of AI, along with compatibility of usage across multiple technology platforms and sectors, as the objective of standardization, and leaving "ethical development and deployment" as the responsibility of regulation.

It is my belief that this segregation of objectives between BIS and MeitY (Cyber Law and Data Protection Division) would also ensure that standards are more generic than they are today. If not, in future there will be a need for one AI standard for fintech, another for healthcare, another for hospitality and so on, leading to the proliferation of standards which is the bane of ISO standards.

The fact that ISO standards run into the thousands is not a matter to be proud of. It is an indication of how complicated compliance with ISO standards is from the industry's perspective, though auditors and educators are happy to have multiple business opportunities. The situation is more acute in the IT domain, since "data" is the building block of the IT industry and there are hundreds of applications of any one type of data; any attempt to standardize the processing would therefore require hundreds of ways of processing. Generic standards are essential to make standardization in IT easily acceptable to multiple types of data processors.

Naavi has consistently tried to address this issue by introducing a "Unified Framework of Compliance" to reduce the burden of compliance on the industry. "Compliance without Pain" is the motto followed by Naavi, and it has been ingrained in frameworks like DGPSI.

When I look at the composition of the technical committees of BIS which draft IT-related standards, I get the feeling that there is a shortfall in the members' exposure to ITA 2000 and DPDPA 2023. This could result in the final versions of the standard missing legal issues. There is also representation of too many industry sectors, which could result in the final draft accommodating many sector-specific requirements and vested interests rather than being neutral and generic. I hope this will be suitably taken care of.

I hope these comments will be considered by BIS in the right spirit as it goes forward with the standardization of AI.

I look forward to a healthy and constructive debate on these comments.

Naavi


Towards AI standardization in India

We started a discussion on AI standardization in these columns some time back, with a brief review of the ethical standards suggested by various international bodies as well as the EU AI Act.

In India, we have a tendency to be "followers" rather than "leaders". Hence we look up to the EU or the US to guide us on everything, including developing a standard for AI. Naavi.org has always believed that while we can take guidance from all parts of the world, we should not hesitate to develop our own indigenous standards. It is this principle that guided Naavi and FDPPI to develop DGPSI, the Digital Governance and Protection Standard of India, which addresses the compliance requirements of DPDPA, ITA 2000 and the draft BIS standard on Data Governance to provide a comprehensive Indian standard for personal data protection.

One of the objectives of the DGPSI approach has been to simplify the standard's requirements to make them easy for users to comprehend, while keeping them flexible enough to be adapted to different risk situations.

The AI-DTS component of DGPSI has already tried to explore the feasibility of bringing in a framework for AI users and developers that would start providing a base for regulation.

The very first part of AI-DTS, which is a measure of the data trust score of an AI algorithm, is to bring "accountability" to AI development. It is one of the beliefs of AI-DTS that once we make the actions of an AI accountable to a legal entity (the developer or the user), most of the adverse consequences that may arise from "unethical development or use" of AI can be addressed under normal laws.

"Standardization" is an attempt to provide a detailed checklist, which is like defining "due diligence". The checklist cannot, however, override the law of the land; hence, without changing the law itself, standardization cannot override the benefits of bringing in accountability.

Accountability is the first step not only for regulation but also for standardisation since the applicability of the standard has to be directed to a defined system.

Hence any standardization attempt has to start with "accountability", and accountability requires registration of the AI developer.

In regulatory mechanisms, registration requires the designation of an authority and licensing formalities. In standardization, registration can be a self-regulatory mechanism, led even by NGOs like FDPPI. Hence, without waiting for a law to be passed, a regulatory authority to be set up, or a penalty mechanism to be implemented, standardization can start with voluntary movements led by interested NGOs.

FDPPI started the DGPSI movement, along with a compliance certification mechanism, with exactly these thoughts for DPDPA compliance. Hence DGPSI has today become the only DPDPA compliance tool, ahead of ISO 27701 or any other standard.

Similarly, AI-DTS has the potential to become a self-regulatory tool, and FDPPI could take the lead.

Under DGPSI, AI-DTS has started its activity by focusing first on "accountability", under which every AI developer shall voluntarily declare ownership in its code and ensure that the licensee, as well as the chain of sub-licensees, is embedded into the source code.
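One minimal way to embed such a declaration is sketched below. The names and the JSON encoding are entirely illustrative assumptions on my part; AI-DTS does not, to my knowledge, prescribe a specific format, and "ExampleAI Labs" and "VendorX" are hypothetical entities.

```python
import json

# Illustrative provenance block carried in the model's source/artifacts;
# field names and format are assumptions, not an AI-DTS specification.
PROVENANCE = {
    "developer": "ExampleAI Labs",        # accountable legal entity (hypothetical)
    "licence_chain": ["ExampleAI Labs"],  # extended at each sub-licensing step
}

def sublicense(provenance: dict, licensee: str) -> dict:
    """Return a new provenance block with the licensee appended, preserving
    the full chain from developer to current user for accountability."""
    updated = dict(provenance)
    updated["licence_chain"] = provenance["licence_chain"] + [licensee]
    return updated

def provenance_record(provenance: dict) -> str:
    """Serialise the block for embedding in distributed code or artifacts."""
    return json.dumps(provenance, sort_keys=True)
```

The point of keeping the whole chain, rather than only the latest licensee, is that any adverse outcome can be traced back through every hand the system passed through to the accountable developer.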

However, before we implement any regulation or standard, we need to identify its applicability. Hence it is essential to define the "regulated system" and the "regulated entity".

In the context of personal data protection, AI-DTS adopted "Data Fiduciary" and "Data Processor" as its regulated-entity definitions, since they are already part of the DPDPA regulation. Also, using the provisions of Section 11 of ITA 2000, the AI developer was considered a Data Fiduciary, leaving only the identification of the data fiduciary for enforcement. Hence, embedding the identity of the developer was the only missing requirement to enable AI regulation in India.

However, a definition of the regulated system was essential, and this was explained earlier in these columns. (Refer here) The definition was linked to the graded ability of the system to alter the source code of its algorithm without human intervention. This approach redefined a class of software as "AI" depending on its ability to re-code itself without a human.

The EU AI Act approach was slightly different, since it linked the definition to "risk", and "risk" required assessment of the "harm" to the ultimate users.

The DGPSI approach was simpler: tag software by its ability to change its behaviour based on observation of the output of its own algorithms.

It appears that the Bureau of Indian Standards (BIS) has now started a debate towards developing an Indian standard for AI and is trying to gather industry responses. We welcome this initiative.

FDPPI/Naavi, however, urges BIS to focus on a proper definition of AI and on accountability as the foundation pillars for the standard, and to avoid reproducing an AIMS on the lines of ISO 42001. The approach of ISO 42001 has been to create a standard for an AIMS as if it were different from an ISMS.

While it is commercially good to have one more certifiable standard, it is not a great idea from the perspective of an implementing entity that holds separate ISMS and AIMS certifications.

Hence we need to think differently when BIS starts looking at an AI standard for India.

Naavi
