“Significant Data Fiduciary”… The Trinity Principle

When the rules under the DPDPA 2023 are notified, one of the aspects the industry is most looking forward to is the notification under Section 10(1) on the identification of a Significant Data Fiduciary.

The “Data Fiduciary” (DF) is an entity that determines the purpose and means of processing of personal data, as distinguished from the “Data Processor”, which processes the personal data under the instructions of another entity that determines the purpose and means.

There are some instances when one organization determines the purpose and then engages another organization which has full control over the means of processing for the given purpose. In such instances, both organizations become “Joint Data Fiduciaries”.

Once this distinction is made, an organization needs to determine whether it is a “Significant Data Fiduciary” or not.

If volume is a criterion, many processors could become “Data Fiduciaries”. Firstly, since they manage proprietary processing technologies, they may become Joint Data Fiduciaries. Thereafter, they may become “Significant Data Fiduciaries”: as processors for many Data Fiduciaries, the cumulative volume they handle may exceed the thresholds even if the individual Data Fiduciaries they serve operate at low volumes.

In other words, in today’s chain of processors, the sub-contractor (who is today referred to as a data processor) could be an “SDF” while the main contracting party may be only a “DF”.

Many cloud service providers will fall into the category of SDF whereas their users may not.

It is possible that whether a DF becomes an SDF is determined not only on the basis of “Volume” but also on “Sensitivity”. Sensitivity (including the processing of children’s data) is itself based on the “Risk to the Data Principal”, and hence the criteria for determining SDF status may depend on a Volume-Sensitivity-Risk combination.

It is also possible that, without consideration of “Volume”, factors such as ‘Risk’, the ‘impact on the sovereignty and integrity of India’, ‘risk to electoral democracy’, ‘security of the state’ or ‘public order’ may be treated as independent criteria under which an organization may be classified as an SDF.

Hence the primary criterion for identifying SDF status is the “risk status of processing”, and volume becomes a secondary factor.

The term Data Fiduciary used in DPDPA is similar to the term “Data Controller” under GDPR, and hence it would be natural for many to interpret a DF through their knowledge of the Data Controller under GDPR.

The current interpretation of Data Controller is that “An Organization is a Data Controller”. If the same is applied in India, an “Organization” becomes a “Data Fiduciary”.

I would however like to challenge this concept of the status of Data Fiduciary being assigned to an organization.

Most of us today accept that an organization is sometimes a Data Controller and sometimes also a Data Processor. Significant Data Fiduciary is a third status with special obligations. We identify this as the “Trinity Principle”, under which an organization can be in any one of these categories for compliance purposes.

This “Trinity” principle reminds us of the famous Heisenberg uncertainty principle applicable to light and matter. The Trinity principle states that an organization, in the Data Protection context, may exist in any of three states: Data Fiduciary, Significant Data Fiduciary or Data Processor, and the controls have to be applied accordingly.

These three different categories of status add uncertainty about when the organization should designate a DPO, appoint a Data Auditor (DA) or take on the obligations under Section 9.

It is for this reason that the DGPSI (Data Governance and Protection Standard of India) adopts the principle that

“Every Organization is an aggregation of multiple processes”.

This principle of DGPSI is related to the Trinity principle of categorization of compliance entities and makes it easy to recognise that in one process the organization may be a Data Fiduciary and in another a Data Processor. By the same logic, in one process an organization may be a “Significant Data Fiduciary” and in another, simply a “Data Fiduciary”.

Thus an organization is like a “Trinity”: in terms of compliance it may need to be a Data Processor at times, a Data Fiduciary at other times and a Significant Data Fiduciary at yet other times. This can be identified and tagged if we break an organization into its personal-information processes for compliance.
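The process-level tagging described above can be sketched as a simple classification routine. This is only an illustration of the idea: the threshold value, the field names and the classification rules below are entirely hypothetical, since the actual criteria under Section 10(1) are yet to be notified.

```python
from dataclasses import dataclass

# Hypothetical threshold; the real criteria under Section 10(1)
# are yet to be notified by the Government.
VOLUME_THRESHOLD = 1_000_000  # data principals per process (assumed)

@dataclass
class Process:
    name: str
    determines_purpose: bool  # does the org decide *why* data is processed?
    volume: int               # number of data principals involved
    sensitive: bool           # e.g. children's data
    high_risk: bool           # risk to sovereignty, security, public order etc.

def classify(p: Process) -> str:
    """Tag a single personal-data process as DP, DF or SDF."""
    if not p.determines_purpose:
        return "Data Processor"  # acts only on another entity's instructions
    # Risk is treated as the primary criterion; sensitivity and volume
    # together act as a secondary trigger (assumed combination).
    if p.high_risk or (p.sensitive and p.volume > VOLUME_THRESHOLD):
        return "Significant Data Fiduciary"
    return "Data Fiduciary"

# One organization, three processes, three different compliance states:
org = [
    Process("HR payroll", True, 5_000, False, False),
    Process("Ed-tech platform", True, 2_000_000, True, False),
    Process("Hosting for a client", False, 9_000_000, True, False),
]
for p in org:
    print(p.name, "->", classify(p))
```

The point of the sketch is that the classification attaches to each process, not to the organization as a whole: the same entity is tagged differently in each row.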

Unfortunately, GDPR did not visualize this possibility, and the DPDPA 2023, at the level of the Act, has also not visualized it.

However, while framing the rules, it is possible for the Government to bring in this “Trinity Principle” and distinguish our law from the rest of the world.

Section 10(1) provides an option to notify either any “Data Fiduciary” or a “class of Data Fiduciaries” as an SDF, and the Government can use the “class” as a sub-category of a DF and link it to a process.

For example, (after stating the general criteria for determining the data fiduciary), it may state

“The term ‘class’ under Section 10(1) of the Act for the application of this rule applies to any class of personal data process/es that an entity may use where the risk, sensitivity and volume of personal data processed exceeds a specified threshold”

I hope MeitY incorporates this principle when the rules are notified.

Naavi

Also refer: Why Not “Significant Data Fiduciary” be Process Centric

Posted in Cyber Law | Leave a comment

100 day agenda of Modi 3.0 to address some old demands of Naavi.org

As India awaits the completion of the 2024 Lok Sabha elections and the new Government taking charge, many of the long-pending suggestions of Naavi are likely to find a place in the immediate 100-day implementation plan of the next Modi Government.

One such suggestion is the setting up of the National Cyber Security Agency (NCSA). Another focus area is the control of mobile crimes and the introduction of “Calling Name Presentation” (CNAP).

The NCSA is likely to be the umbrella organization for managing Cyber Crime prevention.

Refer Economic Times article here

The NCSA needs to function with a “Cyber Space Jurisdiction” cutting across the State Police jurisdictions and take over many of the inter-state Cyber crimes which have given rise to mafia centres in Bharatpur, Nuh etc., where the criminals cannot be controlled by the state police for various reasons. Similarly, CNAP should significantly reduce vishing frauds.

We need similar law and procedure to ensure that the e-mail sender’s identity and domain name identity are also displayed. Without waiting for Google and Proton Mail to introduce such systems, NCSA itself should introduce a “Digital Identity Gateway” which should be integrated with browsers and email clients and display the sender identity or domain name registrant identity.

Appropriate consent can be obtained from the user of the service without infringing on privacy.

Hopefully the rules of DPDPA 2023 will also be released during the same period. Naavi.org may shortly publish a document on the 26+ notifications that the Government needs to make, to indicate that this is not an insurmountable task given the right intentions.

There is also a proposal for an NFIR (National Financial Information Registry) Bill. This should change the current non-compliant system run by CIBIL and other rating agencies under the Credit Information Companies (Regulation) Act 2005, which has conveniently facilitated the siphoning off of data of Indian bank customers, worth lakhs of crores, to the USA. RBI, through its certification system, has failed to monitor the activities of these companies; today TransUnion, a US company, is the owner of TransUnion CIBIL, and personal information provided to bankers for the purpose of a loan or credit card is shared with the US entity without proper consent. This should stop, and the new Act is an opportunity to correct this monumental mistake.

Naavi


FDPPI Event in Delhi on May 12

The Delhi Chapter of FDPPI is conducting a workshop on DPDPA 2023 Implementation Challenges & Framework, on 12th May 2024 in New Delhi.

This will be a day-long workshop covering the DPDPA and its applicability, how GDPR and ISO 27701 certified companies can adopt DPDPA, and CXO insights on implementation guidance and framework.

This is an excellent opportunity to interact with the community of privacy professionals and gain more insights. The workshop will be led by Mr. Na.Vijayashankar (Naavi), Chairman, FDPPI and Mr Ramesh Venkataraman,  Director, FDPPI.

The workshop cost is INR 11,000/- (with GST) which includes the course content, lunch, and beverages at the venue.

Early bird offer till Apr 22, 2024 – 15% discount

Group enrolment of 3 or more –  20% discount

Register at https://forms.gle/SSWsV1W3pHWbkgbV9

For queries, write to delhi@fdppi.in/ fdppi4privacy@gmail.com


Forthcoming training events by Naavi

Naavi/FDPPI is organizing events on compliance with DPDPA at multiple centres for different audiences.

Since January, one day events were held in Pune, Mumbai, Ahmedabad and Kolkata.

Now the following events are planned.

13th April 2024: Hyderabad (CIOKLUB members only)

11th May 2024: Delhi (CIOKLUB members only)

12th May 2024: Delhi (Open paid event)

18th May 2024 : Coimbatore (CIOKLUB members only)

The one day programs will cover DPDPA law and Implementation through DGPSI framework.

Interested persons who would like to attend the May 12th event may contact FDPPI. Any other organization which would like to conduct programs for its members may also contact FDPPI.

Naavi


Should we start DPDPA Compliance today?

The five most frequent queries we receive from companies today are:

1. Is DPDPA 2023 effective today, or should we wait for the notification?

2. Should I start my compliance program today or wait till the rules are notified?

3. How long will the implementation typically take?

4. If we want to start a DPDPA compliance program what is the right framework to adopt?

5. Who has to lead the implementation in a company?

Let me try to add my views on each of the above queries.

1. Is DPDPA 2023 effective today, or should we wait for the notification?

DPDPA (Digital Personal Data Protection Act 2023) was passed by Parliament and the relevant gazette notification was issued on August 11, 2023, with the President giving assent to the Bill.

However, Section 1(2) of DPDPA 2023 states that the Act shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint; that different dates may be appointed for different provisions of the Act; and that any reference in such a provision to the commencement of the Act shall be construed as a reference to the coming into force of that provision.

One school of thought is that since the notification has not yet come, the Act is not yet in force.

This view cannot be brushed aside strictly from the legal perspective.

However, a prudent corporate entity does not wait for a penalty notice to be delivered or an arrest warrant to be issued before taking steps to be compliant.

Compliance with DPDPA 2023 is a “Risk Management Measure” for all companies, and it is all the more important for the Board, the Independent Directors, the CEO, the CRO and the CFO to recognize the possible impact of non-compliance.

We can procrastinate and say “Let the rules be notified”, “Let the DPB be appointed”, “Let a breach happen”, “Let me receive a notice” etc., and then say we can challenge the notice in a Court and escape.

But is this a wise strategy for a corporate entity? One needs to ponder.

It must be remembered that DPDPA 2023 is not a completely new legislation, as many may think. It is a continuation of ITA 2000/8, since one of its sections, Section 43A, is being replaced by the new Act. Section 43A expects companies handling sensitive personal data to follow reasonable security practices, and “reasonable” includes “due diligence”: acting in a manner considered best industry practice from the perspective of both the law that is effective today and the law that is pending notification of a date from which penalties may become effective.

Even otherwise, ITA 2000 has Section 43 along with Sections 66, 72A, 66C and 66D, read with Section 85, all of which may impose both civil and criminal penalties on the persons in charge of business in a company, the Directors, the Company Secretary etc.

ITA 2000 already has a regulatory mechanism which includes the Adjudicating Officer under Section 46, CERT-In under Sections 70A and 70B, and the Police. The Adjudicator can impose penalties; CERT-In can impose penalties and also recommend prosecution; and the Police can start prosecution in case there is a breach of data.

DPDPA is different in that liabilities under ITA 2000 may fructify only after a breach has taken place, while penalties under DPDPA can in many cases be imposed even if there is no breach. ITA 2000 is, however, riskier from another angle, since action under ITA 2000 could lead to imprisonment of corporate executives, which DPDPA 2023 does not contemplate.

Even after DPDPA 2023 comes into existence, ITA 2000 will not vanish and hence some of the liabilities under ITA 2000 may still be relevant for the companies.

Those companies which do not flag these risks today are probably those which will face the wrath of the law on a later day.

I therefore consider that wise managements need to treat DPDPA 2023 as, in principle, effective as of date.

2. Should I start my compliance program today or wait till the rules are notified?

Compliance is a journey, and the earlier one starts, the better. Even before the first controls are in place, an organization needs to “discover” the covered data and put the necessary classification in place.

Consent needs to be obtained from legacy data principals, and any delay will only add to the stock of legacy personal data that is not in conformity with DPDPA 2023. Previous consents, obtained on the basis of our understanding of GDPR or under the guidance of earlier privacy consultants, may not suffice for compliance with the new law.

A wise corporate executive will therefore start the compliance today and make necessary updations when the rules are notified. Such updations are a routine requirement and will continue.

3. How long will the implementation typically take?

It is difficult to say how much time it takes to achieve compliance. Normally it takes not less than 3 months for a medium sized company to take care of the basic requirements. Satisfactory implementation may take a further 6 months. The actual time depends on the size and operations of the organization.

4. If we want to start a DPDPA compliance program what is the right framework to adopt?

At present the only framework that is designed to meet the DPDPA compliance is DGPSI (Digital Governance and Protection Standard of India) developed by the professionals of FDPPI.

ISO 27701 which was developed for GDPR is not suitable for DPDPA compliance and no other framework is available.

DGPSI is a combination of compliance with DPDPA 2023, ITA 2000/8 as well as the draft BIS standard for Data Protection (released on August 10, 2023). The book “Guardians of Privacy-Comprehensive Handbook on DPDPA and DGPSI” would be the starting point for the journey to understand DGPSI. Getting certified with FDPPI as Certified DPO and Data Auditor (C.DPO.DA) is the next step.

5. Who has to lead the implementation in a company?

Most Indian companies do not have a DPO at present, and some of them have designated their CISO as the DPO. The DPO is the designated person in a company who needs to assume leadership for DPDPA compliance. Small companies which are not “Significant Data Fiduciaries” need not have a designated DPO but may designate a suitable person as a “DPDPA Compliance Officer”.

However, DGPSI recognizes that compliance with DPDPA is an enterprise-level responsibility and hence the implementation responsibility has to be shared. The Apex Governance Committee consisting of different stakeholders and the policy of “Distributed Responsibility” suggested in DGPSI make implementation a joint responsibility of the governance team, though the DPO remains the leader.

The starting point for organizations today may actually be with the CFO and CRO, who have to flag the risk of penalties and start working on cyber insurance and the appointment of a DPO.

The lead, therefore, is with the Board of the company, which should do a quick business impact analysis and decide how to move ahead with compliance.

I welcome any queries on the above and am happy to debate any disagreements.

Naavi


Generative AI and EU AI Act

One of the major concerns of society regarding AI is the “disintermediation of human beings from the decision process”. The possibility of an AI system becoming sentient at some point in the future remains the long-term risk.

In the mid and short term, AI already poses the risks of “biased outputs due to faulty machine training”, “automated decision making”, “poisoning of training models”, “behavioural manipulation”, “neuro-rights violations” and “destruction of the human right of choice”.

One specific area of concern is the development of large language models that predict and generate content with a creative license. This creative license leads to “hallucination” and “rogue behaviour” in an LLM like ChatGPT or Bard, and could create more problems when such software is embedded into humanoid robots.

Industrial robots, on the other hand, are less prone to such rogue behaviour on their own (except when they are hacked), since the creative license given to an industrial robot is limited and they operate in the ANI (Artificial Narrow Intelligence) area.

In India the use of AI to generate “deep fakes” and “fake news” is already in vogue. False data is being fed into the internet on a large scale in the hope that it will poison learning systems which parse information from internet resources such as websites, blogs, Instagram, X, etc. There are many declared and undeclared “parody” accounts which boldly state falsehoods that a casual observer may take as true. The sole purpose of such accounts and content is to poison the learning systems that extract public data and incorporate it into news delivery systems. Many AI systems generate content for such fake X accounts, so that false information feeds back into the training data and generates further fake news.

Unfortunately, the Indian Supreme Court, dancing to the tune of the anti-national lobby, frustrated the efforts of the Government to call out fake narratives even when such narratives are in the name of Ministries and Government departments.

The EU AI Act recognizes the risk of generative AI and identifies such systems as “high risk” AI, underscoring “high impact capabilities” and “systemic risk at Union level”.

Even under prohibited AI practices, EU AI act includes such AI systems that deploy

“subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;”

Many of the LLMs could be posing such risks through their predictive generation of content, either as text or speech. A “deepfake” per se (for fun) may not be classified as “high risk” under the EU AI Act, but tagged with its usage, a deepfake can be considered “high risk” or even a “prohibited” practice.

Title VIIIA specifically addresses general purpose AI models. The compliance measures related to these impact the developers more than the deployers, since deployers would be relying upon the conformity assessment assurances given by the developers.

Where an AI system already in the market has not been classified as a high-risk AI system but is modified by an intermediary in such a way that it becomes one, the intermediary will itself be considered the “provider” (developer).

1. A general purpose AI model shall be classified as general-purpose AI model with systemic risk if it meets any of the following criteria:
(a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;
(b) based on a decision of the Commission, ex officio or following a qualified alert by the scientific panel that a general purpose AI model has capabilities or impact equivalent to those of point (a).

2. A general purpose AI model shall be presumed to have high impact capabilities pursuant to point a) of paragraph 1 when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25.*
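To put the 10^25 FLOPs threshold in perspective, a common rule of thumb (not part of the Act) estimates training compute as roughly 6 × parameters × training tokens. A minimal sketch, where the heuristic and the example model sizes are illustrative assumptions, not figures from the Act:

```python
# Rough training-compute estimate using the common 6*N*D heuristic
# (FLOPs ~ 6 x parameters x training tokens). The heuristic and the
# example figures below are illustrative, not taken from the Act.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the EU AI Act presumption

def training_flops(params: float, tokens: float) -> float:
    """Estimated total training compute in floating point operations."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """Would the model be presumed to have 'high impact capabilities'?"""
    return training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD

# A hypothetical 7-billion-parameter model trained on 2 trillion tokens:
print(training_flops(7e9, 2e12))            # about 8.4e22, below 1e25
print(presumed_systemic_risk(7e9, 2e12))    # False
# A hypothetical 1-trillion-parameter model on 10 trillion tokens:
print(presumed_systemic_risk(1e12, 10e12))  # True: 6e25 exceeds 1e25
```

The comparison shows why the presumption currently bites only on the very largest frontier-scale training runs.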

According to Article 52b:

Where a general purpose AI model meets the requirements referred to in points (a) of Article 52a(1), the relevant provider shall notify the Commission without delay and in any event within 2 weeks after those requirements are met or it becomes known that these requirements will be met. That notification shall include the information necessary to demonstrate that the relevant requirements have been met. If the Commission becomes aware of a general purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk.

The Commission shall ensure that a list of general purpose AI models with systemic risk is published and shall keep that list up to date, without prejudice to the need to respect and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law.

Article 52c provides the obligations for providers of general purpose AI models, which may be relevant to such providers and to persons who build their products on top of such models and market them under their own brand name. (We are skipping further discussion on this since we are focussing on the user’s compliance requirements for the time being.)

It may however be noted that the MeitY advisory of March 4th, reproduced below, also requires notification to MeitY and registration of the person accountable for the generative AI software.

This notification has been made under ITA 2000 as an Intermediary guideline treating the deployer of the AI as an intermediary.

Naavi

(More to follow…)

Reference: *

A floating-point operation is any mathematical operation (such as +, -, *, /) or assignment that involves floating-point numbers (as opposed to binary integer operations).

Floating-point numbers have decimal points in them. The number 2.0 is a floating-point number because it has a decimal in it. The number 2 (without a decimal point) is a binary integer.

Floating-point operations involve floating-point numbers and typically take longer to execute than simple binary integer operations. For this reason, most embedded applications avoid widespread use of floating-point math in favour of faster, smaller integer operations.
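As an illustration of how such operations are counted (our sketch, not part of the reference note), a dot product of two length-n vectors performs n multiplications and n-1 additions, i.e. 2n-1 floating-point operations:

```python
def dot(a, b):
    """Dot product of two equal-length vectors of floats.

    For vectors of length n this performs n multiplications and
    n - 1 additions, i.e. 2n - 1 floating-point operations (FLOPs).
    """
    total = 0.0                 # 0.0 is a float; bare 0 would be an integer
    for x, y in zip(a, b):
        total += x * y          # one multiply + one add per element
    return total

def dot_flops(n: int) -> int:
    """FLOP count of a length-n dot product: n multiplies + (n-1) adds."""
    return 2 * n - 1

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
print(dot_flops(3))                            # 5
```

Training-compute figures such as the 10^25 FLOPs threshold in the EU AI Act are arrived at by tallying operations of exactly this kind across the whole training run.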
