Visit our virtual stall at Bangalore Tech Summit 2021


Privacy Notice is an RFP for personal information

(Here are some emerging thoughts on Privacy meant for further discussion during IDPS 2021…Naavi)

The Indian Data Protection Law in the form of PDPB 2019 is unique in introducing the concept of the Data Controller as a "Data Fiduciary" and the Data Subject as a "Data Principal".

The renaming of the Data Controller as a Data Fiduciary indicates that the Data Controller is a trustee of the information shared with him. This imposes a responsibility higher than what a "Data Controller" is normally expected to shoulder. Since the Data Fiduciary is a trustee, the consent document is in effect a trust deed, and a duty is cast on the Data Fiduciary to take decisions in the interest of the data subject, who is aptly renamed the "Data Principal".

Additionally, there are a few more nuances in PDPB 2019 that we need to take note of.

The Privacy by Design policy envisaged in PDPB 2019, which is to be submitted to the Data Protection Authority and certified, along with a DTS (where possible), is like a prospectus for raising "personal data" from the public for use in business, similar to a prospectus with a "rating" for an IPO.

In this context the “Privacy Notice” is like a Request for Proposal (RFP) based on which the data principal provides his personal information to the data fiduciary.

In a financial IPO, there is a return expectation from the investment. In the sharing of personal information by the Data Principal, the return is the service benefit that the Data Fiduciary offers.

The RBI has now adopted DEPA (Data Empowerment and Protection Architecture), wherein the term "Consent Manager" is used. PDPB 2019 also uses the term "Consent Manager". However, the two concepts differ in a way that needs to be appreciated.

Under RBI's AA proposal and DEPA, the "Consent Manager" is an intermediary for the transmission of information from the Data Provider to the Data User. Under PDPB 2019, however, the consent manager is a data fiduciary himself: he acts as a repository of personal information on behalf of the data principal and provides it to the data fiduciary under the data principal's discretionary control.

Since the "Consent Manager" under PDPB 2019 has visibility of the personal data, he carries the responsibility of a data fiduciary. The Consent Manager under DEPA, on the other hand, has no visibility of the information and acts only as a technology platform for the transmission of identifiable personal information.
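The architectural difference can be made concrete with a minimal sketch (the class and method names here are hypothetical, not drawn from either specification): the DEPA-style manager is a blind pipe, while the PDPB-style manager is a repository with full visibility of the data it holds.

```python
from dataclasses import dataclass


@dataclass
class PersonalData:
    principal_id: str
    payload: dict


class DataUser:
    """Stand-in for the entity that ultimately uses the data."""
    def receive(self, blob: bytes) -> None:
        pass


class DepaConsentManager:
    """DEPA-style: a blind pipe. The payload passes through encrypted,
    so the manager never has visibility of its content."""
    def relay(self, encrypted_blob: bytes, data_user: DataUser) -> None:
        data_user.receive(encrypted_blob)  # no decryption key held here


class PdpbConsentManager:
    """PDPB 2019-style: a repository acting on behalf of the data
    principal. It holds readable personal data, and hence carries
    the responsibilities of a data fiduciary."""
    def __init__(self) -> None:
        self._store = {}

    def deposit(self, data: PersonalData) -> None:
        self._store[data.principal_id] = data  # visible to the manager

    def release(self, principal_id: str, consented: bool):
        # Provided only at the data principal's discretion.
        return self._store.get(principal_id) if consented else None
```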

Concept of Privacy 1.0 and Privacy 2.0

In our previous article "New Dimensions of Privacy on the horizon", Naavi extended his Theory of Data to redefine the concept of Privacy, calling the present definition Privacy 1.0 and indicating the need for a new definition, Privacy 2.0. To some extent this need is reflected in the difference between the two types of Consent Managers envisaged under DEPA and PDPB 2019.

Under Privacy 1.0, the “Right to Privacy” is the “Right of Choice of a Data Principal to determine how his personal data would be collected, used and disposed of by a third party who could be a human or an automated device”.

Under Privacy 2.0, a case was made out to recognize that there are technology processes which may convert identifiable personal information to a state where no individual can be identified.

Hence a proposition was made that visibility of identity to such automated processes should not be considered equivalent to "access to identifiable personal data by a human being".

In other words, this definition of Privacy 2.0 “Excludes” processing by an automated device where the output is anonymized as per the standards to be fixed by the Data Protection Authorities.

Data Conversion Algorithms

The technology process which converts identifiable raw personal data to anonymized processed data is a combination of "Anonymization" and "Further processing".

"Anonymization" is a process in which the identity parameters are segregated from a data set and irrevocably destroyed. Subsequently the data may be processed further by other operations such as filtering, aggregation, etc.

(De-identification and Pseudonymization are similar processes in which the identity parameters are also segregated but not destroyed. They are kept separately, and a mapped proxy ID is inserted into the data set in place of the identity parameters removed from it. Some consider that anonymization is also reversible and hence that there is not much difference between Anonymization and Pseudonymization. However, reversing anonymization is like decrypting encrypted information: it is possible, but rendered infeasible except with special effort, and if such effort is malicious it is considered a punishable offence. Pseudonymization, by contrast, is a security process to mitigate the risk of compromise of identifiable data; it is reversible by definition, and reversal is not an offence… Naavi)
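The distinction can be shown in a minimal sketch (the record and field names are purely illustrative; real anonymization must also resist re-identification by inference, not merely drop identity fields):

```python
import secrets


def anonymize(record: dict, identity_fields: set) -> dict:
    """Strip the identity parameters irrevocably; no mapping is kept."""
    return {k: v for k, v in record.items() if k not in identity_fields}


def pseudonymize(record: dict, identity_fields: set, vault: dict) -> dict:
    """Replace identity parameters with a proxy ID. The mapping is kept
    separately in 'vault', so the process is reversible by design."""
    proxy_id = secrets.token_hex(8)
    vault[proxy_id] = {k: record[k] for k in identity_fields if k in record}
    out = {k: v for k, v in record.items() if k not in identity_fields}
    out["proxy_id"] = proxy_id
    return out


record = {"name": "A. Person", "phone": "9812345678", "purchase": 1200}
vault = {}
print(anonymize(record, {"name", "phone"}))            # {'purchase': 1200}
print(pseudonymize(record, {"name", "phone"}, vault))  # purchase + proxy_id
```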

There are, however, a number of other data processing techniques where the input is personally identifiable data but the output is non-personally identifiable data. Such a process is anonymization plus one or more other operations, such as aggregation or filtering.

Big Data companies need to use such processes to add value to the data. After such a process, the processing company or the algorithm can either destroy the identity parameters irrevocably or retain them in a mapped proxy form. The destruction of identity may be embedded in the process itself so that there is no visibility of the identity to a human being at any point.

Accordingly the process can be termed "Anonymization plus" and can be kept out of privacy concerns.
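A sketch of what such an "Anonymization plus" process might look like (illustrative field names and aggregation): the identity parameters exist only inside the function's scope and are never written out, so the output carries no identity at all.

```python
from collections import defaultdict
from typing import Iterable


def anonymization_plus(records: Iterable[dict]) -> dict:
    """Aggregate identifiable records into per-city average spend.

    The identity fields exist only inside this function's scope; the
    output carries no identity parameters, so nothing identifiable is
    ever persisted or shown to a human operator."""
    spends = defaultdict(list)
    for r in records:          # r contains identity fields such as 'name'
        spends[r["city"]].append(r["spend"])
    return {city: sum(v) / len(v) for city, v in spends.items()}


raw = [
    {"name": "A", "city": "Bengaluru", "spend": 100.0},
    {"name": "B", "city": "Bengaluru", "spend": 300.0},
    {"name": "C", "city": "Chennai", "spend": 250.0},
]
print(anonymization_plus(raw))   # {'Bengaluru': 200.0, 'Chennai': 250.0}
```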

Such an automated process is considered "processing" under GDPR-like data protection laws and not as an anonymization-like activity. There is a need for rethinking this position.

Anonymization is considered to convert "Personal Data" into "Non Personal Data". As long as the anonymization meets the standards fixed by the Data Protection Authority, it should ideally be recognized as a "Right of the Business".

It is a legitimate claim of a business entity (Data Fiduciary) that if it can effectively anonymize the personal data, it should be allowed to use the anonymized data for any purpose, with or without the data principal's consent for the anonymization.

This principle is similar to the banking principle under which the depositor lends money to the Bank with a right to demand its return but cannot dictate to whom or for what purpose the Bank should lend it. This reflects the principle of "fungibility" of money in the hands of the Bank.

Privacy laws, however, seem to treat "personal data" as deposited for a specific purpose and hence not fungible. There is a need to change this perspective and to consider that, after anonymization, the resulting data should be treated as the property of the data fiduciary, fungible with the rest of the anonymized data in its hands, all of which becomes "Non Personal Data".

Perhaps it is time for the professional community to start discussing the principle of Privacy 2.0, which provides some benefits to the Big Data industry without sacrificing privacy interests.

Presently, the requirements for processing personal data through "Anonymization plus" processes can be incorporated in the Privacy Notice/RFP for personal data. However, it would be good if this is formalized by recognition of Privacy 2.0 principles in the data protection regulations themselves at the next available opportunity.

Naavi urges the professionals who discuss different aspects of privacy in IDPS 2021 to consider throwing some light on the above emerging thought.

Naavi

9 Panel discussions in three days on different aspects of Data Privacy

New Dimensions of Privacy on the horizon

India is on the eve of passing PDPB 2019, a law set to herald a new era of data processing in which the processor must recognize whether data is personal or non-personal and, if it is personal, what consent is associated with it governing how the data may be collected, processed, disclosed or destroyed. Any processing outside the "consent" would be ultra vires the data protection law, except to the extent otherwise permitted by "legitimate interest", which includes "compulsion of other laws", national security, "vital interests", etc.

"Data" is nothing but a series of binary values stored in an appropriate medium, which can be gathered, arranged, associated with pre-determined interpretation protocols and converted into human experience in the form of text, audio or video. In the coming days, experiences in the form of touch or smell may also be simulated. Further down the road, wearing a brain-connected hat, experiences could be fed directly from a computing device into the human brain, invoking the physical sensory organs such as the eye, ear, skin, tongue and nose. The MetaVerse is likely to take augmented virtual reality to another level, where human experience and computer outputs merge.
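A trivial illustration of the interpretation-protocol point above: the same four bytes become quite different human experiences depending on the protocol applied (a Python sketch).

```python
import struct

data = bytes([72, 105, 33, 0])

# Interpreted as ASCII text: 'Hi!' followed by a NUL character.
print(data.decode("ascii"))

# The same bytes interpreted as a little-endian 32-bit integer.
print(struct.unpack("<I", data)[0])  # 2189640

# And as four 8-bit grayscale pixel intensities.
print(list(data))                    # [72, 105, 33, 0]
```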

In the current understanding of Privacy in the Data Protection context, we interpret the "Right to Privacy" as a "right to choose how personal information may be collected, used/processed and disposed of/destroyed". Hence the task of Data Protection boils down to collecting the consent and structuring the data protection process around it. Though the consent has to flow from the data principal to the data fiduciary, it more often begins as a "Request for Consent" from the Data Fiduciary to the data principal, followed by the conveying of the consent.

This scenario is Privacy 1.0, into which the Justice Puttaswamy judgement can be fitted.

In this concept, which we identify as Privacy 1.0,

“The Right to privacy would be the right of a data principal to determine the collection, use and disposal of personally identifiable information to other human beings.”

Some laws like GDPR restrict automated decision-making, including profiling, except under consent or other specified grounds.

Hence Privacy 1.0 is completely dependent on the consent sought for and provided by the data principal, and includes a right to interfere in machine reading of personal information.

Entry of AI and ML

With the advent of AI and ML, a situation may arise where identifiable personal data is collected and processed by a machine, and the output generated contains processed data that is not identifiable to any data principal.

In this scenario the AI device sees the personal data, but the human being who views the output sees only anonymized data.

In such a scenario, privacy considerations need to be assessed in two respects. Firstly, the admin of the AI device may use his admin access to extract the data in identifiable form, and therefore the processing may be considered "accessible by human beings". It is immaterial whether the admin actually views it or not; the possibility exists, and this has to be factored into the consent.

Secondly, "automatic processing" may be attributed to the owner of the process and hence treated as equivalent to human processing.

Privacy 2.0 concept

In practice, it is possible to design technical processes in such a manner that identifiable information may be processed by the device but the output may be devoid of any identifiability. In such a case there is no exposure of identifiable information to a human being.
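One possible shape of such a process (a sketch only; a real deployment would also have to lock down the admin-access and logging concern raised above): the device consumes identifiable records as a stream and retains nothing but running aggregates, so no identifiable record is ever materialized in storage or output.

```python
from typing import Iterator


def count_visits_by_hour(events: Iterator[dict]) -> dict:
    """Consume identifiable events one at a time, keeping only a
    running count per hour. No identifiable record is stored, logged
    or emitted, so the human reading the output sees only
    non-personal data."""
    counts = {}
    for e in events:                     # e may carry 'user_id' etc.
        counts[e["hour"]] = counts.get(e["hour"], 0) + 1
        # e goes out of scope here; nothing identifiable is retained
    return counts


stream = iter([
    {"user_id": "u1", "hour": 9},
    {"user_id": "u2", "hour": 9},
    {"user_id": "u1", "hour": 10},
])
print(count_visits_by_hour(stream))      # {9: 2, 10: 1}
```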

According to Naavi's Theory of Data, "Data is created by technology but interpreted by humans". Hence, if data cannot be interpreted by humans, it is nothing but "binary junk".

In the same way, if processed personal data cannot be linked to an identifiable individual by a human being, it is equivalent to "anonymized personal data", which is "Non Personal Data" by definition.

If this concept is accepted in the upcoming data protection laws, then we will be entering the era of "Privacy 2.0", where the right to privacy would be the right of a data principal to determine the collection, use and disposal of personally identifiable information to other human beings, but would exclude use and disclosure by computing devices.

Privacy 3.0

The industry is now entering a scenario where Cobots will be closely interacting with human beings and may have a Cobot identity akin to that of another human being.

Do such "Cobots" have privacy rights? That is the new question that will arise.

If a Government can provide citizenship to a humanoid robot (e.g. Sophia, a humanoid robot created by a Hong Kong company, which was given citizenship by the Government of Saudi Arabia), then the demand for "Privacy" by "Cobots" may also be raised sooner or later.

The "Second Life" concept and the "immersive gaming experience" are now being expanded with virtual reality into "cognitive systems in virtual reality". In such systems one can touch and feel the presence of another individual across the table, initially as standardized avatars like those of secondlife.com and later as replicas of the characters' actual appearance. With deep-fake technology, real-looking video avatars are possible. A further merger of these technologies will enable individuals to travel across space and be present as 2D characters on computer screens or as 3D hologram images.

Perhaps FaceBook’s transformation to MetaVerse may be a step in this direction.

Will the current privacy concept of Privacy 2.0 be adequate when the universe has "real holographic characters" being transmitted as "data" and processed by devices?

Perhaps we may need a new definition of privacy rights, Privacy 3.0, where the right of choice is expanded from "human and machine recognition of identity" as in Privacy 1.0, past "only human recognition of identity" as in Privacy 2.0, to address the activities of digital clones.

This Privacy 3.0 right should extend to the creation of simulated avatars of the data principal and to the activities of the avatar in the virtual reality environment.

Maybe the future is a little fuzzy…

Naavi


If you are a wise CEO..

If I were an HR Manager or a CEO, one of the thoughts that would cross my mind is how much it would cost to conduct awareness training on Data Privacy for all my employees.

If I have 500 employees to be trained on PDPB 2019, GDPR, Pseudonymization, Employee Privacy, the implementation framework, Data Valuation, etc., it would perhaps cost at least Rs 10,000/- per employee per day.

However, FDPPI, the organization dedicated to spreading awareness of Data Protection in India, is giving away 3 days of training as a free service to the nation.

The training is therefore worth Rs 30,000/- per delegate for the three days, or Rs 15,000,000 (Rs 1.5 crore) for 500 employees.

Moreover, the 3 days of training are being delivered by 52 eminent experts, so it should be worth much more than Rs 1.5 crore for 500 trainees.

If you are a wise CEO, you should consider deputing as many people as possible from your organization to attend IDPS 2021.

Contact Naavi for more information.

Naavi

The Stage is Set for IDPS 2021… Register your presence now
