New Dimensions of Privacy on the Horizon

India is on the eve of passing the PDPB 2019, a law set to herald a new era of data processing in which every processing activity must first recognise whether the data is personal or non-personal and, if it is personal, what consent is associated with it, since that consent governs how the data may be collected, processed, disclosed or destroyed. Any processing outside the scope of the consent would be ultra vires the data protection law, except to the extent otherwise permitted by a “Legitimate Interest” such as compulsion of other laws, national security or vital interests.

“Data” is nothing but a series of binary values stored in an appropriate medium, which can be gathered, arranged, associated with pre-determined interpretation protocols and converted into human experience in the form of text, audio or video. In the coming days, experience in the form of touch or smell may also be simulated. Further down the road, wearing a brain-connected hat, experiences could be fed directly from a computing device into the human brain, invoking the physical sensory organs such as the eye, ear, skin, tongue and nose. The MetaVerse is likely to take augmented and virtual reality to another level, where human experience and computer outputs merge.
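As a minimal illustration of this point (the byte values below are arbitrary, chosen only for the example), the same stored bits yield entirely different human experiences depending on which interpretation protocol is applied to them:

```python
# The same stored bytes, read under two different interpretation protocols.
raw = bytes([72, 105, 33])  # a series of binary values in a storage medium

# Protocol 1: interpret as UTF-8 text -> a human-readable message
print(raw.decode("utf-8"))  # Hi!

# Protocol 2: interpret as unsigned 8-bit integers -> e.g. pixel intensities
print(list(raw))            # [72, 105, 33]

# With no agreed protocol, the bits convey no human experience at all.
```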

In the current understanding of Privacy in the data protection context, we interpret the “Right to Privacy” as a “Right to choose how personal information may be collected, used/processed and disposed/destroyed”. Hence the task of data protection boils down to collecting the consent and structuring the data protection process around that consent. Though the consent has to flow from the data principal to the data fiduciary, more often it flows as a “Request for Consent” from the data fiduciary to the data principal, followed by the conveying of the consent.
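A hypothetical sketch of this consent-bound processing follows; the class, purpose names and legitimate-interest grounds are illustrative assumptions, not terms drawn from the PDPB text:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Consent:
    """Consent conveyed by a data principal to a data fiduciary."""
    data_principal: str
    permitted_purposes: Set[str] = field(default_factory=set)

# Grounds that permit processing even outside the consent.
LEGITIMATE_INTEREST_GROUNDS = {"legal_compulsion", "national_security", "vital_interest"}

def may_process(consent: Consent, purpose: str, ground: Optional[str] = None) -> bool:
    """Processing within the consent is valid; outside it, only a
    recognised legitimate-interest ground keeps it from being ultra vires."""
    return purpose in consent.permitted_purposes or ground in LEGITIMATE_INTEREST_GROUNDS

consent = Consent("principal-001", {"billing"})
print(may_process(consent, "billing"))                        # True: within consent
print(may_process(consent, "marketing"))                      # False: ultra vires
print(may_process(consent, "marketing", "legal_compulsion"))  # True: legitimate interest
```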

This scenario is Privacy 1.0, into which the Justice Puttaswamy judgement can be fitted.

In this concept, which we identify as Privacy 1.0,

“The Right to Privacy would be the right of a data principal to determine the collection, use, disposal and disclosure of personally identifiable information to other human beings.”

Some laws, such as the GDPR, restrict solely automated decision making, including profiling, permitting it only with consent or under other narrow exceptions.

Hence Privacy 1.0 is completely dependent on the consent sought from and provided by the data principal, and includes a right to object to the machine reading of personal information.

Entry of AI and ML

With the advent of AI and ML, a situation may arise in which identifiable personal data is collected and processed by a machine, while the generated output contains processed data that is not identifiable to any data principal.

In this scenario the AI device sees the personal data, but the human being who views the output sees only anonymized data.

In such a scenario, the privacy considerations need to be assessed in two respects. First, the admin of the AI device may use his admin access to extract the data in identifiable form, and therefore the processing may be considered “accessible by human beings”. It is immaterial whether the admin actually views it or not; the possibility exists, and this has to be factored into the consent.

Second, the “automatic processing” may be attributed to the owner of the process and hence treated as equivalent to human processing.

Privacy 2.0 concept

In practice, it is possible to design technical processes in such a manner that identifiable information is processed by the device while the output is devoid of any identifiability. In such a case there is no exposure of identifiable information to a human being.
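A minimal sketch of such a design, assuming a simple aggregation pipeline (the records, field names and suppression threshold are illustrative assumptions): the device ingests identifiable records, but only aggregate figures with no identifiers ever reach a human viewer.

```python
from collections import Counter

# Identifiable personal data, visible only to the machine.
records = [
    {"name": "Asha",  "city": "Bangalore", "purchase": 1200},
    {"name": "Ravi",  "city": "Bangalore", "purchase": 800},
    {"name": "Meena", "city": "Chennai",   "purchase": 950},
]

def aggregate_only(records, min_group_size=2):
    """Emit per-city counts, suppressing groups small enough to re-identify."""
    counts = Counter(r["city"] for r in records)
    return {city: n for city, n in counts.items() if n >= min_group_size}

# The human-facing output carries no identifiers.
print(aggregate_only(records))  # {'Bangalore': 2}
```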

According to Naavi’s theory of data, “Data is created by technology but interpreted by humans”. Hence, if data cannot be interpreted by humans, it is nothing but “binary junk”.

In the same way, if processed personal data cannot be identified by a human being, it is equivalent to “Anonymized personal data”, which is “Non-Personal Data” by definition.

If this concept is accepted in the upcoming data protection laws, we will enter the era of “Privacy 2.0”, where the right to privacy would be the right of a data principal to determine the collection, use, disposal and disclosure of personally identifiable information to other human beings, excluding use and disclosure by computing devices.

Privacy 3.0

The industry is now entering a scenario where Cobots will closely interact with human beings and may have a Cobot identity akin to that of another human being.

Do such “Cobots” have privacy rights? That is the new question that will arise.

If a government can grant citizenship to a humanoid robot (e.g., Sophia, a humanoid robot created by a Hong Kong company, was given citizenship by the Government of Saudi Arabia), then the demand for “Privacy” by Cobots may also be raised sooner or later.

The “Second Life” concept and the “immersive gaming experience” are now being expanded with virtual reality into “cognitive systems in virtual reality”. In such systems, one can touch and feel the presence of another individual across the table, initially in the form of standardized avatars like those of secondlife.com, and later as replicas of the characters’ actual appearance. With deepfake technology, realistic video avatars are possible. A further merger of these technologies will enable individuals to travel across space and be present as 2D characters on computer screens or as 3D hologram images.

Perhaps Facebook’s transformation into Meta may be a step in this direction.

Will the current concept of Privacy 2.0 be adequate when the universe has “real holographic characters” being transmitted as “data” and processed by devices?

Perhaps we need a new definition of privacy rights, a Privacy 3.0, in which the right of choice is expanded from “human and machine recognition of identity” as in Privacy 1.0, beyond “only human recognition of identity” as in Privacy 2.0, to address the activities of digital clones.

Privacy 3.0, as a right, should extend to the creation of simulated avatars of the data principal and to the activities of the avatar in the virtual reality environment.

Maybe the future is a little fuzzy…

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.