The Shape of Things to Come..The New Data Protection Act of India-7 (Clarifications-Privacy)

(Continued from the previous article)

P.S: This series of articles is an attempt to place some issues before the Government of India which promises to bring a new Data Protection Law that is futuristic, comprehensive and Perfect. 

In our previous article, we discussed a new concept of “Privacy” in which I advocated that “Data is in the beholder’s eyes”. On that view, data in binary form which is accessed by an algorithm and processed without ever being released out of the algorithm as “identified personal data” should be considered as not “accessing” personal data for the purpose of personal data protection.

I am aware that supervisory authorities under GDPR would consider that, when identified data is accessed by an algorithm without consent, it amounts to “automated processing” and constitutes a breach of identifiable personal data.

What I am advocating is that this approach needs to be changed.

“Data” is “Data” only when it is in a form in which a human can understand it.

Identifiable personal data in binary form which is not accessible by a human being, but only by an algorithm which ensures that even the admin of the algorithm cannot view it in identified form,

and

which is subsequently released for human consumption only in anonymized form, should be considered as not having been “accessed” and hence not an infringement of Privacy.

In GDPR, this view is not supported. However, GDPR recognizes “Anonymisation” and considers “Anonymized personal data” as “Non Personal Data”. If viewing by an algorithm in identified form is considered a “breach”, then all anonymization processes are actually processing of personal information, and hence must be considered an “access” of identified personal data requiring consent.

I am in agreement with the views expressed in the article “Does anonymization or de-identification require consent?”

According to this article, in Opinion 05/2014 of the Article 29 Working Party on Anonymisation Techniques, the Working Party stated:

“The Working Party considers that anonymisation as an instance of further processing of personal data can be considered to be compatible with the original purposes of the processing but only on condition the anonymisation process is such as to reliably produce anonymised information in the sense described in this paper.”

But if this view is correct, then access of identified personal data by an automated processing algorithm is not an objectionable access. There is therefore an inherent conflict within GDPR.

This principle is extended in the concept which I am trying to advocate as Privacy 2.0, drawing the principle that whenever any process accesses identifiable personal data and returns anonymised personal data, such that even the algorithm administrator has no access to the identified personal data, the process is compatible with the view that there is no infringement of privacy. Such a process will require that, after the algorithm removes all identifiers, the identifiers are irrevocably destroyed and are not associated with the output of the process.
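To make the principle concrete, here is a minimal sketch in Python. It is only illustrative, with hypothetical field names, and is not a prescription for any particular implementation: the identifier fields are dropped inside the function and never stored anywhere, so only the anonymised output is ever available for human consumption.

```python
# Illustrative sketch only; the identifier field names are hypothetical.
IDENTIFIERS = {"name", "email", "phone", "address"}

def anonymise(records):
    """Strip identifier fields inside the process. The stripped values are
    never logged or persisted, so they are irrevocably lost once the
    function returns and cannot be re-associated with the output."""
    return [
        {key: value for key, value in record.items() if key not in IDENTIFIERS}
        for record in records
    ]

if __name__ == "__main__":
    incoming = [
        {"name": "A. Person", "email": "a@example.com", "age": 34, "city": "Bengaluru"},
        {"name": "B. Person", "email": "b@example.com", "age": 29, "city": "Mumbai"},
    ]
    # Only the anonymised view ever reaches a human being.
    print(anonymise(incoming))
```

Whether the surviving fields are truly anonymous would, of course, still have to satisfy the “reliably produce anonymised information” test of Opinion 05/2014.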

This is an important clarification which I am advocating in the New Data Protection Act, as part of the definitions of “Privacy” and “Sharing”.

In the current versions of both GDPR and PDPB 2019, the law does not define “Privacy” and proceeds to speak of various measures to protect “Information Privacy”. It is not fair on the data processing industry to require it to protect a “Right of Privacy” which even the experts in the judiciary are not confident of defining. We therefore strongly feel that a definition of privacy is essential in this data protection law, even though most of the law relates to “Information Privacy”.

The privacy definition clause defines “Physical Privacy”, which is the old concept that the Supreme Court upheld in the Kharak Singh case. “Mental Privacy” is what Justice Chandrachud defined in the Puttaswamy judgement. The same Puttaswamy judgement also simplified the definition of privacy into “protection of the right of choice” as expressed, which leads to the “Consent” and “Lawful basis for processing” requirements. Further, the Puttaswamy judgement as well as GDPR mostly addressed issues of protecting “Information Privacy”, which is the protection of the data subject’s right of choice about the use of his personal information when that information is in electronic form. Though stray articles/sections state that the principles of the law are also applicable to manual processing or filing systems, the essence of the law is the protection of personal information in electronic form.

We have tried to remove these attempts of law makers to hide behind vague concepts of “Privacy” and catch the industry for non-compliance at a time of their choosing, just like the traffic cop who prefers to hide behind a bend and catch violators rather than stand in the middle of the road and guide the traffic towards compliant driving.

In GDPR we do differentiate automated processing leading to “Profiling” from “automated decision making” without human involvement, but both require a lawful basis. This view, extended to other data protection laws, has resulted in a general belief that access of identifiable personal data by any automated process requires a consent/lawful basis, failing which it will be considered a data breach.

We are challenging this interpretation and seeking validation in the new data protection law.

(Out of abundant caution, let me clarify that what is suggested here is for the forthcoming new law in India and does not alter the established position under GDPR compliance, where automated processing/decision making requires a consent/lawful basis.)

Naavi

P.S: These discussions are presently for debate and are a work in progress, awaiting more inputs for further refinement. It is understood that the Government may already have a draft and may completely ignore all these recommendations. However, it is considered that these suggestions will assist in the development of “Jurisprudence” in the field of Data Governance in India, and hence these discussions will continue until the Government releases its own version for further debate. Other professionals who are interested in participating in this exercise, particularly Research and Academic organizations, are invited to participate. Since this exercise is too complex to institutionalize, it is being presented at this stage as only the thoughts of Naavi. Views expressed here may be considered as the personal views of Naavi and not those of FDPPI or any other organization that Naavi may be associated with.

1. Introduction
2. Preamble
3. Regulators
4. Chapterization
5. Privacy Definition
6. Clarifications-Binary
7. Clarifications-Privacy
8. Definitions-Data
9. Definitions-Roles
10. Exemptions-Privacy
11. Advertising
12. Dropping of Central Regulatory authority
13. Regulation of Monetization of Data
14. Automated means ..

 


The Shape of Things to Come..The New Data Protection Act of India-6 (Clarifications-binary)

(Continued from the previous article)

P.S: This series of articles is an attempt to place some issues before the Government of India which promises to bring a new Data Protection Law that is futuristic, comprehensive and Perfect.

In our previous article, we discussed the definition of Privacy. One of the new concepts we tried to bring out is that “Sharing” should be recognized only when identified personal data is made accessible to a human being.

In other words, if personally identified data is visible to an algorithm and not to a human, it is not considered sharing of identified data if, after the processing of the personal data by the algorithm, the identity is killed within the algorithm and the output contains only anonymised information.

Typically such a situation arises when a CCTV camera captures a video. Obviously the video captures the face of a person and therefore captures critical personal data. However, if the algorithm does not have access to a database of faces against which the captured picture can be compared and identified, the captured picture is only “Orphan Data” which has an “identity parameter” but is not “identifiable”. The output, say a report of how many people passed a particular point as captured by the camera, is devoid of the identity and is therefore not personal information.

The algorithm may have an AI element where the captured data is compared against a database of known criminals; if there is a match, the data is escalated to a human being, whereas if there is no match, it is discarded. In such a case the discarded information does not constitute personal data access, while the smaller set of identified data passed on to human attention alone constitutes “Data Access” or “Data Sharing”. A sketch of this flow follows.
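This flow can be sketched as below. It is a hypothetical illustration, not a real surveillance API: detect_faces stands in for a face detector, and the watchlist check is passed in as a function.

```python
def detect_faces(frame):
    """Stand-in for a face detector; returns opaque face descriptors."""
    return frame.get("faces", [])

def process_frame(frame, is_on_watchlist):
    """Count everyone in the frame; escalate only watchlist matches.
    The non-matching captures are discarded inside the algorithm and
    are therefore never 'shared' with a human being."""
    faces = detect_faces(frame)
    escalated = [face for face in faces if is_on_watchlist(face)]
    count = len(faces)       # the anonymous output: a head count, no identity
    del faces                # non-matching captures are discarded
    return count, escalated  # only `escalated` reaches human attention

if __name__ == "__main__":
    frame = {"faces": ["descriptor-1", "descriptor-2", "descriptor-3"]}
    count, matches = process_frame(frame, lambda face: face == "descriptor-2")
    print(count, matches)    # 3 ['descriptor-2']
```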

Further, the definition provided yesterday used a strange-looking explanation of “Sharing” as

“..making the information available to another human being in such form that it can be experienced by the receiver through any of the senses of seeing, hearing, touching, smelling or tasting of a human..”

This goes with my proposition that “Data is in the beholder’s eyes” and “Data” is “Data” only when a human being is able to perceive it through his senses.

For example, let us see the adjoining document which represents a binary stream.

A normal human being cannot make any meaning out of this binary expression. If it is accessed by a human being, therefore, it is “un-identifiable” information.

A computing device may however be able to make a meaning out of this.

For example, if the device uses a binary-to-ASCII converter, it will read the binary stream as “Data is in the beholder’s eyes”. Alternatively, if the device uses a binary-to-decimal converter, it could be read as a huge number. If the AI decides to consider each set separated by a space as a separate readable binary stream, it will read this as a series of numbers. The sketch below demonstrates all three readings.
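This can be demonstrated in a few lines of Python. The binary stream below is constructed for the demonstration; the same bits yield a sentence, one huge number or a series of numbers, depending entirely on the converter applied.

```python
# One binary stream, three readings, depending on the converter applied.
text = "Data is in the beholder's eyes"
stream = " ".join(format(ord(ch), "08b") for ch in text)

# Converter 1: binary-to-ASCII reads the stream back as the sentence.
as_text = "".join(chr(int(octet, 2)) for octet in stream.split())

# Converter 2: binary-to-decimal treats the whole stream as one huge number.
as_number = int(stream.replace(" ", ""), 2)

# Converter 3: each space-separated set is read as a separate number.
as_series = [int(octet, 2) for octet in stream.split()]

print(as_text)    # Data is in the beholder's eyes
print(as_number)  # a huge decimal number
print(as_series)  # [68, 97, 116, 97, 32, ...]
```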

Similarly, if the binary stream were a name, the human cannot “experience” it as a name because he is not a binary reader. Hence whether a binary stream is “simple data”, “a name” or “a number” is determined by the human being to whom it becomes visible. In this context we are treating the sentence in English, or the number in decimal form, as “visibility”. If the reader is illiterate, even the converted information may be considered “not identifiable”. At the same time, if the person receiving the information is a binary expert who can visualize the binary values, he may be a computer in himself and consider the information “identifiable”.

It is for these reasons that in Naavi’s Theory of Data, one of the hypotheses is that “Data is in the beholder’s eyes”.

The “Experience” in this case is “Readability” through the sensory perception of “Sight”. A similar “Experience” can be recognized if the data can be converted into a “Sound” through an appropriate processing and output device. Theoretically it can also be converted into a sense of touch, smell or taste if there are appropriate devices to convert it into such forms.

If there is a “Neuro input device” associated, then the binary stream can be directly input into the human brain by a thought and it can be perceived as either a sentence or number as the person decides.

These thoughts have been incorporated in the definition of “Privacy” and “Sharing” which was briefly put out in the previous article.

The thought is definitely beyond the “GDPR limits” and requires some deep thinking before the scope of the definition can be understood.

In summary, the thought process is:

If an AI algorithm can be designed such that identifiable data is processed in a manner where the identity is killed within the algorithm, then there is no privacy concern. In fact, a normal “anonymizing” algorithm is one such device, which takes in identifiable information and spits out anonymous information. In this school of thought, such processing does not require consent and does not constitute viewing of identifiable data even by the owner of the algorithm (as long as there is no admin override).
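As an illustration of the “no admin override” condition, consider the sketch below. Python cannot genuinely enforce this against someone who can edit the code, so it only shows the design intent, with hypothetical names: the identified records sit in a private attribute with no accessor, and the only exit is an anonymised aggregate.

```python
class AnonymisingCounter:
    """Design-intent sketch: identified data has no read path out of the
    object; the only output is an anonymised aggregate."""

    def __init__(self):
        self.__records = []            # private; no getter is provided

    def ingest(self, name, value):
        self.__records.append((name, value))

    def summary(self):
        """The only exit: an aggregate with the identities dropped."""
        values = [value for _, value in self.__records]
        return {"count": len(values), "total": sum(values)}

counter = AnonymisingCounter()
counter.ingest("A. Person", 10)
counter.ingest("B. Person", 20)
print(counter.summary())   # {'count': 2, 'total': 30}
```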

I request all of you to read this article and the previous article once again and send me feedback.

P.S: These discussions are presently for debate and are a work in progress, awaiting more inputs for further refinement. It is understood that the Government may already have a draft and may completely ignore all these recommendations. However, it is considered that these suggestions will assist in the development of “Jurisprudence” in the field of Data Governance in India, and hence these discussions will continue until the Government releases its own version for further debate. Other professionals who are interested in participating in this exercise, particularly Research and Academic organizations, are invited to participate. Since this exercise is too complex to institutionalize, it is being presented at this stage as only the thoughts of Naavi. Views expressed here may be considered as the personal views of Naavi and not those of FDPPI or any other organization that Naavi may be associated with.

Next article

Naavi

I am adding a reply to one of the comments received on LinkedIn:

Question:

Consider the situation of google processing your personal data from cookies or server and providing you specific ad. Google claims this automatic processing and output is anonymous.
So your suggestion to allow this?

Answer

It is a good question. It may require a long answer.
In such cases we first need to check, through a DPIA, what harm is caused to the individual, and arrive at a decision.
In the example cited, there are three levels of processing.
At the first level there is collection of personal information. If the cookies are persistent cookies and linked to a known customer, it could be personal data and consent is required. If the entire cookie data collected is only anonymous and the collector is not reasonably capable of identifying the individual with other data on hand, it is processing of non-personal data.
At the second level, profiling occurs and the users are categorised into different market segments, maybe without individual identity.
For example, if we say category A users would be interested in buying a computer, this analysis is not causing harm to the user. Usually this is done by an intermediary market research company. This company need not know the identity of the user and hence it only processes anonymised personal data, which is outside the privacy protection regime. (A sketch of this follows.)
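A hypothetical sketch of that second level, with made-up signal and segment names: the intermediary sees only behavioural signals, never a name, and returns a segment label.

```python
# Hypothetical signals and segment labels, purely for illustration.
def assign_segment(signals):
    """Map anonymous behavioural signals to a market segment label."""
    if {"visited_laptop_reviews", "compared_prices"} <= signals:
        return "A"          # e.g. likely computer buyers
    if "read_travel_blogs" in signals:
        return "B"
    return "general"

print(assign_segment({"visited_laptop_reviews", "compared_prices"}))  # prints: A
```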
At the third level the advertisement is served. If the ad server is aware of the identity of the recipient and targets the ads, then it is an activity which could cause harm to privacy.
Let us also take the example of TV ads or hoardings on the street. They are not specifically targeted ads and hence don’t infringe privacy.
Similarly, if there are ads in the web space which are not targeted, it would be difficult to call them an infringement. If the ads are targeted by identity, without doubt it would be an infringement.
What you are indicating is a case which falls in between the above two extremes of targeted ads to identified individuals and generic ad serving, just like the hoarding on the street which is open to everybody.
The privacy impact is determined more by the platform where the advertisement is placed. If it is a private space like your email inbox, you may say that there was an infringement. But if it is on a website which you go and visit, the ads may be treated like hoardings and not infringing.
Hence the platform on which the ads are served may determine whether there is harm or not.
What I have suggested would basically apply to intermediaries who only process data without any idea of the data subject and give out the results to another recipient. This is what an “Algorithm” would do.
But if Google is able to identify who has provided the data and who is getting the ads, it may not have the status of an “Intermediary” and there could be infringement of privacy.
Hence we may have to take a view based on the context.



The Shape of Things to Come..The New Data Protection Act of India-5 (Privacy Definition)

(Continued from the previous article)

P.S: This series of articles is an attempt to place some issues before the Government of India which promises to bring a new Data Protection Law that is futuristic, comprehensive and Perfect. 

In the earlier articles in this series, we have discussed the requirements of the New Data Protection Act regarding the basic objectives, the regulatory structure and the chapterization, all of which give a framework of the desired legislation.

In this article we shall discuss some definitional aspects.

We are presently discussing the possibility of one Mega Act which will replace both ITA 2000 and PDPB 2019, though the Government may ultimately choose to keep the two laws separate. We shall go ahead with the concept of the “Unified Act” for the time being, and if necessary it can be bifurcated later on the basis of the different chapters we may create.

The first important definition to be addressed is the “Definition of Privacy” which needs to be protected.

The second but most critical definition of the Act is the definition of “Data” since it is central to all our discussions. The definition has to be further expanded to “Sensitive personal data”, “Critical personal data”, “Neuro data”, “Non Personal-Corporate Data”, “Non Personal Sovereign Data”, “Non Personal Community data”, “Shared Personal Data” etc.

Definition of Privacy

The first definition of Privacy is the one which is required for the protection of what the Supreme Court has declared as the “Fundamental Right” under Article 21 of the Constitution.

We presently have some understanding of what kind of privacy is protected by data protection laws such as GDPR, which is “Information Privacy”. The current definition of “Information Privacy” as popularly used is “Privacy 1.0”, whereas a need has arisen to look at two further levels of definition, which can be called “Privacy 2.0” and “Privacy 3.0”. We may or may not use this software-type versioning of 1.0, 2.0 and 3.0, and we may have to find other names that can be used in the Act. But let us first try to understand the differentiation that can be brought out between these three types of privacy.

Privacy 1.0 means the fundamental right guaranteed under the Indian Constitution under Article 21 as part of the “Right to Life”. We had earlier discussed this subject in our article “The Privacy Judgement… Conclusion.. Need for Definition of Privacy“. We know that the Puttaswamy judgement did not include a definition of “Privacy” in its final order, though it was discussed by the judges in their individual descriptive obiter dicta.

Privacy can be discussed as “Physical Privacy”, “Mental Privacy”, “Neuro Privacy” and “Information/Data Privacy”.

The requirement of the NDPAI can be served by defining “Privacy” as “Information Privacy” only, and proceeding to discuss how “Autonomy and Freedom of Choice” can be imparted to an individual in directing others about how his personal information may be collected, processed and disposed of.

We must appreciate that the “Right of Privacy” is the “Right of Choice” of an individual to determine how he prefers to share his personal data with others. The difficulty, however, is in capturing the “Right of Choice”, managing the changes in the “Choice” of a person over time, and managing the differences in the “Choices” of one individual and another.

Let us therefore determine the first definition of Privacy  as follows:

Privacy:

“Privacy is a fundamental right under the Constitution of India, an independent right under the Right to Life and Liberty guaranteed to an individual, which shall not be infringed except under due process of law as defined in this Act, and includes the following.

(a) “Physical Privacy” means the choice of an individual to determine to what extent the individual may choose to share his physical space with others.

(b) “Mental Privacy” means the choice of an individual to determine to what extent the individual may choose to share his mind space with others.

(c) “Neuro Privacy” means the choice of an individual to determine to what extent the individual may share his neuro space with others.

(d) “Information Privacy” means the expression in electronic form of the choice of an individual to determine to what extent the individual may share data about the individual with others.

Explanation:

“Sharing” in the context above means “making the information available to another human being in such form that it can be experienced by the receiver through any of the senses of seeing, hearing, touching, smelling or tasting of a human in such a manner that the identity  of the individual to whom the data belongs may become recognizable to the receiver with ordinary efforts”.

P.S: In the above definition, infringement of privacy is recognized only when the personal data becomes accessible to another human being. If the personal data is accessible only to a device and not to any human being, the data is not considered “Shared”. When “Data” is processed by an algorithm without being accessed by any human being, and no human can access the identified personal data with any reasonable effort (similar to anonymisation), it is not considered an “infringement”.

This definition, which recognizes only visibility to humans as infringement, is the concept of Privacy 2.0. The inclusion of neuro privacy is the concept of Privacy 3.0. Both of these are included in the above definition. Privacy 1.0 is the current definition used in GDPR, where visibility of personal data to a device is also considered a potential data disclosure.

We shall discuss the definition of “Data” in the following article. In the meantime, I invite comments on the above.

Naavi

P.S: These discussions are presently for debate and are a work in progress, awaiting more inputs for further refinement. It is understood that the Government may already have a draft and may completely ignore all these recommendations. However, it is considered that these suggestions will assist in the development of “Jurisprudence” in the field of Data Governance in India, and hence these discussions will continue until the Government releases its own version for further debate. Other professionals who are interested in participating in this exercise, particularly Research and Academic organizations, are invited to participate. Since this exercise is too complex to institutionalize, it is being presented at this stage as only the thoughts of Naavi. Views expressed here may be considered as the personal views of Naavi and not those of FDPPI or any other organization that Naavi may be associated with.

Next article

Naavi



The Shape of Things to Come..The New Data Protection Act of India-4 (Chapterization)

(Continued from the previous article)

P.S: This series of articles is an attempt to place some issues before the Government of India which promises to bring a new Data Protection Law that is futuristic, comprehensive and Perfect. 

Naavi.org had been advising the Privacy Activists who were opposing the PDPB 2019 that it would be wise to accept the version of the Bill that the Government was ready to accept and later work for improvements through amendments. We know that CCPA went through such immediate amendment, and a similar approach could have been taken in India as well, with the experience of a simple legislation for a year or two. Unfortunately, the Privacy Activists conspired with the tech companies and mounted an unreasonably harsh and false propaganda against the Bill which was not feasible for the Government to accept. It must be remembered that the Government would have been the worst affected if the law had been passed as originally designed, since many cases would have been mounted against the Government for personal data breach under various schemes, just as the Aarogya Setu app was once targeted. The attention of the Government would have been drawn to defending the cases, including the charge that the law was ultra vires the Constitution and should be scrapped. The Supreme Court would have looked at the complaint seriously and would have made the life of the Government miserable.

Now that the Bill has been withdrawn, the Privacy Activists have lost and the Government has cleverly gained an edge. The Government now has some understanding of the agenda of the Privacy Activists cum Andolan Jeevies and can plan the next version better.

I am reminded of a cricket scene where the intelligent bowler pauses before delivering the ball to read the mindset of the batsman, whether he would come forward, move to the off side or the leg side, try a reverse sweep etc., and plans his next delivery accordingly. Similarly, the Government now has some idea of the vulnerable areas of the legislation where it will be attacked by the Privacy Activists cum Andolan Jeevies, and can plan the next version accordingly.

The discussion on the shape of things to come will factor in such possibilities, since we need to facilitate legislation with a balanced approach rather than hope to find a “Perfect Legislation” acceptable to all. Even if the Government presents a diamond, the Andolan Jeevies in India are in such a mindset that they will call it only “compressed carbon” and will not accept its value.

We can refer to the article in the Indian Express titled “Govt withdraws data protection bill to bring revamped, refreshed regulation”, dated August 4, 2022, to respond to some of the objections raised in support of the withdrawal and see how they can be addressed in the next version.

The first concern to be addressed is for the Bill to be in line with the Supreme Court judgement of 2017, particularly since Justice D Y Chandrachud will be the CJI in the next term, when the new version may be challenged in the Supreme Court.

The second concern is the “Certification of hardware” against malware recommended by the JPC.

The third concern is the local data storage requirement, which has been the main objection of the tech industry.

In a similar article on August 6th, the data export restrictions were again cited as the main objection of the tech companies. That article indicated the possibility of identifying “Trusted Geographies”. This is nothing different from the “Adequacy” status of GDPR, unless the Government comes up with some innovative way of establishing a “Data Union”, a concept which we shall explore in greater detail. This was part of our recommendations to the JPC and will be elaborated later.

Another point of discussion is to drop the sensitivity criterion for cross-border data transfer and retain it only for penalties.

We need to discuss each of these points in greater detail; let us start with the first aspect, which is how to ensure that the legislation is in tune with the Supreme Court judgement.

One of the comments of the Aadhaar judgement that we should take note of is as follows:

“…it is held that all matters pertaining to an individual do not qualify as being an inherent part of right to privacy. Only those matters over which there would be a reasonable expectation of privacy are protected by Article 21”

This is relevant for the definition of the Right to Privacy that needs to be protected and also for the definition of “Personal Information”. In particular, whether “Meta Data” is data about a “Person” is a point of debate.

The first point to be addressed is

“Whether this law should also be the basic ‘Right to Privacy Protection Act’ or restrict itself to a ‘Protection of Personal Data in Electronic Form Act’.”

Right to Privacy as is understood is “Right to be let alone”. In the Kharak Singh case, it was discussed in the context of “Home as a castle” where “Physical Privacy” is recognized as a “Right”.

In the context of digitization of personal data, the “right to be let alone” can be disturbed by an SMS message, a WhatsApp message or an e-mail from the Internet space. Just as a person sitting at home may feel his privacy disturbed by the loudspeaker in the neighbourhood blaring the Aazaan, a person sitting quietly at home may feel his privacy disturbed by the messages on his mobile. Unlike the “Aazaan” issue, the “message issue” is completely in the electronic domain and hence can be addressed through a “Data Protection Law” without the need to protect privacy in the non-electronic space.

“Non Electronic Space” is not limited to the paper world but also extends to the “Oral speech” as explained in the Aazaan example.

Infringement of Privacy through speech or paper documents is different from the infringement through electronic means.

It would be preferable that the Data Protection Law restricts itself to the data space and does not attempt to become a “Privacy Act” by itself. In other words, it can be an “Information Privacy Protection Act” only, and not a “Privacy Protection Act”.

Also, “Privacy” as a mental state of an individual cannot be captured by a Data Fiduciary except as expressed by the individual himself. Hence the dependency on “Consent” for processing of “Personal Data” is critical and cannot be overridden by an indeterminable responsibility of the data fiduciary to understand what is in the mind of the data principal and design his data protection measures accordingly. That would be an unreasonable expectation, beyond what law can prescribe.

This thought makes a significant change to the approach of the law, as it means that the concept of “Data Fiduciary” should be pushed back to that of a “Personal Data Manager”, which is closer to the concept of “Data Controller” in GDPR. Dropping the “fiduciary” duty of the Data Controller will weaken the “Protection of Privacy”, but it is more transparent to drop what cannot be legislated than to make the law look like an election manifesto of promises that cannot be kept.

Hence the scope of the Act should be limited to “Protection of Personal Information in Electronic Form” and nothing else. It should leave out personal data in paper form and personal data infringement in oral form, both of which should be in the domain of the IPC or a different “Right to Privacy Protection Act”.

Alternatively, the envisaged law could be divided into “Chapters” and one chapter may apply to “Protection of Right to Privacy in Non-Digital Space” and the other on “Protection of Right to Privacy in Digital Space”.  Other chapters (if one comprehensive law is to be framed) will include the “Security of personal and non personal data”, “Governance of personal data” and “Governance of Non personal data”.

The chapter on “Governance of Non Personal Data” will include the recommendations of the Kris Gopalakrishnan committee. The chapter on “Governance of Personal Data” will include the personal data collection, processing and disposal requirements as well as the special rights of data principals, minors’ data etc. It will also include the cross-border restrictions.

Essentially, the parts of the current data protection law with respect to “Security”, “Code of Practice” and “Compliance” can be added to the chapter on “Security of Personal and Non Personal Data”. This chapter will also include the information security aspects of ITA 2000 such as digital signatures, the CERT-In powers, the ITA 2000 compliance requirements etc. (These have already been included in our Data Protection Compliance Standard of India as a compliance requirement.)

The Telegraph Act, to the extent of “digitized communication”, automatically falls under the “information security” area, and if parts of telecom governance are to be bundled, they should appear in the “Governance of Non Personal Data” chapter.

The cryptocurrency regulations relate to electronic documents and can be covered under the chapter on “Data Valuation and Monetization”, which could be a separate chapter referenced both by the Governance of Personal Data and the Governance of Non Personal Data.

Along with these chapters, a “Preliminary” chapter would be required where the definitions, scope etc. can be added. This is also an opportunity to extend this “Information Privacy Protection Law” to cover “Neuro Rights”, so that India leaps ahead of other countries in recognizing the need for “Neuro Rights Protection” as an extended concept of privacy protection through protection of individual choice, including protection against manipulation of that choice.

With these discussions, we are arriving at a “Chapterisation” of the New Data Protection Act at the top level leaving sub chapters for further focussed provisions.

The mapping of the chapters therefore looks as under.

Chapter I:

Preliminary (includes basic definitions, applicability-related definitions, the chapter structure, repeal of other laws, segregation of personal data, non-personal data, sovereign data, corporate data, community data, joint data, transaction data, neuro data etc., and limitations of application to non-digital data)

Chapter II:

Privacy Protection in Non Digital Data Environment

Chapter III:

Governance Framework for Personal Data

Chapter IV:

Governance framework for Non Personal Data

Chapter V:

Protection Framework for Personal Data

Chapter VI:

Protection Framework for Non Personal Data

Chapter VII:

Data Valuation Framework

Chapter VIII:

Residual Miscellaneous aspects if any

P.S: These discussions are presently for debate and are a work in progress, awaiting more inputs for further refinement. It is understood that the Government may already have a draft and may completely ignore all these recommendations. However, it is considered that these suggestions will assist in the development of “Jurisprudence” in the field of Data Governance in India, and hence these discussions will continue until the Government releases its own version for further debate. Other professionals who are interested in participating in this exercise, particularly Research and Academic organizations, are invited to participate. Since this exercise is too complex to institutionalize, it is being presented at this stage as only the thoughts of Naavi. Views expressed here may be considered as the personal views of Naavi and not those of FDPPI or any other organization that Naavi may be associated with.

 

Next article

Naavi
