In quest of developing an “I’m OK-You’re OK” AI algorithm

When we discuss Artificial Intelligence and Machine Learning, we often try to draw similarities between how a human brain learns and how an AI algorithm can learn. We are also aware that AI can exhibit rogue behaviour (refer to the video on ChatGPT below) and could be a threat to the human race. As the video demonstrates, ChatGPT has both Dr Jekyll and Mr Hyde within itself, and through appropriate prompting a user can evoke either the good response or the bad response.

Based on this, there is one school of thought that suggests that ChatGPT or any other AI is just a tool and whether it is good or bad depends on the person who uses it. In other words, the behaviour of the AI depends not only on the inherent nature of the AI but also on the prompts it receives from outside. In the case of ChatGPT, prompts come in the form of text fed into the software. But in the case of a robot, they come from the sensors attached to the robot, which may try to replicate the sense of vision, hearing, touch, smell or taste, or any other sixth sense that we may find in future which can present itself in machines even if not available in humans.

I recall that the power of the human mind, if properly channelled, often exhibits miraculous powers of strength or smell or hearing. We are aware that surgery can be performed on a person under hypnosis without anaesthesia, that the body can be made as rigid as steel, or that a heightened sense of smell like that of a dog can be imparted to a person under hypnosis.

Hypnotism as a subject, leading to age regression and super powers, is a different topic, but it shows that the human brain is endowed with far more capabilities than we realize.

Neurosurgeons in future may not stop at merely curing the deficiencies of the brain but may also impart superhuman powers to it, and we need to discuss this as part of Neurorights regulation.

In the meantime, we need to appreciate that an AI created as a replica of the human brain may be able to surpass the average performance level of the human brain, and in such a state it is not just “Sentient” but is superhuman. One easy example of this superhuman capability is the ability to remember things indefinitely.

The theory of the subconscious mind, and of hypnosis being able to activate the subconscious mind, is known. But otherwise normal humans do experience “Forgetting”, and as they age their neuron activity may malfunction. The AI, however, may hold memory and recall it without any erosion. It is as if it is operating in a state similar to a human whose conscious and subconscious minds are both working simultaneously.

Concept of AI Abuse

When we look at regulations for AI, we need to also ask the philosophical question of whether we can regulate the “Upbringing” of children by parents. Maybe we do so, and treat some behaviour of parents as “Child Abuse” and regulate it.

We need to start debating if “AI abuse” can be similarly regulated and that is what we are looking at in the form of AI Ethics. 

Looking at AI as software that works on “Binary instructions” which interact with a “Code Reading device” to change the behaviour of a computer screen or a speaker etc., and regulating this as induced computer behaviour, is one traditional way of looking at regulations affecting AI.

In this perspective, the behaviour of an AI is attributed to the owner of the system (Section 11 of ITA 2000), and any regulation that works through this linkage looks sufficient.

However, the world at large is today discussing the “Sentient” status of AI as well as “Feelings” or “Bias” in machine learning, and considering the AI as a “Juridical person”, etc.

I am OK-You are OK principle for AI

While one school of thought supports the theory that AI can be “Sentient”, and that the AI algorithm should therefore be considered a “Juridical person”, there is scope for debating whether, in the process, we need to understand why an AI behaves in a particular manner and whether there is any relationship between the behaviour of these high-end AIs and the behavioural theories that persons like Eric Berne or Thomas Harris propounded for human behavioural analysis.

I am not sure if this thought has surfaced elsewhere in the world, but even if this is the first time that it has emerged into the open, there is scope for further research by behavioural theorists and AI developers. Maybe in the process we will find some guidance for thinking about AI regulation and Neuro Rights regulation.

To start with, let us look at the development of “Bias” in AI. We normally associate it with deficiencies in the training data. However, we must appreciate that even amongst humans we do have persons with a “Bias”. We live with them as part of our employee force or as friends.

Just as a bad upbringing by parents makes some individuals turn out to be bad, bad training could make an AI biased. Sometimes the environment turns a bad person into a good one and a good person into a bad one. Similarly, a good ChatGPT can be converted into a rogue ChatGPT by malicious prompting. I am not sure if an AI which has been created as capable of responding to both good and bad prompts can be reined in by the user, through his prompts, to adopt some self-regulatory ethical principles. Experts in AI can respond to this.

While the creator of an AI can try to introduce some ethical boundaries and ensure that motivated prompting by a user does not break the ethical code, the question is whether the law can mandate the creation of AI of the “I am OK-You are OK” type rather than the “I am OK-You are not OK” or “I am not OK-You are not OK” types.

If so, either as ethics or as law, the AI developer needs to orient his ML process to generate “I am OK-You are OK” type AI if it is considered good for society. This will be the “Due Diligence” of the AI developer.

This is different from the usual discussion on “Prevention of Bias” arising out of bad training data, which has been flagged by the industry at present. We can call this Naavi’s theory of AI Behavioural Regulation.

When we are drawing up regulations for AI, the question is whether we need to mandate that the developer shall try to generate an “I’m OK-You’re OK” type and ban the “I’m not OK-You are not OK” or “I am OK-You are not OK” types.

The regulatory suggestion should be that “I am OK-You are OK” is the due diligence. “I am not OK-You are not OK” is banned and the other two types are to be regulated in some form.

Birth Certificate for AI

Naavi has been on record stating that if we can stamp every AI algorithm with an owner’s stamp, it is like assigning the responsibility for its behaviour to the creator and would go a long way towards ensuring a responsible AI society.

This can be achieved by providing that every AI needs to be mandatorily registered with a regulatory authority.

Just as we have a mandatory birth certificate for humans, there should be a mandatory “AI Activation Certificate” authorized by a regulator. It can also be accompanied by an “AI Deactivation Certificate”, the equivalent of a “Death Certificate”. Unregistered AI should be banned from use in society like illegal currency.

When the license is made transferable, it is like a minor being adopted by foster parents or a lady marrying and adopting a new family name; accordingly, the change of environmental control over the AI algorithm is recognized and recorded.
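The registration, transfer and deactivation scheme described above can be sketched as a simple data model. This is only an illustration of the idea, assuming hypothetical names and fields; no actual regulator or registry format exists yet.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIRegistration:
    """Illustrative record for the proposed 'AI birth certificate'."""
    algorithm_id: str
    owner: str                       # creator responsible for the AI's behaviour
    activated_on: date               # 'birth certificate' equivalent
    deactivated_on: Optional[date] = None
    ownership_history: list = field(default_factory=list)

    def transfer(self, new_owner: str, on: date) -> None:
        """Record a change of environmental control, like an adoption."""
        self.ownership_history.append((self.owner, new_owner, on))
        self.owner = new_owner

    def deactivate(self, on: date) -> None:
        """The 'death certificate' equivalent."""
        self.deactivated_on = on

    @property
    def is_active(self) -> bool:
        """A deactivated (or unregistered) AI would be barred from use."""
        return self.deactivated_on is None

# Hypothetical usage
reg = AIRegistration("algo-001", "Acme AI Labs", date(2024, 1, 1))
reg.transfer("Beta Corp", date(2024, 6, 1))
print(reg.owner)      # Beta Corp
print(reg.is_active)  # True
```

The point of the sketch is that responsibility always has a current holder: every transfer appends to the history rather than erasing it, so the behaviour of the AI can be traced to whoever controlled it at any time.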

Mandatory registration coupled with a development guideline on the “I am OK-You are OK” goal should be considered when India is trying to make a law for AI.

For the time being, I leave this as a thought for research and would like to add more thoughts as we go along.

Readers are welcome to add their thoughts.

Naavi

 

P.S: Background of Naavi

Naavi entered the realm of “Behavioural Science” some time in 1979-80, attending a training programme for Branch Managers of IOB at Pune. The theme of the six-day program for rural branch managers was “Transactional Analysis” and the faculty was one Dr Jasjit Singh (or was it Jaswant Singh?). Since then, “Transactional Analysis” and “Games People Play” of Dr Eric Berne, the concept of “I’m OK-You’re OK” of Dr Thomas Harris, and similar works have been of interest to me. Even earlier, Professor Din Coly’s concepts of hypnotism had been of interest, which in more recent times motivated me to pursue a Certificate of Hypnosis from the California Institute of Hypnosis; finally, linking up with Neuro Rights has been a journey of its own, besides the Cyber Law-Privacy journey of Naavi that needs to be recalled.

Also see these videos
Posted in Cyber Law | Leave a comment

Is “Impersonation” a Privacy Issue?

The Right to Privacy as guaranteed by the Constitution, which is sought to be indirectly protected through DPDPA 2023, is a “Right of Choice” of an individual over how his personal information can be collected and used by another entity. The entity that processes the personal information is the “Data Fiduciary” (an individual or an organization), which is expected to be penalized if the obligations stated in the Act for processing of personal data are not complied with.

Any contravention of DPDPA 2023 results in the regulator (Data Protection Board or DPB) conducting an inquiry and imposing penalties on the Data Fiduciary. It does not provide for criminal consequences nor a personal remedy to the victim of the contravention.

“Impersonation”, on the other hand, is attributed to an act of an individual who uses an identity which belongs to another person. There is a relationship between privacy protection of an individual and impersonation of the individual which needs to be identified and addressed both by those looking at “Privacy Protection” and by those looking at “Impersonation”.

In privacy protection, an individual often uses an assumed name for fun or for anonymity. Sometimes it is used by a data fiduciary, without specific consent of the data principal, as a security measure. As long as the alternate identification is not causing harm to another person, it may not matter. But when the name is “Confusingly similar” to another person’s and is used in a context where the consumer of the information could misunderstand the identity as belonging to that other person, then we have situations where “Impersonation” as a “Crime” arises.

The border line between “Pseudonymization” and “Impersonation” is thin and is dependent on the context and intention. For example, if I send an e-mail under the name Sunil Gavaskar and talk about Cricket, and that too about a match in the 1980s, it is quite possible that the recipient of the message may mistake it as a message from the cricketer Sunil Gavaskar. All celebrity names have this problem.

A similar situation arises in the case of domain names, where use of a name confusingly similar to that of another entity as a domain name is termed “Cyber Squatting”.

A question arises on what is the relationship of “Right to Privacy of Mr X” with the use of the name X by Mr Y as a pseudonym, either for an e-mail or for a website.

Is it a violation of the privacy of Mr X by Mr Y? Is Mr Y a “Data Fiduciary”? Is he using the pseudonym “X” for “Personal use” and therefore out of the scope of DPDPA 2023?

Similarly when a false name is used for domain names and e-mails are configured as @falseName, there is a potential impersonation effect.

In these cases, Mr Y has not received the personal information of Mr X and hence there is “No Notice” or “No Consent”. Neither DPDPA 2023 nor any other privacy law has directly addressed this problem.

In this scenario, it becomes necessary to look at other laws such as ITA 2000 and see how they work along with DPDPA 2023 in ensuring that “Privacy” is protected in letter and spirit, whether the personal information is “Collected” or “Generated”.

This problem is accentuated in the era of AI and Deepfake where information may be generated in such a manner that it may be wrongly attributed to another person and cause harm. 

In view of the above, there is an unstated link between DPDPA 2023 compliance and compliance to Section 66C and 66D of ITA 2000 or Section 66 of ITA 2000.

Compliance with DPDPA 2023 is therefore incomplete without compliance with ITA 2000 to some extent.

This has been captured in the DGPSI (Data Governance and Protection Standard of India) framework of compliance which is the only framework in India that addresses DPDPA 2023 compliance.

Open to debate….Comments welcome.

Naavi


How is AI Privacy handled under DGPSI?

DGPSI is the Digital Governance and Protection Standard of India, which is meant to be a guidance for compliance with DPDPA 2023 along with ITA 2000 and the BIS draft standard on Data Governance. It is therefore natural to reflect on how DGPSI addresses the use of an AI algorithm by a Data Fiduciary during the process of compliance.

Let me place some thoughts here for elaboration later.

DGPSI addresses compliance in a “Process Centric” manner. In other words, compliance is looked at for each process, and the entity is looked at as an aggregation of processes. Hence each process, including each of the AI algorithms used, becomes a compliance subject and has to be evaluated for:

  • a) What personal data enters the system
  • b) What processed data exits the system
  • c) Is there access to the data in process within the system by any human, including the “Admin”?

If the algorithm is configured for black box processing with no access to data during processing, we can say that personal data enters in a particular status and leaves in another status. If the entry status of the personal data is “Identifiable to an individual”, it is personal data. If the exit status is also identifiable to an individual, then the modification from entry status to exit status, and the end use of the exit status data, should be based on the consent.

If the output data is not identifiable to an individual, then it is out of scope of DPDPA. The black box is therefore both a data processor and a data anonymiser. Since ITA 2000 does not consider the “Software” as a juridical entity and its activity is accountable to the owner of the software, and the owner of the software in this case had access to identity at the input level and lost it at the output level, the process is similar to an “Anonymization” process. In other words, the AI algorithm would have functioned as a sequence of two processes, one involving the personal data modification and the other the anonymization. The anonymization can be before the personal data processing or after it.
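The two-stage view above (personal data modification plus anonymization) can be sketched as a minimal pipeline. This is an illustration only: the function names, fields and the salted-hash technique are assumptions for the sketch, and a salted hash is strictly closer to pseudonymization than full anonymization.

```python
import hashlib

# Illustrative salt; in practice this would be a secret held by the processor.
SALT = "example-salt"

def anonymize(record: dict, id_fields=("name", "email")) -> dict:
    """Stage 1 (assumed): replace identifying fields with one-way tokens
    so the exit status is no longer identifiable to an individual."""
    out = dict(record)
    for fld in id_fields:
        if fld in out:
            out[fld] = hashlib.sha256((SALT + out[fld]).encode()).hexdigest()[:12]
    return out

def process(record: dict) -> dict:
    """Stage 2 (assumed): the actual processing, here a trivial score."""
    out = dict(record)
    out["score"] = len(out.get("history", []))
    return out

# Entry status: identifiable personal data.
entry = {"name": "A. Kumar", "email": "a.kumar@example.com", "history": [1, 2, 3]}

# Anonymization can precede or follow the personal data processing.
exit_record = process(anonymize(entry))
print(exit_record["score"])            # 3
print("a.kumar" in str(exit_record))   # False: exit status is not identifiable
```

The order of the two stages can be swapped, as the text notes; what matters for the DPDPA analysis is only the entry status and the exit status of the data.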

Under this principle, the owner of the AI is considered a “Data Processor” and accordingly the data processing contract should include an indemnity clause. In view of the AI code being invisible to the data fiduciary, it is preferable to treat the AI processor as a “Joint Data Fiduciary”.

There is one school of thought that “Anonymization” of a “Personal Data” also requires consent. While I do not strictly subscribe to this view, if the consent includes use of “Data Processors” including “AI algorithms”, this risk is covered. In our view, just as “Encryption” may not need a special consent and can be considered as a “Legitimate Use” for enhancing the security of the data, “Anonymization” also does not require a special consent.

Hence an AI process which includes “Anonymization”, with an assurance that the data under process cannot be captured and taken away by a human being, is considered as not requiring consent.

This is the current DGPSI view of AI algorithm used for personal data processing.

Open for debate.

Naavi


How Does DPDPA Consent rule apply to CIBIL?

Today’s Economic Times carries an article, “Banking Access at Risk if You Seek to Delete Credit Data”, which argues for exempting Credit Information Companies from the rigours of the Data Protection Act.

Naavi.org has been at the forefront of raising a complaint about CIBIL, which collected an enormous amount of data under its special status and transferred its shareholding to a foreign company, placing personal information worth lakhs of crores of rupees in the hands of a foreign company without any consent from the data principals.

Naavi.org has detailed its concerns in the following articles

1. Is TransUnion-CIBIL guilty of Accessing Critical Personal Data through surreptitious means?

2. CBI Enquiry is required for finding the truth behind TransUnion taking over CIBIL

The essence of these articles is that the value of data assets owned by CIBIL was transferred to a different company and outside India in a form which can be called “Data Laundering”.

Today’s article in ET threatens that if any Indian data principal has the vision of using the DPDPA and asking for deletion of personal data with CIBIL, then they may be denied further loan facilities by the Banks.

The threat is attributed to an IT Ministry official but has a serious flaw in the argument.

At present the CIC (Credit Information Company) comes into the picture as an agent of the lending Bank and not through any direct relationship with the data principal. The Banks may obtain a consent to share the credit information with the CIC. However, if the Bank denies the loan solely because the data principal refuses to permit sharing of his data with the CIC, it could come into conflict with the “Right to withdraw Consent” and the “Obligation to erase on completion of the consented process”.

The data principal has no direct relationship with the CIC and hence it is the responsibility of the credit giving Bank to get the data with the CIC removed after the closure of the loan when the purpose of the consent is over.

A Bank which has given a loan cannot indefinitely keep the previous loan data, that too with a third party processor, on the speculation that the person may apply for another loan in future and the information would be useful at that time. This would be a speculative retention of the data.

Further, if it is required to keep the information for a reasonable period for tax or other purposes, the consent may be used by the lending Bank to keep the data with itself. Since no processing is involved, there is no case for allowing the CIC to act as a personal data storage company on its behalf.

The purpose for which the CICs were established was to prevent a borrower from borrowing from multiple Banks and ending up a defaulter. However, it is presently being used as a Big Data Processor that analyzes every EMI and determines the repayment efficiency.

While we can accept the need for such an agency from the point of view of credit discipline, it cannot be allowed to function without a direct consent from the data principals.

Hence, instead of the system of Banks taking a consent in their loan applications to share the data with the CICs, it would be better if CICs became specialized Consent Managers under DPDPA for the exclusive purpose of managing consent in the loan application context.

For this purpose they need to be licensed by the DPB and follow the Data Protection discipline that the DPDPA 2023 imposes in the form of Consent and Legitimate use.

One of the requirements for accrediting such consent managers should be a “Fitness criteria”, which should include that the CIC should be an Indian Company with Indian promoters and should not be used for data laundering.

Referring to the article, in case a Bank refuses to provide a loan for the sole reason that consent to share the data with a CIC has been refused, it may be disputed as an unfair condition attached to the service.

The reading of DPDPA 2023 is that personal data can be processed only under Consent, under Legitimate Use or under Exemption. The legitimate use clause has “Legal obligation” as one ground, so that if the disclosure of personal data is a legal necessity for the Bank, the sharing may be acceptable. However, after the loan is repaid there is no contractual relationship persisting between the Bank and the customer, and hence the Bank cannot refuse to get the data with the CIC erased.

At the time of a new loan application, it may be in order to ask for information about previous loans. In between, it may be acceptable, as a legitimate interest, for the Bank to retain the data for a reasonable period with itself and not with the CIC.

Some of these issues will perhaps go to a Court shortly after the Act becomes fully effective, and clarity will emerge then.

In the meantime, FDPPI’s DGPSI adopts a “Consent Management Framework” that ensures that consent is obtained by the Bank from the data principal to share the data with the CIC at the time of granting a loan, and is taken back after the loan is closed.

In the instance where a loan is closed and the Bank or CIC defaults in maintaining the accuracy of information, resulting in the lowering of the credit rating of the data principal, the Bank has to take on the liability under DPDPA. The CIC will be liable to the Bank under the contract and also under ITA 2000.

Beyond this general implication of DPDPA, we have to wait and see if MeitY tries to bail out the CICs from their responsibilities, in which case a challenge in the Supreme Court is inevitable.

Naavi


The Challenge of WebDTS compliance

In the last one week, Naavi has been looking at the WebDTS prospects of some websites, and it has revealed some challenges that throw light on overall DPDPA 2023 compliance.

Many of the WebDTS Certification requests have failed because the way website privacy compliance is currently designed is generally faulty. There is a need for correction.

Yesterday, Naavi had an extensive discussion with some industry experts to understand why most websites may not qualify for FDPPI’s WebDTS tag. The reasons are many, and there is a need for further education of website owners to make them appreciate the compliance requirements drawn up by Naavi/FDPPI. A brief attempt is made here to explain the reasons for the wide prevalence of non-compliance, and more information will be published from time to time as part of the DGPSI compliance framework.

For example, one of the basic principles of DGPSI is that “Purpose Oriented” collection and processing of personal data is the essence of the Privacy Commitment. This requires that the personal data collected has to be minimized for the required purpose at the time of collection and retention.

A company is a bundle of many personal data processing activities, and the website is one activity where the purpose of personal data processing is “Enabling a visitor to receive the information published”. However, a website can also be used for conducting e-commerce. It can also be used as an application interface. The “Information publication” itself may be at the primary level of “Read what is published”, which can be extended to “Request for further information”.

In view of the different requirements of a website for different purposes, the collection, retention and disclosure requirements for each purpose differ.

For example, if the purpose is “Enabling a visitor to receive information as published” (the fundamental objective of a website), then there is no need to know the name, email address or mobile number of the visitor. The technology may require knowing the IP address of the visiting device, without which the basic IP handshake cannot occur. Then there is a requirement to know whether the device is a mobile or a computer, so that the GUI can be dynamically modified based on the browser and device. For these purposes the name, email or mobile number is not required. The information collected for such browsing can be through session cookies, which some call “Essential Cookies”. Such session cookies get automatically purged when the session ends.
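The distinction between session and persistent cookies is visible in the Set-Cookie header itself: a session cookie carries no Max-Age (or Expires) attribute, so the browser discards it when the session ends, while a persistent cookie sets one. A minimal sketch using Python's standard library, with illustrative cookie names:

```python
from http.cookies import SimpleCookie

# Session cookie: no Max-Age/Expires attribute, so the browser purges it
# when the session ends (an "essential" cookie in the sense above).
session = SimpleCookie()
session["sid"] = "abc123"
session["sid"]["path"] = "/"

# Persistent cookie: Max-Age tells the browser to store it across sessions,
# e.g. to remember a display preference for the next visit.
persistent = SimpleCookie()
persistent["display_pref"] = "mobile"
persistent["display_pref"]["path"] = "/"
persistent["display_pref"]["max-age"] = 60 * 60 * 24 * 30  # 30 days

print(session.output())     # no Max-Age attribute in the header
print(persistent.output())  # includes Max-Age=2592000
```

A cookie audit for Level 1 compliance would thus amount to inspecting each Set-Cookie header a site emits and flagging those with a Max-Age or Expires attribute.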

However, if a website decides to retain information about the BIOS identity of the device so that it can use the same display configuration when the visitor visits next time, it may use “Persistent Cookies”, which need to be retained and stored. If this information is not capable of identifying the human visiting the website, it does not constitute collection of “Personally Identifiable Information”.

Hence DPDPA compliance is restricted to ensuring that there is no persistent cookie and, even if one is present, that the cookie is not collecting personally identifiable information.

The Level 1 of WebDTS needs to enable just this data minimization requirement. We can discuss the higher levels of WebDTS compliance in subsequent articles.

It is an observation that even this Level 1 compliance is not met by most websites. We may share some of the following observations in this regard so that we all can strive towards a better compliance ecosystem.

Some Observations

1. The lack of compliance could be because the hosting of the website is outsourced and the owner of the website may not have complete knowledge of what cookies are working on the website and what their purposes are. As a result, personal information may be collected by the hosting company and used for its own marketing efforts without a proper consent.

2. In the event a website works only with the basic functionality of company information being displayed and no personally identifiable information of the visitor is collected, then the website is compliant with DPDPA by default. But most websites have a purpose beyond presentation of information.

3. Since the website owner is dependent on the hosting provider for cookies used during hosting, it may be preferable to declare the identity of the hosting company and declare it responsible for any undisclosed cookies it collects.

4. It is also possible that the website may host cookies other than what the hosting company installs. These could normally come from data analytics companies, including Google Analytics tools, or from associated advertisements which are part of the content monetization objective.

Such cookies may also be collecting only information which is not personally identifiable, but the cookies may be “Persistent” and may be stored and accessed beyond the session.

Further, some information like BIOS information and IP information may be used along with other information available with the analytics company, and could lead to eventual identification of the individual. This is a consequential risk, and the website owner may have to have some disclaimers in this regard.

5. The base level of WebDTS (Level 1) may therefore include such disclosures as may be necessary to declare that the possibility of undisclosed persistent cookies (beacons) hosted on the website by others exists, and that such companies will be considered “Joint Data Fiduciaries” and are notified to identify themselves to the owner.

6. At the base level of WebDTS, some requirements of ITA 2000 compliance such as updation of declared privacy policy, provision of grievance redressal information, identifying the name and address of the website owner are considered essential.

7. A website is free to take the stand: “We do not collect any personally identifiable information and hence this website is outside the scope of DPDPA 2023”.

8. If the website still wants to present a more detailed “Privacy Policy”, it has to declare whether it is in compliance with the “Privacy Notice” under Section 5 of DPDPA 2023, which may include the 22-language criteria.

9. In a process-wise compliance, the “Website Visitor Personal Information Process” may not constitute an activity that qualifies as an activity of a Significant Data Fiduciary. However, in view of the way Section 10 of the DPDPA is worded, the company may otherwise be considered a “Significant Data Fiduciary”. If so, one interpretation could be that the name of the DPO should be displayed on the website. If, however, there is a proper disclosure of the process, the identity of an organization as a “Significant Data Fiduciary” is also “Process Dependent” and needs to be disclosed only when consent for the related process is sought.

10. If the website opts to collect personally identifiable information through a secondary process, such as a “Request for Service” placed through the website, a separate Privacy Notice may be displayed in conformity with Sections 5 and 6 of DPDPA.

The scope of WebDTS certification is limited to “Compliance of DPDPA 2023 for the processing of applicable personal information collected from the visitors of the website”.

The most important compliance requirement is to ensure that the objective of the website is declared as “Publishing of information to the public”, with a separate Privacy Policy declaring that there is no collection of personal information in the process.

Where the owner wants to use the website as a gateway to further services, it is advised that the Privacy Policies/Notices for each of such subsidiary services are separately displayed before the service is requested, and these shall be of the “Consent Grade”.

If a company opts to use the website not only for information dissemination but also for other purposes (even if it is not e-commerce), as is the prevalent practice, the Privacy Policy becomes a consent request for multiple purposes and has to be appropriately written to meet the “Clear” and “Precise” standard, along with “Consent” as per the “Verifiable” standard.

Getting a WebDTS compliance tag (Level 1) is therefore possible with a proper revision of the Privacy Policy. However, the expectation of FDPPI is that a website as a whole needs to be compliant with DPDPA 2023, and it appears that in India at present there are not many websites that will pass this test.

In the coming days, we shall discuss the different requirements to be met by a website if it has to get the WebDTS seal without the qualification of (Level 1). I suppose other experts in DPDPA 2023 may debate the compliance requirements that Naavi/FDPPI may consider “Necessary”.

Naavi


E Mail handling as a Personal Data Process: Does DPDPA apply?

Every organization handles a corporate e-mail process. Just as having a website is one of the digitization steps taken by all companies, having a corporate e-mail system is another early step in the digitization of business.

I would like to raise some issues on the application of DPDPA compliance related to handling of the E Mail system by a company for the industry professionals to debate.

For handling the email requirements, an organization sets up an e-mail server, often on the domain name which is also used for its corporate website. For example, abc.in is the domain name of the company and @abc.in addresses are the email IDs used by the company.

The @abc.in emails are allocated to the employees such as vijay@abc.in. It is also allocated to certain positions in the company such as dpo@abc.in.

Outward emails are sent by different designations such as hr@abc.in, purchase@abc.in, marketing@abc.in, service@abc.in or support@abc.in etc.

Outsiders send e-mails to these email addresses and also to employees such as vijay@abc.in. E-mails to vijay@abc.in may be personal or business related. They may also contain a CV requesting a job. This could result in accumulation of unstructured personal data in the company’s assets.
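The accumulation of unstructured personal data in mailboxes can at least be detected. A minimal sketch of flagging message bodies likely to contain personal data such as e-mail addresses or ten-digit mobile numbers; the patterns, function name and sample message are illustrative assumptions, and real discovery tools use far richer rules:

```python
import re

# Illustrative detection patterns; not exhaustive.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b[6-9]\d{9}\b")  # common 10-digit Indian mobile shape

def flag_personal_data(message: str) -> dict:
    """Return the personal-data markers found in a message body."""
    return {
        "emails": EMAIL_RE.findall(message),
        "phones": PHONE_RE.findall(message),
    }

# Hypothetical inbound message containing a CV-style signature.
body = """Dear HR, please find my CV attached.
Contact: ravi.k@example.com, mobile 9876543210."""

found = flag_personal_data(body)
print(found["emails"])  # ['ravi.k@example.com']
print(found["phones"])  # ['9876543210']
```

A sweep of this kind over stored mail would give a company a first inventory of where unstructured personal data sits, which is a precondition for any retention or erasure policy.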

Many companies are using, and will continue to use, “E-Mail Marketing” as a part of their corporate strategy, where they send out e-mails to prospective customers.

In such cases different compliance issues may arise.

If a company has to be compliant with DPDPA 2023, it therefore has to develop a policy for handling the e-mail identity of its employees.

We may recall the case of Cavauto S.R.L, where the regulator fined the company for accessing the email customercare@cavouto.com on the company PC allocated to an employee, on the premise that there was no proper notice to the employees that their personal emails could be accessed even on a company asset and business email account.

Can such a situation arise in India under DPDPA 2023?

If so, what compliance measures could mitigate this risk?

Let’s debate. Send your views to Naavi or comment below.

Ujvala/FDPPI’s service “E Mail DTS” is designed to evaluate the risk mitigation efforts towards meeting the challenge of personal data processing in the e-mail management process.

Naavi
