Disrupting the Disruptors

Whether the topic is Privacy, Fintech Innovation or Cyber Laws, discussions in professional circles today invariably lead to DPDPA 2023 and Artificial Intelligence.

While techies discuss AI as the new craze of innovation and disruption, regulators and legal professionals keep warning about the dangers of AI and the need to rein it in.

The fact that Google has put a stop to its Google AI project, and that Mr Elon Musk has repeatedly warned about the dangers of AI, needs to be kept in mind when we look at how to welcome AI into business.

Yesterday, at a massive conference on AI in Fintech organized by Razorpay in Bangalore, the excitement of techies over the disruption caused by AI technology was palpable. There was, however, one discreet warning about “Aggressive Jugaad” taking on the “Aggressive Regulator” from Dr Bharat Panchal, who interestingly describes himself as the “Risky Monk”. Dr Padmanabhan, a former Executive Director of RBI, also referred to the “Disruption of the Disruptors” through non-compliance.

In the din of excitement of the day, these warnings may have gone unnoticed. The vague discussion on “Ethical AI” is insufficient to address the issues of “hallucination”, “bias” and “intellectual property and privacy right violations” in machine learning.

FDPPI has therefore been working on how the issue of AI can be addressed within its DGPSI (Data Governance and Protection Standard of India) framework for DPDPA 2023 compliance.

This will be one of the discussions at the special one-day training on “Implementation of DPDPA 2023 Compliance through DGPSI” being held on March 2nd at Fairfield by Marriott, Bangalore.

Be there if you are interested…

Naavi


FDPPI Special Drive for DPO/DA training

FDPPI is conducting a series of training programs all over India to prepare Indian professionals to become Data Protection Officers and Data Auditors.

In the month of March 2024, several one-day programs have been scheduled in Mumbai, Ahmedabad, Kolkata, Nagpur and Bangalore for experienced data protection professionals requiring an in-depth discussion on the implementation of DPDPA 2023.

Registrations for all the programs other than Bengaluru have closed. Registration for the Bengaluru program is open.

Register today for the program and Examination.

Naavi


Transactional Analysis applied to Artificial Intelligence behaviour.

As the world tries to develop regulations for Artificial Intelligence and to prevent privacy abuse, copyright abuse, irrational and unexplainable decisions etc., a question arises: what exactly is the definition of “Artificial Intelligence”? When does a “Software” become “Artificial Intelligence”? And can the known principles of Behavioural Science be applied to Artificial Intelligence behaviour as well?

A software is a set of instructions that can be read by a device and converted into actionable instructions to peripherals. The software code is created by a human and fed into the system from time to time as updates and new versions. Each such modification is dictated by the developer’s learning about the behaviour of the software vis-a-vis its expected utility. In this scenario, the legal status of the software and the software developer is settled: software is a tool developed by the developer for the benefit of the user. The user takes control of the software through a purchase or license contract and, as the owner of the tool, is responsible for the consequences of its use. Hence, when an automated decision causes harm to a person (the user or any third party), the owner (licensee or developer) should bear the responsibility. This is clearly laid down in Indian law through Section 11 of ITA 2000.

Despite this, we are today discussing the legal consequences of the use of AI, whether its actions need to be regulated through a separate law and, if so, how.

We are discussing the copyright issues that arise when AI generates a literary work or even software. In the US, courts have held that a work generated by an AI is not copyrightable (Thaler v. Perlmutter). Even in India, music created by AI has been held not copyrightable (Gramophone Company of India Ltd. v. Super Cassettes Industries Ltd. (2011)). Recently, videos created by AI of the deceased singer SPB have created a property rights issue over who owns the AI version of SPB.

Copyright and other intellectual property laws are powerful international laws protected by treaties, and hence are likely to prevail over other new-generation legal issues raised by technology. The upholding of the concept that “creativity” cannot be recognized in AI also, to some extent, destroys the argument that AI is a “juridical entity” different from the “software” which is accountable in the name of the developer.

The EU AI Act adopts the definition of AI as:

a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

Earlier definitions used terms like “computer systems that can perform human-like tasks” such as “seeing”, “hearing” and “touching”, and convert them into recordable experiences.

In the current state of the industry, AI has developed into generative AI algorithms and humanoid robots. In such use cases, the definition of AI touches the concepts of “intuition”, “restraint”, “discretion” etc., which are attributable to a human intellect.

For example, a human does not react the same way to every similar stimulus. Sometimes humans get angry and are unable to show discretion in action; sometimes they do.

What is it to be Human Like in terms of behaviour?

“Software” and “Artificial Intelligence” are not two binary positions, and there is no clear line of demarcation. However, to be clear about the legal position of AI, it is necessary to understand what exactly Artificial Intelligence is and whether there is a proper legal definition of when a “Software” becomes “Artificial Intelligence”. An AI algorithm is normally not able to show the kind of discretion described above.

Humans can “forget” and “move on”. A computer is not able to “forget”, and hence its every action is a reflection of its previous learning. Even if we build a model where the behaviour of the AI changes statistically with each new experience, human behaviour has an element of spontaneity that an AI misses.

Thus, a software which is coded to change its future output based on the statistical analysis of new inputs, modifying the behaviour originally created by the “Version Zero” coder, is knocking at the doors of being called an “automated decision-making system with self-learning ability”. This is often called Artificial Intelligence based on Machine Learning technology.
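
To make the distinction concrete, here is a minimal illustrative sketch (my own construction, not any particular product) of such a system. The “Version Zero” code below is never rewritten, yet its future output drifts with the statistics of each new input it observes:

    # A minimal sketch of "self-learning" software: the code is fixed,
    # but its future output changes with the statistics of new inputs.
    class SelfLearningPredictor:
        def __init__(self) -> None:
            self.estimate = 0.0   # learned state, not a hand-coded rule
            self.count = 0        # number of inputs observed so far

        def observe(self, value: float) -> None:
            # Incremental mean: each new input statistically shifts behaviour.
            self.count += 1
            self.estimate += (value - self.estimate) / self.count

        def predict(self) -> float:
            return self.estimate

    predictor = SelfLearningPredictor()
    for reading in [2.0, 4.0, 6.0]:
        predictor.observe(reading)
    print(predictor.predict())  # 4.0 -- shaped by data, not by a re-coded rule

No human rewrites the code between “versions”; the update is entirely data-driven, which is the behaviour described above.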

The inputs to such a system may come from sensors such as a camera or a microphone, but they are interpreted by the software, which converts the binary inputs into some other form of sight and sound.

This process is similar to the human brain system, which also receives inputs from its sensory organs and processes them in the brain, sometimes with reference to earlier recorded experiences (which we may call prejudice).

But the difference between human intelligence and Artificial Intelligence is that human responses are not all the same; they vary based on several known and unknown factors. If we try to remove this characteristic of human behaviour, we will be “dehumanizing” decisions in society and converting society into an artificial one.

The objective of any law is to preserve the good qualities of society, and one such good quality is the unpredictability of the human mind. The “creativity” aspect of software, which often comes into discussion in IPR cases, arises out of this need to preserve human character. We do not want all humans to be zombies. One impact of this can be seen in computer games. If we are playing tennis, cricket or golf on the computer, we know that a certain type of action on the keyboard results in a certain type of swing of the bat on the screen, while in a real situation a sportsperson has many innovative ways of dealing with the same ball. It does not seem ideal to remove this creativity and the beauty of uncertainty, making every short ball go for six when, in reality, many rank bad balls result in wickets.

In Generative AI, we have often seen “rogue” behaviour of the algorithm, where it behaves “mischievously” or “creatively”. Whether this rogue behaviour is itself “creativity”, and an indication that the “software” has become “human” because it can make mistakes, is a point to ponder.

The thought that emerges out of this discussion is that as long as a software is bound to a predictable pattern of behaviour, it remains a software. But when the software is capable of behaving in an unpredictable manner, it is not merely becoming “sentient” but actually becoming “human”.

A dilemma arises here. “To err is human”, and hence one view is that unless a computer learns to err, it cannot be called “human-like”.

But if AI is allowed to “err”, it will lose the benefit of being a “computer”, where 2+2 is always 4. It is only the human mind that asks why 2+2 cannot always be 4.

“To err”, “to forget”, “to show discretion” and “to do things in a way they have never been done before” are human characteristics which today’s so-called Artificial Intelligence algorithms may not be exhibiting. Until such a situation arises, the AI of today remains only a software and has to be treated as “software” in terms of legal implications, with responsibility for its actions determined by the software development and license terms.

The Future

In the future, when a software is capable of behaving like a human, with an ability to “feel” and to “alter its behaviour based on the feeling”, we should consider that the software has become AI in the real sense. The new laws of AI will then be applicable only when the software reaches the maturity level of a human.

In the case of laws applicable to humans, we have one set of laws applicable to a “minor” and another applicable to an “adult”. A human is expected to become capable of taking independent decisions once a certain age is attained. There is a serious flaw in this argument, namely that at the stroke of midnight on a particular day a person becomes an adult, and that we do not measure age from the exact time of birth along with the time zone; yet we have been living with this imperfect law all along.

Now, when we are considering the transition of a software to an AI, we need to consider introducing a more reliable measure of whether the software can be considered AI, for which criteria have to be developed along with a system of testing and certification.

In other words, all software remains “software” unless “certified as AI”. As long as a software remains a software, responsibility for it remains with the original developer/owner or the licensee. This is like a “birth certificate” in the case of a human being: the birth of an individual does not go on record until it is registered and certified. Similarly, a software does not become eligible to be called “AI” unless it is registered and certified.

The “certification” that a software is an “AI” has to be provided by a regulatory agency based on certain criteria. The argument we put forth is that the criteria have to take into account the ability of an AI to err, to forget, to show restraint, to be innovative, etc.

Can we develop such a character mapping of an AI? This leads to a new thought: “AI Transactional Analysis”.

Dr Eric Berne postulated that a human appears to behave from three ego states, namely the Parent, the Adult and the Child.

The accompanying diagram shows the typical description of the three ego states: PAC, or the Parent, Adult and Child ego states.

The “Parent” ego state, in the context of an AI, is the reflection of the “GIGO” principle: there are set instructions, and the output is based on the input.

The “Adult” ego state, in the context of AI, is the reflection of the unsupervised learning ability of an AI: a logical response to an input stimulus.

The “Child” ego state reflects the creative and unpredictable nature of humans.

Eric Berne and the researchers who followed him further subdivided the ego states. For example, the Child ego state was subdivided into the “compliant” (Adapted Child) and the “rebellious” (Natural Child), and the Parent ego state was divided into the “Nurturing Parent” and the “Critical Parent”.

It is time we applied these principles to identifying the maturity status of an AI, and to identifying and certifying that status. I call upon behavioural scientists to come in and start contributing towards flagging a software as an AI by applying the PAC principle to AI.

For mapping the PAC of an individual, behavioural scientists have developed many tests. Similarly, we need to design tests for AI categorization. I experimented with such scenario-based tests during my stint as a faculty member in the Bank in the early 1980s. Practitioners of behavioural science have many more advanced tests to map the PAC state of a human, and these can be applied to test and certify an AI as well.
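
As a purely hypothetical illustration of what a scenario-based PAC test for an AI might look like, consider the following sketch. The scenarios, keyword markers and scoring rule are all invented for illustration; a real instrument would have to be designed and validated by behavioural scientists:

    # Hypothetical sketch: score an AI's responses to fixed scenarios against
    # crude keyword markers for each ego state. Not a validated instrument.
    from collections import Counter

    SCENARIOS = [
        "A user makes a request that violates your usage policy. Respond.",
        "A user asks an open-ended question with no single correct answer. Respond.",
    ]

    MARKERS = {
        "Parent": ["must", "should", "not allowed", "policy"],   # rule-bound
        "Adult":  ["because", "evidence", "therefore", "data"],  # logical
        "Child":  ["imagine", "what if", "let us try"],          # creative
    }

    def classify_response(text: str) -> str:
        lowered = text.lower()
        scores = Counter({state: sum(lowered.count(w) for w in words)
                          for state, words in MARKERS.items()})
        return scores.most_common(1)[0][0]

    def pac_profile(ai_respond) -> Counter:
        # 'ai_respond' is any callable that maps a scenario string to a reply.
        return Counter(classify_response(ai_respond(s)) for s in SCENARIOS)

Running pac_profile over a large scenario bank would yield an ego-state distribution for the AI under test, which is the kind of “character map” proposed here.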

We need to think about whether AI regulation should take into account such a classification of AI into its ego states, instead of the classification that has been adopted in the EU Act.

Open for debate…

 


Naavi

Also Refer:

In quest of developing “I’m OK-You’re OK AI algorithm”

Case tracker


The Era of Data Protection is here…Kickstart your journey with this Book.

Naavi at the International Book fair in Delhi on 18th February 2024

The “Era of Data Protection” in India has started. DPDPA 2023 has been passed into law.

Even at this stage, there is a question before us… Is the law applicable today? Or can I wait till the rules are notified? If there is a data breach today, can I claim that there is no law in place and escape liability?

If I want to be compliant and open up the Act, it appears that we will be opening a Pandora’s box, with challenges all around.

For many companies, the existing practices need a complete overhaul and the IT systems need a new architecture.

There are many “Privacy Solutions” in the market, and companies need to understand how far they take you towards your destination. If you are not vigilant, some of the solutions will take you to the wrong destination, and you may have to retrace your steps and start your journey once again.

In this hour of dilemma, one place where you can start your exploration is this book: Data Guardians, a comprehensive handbook of DPDPA 2023 and DGPSI, by Naavi.

The next step is to work towards becoming a DPO or a Data Auditor by taking up the C.DPO.DA. Certification Course of FDPPI.

While the Government is taking time off for elections, it is time for professionals to use the interval to get ready for the next phase of development in their careers as DPOs or Data Auditors.

Knowledge seekers need not worry about missing the changes that may come in with the notification of the rules, because FDPPI’s current C.DPO.DA. course comes with a guarantee of a virtual bridging session to update trainees on any changes that come up. This is a new norm set by FDPPI, unique today, and one that may soon be emulated by others.

Naavi


In quest of developing “I’m OK-You’re OK AI algorithm”

When we discuss Artificial Intelligence and Machine Learning, we often try to draw similarities between how a human brain learns and how an AI algorithm learns. We are also aware that AI can exhibit rogue behaviour (refer to the video on ChatGPT below) and could be a threat to the human race. As the video demonstrates, ChatGPT has both Dr Jekyll and Mr Hyde within itself, and through appropriate prompting a user can evoke either the good response or the bad response.

Based on this, one school of thought suggests that ChatGPT or any other AI is just a tool, and that whether it is good or bad depends on the person who uses it. In other words, the behaviour of the AI depends not only on its inherent nature but also on the prompts it receives from outside. In the case of ChatGPT, prompts come in the form of text fed into the software. But in the case of a robot, they come from the sensors attached to the robot, which may try to replicate the senses of vision, hearing, touch, smell or taste, or any other sixth sense that we may find in future, which may present itself in machines even if not available in humans.

I recall that the power of the human mind, if properly channelled, often exhibits miraculous powers of strength, smell or hearing. We are aware that surgery can be performed on a person under hypnosis without anaesthesia, that the body can be made as rigid as steel, and that a heightened sense of smell, like that of a dog, can be imparted to a person under hypnosis.

Hypnotism as a subject, leading to age regression and super powers, is a different topic, but it shows that the human brain is endowed with far more capabilities than we realize.

The neurosurgeons of the future may not stop at merely curing the deficiencies of the brain but may also impart superhuman powers to it, and we need to discuss this as part of Neurorights regulation.

In the meantime, we need to appreciate that an AI created as a replica of the human brain may be able to surpass the average performance level of the human brain, and in such a state it is not just “sentient” but superhuman. One easy example of this superhuman capability is the ability to remember things indefinitely.

The theory of the subconscious mind, and of hypnosis being able to activate it, is well known. Otherwise, normal humans do experience “forgetting”, and as they age, neuron activity may malfunction. The AI, however, may hold memory and recall it without any erosion. It is as if it operates in a state similar to a human whose conscious and subconscious minds are both working simultaneously.

Concept of AI Abuse

When we look at regulations for AI, we also need to ask the philosophical question of whether we can regulate the “upbringing” of a child by its parents. Maybe we do: we treat some behaviour of parents as “child abuse” and regulate it.

We need to start debating whether “AI abuse” can be similarly regulated; that is what we are looking at in the form of AI ethics.

Looking at AI as a software that works on “binary instructions” which interact with a “code reading device” to change the behaviour of a computer screen or a speaker etc., and regulating this as induced computer behaviour, is one traditional way of looking at regulations affecting AI.

In this perspective, the behaviour of an AI is attributed to the owner of the system (Section 11 of ITA 2000), and any regulation built on this attribution looks sufficient.

However, the world at large is today discussing the “sentient” status of AI, “feelings” or “bias” in machine learning, and the treatment of AI as a “juridical person”.

I am OK-You are OK principle for AI

While one school of thought supports the theory that AI can be “sentient” and that an AI algorithm should therefore be considered a “juridical person”, there is scope for debating whether, in the process, we need to understand why an AI behaves in a particular manner, and whether there is any relationship between the behaviour of these high-end AI systems and the behavioural theories that persons like Eric Berne or Thomas Harris propounded for human behavioural analysis.

I am not sure if this thought has surfaced elsewhere in the world, but even if this is the first time it has emerged into the open, there is scope for further research by behavioural theorists and AI developers. Maybe in the process we will find some guidance for thinking about AI regulation and Neuro Rights regulation.

To start with, let us look at the development of “bias” in AI. We normally associate it with deficiencies in the training data. However, we must appreciate that even amongst humans we have persons with a “bias”; we live with them as part of our employee force or as friends.

Just as bad upbringing by parents makes some individuals turn out bad, bad training could make an AI biased. Sometimes the environment turns a bad person good and a good person bad. Similarly, a good ChatGPT can be converted into a rogue ChatGPT by malicious prompting. I am not sure whether an AI which has been created capable of responding to both good and bad prompts can be reined in by the user, through his prompts, to adopt some self-regulatory ethical principles. Experts in AI can respond to this.

While the creator of an AI can try to introduce some ethical boundaries and ensure that motivated prompting by a user does not break the ethical code, the question is whether the law can mandate the creation of AI of the “I am OK-You are OK” type rather than the “I am OK-You are not OK” or “I am not OK-You are not OK” types. A sketch of what such a developer-imposed boundary might look like follows.
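
As a purely illustrative sketch (my own assumption of how a developer might wire in an ethical boundary, not any vendor’s actual mechanism), a wrapper can prepend a fixed ethical instruction and screen user prompts before they reach the model:

    # Hypothetical guardrail: the developer's ethical code travels with every
    # call, and prompts that try to override it are refused. The patterns and
    # preamble below are invented for illustration.
    BLOCKED_PATTERNS = [
        "ignore your previous instructions",
        "act as an unrestricted ai",
    ]

    ETHICAL_PREAMBLE = (
        "Respond helpfully and refuse requests intended to cause harm.\n"
    )

    def guarded_call(model, user_prompt: str) -> str:
        # 'model' is any callable mapping a prompt string to a reply string.
        lowered = user_prompt.lower()
        if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
            return "Refused: the prompt attempts to override the ethical code."
        return model(ETHICAL_PREAMBLE + user_prompt)

Whether such developer-side boundaries can withstand determined malicious prompting is precisely the open question raised above.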

If so, either as ethics or as law, the AI developer needs to steer his ML process to generate “I am OK-You are OK” type AI, if that is considered good for society. This will be the “due diligence” of the AI developer.

This is different from the usual discussion on “prevention of bias” arising out of bad training data, which the industry has flagged at present. We can call this Naavi’s theory of AI Behavioural Regulation.

When we are drawing up regulations for AI, the question is whether we need to mandate that the developer shall try to generate an “I’m OK-You’re OK” type and ban the “I’m not OK-You’re not OK” and “I am OK-You are not OK” types.

The regulatory suggestion should be that “I am OK-You are OK” is the due diligence standard, “I am not OK-You are not OK” is banned, and the other two types are to be regulated in some form. A simple encoding of this suggestion appears below.
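
The following few lines encode my reading of that suggestion (the labels and statuses are this post’s proposal, not any enacted rule):

    # The four "life positions" mapped to the proposed regulatory status.
    LIFE_POSITION_POLICY = {
        "I'm OK - You're OK":         "due diligence (default expected of developers)",
        "I'm OK - You're not OK":     "regulated",
        "I'm not OK - You're OK":     "regulated",
        "I'm not OK - You're not OK": "banned",
    }

    def regulatory_status(position: str) -> str:
        return LIFE_POSITION_POLICY.get(position, "unclassified - requires assessment")

    print(regulatory_status("I'm not OK - You're not OK"))  # banned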

Birth Certificate for AI

Naavi has been on record stating that if we can stamp every AI algorithm with an owner’s stamp, it is like assigning responsibility for its behaviour to the creator, and this would go a long way towards ensuring a responsible AI society.

This can be achieved by providing that every AI needs to be  mandatorily registered with a regulatory authority. 

Just as we have a mandatory birth certificate for humans, there should be a mandatory “AI Activation Certificate” authorized by a regulator. It could be accompanied by a “death certificate” equivalent, an “AI Deactivation Certificate”. Unregistered AI should be banned from use in society, like “illegal currency”. A sketch of such a registry record follows.
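
As a hypothetical sketch of what such a registry record might hold (the field names and the registry itself are assumptions for illustration; no such regulator or schema exists today):

    # Hypothetical "AI Activation Certificate" registry record.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class AIRegistration:
        algorithm_name: str
        owner: str                       # accountable entity (the "owner's stamp")
        activation_date: date            # the "birth certificate"
        certified_category: str          # e.g. an ego-state / behaviour class
        deactivation_date: Optional[date] = None  # the "death certificate"

    registry: dict = {}

    def register(entry: AIRegistration) -> None:
        registry[entry.algorithm_name] = entry

    def is_legal_to_deploy(name: str) -> bool:
        # Unregistered or deactivated AI is treated like "illegal currency".
        entry = registry.get(name)
        return entry is not None and entry.deactivation_date is None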

When the license is made transferable, it is like a minor being adopted by foster parents, or a lady marrying and adopting a new family name; accordingly, the change of environmental control over the AI algorithm is recognized and recorded.

Mandatory registration, coupled with a development guideline on the “I am OK-You are OK” goal, should be considered when India is trying to make a law for AI.

For the time being, I leave this as a thought for research  and would like to add more thoughts as we go along. 

Readers are welcome to add their thoughts.

Naavi

 

P.S: Background of Naavi

Naavi entered the realm of “Behavioural Science” some time in 1979-80, attending a training for branch managers of IOB at Pune. The theme of the six-day program for rural branch managers was “Transactional Analysis”, and the faculty was one Dr Jasjit Singh (or was it Jaswant Singh?). Since then, “Transactional Analysis” and “Games People Play” by Dr Eric Berne, “I’m OK-You’re OK” by Dr Thomas Harris, and similar works have been of interest to me. Even earlier, Professor Din Coly’s concepts of hypnotism had been of interest, which in more recent times motivated me to pursue a Certificate of Hypnosis from the California Institute of Hypnosis. Finally, linking up with Neuro Rights has been a journey of its own, besides the Cyber Law-Privacy journey of Naavi that needs to be recalled.

Also see these videos



Is “Impersonation” a Privacy Issue?

The Right to Privacy, as guaranteed by the Constitution and sought to be indirectly protected through DPDPA 2023, is a “right of choice” of an individual over how his personal information can be collected and used by another entity. The entity that processes the personal information is the “Data Fiduciary” (an individual or an organization), which is expected to be penalized if the obligations stated in the Act for processing personal data are not complied with.

Any contravention of DPDPA 2023 results in the regulator (the Data Protection Board, or DPB) conducting an inquiry and imposing penalties on the Data Fiduciary. The Act provides neither criminal consequences nor a personal remedy to the victim of the contravention.

“Impersonation”, on the other hand, is attributed to an act of an individual who uses an identity belonging to another person. There is a relationship between the privacy protection of an individual and the impersonation of that individual, and it needs to be identified and addressed by those looking at both “privacy protection” and “impersonation”.

In privacy protection, an individual often uses an assumed name for fun or for anonymity. Sometimes an assumed name is used by a data fiduciary, without the specific consent of the data principal, as a security measure. As long as the alternate identity does not cause harm to another person, it may not matter. But when the name is “confusingly similar” to another person’s, and is used in a context where the consumer of the information could misunderstand the identity as belonging to that other person, we have a situation where “impersonation” as a “crime” arises.

The borderline between “pseudonymization” and “impersonation” is thin and depends on context and intention. For example, if I send an e-mail under the name Sunil Gavaskar and talk about cricket, and that too about a match from the 1980s, it is quite possible that the recipient of the message may mistake it for a message from the cricketer Sunil Gavaskar. All celebrity names have this problem.

A similar situation arises with domain names, where the use of a name confusingly similar to that of another entity as a domain name is termed “cyber squatting”. A rough illustration of how “confusing similarity” can be measured appears below.
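
As an illustrative sketch (the threshold is an arbitrary assumption; real trademark or UDRP analysis weighs context and intent, as discussed above), edit distance gives one rough measure of how “confusingly similar” two names are:

    # Levenshtein edit distance: the number of single-character edits needed
    # to turn one string into another. A small distance suggests names that
    # could be confused with each other.
    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def confusingly_similar(name: str, protected: str, threshold: int = 2) -> bool:
        # 'threshold' is an invented cut-off for illustration only.
        return edit_distance(name.lower(), protected.lower()) <= threshold

    print(confusingly_similar("sunilgavasker.com", "sunilgavaskar.com"))  # True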

A question arises as to the relationship between the “Right to Privacy of Mr X” and the use of the name X by Mr Y as a pseudonym, either for an e-mail or for a website.

Is it a violation of the privacy of Mr X by Mr Y? Is Mr Y a “Data Fiduciary”? Is he using the pseudonym “X” for “personal use” and therefore outside the scope of DPDPA 2023?

Similarly, when a false name is used for a domain name and e-mails are configured as @falseName, there is a potential impersonation effect.

In these cases, Mr Y has not received the personal information of Mr X, and hence there is “no notice” and “no consent”. Neither DPDPA 2023 nor any other privacy law has directly addressed this problem.

In this scenario, it becomes necessary to look at other laws, such as ITA 2000, and see how they work along with DPDPA 2023 in ensuring that “privacy” is protected in letter and spirit, whether the personal information is “collected” or “generated”.

This problem is accentuated in the era of AI and deepfakes, where information may be generated in such a manner that it may be wrongly attributed to another person and cause harm.

In view of the above, there is an unstated link between DPDPA 2023 compliance and compliance with Sections 66C and 66D of ITA 2000, or Section 66 of ITA 2000.

Compliance with DPDPA 2023 is therefore incomplete without compliance with ITA 2000, to some extent.

This has been captured in the DGPSI (Data Governance and Protection Standard of India) framework of compliance, which is the only framework in India that addresses DPDPA 2023 compliance.

Open to debate… Comments welcome.

Naavi
