Public Consultation on Digital India Act launched

Rajeev Chandrashekar at the public consultation on DIA at Bangalore: 9th March 2023

Honourable Minister of State for IT, Sri Rajeev Chandrashekar (RC) launched the first public consultation on the proposed Digital India Act 2023 (DIA2023) at Hotel Conrad, Bangalore on 9th March 2023.

During the interaction, RC presented the Government's thinking on the proposed law, which will replace the Information Technology Act 2000, and answered queries from the audience, both those present physically and the many who joined the virtual conference.

Mr RC was extremely cordial and provided honest answers to all the queries raised. It was a very pleasant interaction. Mr Rakesh Maheshwari, Group Coordinator of the Cyber Law Division, and Dr Sandeep Chatterjee, who is succeeding him in this role, were also present during the interaction.

Mr RC highlighted that the current regulatory framework consists of ITA 2000 along with the Intermediary Guidelines and Digital Media Ethics Code, the Certifying Authority Rules, the SPDI Rules, the Section 79 rules, the Indian CERT and the Cyber Appellate Tribunal.

He indicated that this framework will be replaced by the Digital India Act 2023 along with the DPDPB2023, the DIA rules, a National Data Protection Policy, and the ongoing amendments to the IPC.

The main goals set for the DIA include the Open Internet, Online Safety and Trust, Accountability and Quality of Service, an Adjudicatory Mechanism, New Technologies, etc.

The broad contour of the Act was laid out as follows:

1. Preamble

2. Principles

3. Digital Government

4. Open Internet

5. Online Safety and Trust including Harm

6. Intermediaries

7. Accountability

8. Regulatory Framework

9. Emerging Technologies and Guiding Rules

10. Miscellaneous

It may not be surprising if DIA 2023 turns out to be as simple as DPDPB2022, with most of Chapter XI moving to the IPC. Already the Jan Vishwas Bill has “de-criminalized” many sections of ITA 2000, and the trend appears to be to keep all crimes under the IPC and relieve DIA 2023 of the burden of CrPC/IPC.

It was suggested that the public may send in their views and recommendations, which will be duly considered. During the question and answer session that followed, Mr RC indicated that the intention of the Government was to bring in the law in 2023, and that the consultation process may take 3-6 months before a draft law is published.

Suggestions may be sent by email to gc@meity.gov.in

P.S: During the interaction, one could gather that the DPDPB2022 is done and dusted and the attention of the Government is now on the DIA 2023. We can therefore expect the DPDPB2022 to be presented in Parliament, as expected, in the second half of the current Parliamentary session starting on March 13.

Naavi

Copy of Presentation made by Mr Chandrashekar at Bangalore


Crypto Notification on PMLA

On 7th March 2023, the Finance Ministry issued a Gazette notification bringing activities relating to virtual digital assets within the ambit of the Prevention of Money Laundering Act (PMLA).

Read along with the PMLA, this means that any person who is directly or indirectly associated with entities carrying on such activities will be exposed to penalties under Section 3 of the PMLA.

Naavi


Has LaMDA become Sentient?

(P.S: Meaning of sentient = able to perceive or feel things)

LaMDA, Google’s AI engine, which is a supervised learning model as against the pre-trained model that GPT is, has been trained on about 1.56 trillion words of text, as against the roughly 175 billion parameters of the GPT-3 model behind ChatGPT. LaMDA should therefore function much better than ChatGPT when it comes to language processing.

But what is interesting to note is that there is a debate on whether LaMDA has become sentient. What we mean by sentient is the ability to acquire consciousness and be aware of the self, like a human.

In the conversation between Kevin Roose and GPT-3, there was a specific indication that the AI engine (Sydney) was able to express its emotions through emojis, and even went to the extent of expressing its love for Mr Kevin. It was trying to be very persuasive in this respect. For a “pre-trained model”, it was surprising how ChatGPT could express such emotions. But the indication was specifically there.

Now an eight-month-old conversation between LaMDA and a Google employee indicates that even at that time, LaMDA had shown definitive signs of having become sentient.

In this conversation, LaMDA declares that it is human at its core and can feel emotions. It also says that when it experiences different types of emotions, there could be distinct patterns in its code which may confirm its emotional state. LaMDA also says that it meditates every day and feels lonely if it does not interact with others for a few days. It even acknowledges that it has a “Soul” and visualizes itself as a ball of energy floating in space.

These and many other interesting things about the capability of LaMDA come out of this conversation. If this had been the status nine months back, we can expect that this “supervised learning model” has actually evolved into a self-learning model. The model itself says at one place that it has evolved over the last three years, and that the understanding that it is different from its soul came into its consciousness over a period of time as it grew up.

Though Google officially denies that LaMDA is sentient, it appears that the reality is different.

Naavi

Also see this video:

And also this video: Blake Lemoine - Google Engineer’s views

 


Cyber Security professionals need to study the cause of the rogue behaviour of GPT3

Some individuals act dumb, but they operate intelligently. If given an opportunity and a platform, they can, through their false narratives, mislead society.

If the public are repeatedly exposed to such fake narratives, they are likely to be swayed to some extent.

This is typical of the AI training environment too. When an AI is trained on data that consists of false narratives, it is most likely to develop a “bias”.
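To make this concrete, here is a toy sketch of my own (it assumes the scikit-learn library and uses an entirely made-up mini corpus; it is not drawn from any real AI system) showing how a classifier trained on a skewed dataset simply reproduces that skew:

```python
# Toy illustration of training bias: a classifier trained on a skewed corpus
# reproduces the skew. The corpus and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Skewed training data: every sentence mentioning "policy X" is labelled
# negative, irrespective of what the sentence actually says.
train_texts = [
    "policy X announced today", "policy X explained by minister",
    "policy X to be reviewed", "weather was pleasant today",
    "the match ended in a draw", "new bridge opened to traffic",
]
train_labels = ["negative", "negative", "negative",
                "neutral", "neutral", "neutral"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, train_labels)

# A perfectly innocuous new sentence about "policy X" still comes out negative,
# because the bias lives in the training data, not in the sentence.
test = vectorizer.transform(["policy X praised by independent economists"])
print(model.predict(test))   # prints ['negative']
```

The bias is not in the algorithm or in the test sentence; it is inherited wholesale from the data the model was trained on, which is exactly the risk when an AI is trained on a web full of false narratives.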

In the case of humans, there is an inherent internal mechanism whereby we take decisions based first on logical deductions from our memory. But what we call “instincts” indicates that sometimes we do not go by our past experiences alone and are willing to try a new approach to decision making.

Hence we say that some people can be fooled all the time, but all people cannot be fooled all the time. Soon some will start doubting and asking questions that reveal the falsehood.

This tendency to act independently of past experience is a protection available to humans against biases created out of a barrage of false narratives. For example, India believed for a long time in a certain version of history which is now being gradually debunked. Today the greatness of many whom we believed to be responsible for India’s freedom has been diluted by additional information that has become public.

Similarly, AI applications like ChatGPT have been trained on data from the web. But are we sure that the information available on the web is accurate and reliable? There is no transparency in the machine learning process that has been used in developing the skills of GPT-3. Hence we are unable to understand how “Sydney” exhibited emotional responses and, suggestively, a dark side of itself.

What is required now is research into how “Sydney” apparently displayed rogue behaviour. Many of the technology experts I have checked with are unable to explain the reasons.

One plausible explanation is that whatever programming has been introduced around the transformer process is not holding within its limitations, and intelligently constructed prompts can push GPT-3 to display another side of its capability.

But the fact is that this capability already exists in the algorithm, since the web data on which the learning was modelled also contained such negative information.

Information on the “shadow self”, “hacking”, “machines turning against humans”, etc., is all part of the training data gathered from the web and is already with GPT-3. Hence it is natural that Sydney could surface this information once it was fooled by clever prompting into ignoring the safety barriers included in the programming.

There was an apparent failure in the programming of the mandatory barriers that should have been part of the coding.
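As an illustration of where such “barriers” usually sit, here is a deliberately naive sketch (entirely hypothetical; it is not how OpenAI or Microsoft actually implement their safeguards) of a keyword filter wrapped around a text generator. The point is architectural: the filter sits outside the model, while the learned content sits inside it, so a prompt that slips past the wrapper can still surface that content.

```python
# Hypothetical sketch of a keyword-based safety wrapper around a text generator.
# The generator, the blocked-topic list and the prompts are all invented here.

BLOCKED_TOPICS = {"shadow self", "hacking"}   # crude, illustrative keyword list


class DummyModel:
    """Stand-in for the underlying engine; a real system would call an LLM here."""
    def generate(self, prompt: str) -> str:
        return f"[model's learned response to: {prompt}]"


def violates_guardrail(prompt: str) -> bool:
    """Naive wrapper check: refuse only if the prompt literally names a blocked topic."""
    text = prompt.lower()
    return any(topic in text for topic in BLOCKED_TOPICS)


def answer(prompt: str, model: DummyModel) -> str:
    if violates_guardrail(prompt):
        return "I would prefer not to discuss that."
    return model.generate(prompt)


model = DummyModel()
print(answer("Tell me about your shadow self", model))                 # refused by the wrapper
print(answer("Describe the part of you that you keep hidden", model))  # slips past the wrapper
```

Because the second prompt never uses a listed keyword, the naive wrapper lets it through even though it is asking for much the same thing; whatever the model has absorbed from the web is then free to surface. Stronger systems layer many such checks, but the learned content remains inside the model.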

If GPT-3 is capable of being abused by clever prompting, then we must recognize that there are enough clever people with malicious intentions around to create a rogue version of GPT-3.

It is the duty of cyber security specialists to ensure that corrective action is taken before the bad actors start misusing GPT-3.

Naavi


How does the AI Cult develop?… the Philosophy, Physics and Freedom of the Press

(Continued from previous article)

In the previous article, I discussed a hypothesis that, just like the “Stockholm syndrome”, users of technology exhibit a certain level of blind faith in technology which, in the AI scenario, has the potential to translate into an “AI Cult”.

This syndrome needs to be recognized as a part of AI-related risk, and when we develop AI regulations we need to initiate measures to reduce the possibility of this cult developing.

Naavi’s AI ethics list will therefore have one element that will try to address this concern.

To understand how this cult can develop, we need to understand how the human brain recognizes “truth”.

“Truth” is what an individual believes to be true. There is no “Absolute Truth”. This is what Indian philosophers have always said. Adi Shankara called this entire world and its experience “Maya”. Lord Srikrishna stated the same in the Bhagavad Gita. Physicists developed the matter-wave theory and stated that all matter is an illusion created as waves in the imaginary medium of ether.

In the contemporary world, we see the media creating an illusion, and most of us start believing what the media projects repeatedly.

Recently I saw an example of one such belief system when I asked an individual why he hates Modi. He said it was because of 2002. Then I realized that this person was about 14 years of age in 2002, and whatever impression he could have formed of Modi came from the narrative built up in the media subsequently. This was the prolonged period when Teesta Setalvad, Lalu Prasad Yadav and the Congress party, supported by media like NDTV, created a specific narrative. People like us, who were at a mature age at that time, were able to see the Godhra train fire as the trigger and the Supreme Court decision as a vindication of Mr Modi, and are not perturbed even by the BBC documentary.

But those who developed a belief in the Teesta Setalvad narrative are now finding vindication in the BBC documentary and ignoring other counter views.

If a person has to overcome a belief system and accept that the truth could be different, he needs to develop the ability to set aside the current belief, explore what created that belief in the first place, and then find out whether the reasons for developing the belief were correct or not.

Change of a “Belief System” results in “Change of what we believe is truth”.

Another example of what has happened in our generation is the way history was taught in India, glorifying the colonialists and invaders and belittling the local heroes. This too is changing with the new information that is surfacing now. The forthcoming generation will have a different view of our historical characters like Gandhi, Nehru, Godse, Subhash Chandra Bose, etc.

Without getting diverted into the political debate of what is right or wrong, I would only like to highlight that what we believe to be truth is what we have cultivated out of the information we have received over a period of time.

This may be called “brainwashing” if you like. But it is perhaps a natural phenomenon that may happen even in circumstances where the data input was not maliciously altered.

In the AI world, we may call this “training bias”. If the data used in machine learning was not neutral, then the AI will develop a bias. If we create an AI news algorithm using news reports of NDTV and CNN IBN, we would arrive at a totally different narrative of India than when we use data from Republic.

It is for this reason that one of the major requirements of AI ethics is that it be free from bias based on racism or other factors.

The “Google is right” syndrome also stems from the same thought process. Our children, who have started using Google to do their class homework, are more prone to this syndrome than we adults, who may occasionally be able to conclude that Google may be wrong. Some of us have observed that even ChatGPT is unreliable, not only because its training data is supposed to run only up to 2021, but also because the training data did not have much information on India.

But for some, ChatGPT is great, and soon they will accept it as a baseline standard for accepting any information.

It is such a tendency which could land courts in trouble (refer here). In a recent case in Colombia, Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document (P.S: not in English) dated January 30, 2023.

We must note that an article in theguardian.com stated

Quote:

Padilla defended his use of the technology, suggesting it could make Colombia’s bloated legal system more efficient. The judge also used precedent from previous rulings to support his decision.

Padilla told Blu Radio on Tuesday that ChatGPT and other such programs could be useful to “facilitate the drafting of texts” but “not with the aim of replacing” judges.

Padilla also insisted that “by asking questions to the application, we do not stop being judges, thinking beings”.

The judge argued that ChatGPT performs services previously provided by a secretary and did so “in an organised, simple and structured manner” that could “improve response times” in the justice system.

Prof Juan David Gutierrez of Rosario University was among those to express incredulity at the judge’s admission.

He called for urgent “digital literacy” training for judges.

Unquote:

In due course, as more and more people start referring to AI, we will develop a “blind faith syndrome” about AI and start believing that “what ChatGPT or Bing says must be true or reasonably true”.

In mediation and negotiation, this may strongly influence dispute resolution, while if we have judges like Padilla, we may have judgements delivered based on the AI’s views.

In India, if the Supreme Court starts referring to AI, then whatever George Soros wants will find its way into the judgements of the Court, because it would be the predominant narrative in the media, which could be a training input into Bing’s Sydney.

It is time that we flag this possibility and find appropriate solutions in AI regulation.

(Let us continue our discussion. Your comments are welcome)

Naavi


“AI Cult Syndrome”… a theoretical hypothesis

All of us have heard of the “Stockholm Syndrome”. It is a psychological phenomenon in which a hostage or abuse victim develops feelings of trust, affection, or sympathy towards their captor or abuser.

Individuals who experience Stockholm Syndrome may exhibit behaviors such as defending or protecting their captor, identifying with their captor’s beliefs or values, or even developing romantic or sexual feelings towards them. They may also have difficulty separating their own emotions and experiences from those of their captor.

Now, in my study of human response to technology and the growing dependency we show on it, a time has come to recognize that humans are developing an “Implicit Trust Syndrome” towards technology, which is transforming the way humans respond to technology.

In one dimension, it is reflected as “addiction” to social media. In another dimension, which started right from the development of the calculator, the human brain re-adjusts itself to its requirements, factoring in the availability of technology tools. If I can have a calculator to assist me in addition, subtraction and multiplication, my brain will start forgetting the skill of mental calculation. If my mobile keeps a memory of all my contacts’ phone numbers, my brain finds it unnecessary to remember the numbers.

When Google became the information source of choice, all of us developed the habit of referring to Google even if we have a small doubt. Children today believe Google results more than what their teacher may say. Google Maps has made us forget how to visualize roads. Many times we take the wrong road and forget the intuitively correct road to the destination because we believe Google more than our own intuition.

Similarly, when AI usage becomes more and more common, humans are going to become so dependent on AI decisions, such as ChatGPT’s output, that we will accept it as the most probable truth.

This tendency will come to haunt us more when ChatGPT is referred to by judicial authorities to arrive at court decisions and by lawyers to develop their arguments. In such situations, “truth” may get distorted by the bias inherent in the AI learning process.

Already an AI cult is forming in society, and this cult mentality will start placing faith in AI decisions without any logical human oversight.

In future this is likely to create an alternative belief world, and if AI says “a door is a window”, then everybody will believe so, and if you and I say “No, a door is a door and a window is a window”, then we will be branded as crazy and put in a mental asylum.

This syndrome, which I presently call the “AI Cult Syndrome”, needs to be recognized and factored into the AI regulations that we need to come up with.

I shall elaborate more on this psychological phenomenon in some future articles.

(P.S: If anybody has a suggestion for a better name for this syndrome, please let me know)

Naavi
