ChatGPT: Destroying Trust in the Internet

When the Internet was first introduced along with the World Wide Web, the world was excited. We all thought an “Information Super Highway” had been created that would bring the Encyclopaedia Britannica to our desktops. No doubt this happened, and for some time the WWW and the information available under a GUI were the backbone for many of us converting that information into more useful niche-level knowledge. Most of the time in such exercises, the WWW fed us some information which we humans interpreted, gave new meanings and developed into value-added information. Naavi.org creating “Cyber Jurisprudence” is one example of this.

The only thing we were worried about at that time was the presence of “Viruses” that would bloat the hard disk and make it crash unless they were removed. We worried that some anti-virus software companies might be deliberately creating such viruses to boost their sales. The Internet thrived and e-Commerce gained popularity. With this, all our financial transactions got drawn into the Internet world, giving scope for the “Virus” to become a “Trojan” and a malware that could commit financial crimes.

At that time, one of the suggestions I used to make was to keep the physical Banks separate from the Internet and create new e-Banking channels under the laws of e-Commerce instead of the laws of Banking. I advocated that Banks should open Internet Banking accounts separate from physical Banking accounts so that the risks could be contained. But technology enthusiasts did not agree. They combined Internet Banking with physical Banking, and all Internet risks became risks in Banking transactions for everybody. The scope for Anti-Virus and Anti-Malware tools expanded. These risks are now reflected in the form of Phishing, Ransomware etc.

Further, the development of Social Media made e-mail based interactions much more exciting and brought real-time discussions into our society. We all got addicted and became part of the “Peer-to-Peer Media”. We started believing Twitter to be more reliable than the newspapers or the TV.

As a result of these developments, we have replaced the trusted systems of news and the trusted systems of financial transactions in society, and made ourselves dependent on Internet-based services which are fraught with greater risks.

Any attempt at increasing security in terms of “Encryption” soon created its own monster, such as Crypto Currency, which started destroying the economic system and funding Cyber Crimes and Cyber Terrorism.

The use of “Bots” in messaging services destroyed the reliability of Twitter as a source of user-generated news, since it became a purveyor of fake news and created a manipulated media.

But all these problems seem insignificant when we consider the latest threat hitting us, namely “ChatGPT”.

ChatGPT has become a craze, but it is likely to become one of the biggest menaces to society soon.

The US seems to be going crazy with the adoption of ChatGPT to replace jobs and to generate content for the web, which itself is the feedstock for training the new versions of ChatGPT. ChatGPT will thus be trained on its own outputs, and if its output is inefficient or wrong, the errors will only get reinforced and future outputs will become more and more inefficient and untrustworthy. The US courts seem to believe that the Judiciary can use ChatGPT to write judgements, and the US Bar Council may think that robots can become lawyers in the Court.
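The feedback loop described above can be illustrated with a toy simulation (this is a hedged sketch, not ChatGPT's actual training pipeline): we pretend a “model” simply learns the mean and spread of its training data, and we assume, as generative systems tend to do, that it slightly favours its most likely outputs when generating (modelled here as a temperature below 1). When each new generation trains on the previous generation's output, the diversity of the data shrinks cycle after cycle.

```python
import random
import statistics

def fit(data):
    """'Train' a toy model: just learn the mean and spread of the data."""
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, stdev, n, temperature=0.9):
    """'Generate' new content: sample near the learned mean. A temperature
    below 1 models the tendency to favour the most likely outputs."""
    return [random.gauss(mean, stdev * temperature) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]  # human-written "web"

spreads = []
for generation in range(10):
    mean, stdev = fit(data)            # retrain on the current web content
    spreads.append(stdev)
    data = generate(mean, stdev, 2000) # next model trains on this output

# the spread (diversity) of the data shrinks with every retraining cycle
print([round(s, 3) for s in spreads])
```

In this sketch, the loss of diversity compounds geometrically: after ten generations the data covers well under half its original spread, which is the mechanical version of the article's point that errors and narrowness get reinforced when output becomes feedstock.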

ChatBots will therefore rule the web world, and it will become difficult to distinguish real data from ChatGPT-created data.

Today there is an article in The Register titled “AI-generated art can be copyrighted, say US officials - with a catch”. According to this article, US authorities may recognize “Copyright” if content is created by humans using ChatGPT. Considering the skill involved in asking questions of ChatGPT, it appears that the US authorities are willing to recognize “Dependent Creativity” as copyrightable. In this respect, ChatGPT will be treated just like any other tool, such as Word or PowerPoint, that helps in creating literary work with automatic formatting, spelling corrections etc. This view will be contested, but soon the supporters of ChatGPT will override any counter-views and provide acceptability to ChatGPT as a tool that can be used to create copyrightable works.

The fact that these developments are creating existential threats to the human race is being forgotten in the excitement over this “Innovation”. Just as in the early days of Bitcoin, all of us were so enamoured of the technology behind Bitcoin that, except for crazy persons like the undersigned, the world was bowled over by Bitcoin and let it become a Frankenstein monster. Today regulators are struggling to rein in the adverse impact of private Crypto Currencies and their ability to corrupt decision makers and the Judiciary. The Indian Supreme Court itself supported Bitcoin at one point of time, and had it not been for the RBI with its current generation of policy makers, Bitcoin would have become part of our economic system by now, since the bureaucracy, politicians and Judiciary had already been compromised to different extents.

A similar situation is now developing in the ChatGPT and AI area. The regulators are hesitating to control this technological innovation, and we are sinking deeper and deeper into a hole with each passing day, likely to reach a point of no return soon.

I have already flagged this existential threat of ChatGPT going rogue in my earlier articles highlighting the Kevin Roose interview. Now there is another example of how ChatGPT is misbehaving and already showing signs of rogue behaviour. I want everyone to study the following article in The Register.

A detailed study of this article would reveal that the questions I have been raising about why “Sydney” responded the way it did to Kevin Roose are also being raised by others in the world. The author of the above article, Alexander Hanff, has highlighted the fact that ChatGPT declared him dead and invented evidence to substantiate its reply. In the Kevin Roose case we rationalized the rogue behaviour as the mischief of a creative ChatBot hallucinating to maintain the continuity of the conversation. But the Alexander Hanff conversation reflects a “Malevolent nature”, which is a revelation of a criminal mind inside ChatGPT.

How the benign program developed a criminal mind is for the technologists to explain. But for observers of the AI world who hold the balanced view that technological innovation must be weighed against the mitigation of risks to society (let us call these AI-baiters the “AI Heavy Water”), the behaviour exhibited by the current version of ChatGPT is threatening enough to raise an alarm.

The alarm is that we are already getting late in introducing AI regulation. We need to regulate the development of AI the way we control fission and fusion for energy production in reactors, rather than allow the uncontrolled fission/fusion of the bombs.

I have been suggesting that we should start our regulation in India by interpreting ITA 2000 in a specific manner, introducing accountability for the developers of ChatGPT-type AI tools and making them responsible as Intermediaries for any adverse effect created by their tools.

In the meantime, some consultants, such as Mrs Karnika Seth, have developed a full-fledged draft law for AI regulation itself. I am providing a link to the draft law, which can be discussed separately.

The development of a draft law indicates that if the Government wants to start acting on AI regulation, it can take off quickly. I hope this will be done as soon as possible.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. A pioneer of concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.