Extreme Risks in AI..Experts Warn of catastrophe ..

The recent developments in GPT 5, has prompted Google itself to sound out a warning that there are inevitable catastrophic consequences in the AI developments of recent times. While the Google Deepmind says that their teams are working on echnical safety and ethics it is not clear if the creators of AI are themselves aware what a monster it can turn out to be or has already turned out to be.

This following video explains the extreme risks presented by GPT 5 and is a must watch for all of us.

What these video highlights is that the current models of AI can learn by themselves and could acquire dangerous capabilities that include committing Cyber offences, Persuasion and Manipulation of human beings, etc. (Read : Model evaluation for extreme Risks)

While we can pat the back of the technologists who are devloping self learning and Adaptive robots models.

AI is going through the following levels of intelligence

Limited Memory – The first level of AI uses limited memory to learn and improve its responses. It absorbs learning data and improves over time with experience, similar to the human brain. 

Reactive – The second level of AI has no memory and predicts outputs based on the input that it receives. They will respond the same way to identical situations. Netflix recommendations and Spam filters are examples of Reactive AI. 

Theory of Mind – This is currently the third level of AI and understands the needs of other intelligent entities. Machines aim to have the capability to understand and remember other entities’ emotions and needs and adjust their behavior based on these, such as humans in social interaction. 

Self-aware – This is the last level of AI, where machines have human-like intelligence and self-awareness. Machines will have the capacity to be aware of others’ emotions and mental states, as well as their own. At this point, machines will have the same human-level consciousness and human intelligence.

It is believed that research models have already reached level 3 and entering level 4 of self awareness. The early rogue behaviour of Bard in the Kevin Roose interview are an indication that at least these AI models can talk about being “Self Aware” whether they are really aware or not.

Unless we take the extreme Indian philosophical outlook that World will go the way God has ordained it to go and the Kaliyug has to end some time, it is evident that the end of human race may be within the next generation and the “Planet of Intelligent Humanoid Robos” is descending on us.

If we however take a more optimistic outlook and not panic, we should focus on how to prevent or delay the potential catastrophe that may affect our next generation before the Global Warming or a Nuclear war can have their impact.

The global leaders including Elon Musk and Sundar Pitchai for records have flagged the risk and asked Governments to act by bringing in regulations. Some have called for stopping all AI related research for some time.

But it is time for us to act. Let us start thinking about regulating the development of AI on global scale so that we can survive first till the next century.

I therefore call upon our Minister of State Mr Rajeev Chandrashekar to initiate necessary steps to bring in AI regualtion in India immediately.

While the Digital India Act may try to address some of the issues, it is necessary to use the current ITA 2000 and thereafter the proposed DPDPB 2022/23 to bring in some regulations immediately.

Naavi/FDPPI would try to adddress some of these issues and develop a note to be placed with the Government for consideration.

Volunteers who would like to conribute their ideas are welcome to send their views to Naavi.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India and is presently working from Bangalore as an Information Assurance Consultant. Pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law Education institution. He now has been focusing on the projects such as Secure Digital India and Cyber Insurance
This entry was posted in Cyber Law. Bookmark the permalink.

2 Responses to Extreme Risks in AI..Experts Warn of catastrophe ..

  1. Sridhar Kalyanasundaram says:

    Naavi sir – letting my imagination run as a part of ‘building risk scenarios’, I visualize robotic-surgery systems with AI, performing procedures on patients that ‘they’ diagonize, and the resultant outcomes being subject to judicial scrutiny – Who would be to blame for an unintended outcome? – the original AI scripters (or are they called code-developers?), or the Hospital that provided the robotic-infrastructure? …. do we even understand what would be happenning in such scenarios?

    • In India the law does not recognize AI as a juridical entity. It’s activity is attributable to the person who uses it as a tool. In this case the surgeon or his employer the hospital. The hospital will have to take due measures of risk assessment to ensure that the tool is secure from all aspects before buying and also use indemnity protection against the software supplier. While medical negligence may have to be boarne by the surgeon back to back indemnity may help. An audit and assurance certificate is required before accepting such software. Over and above all this, just as we take the consent from the patient before surgery, the consent should include the risk of failure of AI.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.