Should AI ethics include “Forgetting”?… Towards AI Regulation in India

This is a continuation of our discussion on “Towards AI Regulation in India”.

Presently, any AI algorithm is a set of computer instructions that makes a software or hardware system function automatically. The automated functioning of the AI device is governed by the provisions of the Information Technology Act, 2000 (ITA 2000), Section 11 read with Section 2(1)(za), which inter alia state as under.

Quote

“An electronic record shall be attributed to the originator if it was sent by the originator himself or by a person who had the authority to act on behalf of the originator in respect of that record or by an information system programmed by or on behalf of the originator to operate automatically”

“Originator” means a person who sends, generates, stores or transmits any electronic message or causes any electronic message to be sent, generated, stored or transmitted to any other person but does not include an intermediary;

Unquote

In view of the above, at present the activity of any AI algorithm would legally be the responsibility of the owner of the algorithm. If the algorithm is embedded in a device such as an autonomous vehicle, an automated credit rating mechanism, a prosthetic device or a humanoid robot, the responsibility continues to rest with whoever owns the system and markets it to the consumer. Since the functioning of the final device is a combination of multiple systems, the suppliers of sub-systems become contractually related to the owner of the device, against whom any claim would finally lie.

It was in this context that we discussed the responsibility for illegal activities of robots like Sophia, which was created by a Hong Kong firm and granted citizenship by Saudi Arabia (refer to this earlier article).

However, law is easier to implement when it is clear. Otherwise, if a person approaches the Adjudicator under Section 46 of ITA 2000, the Director General of CERT-In, or a Court and claims damages arising from the actions of ChatGPT or any other AI algorithm or robot, it is difficult to imagine how the judicial authority would respond.

We therefore need MeitY to immediately designate an “Artificial Intelligence Authority of India”, starting with the designation of an official within MeitY under the powers available in ITA 2000. This would be similar to the “Controller for Online Games”, who may be appointed under a gazette notification.

The first step the AI regulator should initiate is the creation of a registry of AI developers, with registration made mandatory. This means there must be consequences for non-registration, which need to be spelt out in the notification.

This will obviously be opposed, and it has to be followed through as the first battle of AI regulation.

A similar development happened in Bitcoin/Crypto regulation, which finally resulted in the CBDC as the officially approved cryptocurrency and the de-recognition of all other private cryptos. Similarly, AI developed by registered developers would be “officially recognized” algorithms carrying a “White” label, while others would be considered “Grey” or “Black” labelled, depending on specified criteria.

We can start with this labelling, observe how society accepts it over time, and take further action as and when required.

But the “White” AI developers will be those who voluntarily submit themselves to the ethical boundaries set by the registration, and the principles of ethics already being discussed worldwide can be incorporated into the guidelines one by one.

One of the requirements we have already discussed in this regard is that every AI developer shall be accorded a unique registration number by the authority, which shall be embedded in the developer’s work (one possible mechanism is sketched below).
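How such a number could be “embedded” is an implementation detail the notification would have to specify. A minimal sketch, assuming a hypothetical registration number format and a simple manifest file that binds the number to a specific model artifact through a hash (both the format and the mechanism are illustrative assumptions, not a prescribed standard):

```python
import hashlib
import json

def label_artifact(model_path: str, registration_no: str) -> dict:
    """Bind a regulator-issued registration number to a model file.

    `registration_no` (e.g. "AIAI-2023-000123") is a hypothetical format;
    no such authority or numbering scheme exists yet.
    """
    # Hash the artifact so that tampering with either the file or the
    # label becomes detectable.
    with open(model_path, "rb") as f:
        artifact_hash = hashlib.sha256(f.read()).hexdigest()

    manifest = {
        "developer_registration_no": registration_no,
        "artifact_sha256": artifact_hash,
    }
    # Ship the manifest alongside the model artifact.
    with open(model_path + ".manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```

A digitally signed manifest would be stronger than a bare hash, but even this simple form would let a regulator or a consumer verify which registered developer stands behind a given artifact.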

Additionally a set of ethical guidelines would be applicable for the development.

The first set of such principles was proposed by Isaac Asimov in his 1942 short story “Runaround” and consisted of the following Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Once AI started developing, experts began discussing the ethical principles to be followed by AI research and development teams, and several sets of principles have emerged.

One such set is the “Asilomar AI Principles”, developed by a group of experts in AI and ethics at the 2017 Asilomar Conference on Beneficial AI. They provide guidance on how to ensure that AI is developed and used in a way that benefits humanity and avoids unintended harm. The set consists of the following 23 principles:

  1. Research Goals: The goal of AI research should be to create not only a technology but also a world in which the technology is safe and beneficial.
  2. Long-Term Goals: Long-term, society-level planning is necessary, including global and national strategies, research programs, standards and regulations.
  3. Importance of Value Alignment: It is crucial to align the goals and behavior of AI systems with human values throughout their operation.
  4. Control: Every AI system should have accessible and understandable control mechanisms, so that humans can align the goals and behaviors of the system with human values.
  5. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  6. Personal Privacy: The privacy rights of individuals must be protected.
  7. Sharing: The benefits of AI should be shared widely.
  8. Openness: AI research and development should be open, transparent and accessible.
  9. Collaboration: Collaboration between researchers and stakeholders is necessary to ensure that AI has a positive impact.
  10. Responsibility: Researchers and developers of AI systems have a responsibility to ensure that their systems are robust and verifiable and to avoid creating systems that are a threat to humanity.
  11. Safety: AI systems must be safe and secure throughout their operation.
  12. Failure Transparency: If an AI system causes harm, it should be possible to find out why.
  13. Responsibility for AI Systems: Those designing, building, deploying, or operating AI systems are responsible for ensuring that they do what they are intended to do and do not cause harm.
  14. Value Alignment: The beliefs, values and preferences of AI systems should be aligned with human values and ethical principles.
  15. Human Control: There should be a way for humans to disengage or overwrite AI systems if they are causing harm.
  16. Non-subversion: The power granted to AI systems should be used to preserve human values and to avoid subverting these values.
  17. Long-Term Responsibility: Organizations and institutions developing or deploying AI systems have a long-term responsibility to ensure their alignment with human values.
  18. Importance of Basic Research: Basic research is necessary to ensure that AI systems are transparent, controllable, and predictable.
  19. Risks and Benefits: The risks and benefits of AI should be systematically studied and understood.
  20. Diversity: Diverse perspectives and approaches are necessary to ensure that AI benefits humanity.
  21. Human augmentation: AI has the potential to significantly enhance human capabilities, but it is important to ensure that such enhancements are safe and beneficial.
  22. Ethics and Values: The ethical and moral implications of AI must be carefully studied and considered.
  23. Responsibility of AI Developers and Deployers: AI developers and deployers have a responsibility to ensure that AI systems are developed and used in a way that is aligned with human values.

Another such set is the “Turin Principles”, developed in 2018 by a group of experts in AI, drawing on earlier sets such as the Asilomar Principles.

The Turin Principles consist of the following 10 principles:

  1. Human control: AI systems should be designed and operated in a way that ensures human control over the technology and its decisions.
  2. Transparency: AI systems should be transparent and explainable, so that their functioning and decision-making processes can be understood by humans.
  3. Responsibility: Those who design, develop, and operate AI systems should be held accountable for their functioning and impacts.
  4. Human values: AI systems should be designed and used in a way that is consistent with human values, including dignity, rights, freedoms, and cultural diversity.
  5. Fairness and non-discrimination: AI systems should not discriminate against individuals or groups, and should ensure that everyone is treated fairly and without bias.
  6. Privacy: AI systems should respect the privacy of individuals, and the protection of personal data.
  7. Environmental and social responsibility: AI systems should be developed and used in a way that is environmentally sustainable and socially responsible.
  8. Quality and safety: AI systems should be of high quality and safe, and should be designed to minimize harm and risks to individuals and society.
  9. Capacity building: There should be investment in capacity building for individuals and organizations to understand, develop, and use AI in a responsible and ethical manner.
  10. Cooperation: The development and use of AI should be based on international cooperation, and the sharing of knowledge, expertise, and best practices.

PS: Note that both the above sets of principles include the “Principle of Accountability”, which we have indicated as the first requirement of our own set of principles.

Additionally, there have been other initiatives such as the “Montreal Declaration for a Responsible Development of AI”, the “Partnership on AI”, the “IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems”, the “AI Now Institute’s AI Principles”, etc. We shall discuss these principles independently in follow-up articles.

The ethical guidelines suggested include “Protection of Privacy”, which means that processing of personal data must be done in accordance with the known principles of privacy. For the processing to be legal, any automated processing should also be subject to the restrictions of law under GDPR, CCPA, ITA 2000 or other similar laws.

One of the areas in which disputes have arisen and been settled through the judicial process is the exercise of the “Right to be Forgotten”, where search engines have often been mandated by law to remove references to personal identity from certain publicly available information.

Apart from this, there is an issue in the learning process embedded in self-learning AI algorithms, which keep collecting and processing information over time and learn from each new input.

An ethical question arises here: should there be rules built into the use of learning inputs that are dated? Humans have an inbuilt mechanism to forget, without which we would be burdened with all the bad memories of life. Machines do not forget, and hence if the decisions of an AI are based on information from the distant past, the outcome may not be correct. Even humans change over a period of time: a person who was bad during his teens may become good as an adult and a saint when he is older. The reverse is also possible, where a good person turns bad over time.

If AI has to maintain quality, then it should also be trained to understand what is relevant, what is less relevant and what is not relevant before arriving at a final decision. Hence some form of weightage based on the time of the learning event needs to be part of the ML process, as sketched below.
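One way such weightage could be implemented is exponential time decay, where a training example’s influence halves after a chosen “half-life”. A minimal sketch, assuming timestamped training examples and an arbitrary one-year half-life (the decay scheme and the parameter are illustrative assumptions, not a prescribed method):

```python
import numpy as np

def recency_weights(ages_in_days: np.ndarray, half_life_days: float = 365.0) -> np.ndarray:
    """Exponential decay: an example's weight halves every `half_life_days`."""
    return 0.5 ** (ages_in_days / half_life_days)

# Example: observations that are 0, 1 and 5 years old.
ages = np.array([0.0, 365.0, 5 * 365.0])
weights = recency_weights(ages)  # -> [1.0, 0.5, 0.03125]

# Many estimators accept per-sample weights at training time, e.g. in
# scikit-learn style APIs: model.fit(X, y, sample_weight=weights)
```

Under such a scheme, a decade-old learning event still exists in the data but contributes almost nothing to the final decision, approximating the human ability to let stale memories fade.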

The “ability to forget” should be a quality that a good AI develops, and hence it has to be one of the ethical principles added to the developing set of “Naavi’s Ethical Principles of Artificial Intelligence” (NEPAI).

We shall continue our study of all the sets of principles presently available and arrive at our own version in due course.

I welcome contributions from others in developing this set of principles.

Naavi

OPEN FOR DISCUSSION
