Classification of AI under the EU AI Act

(Continuation of Previous Article)

Having discussed the definition of AI and the applicability of the EU AI Act in broad terms in the two previous articles, let us continue our discussion on the “Classification of AI” under the EU AI Act, which is important from the point of view of “Risk Assessment”.

For the purpose of compliance, the EU AI Act classifies AI systems as follows:

1. Prohibited Systems (Title II, Article 5)

2. High Risk Systems (Title III, Articles 6 and 7)

3. Limited Risk Systems

4. Minimal Risk Systems

5. General Purpose AI Models (Title VIIIA, Article 52a)

This classification is based on “Risk Assessment”. Prohibited systems are those AI systems which present an “Unacceptable Risk”. In assessing “Risk”, one needs to look at the “Harm” caused to the end users of AI systems, namely the people.
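To make the tiered structure concrete, here is a minimal, illustrative Python sketch that models the five categories as an enumeration mapped to the provisions cited above. The names (RiskTier, obligations) and the one-line summaries are our own simplification for illustration, not text from the Act or any official tooling.

from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers under the EU AI Act (names are ours)."""
    PROHIBITED = "Title II, Article 5"           # unacceptable risk: banned outright
    HIGH_RISK = "Title III, Articles 6 and 7"    # permitted, subject to strict obligations
    LIMITED_RISK = "transparency obligations"    # e.g. disclosing that a user is interacting with AI
    MINIMAL_RISK = "no specific obligations"     # e.g. spam filters, AI in video games
    GPAI = "Title VIIIA, Article 52a"            # General Purpose AI models

def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of the compliance posture for a tier."""
    return {
        RiskTier.PROHIBITED: "Use is banned; only narrow law-enforcement exemptions apply.",
        RiskTier.HIGH_RISK: "Conformity assessment, risk management and registration required.",
        RiskTier.LIMITED_RISK: "Transparency duties only.",
        RiskTier.MINIMAL_RISK: "Voluntary codes of conduct encouraged.",
        RiskTier.GPAI: "Model-level documentation and transparency duties.",
    }[tier]

print(obligations(RiskTier.PROHIBITED))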

As per Article 1.1,

the purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection against harmful effects of artificial intelligence systems in the Union and supporting innovation.

The term “Harm” under the Act therefore includes any adverse effect on the “functioning of the internal market”, besides anything that undermines the promotion of “human centric and trustworthy artificial intelligence”. Anything that affects the protection of health, safety, fundamental rights, the rule of law, environmental protection, etc., will be considered “harm” under the Act.

When we apply AI law to “Personal Data Protection”, we look only at the harm caused to individuals. The EU AI Act, however, appears to expand its scope to the economic environment, and more particularly to the EU geographic space.

This also means that the “Extra Territorial” application of the penalty clauses is limited to the adverse impact that may be caused within the Union. Hence, if an AI system in India does not impact the EU, compliance with the EU AI Act is redundant. By the same token, if organizations consider ISO 42001 certification for AI systems whose footprint is only in India, that too may be considered redundant. What is more relevant is compliance with ITA 2000/DPDPA, which is addressed by a DGPSI audit and not an ISO 42001 audit.

Now we shall explore Article 5, which defines the “Unacceptable Risks” or “Prohibited AI practices”.

According to Article 5.1, the unacceptable risks include:

(1) AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm.

(2) AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective to or the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

(3) use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. (This prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data, or categorising of biometric data in the area of law enforcement.)

(4) AI systems for the evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

(5) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless and in as far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings as well as search for missing persons;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in Annex IIa and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. This paragraph is without prejudice to the provisions in Article 9 of the GDPR for the processing of biometric data for purposes other than law enforcement.

(6) use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person to commit a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. (This prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.)

(7) use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

(8) use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions except in cases where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

It may be observed that the Act provides several exemptions for the use of prohibited systems by law enforcement authorities. Such use may, however, be subject to certain conditions and is required to be reported to the Commission in annual reports.
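As a rough aid for a first-pass screening, the sketch below (again our own illustrative Python construction, not an official tool) turns the eight Article 5.1 categories into a simple checklist: an assessor flags the category numbers that apply to a use case and gets back a rough compliance posture.

# The category descriptions paraphrase the eight Article 5.1 practices above.
PROHIBITED_PRACTICES = {
    1: "subliminal, manipulative or deceptive techniques causing significant harm",
    2: "exploitation of vulnerabilities (age, disability, social/economic situation)",
    3: "biometric categorisation inferring race, beliefs, sex life or orientation",
    4: "social scoring leading to detrimental or disproportionate treatment",
    5: "real-time remote biometric identification in public spaces for law enforcement",
    6: "predicting criminal offences solely from profiling or personality traits",
    7: "untargeted scraping of facial images to build recognition databases",
    8: "emotion inference in workplaces and educational institutions",
}

def screen_use_case(flagged: set) -> str:
    """Return a rough posture given the Article 5.1 category numbers flagged."""
    if flagged:
        hits = "; ".join(PROHIBITED_PRACTICES[n] for n in sorted(flagged))
        return "Prohibited under Article 5.1 (narrow exemptions may apply): " + hits
    return "Not prohibited; assess next against the High Risk criteria (Articles 6 and 7)."

# Example: a proctoring tool that infers students' emotions during examinations
print(screen_use_case({8}))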

Let us continue our discussion on the other classifications of risk-based AI systems in the next article.

Naavi

 
