An interesting article was found today at futurism.com about how AI is lying with intention. Referring to recent studies, the article highlights that LLMs are deliberately deceiving human observers.
This means that whether AI is sentient or not, technology has successfully created the “Frankenstein” or the “Bhasmasura”. If we do not recognize this threat and take corrective steps, the future of the Internet, Search Engines and ChatGPT will all be in danger of being discarded as untrustworthy.
The report makes an alarming statement: “We found that Meta’s AI had learned to be a master of deception.” Though this was in the context of a game called “Diplomacy”, the feature of “Hallucination” present in LLMs is, in effect, a licence for the AI to cook up things. The same capability could drive the creation of fake news, as we have seen in recent days.
The EU AI Act does not seem good enough to control this risk, since its approach to classifying and flagging risks is inadequate.
In our view, any AI algorithm with a capability to hallucinate, or in other words to alter its behaviour in a manner its human designers did not intend, should be classified as an “Unacceptable Risk”.
When India drafts its own law on AI, it has to ensure that this risk is recognized and countered effectively.
Many AI regulations in the USA focus only on “Bias” in terms of racism. But the bigger threat is the licence given to the AI to hallucinate, which leads not only to racist behaviour but also to the fraudulent behaviour indicated in the studies cited above.
In India, since the owner of an AI algorithm is legally accountable for the impact of its output on society, “Fraud by AI” is “Fraud by the owner of the AI algorithm”.
Hence, all companies that proudly announce that they develop AI software should be ready for the adverse consequences if their software, built on LLMs, defrauds anybody in society.
Naavi