The India AI Impact Summit has been a great success despite the first-day problem of crowd management and the needless embarrassment caused by one of the exhibitors. It has created a high degree of awareness in the Indian public and also drawn international attention to India’s progress in the field. It will take some time for the current status of AI to be fully understood amidst the “Sarvam AI mayam” euphoria created by the event.
Despite the many reports about the event in the media, there has been little coverage of the “AI Risks”, both to users and to society.
Normally, innovators are not concerned about the impact of a new technology on society. The talk of “Ethics” is simply eyewash. Until “Ethics” is enforced through a law that is sufficiently deterrent, no commercial organization can be expected to recognize “Ethics” beyond the word being repeated in speeches.
It is the responsibility of society to consider whether India has to recognize the AI risks and take regulatory steps to ensure that they do not become a problem for society the way Cyber Crimes have.
AI-driven risks may manifest both as operational risks and as AI-driven Cyber Crimes. Together they will create a larger challenge to society which cannot be ignored.
These are in addition to the debate on whether AI will result in job losses, businesses going bust, AI taking over from humans, etc.
Were there any stalls in the summit on these themes?… Were there panel discussions?… Were there expert talks?… Were there solutions discussed?… We need to explore.
In the meantime, I leave below some instances of AI-related issues in health care which I collected a few days back and which should open our eyes to the operational risks in the use of AI.
- UnitedHealth & Humana “nH Predict” Algorithm (2025):
- AI algorithm used to deny coverage to elderly patients had a 90% error rate on appeal.
- The system, optimized for cost-cutting, disproportionately impacted patients, with human reviewers overturning roughly 9 out of 10 denials on appeal.
- Dermatology AI Bias (2024):
- A study on skin cancer detection AI found that most systems struggled to perform on non-white skin, with significant performance drops in sensitivity for dark-skinned individuals.
- Pulse Oximeters Overestimation (2024):
- A UK review confirmed that pulse oximeters, often aided by AI, tended to overestimate oxygen levels in people with darker skin, leading to potential delays in treatment.
- Epic Sepsis Model (2022/2024):
- A sepsis prediction model widely deployed in hundreds of U.S. hospitals was found to perform far below its advertised accuracy.
- It missed 67% of sepsis cases while triggering excessive false alarms.
- Fake Medical Information (2025):
- Studies showed that AI chatbots, such as GPT-4, failed to gather complete medical histories and sometimes generated incorrect, dangerous diagnoses based on simulated patient conversations.
- ECG Misinterpretation (2025):
- In a 2025 trial, an AI-enabled ECG tool wrongly flagged a heart attack for a healthy 29-year-old woman, illustrating how models can be “statistically confident while still being clinically wrong”.
- NEDA “Tessa” Chatbot (2023):
- The National Eating Disorders Association had to disable its chatbot, Tessa, after it was found to be providing dangerous weight-loss advice and calorie-tracking recommendations to people with eating disorders.
- Data Privacy Violations (DeepMind):
- Google’s DeepMind received criticism after it was revealed that the NHS had provided data on 1.6 million patients to train its “Streams” app without proper patient consent.
- Robotic Surgery Failures (2023):
- AI-powered robotic surgical systems have shown failures in which electrical current escaping from the instrument caused accidental burns to surrounding tissue.
Let us study such incidents and try to find solutions in the form of technology and governance.
We need to start discussing solutions to AI risks and the need for new regulations, including modification of the ITA 2000 and the introduction of the concept of Neuro Rights within the DPDPA.
Naavi