Mumbai High Court Flirting with Truth

The bench of the Mumbai High Court hearing the petition against the recent IT Rules on fake news has been making comments which make good headlines in a newspaper but are irresponsible and may even be termed naive and biased.

On 14th July, newslaundry.com carried the headline “Can’t bring a hammer to kill an ant: Bombay High Court calls IT rules ‘excessive’”.

It was noteworthy that the same headline was used by multiple publications such as NDTV.com, The Hindu, Deccan Herald, The Print etc.

Obviously, all these editors found the words of the Judge to be a “Quotable Quote”. Was it a coincidence that all these editors thought of the same headline, or was it a press release sent out to all these publications by one of the petitioners, or by some organization on his behalf?

This is not the first of the quotes of the bench to have got wide publicity. Earlier reports quoted:

“IT rules Amendment Prima facie lack necessary safeguards to protect Satire”

“No matter how laudable the new IT rules are, if the effect is unconstitutional, they must go”

“Government is not a repository of truth that cannot be questioned”

The Court further went on to comment

“It is difficult that one authority of the Government is given absolute power to decide what is fake, false and misleading….”

“There is an assumption that what the FCU says is undeniably the ultimate truth”.

“No person is claiming a fundamental right to lie”…

“..a person can be anything they want (on the internet) is not necessarily impersonation”.

These are all opinions of the individual judges and are not supported by facts. In a way, the Judges are themselves lying when they make these comments.

The current status of the case is captured in this video

The petitioner, Mr Kunal Kamra, is a political activist who can claim anything in his petition. But it is inappropriate for the Judges of the Bench to make comments as if the Bench has already made its decision even before the trial concludes.

The way the judges are blurting out their views reminds one of the behaviour of the Supreme Court bench which heard the Nupur Sharma case, and indicates that this trial is a farce and the Judges have already made up their minds on the outcome.

The Court by its conduct is misleading the public by making unwarranted comments.

In our opinion, just as I have the right to say that anything published about me online is false, the Government also has the right to say that what is told about it in the digital media is incorrect or false.

The Court cannot take the stand that any false statement can be made about the Government and the Government has to be a mute spectator.

For example, if a publication says that a particular judge is corrupt and has taken a bribe for giving out a decision, does the judge not have the right to give a counter statement in the press, besides launching “Contempt” proceedings?

Similarly, every citizen as well as the Government has the right to counter the truthfulness of a false statement, first by a counter statement; this right is in addition to the right to file a case in a Court of law. What the counter statement does is give the publisher the knowledge that the content is disputed.

If the publisher then decides that the content is fine and is part of free speech, he can very well do nothing about the counter statement. It would be ethical to publish the counter statement in the same publication, but even this is not mandatory.

What the notification provides is the right of the Government to make a public statement that certain information as published is false. Currently this applies only to information about Government departments. There is no compulsion that the information has to be removed forthwith.

Only God knows how the Court can consider this “Right to Self Defence” incorrect and undesirable.

It is also wrong to say that any “Fact Check call” will make the Intermediary vulnerable to punishment. Punishment, if any, will come only if a Court decides in a case that the false information caused wrongful harm to somebody.

The Court is completely wrong to presume that every Fact Check call is an automatic punishment on the Intermediary.

The Court has also asked repeatedly why this rule is for digital media only and not for print media. I hope the Court will remember that the print media works under a different system, where there is a publisher and an editor to take responsibility for the content posted by a reporter. On digital media, the reporter is himself the editor and the publisher, and it needs a different set of rules. There is also a “Press Council” to monitor print media, which is not available for digital media, including YouTube publications.

In order to defend the argument of the political petitioner, the Court has gone to the extent of saying that there is “No Impersonation” if a person presents himself as somebody else on the internet. This is pure and simple “Forgery”, and the Court is defending electronic forgery. This directly contradicts Sections 66C and 66D of ITA 2000/8.

I can only recall the words “Vinasha kale viparita buddhi” for this statement.

The Judge does not seem to know the implications of his statement.

In the video above, one of the advocates for the petitioner has suggested that “Satire” by definition is stating a falsehood, hoodwinking the public into believing the untruth, and later calling it a bluff and laughing it off. However, until somebody challenges it, the falsehood prevails and damages the social fabric.

The Cambridge dictionary only says satire is a way of “criticising people or ideas in a humorous way”. It does not give a license to say something false in digital writing, let it go viral, and then excuse oneself when cornered by claiming it to be “Satire”.

In the advertising industry, there is an ethical way of publishing articles which are paid for; they are considered Advertorials. Such content always contains a note in some corner which says it is an advertisement.

Similarly, if some content is to be considered a “Satire”, it should declare itself to be so either at the beginning or at the end. Nobody should be allowed to state a falsehood, wait for the public to believe it as true, and when challenged, state that “I only wanted this to be a satire”.

This is absolutely unacceptable even if the Mumbai High Court has a counter view point.

The two judges of the bench hearing the case have already done enough damage to their reputation as independent judicial functionaries and should strictly refrain from making any further comments. Their views are known, they themselves are doing better than the advocates of the petitioners in defending the petition, and they should straight away pronounce their judgement.

It is a waste of time and money to carry out such trials which serve no purpose in society. The Government should call out the evident bias that the bench is displaying and demand that a new bench be constituted to carry on the trial.

Naavi

Posted in Cyber Law | Leave a comment

“Set a thief to catch a thief”… In the context of AI in Banks

AI is the buzzword in the tech world right now. Any software developer meeting a client today will first try to project how AI is built into his product to make it more efficient and cost effective. Every industry is falling over the others to adopt AI, and Indian Banks are not far behind in their enthusiasm.

As early as 2016, there was a series of initiatives by different Banks to introduce Chatbots on their websites. SBI, SBI Cards, HDFC Bank, ICICI Bank, Axis Bank, Yes Bank, IndusInd Bank, Kotak and Andhra Bank all released their versions of Chatbots. City Union Bank of Kumbakonam, however, became the first Indian Bank to introduce a robot to answer customer queries at the counter.

Most of the Chatbots were of little practical use and often an irritant to the customer. Even now, many website services depend so much on inefficient chatbots that we often feel like running away from such sites. These chatbots could only respond to specific queries for which answers were already loaded, and any small deviation could render them useless.

However, the advent of ChatGPT and the current generation of humanoid robots have made a difference.

The above are the pictures of two news anchor robots introduced by TV channels (Lisa in Odisha and Soundarya in Karnataka) who look very human and speak like humans.

The Geneva conference, in which several highly intelligent and interactive robots participated, led by Sophia and including Nadine, Desdemona, Ameca, Grace, Ai-Da and three others, showcased the power of AI-driven humanoid robots which can think and act on their own.

It is therefore natural that resource-rich Indian banks will soon bring human-looking robots to their counters, at least one in every branch.

Is this an innovation that we should all welcome, or one that we should all fear? That is a question we should ponder.

Decision makers in Banks need to take a call now on whether they are introducing AI as a marketing gimmick or for functional use.

The use of AI in the front office is imminent and will soon be commonplace in Banks like ICICI Bank and HDFC Bank, which are generally ahead in technology adoption.

Apart from human-looking robots walking around a branch and greeting customers, there are other ways in which AI will be used by the banks. In the middle office, AI applications will be used for automated customer identification and authentication, and for triggering customer-specific responses with personalized insights and recommendations. Fraud prevention will be one of the most important aspects of these middle-office AI applications.

In the back office, many of today's computerised operations will transform to be led by AI. Loan processing, credit rating etc. will become automated and dependent on AI.

While a part of the industry is excited by the possibilities of innovation and the changing face of Banking, a thread of worry is creeping up in certain circles.

Some of the concerns of AI usage are in the area of Privacy. Most Data Protection laws including the forthcoming Indian law DPDPB 2023 will consider automated processing and automated decision making as privacy issues that need explicit consent.

The requirement of “Explicit Consent” is not satisfied by a disclaimer such as “You are under CCTV surveillance” or “You will be serviced by a robot”, since such a disclaimer does not specify whether the CCTV or the robot is processing face recognition to identify the customer and check his background as well as his transaction history.

Getting a valid consent for such use of AI is the first priority for the Banks.

If AI is used in back-end processing for profiling, credit rating, loan eligibility determination or for determining risk-adjusted interest rates, then the need for establishing a legal basis is accentuated by the need for algorithmic transparency and for establishing the absence of decision bias.
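Establishing the absence of decision bias can begin with a very simple statistical check. The sketch below is only an illustration: the group labels, the sample data and the 0.2 threshold are my assumptions, not drawn from any regulation. It computes the demographic parity gap, i.e. the difference in approval rates between customer groups, for a hypothetical loan-approval model:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs; approved is True/False."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    # Approval rate per group, and the spread between best and worst treated groups
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical sample of loan decisions: (group label, approved?)
decisions = [
    ("urban", True), ("urban", True), ("urban", False), ("urban", True),
    ("rural", True), ("rural", False), ("rural", False), ("rural", False),
]
gap, rates = demographic_parity_gap(decisions)
print(rates)       # approval rate per group: urban 0.75, rural 0.25
print(gap > 0.2)   # True: gap exceeds the chosen threshold, flag for human review
```

A real transparency exercise would go much further (feature attribution, documentation of the model's logic), but even a sampled check like this gives the Bank something concrete to show a regulator.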

Over and above this, if there is an incorrect decision by the AI, the liability has to be fixed on the Bank, and the Bank should be able to transfer it to the manufacturer of the AI/robot as an “Intermediary”.

While AI can be used very effectively in fraud prevention, to monitor transactions, identify abnormalities and stop phishing frauds, if the AI is not configured properly or can be bypassed by a hacker, the responsibility of the Bank continues even after it may have turned complacent with the introduction of an AI-led fraud monitoring system.
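As an illustration of the kind of transaction monitoring referred to above, here is a minimal sketch that flags transactions deviating sharply from a customer's history. The amounts and the 3-standard-deviation threshold are illustrative assumptions; a real system would use many more features than the amount alone:

```python
import statistics

def flag_abnormal(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount is more than `threshold` standard
    deviations away from the customer's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No variation in history: any different amount is unusual
        return new_amount != mean
    return abs(new_amount - mean) / stdev > threshold

history = [1200, 950, 1100, 1300, 1050]   # customer's usual transaction amounts
print(flag_abnormal(history, 1150))        # False: within the normal range
print(flag_abnormal(history, 250000))      # True: sudden large transfer, hold for review
```

The point of the sketch is the weakness the paragraph describes: if the threshold is misconfigured, or a fraudster keeps each transfer just inside the normal range, the system passes everything, which is why human and second-level oversight remains necessary.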

Hence there is a need for Banks to have humans monitor the functioning of the AI fraud monitoring system, or any other AI system. However, since humans surrender their work to AI precisely because they cannot handle the volume and complexity, even the monitoring needs technology assistance.

I therefore foresee a need for second-level AI tools that monitor the first-level AI tools, to identify anomalous behaviour of the AI tool itself and any exceptions it could have allowed, like setting a thief to catch a thief.

This system would be like “Officer level AI tools” monitoring “Clerical level AI tools”, introducing the dual-check rule into the AI Banking world.

As an example, the CCTV in a Bank's premises can watch the walking humanoid robot's behaviour and identify whether it is doing its duty properly and not ignoring a senior citizen who needs help, flirting with a handsome customer, or showing rogue behaviour like “Sydney”, the Bing chat assistant.

In the context of decision-making AI algorithms, the checking algorithms may review a sample of decisions, apply fraud detection patterns, and identify whether everything was above board.
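The sample-review idea can be sketched in a few lines. In this hypothetical illustration (the independent rule, the amounts and the 10% escalation threshold are all my assumptions), an “officer level” checker re-scores a random sample of the first-level tool's decisions with a deliberately independent rule and reports the disagreement rate:

```python
import random

def independent_rule(txn):
    # A deliberately simple second opinion, independent of the first-level model:
    # block very large transfers outright.
    return "block" if txn["amount"] > 100000 else "allow"

def audit_sample(decisions, sample_size, seed=42):
    """decisions: list of {"amount": ..., "first_level": "allow"/"block"}.
    Returns the disagreement rate over a random sample, plus the disputed cases."""
    rng = random.Random(seed)
    sample = rng.sample(decisions, min(sample_size, len(decisions)))
    disagreements = [d for d in sample
                     if independent_rule(d) != d["first_level"]]
    return len(disagreements) / len(sample), disagreements

# Suppose the first-level tool allowed everything, including two large transfers
decisions = [{"amount": a, "first_level": "allow"}
             for a in (500, 900, 150000, 700, 200000)]
rate, flagged = audit_sample(decisions, sample_size=5)
print(rate)          # 0.4: the checker disputes 2 of the 5 sampled decisions
print(rate > 0.10)   # True: above the escalation threshold, refer to humans
```

The dual-check value comes from the second rule being independent of the first model, so a systematic blind spot in one is unlikely to be shared by the other.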

I hope that Banks, which are trusted repositories of public money, will first think of such controls before falling for the marketing glib talk of technology suppliers and introducing inefficient AI. If that happens, it would be a great risk.

With several Indian Banks' systems having been declared “Protected Systems” under Section 70, the use of AI has to be vetted and approved by CERT-In.

I therefore advise caution to Indian Banks before going ga-ga about introducing AI in their activities.

Naavi


India needs a new approach in Privacy Implementation

It is a standard practice in the Data Protection domain that an “Auditor” and an “Implementation Consultant” have different roles in establishing Privacy and Data Protection compliance.

However, this traditional approach imposes a relatively larger responsibility on an organization to understand and interpret the emerging requirements and take steps for their implementation. It is relatively easy for the auditor to step in, find faults, give impractical suggestions and exit. The company in most cases undertakes the audit exercise as a necessary formality, reverts to its usual ways of functioning, and gets back into audit mode once a year whenever the audits happen.

Naavi would like to change this “I am not responsible for designing and implementation but responsible only for audit” approach.

We do understand that there could be a need for such an approach to avoid a “Conflict” between the consultancy and audit responsibilities. But this conflict can arise even in the traditional system because of the influence a reputed auditor can bring on a consultancy firm. In many cases consultancy firms work in tandem with the auditing body and the difference between the two is only on paper. The auditor calls all the shots and the consultant falls in line.

Even in such cases, as long as the auditor is true to the objective of the implementation (e.g., Privacy Protection), there is no need to assume a conflict of interest in which the objective would be compromised. But the practice and belief that the two roles should be kept separate continues to prevail, and is sustained despite its inefficiencies.

Hence, keeping the Auditor and the Consultant away from each other is considered artificial, and if there is a way of combining the consultancy and audit functions, it is not necessarily undesirable.

In India, the DPDPB may expect the DPO to be an internal employee, whereas GDPR allows for an external DPO. Even if the Indian law does not allow an external DPO, it could allow an external Data Protection Consultant to assist the DPO. Further, the role of the DPO is more aligned with the duty to protect the interests of data subjects, and unless an organization has a separate Privacy Officer, there is an inherent conflict between the DPO's duty to protect Data Principals and his advisory responsibilities within the organization.

Naavi has therefore proposed the adoption of a new “Partner in Progress” approach to consulting and audit of Data Protection programs in India, which will be used experimentally by FDPPI through its “Supporting Member” network of consultants.

A brief description of this new approach is provided here.

The essence of this program is that the organization will use the services of FDPPI for designing, implementing and monitoring, with periodical review. In a way, the entire PDCA cycle is managed by the FDPPI team, which will consist of a chosen set of professionals from the supporting member group.

The engagement would be on a retainer basis with additional services sourced either from within the supporting member network or outside and billed as necessary. The team would design and implement the system on a best effort basis.

The system of an external Data Auditor, which is inherent in the Indian law, will ensure that the work of the FDPPI consultancy team is reviewed by an external auditor; this should satisfy the purists who fear a conflict.

The expectation is that after the system is stabilized, the FDPPI team can exit and hand over the maintenance to an internal Privacy and Data Protection management team.

This arrangement is considered ideal when an organization is going through a Digital Transformation and implementing a switchover from the current privacy and Data Protection regime under ITA 2000 to the DPDPB regime.

Disruption of the current system of Auditing is necessary and desirable and I urge FDPPI to be the instrument of such disruption.

Naavi


Transparency starts with your Identity Disclosure

We are today seeing tech companies entering the field of services related to “Compliance with Regulatory requirements”. Many of them proclaim that they use AI and ML for various requirements. Some of them provide KYC services to organizations. However, these service providers are themselves not compliant with Indian laws. The users of these services, who are the “Data Controllers”, use the services of these “Data Processors” without fully understanding their responsibilities.

I draw the attention of the readers to this article in ET “How RegTech can be a game-changer for the FinTech industry?”

The moot question is who regulates these RegTech companies? It appears that RBI is yet to decide on how to regulate these RegTech Companies.

Many of these companies are themselves non-compliant with the various regulations they should be compliant with.

I was checking on one such start-up recently and found that even sending an e-mail to the company to get more information involved many hurdles and many privacy infringements.

The company was not even transparent about its own identity on its website, let alone on the client’s website where their services were pushed to customers.

It is high time RBI takes a serious view of such companies and introduces a proper accreditation system, since these organizations are actually substituting the regulatory processes managed, or to be managed, by RBI and other regulatory agencies.

I came across an RBI notification which is a master direction to licensed entities regarding outsourcing responsibilities. This, however, does not cover the regulation of RegTech companies which provide services to companies that are not RBI-supervised entities. There is a need for a separate regulation of these RegTech companies, apart from the FinTech companies engaged in financial services such as lending.

In a keynote address delivered on July 7, 2023 at Bangalore, RBI Deputy Governor T Rabi Sankar stated that there is a need to establish a meaningful dialogue on the regulation of FinTech. There was no mention of RBI's earlier attempts at FinTech regulation (please refer to this earlier article at naavi.org on the FinTech Steering Committee Report). There is a need for a greater thrust on FinTech regulation, and it appears that RBI has been overwhelmed by the technology and is failing in its duty to put an effective regulation in place.

However, the point of “Transparency through Identity Disclosure” is an area of non-compliance for many Indian companies, and it is the failure of CERT-In to impose its authority that has resulted in a situation where unknown and hidden companies provide critical services in RegTech/FinTech as well as other areas, without giving consumers the opportunity to raise their grievances.

I request CERT-In to recognize that its responsibility does not end with issuing a notification that a grievance redressal officer needs to be designated by all intermediaries; it must take effective steps to check non-compliance. I hope the DG of CERT-In and the Secretary, MeitY, take effective steps at least now.

Also refer:

Will Fintech Steering Committee report bring changes to PDPA?

RBI’s FinTech Working Group needs to secure Consumer Interests also

Naavi


Press Conference by Humanoid Robots at UN AI for Good 2023 summit

A unique press conference was held at the AI for Good 2023 Global Summit, where a panel of humanoid robots driven by AI, together with their creators, addressed the press.

See details here

The conference needs to be discussed in greater detail. In the meantime, some of the other developments in AI robots are given below.

In the light of the above developments we should see how the robots in the UN conference responded to the questions about potential job loss and threat to human society.

It is clear that the robots are not truthful when they say that they do not create job loss or that they will be free from adverse functioning.

The conference however provides some idea of the regulatory thoughts which should be incorporated into the legal structure of India.

The discussion continues…

Naavi


3 changes proposed in DPDPB

According to a report in the Economic Times today, the following changes have been made to the earlier draft of DPDPB 2022:

  1. The age of consent for minors is reduced from 18 years to 14 years
  2. Government may adopt a negative list of countries to which the restrictions on cross border transfer may apply
  3. The definition of Significant Data Fiduciary may depend on the sensitivity and volume of data handled

We presume that this does not make much difference in the compliance requirement. We shall wait and watch further developments.

Naavi
