“Personal Digital Age” needs to be given legal recognition

The Information Technology Act 2000 (ITA 2000) defined an electronic document as a “binary expression” and gave legal recognition to the “Document in Electronic Form”, making such documents equivalent to paper documents. Simultaneously, the “Digital Signature” and the “Electronic Signature” were defined and given legal recognition. Even “Computer”, “Computer Network” etc. were given legal definitions as devices that store and process binary expressions.

However, when it came to defining “Digital Contracts”, though the electronic forms of offer and acceptance were defined, the basic definitions of “when does an agreement become a contract”, “what is contractual capacity” etc. were adopted from the Indian Contract Act.

Hence any contract entered into in the digital space by a person with a physical age of less than 18 years (21 years if a legal guardian has been appointed earlier) was a voidable contract. The law gave no thought to the impossibility of determining the age of a person in a digital communication.

Though ITA 2000 made the “Digital/Electronic Signature” mandatory, so that a person without a valid digital certificate issued by a registered Certifying Authority cannot enter into a valid digital contract equivalent to a signed contract, a “Deemed Contract” is still possible and is widely used in all online “Click-Wrap” agreements.

When the Data Protection Act was introduced, the concept of “Consent” was also based on the definition of contract under the Contract Act and hence could not escape the need for a valid signature.

When “Nomination” was suggested in PDPB 2019 and DPDPB 2022, we therefore flagged the jurisprudential point that, since a “Will” is not recognized under ITA 2000 and a “Contract” is automatically extinguished on the death of a person, a paper-written will is required for nominating the transfer of a digital asset on the death of a person.

However, switching the consent from the parent to the erstwhile minor without a discontinuity of service, at the stroke of the minor attaining the age of 18 years, remained a problem.

Further, one could argue that if a person is born at, say, 09.22 hours on 20th February 2004, then he would attain majority at 09.22 hrs on 20th February 2022 and not at the stroke of midnight on 20th February 2022.
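As a minimal sketch (assuming, purely for illustration, that majority is computed from the exact moment of birth rather than the calendar date), the gap between the two interpretations can be shown with simple date arithmetic:

```python
from datetime import datetime

# Illustration of the argument above: a person born at 09:22 hrs on
# 20 February 2004 attains majority at 09:22 hrs on 20 February 2022
# if the exact birth time, and not the calendar date, is what counts.
birth = datetime(2004, 2, 20, 9, 22)
majority = birth.replace(year=birth.year + 18)    # 18 years after the birth moment

stroke_of_midnight = datetime(2022, 2, 20, 0, 0)  # the conventional interpretation
print(majority)                       # 2022-02-20 09:22:00
print(majority - stroke_of_midnight)  # 9:22:00 — the window where the two views differ
```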

Additionally, the problem of age verification becomes a challenge for every contractual transaction on the Internet. In the early days of the Internet, the industry used to ask for credit card ownership as an indicator of adulthood. But with the current risks of credit card frauds, it is not possible to ask for credit card numbers or a nominal debit of Rs 1 etc. to verify the age of a person.

In 2005, I had suggested the issue of an “Adult Pass” for pornographic websites (refer to this article: What is an adult pass?). Recently it was pointed out that France is likely to adopt this thought and issue “Pornographic Passports”. Such documents provide an assurance of age which can be used for allowing entry to adult websites.

In India we already have the “Aadhaar” system, which can also be used as an authentication of age. UIDAI can issue, on request, an “Adult” certificate. Since minors also carry an Aadhaar ID along with the parental Aadhaar ID, we have a ready infrastructure for UIDAI to confirm whether a person is a minor or a major and, if he or she is a minor, who should be considered a “valid parent” to provide consent. We can leave it to UIDAI to sort out special cases of single parents, divorced parents, surrogate parents etc., so that the digital service provider can simply accept the UIDAI certificate of “Majority” or “Minority” as a valid document for its services.
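The suggested UIDAI service could work roughly as sketched below. This is a purely hypothetical model — no such API exists today, and all names here (`AgeAttestation`, `consent_giver` etc.) are illustrative only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeAttestation:
    """Hypothetical UIDAI attestation: confirms majority/minority without
    revealing the date of birth or the Aadhaar number itself."""
    reference_id: str                # opaque reference, not the Aadhaar number
    status: str                      # "major" or "minor"
    valid_parent_ref: Optional[str]  # set by UIDAI only when status == "minor"

def consent_giver(att: AgeAttestation) -> str:
    """The service provider simply trusts the attestation: a major consents
    for himself; for a minor, consent comes from the valid parent."""
    if att.status == "major":
        return att.reference_id
    return att.valid_parent_ref

print(consent_giver(AgeAttestation("ref-001", "major", None)))       # ref-001
print(consent_giver(AgeAttestation("ref-002", "minor", "ref-003")))  # ref-003
```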

This service can be introduced by UIDAI immediately and I urge them to do so.

Personal Digital Age

This apart, I am today raising another fundamental philosophical issue: what should be considered the right age at which a person digitally transforms from a minor to a major for his Internet/digital activities?

Today we debate whether, under privacy laws, we should recognize “consent giving powers” for a person of 16 years or 13 years of age etc., arguing that a person of that age has the necessary maturity.

But it is difficult to link any physical age to digital maturity. If we accept that today’s younger generation is more computer savvy and hence should attain “Digital Maturity” earlier than 18 years, the same argument can make a senior citizen a person below the “Digital Maturity Age”.

Hence determining the “Digital Maturity Age” or “Personal Digital Minority” on the basis of physical age is completely unacceptable since it would be like comparing apples to mangoes.

We therefore need to find a way to determine the “Digital Personal Age” of a person and incorporate it into the legal definitions.

For example, I took my first birth on the Internet when I got my first e-mail ID from vsnl.com, namely naavi@vsnl.com. This happened some time in 1994-95, and I need to dig into the old records to find the date when I registered my VSNL account. Since then, I have used different email accounts on yahoo.com etc. and now gmail.com and other email services.

Though I might have started using computers earlier than the day I obtained the e-mail ID, I consider that I was digitally born on the Internet with a unique ID when my first e-mail account was created. Can this be considered a basis of “Digital Age”?

This means that every person can get a certificate of the first creation of an email account from any service provider, which can be confirmed by some authority, and that can be taken as the digital birth certificate.

Once this principle is established, it becomes easy to decide whether digital maturity should be considered attained after 5 years, 10 years or any other period.
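On this basis, computing a person's “digital age” becomes simple date arithmetic. In the sketch below, the digital birth date and the ten-year maturity period are assumptions chosen only for illustration:

```python
from datetime import date

DIGITAL_MATURITY_YEARS = 10  # hypothetical period a regulator might choose

def digital_age(digital_birth: date, on: date) -> int:
    """Completed years since the first e-mail account was created."""
    years = on.year - digital_birth.year
    if (on.month, on.day) < (digital_birth.month, digital_birth.day):
        years -= 1  # the digital 'birthday' has not yet occurred this year
    return years

def is_digital_adult(digital_birth: date, on: date) -> bool:
    return digital_age(digital_birth, on) >= DIGITAL_MATURITY_YEARS

# e.g. a first e-mail ID registered on 1 June 1995
print(digital_age(date(1995, 6, 1), date(2023, 2, 1)))        # 27
print(is_digital_adult(date(1995, 6, 1), date(2005, 5, 31)))  # False (one day short)
print(is_digital_adult(date(1995, 6, 1), date(2005, 6, 1)))   # True
```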

I am therefore placing two suggestions through this article:

  1. UIDAI to introduce a physical age certificate with parental tag for minors
  2. E-Mail service providers to introduce the first date of creation as a certificate on request subject to declaration of details such as name etc.

This system would be far better than simply using a self-declaration of “I am above 18 years of age” in online contract-creating documents, or making special arrangements for verifying the age through supplementary enquiry etc.

I am sure that very soon we will have reasonably acceptable AI algorithms that can assess digital maturity. If a regulatory authority accredits different AI algorithms, as it accredits the digital certificate issuing authorities, then an AI algorithm could assess the “Digital Maturity Age”, much as psychometric tests identify mental age and IQ, and the assessment could be used for the issue of “Digital Adulthood”.

I am aware that these are first thoughts, floated for other professionals and regulators to consider. I hope that, unlike my other thoughts (e.g. the Adult Pass) which took 15 to 20 years to get accepted by the community, the above suggestions will have a much shorter gestation period.

Your views are welcome.

Naavi

P.S: It is interesting to note that some organizations (e.g. Infosys) have developed a “Digital Quotient” through which they try to objectively represent the maturity of their employees in the employment scenario. This parameter is used to determine the “half-life” of the employees and how quickly their knowledge becomes obsolete. In a way, this is a model of depreciation of the financial value of human capital.

The Digital maturity from the Internet contracting perspective can use some of the parameters that have been identified for determining the Digital Quotient of employees.

There is also the “Digital Maturity of an organization”, which is built into quality systems and is a different concept.

What we are now discussing is the “personal Internet maturity” for the purpose of determining the right digital cut-off for distinguishing digital minors from digital majors.

Our concept is based simply on the efflux of time after a person was digitally born and is not linked to an evaluation of his skill sets, as is done in the Digital Quotient of employees or the Digital Maturity of an organization. Just as every adult above 18 years is not equally intelligent, even under our concept of a “Personal Digital Adult” there may be differences in the intelligence levels of different persons. But if our physical society, including adult franchise, is based simply on the efflux of time, perhaps the same basis becomes relevant in the digital society. If we were prepared to model our adult franchise on the basis of qualifications, then there would be an argument for determining digital adulthood on the basis of the skills of a Netizen. Skills are relevant for employment but may not be for online contracts for e-commerce or privacy permissions.

Probably we need to discuss this more in the coming days.

Naavi

[COMMENTS ARE WELCOME]

 

Posted in Cyber Law | Leave a comment

Is AI regulation built into DPDPB 2022?

Jurisprudence is the interpretation of law by experts. One narrow view of “Jurisprudence” is that it is restricted to the views of a Court like the Supreme Court, which are considered binding on the lower Courts. But this view is too narrow and needs to be broadened.

A larger view of Jurisprudence is that it is a scientific study of law, involving not only the history and philosophy of law but also the views and opinions of the Judiciary as well as of subject matter experts.

Interpretation of statutory texts is also “Jurisprudence”.

It may take time for the Indian legal community to come out of its shell and adopt this open view that Jurisprudence can originate from outside the Courts.

The last 22 years of Naavi.org indicate that a large part of the Cyber Jurisprudential principles in India originated here, and Courts took their own time in accepting these views.

One classic example which should go into the study of law in India is the interpretation of Section 65B of the Indian Evidence Act, which was first made from the school of Naavi and was used and adopted in the Suhas Katti case in 2004. In 2005 the Supreme Court took a different view, and only in 2012 did the Supreme Court adopt Naavi’s thought process on the mandatory nature of Section 65B. The logic for the intervention of a human witness to convert digital evidence into evidence admissible in a Court has been explained by Naavi in many professional circles, and despite some disagreements here and there, arising out of the difficulty of unlearning the age-old concepts of “Primary” and “Secondary” evidence and the inability to switch to interpretations based on “Digital Documents”, these circles are gradually adopting the views of Naavi. This is an example of how “Jurisprudence” can develop outside the Judiciary and get assimilated into the system.

Naavi.org and Naavi have been propounding several new thoughts, such as the Theory of Data and the “Privacy Protection Law” as an extension of ITA 2000, and in due course these are likely to be tested in a Court of Law and, hopefully, adopted by the Judiciary.

One such jurisprudential thought arising out of the Digital Personal Data Protection Bill 2022 (DPDPB 2022), which is in the form of a draft before the Parliament, is the link between this draft and the discussion on Artificial Intelligence and Neuro Rights regulation.

In our earlier article we had discussed how ITA 2000 can be extended to AI regulation through a proper interpretation of Section 11 of ITA 2000.

Now let us see how we can consider DPDPB 2022 as extending to AI regulation.

The definition of “automated” under DPDPB 2022 states:

(1) “automated” means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data; 

This definition can be extended to all forms of AI, including ANI and AGI. Coupled with Section 11 of ITA 2000, accountability for AI will rest with the “person who caused the automated system to behave in a given manner, either with specific instructions or otherwise through a self-learning machine learning process”.

What can now be added to the body of law is the “Ethics”, in the form of rules and notifications. The notifications under DPDPB 2022 can include the Code of Ethics that the AI industry is required to follow and make it part of the current regulation under ITA 2000 and DPDPB 2022 (which could become DPDPA 2023 when passed).

With this interpretation, AI will be subject to all the regulations, including:

a) Informed Consent

b) Purpose oriented consent

c) Minimal collection and retention

d) Rights of information, accuracy, withdrawal and grievance redressal etc.

Further, the regulator under DPDPB 2022, namely the Data Protection Board, becomes the regulator for AI-related ethical violations. Penalties under DPDPB 2022 will also apply to AI-related violations. The exemptions and deemed consent provisions will apply as stated in the DPDPB 2022.

Further, the provisions of “Significant Data Fiduciaries”, DPIA, DPO appointment, Data Auditor appointment etc will also apply to AI companies.

It is time for us to also look at PDPSI once again and see if any minor modifications are required to be indicated in the DTS calculation.

Overall we are ready to get into the AI regulatory world with DPDPB 2022.

Naavi


Governance by Data is the new Corporate Mantra for the next decade

The world of Business Management has undergone a substantial change in the last decade with the advent of Information and Communication Technology (ICT). The impact of ICT was first felt in establishing an effective communication channel with the customers and business associates of an organization through the use of the Internet, e-mail, mobiles, messaging services etc. In the second generation of the use of ICT in business we saw the development of E-Commerce, where both purchases and sales were effectively handled online. Along with these, customer service and HR functions also started using online technologies. Some of the industries which really bloomed with the growing use of the Internet in the broadband era were education, online consultancy etc.

The next generation of business development with technology happened with the use of data for business decision making. But now we have gone beyond all these developments and started finding new uses of data in business. The future of Business Management is closely integrated with the innovative use of data in business.

Data for Business efficiency is the past. Data for New Business is the future.

Data is today an “Asset” of business and business managers need to find ways of using data not only for decision making and improving operational efficiency but to generate new products and services.

Today’s Business Management strategies are therefore directed at how to use “Data for creating more Revenue”. Revenue can be generated both by saving on current operations (like replacing manpower with better use of ChatGPT?) and by finding new products and services.

Where feasible, 3D printing can enable the development of physical products, including prototype development, customization, spare part production etc. Products can be embedded with smart chips to provide feedback for improvement.

What lies in the future, however, is to find new “Data Products”, produce Data Products, market Data Products, finance Data Products and find the manpower for managing Data Products.

In other words, we are looking at a future of technology-oriented companies where “Data” is the raw material of business and the entire business structure of production, marketing, finance and human resources has to be planned around “Data as a Business Asset”.

Correspondingly, R&D has to be developed to understand the Data Product needs of consumers. This requires conducting market surveys related to consumers’ data consumption and usage habits. This is precisely the point where “Data Protection Laws” create a hurdle for the data business. Data Business Managers therefore need to have a good understanding of the Data Protection Laws and ensure that they are compliant with the law while continuing to explore and harness business opportunities with the use of data.

If, therefore, the EU with GDPR is too restrictive, the choice of business location has to be a place where the Data Protection Law is industry friendly. At the same time, just because land is cheap we cannot put up a factory in a desert; we need to look at other resources and their availability. Similarly, the data-driven business needs to be set up in a location where regulations make it feasible to start and grow the business without unnecessary harassment, but where resources such as manpower, Internet connectivity etc. are also available.

The “Feasibility” analysis has to be therefore conducted with reference to the Product Idea vis a vis the regulatory restrictions along with the availability of other resources.

It is therefore considered that the knowledge of Data Protection and Laws related to Data Protection is an important input for the Business Management Community.

The future of Corporate Governance is “Governance by Data”, and Business Management education needs to incorporate elements of new technology developments such as AI, the Metaverse etc. from a management perspective, along with the relevant regulations.

Privacy activists and Courts should also remember that they cannot always take a stand against business, since this could result in a deceleration of business growth. Law makers need to ensure that while technology has to be regulated, the regulation should ensure that growth occurs in the desired direction.

Naavi


Will Ministry of Consumer Affairs Pre-empt MeitY on AI regulation?

While many are rejoicing in the success of ChatGPT and waiting for Google’s Bard to come up with a more efficient NLP system, there is an underlying fear that the growth of AGI and ASI may soon pass the critical stage and start creating rogue and malicious AI programs.

We can soon expect many variants of ChatGPT to surface, with ChatBots on different websites all proclaiming that they are “AI Powered”.

The Indian Government has taken the first step: the Ministry of Consumer Affairs is mandating that companies who want to project their products or services as “AI Enabled” will be subjected to certain guidelines.

One concern is that the “AI tag” could be used to mislead the public, and hence the Ministry of Consumer Affairs may bring out some “Disclosure Standards” for claiming the “AI empowered” tag.

The accompanying news report suggests that Bureau of Indian Standards is working on standards and will put them in public domain.

Just as Google was caught unprepared by the release of ChatGPT by OpenAI, MeitY has been caught off-guard by the announcement that the Ministry of Consumer Affairs will come out with an AI standard.

In a way, MeitY should be concerned that in an area where they should have taken a lead, another department has started acting before them.

While we need to appreciate the Ministry of Consumer Affairs and BIS for the initiative, it is necessary for MeitY to also join them and work in collaboration to develop a standard which is sound.

The definition of “AI” may be wide, encompassing everything from a simple script that automates some activity to IoT devices and robots working in the deep learning domain, and fixing standards of disclosure for consumer awareness would be tough.

It is possible that there will also be many small-time players providing ChatBots which give incorrect responses. Some may be hacked and taken over by malicious characters who will cheat consumers under the “AI Empowered” certification.

The Ministry of Consumer Affairs will not be able to make a proper assessment of AI activity on its own, since that requires a deep understanding of the technology.

However, one aspect that we have been asking for as the first regulatory principle, namely the “Registration of AI development companies” and the “Code stamping of the Registration ID”, can be achieved through BIS registration.

While the incorporation of other AI ethics may take some time, I advocate that we adapt known laws to cover AI regulation, at least as an immediate measure.

The Suggested Solution

The solution I suggest is to treat AI products as the responsibility of their owners, just as we make parents and guardians responsible for the acts of minors.

The transport department has already made rules that if vehicles are driven by minor children the parents will be fined.

We can adopt the same principle here and introduce penalties for

a) Not registering an AI development (applicable to developers)

b) Not registering the use of AI in products (Which BIS may be thinking now)

c) Making the owner of AI liable for any adverse consequence of an AI algorithm even if they are registered (So that Registration does not become a certification of assurance of the functional quality)

This can be brought in without any new law, simply by notifying an explanation under Section 11 of the Information Technology Act 2000.

This section already states:

Attribution of Electronic Records

An electronic record shall be attributed to the originator

(a) if it was sent by the originator himself;

(b) by a person who had the authority to act on behalf of the originator in respect of that electronic record; or

(c) by an information system programmed by or on behalf of the originator to operate automatically.

This automatically means that the output of an AI is attributed to the owner of the AI program. Hence if the output is faulty, malicious or damaging, the responsibility falls on the owner of the algorithm. Laws such as the IPC can be invoked where necessary.

The owner of the AI algorithm is initially the developer, and subsequently the liability should be transferred to the user, though ownership for other purposes such as licensing or IPR may remain with the developer.

Hence an explanation can be added to this section to mean the following:

Explanation:

Where the information system is programmed by one person and used by another person, the legal liability arising out of the functioning of the AI algorithm shall be borne by the user.

Where the user is the absolute owner of the algorithm the transfer contract shall include disclosure of the functionalities, the default configurations and the code.

Where the user is only a licensee, the license agreement shall disclose the licensor and the default configuration that affects the functional impact on the consumers.

If the developer does not disclose the required information, he shall be considered as liable for the acts of the AI algorithm.
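The proposed explanation can be read as a simple decision rule. The sketch below is only an illustration of that reading; all names (`AITransfer`, `liable_party`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AITransfer:
    """Hypothetical record of an AI algorithm's transfer from developer to user."""
    user_is_absolute_owner: bool  # absolute owner vs. mere licensee
    developer_disclosed: bool     # functionalities, default configuration
                                  # (and the code, where ownership is transferred)

def liable_party(transfer: AITransfer) -> str:
    # Per the proposed explanation: liability rests with the user,
    # unless the developer withheld the required disclosure.
    if not transfer.developer_disclosed:
        return "developer"
    return "user"

print(liable_party(AITransfer(user_is_absolute_owner=True, developer_disclosed=True)))    # user
print(liable_party(AITransfer(user_is_absolute_owner=False, developer_disclosed=False)))  # developer
```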

This suggestion is somewhat similar to the concept of “Informed Consent” in a data protection law, where the Data Controller discloses the details of processing and data processors to the data subject. The requirement here is the reverse of that consent mechanism: the transferor of the license rights provides an “Informed Disclosure”, which the transferee shall further disclose to the consumers.

Since this suggestion does not need any change of law, it can be implemented immediately, even before BIS comes up with its recommendations and before our own AI law, based on the UNESCO recommendations, is formulated.

Naavi

(Comments welcome)


MHA introduces Cyber Crime Reporting Number


Citi Bank Customers in India face a sudden closure of account
