Admissibility of AI Generated Evidence in Law

Gaurav Sahay, Practice Head (Tech. & Gen. Corp.), Fox Mandal & Associates LLP

With the rise of Artificial Intelligence (AI) and Machine Learning (ML), machines are now capable of performing tasks traditionally associated with human intelligence and intervention. AI operates through intricate algorithms, which can inadvertently magnify existing prejudices and biases at scale. Deep learning, a subset of ML utilizing neural networks with three or more layers, aims to mimic the cognitive abilities of the human brain to enhance machine performance. However, this technology, exemplified in applications like deepfakes, can introduce biases where guard rails are poorly defined, testing lacks diversity, and due diligence to identify potential biases during development is inadequate. Such biases can persist and propagate through automated AI systems, exacerbated by flawed data inputs. 
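To make the "three or more layers" definition concrete, the following is a minimal sketch (with invented weights, purely for illustration) of a deep network's forward pass: each layer computes weighted sums of its inputs and applies a nonlinearity, and stacking three such layers is the conventional threshold at which a network is called "deep".

```python
def relu(v):
    # Nonlinearity applied after each layer.
    return [max(0.0, x) for x in v]

def layer(inputs, weights):
    # One fully connected layer: each row of weights produces one output unit.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def forward(x, layers):
    # Pass the input through every layer in sequence.
    for w in layers:
        x = relu(layer(x, w))
    return x

# Three stacked layers of invented weights -- a "deep" network in miniature.
layers = [
    [[0.5, -0.2], [0.1, 0.8]],   # layer 1: 2 inputs -> 2 units
    [[0.3, 0.7], [-0.6, 0.4]],   # layer 2: 2 units  -> 2 units
    [[1.0, -1.0]],               # layer 3: 2 units  -> 1 output
]
print(forward([1.0, 2.0], layers))  # approximately [0.6]
```

Real deep learning systems train these weights on data; it is precisely that data dependence which lets flawed inputs embed bias into the model.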

 

Inadequate diversity in testing and a failure to identify potential biases allow such biases to enter AI systems, where they are then automated and propagated. Bias may also arise where an algorithm relies on flawed information while delivering solutions or answers. For instance, in India, Aadhaar, a unique 12-digit identification number based on biometric and demographic data, may soon underpin various AI applications. Algorithmic biases in such systems could potentially infringe upon the fundamental rights of Indian citizens. 

 

Deepfake technology, similarly, represents a significant misuse of AI and machine learning capabilities. It involves the collection and analysis of diverse datasets to predict and generate synthetic content. Deep learning algorithms utilize extensive data to seamlessly superimpose one person's facial features and expressions onto another's face. Advancements in this domain have made it increasingly difficult to distinguish authentic images and videos from manipulated ones. Deepfakes pose various risks, such as their potential use as misleading electronic evidence in legal proceedings, dissemination during critical electoral periods to influence outcomes, perpetration of financial fraud, and incitement of violence against targeted groups.

 

The increasing use of AI in generating deepfakes has raised serious concerns about privacy violations. Under the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, consent was mandatory only for processing sensitive personal data or information. This meant that AI systems could process non-sensitive personal information without explicit consent or a specific legal basis. The Digital Personal Data Protection Act, 2023 addresses this issue by requiring consent for processing all categories of personal data, without exception. Going forward, AI systems should ideally process only personal data for which explicit consent has been obtained, including data forming part of the datasets used to train these systems. However, the admission of evidence in 'any other electronic form' under the Bharatiya Sakshya Adhiniyam, 2023 raises questions about the reliability of such evidence, particularly whether appropriate consents were obtained during its derivation and submission, which may involve AI technology.

 

The admissibility of electronic evidence in courts thus poses a potential challenge for any use of AI, particularly in distinguishing between authentic and manipulated images and videos. To address this difficulty, the recent enactment of the Bharatiya Sakshya Adhiniyam, 2023 marks a significant improvement over the previous legislation, the Indian Evidence Act, 1872. The new law classifies electronic records as primary evidence and brings within its purview various electronic formats such as semiconductor memory, communication devices and ‘any other electronic form’. It is contemplated that ‘any other electronic form’ would enable the admission of data from smart devices, sensors and emerging technologies, including AI.

 

The Bharatiya Sakshya Adhiniyam, 2023 (BSA) has made significant amendments by broadening the definition of "evidence" under Section 2(1)(e) to include "information given electronically". This may encompass other types of digital data, including AI-generated information, as evidence in court. For establishing the legitimacy of electronic records, the BSA retains the certificate system under Section 65B(4) of the Indian Evidence Act, 1872, which was inserted by the Information Technology Act, 2000 (ITA). While this approach is justifiable to some extent, it may not withstand the difficulties posed by AI-generated evidence, including but not limited to deepfakes and the potential biases inherent in some AI algorithms and tools.

 

The United States has witnessed an increasing acceptance of AI-assisted investigations. India at present lacks judicial precedents involving AI and investigations, which raises concerns regarding trustworthiness and legal assessment when evaluating such confessions. Most AI tools are “black boxes”: their internal decision-making is opaque, so biases and errors in the underlying algorithm cannot readily be detected. Due to this lack of transparency, it becomes nearly impossible to ascertain the accuracy of AI-derived evidence or information, such as confessions modified by an AI tool, or any incriminating material recorded electronically. As already mentioned, AI may carry inherent biases: an AI tool or system trained on a dataset containing a disproportionate number of convictions for a specific race or demographic category may be more likely to identify suspects from that group. Such biases might taint AI-generated confessions and result in false convictions.
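The mechanism by which skewed training data becomes a skewed tool can be shown with a deliberately naive sketch. All data below are invented; the "model" simply learns historical conviction rates per group, which is enough to reproduce the problem described above: the over-represented group gets flagged, the other does not.

```python
from collections import Counter

# Invented historical records: (group, convicted). Group "A" has been
# disproportionately convicted in the past; group "B" has not.
records = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def fit_base_rates(data):
    # The "training" step: estimate P(convicted | group) by counting.
    totals, hits = Counter(), Counter()
    for group, convicted in data:
        totals[group] += 1
        if convicted:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

model = fit_base_rates(records)

def flag_as_suspect(group, threshold=0.5):
    # The "deployment" step: flag anyone whose group's historical
    # rate exceeds the threshold, regardless of individual facts.
    return model[group] > threshold

print(model)                 # {'A': 0.8, 'B': 0.2}
print(flag_as_suspect("A"))  # True  -- bias in the data, automated
print(flag_as_suspect("B"))  # False
```

Real investigative tools are far more sophisticated, but the underlying risk is the same: a model fitted to biased records reproduces that bias at the point of decision, and its opacity makes the defect hard to surface in court.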

 

To substantiate evidence and investigations, an expert’s opinion would ideally be sought to shed light on the operation and limitations of the AI system or tool employed. Further, a pre-trial procedure would help in assessing the reliability and impartiality of the AI tool or system being used. If ‘any other electronic form’ of evidence, including AI-generated material, is to be admissible in courts, the Bharatiya Sakshya Adhiniyam, 2023 should in due course be amended to emphasize transparency and explainability in AI-generated evidence. This may include vetting AI algorithms to decipher how the AI tool or system arrived at its results. Guidelines like the Daubert standard in the United States, which includes testing and peer review, could be extrapolated to the admissibility of AI-generated evidence. In India, courts may be hesitant to admit AI evidence in the absence of demonstrated transparency and accuracy regarding its reliability and impartiality. Unlike the United States, which has substantial research on certain commercially accessible AI systems and tools, India lacks comparable resources for its own AI systems. 

 

One potential solution involves establishing a dedicated organization or commission tasked with evaluating the reliability and fairness of AI tools used in law enforcement. Such a body could certify AI systems and tools for deployment, recognizing the ongoing advancements in AI technology. While the introduction of the Bharatiya Sakshya Adhiniyam, 2023 and various amendments represents significant progress, concerns persist regarding the transparency and reliability of outputs generated by AI. Addressing these concerns is crucial to uphold not only the principle of justice but also public trust in an era of rapid technological advancement.

 

In conclusion, the legal principle that justice must not only be done but must also be seen to be done is all the more pressing in this age of rapid technological advancement. The proposed Digital India Act must address specific provisions regulating AI, while at the same time ensuring that the growth of AI is not restricted.

 

The Journey Into Industry

Gaurav specializes in Technology; General Corporate & Commercial; Employment; and M&A, Joint Ventures & Private Equity practices.

He advises domestic and multinational clients on areas of corporate and commercial laws across industries that include GIS, downstream supply chain management, collaborative sales, marketing, cloud transactions, data privacy, advertising, distribution, and supply chain solutions.

With a strong focus on technology transactions, he has represented and advised clients in major agreements, and his work includes technical and service level agreements, intermediary logistics, technology & IP licensing, annual maintenance, and commercial arrangement agreements. He also advises on contract management, risk management, cloud transactions, data privacy, compliance, advertising law and real estate, and consults on employment law issues.

Gaurav lives in Bengaluru, is a passionate cricketer and golfer, a guitarist and a pianist, and loves painting and reading.