Supreme Court voices concern over growing 'menace' of citing AI-generated non-existent judgements
The High Court had noted in its order that the submissions of the appellant were generated using ChatGPT, including a judgement which had no citation in the real world
Context
The Supreme Court of India has expressed serious concern over the growing misuse of Artificial Intelligence (AI) by lawyers and litigants who cite non-existent, AI-generated case laws in court submissions. This observation came while the Court was hearing a case where the Bombay High Court had identified written submissions prepared using an AI tool, which included a reference to a fake case law. The Supreme Court called this a 'menace' that wastes precious judicial time and affects the integrity of the judicial process, noting that it is already examining the matter on the judicial side.
UPSC Perspectives
Governance & Regulatory Framework
The issue of AI-generated misinformation in courts presents a significant governance challenge, testing the adaptability of legal regulatory bodies. The primary responsibility for setting professional standards for lawyers lies with the Bar Council of India (BCI), established under the Advocates Act, 1961. This incident underscores the urgent need for the BCI to formulate explicit guidelines on the ethical use of AI tools in legal practice. Such regulations would need to balance technological adoption for efficiency with the core duties of professional diligence and honesty to the court. As of early 2026, while the judiciary has started exploring AI's potential through initiatives like the e-Courts Project, formal policies governing its use by lawyers remain in development. The Supreme Court's observation signals a move towards creating a formal framework that may include mandatory disclosure of AI usage and verification of all citations, ensuring that technology serves as an aid rather than an unchecked substitute for professional legal judgment.
Judicial Administration & Efficiency
The judiciary's efficiency, a key component of access to justice, is directly threatened by the submission of fabricated information. The Supreme Court's reference to the 'waste of precious judicial time' highlights how AI 'hallucinations' (a phenomenon where AI generates false information) can clog the already burdened system. This not only delays proceedings but also erodes trust in the adjudicatory process. While the Indian judiciary is adopting technology to enhance efficiency through tools like SUPACE (Supreme Court Portal for Assistance in Court's Efficiency) for legal research and SUVAS (Supreme Court Vidhik Anuvaad Software) for translation, the misuse of public-facing AI tools by lawyers presents a counterproductive trend. The long-term solution may involve developing sophisticated AI-powered verification tools for court registries to automatically flag non-existent citations, thus using technology to counter its own misuse and safeguard the integrity of judicial administration.
Ethics, Technology & Professional Responsibility
This incident brings the principles of legal ethics and professional responsibility into sharp focus in the digital age. A lawyer's primary duty is to the court and the administration of justice, which requires presenting verified and accurate information. Citing a non-existent case, whether intentionally or through negligent reliance on an AI tool, is a breach of this duty. This falls under the broader ethical concept of candor toward the tribunal. The problem of AI hallucinations is a known limitation of generative AI models, and a lawyer's failure to verify the output constitutes a lack of due diligence. This situation may prompt a re-evaluation of legal education and continuing legal education (CLE) programs to include mandatory training on the responsible use of legal technology, emphasizing that AI is a tool to assist, not replace, the fundamental skills of critical thinking and verification.