Earlier this month, a bench of the Supreme Court presided over by the Chief Justice of India Surya Kant expressed concern about lawyers submitting petitions drafted with the assistance of AI tools and containing fabricated case citations. The CJI termed this practice alarming and “absolutely uncalled for.” The remark is significant because it highlights not only the perils of AI but also a deeper professional concern. The problem is not the use of AI per se; used safely, AI can reduce routine work and serve as an aid to research. The problem is the uncritical reliance on AI-generated outputs, including case citations that do not exist, a phenomenon commonly described as “hallucination.”
This is not the first time Indian courts have encountered such practices. In November last year, a rejoinder filed before the Supreme Court was found to cite AI-generated judgments that did not exist. Similarly, the Bombay High Court noted in one case that an income-tax assessing officer had relied upon judicial decisions that were later discovered to be fictitious. In Karnataka, a trial court judge reportedly relied on fabricated judgments generated by an AI tool while drafting portions of a judgment. These episodes reveal that the problem is not confined to the bar alone; it can affect any authority that treats AI outputs as authoritative without verification.
However, it would be incorrect to treat this as a uniquely Indian problem. Courts across jurisdictions have been confronting similar challenges. In the United States, the federal court in Roberto Mata v Avianca, Inc, imposed monetary sanctions after counsel relied on fake precedents generated by ChatGPT. In England, courts have encountered multiple instances of fabricated citations in high-value commercial disputes, including an £89 million damages claim in which several cited authorities were found to be fictitious. These cases highlight a global reality: Generative AI systems, while powerful, are not designed as reliable legal databases and require human verification.
The judicial response internationally has largely taken two forms. First, courts have imposed costs or sanctions on counsel who submit unverified AI-generated material. Second, professional bodies and courts have begun issuing guidance, including requirements to certify either that no AI has been used or that, where it has been used, the content has been independently verified.
In India, however, there are at present no comprehensive guidelines on the use of generative AI in court pleadings. The Supreme Court has issued a White Paper, and some high courts, such as Kerala, have taken positive steps by articulating broader AI policies that address administrative and technological integration within the judiciary and urge caution before relying on AI-generated responses. Nevertheless, there remains a need for a more comprehensive response from multiple institutional actors, including necessary deterrence mechanisms.
The primary responsibility lies with advocates. Submitting fabricated precedents, even if inadvertently generated by AI, is not merely a technical lapse; it amounts to an unfair practice and hence violates an advocate’s duty towards the Court. In the Supreme Court context, this duty is heightened for Advocates-on-Record (AORs), who are specifically entrusted with filing pleadings. The AOR system exists precisely to ensure a layer of accountability in matters brought before the Court. If AI-generated hallucinations enter the record, the failure is not technological but supervisory, and an adequate deterrence mechanism is therefore needed. Similarly, the Bar Council of India and other statutory and regulatory bodies governing lawyers and law students must create structured guidance on the responsible use of AI in professional practice. The answer is not prohibition but training: lawyers must understand the limitations of AI tools, including the risk of fabricated citations.
At the same time, the Court must reflect institutionally. During the hearing, Justice Bagchi observed that the “art of legal drafting” has suffered in recent times, particularly in Special Leave Petitions that increasingly consist of extended quotations rather than structured legal reasoning. This observation deserves attention beyond the AI debate.
The proliferation of poorly drafted petitions, excessive quotation without analytical synthesis, and routine invocation of precedents without contextual engagement are problems that predate generative AI. If anything, AI has merely exposed an existing weakness. A legal culture that prizes volume over precision, or mechanical citation over conceptual clarity, creates fertile ground for technological shortcuts. There is also an institutional dimension. Courts often entertain inadequately drafted petitions without insisting on compliance with established pleading standards, which inadvertently lowers the incentive for rigour. The law on pleadings — clarity of cause of action, material facts, specific grounds, and legal provisions invoked — exists for a reason. If enforced consistently, it acts as a natural deterrent to superficial drafting, whether human or machine-generated.
The writer leads Charkha, the Constitutional Law Centre at the Vidhi Centre for Legal Policy. Views are personal
