‘Hallucinated’ cases are affecting lawyers’ careers – they need to be trained to use AI
Generative artificial intelligence, which produces original content by drawing on large existing datasets, has been hailed as a revolutionary tool for lawyers. From drafting contracts to summarising case law, generative AI tools such as ChatGPT and Lexis+ AI promise speed and efficiency.
But the English courts are now seeing a darker side of generative AI: fabricated cases, invented quotations and misleading citations entering court documents.
As someone who studies how technology and the law interact, I argue it is vital that lawyers are taught how, and how not, to use generative AI. Lawyers need to avoid not only the risk of sanctions for breaking the rules, but also contributing to a legal system that risks deciding questions of justice on the basis of fabricated case law.
On 6 June 2025, the high court handed down a landmark judgment in two separate cases: Frederick Ayinde v The London Borough of Haringey and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC.
The court reprimanded a pupil barrister (a trainee) and a solicitor after their submissions contained fictitious and inaccurate case law. The judges were clear: “freely available generative artificial intelligence tools… are not capable of conducting reliable legal research”.
As such, the use of unverified AI output can no longer be excused as error or oversight. Lawyers, junior or senior, are fully responsible for what they put before the court.
Hallucinated case law
AI “hallucinations” – the confident generation of non-existent or misattributed information – are well documented, and the law is no exception. Recent research has found that hallucination rates range from 58% to 88% in response to specific legal queries, often on precisely the sorts of issues lawyers are asked to resolve.
These errors have now leapt off the screen and into real legal proceedings. In Ayinde, the trainee barrister cited a case that did not exist at all; the fabricated authority had been attributed to a genuine case number belonging to a completely different matter.
In Al-Haroun, a solicitor listed 45 cases provided by his client. Of these, 18 were fictitious and many others irrelevant. The judicial assistant is quoted in the judgment as saying: “The vast majority of the authorities are made up or misunderstood”.
These incidents highlight a profession facing a perfect storm: overstretched practitioners, increasingly powerful but unreliable AI tools, and courts no longer willing to treat such errors as innocent mishaps. For junior lawyers, the consequences are stark.
Many are experimenting with AI out of necessity or curiosity. Without the training to spot hallucinations, though, new lawyers risk reputational damage before their careers have fully begun.
The high court took a disciplinary approach, placing responsibility squarely on the individual and their supervisors. This raises a pressing question. Are junior lawyers being punished too harshly for what is, at least in part, a training and supervision gap?
Education as prevention
Law schools have long taught research methods, ethics, and citation practice. What is new is the need to frame those same skills around generative AI.
Many law schools and universities are already exploring AI within existing modules or creating new modules dedicated to it, and there is a broader shift towards considering how AI is changing the legal sector as a whole.
Students must learn why AI produces hallucinations, how to design prompts responsibly, how to verify outputs against authoritative databases and when using such tools may be inappropriate.
The high court’s insistence on responsibility is justified. The integrity of justice depends on accurate citations and honest advocacy. But the solution cannot rest on sanction alone.
If AI is part of legal practice, then AI training and literacy must be part of legal training. Regulators, professional bodies and universities share a collective duty to ensure that junior lawyers are not left to learn through error in the most unforgiving of environments: the courtroom.
Similar issues have arisen with people who are not legal professionals. In a Manchester civil case, a litigant in person (someone representing themselves in court) admitted relying on ChatGPT to generate legal authorities in support of their argument. The individual returned to court with four citations: one entirely fabricated, and three with genuine case names but fictitious quotations attributed to them.
While the submissions appeared legitimate, closer inspection by opposing counsel revealed that the quoted paragraphs did not exist. The judge accepted that the litigant had been inadvertently misled by the AI tool and imposed no penalty. The case shows both the risk of unverified AI-generated content entering proceedings and the challenges unrepresented parties face in navigating court processes.
The message from Ayinde and Al-Haroun is simple but profound: using generative AI does not reduce a lawyer’s professional duty; it heightens it. For junior lawyers, that duty will arrive on day one. The challenge for legal educators is to prepare students for this reality, embedding AI verification, transparency and ethical reasoning into the curriculum.
Craig Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
