It is one of the most crucial moments of any trial: perhaps the moment a criminal learns he will spend the rest of his life behind bars.
But despite the importance of a judge's ruling, the task of writing it can now, at least in part, be handed over to ChatGPT.
Judges in England and Wales can use the AI chatbot to write their legal rulings, the Telegraaf reports.
This is despite the fact that ChatGPT is prone to inventing fake cases; the tool even admits on its own landing page that it can 'make mistakes'.
ChatGPT, described as 'very useful' by a British judge, is increasingly infiltrating the legal sector, raising concerns among some experts.
Judges in England and Wales can use ChatGPT to write legal judgments, following guidance from the Judicial Office.
The new official guidance from the Judicial Office, issued to thousands of judges, points out that AI can be used to summarize large amounts of text or for administrative tasks.
These qualify as basic work tasks; more sensitive parts of the process, such as legal research and legal analysis, should not be delegated to chatbots, the guidance states.
According to Master of the Rolls Sir Geoffrey Vos, AI offers 'significant opportunities in developing a better, faster and more cost-effective digital justice system'.
“Technology is only going to advance and the judiciary needs to understand what is going on,” he said.
He acknowledged, however, that the technology has a tendency to fabricate bogus cases, and could eventually be widely used by the public to bring cases forward.
“Judges, like everyone else, must be acutely aware that AI can provide both inaccurate and accurate answers,” Sir Geoffrey added.
Judges have also been warned of signs that legal arguments may have been prepared by an AI chatbot.
The usefulness of ChatGPT seemingly knows no bounds, as it is used to compose essays, write computer programs, suggest medications and even hold philosophical conversations
Sir Vos, head of civil justice in England and Wales, said the directive was the first of its kind in the jurisdiction.
He told reporters at a briefing before the guidelines were published that AI “offers great opportunities for the justice system,” according to Reuters.
“Because it is so new, we need to make sure that judges at all levels understand what it does, how it does it and what it cannot do,” he added.
Santiago Paz, associate at law firm Dentons, has urged responsible use of ChatGPT by lawyers.
“While ChatGPT's answers may sound convincing, the truth is that the platform's capabilities are still very limited,” he said.
'Lawyers should be aware that ChatGPT is not a legal expert.'
Jaeger Glucina, chief of staff at legal AI company Luminance, said generative AI models such as ChatGPT “cannot be trusted as a source of facts.”
'Instead, they should be seen as a well-read friend and not an expert in a particular field,' he told MailOnline.
'The Court has done well to recognize this by noting ChatGPT's effectiveness for simple text-based tasks such as summarizing, while cautioning against its use for more specialist work.'
A British judge has already described ChatGPT as 'very useful' as he admitted using it when writing a recent Court of Appeal decision.
Lord Justice Birss said he used the chatbot when summarizing an area of law he was already familiar with.
And a Colombian judge went even further by using ChatGPT to make his decision, which was a legal first.
ChatGPT and Google's Bard, its main competitor, are useful for learning a few simple facts, but over-reliance on the technology can backfire on users.
Earlier this year, a New York attorney got into trouble for submitting an error-ridden brief he drafted using ChatGPT.
Steven Schwartz submitted a ten-page brief citing at least six entirely fictitious cases as part of a lawsuit against Avianca Airlines.
Mr. Schwartz said he “deeply regrets” his dependence on the bot and was “unaware of the possibility that the content could be false.”
AI tools other than ChatGPT have been used in the legal industry, but not without controversy.
Also this year, two AIs created by legal AI company Luminance successfully negotiated a contract without any human intervention.
The AIs went back and forth over the details of a real non-disclosure agreement between the company and proSapient, one of Luminance's customers.
The world's first robot lawyer also found itself in legal trouble after being sued for practising without a law degree.
AI-powered app DoNotPay is facing accusations that it is “disguising itself as a licensed practitioner” in a class action lawsuit filed by US law firm Edelson.
However, DoNotPay's founder Joshua Browder says the claims have “no merit whatsoever.”