December 8, 2025
Updated guidance has been published for judicial office holders on the use of Artificial Intelligence.
Much of the guidance is self-explanatory: judges and those working on their behalf are expected to use AI with caution, avoiding any use that risks breaching confidentiality or privacy, and remaining alert to the potential for bias. It also emphasises that those seeking to use AI tools must understand both their capabilities and their limitations. For example, it suggests that AI tools may be helpful for summarising large bodies of text and performing administrative tasks, but warns against using them for legal research or analysis.
Unsurprisingly, the guidance also stresses that the accuracy of any information provided by an AI tool must be checked before it is relied upon, and reminds judicial office holders not only that they must always read the underlying documents, but that they are personally responsible for material produced in their name. Interestingly, the indication is that if AI is used as a "useful secondary tool" and in a way that complies with the guidance, a judge need not necessarily disclose that they have used an AI tool as part of their research or as an aid in preparing a judgment.
Finally, the guidance explicitly advises judges to be on the lookout for the use of AI tools by litigants. It recognises that such use is now commonplace and that, again, a legal representative need not necessarily disclose it. However, it suggests that judges may need to "remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot". Similarly, judges are put on notice of the increasing sophistication of deepfakes and the corresponding risk of a greater incidence of forged documents.
To read the guidance in full, click here.