As legal professionals increasingly explore generative AI to support research, drafting, and other legal tasks, a significant challenge emerges: the strict duty of confidentiality that governs lawyers' practice is at odds with the data needs of large language model (LLM) training.
For AI to effectively understand and respond to the nuances of legal jurisdictions, it needs comprehensive data – data that, in the legal field, is often shrouded in confidentiality.
We have more information than ever before about how these revolutionary AI technologies will influence the future of the legal industry. To learn more, see our latest Legal Trends Report.
The competence paradox
A lawyer's primary obligation is to provide competent representation, as set out in the legal profession's rules of competence. This includes a duty to maintain current knowledge and, as stated in Rule 3.1-2 of the Federation of Law Societies of Canada's Model Code of Professional Conduct, to understand and appropriately use relevant technology, including AI. At the same time, lawyers are bound by Rule 3.3-1, which requires strict confidentiality of all client information.
Similarly, in the United States, the American Bar Association published Formal Opinion 512 on generative artificial intelligence tools. It emphasizes that lawyers must consider their ethical duties when using AI, including competence, client confidentiality, supervision, and reasonable fees.
This paradox leaves legal professionals in a no-win situation: they must understand and use AI to remain competent, yet confidentiality prevents them from sharing the case details that would improve AI models. Without comprehensive legal data, LLMs are often undertrained, particularly in specialized areas of law and specific jurisdictions.
As a result, AI tools may produce incorrect or jurisdictionally irrelevant results, increasing the risk of “legal hallucinations” – fabricated or inaccurate legal information.
Legal hallucinations: a persistent problem
Legal hallucinations are a significant problem when using LLMs in legal work. Studies have shown that LLMs such as ChatGPT and Llama frequently generate incorrect legal conclusions. Hallucinations are especially common when models are asked about specific court cases, with error rates as high as 88%.
This is particularly problematic for lawyers who rely on AI to speed up research or drafting, as the models may fail to differentiate between nuanced regional laws or may cite fabricated legal precedents. AI's inability to properly handle variations in laws between jurisdictions points to a fundamental lack of training data.
The confidentiality trap
The heart of the problem is that confidentiality obligations bar legal professionals from sharing their work product with AI training systems. Lawyers cannot ethically disclose the intricacies of their clients' cases, even for the purpose of training more competent AI. While LLMs need this vast pool of legal data to improve, lawyers are bound by confidentiality rules that prohibit them from sharing client information without express permission.
Maintaining this strict siloing of information within the legal profession, however, limits the development of competent AI. Without access to diverse, jurisdiction-specific legal data, AI models are stuck in a "legal monoculture": reciting overly generalized notions of law that fail to account for local variations, particularly in smaller or less prominent jurisdictions.
The solution: regulated information sharing
One potential solution to this problem is to empower legal regulators, such as law societies and associations, to act as intermediaries for AI training.
Most rules allow records to be shared with regulators without breaching confidentiality. Regulators could mandate that their members share anonymized or filtered records for the specific purpose of training legal AI models, ensuring that AI tools receive a broad range of legal data while client privacy is preserved.
By requiring lawyers to submit their data through a regulator, the process can be closely monitored to ensure no identifying information is shared. These anonymized files would be invaluable for training AI models to understand complex variations in the law between jurisdictions, reducing the risk of legal hallucinations and enabling more reliable AI results.
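To make the idea concrete, here is a minimal sketch in Python of the kind of filtering step a regulator-run pipeline might apply before a record ever reaches a training set. The patterns, placeholder labels, and `redact` helper are illustrative assumptions rather than a real system; in practice, anonymization would combine trained named-entity recognition with human review, since pattern matching alone cannot catch every identifier.

```python
import re

# Illustrative patterns for identifiers that commonly appear in legal records.
# These labels and regexes are assumptions for this sketch, not a real standard.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "FILE_NO": re.compile(r"\b(?:File|Docket|Court)\s*(?:No\.?|#)\s*[\w-]+", re.IGNORECASE),
}

def redact(text: str, party_names: list[str]) -> str:
    """Replace pattern-matched identifiers and known party names with placeholders."""
    # Redact structured identifiers first, so that a party name embedded in an
    # email address is not partially replaced before the email pattern sees it.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    # Party names would come from the submitting lawyer's own case metadata.
    for name in party_names:
        text = re.sub(rf"\b{re.escape(name)}\b", "[PARTY]", text, flags=re.IGNORECASE)
    return text

record = ("Re: Smith v. Jones, File No. 2024-0182. "
          "Contact jane.smith@example.com or 604-555-0199.")
print(redact(record, party_names=["Smith", "Jones"]))
# -> Re: [PARTY] v. [PARTY], [FILE_NO]. Contact [EMAIL] or [PHONE].
```

The ordering is deliberate: running pattern-based redaction before name-based redaction prevents a party name inside an email address from being partially replaced, which would leave a fragment the email pattern no longer recognizes.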
Benefits to the legal profession and the public
The advantages of this approach are twofold:
- First, lawyers would have access to much more accurate and jurisdiction-specific AI tools, making them more efficient and improving the overall quality of legal services.
- Second, the public would benefit from better legal outcomes because AI-assisted lawyers would be better equipped to handle cases quickly and competently.
By mandating this data sharing process, regulators can help break the current cycle in which legal professionals are unable to contribute to or fully benefit from AI models. Shared models could be released under open source or Creative Commons licenses, allowing legal professionals and technology developers to continually refine and improve legal AI.
This open access would ultimately democratize legal resources, giving even small firms and solo practitioners access to powerful AI tools previously reserved for those with significant technological resources.
Conclusion: a way forward
The strict duty of confidentiality is essential to maintaining trust between lawyers and their clients, but it also hinders the development of competent legal AI. Without access to the vast pool of legal data locked behind confidentiality rules, AI will continue to suffer from jurisdiction-specific knowledge gaps, producing results that may not comply with local laws.
The solution lies with legal regulators, who are uniquely positioned to facilitate the sharing of anonymized legal data for AI training purposes. With regulators filtering the client records that lawyers provide, lawyers can continue to honor their duty of confidentiality while enabling the development of better-trained AI models.
This approach ensures that legal AI will benefit not only the legal profession but also the general public, helping to create a more effective, efficient, and fair legal system. By tackling the "confidentiality trap," the legal profession can move into the future, harnessing the power of AI without sacrificing its ethical obligations.
Learn more about the impact of AI on law firms in our latest Legal Trends Report. Automation is not only reshaping the legal industry but also creating vast opportunities for law firms to bridge the justice gap while increasing their profits.
Note: This article was originally published by Joshua Lenon on LinkedIn and is based on a lightning talk he gave at a recent Vancouver Tech Week event hosted by Toby Tobkin.
We published this blog post in October 2024.