Artificial intelligence (AI) is transforming industries across the board, and the legal field is no exception. Although law firms are typically cautious adopters of new technology, and AI in particular remains the subject of debate within many firms, AI is beginning to play a role in how lawyers deliver services. For clients, the key questions are: How are my lawyers using AI? How does it impact the quality and cost of my legal work? And should I be using AI tools myself?
As early adopters of AI for use in legal services, we believe AI has real potential to improve outcomes for clients, but it must be used thoughtfully, strategically, and with the right safeguards.
WHY AI MATTERS FOR CLIENTS
One of the most practical reasons AI matters is cost. Used correctly, AI can help lawyers work more efficiently, reducing the bills sent out to clients.
For example, AI can quickly review large volumes of documents, summarizing and flagging key provisions ahead of time to help speed up attorney review. AI can act as a supercharged search engine, instantly locating obscure statutory provisions, case law, or informal guidance that might otherwise have taken an attorney hours to find. And, while it may be impractical to have AI create an entire document—such as a purchase agreement—from scratch, an attorney can easily upload a familiar form and request that the AI add a few additional provisions needed for the transaction at hand, significantly cutting down on drafting time. These time savings have clear potential to translate into lower bills.
But AI isn’t just about speed. It can also improve thoroughness and accuracy. AI tools can catch details buried in hundreds of pages that a human might overlook, offer perspective on varying ways to define key terminology, or highlight unusual contract terms that deserve closer scrutiny. When used properly, this means clients get more reliable and thorough legal analysis. It can be the equivalent of having an extra set of sharp eyes on a project without having an extra attorney billing on the matter.
WHERE AI FALLS SHORT
The biggest potential downsides of AI are confidentiality risks and inaccurate output.
Certain AI models are able to protect client data within a closed environment, i.e., one that is walled off from third-party access. In general, however, uploading confidential documents for AI review can expose sensitive information to third-party servers, potentially breaching contractual obligations or internal policies, or waiving attorney-client privilege and making the entered information and corresponding outputs discoverable in litigation.[1] It is also possible that an AI model could "train" itself on such confidential information and later inadvertently reproduce it for other users.[2]
Since it is often advantageous to consult multiple AI models on a given issue, one workaround is for the attorney to ask a secure AI model to summarize a document while stripping out all sensitive or client-specific data; the attorney can then input that sanitized summary into other AI models for second or third opinions.
Similarly concerning is AI’s propensity to “hallucinate” and confidently deliver false information. In addition to studies documenting this phenomenon in the legal context, there have been several prominent examples of attorneys getting themselves into trouble by reproducing hallucinated information.[3] Any lawyer who has worked with AI can likely give several examples of their preferred AI tool providing them with materially inaccurate legal information. In some cases, a given AI model can even provide widely differing answers depending on how the question is phrased.
Attorneys should carefully review and verify all information received from an AI tool. It may also be prudent to ask multiple AI models the same question, or even to pose the same question to a single model in multiple ways, to ensure more complete coverage of the subject matter. Perhaps most importantly, your attorney should not be using AI as a substitute for competency in a given practice area.
Finally, AI lacks the common sense of a lawyer with years of experience. Unless context is explicitly provided piece by piece, AI will not understand the specific client and matter, so it will follow a brief prompt precisely even when what is actually needed is an entirely different approach. This problem can arise when, for example, clients ask an AI model to produce an initial draft of a document and then ask their attorney to review it. While it may feel like a cost-saving measure, in practice it can backfire, with the attorney spending more time untangling and correcting an AI-generated draft than they otherwise would have spent on the project.
The better approach is to ask your lawyer how they can use AI to meet your legal needs.
GOOD QUESTIONS TO ASK
As a client, you don’t need to know the technical details of how AI works. But you should feel empowered to ask your legal team questions like:
- Does your firm use AI in reviewing materials, conducting due diligence, or drafting documents?
- What safeguards are in place to protect my confidential information if AI is used?
- How do you ensure that AI-generated insights are accurate and legally reliable?
- Can AI help reduce my legal costs in a meaningful way without compromising quality?
These questions signal to your lawyer that you value efficiency but also expect careful stewardship of your business and legal risks.
CONCLUSION: ASK YOUR LAWYER ABOUT AI
AI likely will not replace lawyers any time soon, but it can help them serve clients better. The firms that use AI thoughtfully will be able to deliver faster, more accurate, and potentially more cost-effective results. However, AI needs to be used with proper guardrails in place, including policies, training, and secure systems.
We encourage clients to discuss AI proactively with their attorneys, asking how they plan to use AI to benefit their clients and how they plan to do so responsibly.
[1] https://www.reuters.com/legal/legalindustry/rules-use-ai-generated-evidence-flux-2024-09-23/; https://www.jdsupra.com/legalnews/discovery-pitfalls-in-the-age-of-ai-1518070/
[2] https://www.pcmag.com/news/samsung-software-engineers-busted-for-pasting-proprietary-code-into-chatgpt
[3] https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries; https://www.reuters.com/legal/government/trouble-with-ai-hallucinations-spreads-big-law-firms-2025-05-23/