This is an illustrative scenario based on common patterns we see across SMBs. Not a real client engagement.
Legal Services · ~40 people · Illustrative

Illustrative scenario — 40-person regional law firm navigates AI in client communications

Outcome: Established an AI policy that satisfied the firm's ethics committee in under 30 days.

Common starting state

A 40-person regional law firm (eight partners, a dozen associates, and the rest support staff) has a problem it can't quite name. Three associates have started using ChatGPT to draft memos and research summaries. One partner uses it to outline closing arguments. The firm administrator uses it for HR correspondence.

No one told them not to. No one told them how. State bar guidance exists but is general enough to be unhelpful operationally. The managing partner got a call from another firm about a publicized incident involving a competitor (an associate who submitted a brief with AI-hallucinated case citations) and decided it was time to get ahead of this.

The risk here isn't just data leakage. It's attorney-client privilege, work product doctrine, competence obligations under state bar rules, and vendor due diligence the firm has never done for any software it uses.

Risks identified

Legal is a denser compliance environment than most SMBs operate in. An Audit here surfaces distinct layers of risk:

Privilege risk in the prompt. When an associate pastes a memo summarizing a client's legal position into ChatGPT, that content has been disclosed to a third party, which can jeopardize privilege and may make the exchange discoverable if the vendor's data practices are ever examined in litigation. The question "did you share privileged communications with a third party?" has a more complicated answer now.

Bar ethics requirements. Most state bars now publish guidance on AI use that includes a competence obligation: attorneys must understand the tools they're using well enough to supervise the output. Relying on a tool that can hallucinate citations, with no verification step in place, is a disciplinary risk.

Staff using different tools than attorneys. In this scenario, attorneys are more cautious (because they've heard about the citation incident), while support staff have fewer guardrails and handle sensitive client intake materials. The data perimeter is largest where the controls are weakest.

No vendor contracts cover AI. The firm's document management system and practice management software have both added AI features in the last 18 months. Neither contract was updated. Neither vendor offers a data-processing addendum or specific confidentiality terms covering its AI features.

What we'd typically recommend

The priority at a firm like this is getting to a documented position that the ethics committee can stand behind—before the bar association asks.

  1. Separate the use cases before writing the policy. AI for internal drafting (memos, outlines, research first-passes) has a different risk profile than AI touching client intake data or court filings. The policy should address these separately rather than with a single blanket rule.

  2. Prohibit AI drafting in court filings without a documented review step. Not "no AI in filings"—that's too restrictive and unenforceable. Rather: any AI-generated content in a filing must go through a named partner review step, and that review is logged. This satisfies competence obligations and creates a defensible record.

  3. Classify the support staff workflows. Intake documents, client onboarding forms, and HR files should be out of scope for any AI tool not under contract with appropriate confidentiality terms. This is a quick policy change that closes the highest-risk gap.

  4. Run a vendor review on the two platforms that added AI features. Write to both vendors with a standard set of questions: do they use customer data to train models, what data-residency commitments do they make, what is the process for data deletion, and does their E&O coverage extend to AI-generated outputs? The answers become part of the firm's vendor due-diligence file.

  5. Brief the partners, not just the associates. Policy dissemination at law firms often stops at the associate level. Partners who are also AI users need the same briefing—and are more likely to take it seriously if it's framed as bar ethics compliance rather than IT policy.

Outcome to expect

The scenario resolves in two phases. Phase one (weeks one through four): the firm has a written policy, it has cleared the ethics committee, and the two vendor reviews are in flight. Staff have received a thirty-minute briefing. The immediate risk, privilege exposure through uncontrolled personal ChatGPT use, is addressed.

Phase two (months two through four): the vendor reviews come back, one vendor provides satisfactory terms and one doesn't, and the firm makes a product decision accordingly. The firm's AI policy becomes part of new-hire onboarding.

The 30-day ethics committee figure reflects what's achievable when the policy is scoped correctly—narrow enough to be defensible, practical enough to be followed. Firms that try to write a comprehensive AI policy covering every conceivable use case in one pass typically stall.

Recognize your business in this scenario?

A free 30-minute call to look at where AI is already touching your operations and what to prioritize first.

Book a free call