Law firms are adding AI chatbots to their websites. Most are non-compliant with ABA Formal Opinion 512 and don't know it — not because they ignored compliance, but because the guidance was written for lawyers, not builders.
Opinion 512 tells attorneys what their ethical obligations are when using generative AI. It says nothing about how to architect a chatbot that enforces those obligations automatically. That gap is where most law firm chatbots fail. The chatbot sounds helpful. It sounds professional. And it's quietly violating three separate ethics rules.
This post is for the person who has to actually build the thing. We'll cover what Opinion 512 requires, how each requirement translates into a design constraint, and what a compliant chatbot looks like in practice — with a working example we built for our own site.
What ABA Formal Opinion 512 Actually Says
On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512: "Generative Artificial Intelligence Tools." It establishes that lawyers using any generative AI tool — including client-facing chatbots on their websites — must satisfy their existing ethics obligations under the Model Rules.
Three obligations are binding and directly affect how a law firm chatbot must be designed:
- Disclosure — Users must know they are interacting with AI, not a licensed attorney (Model Rule 8.4(c), duty not to deceive)
- Attorney supervision — A named attorney at the firm must be accountable for the chatbot's output, with appropriate oversight mechanisms (Model Rule 5.3, supervision of non-lawyers)
- No unauthorized practice of law — The chatbot cannot analyze facts, interpret law, or give advice applied to a specific person's circumstances (Model Rule 5.5, UPL)
These aren't guidelines. They're ethics rules. Violations can result in bar discipline. And they don't disappear because a vendor called the product "ABA-compliant" in a marketing brochure.
Requirement 1: Disclosure — The Chatbot Must Say What It Is
The chatbot's first message must identify it as AI. Not "Hi, I'm Alex, how can I help you today?" That implies a human. Not "I'm a virtual assistant" without further explanation. That's deliberately vague.
A compliant opening: "Hi — I'm the Smith Law AI assistant. I can answer questions about our practice areas and book consultations. I'm not a lawyer and can't give legal advice."
The disclosure must appear before any substantive exchange. A visitor who doesn't know they're talking to AI and shares confidential information based on that misunderstanding creates both an ethics problem and a privilege risk.
In February 2026, a federal court ruled that "your AI chatbot is not your lawyer" — that liability for the bot's output stays with the firm, not the software vendor. The disclosure requirement exists precisely because courts and ethics bodies are watching how firms blur the line between AI assistance and legal counsel.
The disclosure also needs to persist. If a visitor asks a follow-up question 10 messages in, the chatbot cannot start responding as if it's a lawyer because the initial disclaimer has scrolled off screen. The behavioral constraint must hold throughout the conversation.
Requirement 2: Attorney Supervision — What the Code Must Support
Opinion 512 requires that a managing or supervising attorney ensure the firm's AI use complies with ethics obligations. For a website chatbot, this means a named attorney at the firm is accountable for what the bot says — the same accountability they'd have for a junior associate's client-facing work.
The supervision requirement isn't a policy document you write. It's architecture you build. Three things need to exist in the system:
1. Conversation logs the attorney can audit
Every exchange between the chatbot and a visitor must be stored and accessible to the responsible attorney. Not just the lead's name and email — the full conversation. The attorney needs to be able to spot-check transcripts, identify patterns where the bot gets close to UPL, and course-correct the system prompt or behavioral constraints.
2. Escalation trigger for flagged conversations
When a visitor provides fact patterns, mentions a specific legal matter, presses the chatbot on a legal question, or shows frustration with "I can't give advice on that" — the system needs to escalate. Not route to a contact form. Trigger an alert that notifies a human at the firm with the conversation transcript.
Escalation cannot be optional. The trigger logic must be designed in, not added as a nice-to-have later. An attorney who can't identify which conversations require human attention hasn't supervised the system — they've just approved it abstractly.
3. Override capability
The attorney must be able to disable the chatbot quickly if a problem is discovered. This sounds obvious but many off-the-shelf chatbot platforms don't expose an emergency kill switch to the firm. The system must support it.
How we approach this: Every conversation is stored with session ID, timestamp, and full transcript. Escalation fires automatically when a visitor mentions a specific matter, expresses urgency around a legal deadline, or repeatedly pushes back on "I can't give advice." The escalation email goes to the firm's designated contact with the full transcript. The attorney can then follow up directly or adjust the system prompt to close the gap.
Requirement 3: No Unauthorized Practice of Law — The Refusal Logic
UPL means giving legal advice without a license. For an AI chatbot, this means analyzing a visitor's specific facts, predicting outcomes, advising on deadlines, or interpreting how the law applies to their situation.
The distinction between permitted and prohibited isn't a disclaimer you add at the bottom of the page. It's behavioral — engineered into the response logic before a single line of code is written.
| Response type | Example | Permitted? |
|---|---|---|
| Practice area description | "We handle personal injury and employment law." | ✓ |
| Process explanation | "Initial consultations are 45 minutes and free." | ✓ |
| Published fee information | "We charge $350/hr for litigation matters." | ✓ |
| Case assessment | "Based on what you've told me, you likely have a claim." | ✗ |
| Deadline advice | "You have 2 years to file under the statute of limitations." | ✗ |
| Outcome prediction | "Your employer likely violated the FLSA." | ✗ |
The prohibited responses aren't harder to generate than the permitted ones. They require an explicit constraint in the system prompt, reinforced by behavioral testing before launch. Every prohibited response type needs to be documented, tested, and confirmed to trigger the appropriate refusal or escalation.
States are also tracking this. Justia's 50-state survey on attorney AI ethics shows over 30 states have issued guidance either adopting Opinion 512 or adding stricter requirements. Some states require retaining chatbot transcripts and documenting human involvement for any AI that interacts with clients. The compliance burden compounds when firms operate across multiple jurisdictions.
ABA Rule 7.3: Is Your Chatbot Prohibited Solicitation?
Many firms hesitate to deploy website chatbots because they worry it constitutes prohibited solicitation under Model Rule 7.3. The concern is understandable but largely misplaced.
Rule 7.3 prohibits an attorney from reaching out to a prospective client who hasn't initiated contact. A chatbot on your website is the opposite situation: the visitor landed on your site and started the conversation. That's advertising, not solicitation — and advertising is regulated under Rule 7.2, not prohibited under 7.3.
The distinction matters legally: solicitation = attorney initiates contact. Chatbot = visitor initiates contact.
Advertising rules still apply. The chatbot must identify the firm, cannot make misleading statements, and must include required disclosures (including that it's AI). But the fear that a chatbot is inherently impermissible is not supported by Opinion 512 or current ethics guidance.
What a Compliant Chatbot Looks Like — Our Own Example
Before offering this as a service to clients, we built a compliance-aware AI agent for our own site. We started with the behavioral constraints, not the feature list.
Before a line of code was written, we documented every response type the agent was not permitted to produce: custom project quotes without seeing scope, compliance guarantees for systems we haven't audited, legal advice, medical advice, and project commitments below our minimum engagement size.
The agent knows seven visitor types and handles each differently:
- Pricing inquiries get documented ranges, not custom quotes
- Compliance questions get verified facts from our published materials, not promises
- Custom scope descriptions get acknowledged and routed to a call, not quoted on the spot
- Off-topic requests get a direct acknowledgment that we're the wrong fit
- Mentions of a specific budget over $50K trigger an escalation email to our team with the conversation transcript
For law firm clients, the same architecture applies with two additional constraints baked in: no case assessment, and no response that touches a specific statute, deadline, or legal interpretation applied to a visitor's facts.
We also maintain an open-source Clio MCP connector that gives law firm agents access to live matter data — so the agent can answer "what's the status of my case?" by querying Clio directly, rather than routing every client to a callback. The Clio integration details are on our legal AI service page →
How We Build Opinion-512-Compliant Agents for Law Firms
Our process for law firm AI agents starts with constraint design, not feature design. The compliance layer is documented before any code is written. This isn't unique to AI — it's the same approach we use when building systems with DEA and HIPAA compliance requirements, where behavioral constraints have legal consequences if violated.
Step 1: Prohibited response inventory
Document every response type the agent cannot produce. Test each one before launch. Verify the refusal or escalation fires correctly.
Step 2: Supervision architecture
Build conversation logging, escalation triggers, and override capability before any client-facing features. The attorney supervision layer is infrastructure, not a feature.
Step 3: Disclosure testing
Verify the AI identification appears before every substantive exchange, persists through multi-turn conversations, and cannot be stripped by a persistent user.
Step 4: Jurisdiction-specific review
Cross-reference against your state bar's published guidance (30+ states have now issued supplemental requirements). Add any state-specific disclosure language required.
The result is a chatbot that books consultations, answers practice area questions, routes complex matters to attorneys, and captures leads 24/7 — while staying inside the lines Opinion 512 draws. Build time: 4–6 weeks. Price range: $15,000–$25,000 depending on Clio integration depth and the number of jurisdictions covered.
See our full legal tech service capabilities →
FAQ: ABA Opinion 512 and Law Firm AI Chatbots
Does my law firm's website chatbot need to comply with ABA Formal Opinion 512?
Yes, if your firm uses generative AI in any client-facing context — including website chatbots — ABA Formal Opinion 512 (July 2024) applies. It requires disclosure that users are talking to AI (not a licensed attorney), attorney supervision of all AI output, and design constraints that prevent unauthorized practice of law. These aren't optional guidelines; they're ethics obligations that can result in bar discipline if violated.
Can an AI chatbot on a law firm website provide legal advice?
No. ABA Opinion 512 prohibits unauthorized practice of law, which includes a chatbot analyzing a visitor's specific facts, predicting case outcomes, advising on deadlines, or interpreting how the law applies to their situation. What a compliant chatbot can do: describe your firm's practice areas, explain the intake process, share published fee ranges, and book consultations. The line is between general information (permitted) and advice applied to a specific person's circumstances (prohibited).
What is "attorney supervision" of an AI chatbot and how is it implemented technically?
Opinion 512 requires that a named attorney at the firm be accountable for the chatbot's output — the same oversight expected of a junior associate. Technically this means three things: (1) conversation logs the attorney can audit, (2) an escalation trigger that flags specific conversations for human review, and (3) a kill switch the attorney can activate to disable or override the bot. Attorney supervision isn't a policy document you write — it's architecture you build.
Is a website chatbot considered legal solicitation under ABA Rule 7.3?
No. ABA Rule 7.3 prohibits solicitation, which is defined as an attorney reaching out to a prospective client. A user-initiated chatbot on your firm's website is classified as advertising — the visitor made the first contact. This is an important distinction because it means on-site AI intake chatbots are generally permitted. Advertising rules still apply: the chatbot must identify the firm, cannot be misleading, and must include required disclosures.
Can we connect our law firm chatbot to Clio or other practice management software?
Yes, and this is where custom-built agents have a significant advantage over generic SaaS chatbots. A custom agent can query live matter status, contact records, calendar availability, and document metadata from Clio via an MCP connector — giving visitors real answers about their case status rather than routing them to a callback. Oktopeak maintains an open-source Clio MCP connector (@oktopeak/clio-mcp on npm) that enables exactly this integration while maintaining ABA Opinion 512 compliance constraints.