May 6, 2026 · 11 min read

Why "Chat With Your Business" Won't Work for Law Firms or Clinics

Sooner or later an AI product gets in front of you pitching one chat box for your whole company: text it from your phone and the inbox runs itself, the calendar runs itself, the CRM runs itself. The pitch is real. The trend is real. But the graveyard behind it is bigger than the trend, and in regulated industries the ceiling is structural. Here is the data, and what wins instead.

The horizontal AI graveyard, in 14 months

The "AI assistant for your whole business" pitch has been one of the loudest categories in venture for two years. The exits and shutdowns from the last 14 months tell a different story.

  • Inflection AI / Pi. Raised over $1.5B. Microsoft acqui-hired the team for $650M in March 2024. Pi added usage caps in August 2024 and the consumer product was effectively wound down; what remained pivoted to B2B API licensing.
  • Adept AI. Raised $415M for "an AI teammate for every knowledge worker." Amazon licensed the team and IP for a reported $330M in June 2024. The last remaining co-founder, David Luan, exited Amazon in February 2026. The original company is effectively a zombie.
  • Humane AI Pin. Raised over $230M from OpenAI's Sam Altman and others. HP bought the assets in February 2025 for $116M, against a target of around $1B. Existing devices were bricked at 3:00 PM ET on February 28, 2025.
  • Limitless (formerly Rewind). Personal AI memory hardware plus app. Acquired by Meta in December 2025; the pendant was discontinued and customers were moved to a free unlimited plan for one year while the product winds down.
  • Sana AI. Workspace AI assistant. Acquired by Workday for approximately $1.1B in September 2025, closing in November. A "winning" outcome by VC math. Independent existence: over.
  • Forward Health. Not a chat product, but the closest healthcare analogue: $650M raised for AI-powered "CarePods." Company shut down in November 2024 after deploying three pods, one of which was removed almost immediately.

SimpleClosure's 2025 State of Startup Shutdowns report found that Series A shutdowns jumped from roughly 6% to 14% of all closures year over year, with AI wrapper companies catastrophically over-represented. The MIT NANDA Project's 2025 study reported that 95% of generative AI pilots in the enterprise failed to deliver measurable P&L impact.

None of this is an argument that AI does not work. It is an argument that the horizontal "one chat box for everyone" architecture is structurally fragile. Two forces compress it from above and from below.

The model vendors already ate the horizontal layer

The pitch of every "chat with your business" product is the same: connect Gmail, Calendar, Drive, Slack, Notion, your CRM, your finance tool, and ask one assistant questions across all of them. As of May 2026, every major model vendor ships exactly that, bundled, at the same price the user is already paying.

  • OpenAI Apps (renamed from Connectors on December 17, 2025): Gmail, Google Calendar, Google Contacts, Google Drive, SharePoint, Dropbox, Box, Slack, GitHub, and ten or more others, included with Plus, Pro, Team, and Enterprise.
  • Anthropic Claude Connectors (launched July 2025; over 50 official integrations by Q1 2026): Google Drive, Gmail, Google Calendar, Slack, Notion, Asana, Figma, Canva, plus a wave of creative tools added April 28, 2026.
  • Microsoft 365 Copilot: 15M paid seats by Q2 FY2026 per Microsoft earnings commentary. Now bundled with Copilot for Sales, Service, and Finance at no additional charge in the Enterprise tier.
  • Google Gemini in Workspace: bundled into all Workspace tiers from January 2025 (alongside list-price increases of 17–22%). Workspace Business Standard at $14 per user per month now includes Gemini access; Workspace Studio in 2026 added no-code agent creation.
  • Perplexity Comet Enterprise: AI browser launched March 2026, with MDM deployment, agent action controls, and a CrowdStrike security integration.

The pricing math is brutal. A buyer paying $20 per month for ChatGPT Plus already gets read access to Gmail, Calendar, Drive, SharePoint, Slack, GitHub, and Box. A buyer paying $14 per month for Workspace Business Standard already gets Gemini side-panel access to their email, docs, sheets, and calendar, with a HIPAA BAA on top. A horizontal "chat with your business" wrapper at $30 per month has to justify the delta against vendors whose marginal cost of adding one more connector is close to zero.

The wedges that remain are surface-area tricks. iMessage entry points (Apple does not officially permit business messaging through iMessage; the Beeper precedent is the relevant case). Multi-model routers (Anthropic and OpenAI both ship native model selection now, and the routing logic itself is a thin wrapper over commodity APIs). "Knowledge Hub" branding for what is functionally a vector store on top of the same connectors. None of these are durable against the next ship cycle of the model vendors.

This is the upper bound on horizontal "chat with your business" products. The lower bound is harder.

The compliance wall: ABA Opinion 512

The American Bar Association's Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on July 29, 2024. It is the first national-scope guidance on lawyer use of generative AI, and it operates through the Model Rules of Professional Conduct, which almost every state bar has adopted in some form.

Five Model Rules are invoked. Two of them rule out a horizontal "chat with your business" architecture for legal work.

Rule 1.6 — Confidentiality. A lawyer must understand how a generative AI tool processes client data, including whether prompts and outputs are used for model training, whether they are reviewed by humans, where they are stored, and who has access. The lawyer must obtain informed client consent before disclosing confidential information to a third-party GAI tool. The opinion explicitly states that boilerplate consent language in engagement letters is inadequate. The consent is matter-specific, tool-specific, and risk-specific.

Rule 5.1 / 5.3 — Supervision. The opinion treats GAI output as the work product of a non-lawyer assistant. The supervising attorney is professionally responsible for review and verification before any AI-generated work product is used in client matters or filings.

The other three rules cover competence (1.1), fee transparency (1.5 — efficiency gains from AI cannot be billed as if performed manually, and lawyers may not bill clients for time spent learning a tool unless the client specifically requested it), and candor to the tribunal (3.3 — the Mata v. Avianca rule, no hallucinated citations).

State bars have layered on top. The Florida Bar Opinion 24-1 (January 2024) requires informed client consent before disclosing confidential information to a third-party GAI tool. The State Bar of California's Practical Guidance (November 2023, updated 2024) goes further: a lawyer "must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections." The New York City Bar's Formal Opinion 2024-5 (August 2024) is more permissive on consent but still requires verification and confidentiality safeguards.

A horizontal "chat with your business" product that pipes a document into whatever model its router selects, with a vendor that cannot prove per-tool confidentiality posture to the lawyer's standard, fails the test. Not because the lawyer is unsophisticated or the product is malicious. Because the architecture is generic where the rule demands specificity.

The compliance wall: HIPAA, multiplied across models

HIPAA requires a Business Associate Agreement with every entity that processes Protected Health Information. For AI, that means a BAA with every model provider whose model touches a PHI prompt. The current state of BAA availability across the major model vendors:

  • OpenAI: yes. ChatGPT Enterprise (sales-managed), the API platform via baa@openai.com, and ChatGPT for Healthcare (January 2026, GPT-5.2). Not available on Free, Plus, Team, or self-serve Enterprise.
  • Anthropic: yes. Claude API with a HIPAA-enabled organization configuration, sales-assisted Enterprise plans only. Claude.ai, the consumer chat product, is NOT covered.
  • Microsoft: yes, inherited. Microsoft 365 Copilot inherits the M365 BAA on HIPAA-eligible plans (Business Premium, E3, E5). Consumer Copilot is NOT covered.
  • Google: yes. Gemini for Workspace is listed as included functionality under the Workspace HIPAA BAA effective September 30, 2025 (Help Me Write, smart replies, side panel). Consumer Gemini is NOT covered.
  • Cohere: limited. BAA for custom model development customers only.
  • Meta Llama: no public program. Self-hosting moves the liability to the deployer.
  • Mistral, DeepSeek, xAI Grok, Qwen: no public program. No BAA available to general customers.

Now consider the typical "chat with your business" pitch deck. It usually shows a logo wall of OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, xAI, Cohere, and Qwen, with a "we route your prompt to the best model for the job" router on top. That architecture cannot deliver a coherent BAA chain because there is no BAA available for several of the providers in the basket. Any prompt routed to one of those models, if it contains PHI, is a violation. The clinic, not the wrapper, owns the liability.
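The failure mode is mechanical enough to sketch. A hypothetical router that gates PHI-bearing prompts on an executed-BAA allowlist has to refuse most of the logo wall. The provider names and coverage sets below mirror the list above and are illustrative only, not compliance advice:

```python
# Sketch of a compliance gate in front of a multi-model router.
# The BAA set reflects the providers above that offer one; illustrative only.
COVERED_BY_BAA = {"openai-enterprise", "anthropic-api", "m365-copilot", "gemini-workspace"}
ALL_ROUTED_MODELS = COVERED_BY_BAA | {"llama", "mistral", "deepseek", "grok", "qwen"}

def eligible_models(contains_phi: bool) -> set[str]:
    """Return the models a prompt may legally be routed to."""
    if contains_phi:
        # HIPAA: every entity that processes PHI needs an executed BAA.
        return COVERED_BY_BAA
    return ALL_ROUTED_MODELS

def route(prompt: str, contains_phi: bool, preferred: str) -> str:
    """Refuse any routing choice that would put PHI in front of an uncovered model."""
    if preferred not in eligible_models(contains_phi):
        raise PermissionError(
            f"No BAA covers '{preferred}'; routing this prompt would be a HIPAA violation."
        )
    return preferred
```

A "best model for the job" router without this gate is exactly the architecture described above; with the gate in place, more than half of the advertised model basket is unreachable for clinical work.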

The serious healthcare AI products solve this by picking a single covered provider and building deep on top. Abridge runs on a covered foundation and integrates inside the EHR. ChatGPT for Healthcare runs on GPT-5.2 with explicit BAA coverage and is rolling out at AdventHealth, Baylor Scott & White, Boston Children's, Cedars-Sinai, HCA, Memorial Sloan Kettering, Stanford Medicine Children's, and UCSF. Microsoft 365 Copilot inherits its BAA from the underlying tenant. None of these products try to be a horizontal "one assistant for any business that exists." They are vertical, by design, because the regulation requires them to be.

Where the venture capital actually went

If horizontal "chat with your business" were the winning architecture, the funding would follow. It has not. The 2025–2026 round sheet for AI in regulated industries reads as follows.

  • Harvey AI (legal): Series G of $200M at an $11B valuation in March 2026, co-led by GIC and Sequoia. Total raised approximately $1.22B across ten rounds. Customer base: over 100,000 lawyers across 1,300 organizations.
  • Abridge (clinical documentation): Series E of $300M at $5.3B in June 2025, plus a $316M extension in April 2026. Deployed across 250+ health systems including Kaiser (24,600 physicians), Mayo, Johns Hopkins, Duke, UPMC, Yale New Haven. First "Pal" in Epic's Partners and Pals program — the deepest possible Epic integration tier.
  • Hippocratic AI (healthcare voice agents): Series C of $126M at $3.5B in November 2025. Deployed at 50+ health systems, payors, and pharma in six countries. Over 115M patient interactions.
  • EvenUp (personal injury legal): Series E of $150M at over $2B in October 2025, led by Bessemer with LexisNexis participation. Deployed at 2,000+ firms, including 20% of the Top 100 PI firms; over 200,000 cases resolved, $10B+ in damages handled.

Bessemer's State of AI 2025 reported that vertical AI companies founded after 2019 are reaching 80% of traditional SaaS contract values while growing 400% year over year. Sequoia's "Services: The New Software" essay (Julien Bek, March 2026) framed the thesis directly: "The next $1T company will be a software company masquerading as a services firm." A copilot sells the tool. An autopilot sells the work. The autopilot opportunities Sequoia called out include legal transactional work, healthcare revenue cycle, accounting, claims adjusting, and tax advisory — all vertical, all regulated, all wrong-shaped for a horizontal "chat with your business" wrapper.

Even Glean — the strongest horizontal enterprise example — reached $200M ARR and a $7.2B valuation in 2025 by selling to the Fortune 500, not to small businesses. The headline of Futurum's coverage was literally "Can Its Knowledge Graph Beat Copilot?" The horizontal enterprise lane is direct competition with Microsoft. The horizontal SMB lane is direct competition with ChatGPT Plus. There is no middle.

The right primitive: open connectors, vertical depth

The Model Context Protocol, which Anthropic introduced in November 2024 and donated to the Linux Foundation's Agentic AI Foundation in December 2025, is now co-stewarded by Anthropic, Block, and OpenAI. ChatGPT's Apps SDK is built on it. Cross-LLM consumption is confirmed across Claude, ChatGPT, Gemini, Cursor, VS Code Copilot Chat, Goose, LM Studio, and Ollama.

What this means in practice: the connector layer is now an open standard. A vertical specialist who builds a connector to Clio, MyCase, Epic, or Athena ships it once and it works against any compliant LLM client. Horizontal products no longer own the integration surface. The economics of "we built the Gmail and Slack connectors in-house" disappear when those connectors ship from the model vendor for free.
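The "ship once, run anywhere" property comes from the protocol shape. MCP is JSON-RPC 2.0: a client asks a server which tools it exposes (tools/list), then invokes one by name (tools/call). A stripped-down sketch of the dispatch, with a hypothetical find_matter tool standing in for a real Clio lookup (a real server would use an MCP SDK and handle initialization, schemas, and transport):

```python
# Hypothetical vertical tool: look up a matter in a practice management system.
def find_matter(client_name: str) -> dict:
    # A real connector would call the practice management API here.
    return {"matter_id": "2026-0042", "client": client_name, "status": "open"}

TOOLS = {"find_matter": find_matter}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request the way an MCP server does."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif method == "tools/call":
        params = request["params"]
        result = TOOLS[params["name"]](**params["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}
```

Because every compliant client speaks the same two methods, the vertical knowledge lives in the server once, and Claude, ChatGPT, Cursor, or a local model can all call it.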

What remains valuable is what cannot be commoditized: knowing the schema and the workflow of a regulated vertical.

  • A conflict check (ABA Model Rules 1.7, 1.9, 1.10) requires structured matter intake — parties, affiliates, adverse parties, related entities, witnesses, lenders, insurers — reconciled against the firm's complete history. A generic "chat with your business" wrapper has no canonical schema for "matter," "party role," or "adverse vs. represented." It cannot perform this check. Clio knows what a matter is. A connector built against Clio knows what a matter is.
  • Trust accounting (IOLTA reconciliation) requires three-way matching against bank statement, client ledger, and trust account ledger, with per-client sub-accounting. California requires annual self-certification. A horizontal product that can summarize a CSV cannot do this. A connector that operates inside the practice management system can.
  • Prior authorization in healthcare requires payer-specific medical necessity criteria, CPT/ICD-10 mapping, and resubmission workflows. Each payer's rules are proprietary and change frequently. The vertical AI product that gets paid for this is the one that lives inside the EHR with the payer rules current.
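To make the trust accounting bullet concrete, here is what three-way matching means in code. A minimal sketch, assuming one balance per source; real IOLTA reconciliation also handles timing differences, uncleared checks, and bank fees:

```python
def reconcile(bank_balance: float, client_ledger: dict[str, float],
              trust_ledger_balance: float) -> list[str]:
    """Three-way IOLTA check: bank statement vs. per-client ledgers vs. trust ledger.

    client_ledger maps each client to the balance held for them. The three
    totals must agree to the cent, and no client sub-account may be negative
    (that would mean one client's funds paid another client's costs).
    """
    problems = []
    client_total = round(sum(client_ledger.values()), 2)
    if round(bank_balance, 2) != client_total:
        problems.append(f"bank {bank_balance} != client ledgers {client_total}")
    if round(trust_ledger_balance, 2) != client_total:
        problems.append(f"trust ledger {trust_ledger_balance} != client ledgers {client_total}")
    for client, balance in client_ledger.items():
        if balance < 0:
            problems.append(f"negative sub-account for {client}: {balance}")
    return problems
```

The point is not the arithmetic; it is that client_ledger requires per-client sub-accounting that only exists inside the practice management system, which is exactly the data a CSV-summarizing wrapper never sees.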

A horizontal product trying to do these is in the wrong shape. A vertical connector with deep integration is in the right shape. MCP is the protocol that lets one vertical specialist ship the connector to all the LLM clients at once.

The practical first step

You do not need to wait for a $30/month wrapper to start using AI inside your business. The path is incremental and reversible.

  1. Install Claude Desktop (or any MCP-capable client). It is free and ships with most of the connectors a small business actually needs.
  2. Add one connector for a tool you already use — Google Drive, Slack, Notion, GitHub. Ask questions. See whether it is useful before adding the next one.
  3. For regulated work, verify compliance posture before any sensitive data flows. For legal, this means understanding the model provider's data handling and obtaining the appropriate matter-specific client consent under ABA Opinion 512. For healthcare, it means confirming an executed BAA covering the model and the connector before any PHI flows.
  4. Add a vertical connector when you need one. For law firms on Clio, our open-source Clio MCP connector is on npm at @oktopeak/clio-mcp and in the official MCP Registry as io.github.oktopeak/clio-mcp. MyCase is next. Both ship with audit logging compatible with ABA Opinion 512 supervision requirements.
  5. For the most sensitive work, run the model locally. The same MCP connectors work against a local model in LM Studio or Ollama, with no third-party processor in the data path. We covered the deployment in The Privilege Stack.
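Steps 4 and 5 are a config change, not a project. In Claude Desktop, an MCP server is registered in claude_desktop_config.json; a sketch assuming the connector supports npx invocation and takes its credential from an environment variable (the CLIO_API_TOKEN key name is illustrative — check the connector's README for the actual variables it reads):

```json
{
  "mcpServers": {
    "clio": {
      "command": "npx",
      "args": ["-y", "@oktopeak/clio-mcp"],
      "env": { "CLIO_API_TOKEN": "…" }
    }
  }
}
```

MCP-capable local clients use their own equivalents of this registration, which is why the same connector carries over to step 5 unchanged.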

None of this requires a single vendor. None of it depends on Apple's iMessage policies not changing. None of it depends on a horizontal wrapper surviving its next funding cycle. It works at the Claude Desktop layer today and at the local-model layer tomorrow.

The take

Horizontal "chat with your business" products are competing in a lane where the model vendors give the same thing away for free, the regulated industries are structurally locked out by ABA Opinion 512 and HIPAA, and the venture capital has voted with $11B and $5.3B and $3.5B and $2B for vertical depth instead. That is not a product cycle. It is an architectural floor.

The bet for the next decade is not which horizontal assistant wins. It is which vertical, with which workflow, on which compliance-ready connector. The work is in knowing what a matter is, what a prior authorization is, what an IOLTA reconciliation is. The work is not in routing a message.

Building the vertical AI integration for your firm or product?

We ship MCP connectors, audit-logged AI workflows, and vertical integrations for legal and healthcare SaaS. 30 minutes with a co-founder. No pitch.

See Our Legal AI Integration Service →

Sources verified May 6, 2026: ABA news release on Formal Opinion 512, Florida Bar Opinion 24-1, California State Bar Practical Guidance on Generative AI, NYC Bar Formal Opinion 2024-5, OpenAI BAA documentation, Anthropic BAA documentation, Google Workspace HIPAA functionality, Microsoft Copilot HIPAA coverage, CNBC on Harvey AI Series G, Abridge on Epic Pal status, Hippocratic AI Series C, EvenUp Series E, Sequoia "Services: The New Software", Bessemer State of AI 2025, OpenAI Apps documentation, Anthropic Connectors directory, MCP specification, Anthropic on MCP donation to AAIF. Funding figures and customer counts are vendor-reported as of the announcement dates linked above and may have changed.
