The three privacy questions every regulated firm asks
A partner at a LATAM wealth management firm asked me three privacy questions in a row on our first call. "Cuando vos cargás la información, esa información es pública. Si yo pago un acceso a ChatGPT, no me podés sacar toda la información en 2 minutos. ¿Qué pasa si se filtra un reporte?" Translation: when you upload information, is that information public? If I pay for ChatGPT access, can someone pull all the information in two minutes? What happens if a report leaks?
His central bank auditors ask the same questions: which systems, which servers, where the data lives. A partner at a boutique Brickell law firm asked the lawyer's version: "What about client privacy, what about attorney-client privilege, what happens if the AI memorizes my matter notes?" Different regulator, same gate, raised on the first call.
Unanswered, these three questions kill the deal every time. Here is the answer I give them.
What Anthropic actually does with your data
On a Claude Team or Enterprise plan, Anthropic does not use your inputs or outputs to train the model. That is contractual, not aspirational. It's in the data processing addendum that ships with both plans, and your compliance team can review the language before signing.
Your files live inside your workspace, accessible only to the seats you assign. Anthropic holds a SOC 2 Type II attestation, meaning an independent auditor has reviewed Anthropic's controls over a multi-month observation window. The Enterprise plan adds single sign-on, audit logs, and admin controls a CISO can review and approve.
For SEC-registered investment advisers and wealth firms, this configuration is consistent with Regulation S-P customer-privacy requirements. For LATAM funds reporting under IFRS to central-bank regulators, the data-residency and audit-log surface is documentable. For US law firms, attorney-client privilege is not weakened, because your files do not leave your workspace and are not used to train the model.
What I do on top of that
The plan-level guarantees are necessary, not sufficient. Three things happen on top.
First, configure the Claude workspace before any client data touches it
Seats, sharing rules, allowed external integrations, retention windows, and the CLAUDE.md operating file that defines what the AI can and cannot do with sensitive material. This usually takes a half-day session with your IT lead and a compliance reviewer.
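To make that concrete, here is a minimal sketch of what the session produces. Every value below is illustrative, not a template from a real engagement; seat counts, retention windows, and integration policy come out of your compliance review.

```markdown
## Workspace configuration (illustrative values only)
- Seats: 12 named individuals, no shared logins
- Sharing: workspace-internal only; external link sharing disabled
- Integrations: none enabled at launch; additions require compliance sign-off
- Retention: shortest window the plan and your record-keeping rules allow
- Operating file: CLAUDE.md at the workspace root, reviewed quarterly
```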
Second, write a data-path document
For a wealth firm, this means specifying where each input lives, who can access it, what the AI is allowed to read, and what gets logged. For a law firm, it means mapping which case folders flow into Claude, which stay air-gapped, and how matter-specific confidentiality is preserved across prompts. Your compliance officer reviews and signs the same document your auditor will see.
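As a sketch of the format, here are a few rows a wealth-firm version might contain. The systems and permissions are placeholders, not any client's actual environment:

```markdown
| Input                | Lives in         | Who can access  | AI may read | Logged |
|----------------------|------------------|-----------------|-------------|--------|
| Client statements    | Portfolio system | Advisers, ops   | No          | Yes    |
| Anonymized holdings  | Claude workspace | Assigned seats  | Yes         | Yes    |
| Draft client reports | Claude workspace | Assigned seats  | Yes         | Yes    |
| KYC files            | Compliance vault | Compliance only | No          | n/a    |
```

The "AI may read" column is the heart of the document; every other column exists to justify that answer.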
Third, set the confidentiality posture inside the operating file
Never expose specific dollar figures externally, never name specific clients in machine-readable contexts, never publish material flagged confidential. The CLAUDE.md becomes a guardrail Claude reads on every interaction.
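Inside the operating file, those rules can read as plainly as this. The wording below is a hedged sketch, not the exact text I ship:

```markdown
## Confidentiality rules (read before every response)
- Never state specific dollar figures in any external-facing output.
- Never name clients in machine-readable contexts (tables, CSV, JSON).
- Never reproduce material flagged confidential in its source folder.
- If a request conflicts with these rules, stop and flag the conflict
  instead of answering around it.
```

Keeping the rules in the operating file rather than in individual prompts is what makes them hold on every interaction, not just the ones someone remembers to guard.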
A packaging manufacturer's confidentiality list
A packaging manufacturer operating in four countries hired me last year. The CEO keeps a confidential list inside the CLAUDE.md operating file: items that must never appear in any proposal, email, or public output the AI generates. Margin tables. Sensitive supplier and client identities. The AI reads the list before responding to any request and honors it on every interaction. The guardrail is not external; it sits inside the operating file the model reads first.
What about ChatGPT
Buyers often arrive having already tried ChatGPT and gotten nervous. The free and Plus tiers of ChatGPT do, by default, retain conversations and may use them for training. The Team and Enterprise tiers do not, and ChatGPT Enterprise also carries a SOC 2 Type II attestation. The same logic applies: the plan-level guarantee matters, and the workspace configuration matters more.
I work with Claude because of how the operating file pattern interacts with the model, not because Claude is "more private." Either Anthropic Enterprise or OpenAI Enterprise can be configured to meet the same compliance bar. The difference is the Skill plus CLAUDE.md system, which compounds over time on Claude in ways that are harder to replicate elsewhere.
The deliverables, layered
Pilot engagements always include the data-path document as a deliverable in week one, before any production data is touched. If your compliance team rejects the configuration, the pilot stops and you pay nothing for what didn't ship.
The plan-level guarantees and the data-path document handle the perimeter. The CLAUDE.md operating file handles what happens inside it. Together they give the compliance officer something to audit at every layer. The configuration sits alongside the diagnostic that maps your data, so the audit and the compliance posture ship as one engagement.
If your compliance team needs to see the data-path document before any pilot starts, send me a description of your environment and I will draft it before the first call. No NDA is required for the draft itself; the document maps how AI would touch your data, not what your data is.