METHODOLOGY

Will my data be private? Will the AI train on my information?

The compliance question every CFO, GC, and CCO asks before saying yes. Short answer: on a Claude Team or Enterprise plan, your data does not train the model, your files stay inside your workspace, and Anthropic is SOC 2 Type II certified.

On a Claude Team or Enterprise plan, your data is not used to train the model, your files stay inside your workspace, and Anthropic holds SOC 2 Type II certification. For SEC, IFRS, central bank, and attorney-client privilege contexts, this configuration is audit-ready. I document the data path so your compliance team can sign off.

The three privacy questions every regulated firm asks

A partner at a LATAM wealth management firm asked me three privacy questions in a row on our first call. "¿Cuando vos cargás la información, esa información es pública? Si yo pago un acceso a ChatGPT, ¿no me podés sacar toda la información en 2 minutos? ¿Qué pasa si se filtra un reporte?" Translation: when you upload information, is that information public? If I pay for ChatGPT access, can't you pull all the information out in two minutes? What happens if a report leaks?

His central bank auditors ask the same questions: which systems, which servers, where the data lives. A Brickell boutique law firm partner asked the lawyer version: "What about client privacy? What about attorney-client privilege? What happens if the AI memorizes my matter notes?" Every law firm protecting privilege raises the same gate on the first call.

Unanswered, these three questions kill the deal every time. Here is the answer I give.

PLAN-LEVEL GUARANTEES

What Anthropic actually does with your data

On a Claude Team or Enterprise plan, Anthropic does not use your inputs or outputs to train the model. That is contractual, not aspirational. It's in the data processing addendum that ships with both plans, and your compliance team can review the language before signing.

Your files live inside your workspace, accessible only to the seats you assign. Anthropic holds SOC 2 Type II certification, which means an independent auditor has reviewed Anthropic's controls over a multi-month observation window. The Enterprise plan adds single sign-on, audit logs, and admin controls a CISO can review and approve.

For SEC-registered investment advisers and wealth firms, this configuration is consistent with Regulation S-P customer-privacy requirements. For LATAM funds reporting under IFRS to central bank regulators, the data residency and audit-log surface is documentable. For US law firms, attorney-client privilege is not weakened, because your files do not leave your workspace and are not used to train the model.

CONFIGURATION ON TOP

What I do on top of that

The plan-level guarantees are necessary, not sufficient. Three things happen on top.

First, configure the Claude workspace before any client data touches it

Seats, sharing rules, allowed external integrations, retention windows, and the CLAUDE.md operating file that defines what the AI can and cannot do with sensitive material. This usually takes a half-day session with your IT lead and a compliance reviewer.
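As a sketch only (the section names below are my own convention, not an Anthropic schema), a workspace operating file can open like this:

```markdown
# CLAUDE.md — workspace operating rules (illustrative skeleton)

## Access
- Seats: named users only; no shared logins.
- Sharing: project folders are private by default; external sharing disabled.
- Integrations: none enabled without compliance sign-off.

## Data handling
- Read only from folders explicitly shared into this workspace.
- Retention: follow the window the workspace admin has set.
- Never write client data to any external destination.
```

The half-day session fills in the specifics; the skeleton is just the shape the compliance reviewer signs off on.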

Second, write a data-path document

For a wealth firm, this means specifying where each input lives, who can access it, what the AI is allowed to read, and what gets logged. For a law firm, it means mapping which case folders flow into Claude, which stay air-gapped, and how matter-specific confidentiality is preserved across prompts. Your compliance officer reviews and signs the same document your auditor will see.
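The simplest form of that document is a table. A hypothetical excerpt (system names invented for illustration):

```markdown
| Input             | Lives in                   | Who can access     | AI may read    | Logged       |
|-------------------|----------------------------|--------------------|----------------|--------------|
| Client statements | Document vault             | Ops seats          | Summaries only | Every access |
| Matter notes      | Workspace folder, per case | Assigned attorneys | Yes            | Every access |
| Margin tables     | Finance drive (air-gapped) | CFO only           | No             | n/a          |
```

One row per input, and the auditor can trace every path.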

Third, set the confidentiality posture inside the operating file

Never expose specific dollar figures externally, never name specific clients in machine-readable contexts, never publish material flagged confidential. The CLAUDE.md becomes a guardrail Claude reads on every interaction.
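Written into the operating file, those rules might read like this (wording illustrative, not a template Anthropic ships):

```markdown
## Confidentiality rules — apply to every response
- Never state specific dollar figures in external-facing drafts.
- Never name clients in machine-readable output (CSV, JSON, API payloads).
- Never reproduce material from documents marked CONFIDENTIAL.
- When in doubt, redact and flag for human review.
```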

ONE REAL EXAMPLE

A packaging manufacturer's confidentiality list

A packaging manufacturer operating in 4 countries hired me last year. The CEO keeps a confidential list inside the CLAUDE.md operating file: items that must never appear in any proposal, email, or public output the AI generates. Margin tables. Sensitive supplier and client identities. The AI reads the list before responding to any request and respects it on every interaction. The guardrail is not external. It sits inside the operating file the model reads first.

WHY CLAUDE

What about ChatGPT

Buyers often arrive having already tried ChatGPT and gotten nervous. The free and Plus tiers of ChatGPT do, by default, retain conversations and may use them for training. The Team and Enterprise tiers do not, and ChatGPT's Enterprise tier also holds SOC 2 Type II. The same logic applies: the plan-level guarantee matters, the workspace configuration matters more.

I work with Claude because of how the operating file pattern interacts with the model, not because Claude is "more private." Either Anthropic Enterprise or OpenAI Enterprise can be configured to meet the same compliance bar. The difference is the Skill plus CLAUDE.md system, which compounds over time on Claude in ways that are harder to replicate elsewhere.

WHAT YOU SIGN UP FOR

The deliverables, ranked

Pilot engagements always include the data-path document as a deliverable in week one, before any production data is touched. If your compliance team rejects the configuration, the pilot stops and you pay nothing for what didn't ship.

The plan-level guarantees and the data-path document handle the perimeter. The CLAUDE.md operating file handles what happens inside it. Together they give the compliance officer something to audit at every layer. The configuration sits alongside the diagnostic that maps your data, so the audit and the compliance posture ship as one engagement.

If your compliance team needs to see the data-path document before any pilot starts, send me your environment and I will draft it before the first call. No NDA required for the draft itself; the document just maps how AI would touch your data, not what your data is.

Compliance Questions

Frequently Asked Questions

Will Claude train on my data?

No. On a Claude Team or Enterprise plan, Anthropic contractually does not use your inputs or outputs to train the model. The same applies to ChatGPT Team and Enterprise tiers. Free and Plus tiers may retain data for training; the business plans do not.

Does using Claude weaken attorney-client privilege?

Not on the right plan. Your files stay inside your workspace, are not used for training, and only seats you assign can access them. The configuration is consistent with privilege as long as your firm controls the seat list.

Can an SEC-registered firm use this and stay compliant?

Yes, when configured correctly. Anthropic holds SOC 2 Type II certification, the Enterprise plan provides audit logs and SSO, and the data-path document I provide maps how customer information flows. This is consistent with Regulation S-P customer-privacy requirements.

What happens if a report leaks?

Leaks happen at the human or integration layer, not the model. The CLAUDE.md operating file sets confidentiality rules, the workspace permissions limit who can access what, and audit logs show every access event. Standard incident response procedures apply.

What is the difference between the free, Team, and Enterprise plans?

The free tier may use conversations for training. The Team plan does not, adds multi-seat workspaces, and is covered by Anthropic's SOC 2 Type II certification. Enterprise adds SSO, audit logs, custom retention, admin controls, and longer context windows. Compliance-regulated firms need at least Team.

Can you document which systems and servers hold our data?

Yes. The data-path document I write describes which systems hold the data, where each input flows, who can access it, and what gets logged. This is the same document your compliance officer signs and your auditor reviews. Available in English and Spanish for LATAM regulators.

Can another seat, or another customer, see my conversations?

No. On Team and Enterprise plans, conversations are scoped to the seat that started them; other seats do not see them. Conversations are not used to train the shared model, so cross-customer memorization is structurally not possible. The model has no recall between sessions.

What happens to documents I upload?

Documents uploaded to a Claude Team or Enterprise workspace stay in that workspace. They are not sent to other Anthropic customers, not used for training, and are accessible only to the seats with explicit permissions. The CLAUDE.md operating file adds confidentiality rules on top.

How long does compliance sign-off take?

Typically 5 to 15 business days from when I deliver the data-path document. Faster if your compliance team has already reviewed Anthropic's SOC 2 attestation; slower if it is a regulated industry's first-of-its-kind review. I help your team answer auditor questions during this window.

What if our regulator has not issued AI guidance yet?

That describes most regulators in 2026. The configuration I deploy maps to existing data-protection and outsourcing rules your compliance team already knows, and the data-path document explains how AI touches each control. Your regulator's eventual guidance will likely formalize what this configuration already covers.
