
How to handle employees using ChatGPT without approval (without banning it)

Your team is already using ChatGPT with company data. Banning it pushes the problem underground. The fix is a 5-step playbook: run a 30-day audit, write a one-page acceptable use policy, stand up a sanctioned private AI layer, train the team on data classification, then measure and iterate. Governance beats prohibition.

Ignacio Lopez·Fractional Head of AI, Work-Smart.ai·Coconut Grove, Miami
Published April 8, 2026

Your team is already using it. You just cannot see it.

A CFO told me last month that he found out his controller had been pasting client portfolio data into ChatGPT for six months. He found out because the controller mentioned it in a stand-up, casually, the way you mention a tool you use every day. There was no policy. There was no approved alternative. There was no list of what counts as sensitive. The controller was not doing anything wrong by her own logic. She was trying to get her job done.

This is the pattern in every mid-market company I walk into. Employees are already using ChatGPT, Claude, Gemini, Perplexity, and a long tail of free tools the IT person has never heard of. They paste contracts into prompts. They draft client emails. They summarize financials. The CEO finds out when something leaks, when a board member asks about AI governance, or when somebody mentions it at a stand-up. By that point the data has been in a public model for months and there is no way to claw it back.

The reality is simple. Your team is not waiting for a policy. They are using the tools that help them ship work, and they are improvising because nobody gave them something better. The condition has a name: shadow AI. The consequence is that you have no visibility and no governance. What to do about it is what the rest of this guide is for.

Why banning ChatGPT does not work

The first instinct of most leadership teams is to block the domain. It feels decisive. It satisfies the board question. It also moves the problem to where you cannot see it. The team switches to personal phones, personal Gmail accounts, and tools that do not show up in any audit log. The data still gets pasted. You just stopped knowing about it.

Banning also makes IT the enemy. The team that was trying to get more done with AI now sees the company as the obstacle. That is the fastest way to lose the cultural permission to govern anything else. You will need that permission later, when you roll out the sanctioned tools and the policy. Burning it on a firewall rule is expensive.

The third cost is the productivity gain you walk away from. The employees who were using ChatGPT to draft proposals, summarize meetings, or clean up spreadsheets were getting real benefit out of it. Banning the tool without offering a replacement makes the company slower at exactly the moment competitors are getting faster. Governance is not the opposite of adoption. It is the precondition for it.

Step 1: Run a 30-day shadow AI audit

The audit is not a witch hunt. It is a snapshot. You are answering four questions. What AI tools is the team using right now. What kind of work are they using them for. What data have they been pasting in. And where are the gaps that pushed them to those tools in the first place. You announce the audit before you start, you tell the team it is not disciplinary, and you make clear the goal is to give them better tools, not to take tools away.

The mechanics are straightforward. Pull network logs for the most common AI domains over the last 90 days. Run a short anonymous survey, ten questions, asking what tools people use and what they use them for. Then sit down for 30-minute conversations with one person from each major function. Sales, finance, operations, marketing, customer success. The conversations surface more than the logs do. People will tell you exactly what they paste and why, once they understand they are not in trouble.
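If your firewall or DNS filter can export traffic logs, the first pass of this map does not need a vendor. The sketch below is a minimal example rather than a product: it assumes a CSV export with user and domain columns, and the domain list and file name are placeholders to adapt to whatever your network gear actually produces.

# shadow_ai_scan.py -- first pass over a proxy or DNS log export for AI tool traffic.
# Assumes a CSV with columns: timestamp, user, domain. Adjust to your export format.
import csv
from collections import Counter

# Domains worth flagging; extend the list with whatever the survey surfaces.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com",
    "perplexity.ai", "poe.com",
}

def scan(log_path: str) -> Counter:
    """Count hits per (user, AI domain) pair over the export window."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in scan("proxy_export.csv").most_common(20):
        print(f"{user:<24} {domain:<24} {count:>5}")

The output is the raw material for the usage map, not the map itself. The survey and the conversations tell you what the hits actually mean.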

What you are looking for is a usage map. Which tools, which teams, which tasks, which data classes. You will be surprised by two things. The first is how much usage there already is. The second is how often the same task shows up across functions, which tells you where the sanctioned tooling needs to land first.

Deliverable

A one-page shadow AI usage map. Tools in use, teams using them, tasks they are used for, and the three highest-risk data exposures ranked by likelihood and impact.

Step 2: Draft a one-page acceptable use policy

The policy lives or dies on whether people read it. Anything longer than one page does not get read. The structure is the same in every engagement. Three data classes in plain language. The list of approved tools and which data class each one is approved for. A short list of prohibited inputs with concrete examples. The reporting path when something goes wrong. The review cadence.

The data classes are the load-bearing element. Public means the information is already on your website or in a press release, anyone can see it, any AI tool is fine. Internal means it is not secret but it is not for the public, approved tools only. Restricted means client data, financial data, employee data, contracts, anything regulated. Restricted never goes into a general purpose AI tool unless you have a signed data processing agreement and the right enterprise plan. Three classes is enough. Five is too many.
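One way to keep the classes enforceable rather than aspirational is to write the class-to-tool mapping down as data that both training and tooling can reference. The snippet below is an illustration of that idea with hypothetical tool names, not a prescribed implementation.

# policy_map.py -- the three data classes and the tools approved for each.
# Tool names are placeholders; swap in whatever your sanctioned stack actually is.
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"          # already on the website or in a press release
    INTERNAL = "internal"      # not secret, but not for the public
    RESTRICTED = "restricted"  # client, financial, employee, or regulated data

APPROVED_TOOLS = {
    DataClass.PUBLIC: {"chatgpt-team", "claude-for-work", "personal-account"},
    DataClass.INTERNAL: {"chatgpt-team", "claude-for-work"},
    DataClass.RESTRICTED: set(),  # only with a signed DPA, reviewed case by case
}

def is_allowed(tool: str, data_class: DataClass) -> bool:
    """True if the policy permits this data class in this tool."""
    return tool in APPROVED_TOOLS[data_class]

# A client contract pasted into a personal account fails the check.
assert not is_allowed("personal-account", DataClass.RESTRICTED)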

The reporting path matters more than people realize. If somebody pastes something they should not have, you want them to tell you within an hour, not hide it for a quarter. That only happens if the policy says exactly who to tell, in which channel, with no consequences for honest mistakes. Write the path. Name the person. Make it boring to use.

Deliverable

A one-page acceptable use policy covering data classes, approved tools, prohibited inputs, reporting path, and review cadence. Signed by the CEO and circulated company-wide.

Step 3: Stand up a sanctioned private AI layer

Policy without an alternative is theater. If you tell the team they cannot paste contracts into ChatGPT and you do not give them a place to do that work, they will paste the contracts into ChatGPT. The sanctioned private AI layer is the thing that makes the policy credible. It needs to be at least as good as the tool the team is already using, with an admin console, audit logs, and a contract that says your data is not used for training.

For most companies under 200 employees, the right starting point is one of the enterprise plans from a major provider. ChatGPT Team and Claude for Work both work. They are paid per seat, your data stays out of training, and you get visibility into who is using what. Self-hosted private models, with retrieval over your own documents, are a level up. They make sense when you have a regulatory constraint that an enterprise SaaS contract cannot solve, or when you want the AI to actually know your business. Start simpler. Add the private model when the simpler option runs out.

The tradeoff is real. ChatGPT Team is fast to deploy, costs roughly 25 to 30 dollars per user per month, and gives you the same model the team is already using. A private layer built on top of your documents takes longer to stand up, costs more, and pays back when the team can ask questions about your contracts, your policies, your data, and get answers grounded in your actual business. Most companies need both eventually. Start with the one that ships in two weeks.
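The per-seat math is worth doing before the contract conversation. A two-line estimate is enough; the seat count below is an example, and the price range is the list pricing mentioned above.

# Rough monthly cost of the enterprise-SaaS starting point (list prices, example headcount).
seats = 50
low, high = 25, 30  # dollars per user per month, roughly
print(f"${seats * low:,} to ${seats * high:,} per month")  # $1,250 to $1,500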

Deliverable

A sanctioned AI tooling decision, signed contract, and rollout plan with a named admin, seat allocation by function, and a 30-day adoption target.

Step 4: Train the team on prompt hygiene and data classification

The training is not a course. It is a 60-minute workshop, run live, repeated for each functional team. The format is the same every time. Fifteen minutes on the policy and the three data classes, with examples drawn from the audit. Twenty minutes on prompt hygiene, how to write a useful prompt, how to redact before pasting, how to spot a hallucination. Twenty minutes hands-on, where the team uses the sanctioned tool on a real task they brought to the workshop. Five minutes for questions and the reporting path.
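The redaction habit is easier to teach with a concrete pre-paste pass the team can watch working. The sketch below is a training aid, not a DLP product: the three patterns are examples for emails, US-format Social Security numbers, and dollar amounts, and any real pass needs your own client and account formats added.

# redact.py -- teaching aid for the "redact before you paste" habit.
# The patterns are illustrative; extend them with your own identifiers.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AMOUNT": re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),
}

def redact(text: str) -> str:
    """Replace sensitive-looking spans with labeled placeholders before pasting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Client jane.doe@example.com holds $1,250,000 under SSN 123-45-6789."))
# -> Client [EMAIL] holds [AMOUNT] under SSN [SSN].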

The hands-on portion is what makes it stick. Reading a policy does not change behavior. Watching a colleague get a useful answer out of the sanctioned tool, on a task they were about to do anyway, does. Run the workshop once per function. Record it for new hires. Refresh it every six months as the tools evolve.

The other thing the workshop does is surface the workflows that are still painful. People will raise their hands and describe a task the sanctioned tool cannot do well. Write those down. They become the next round of automation work. Rollouts that ignore the underlying workflow gaps end up rebuilding the shadow AI they were meant to replace.

Deliverable

A 60-minute workshop deck and recording, run once per function, with a list of unresolved workflow gaps that feed the next automation cycle.

Step 5: Measure, publish, iterate

The policy is a living document. The metrics are how you know whether it is working. Five numbers are enough. Sanctioned tool adoption, measured as active users per month divided by total seats. Shadow tool usage, measured as traffic to non-approved AI domains over time. Reported incidents, measured as the count per quarter. Workshop completion, measured as the percent of headcount who have been through the training. Time-to-resolution for incidents, measured in days from report to closeout.
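None of the five numbers needs a BI tool on day one; a spreadsheet column per metric or a few lines of code covers the first version. The sketch below shows the calculations with made-up placeholder inputs, nothing more.

# governance_dashboard.py -- the five monthly numbers, computed from placeholder inputs.
from dataclasses import dataclass

@dataclass
class MonthlyInputs:
    active_users: int           # people who used the sanctioned tool this month
    total_seats: int            # seats purchased
    shadow_domain_hits: int     # requests to non-approved AI domains, from the logs
    incidents_reported: int     # policy incidents reported this quarter to date
    trained_headcount: int      # people who have completed the workshop
    total_headcount: int
    avg_days_to_resolve: float  # mean days from incident report to closeout

def dashboard(m: MonthlyInputs) -> dict:
    return {
        "sanctioned_adoption": round(m.active_users / m.total_seats, 2),
        "shadow_domain_hits": m.shadow_domain_hits,
        "incidents_this_quarter": m.incidents_reported,
        "workshop_completion": round(m.trained_headcount / m.total_headcount, 2),
        "days_to_resolution": m.avg_days_to_resolve,
    }

# Example month with made-up numbers.
print(dashboard(MonthlyInputs(38, 50, 120, 1, 41, 55, 2.0)))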

Publish the numbers. Not in a slide deck nobody reads, but on a one-page dashboard the leadership team sees every month. The act of publishing is what creates the feedback loop. When sanctioned adoption climbs and shadow usage drops, you know the policy is landing. When sanctioned adoption stalls, the tool is wrong or the workflow is wrong and you fix it. When incidents spike, the training is not landing and you re-run the workshop.

The dashboard belongs in the same place as the rest of your operational metrics. If you have a command center, it goes there. If you do not, this is a good reason to build one. The point is that AI governance is not a one-time project. It is a layer of the operation that needs the same visibility as cash, pipeline, and headcount.

Deliverable

A monthly AI governance dashboard with five metrics, surfaced in the same place as the rest of the operational reporting, and a quarterly review on the calendar.

What this looked like at a $14B wealth advisory firm

A $14B wealth advisory firm engaged me as Fractional Head of AI. The presenting problem was institutional knowledge, twelve years of context locked in Outlook and file servers. The hidden problem, which the audit surfaced inside the first month, was that advisors were already pasting client information into ChatGPT to draft summaries and generate meeting prep. Nobody had told them not to. Nobody had given them a place to do that work safely.

We ran the audit on the schedule above. We wrote a one-page policy with three data classes, made restricted client data the bright line, and stood up a sanctioned enterprise tool with admin controls and audit logs. The training ran across the major functions. The workshops surfaced more pain than the audit had, especially around how long it took advisors to find historical context on a client before a meeting. That became the next workstream.

The result was not dramatic in the way a marketing case study wants it to be. It was structural. The firm went from no visibility and no policy to a published policy, a sanctioned tool the team actually used, and a monthly dashboard the leadership team reviewed. The shadow usage did not vanish overnight. It declined steadily as the sanctioned tool got better and the workshops landed. That is what governance looks like when it works. Read the full case study.

What this looks like when it is done

You have a one-page policy your team can recite. You have a sanctioned tool with adoption climbing toward the rest of the headcount. You have a monthly dashboard on the leadership team's screen. When somebody pastes something they should not have, they tell you within a day and you close it out without drama. The board question about AI governance has a one-paragraph answer with numbers in it. The work is not finished. It is operational. That is the difference.

If you want to see where your shadow AI risk sits today, the free assessment covers it as part of the governance layer. If you want to talk through what a 30-day audit would look like inside your operation, the Fractional Head of AI engagement is built around exactly this kind of work. You can also read the underlying framework in the AI Operating System, see how voice and brand fit into the same layer in the Voice DNA service, or browse the rest of the resources. Background on how I work is on the about page.

Common Questions


Can we just block ChatGPT on the network?

You can, and most companies that try it reverse the decision within a quarter. Blocking the domain pushes employees to personal devices, personal accounts, and tools you have never heard of. You lose the visibility you had and gain nothing in security. The better move is to give the team a sanctioned alternative inside a clear policy, then monitor what is actually happening. Governance beats prohibition every time at this size.

What data counts as too sensitive to paste into ChatGPT?

At minimum: client names tied to financial information, employee compensation and reviews, contracts and pricing under negotiation, personally identifiable information regulated by GDPR or HIPAA, and anything covered by an NDA. Most mid-market companies need three data classes in their policy. Public, internal, and restricted. Public is fine in any tool. Internal stays in approved tools. Restricted never goes into a general purpose AI tool, even a paid one, unless you have a signed data processing agreement and the right plan.

Do we need a private model, or is an enterprise plan enough?

For most companies under 200 employees, ChatGPT Team or Claude for Work is the right starting point. Your data is not used for training, you get an admin console, and the cost is reasonable per seat. A self-hosted private model only makes sense when you have a legal or regulatory reason that an enterprise SaaS contract cannot solve, or when you want to query your own documents at scale. Start with the simpler option. Move up only when the simpler option breaks.

How do we run the audit without it feeling like surveillance?

Tell them what you are doing and why before you start. The audit is not surveillance, it is a snapshot. You are looking at what tools the team uses, what they use them for, and where the gaps are. Most employees are relieved that someone is finally paying attention and ready to give them something better. The conversations during the audit usually surface more than the data does.

What goes into the acceptable use policy?

Five things. First, the three data classes and what each one means in plain language. Second, the list of approved tools and which data class each one is approved for. Third, a short list of prohibited inputs with examples. Fourth, the reporting path when something goes wrong, named person, named channel, no ambiguity. Fifth, the review cadence so the policy stays alive. Anything longer than one page does not get read.

What does this cost?

The biggest line item is usually the sanctioned tooling. ChatGPT Team and Claude for Work both run roughly 25 to 30 dollars per user per month at list price. For a 50-person company that is 1,250 to 1,500 dollars per month. The audit, policy, training, and metrics workstream is a one-time effort, typically 4 to 8 weeks of focused work. The cost of doing nothing is harder to quantify, but most operators who have had a leak will tell you it dwarfs the rollout.

How long until we see results?

You see the audit findings inside 30 days. The policy and sanctioned tooling go live in another 2 to 4 weeks. Adoption climbs for the first quarter as the team learns the new tools and the workshop content sinks in. Real change in behavior, fewer risky pastes, more reuse of approved tools, more questions in the right channel, shows up around month 3. The dashboard is the proof. If it does not move, the policy is not landing and you adjust.

Most of my clients found out about shadow AI the hard way. The audit is the cheapest insurance you can buy, and it usually pays for itself in the first week.