The line that explains why most AI rollouts fail
A partner at a boutique law firm in Brickell, the one with the dead-tool problem, gave me the line that explains it: "We spent money on a tool two years ago, nobody used it, and that still stings." A senior partner at a $14B wealth advisory firm said the same thing in different words: "The difference between Microsoft Copilot and Claude is huge." They had paid for Copilot. Nobody used it. The tool was bought, not adopted. The same pattern shows up across wealth firms with Copilot regret, where the license sits unused on every desk.
A company buys a license, sends out a launch email, schedules a one-hour training, and then expects behavior to change. It does not.
The tool lives in a separate browser tab. The team's actual work lives in email, files, WhatsApp, and spreadsheets. The gap never closes.
The 90-minute workshop
The first session is 90 minutes, hands-on, with the people who will actually use it in the room. We do not look at slides. We do not watch a demo. We do not run through capabilities.
Each person picks one task they did this week that ate too much time. We build a Skill that handles that task. By the end of 90 minutes, every person in the room has a working tool tied to a real workflow. They do not have to remember to use it next week, because they already used it today.
Mapi at The Joy of Impact wrote me a message after her workshop:
"Acabo de pasar... ah, tan lindo. No hubieramos podido hacer esto sin ti."
She and her VA in Colombia walked out with six Skills installed and the operating file connected to her Drive. Three weeks later she was referring me to her own coach.
Why this works when launch emails and one-hour trainings don't
Three reasons.
One. The tool is built into the workflow, not next to it.
A finance analyst's first Skill drafts the variance commentary she writes every Monday morning. A construction PM's first Skill summarizes the foreman's WhatsApp report into the project update he sends every evening. The Skill replaces real work the person was already doing. They don't have to remember the new tool, because the new tool finishes a task they would have started anyway.
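To make "built into the workflow" concrete, here is a rough sketch of what that analyst's first Skill could look like as a file, assuming the usual pattern of a short SKILL.md with a name, a description that tells Claude when to reach for it, and plain-English steps. The skill name, thresholds, and steps below are illustrative placeholders, not anything from a real client file.

```markdown
---
name: monday-variance-commentary
description: Drafts the weekly budget-vs-actual commentary. Use when the analyst uploads the actuals export or asks for Monday variance notes.
---

# Monday variance commentary

1. Read the attached actuals export and the matching budget figures.
2. Flag every line where the variance exceeds 5% or $10,000 (illustrative thresholds).
3. For each flagged line, draft one short paragraph: what moved, the likely driver, and the question to put to the budget owner.
4. Match the tone of the past commentaries saved alongside this file.
```

The point is not the format. The point is that the Skill names a task the person already does every week, so invoking it replaces work instead of adding a step.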
Two. Each person owns their first win.
The session is hands-on, not passive. Each attendee builds something themselves with my help and sees it work on their actual data, with their actual context. The psychological barrier ("I don't know how to use this") is gone before they leave the room.
Three. The retainer keeps the system alive.
New workflows appear every quarter. Without ongoing Skill creation and CLAUDE.md updates, the system goes stale within 6 to 12 months and the tool dies a second death. Sami at Almaga ships new Skills weekly. A solopreneur psychologist on the lower tier ships them quarterly. The retainer flexes to what the team actually uses, and the workshop-and-retainer engagement is how this stays alive past month four.
What gets installed in the workshop
For a 5-person team, the 90-minute session typically produces:
- 5 working Skills, one per person, tied to real weekly tasks
- The first version of CLAUDE.md, with the company's context, voice, and key rules
- Connections to whatever sources the team's data lives in (Drive, Outlook, sometimes WhatsApp Business)
- A working agreement: who edits the CLAUDE.md, how new Skills get requested, what the monthly cadence looks like
A 10-to-20-person team gets a half-day version. A 50-plus-person team gets multiple sessions, broken out by department. The shape is the same: each person ships one thing, and the team walks out with a system built on the Skill plus CLAUDE.md architecture, not a promise.
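For teams that have never seen the operating file, here is a hedged sketch of what a first-pass CLAUDE.md might contain after the workshop. Every company name, heading, and rule below is an illustrative placeholder; the real file carries your own context, voice, and rules in your own words.

```markdown
# Acme Advisors operating file

## Company context
- What we do, who we serve, and the three services we actually sell.

## Voice
- Plain English, short sentences, no hype. Client-facing drafts end with a clear next step.

## Key rules
- Never state fees or performance figures; leave a [CONFIRM] placeholder for a human to fill.
- Client names stay out of anything that leaves the building.

## Skills in use
- monday-variance-commentary (finance), foreman-report-summary (projects)

## Working agreement
- One named owner edits this file. New Skill requests go to that owner and get reviewed on the monthly call.
```

It is short and boring on purpose. A file like this only earns its keep if the next model release can read it literally and still produce the right output, which is the subject of the next section.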
What model upgrades have to do with this
Anthropic shipped Opus 4.7 in April 2026. Newer models follow instructions more literally than older ones. That is good news for teams with a clean CLAUDE.md and well-written Skills, because the same operating file produces better output on the new model. It is bad news for teams running unstructured prompts, because sloppy prompts break harder on more literal models.
The workshop-first method protects the team from this. The Skill format and the CLAUDE.md architecture get better with each model release, not worse. That is the part most agencies miss when they hand a team a license and walk away.
What you actually own at the end
Every Skill from the workshop sits in your Claude account. The CLAUDE.md operating file lives in your Drive. The integrations connect to your accounts. You own everything we build. If you decide tomorrow that you no longer need a Fractional Head of AI, the system stays running because nothing about it depends on me being around.
If your team has a dead tool sitting in a browser tab right now, that is the conversation to have. Tell me which tool died and why, and I will tell you whether a workshop-first restart is worth running.