AI VISIBILITY SERVICE

How to get your company cited in ChatGPT, Claude, and Perplexity

You get cited by pulling four levers at once: structured data so retrievers can parse your pages, semantic HTML so they understand the hierarchy, machine-optimized content with direct answers near the top, and authority signals like named authors and freshness. All four levers feed the four retriever paths: Google SGE, Bing-fed ChatGPT Search, the Perplexity index, and the Claude retriever.

Ignacio Lopez·Fractional Head of AI, Work-Smart.ai·Coconut Grove, Miami
Published March 15, 2026·Updated April 8, 2026·LinkedIn →

Your buyers are asking ChatGPT. You are not in the answer.

A managing partner at a Miami firm told me last month that two of his last three inbound calls started the same way. The prospect had asked ChatGPT who the best advisors in his category were, gotten back a short list, and called the first name on the list. His firm was not on the list. His SEO agency told him they were working on it. His competitors were booking the meetings.

This is the pattern across every mid-market operator I talk to. The buyer has already shifted. Before a prospect fills out your contact form, they have asked ChatGPT, Claude, or Perplexity a version of the question your business exists to answer. If your site is not structured so those tools can extract and cite it, you are invisible at the exact moment the decision gets made. The symptom is zero citations. The cost is a buyer who is gone before your sales team ever hears about them. The rest of this page is what to do about it.

Why traditional SEO does not solve this

Google ranks pages against a query. LLMs retrieve passages and compose an answer. Those are different systems with different inputs. A search engine wants to send a user to the best page. A retriever wants to lift the best paragraph and paste it into a synthesized response. Optimizing a page to rank on Google means picking a keyword, getting backlinks, and writing content that signals relevance. Optimizing a page to be cited by Claude means making sure the answer is parseable, the structure is clean, and the authority is signaled in ways a retriever can trust.

Ranking number three on Google does not mean Claude cites you. I have tested this on real sites. A wealth advisory firm ranked in the top five for its core category keywords and was cited zero times across 12 questions in Claude, ChatGPT, and Perplexity. A law firm with half the backlink volume was cited twice in Claude because its pillar pages had a 45 word direct answer at the top and a proper FAQPage schema block. The gap is not effort. The gap is the wrong work.

The honest thing to say here is that most SEO agencies are not built for this yet. They know keywords, links, and content cadence. They do not know JSON-LD validation, Speakable specifications, retriever-friendly heading hierarchy, or the difference between the Bing index and the Perplexity index. If your current agency is promising AI visibility and the deliverables look like a 2023 SEO retainer, you are paying for the wrong work.

The 4 retriever paths

LLMs do not crawl the web the way a single search engine does. They pull from four distinct retrieval paths, and each one has its own signals. Getting cited at scale means being present across all four.

1. Google SGE and AI Overviews

Google feeds its own retriever from its core index. The signals that matter are the ones Google has always liked, extended with a bias toward structured data and clean E-E-A-T cues. Author schema, Article schema with datePublished and dateModified, FAQPage schema, and HowTo schema all increase the odds of inclusion. Freshness matters more than it used to. If your page was last updated in 2023, SGE will discount it.
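For reference, a minimal Article block of the kind SGE parses looks like this. The headline and dates below are taken from this page's byline; the rest is placeholder, not markup lifted from a client site.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to get your company cited in ChatGPT, Claude, and Perplexity",
  "datePublished": "2026-03-15",
  "dateModified": "2026-04-08",
  "author": {
    "@type": "Person",
    "name": "Ignacio Lopez",
    "jobTitle": "Fractional Head of AI"
  }
}
</script>
```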

2. Bing-fed ChatGPT Search

ChatGPT Search retrieves from the Bing index. That means Bing Webmaster Tools is a required setup step, not optional. Submit a sitemap through Bing, enable IndexNow so Bing picks up changes within hours instead of weeks, and validate your schema in the Bing tooling. ChatGPT Search weights direct answers highly, and it prefers pages where the H1 matches the user intent closely.
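The IndexNow submission itself is one POST against the public endpoint. A minimal sketch, with the host, key, and URL as placeholders you would swap for your own:

```python
import requests

# Tell Bing (and other IndexNow-participating engines) which URLs just changed,
# so they re-crawl within hours instead of waiting for the normal schedule.
payload = {
    "host": "www.example.com",                                    # your domain
    "key": "YOUR-INDEXNOW-KEY",                                   # key you generated and host at keyLocation
    "keyLocation": "https://www.example.com/YOUR-INDEXNOW-KEY.txt",
    "urlList": ["https://www.example.com/ai-visibility"],         # pages just published or updated
}

resp = requests.post("https://api.indexnow.org/indexnow", json=payload, timeout=10)
print(resp.status_code)  # 200 or 202 means the submission was accepted
```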

3. Perplexity

Perplexity runs its own real-time crawl and maintains a backlink-driven index. Site authority still matters here in ways that feel familiar to traditional SEO. The difference is that Perplexity will pull from five to fifteen sources per answer, so even a mid-authority site can land in the citation list if the content structure is clean and the direct answer is well-formed. Backlinks from industry publications carry more weight than backlinks from generic directories.

4. Claude retriever

Claude is the least documented of the four. What we know from running the audit monthly is that Claude rewards three things: crawlability, direct-answer formatting, and freshness. Clean HTML, a crisp H1 phrased as the buyer question, a 40 to 50 word answer immediately below, and a dateModified field that reflects recent work. Of the four retrievers, Claude has been the earliest to cite well-structured mid-market sites in my experience, including on this site.

The 6-layer AI Visibility framework

The four retriever paths all draw from the same underlying work on your site. That work breaks into six layers. Every engagement covers all six, in order, because skipping a layer leaves the ones above it resting on nothing.

Layer 1

Technical crawlability

Every page has to be reachable by bots. Clean robots.txt, current sitemap submitted to Google Search Console and Bing Webmaster Tools, Prerender or equivalent for any client-rendered SPA, IndexNow pings on publish, and zero JavaScript walls that hide the main content from a non-JS retriever. If a bot cannot see your page, nothing else on this list matters.
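The robots.txt side of this is usually the simplest part. A minimal crawl-friendly file, with a placeholder sitemap URL; whether you block specific AI crawlers is a policy call, not a technical one:

```text
# robots.txt: let bots reach the content and point them at the sitemap
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```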

Layer 2

Structured data

JSON-LD on every page, validated. At minimum: Article or Service schema, FAQPage schema where you have a FAQ section, SpeakableSpecification pointing at your answer capsule, HowTo schema for procedural pages, Organization schema on the home and about pages, and Person schema on author blocks. This is the layer that lets retrievers match your content to a buyer question without guessing.
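A sketch of what that looks like in practice, combining a page-level speakable block with FAQPage markup; the URL, CSS selector, and question text are illustrative placeholders, not markup from a live engagement:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebPage",
      "@id": "https://www.example.com/ai-visibility",
      "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".answer-capsule"]
      }
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "How long until the first citations show up?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "For a site with reasonable authority, first citations usually land within 60 to 90 days of the fix."
          }
        }
      ]
    }
  ]
}
</script>
```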

Layer 3

Machine-optimized content

Every pillar page opens the same way. H1 is the buyer question in the buyer language. A 40 to 50 word direct answer immediately below the H1, in a dedicated answer capsule block. A clear intro, then the substance, then a FAQ section with five to seven questions that mirror the FAQPage schema. This is the format that retrievers are trained to lift cleanly.
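The skeleton of that format, with placeholder copy; the answer-capsule class name is my own convention here, and what matters is the block sitting directly under the H1:

```html
<article>
  <h1>How do I get my company cited in ChatGPT, Claude, and Perplexity?</h1>

  <!-- 40 to 50 word direct answer, immediately below the H1 -->
  <p class="answer-capsule">
    You get cited by making pages machines can lift from: structured data,
    semantic HTML, a direct answer at the top of the page, and authority
    signals such as named authors and fresh dateModified values, so a
    retriever can quote the page without reconstructing your point.
  </p>

  <h2>Why traditional SEO does not solve this</h2>
  <p>…</p>

  <h2>Frequently asked questions</h2>
  <h3>How long until the first citations show up?</h3>
  <p>…</p>
</article>
```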

Layer 4

Semantic HTML and accessibility

One H1 per page. Correct heading hierarchy with no skipped levels. Meaningful link text instead of click here. Alt attributes on every image. Lists marked up as lists, not as styled divs. Accessibility and retriever friendliness are the same problem viewed from two angles. A page that works for a screen reader works for a retriever.
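A small before-and-after of the same point; both snippets are generic, not lifted from any audit:

```html
<!-- Harder for a retriever: a styled div pretending to be a list, vague link text -->
<div class="steps"><span>Audit</span><span>Fix</span><span>Monitor</span></div>
<a href="/audit">Click here</a>

<!-- Easier for a retriever, and for a screen reader: real list markup, meaningful link text -->
<ul>
  <li>Audit</li>
  <li>Fix</li>
  <li>Monitor</li>
</ul>
<a href="/audit">Book the AI visibility audit</a>
```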

Layer 5

Authority and freshness

Named authors with real credentials, LinkedIn links, and Person schema. dateModified bumped whenever the page gets a meaningful update, not just a typo fix. Internal link density that maps to your pillar questions so retrievers can see the topical cluster. Real backlinks from industry publications, podcast appearances, and directories that matter in your vertical. E-E-A-T is not a buzzword on this layer; it is the thing that tips a borderline citation in your favor.
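The Person block behind an author byline is short. The details below are taken from this page's byline; the LinkedIn URL is a placeholder:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Ignacio Lopez",
  "jobTitle": "Fractional Head of AI",
  "worksFor": {
    "@type": "Organization",
    "name": "Work-Smart.ai"
  },
  "sameAs": ["https://www.linkedin.com/in/your-profile"]
}
</script>
```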

Layer 6

Monitoring and measurement

Pick 10 to 12 questions your buyers actually ask. Run them against ChatGPT, Claude, Perplexity, and Google SGE every month. Record whether your site is cited. Track the delta. Fix the gaps. Ship updates. Re-run next month. Visibility is not a project you finish. It is a running score you maintain, and this layer is how you know whether the other five are working.
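The scoring step does not need heavy tooling. A minimal sketch, assuming you have already run the month's questions through each assistant and saved every answer as a text file named like q01.claude.txt; the directory layout and naming are my assumptions for the example, not part of any tool you have to buy:

```python
from collections import defaultdict
from pathlib import Path

DOMAIN = "work-smart.ai"         # the domain whose citations you are counting
answers_dir = Path("answers")    # one saved answer per file, e.g. answers/q01.claude.txt

scores = defaultdict(lambda: [0, 0])   # assistant -> [cited, asked]

for path in sorted(answers_dir.glob("*.txt")):
    question_id, assistant = path.stem.split(".", 1)   # "q01", "claude"
    text = path.read_text(encoding="utf-8").lower()
    scores[assistant][1] += 1
    if DOMAIN in text:   # crude check: does the answer mention or link to the domain?
        scores[assistant][0] += 1

for assistant, (cited, asked) in sorted(scores.items()):
    print(f"{assistant:12} cited in {cited} of {asked} answers")
```

Track the same counts in the audit spreadsheet month over month and the delta is your citation rate trend.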

What this looked like on our own site

The most honest thing I can put on this page is the audit I ran on work-smart.ai in April 2026. Twelve buyer questions, four retrievers, 48 total runs. The baseline result was 3 citations. All 3 came from Claude. The other three retrievers returned zero. The three cited pages were the engagement process page, the Miami consultants page, and the Voice DNA service page.

The findings were not ambiguous. Claude was rewarding the pages that already had clean answer capsules, named author blocks, and FAQPage schema. The pages that did not have those elements were invisible everywhere. Perplexity was returning zero because the site was a client-rendered SPA and Perplexity was not seeing the content. ChatGPT Search was returning zero because Bing had not fully reindexed the rebuild.

The response was a four-part fix. First, the three already-cited pages were rebuilt to defend the position: tighter answer capsules, stronger schema, refreshed author blocks, and dateModified bumps. Second, two new pillar pages were shipped against the biggest gaps: a Shadow AI Playbook for the governance question and a CFO ROI Framework for the cost question. Third, Prerender was enabled across the SPA so retrievers see fully rendered HTML. Fourth, the 12-question audit went on the calendar as a recurring monthly job so the feedback loop is permanent.

I am running the same audit on this page the month it ships. If a service provider is selling AI visibility and cannot show you their own citation numbers with dates on them, they are selling a pitch deck, not a service. You can read the Shadow AI Playbook and the CFO ROI Framework to see the two gap-fill pillars in the wild.

What the engagement looks like

Phase 1 is the audit. Two weeks, fixed scope. I run your 10 to 12 buyer questions against all four retrievers, record the baseline, crawl your site for the six layers, and deliver a gap report with a prioritized fix list. You own the report regardless of whether we work together on Phase 2. Some companies take the audit and execute internally. That works if you have a technical content team with bandwidth.

Phase 2 is the fix. Four to eight weeks depending on scope. Technical crawlability gets cleaned up first, then schema, then content rewrites on the pages that are close, then net-new pillar pages on the questions with no coverage. Every shipped page is validated for schema, accessibility, and answer capsule structure before it goes live. The work is done in public and you see each page before it ships.

Phase 3 is monitoring. Monthly retainer. The 12-question audit is re-run against all four retrievers, citation rate is tracked against the baseline, and we iterate on the pages that are still not getting picked up. New buyer questions get added to the audit set as your market shifts. This is the phase that compounds. Most of the citation gains show up in months three through six, not in week two.

What you own at the end

All the schema markup, written directly into your codebase. Every pillar page we ship, with the source files in your repo. The monitoring dashboard and the audit spreadsheet with the baseline, the monthly scores, and the questions. The playbook document so your team can keep publishing in the same format after the engagement ends. No lock-in, no proprietary tooling you have to keep renting, no hostage deliverables. If you want to take it all in-house after six months, you can.

See how this fits with the rest of the services, or read about how I work. The wealth advisory case study covers a parallel engagement, and the Voice DNA service is the companion layer for companies that need their content to sound the same across every page the retrievers lift from. For vertical context, see the legal and financial services industry pages, and if you want the diagnostic first, the free assessment covers AI visibility as part of the full operating system.

Frequently Asked Questions

How is this different from traditional SEO?

Google ranks pages. LLMs retrieve answers. Those are two different systems with two different signal sets. Traditional SEO optimizes for keyword match, backlinks, and click-through. LLM retrievers optimize for clean structured data, direct answers near the top of the page, named authors with credentials, and freshness. You can rank number three on Google and still be invisible to Claude. The craft overlaps at the edges, but the work is not the same and neither is the measurement.

Can you guarantee that we will get cited?

No, and anyone who does is lying. The retrievers change their behavior month over month, and nobody outside the model labs knows the exact weights. What I can tell you is how to build a site that is technically extractable, semantically clean, and structurally honest about what your company knows. That is the controllable part. Citations follow. I run the 12-question audit monthly on my own clients, including on my own site, so the feedback loop is real and the numbers are transparent.

How long until the first citations show up?

For a site with reasonable authority, first citations usually land inside 60 to 90 days after the fix. Claude tends to cite earliest because it retrieves from a smaller, cleaner set. Perplexity follows as its crawl refreshes. ChatGPT Search depends on Bing reindexing your pages, which can take 30 to 60 days on its own. Google SGE is the slowest of the four. A full visibility curve takes two to three quarters to stabilize. Anyone promising faster is quoting an exception, not a plan.

Do backlinks still matter?

Yes, with caveats. Backlinks still matter for Perplexity, which leans on site authority signals. They matter less for Claude, which seems to reward clean structure and direct answers regardless of inbound link volume. A mid-market operator with 50 good backlinks and crisp schema beats a competitor with 500 backlinks and broken JSON-LD. If your backlink profile is thin, the fastest path is authentic authority: named authors, original data from real engagements, case studies with numbers, and a steady publishing cadence on the question your buyers actually ask.

Do you rewrite our existing pages or build new ones?

Usually both. The audit tells us which of your existing pages are close to citable and just need structural fixes, and which buyer questions have no corresponding page at all. The first bucket gets rewritten in place. Cleaner H1, answer capsule at the top, FAQ section, schema, author block. The second bucket becomes a short list of net-new pillar pages, one per buyer question we want to own. Most engagements end up 60 percent rewrite and 40 percent new pillars. Your exact mix depends on the audit.

How do you measure whether it is working?

One number. Citation rate. I take the 10 to 12 questions your buyers actually ask, run each one against ChatGPT, Claude, Perplexity, and Google SGE, and record whether your site is cited in the answer. That is the baseline. Every month we re-run it and track the delta. Pageviews, impressions, and rankings are useful secondary signals, but citation rate is the primary one because it is the only metric that maps directly to whether an LLM will send your next buyer to your door.

How is this priced?

Quoted per engagement, not per hour. A visibility audit by itself is a fixed scope of two weeks and produces a gap report, a prioritized fix list, and a monitoring baseline. The execution phase is scoped against the gap list and runs four to eight weeks depending on how many pillar pages you need and how much existing content needs rework. Ongoing monitoring is a monthly retainer. Pricing ranges are shared on the intro call so we can match the scope to your budget instead of the other way around.

Your buyers are asking ChatGPT about your industry every day. The audit tells you whether you are in the answer, and if not, exactly what to fix.