Methodology
Four AI platforms, measured independently
The problem with one combined number
ChatGPT, Perplexity, Google AI Overviews, and Brave Search are four different systems. They use different ranking models, pull from different sources, and apply different rules for what they cite. Aggregating them into one "AI citation rate" hides exactly the information you need.
The same brand might be cited:
```
ChatGPT       7/10 = 70%    [44%, 87%]   Won
Perplexity    2/10 = 20%    [7%, 44%]    Invisible (wide CI)
Google AIO    9/10 = 90%    [65%, 98%]   Brand-anchor
Brave         5/10 = 50%    [27%, 73%]   Contested (wide CI)
─────────────────────────────────────────
Aggregate    23/40 = 57.5%
```
The aggregate is wrong in every useful sense. It implies medium visibility across AI search broadly. The truth: you own Google AIO, are competitive on ChatGPT, are invisible on Perplexity, and are mid-pack on Brave (which means mid-pack for Claude users too, since Claude's web_search tool runs on Brave). Each is a different content-investment decision and a different competitor set.
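If you want to reproduce the per-platform intervals, here is a minimal sketch of the Wilson score interval computation (z ≈ 1.645 for a two-sided 90% interval). The counts are the illustrative ones from the example above, and the exact endpoints may differ from the table by a point or two depending on rounding.

```python
import math

def wilson_interval(cited: int, polls: int, z: float = 1.6449) -> tuple[float, float]:
    """Wilson score interval for a citation rate; z = 1.6449 ~ two-sided 90%."""
    if polls == 0:
        return (0.0, 1.0)
    p = cited / polls
    denom = 1 + z**2 / polls
    center = (p + z**2 / (2 * polls)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / polls + z**2 / (4 * polls**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Per-platform counts from the example above: (cited polls, total polls).
counts = {"ChatGPT": (7, 10), "Perplexity": (2, 10), "Google AIO": (9, 10), "Brave": (5, 10)}

for platform, (cited, polls) in counts.items():
    lo, hi = wilson_interval(cited, polls)
    print(f"{platform:<12} {cited}/{polls} = {cited / polls:.0%}  [{lo:.0%}, {hi:.0%}]")

# The aggregate collapses four different competitive positions into one number.
total_cited = sum(c for c, _ in counts.values())
total_polls = sum(n for _, n in counts.values())
print(f"Aggregate    {total_cited}/{total_polls} = {total_cited / total_polls:.1%}")
```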
What each platform rewards
| Platform | Pulls from | What moves the needle | What hurts you |
|---|---|---|---|
| Perplexity | Live web search; Reddit and Quora heavily weighted; recency favored | Long Reddit threads (1,500+ words), Quora answers with operator-voice depth, news coverage in the past 30 days | Stale content; corporate-tone marketing pages |
| ChatGPT | Training data + web_search tool retrieval | Long-form expert content on your own domain, Wikipedia mentions, schema.org Article + FAQPage markup, methodology essays, .edu/.gov citations | Thin pages; orphaned content; sites with no schema |
| Google AI Overviews | Google Search index + featured-snippet selection logic; some retrieval-augmented generation | Schema markup (especially LocalBusiness, FAQPage, Product), FAQ pages, Bing/IndexNow cross-listing, structured comparison tables | Unstructured prose; sites without schema |
| Brave Search | Independent web crawl with its own ranking model; also the substrate Anthropic's Claude web_search tool routes through, so polling Brave covers Claude users indirectly | Brave-friendly content (long-form, methodology-heavy), structured data, GitHub repos, technical write-ups, methodology essays with citations | Brave-blocked or low-Brave-rank content; thin pages |
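The schema markup the table keeps pointing at is plain JSON-LD embedded in the page. A minimal FAQPage sketch follows, built as a Python dict and serialized to JSON-LD; the question and answer text are placeholders, not prescribed copy.

```python
import json

# Minimal FAQPage JSON-LD; the question/answer text below is placeholder content.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How is AI citation rate measured?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Each platform is polled separately and reported with its own confidence interval.",
            },
        }
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```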
Why we don't poll Claude separately
Anthropic's Claude doesn't operate its own crawler. The Claude web_search tool routes through Brave Search (Anthropic docs). Polling Claude directly would just be polling Brave with Anthropic's prompt template wrapped around it — same substrate, more cost, extra prompt-shaping noise.
We poll Brave directly: it's the substrate-correct measurement that covers what Claude can cite, without paying for the wrapper. When you see a Brave column in an Aoraforge report, that rate also tells you what Claude users see — not approximately, but mechanistically (same retrieval index).
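For concreteness, here is a minimal sketch of what polling Brave directly can look like. The endpoint, header name, and response shape are assumptions based on the public Brave Search API and should be checked against the current API docs; the query, domain, and environment-variable name are placeholders, not part of the Aoraforge polling spec.

```python
import os
import requests

# Assumed Brave Search API web-search endpoint; verify against current docs.
BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def brave_cites_domain(query: str, domain: str, api_key: str) -> bool:
    """Return True if any Brave web result for `query` points at `domain`."""
    resp = requests.get(
        BRAVE_ENDPOINT,
        params={"q": query},
        headers={"Accept": "application/json", "X-Subscription-Token": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"web": {"results": [{"url": ...}, ...]}}
    results = resp.json().get("web", {}).get("results", [])
    return any(domain in r.get("url", "") for r in results)

if __name__ == "__main__":
    key = os.environ["BRAVE_API_KEY"]  # hypothetical environment-variable name
    print(brave_cites_domain("best ai search visibility tools", "example.com", key))
```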
Why fixes don't transfer cleanly
This matters operationally because fixing one platform sometimes weakens another:
- Optimizing for ChatGPT (long Wikipedia-style methodology content on your domain) can weaken Perplexity (which prefers Reddit-voice, terse, first-person operator content).
- Optimizing for Google AIO (heavy schema + FAQ pages) doesn't transfer to Perplexity (which mostly ignores schema in favor of forum content).
- Optimizing for Brave (long-form technical content with citations) often does compound with ChatGPT — they share retrieval patterns, so a single methodology essay can lift both. Bonus: lifting Brave also lifts what Claude users see.
A monolithic AI search optimization strategy that ignores per-platform weighting is worse than no strategy: it spends real content effort on the wrong levers and reports a single number that hides the misallocation.
The kit-level summary is a navigation aid
In Aoraforge reports, the kit-level (cross-platform) summary is provided only as a navigation aid — a way to find which queries to drill into. The substantive measurement is always per-platform with its own Wilson 90% confidence interval. If you read an Aoraforge report and remember only one number per query, you're reading it wrong.
How to act on this
When choosing between content investments:
1. Pull the per-platform breakdown for your top 5 queries.
2. Find the platform with the largest gap between you and your strongest competitor (see the sketch after this list).
3. Match the platform to the lever using the table above.
4. Don't optimize for the kit-level number — you'll spread investment thin across four very different fights.
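A minimal sketch of step 2. The rates and the data structure are illustrative, not measured values from any report.

```python
# Per-platform citation rates for one query: yours vs. your strongest competitor.
# The numbers are illustrative only.
rates = {
    "ChatGPT":    {"you": 0.70, "competitor": 0.80},
    "Perplexity": {"you": 0.20, "competitor": 0.90},
    "Google AIO": {"you": 0.90, "competitor": 0.60},
    "Brave":      {"you": 0.50, "competitor": 0.70},
}

# Step 2: the platform where the competitor leads you by the widest margin
# is where the next content investment goes.
gaps = {platform: r["competitor"] - r["you"] for platform, r in rates.items()}
target = max(gaps, key=gaps.get)
print(f"Largest gap: {target} ({gaps[target]:+.0%})")  # -> Perplexity (+70%)
```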
This is what the Pro Audit roadmap does in writing. The Citation Pack does it in execution.
Frequently asked
Why these four platforms specifically?
ChatGPT, Perplexity, Google AI Overviews, and Brave Search collectively account for the majority of AI search retrieval in 2026. Brave is included as a first-class platform because (1) it has direct user reach and (2) it's the underlying substrate Anthropic's Claude uses for web search. So polling Brave covers Brave-direct users and Claude users with one set of polls.

What about Reddit, Quora, Wikipedia themselves — aren't they part of AI search?
They're upstream sources, not AI search platforms. AI search platforms cite them. Citation Pack threads on Reddit and Quora are leverage points for getting cited on the four platforms above. We measure citations on the platforms; the source-side activity is the engineering, not the measurement.

Will you add new platforms over time?
Yes. The polling spec versioning system supports adding platforms in minor versions (v1.1+). Customers on v1.0 engagements stay on the four-platform measurement for the duration of that engagement; new engagements ship under the latest spec.

Are platform behaviors stable enough to write a methodology around?
Platform behaviors change every 6–12 months as models retrain. The methodology (per-platform measurement, Wilson CIs, four check-ins, pre-registration) is platform-agnostic: what each platform rewards may shift, but the framework for measuring it doesn't.