Executive summary
Where the firm wins: brand queries (the firm's own name, with or without modifiers such as "AFSL fee-only Sydney CBD") and hyper-local CBD queries ("fee-only retirement planning Sydney CBD"). The advisor is the canonical AI-cited answer when the buyer types the firm name or a CBD-specific intent.
Where the firm is invisible: broader-Sydney educational queries about the Quality of Advice Review reform, FAAA accreditation transition, and stage 3 tax-cut SMSF strategy. AI search has not picked a canonical advisor for these emerging-regulation queries — the citation slot is wide open.
The 5 queries below are a representative slice from the 15-query Snapshot Audit reference set. They were chosen for the Free Diagnostic because they cover the three most common citation patterns we see in the Sydney financial advisors case study: brand-anchor (very high citation), hyper-local geo (high citation), and broader-Sydney educational (low citation, opportunity).
Per-query log
The table shows how often the advisor's domain was cited for each query, averaged across 4 AI search engines (ChatGPT, Perplexity, Google AI Overviews, Claude). Each query is replicated enough times to compute a Wilson 90% CI. The result-bucket label (Brand-anchor / Won / Contested / Invisible) is based on the point estimate; the "wide CI" flag marks queries whose interval crosses an Invisible↔Contested or Contested↔Won boundary.
| Query | Type | Cite rate | Wilson 90% CI | Result bucket |
|---|---|---|---|---|
| [the advisor] Sydney CBD AFSL fee-only reviews | Brand-anchor | 90% | 65% — 98% | Brand-anchor |
| fee-only retirement planning advisor Sydney CBD | Buyer-intent | 50% | 27% — 73% | Contested (wide CI) |
| SMSF advisor Sydney with AFSL no commission | Buyer-intent | 30% | 13% — 56% | Contested (wide CI) |
| Quality of Advice Review 2026 fee-only Sydney advisor | Topical | 10% | 2% — 35% | Invisible (wide CI) |
| FAAA accredited fee-only advisor Sydney North Shore | Buyer-intent | 0% | 0% — 21% | Invisible |
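The Wilson intervals in the table can be reproduced in a few lines. The number of replications per query is not stated above; n = 10 is an assumption inferred from the table (it reproduces every listed interval exactly at z ≈ 1.645 for a two-sided 90% level):

```python
import math

def wilson_ci(successes, n, z=1.6449):
    """Wilson score interval for a binomial proportion.

    z=1.6449 gives a two-sided 90% interval, matching the table.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# n=10 replications per query is an assumption, not stated in the report;
# it reproduces every CI in the table, e.g. the 50% Contested row:
lo, hi = wilson_ci(5, 10)
print(f"{lo:.0%} — {hi:.0%}")  # → 27% — 73%
```

The same call with 9/10, 3/10, 1/10, and 0/10 successes yields the other four rows.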
Read: The advisor owns brand-anchor territory (90%, a narrow and stable band). The fee-only retirement-planning CBD query is Contested with a wide CI: no engine has settled on a canonical advisor, and the interval reaches into Won territory, so citation-engineering work has runway. The SMSF query is similarly Contested. The Quality of Advice Review and FAAA-accreditation queries are Invisible, meaning open territory: a substantive Reddit thread plus a Quora answer carrying the advisor's CFP-credentialed analysis would likely capture the canonical-source slot.
Result-bucket key (based on the point estimate): Brand-anchor (80–100%): locked in. Won (60–70%): winning. Contested (30–50%): the engines disagree among themselves and the citation slot is open. Invisible (0–20%): not cited. The "wide CI" flag marks rows whose 90% CI crosses the Invisible↔Contested or Contested↔Won boundary; Won and Brand-anchor are both winning outcomes, so we don't flag that crossing.
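The bucket assignment and wide-CI flag can be sketched as follows. This is a minimal sketch, not the audit's actual code: the bucket thresholds come from the key above, and the handling of CI endpoints that fall between buckets (e.g. 21–29%) is an assumption (they default to the lower bucket, which is consistent with the unflagged 0%–21% row in the table):

```python
def bucket(rate_pct):
    """Map a citation rate (in percent) to a result bucket per the key above."""
    if rate_pct >= 80:
        return "Brand-anchor"
    if rate_pct >= 60:
        return "Won"
    if rate_pct >= 30:
        return "Contested"
    return "Invisible"

def wide_ci(lo_pct, hi_pct):
    """Flag a CI that crosses Invisible<->Contested or Contested<->Won.

    Won and Brand-anchor are both winning outcomes, so they are merged
    before comparing: a Won<->Brand-anchor crossing is not flagged.
    """
    merge = lambda b: "Won" if b == "Brand-anchor" else b
    return merge(bucket(lo_pct)) != merge(bucket(hi_pct))

# The 50% Contested row (CI 27%-73%) reaches Won territory, so it is flagged;
# the 90% brand-anchor row (CI 65%-98%) only crosses Won<->Brand-anchor, so not.
print(wide_ci(27, 73), wide_ci(65, 98))  # → True False
```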
The Free Diagnostic averages across the four AI search engines. The $149 Snapshot Audit sample shows the per-engine breakdown for 15 queries head-to-head against 2 competitors — that's the level of detail you'd need to make platform-specific content investment decisions.
What you'd do next
Based on this Free Diagnostic, the recommended next step depends on what you want to confirm:
- If you want full per-platform breakdown across 15 queries vs 2 competitors: the $149 Snapshot Audit (24 hours from purchase to inbox). See the sample.
- If you want the written 90-day plan to fix the gaps above: the Pro Audit + Roadmap (coming soon).
- If you want us to draft and place the citations directly: the AI Search Citation Pack — 3-month retainer, cancel after Month 1 (coming soon). See the sample.
Or run a Free Diagnostic on your own firm — drop your URL and 5 queries at /free-ai-check/.