Methodology

Pre-registered queries — committed before any work begins

The discipline: lock the queries before the work begins

A measurement where the vendor picks what to measure after the work is done is not a measurement. The standard, in any rigorous field, is: commit to the questions before the work begins.

That's what pre-registration does for Aoraforge's AI Search Citation Pack.

The pre-registration lock

The 15 target queries are agreed at the Day-0 kickoff call and never silently swapped mid-engagement.

What gets locked:

| Item | Locked at | Notes |
|---|---|---|
| 15 target queries | Kickoff (Day 0) | Typical mix: 4 brand-adjacent, 6 buyer-intent geo, 5 educational/comparison |
| 2 named competitors | Kickoff | For head-to-head measurement |
| Brand-name match rules | Kickoff | Exact match, or with permitted variations (e.g., "[brand]" matches "[brand] Sydney" but not "[brand variant]" if the customer chooses tighter rules) |
| "Cited" definition | Kickoff | Default: brand mentioned by name in the answer paragraph. Looser: anywhere in the response, including footnotes. Customer chooses. |

These four items go into a service-agreement appendix signed by both parties.
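
As a rough sketch of what the appendix pins down, expressed as a data structure (the field names, types, and validation here are illustrative, not Aoraforge's actual schema):

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: nothing in the lock changes after Day 0
class PreRegistration:
    """Illustrative structure mirroring the service-agreement appendix."""
    target_queries: tuple[str, ...]        # exactly 15, fixed at kickoff
    named_competitors: tuple[str, ...]     # exactly 2, for head-to-head measurement
    brand_match_variants: tuple[str, ...]  # permitted brand-name variations, if any
    cited_definition: str                  # "answer_paragraph" (default) or "anywhere"

    def __post_init__(self):
        assert len(self.target_queries) == 15, "exactly 15 pre-registered queries"
        assert len(self.named_competitors) == 2, "exactly 2 named competitors"
        assert self.cited_definition in ("answer_paragraph", "anywhere")
```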

If a pre-registered query turns out to be unwinnable — no platform will cite anyone for it, including the strongest competitor — that is itself a finding, not permission to substitute. It tells the customer: the AI doesn't see this question as having a brand-named answer; market it differently.

The end-of-cycle manual review

The outcome standard:

`` standard = at least 5 of the 15 pre-registered queries show at least one cited brand thread on any of the 4 platforms by the end of the first retainer cycle ``

  • 5 or more of the 15 pre-registered queries with a citation by end-of-cycle → standard met
  • Fewer than 5 → standard not met; trajectory analysis informs the next-cycle action plan

At the end of the cycle, Aoraforge manually reviews the polling logs against the same standard. The customer receives the polling logs with the end-of-cycle report; the citation-parser output is deterministic.
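
A minimal sketch of that review step, assuming polling-log rows shaped like (query, platform, cited); the actual log format is not specified in this document:

```python
STANDARD_THRESHOLD = 5  # the locked bar: at least 5 of the 15 pre-registered queries

def standard_met(poll_log, preregistered_queries):
    """Return (met, cited_queries) for log rows shaped like (query, platform, cited)."""
    cited_queries = {
        query
        for query, _platform, cited in poll_log
        if cited and query in preregistered_queries
    }
    return len(cited_queries) >= STANDARD_THRESHOLD, sorted(cited_queries)
```

Because both parties hold the same logs and the rule is a fixed threshold, either side can reproduce the verdict.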

Why this design

  • Pre-registered: the 15 queries are locked at Day 0, before Aoraforge sees the citation distribution.
  • Manual review against logged data: the end-of-cycle review is performed by Aoraforge against the polling logs the customer also receives.
  • Conservative threshold: 5 of 15 = 33%. A strong Citation Pack typically lands 8–12 of 15. The 5-of-15 threshold is the minimum acceptable bar for showing the methodology worked, not a stretch goal.

The clinical trial precedent

This convention mirrors how clinical trials report endpoints. From the International Committee of Medical Journal Editors (ICMJE) 2007 statement:

> Trial registration ensures that the prospectively planned outcome assessments are stated, removing the possibility that trial sponsors can later choose which results to publish. The credibility of the medical literature depends on this discipline.

The same logic applies to citation engineering. A rate-of-citation report on a query set the vendor curated during the engagement is at best a marketing artifact, not a measurement. Aoraforge imports the discipline directly.

What a non-pre-registered audit looks like

Most "AI citation audit" reports in the market today are run after the work is done. The vendor:

1. Looks at which queries the brand happens to be cited on
2. Selects 10–15 of those for the report
3. Calls them "the target queries"
4. Reports a high citation rate

This is statistically equivalent to running an A/B test, looking at which variant won, and then writing the hypothesis. The number is real; the inference is fraudulent.
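
A toy simulation makes the point concrete (every number below is invented for illustration): generate citation outcomes at random, then compare the rate on a query set fixed in advance with the rate on a set chosen after peeking at the results.

```python
import random

random.seed(0)
pool = [f"q{i}" for i in range(200)]               # the query pool a vendor could draw from
cited = {q: random.random() < 0.15 for q in pool}  # assume roughly 15% of queries get cited

# Pre-registered: 15 queries fixed before outcomes are known.
prereg = pool[:15]
prereg_rate = sum(cited[q] for q in prereg) / 15

# Post-hoc: 15 queries selected after seeing which ones were cited.
winners = [q for q in pool if cited[q]]
posthoc = (winners + [q for q in pool if not cited[q]])[:15]
posthoc_rate = sum(cited[q] for q in posthoc) / 15

print(f"pre-registered citation rate: {prereg_rate:.0%}")  # hovers near the underlying 15%
print(f"post-hoc citation rate:       {posthoc_rate:.0%}")  # near 100% on the same data
```

Same data, same parser, wildly different headline number; only the moment of query selection changed.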

Pre-registration solves this in one move: the queries are public (in the service-agreement appendix), agreed by both parties, and locked. The vendor can't cherry-pick after seeing the data.

Frequently asked

What if I want to add a 16th query mid-engagement because something changed in my market? The 15 pre-registered queries stay. We can add additional queries as unbounded observations (polled but not part of the end-of-cycle review standard) for a small additional fee. The review standard stays attached to the original 15.

What if the AI platforms change mid-cycle in ways that affect citability? Locked. Platform changes are a known risk and one of the reasons the threshold is set at 5/15 (conservative) rather than 12/15. We don't move the goalposts mid-engagement even when the substrate moves.

Can I see the polling logs while the engagement is in flight? Customers receive the per-check-in poll logs continuously, not just at the end. The trajectory is visible throughout.

What if I disagree with the citation parser's verdict on a specific query? The parser is deterministic and the rules were locked at kickoff. If you disagree with the rules themselves (not the parser), that's a kickoff-time discussion. After Day 0, the rules don't change.
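
To illustrate what "deterministic" means in this context, here is a hypothetical rule check (not Aoraforge's actual parser) that applies a locked brand-match rule and "cited" scope to a stored response:

```python
import re

def is_cited(response_text, brand, permitted_variants=(), scope="answer_paragraph"):
    """Same response plus same locked rules yields the same verdict, every run."""
    # Scope rule locked at kickoff: answer paragraph only, or anywhere in the response.
    # First-paragraph extraction here is a naive stand-in for the real scoping rule.
    text = response_text.split("\n\n")[0] if scope == "answer_paragraph" else response_text
    names = (brand, *permitted_variants)
    pattern = r"\b(?:" + "|".join(re.escape(n) for n in names) + r")\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None
```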

How often does the standard get met? That's an empirical question a customer can answer from their own engagement logs. We provide engagement-level outcome summaries on request, anonymized if the customer asks.

---

Primary references:

  • ICMJE (2007). Clinical trial registration: a statement from the International Committee of Medical Journal Editors.
  • Nosek et al. (2018). The preregistration revolution. PNAS 115(11): 2600–2606.