Methodology
Four check-ins across 30 days, not a single snapshot
The problem with one snapshot
If you publish a Reddit thread today and check whether AI search cites it 30 days from now, the answer alone tells you nothing useful:
- If it's cited — was the work effective, or did the index just take 27 days to crawl?
- If it's not cited — did the work fail, or is the platform still catching up?
A single end-of-window measurement can't tell those four states apart.
AI platforms refresh on different cadences
| Platform | Refresh model |
|---|---|
| Perplexity | Essentially real-time live retrieval — new content can be cited within hours |
| Brave | Continuous crawl; near-real-time. Also the substrate behind Claude's web_search — so this rate covers Claude users too. |
| Google AIO | Roughly weekly model refresh; underlying SERP cache is shorter |
| ChatGPT | Training cuts + web_search retrieval; lag varies by model release; web_search is real-time but ranking signals lag |
A Day-30 audit polled once per platform sees four different layers of staleness mashed into one number.
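To make that mismatch concrete, here is a minimal sketch with assumed, illustrative lag figures loosely derived from the table above; none of these numbers are measured values.

```python
# Rough, assumed index-lag windows (in days) per platform. These figures are
# illustrative assumptions for the sketch, not measurements.
APPROX_INDEX_LAG_DAYS = {
    "Perplexity": 0.5,   # live retrieval, hours
    "Brave": 1,          # continuous crawl; also backs Claude's web_search
    "Google AIO": 7,     # roughly weekly model refresh
    "ChatGPT": 21,       # retrieval is real-time, but ranking signals lag
}

# A single Day-30 poll returns one citation rate per platform, even though
# each platform sits at a very different point in its refresh cycle.
```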
What Aoraforge does
We poll each engineered thread at four check-ins: Day 7, Day 14, Day 21, and Day 30 (relative to publication). The trajectory is the audit.
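A minimal scheduling sketch of that rule; it assumes nothing about Aoraforge's internal tooling and simply derives the four check-in dates from a thread's publication date.

```python
from datetime import date, timedelta

# Illustrative sketch only: check-ins are fixed offsets from each thread's
# publication date, per the methodology above.
CHECKIN_OFFSETS_DAYS = (7, 14, 21, 30)

def checkin_dates(published: date) -> list[date]:
    """The four check-in dates for a thread published on `published`."""
    return [published + timedelta(days=offset) for offset in CHECKIN_OFFSETS_DAYS]

# The same offsets expressed in engagement days: a thread published on
# engagement Day 12 is polled on Days 19, 26, 33, and 42 (see the FAQ below).
print([12 + offset for offset in CHECKIN_OFFSETS_DAYS])  # [19, 26, 33, 42]
```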
The five trajectory patterns
| Pattern | Day-7 | Day-14 | Day-21 | Day-30 | Verdict | What to do |
|---|---|---|---|---|---|---|
| Compounding | 0.20 | 0.45 | 0.65 | 0.70 | Work is gaining authority across runs | Hold and let it ride |
| Decaying | 0.70 | 0.60 | 0.40 | 0.25 | Work landed but newer competitor content is overtaking | Refresh schedule — republish or update content |
| Lagging | 0.00 | 0.00 | 0.05 | 0.30 | Platform hadn't caught up yet | Day-60 follow-up will likely show higher; do nothing |
| Flat zero | 0.00 | 0.00 | 0.00 | 0.00 | Engineering didn't work for this platform — or platform doesn't cite this query class | Diagnose: query class, content shape, or platform mismatch |
| Spike-decay | 0.00 | 0.50 | 0.10 | 0.05 | Brief boost then displaced; usually means a dominant source got indexed and re-displaced | Refresh + amplify the spike source |
Three of those (compounding, decaying, lagging) are common across the Citation Pack. Aggregating all of them into a single number loses every diagnostic signal.
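For illustration, a toy classifier that maps a four-point trajectory to one of the five labels. The thresholds are assumptions chosen to fit the example rows in the table, not published cut-offs, and this is not Aoraforge's production logic.

```python
def classify_trajectory(d7: float, d14: float, d21: float, d30: float) -> str:
    """Map a Day 7/14/21/30 citation-rate trajectory to a pattern label."""
    points = [d7, d14, d21, d30]
    if max(points) == 0.0:
        return "flat zero"
    peak = max(points)
    # Peak in the middle of the window, mostly gone by Day 30.
    if points.index(peak) in (1, 2) and d30 < peak * 0.5:
        return "spike-decay"
    # Monotone rise from (near) zero: the platform simply hadn't caught up.
    if d30 >= d21 >= d14 >= d7 and d7 <= 0.05:
        return "lagging"
    if d30 > d7 and d30 >= d21:
        return "compounding"
    if d30 < d7:
        return "decaying"
    return "unclassified"  # mixed shapes fall back to manual review

# Matches the table rows above, e.g.:
assert classify_trajectory(0.20, 0.45, 0.65, 0.70) == "compounding"
assert classify_trajectory(0.70, 0.60, 0.40, 0.25) == "decaying"
assert classify_trajectory(0.00, 0.50, 0.10, 0.05) == "spike-decay"
```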
Real-world example
Sample output for the query "best solar installer Sydney with battery and warranty":
```
ChatGPT trajectory:
  Day 7:  3/10 = 30% [13%, 56%]  Contested
  Day 14: 5/10 = 50% [27%, 73%]  Contested (wide CI)
  Day 21: 6/10 = 60% [36%, 80%]  Won (wide CI)
  Day 30: 7/10 = 70% [44%, 87%]  Won
  Verdict: COMPOUNDING — work is gaining authority
```
Each row carries its own Wilson 90% CI and result bucket. The pattern is the audit.
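The bracketed intervals can be reproduced with a standard Wilson score interval at 90% confidence. A minimal sketch, assuming each check-in is k cited runs out of n polls:

```python
import math

def wilson_90(k: int, n: int) -> tuple[float, float]:
    """Wilson score interval for k successes in n trials at 90% confidence."""
    z = 1.6449  # two-sided 90% normal quantile
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin

lo, hi = wilson_90(3, 10)
print(f"[{lo:.0%}, {hi:.0%}]")  # -> [13%, 56%], matching the Day-7 row above
```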
Why not more than four check-ins?
Diminishing returns. Four points across 30 days is enough to fit a clear pattern signature without polling-cost explosion. More frequent (e.g., daily) check-ins:
- Add cost without adding diagnostic value
- Add noise (daily variance is mostly retrieval-side stochasticity, not signal)
- Don't change the verdict (the same five patterns emerge from 4 points or 30)
Four is the sweet spot where the trajectory diagnosis is reliable and the cost is manageable.
What this means for the end-of-cycle manual review
The end-of-cycle verification poll is in addition to the four check-ins. It's the formal standard: at the end of the retainer cycle we manually review the polling logs and count whether the brand was cited on at least one platform on at least 5 of the 15 pre-registered queries. See Pre-registered queries — committed before any work begins for the full review logic.
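As a sketch of that counting rule only, assuming the polling logs can be reduced to one flag per query (was the brand cited on at least one platform at any point in the cycle). The names here are hypothetical, not Aoraforge's log schema.

```python
def cycle_passes(cited_per_query: dict[str, bool], threshold: int = 5) -> bool:
    """True if at least `threshold` of the pre-registered queries earned a citation."""
    return sum(cited_per_query.values()) >= threshold

# Example: 6 of the 15 queries earned at least one citation -> pass.
logs = {f"query_{i}": (i < 6) for i in range(15)}
print(cycle_passes(logs))  # True
```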
Frequently asked
Can I get more than four check-ins for a specific high-stakes query? Yes. Customers can request additional check-ins for individual queries at kickoff. The four-point baseline is the minimum; more is fine.
What if a thread takes longer than 7 days to publish? Check-ins are relative to each thread's publication date, not the engagement start. If the customer publishes a thread on Day 12, check-ins for that thread happen at engagement Days 19, 26, 33, 42.
Do you re-poll for long-tail durability beyond the cycle? Pro Audit re-runs at 60–90 day cadence are how customers measure long-tail durability. The Citation Pack's formal measurement window is bounded by the retainer cycle; beyond that, re-running the Pro Audit is the right tool.
Is the trajectory signal robust to platform changes? The trajectory categories are robust (compounding/decaying/lagging/flat/spike-decay are platform-agnostic). The specific rates within a category will shift as platform behaviors evolve, but the diagnostic framework holds.