Welcome to the AI Citation Grader

Audit any page for AI Overview citation readiness. Queue automatic fixes. Track results.

5-step quick start

1. Add a client (one time per client)

Open Clients in the sidebar. Click + Add client. Fill in: slug (folder name), WP URL, WP user, WP App Password, industry, business facts JSON. Save.
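The business facts JSON is presumably what the retrofit draws on for schema fields (business name, geo, priceRange, author credentials). A minimal sketch of what it might contain; the field names are assumptions, not the grader's required schema, and every value is a placeholder:

```json
{
  "business_name": "Example Law Firm",
  "industry": "legal",
  "phone": "+1-602-555-0100",
  "address": {
    "streetAddress": "123 N Central Ave",
    "addressLocality": "Phoenix",
    "addressRegion": "AZ",
    "postalCode": "85004"
  },
  "geo": { "latitude": 33.4484, "longitude": -112.074 },
  "priceRange": "$$",
  "authors": [
    { "name": "Jane Doe", "credential": "J.D., State Bar of Arizona" }
  ]
}
```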

2. Scan a URL or whole site

Open Scan in the sidebar. Pick mode: single URL, whole site (sitemap), or paste-list. Hit Run. Each page takes 3-8 sec.
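In paste-list mode the expected input is presumably one URL per line, for example:

```
https://www.example.com/blog/dui-penalties/
https://www.example.com/practice-areas/divorce/
https://www.example.com/contact/
```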

3. Review the findings

Open Findings for the table view, Strategy for the consultant memo (top-priority pages + dominant pattern), and Per-page Fix for the detailed audit of any single URL.

4. Queue fixes for FAIL/PARTIAL pages

In the Findings table, click Queue fix on any row. Enter the client slug. The Netlify Function loads that client's WP creds and applies the fix automatically (schema retrofit, meta rewrite, indexing API resubmit). Takes ~30 sec per page.
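Under the hood this is a request to a Netlify Function. A minimal sketch of what an equivalent call might look like; the endpoint path, payload fields, and fix-type names here are assumptions for illustration, not the documented contract:

```ts
// Hypothetical sketch: queue a fix for one URL via a Netlify Function.
// The path and body shape are assumptions; check the repo for the real contract.
async function queueFix(url: string, clientSlug: string): Promise<void> {
  const res = await fetch("/.netlify/functions/queue-fix", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url,                     // page to retrofit
      client_slug: clientSlug, // must match the client folder name, hyphen included
      fix_types: ["schema_retrofit", "meta_rewrite", "indexing_resubmit"],
    }),
  });
  if (!res.ok) throw new Error(`Queue fix failed: ${res.status}`);
}
```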

5. Watch fixes apply in the Queue

Open Fix Queue to see status (pending → processing → done) with score before/after. Click "Details" on any row to see what changed (schema diff, meta rewrite, checks fixed, checks still failing).

Sidebar map

Section | What it does
Scan | Audit a URL, a whole site (sitemap), or a paste-list
Findings | Table of every audited page with grade, score, pattern, top issue
Strategy | Consultant memo: dominant pattern + top priority pages to fix first
Issue Breakdown | Aggregated table of every issue type and how many pages have it
Per-page Fix | Pick any URL, see all 21+ checks pass/fail with specific fix instructions
Fix Queue | Status of every fix triggered. Auto-refreshes. Click Details for diff view.
Export | Download CSV / JSON / Markdown / "Claude Code prompt" for batch fixing
History | Past scans saved to Supabase. Click any row to reload it.
Clients | Add/edit client config (WP creds + business facts). Required for "Queue fix" to work.
What's checked | Methodology: every check explained, what bad/good looks like, why we check it

Need help?

Internal playbook: /reference/RANK_MATH_SCHEMA_FIX_PLAYBOOK.md in the visibility-engine repo

Slack: #oliversonlaw for client-specific questions, #rcdc-team-tasks for general

If a fix fails, check the Fix Queue → Details panel. The most common issues are a client_slug typo (it must match the folder name, hyphen included) or a page built with a different builder (e.g. Elementor), where post_content edits don't render.

How this grader works

The grader runs three layers of checks:

  • Universal Tier 1 + Tier 2 — checks that drive citation across every service-business vertical: meta title/description quality, schema present and correctly placed, single H1, answer capsule after H1, BreadcrumbList, Person schema with credentials, @graph format, FAQPage with 5+ Q&As, robots max-snippet, freshness signals, AggregateRating, authority outbound links, unique @ids. These are structural signals that AI systems use regardless of industry (a minimal markup sketch follows this list).
  • Industry-aware schema check — the grader auto-detects your vertical (legal, medical, dental, real estate, plumbing, HVAC, roofing, solar, restaurant, etc.) and checks for the correct Schema.org @type for that niche. A solar site needs SolarInstaller, not Attorney. A dental site needs Dentist. Generic LocalBusiness is allowed as fallback but flagged when a more direct type exists.
  • Industry-specific Tier 2 packs — additional checks calibrated per vertical: legal sites get statute-citation checks, medical/dental get peer-reviewed reference checks, etc. New industry packs are added as we audit cited pages from each vertical.
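For the universal structural signals, here is a stripped-down @graph sketch showing Article, Person with hasCredential, BreadcrumbList, FAQPage (truncated to one question here; the check wants 5+), and unique @ids. URLs, names, and dates are placeholders, and this is illustrative rather than the exact markup the grader emits or requires:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      "@id": "https://www.example.com/dui-penalties/#article",
      "headline": "Arizona DUI Penalties Explained",
      "datePublished": "2024-03-18",
      "dateModified": "2025-06-02",
      "author": { "@id": "https://www.example.com/#attorney-jane" }
    },
    {
      "@type": "Person",
      "@id": "https://www.example.com/#attorney-jane",
      "name": "Jane Doe",
      "jobTitle": "Criminal Defense Attorney",
      "hasCredential": [
        {
          "@type": "EducationalOccupationalCredential",
          "credentialCategory": "degree",
          "name": "Juris Doctor"
        }
      ]
    },
    {
      "@type": "BreadcrumbList",
      "@id": "https://www.example.com/dui-penalties/#breadcrumbs",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.example.com/" },
        { "@type": "ListItem", "position": 2, "name": "DUI Penalties" }
      ]
    },
    {
      "@type": "FAQPage",
      "@id": "https://www.example.com/dui-penalties/#faq",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Is a first-offense DUI a felony in Arizona?",
          "acceptedAnswer": { "@type": "Answer", "text": "Placeholder answer text." }
        }
      ]
    }
  ]
}
```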

Empirical basis: the legal-vertical pack is calibrated against dmcantor.com (36% AI Overview citation rate, 147/407 prompts April 2026). We audited 8 of their cited pages and 165 lower-citing competitor posts to identify what actually predicts citation in that vertical. The universal checks transfer cross-industry; vertical-specific checks need their own calibration as we add packs (medical, dental, solar, real estate, etc.).

What we explicitly dropped: theoretical SEO checks that did NOT predict citation in the audited data — speakable schema, table-of-contents, bolded first sentence, HowTo schema, question-only H2s. These may help SEO generally but did not distinguish cited from uncited pages.

What the grades mean

Grade | Meaning | Trigger
PASS | Citation-ready. Page has nearly all must-haves and most should-haves. | Tier 1 ≥ 85% AND Tier 2 ≥ 50%
PARTIAL | Most must-haves present, but missing key signals. Likely indexed but not cited consistently. | Tier 1 ≥ 70% (or PASS thresholds not met)
FAIL | Missing core must-haves. Will rarely or never be cited by AI Overviews. | Tier 1 < 70%
ERROR | Page couldn't be fetched (timeout, 404, 5xx, blocked). | HTTP error or timeout > 25s

Score (0-100): Tier 1 weighted 70%, Tier 2 weighted 30%. A page can score 100 only if it passes every check.
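A worked example of the weighting, assuming the score is a straight weighted average of per-tier pass rates over the 13 Tier 1 and 8 Tier 2 checks (the real implementation may differ in the details):

```ts
// Assumed scoring model: weighted average of Tier 1 / Tier 2 pass rates.
function scorePage(
  tier1Passed: number, tier1Total: number,
  tier2Passed: number, tier2Total: number,
): number {
  const t1 = tier1Passed / tier1Total;
  const t2 = tier2Passed / tier2Total;
  return Math.round((0.7 * t1 + 0.3 * t2) * 100);
}

// Example: 11 of 13 Tier 1 checks and 4 of 8 Tier 2 checks pass.
// Tier 1 = 84.6%, Tier 2 = 50% -> score ≈ 74 and grade PARTIAL
// (Tier 1 is below the 85% PASS threshold but at or above 70%).
scorePage(11, 13, 4, 8); // ≈ 74
```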

Tier 1 — must-have for AI citation

Miss any of these and AI Overviews will skip your page even if it ranks well organically.

Check | What it looks for | Why it matters
Meta title | 50-60 chars, has separator (`|` or `-`) | Click-through rate. Google truncates after ~60 chars.
Meta description | 150-160 chars, has CTA, doesn't duplicate title | SERP click-through. Rank Math syncs this to og:description.
H1 differs from title | H1 text not identical to meta title | H1 should add a benefit; identical = wasted real estate.
Attorney/LegalService schema | JSON-LD with @type Attorney or LegalService | Cantor uses both. Direct schema beats generic LocalBusiness.
Article/BlogPosting schema | JSON-LD with @type Article (preferred) or BlogPosting | 88% of cited Cantor pages use Article. Article is stronger than BlogPosting.
priceRange | "priceRange": "$$" (or similar) in business schema | Cantor includes this. AI uses it as a quality signal.
GeoCoordinates | Latitude/longitude in Place schema | Required for "near me" + local AI Overview citations.
BreadcrumbList | BreadcrumbList JSON-LD | Helps AI understand page hierarchy + topical context.
Person schema | Author with hasCredential array | E-E-A-T signal. AI prefers attributed authors.
@graph format | Schemas consolidated in single @graph array | Avoids duplicate @ids. Easier for parsers.
Schema position | Inline JSON-LD NOT at top of body content | Schema belongs in the head or at the end of the body. Top-of-body inline schema breaks reading flow.
Single H1 | Exactly one H1 tag on page | Multiple H1s confuse semantic parsers. Common WordPress theme bug.
Answer capsule | 30-80 word paragraph after H1 | This is the chunk AI extracts and quotes verbatim.
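To make the last two rows concrete, a rough sketch of a passing page opening: exactly one H1, followed immediately by the answer capsule. The bracketed copy is placeholder guidance, not a template the grader enforces:

```html
<h1>How Long Does a DUI Stay on Your Record in Arizona?</h1>

<!-- Answer capsule: a single 30-80 word paragraph directly after the only H1.
     This is the chunk an AI Overview is most likely to extract and quote verbatim,
     so it should answer the H1 question in the first sentence and stand on its own. -->
<p>
  [30-80 word direct answer to the H1 question, written so it still makes sense
  when quoted out of context.]
</p>
```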

Tier 2 — should-have

Missing these reduces citation probability but won't block it entirely.

Check | What it looks for | Why it matters
robots max-snippet | `<meta name="robots" content="max-snippet:-1">` | Tells Google: extract any length snippet. Default limits AI quotes.
article:published + modified | Open Graph article date meta tags | Freshness signal. Cantor refreshes modified_time regularly.
Visible byline | "By [Author]" or class="author-byline" in body | Schema author alone isn't enough. AI needs visible attribution.
AggregateRating | AggregateRating with reviewCount + ratingValue | Trust signal. Google enriches SERP with stars.
Statute citations | 3+ A.R.S. / U.S.C. / C.F.R. references | Industry-specific (legal). Authority signal. Cantor cites 4-12 per page.
Authority links | 3+ outbound links to .gov / .edu | Trust transfer from authoritative sources.
FAQPage with 5+ Q&As | FAQPage schema with mainEntity array length ≥ 5 | 3.2x citation probability vs no FAQ. Each Q is a citation chance.
Unique @ids | Every JSON-LD @id is distinct | Duplicate @ids confuse Google's structured data parser.
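A sketch of the markup behind the first three Tier 2 rows (robots directive, Open Graph article dates, visible byline); the dates and author name are placeholders:

```html
<!-- In <head>: allow snippets of any length instead of the default limit -->
<meta name="robots" content="max-snippet:-1">

<!-- In <head>: Open Graph article dates as freshness signals -->
<meta property="article:published_time" content="2024-03-18T09:00:00-07:00">
<meta property="article:modified_time" content="2025-06-02T14:30:00-07:00">

<!-- In <body>: visible byline in addition to the schema author -->
<p class="author-byline">By Jane Doe, Criminal Defense Attorney</p>
```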

How to use the results

  1. Run a scan on a single URL or whole site (sitemap)
  2. Findings tab shows every page with its grade + top issue
  3. Issue Breakdown tab aggregates issues — fix the most common ones first
  4. Per-page Fix tab shows all 21 checks for any page with the specific fix copy
  5. Export tab → "Claude Code prompt" gives you a paste-ready brief that fixes every issue tier-by-tier in a new Claude Code session
  6. Re-run after fixes. Score improvements are immediate; citation lift takes 1-3 weeks as AI systems re-crawl.

What to scan

The detected or configured industry drives the schema @type check plus any vertical-specific Tier 2 checks.
Each page takes ~3-8 seconds. 50 pages = ~5 min. 200 pages = ~20 min. 2000 pages = ~3 hours (the browser tab may not stay open that long). For a full portfolio, use the bulk audit script.
The results table lists every audited page with its URL, grade, score, pattern, Tier 1 and Tier 2 percentages, title, description, and recommendation, alongside summary counts (total audited, pass, partial, fail, errors) and the average score. It populates after you run a scan.

Strategic recommendation

A senior consultant's read of your audit. What pattern is driving the failures, and what to do next.

Run a scan first. The strategist will analyze the failure pattern and recommend the next step.

Issues across all audited pages

The breakdown table lists each issue with its tier, the number of affected pages, the percentage of audited pages, and the fix. It populates after you run a scan.

Per-page fix plan

Pick a page above to see its specific issues.

Fix queue

Auto-refreshes every 12 seconds.

Pages queued for the n8n retrofit workflow. n8n will pick up pending rows, fix them, then mark done with before/after audit data.
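A rough sketch of the shape of one queue row, inferred from the columns the dashboard shows; the actual Supabase column names may differ:

```ts
// Assumed shape of one fix-queue row; real Supabase column names may differ.
interface FixQueueRow {
  queued_at: string;                                    // ISO timestamp
  url: string;                                          // page being retrofitted
  client_slug: string;                                  // which client's WP creds to use
  fix_types: string[];                                  // e.g. ["schema_retrofit", "meta_rewrite"]
  status: "pending" | "processing" | "done" | "failed";
  score_before: number | null;                          // audit score before the fix
  score_after: number | null;                           // filled in after re-audit
}
```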

The queue shows counts of pending, processing, done, and failed fixes. Each row records when it was queued, the URL, the client, the fix types, the status, and the score before and after. To queue a page, use the "Send to retrofit" button on any FAIL/PARTIAL page.

Scan history

Past scans saved to Supabase. Click any row to load it back into the dashboard.

Each history row shows the date, domain, mode, page count, pass/partial/fail counts, average score, and label.

Export results

Download in any format. The Claude Code prompt is ready to paste into a new Claude Code session.

Clients

Each client's WP credentials + business facts. The "Queue fix" button uses these to apply fixes via the Netlify Function. Stored encrypted in Supabase, never exposed to the dashboard.

The client table lists each client's slug, business name, WP URL, industry, and active status.