Audit any page for AI Overview citation readiness. Queue automatic fixes. Track results.
5-step quick start
1. Open Clients in the sidebar. Click + Add client. Fill in: slug (folder name), WP URL, WP user, WP App Password, industry, business facts JSON. Save.
2. Open Scan in the sidebar. Pick a mode: single URL, whole site (sitemap), or paste-list. Hit Run. Each page takes 3-8 sec.
3. Open Findings for the table view, Strategy for the consultant memo (top priority pages + dominant pattern), or Per-page Fix for the detailed audit of any single URL.
4. In the Findings table, click Queue fix on any row. Enter the client slug. The Netlify Function loads that client's WP creds and applies the fix automatically (schema retrofit, meta rewrite, indexing API resubmit). Takes ~30 sec per page.
5. Open Fix Queue to see status (pending → processing → done) with score before/after. Click "Details" on any row to see what changed (schema diff, meta rewrite, checks fixed, checks still failing).
Sidebar map
| Section | What it does |
|---|---|
| Scan | Audit a URL, a whole site (sitemap), or a paste-list |
| Findings | Table of every audited page with grade, score, pattern, top issue |
| Strategy | Consultant memo: dominant pattern + top priority pages to fix first |
| Issue Breakdown | Aggregated table of every issue type and how many pages have it |
| Per-page Fix | Pick any URL, see all 21+ checks pass/fail with specific fix instructions |
| Fix Queue | Status of every fix triggered. Auto-refreshes. Click Details for diff view. |
| Export | Download CSV / JSON / Markdown / "Claude Code prompt" for batch fixing |
| History | Past scans saved to Supabase. Click any row to reload it. |
| Clients | Add/edit client config (WP creds + business facts). Required for "Queue fix" to work. |
| What's checked | Methodology: every check explained, what bad/good looks like, why we check it |
Need help?
Internal playbook: /reference/RANK_MATH_SCHEMA_FIX_PLAYBOOK.md in the visibility-engine repo
Slack: #oliversonlaw for client-specific questions, #rcdc-team-tasks for general
If a fix fails, check the Fix Queue → Details panel. Most common issues: client_slug typo (must match folder name with hyphen), or the page is on a different builder (Elementor) where post_content edits don't render.
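Since a mistyped slug is the most common failure, it can be caught before queuing by checking the format client-side. A minimal sketch (the `validate_slug` helper is illustrative, not part of the dashboard):

```python
import re

# Client slugs are lowercase, hyphen-separated folder names, e.g. "acme-roofing".
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_slug(slug: str) -> bool:
    """Return True if the slug looks like a valid hyphenated folder name."""
    return bool(SLUG_RE.match(slug))

print(validate_slug("oliversonlaw"))  # valid: lowercase, no spaces
print(validate_slug("Olivers Law"))   # invalid: capitals and a space
```

This only checks the format; the slug must still match an existing client folder exactly.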
How this grader works
The grader runs two layers of checks:
- Universal Tier 1 + Tier 2 — checks that drive citation across every service-business vertical: meta title/description quality, schema present and correctly placed, single H1, answer capsule after H1, BreadcrumbList, Person schema with credentials, @graph format, FAQPage with 5+ Q&As, robots max-snippet, freshness signals, AggregateRating, authority outbound links, unique @ids. These are structural signals that AI systems use regardless of industry.
- Industry-aware schema check — the grader auto-detects your vertical (legal, medical, dental, real estate, plumbing, HVAC, roofing, solar, restaurant, etc.) and checks for the correct Schema.org @type for that niche. A solar site needs `SolarInstaller`, not `Attorney`. A dental site needs `Dentist`. A generic `LocalBusiness` is allowed as a fallback but flagged when a more direct type exists.
- Industry-specific Tier 2 packs — additional checks calibrated per vertical: legal sites get statute-citation checks, medical/dental get peer-reviewed reference checks, etc. New industry packs are added as we audit cited pages from each vertical.
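As a rough sketch of how the industry-aware layer works (the `EXPECTED_TYPE` map and `check_industry_schema` helper here are illustrative, not the grader's actual code):

```python
# Expected Schema.org @type per vertical (small subset for illustration).
EXPECTED_TYPE = {
    "legal": "Attorney",
    "dental": "Dentist",
    "solar": "SolarInstaller",  # as named by the grader; verify against Schema.org
}

def check_industry_schema(industry: str, jsonld_blocks: list[dict]) -> str:
    """Return 'pass', 'fallback', or 'fail' for the industry-aware schema check."""
    types: set = set()
    for block in jsonld_blocks:
        # Handle both flat JSON-LD objects and consolidated @graph arrays.
        nodes = block.get("@graph", [block])
        for node in nodes:
            t = node.get("@type")
            types.update(t if isinstance(t, list) else [t])
    expected = EXPECTED_TYPE.get(industry)
    if expected in types:
        return "pass"
    if "LocalBusiness" in types:
        return "fallback"  # allowed, but flagged when a more direct type exists
    return "fail"

blocks = [{"@graph": [{"@type": "LocalBusiness"}]}]
print(check_industry_schema("dental", blocks))  # fallback
```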
Empirical basis: the legal-vertical pack is calibrated against dmcantor.com (36% AI Overview citation rate, 147/407 prompts April 2026). We audited 8 of their cited pages and 165 lower-citing competitor posts to identify what actually predicts citation in that vertical. The universal checks transfer cross-industry; vertical-specific checks need their own calibration as we add packs (medical, dental, solar, real estate, etc.).
What we explicitly dropped: theoretical SEO checks that did NOT predict citation in the audited data — speakable schema, table-of-contents, bolded first sentence, HowTo schema, question-only H2s. These may help SEO generally but did not distinguish cited from uncited pages.
What the grades mean
| Grade | Meaning | Trigger |
|---|---|---|
| PASS | Citation-ready. Page has nearly all must-haves and most should-haves. | Tier 1 ≥ 85% AND Tier 2 ≥ 50% |
| PARTIAL | Most must-haves present, but missing key signals. Likely indexed but not cited consistently. | Tier 1 ≥ 70% but PASS thresholds not met |
| FAIL | Missing core must-haves. Will rarely or never be cited by AI Overviews. | Tier 1 < 70% |
| ERROR | Page couldn't be fetched (timeout, 404, 5xx, blocked). | HTTP error or timeout > 25s |
Score (0-100): Tier 1 weighted 70%, Tier 2 weighted 30%. A page can score 100 only if it passes every check.
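The thresholds and weights above combine into a single grading function. A minimal sketch, assuming each tier's pass rate arrives as a 0-100 percentage (illustrative, not the grader's actual code):

```python
def grade(tier1_pct: float, tier2_pct: float, fetch_error: bool = False) -> tuple[str, int]:
    """Combine tier pass rates into a grade and a 0-100 score (Tier 1 weighted 70%)."""
    if fetch_error:
        return "ERROR", 0
    score = round(tier1_pct * 0.7 + tier2_pct * 0.3)
    if tier1_pct >= 85 and tier2_pct >= 50:
        return "PASS", score
    if tier1_pct >= 70:
        return "PARTIAL", score
    return "FAIL", score

print(grade(90, 60))   # ('PASS', 81)
print(grade(80, 40))   # ('PARTIAL', 68)
print(grade(60, 80))   # ('FAIL', 66)
```

Note how the weighting means a page can score in the 60s yet still FAIL: the grade gates on Tier 1 alone, while the score blends both tiers.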
Tier 1 — must-have for AI citation
If any of these are missing, AI Overviews will skip your page even if it ranks well organically.
| Check | What it looks for | Why it matters |
|---|---|---|
| Meta title | 50-60 chars, has separator (`|` or `-`) | Click-through rate. Google truncates after ~60 chars. |
| Meta description | 150-160 chars, has CTA, doesn't duplicate title | SERP click-through. Rank Math syncs this to og:description. |
| H1 differs from title | H1 text not identical to meta title | H1 should add a benefit; identical = wasted real estate. |
| Attorney/LegalService schema | JSON-LD with @type Attorney or LegalService | Cantor uses both. Direct schema beats generic LocalBusiness. |
| Article/BlogPosting schema | JSON-LD with @type Article (preferred) or BlogPosting | 88% of cited Cantor pages use Article. Article is stronger than BlogPosting. |
| priceRange | "priceRange": "$$" (or similar) in business schema | Cantor includes this. AI uses it as a quality signal. |
| GeoCoordinates | Latitude/longitude in Place schema | Required for "near me" + local AI Overview citations. |
| BreadcrumbList | BreadcrumbList JSON-LD | Helps AI understand page hierarchy + topical context. |
| Person schema | Author with hasCredential array | E-E-A-T signal. AI prefers attributed authors. |
| @graph format | Schemas consolidated in single @graph array | Avoids duplicate @ids. Easier for parsers. |
| Schema position | Inline JSON-LD NOT at top of body content | Schema in head or end of body. Top-of-body inline schema breaks reading flow. |
| Single H1 | Exactly one H1 tag on page | Multiple H1s confuse semantic parsers. Common WordPress theme bug. |
| Answer capsule | 30-80 word paragraph after H1 | This is the chunk AI extracts and quotes verbatim. |
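A few of the Tier 1 checks above reduce to simple predicates. A minimal sketch of three of them (simplified string/regex checks for illustration; the grader's real implementation may differ):

```python
import re

def check_meta_title(title: str) -> bool:
    """Tier 1: 50-60 characters and a separator ('|' or '-')."""
    return 50 <= len(title) <= 60 and ("|" in title or "-" in title)

def check_single_h1(html: str) -> bool:
    """Tier 1: exactly one <h1> tag on the page."""
    return len(re.findall(r"<h1[\s>]", html, re.IGNORECASE)) == 1

def check_answer_capsule(paragraph: str) -> bool:
    """Tier 1: the paragraph after the H1 runs 30-80 words."""
    return 30 <= len(paragraph.split()) <= 80

print(check_single_h1("<h1>DUI Penalties in Arizona</h1><p>...</p>"))  # True
```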
Tier 2 — should-have
Missing these reduces citation probability but won't block it entirely.
| Check | What it looks for | Why it matters |
|---|---|---|
| robots max-snippet | <meta name="robots" content="max-snippet:-1"> | Tells Google: extract any length snippet. Default limits AI quotes. |
| article:published + modified | Open Graph article date meta tags | Freshness signal. Cantor refreshes modified_time regularly. |
| Visible byline | "By [Author]" or class="author-byline" in body | Schema author alone isn't enough. AI needs visible attribution. |
| AggregateRating | AggregateRating with reviewCount + ratingValue | Trust signal. Google enriches SERP with stars. |
| Statute citations | 3+ A.R.S. / U.S.C. / C.F.R. references | Industry-specific (legal). Authority signal. Cantor cites 4-12 per page. |
| Authority links | 3+ outbound links to .gov / .edu | Trust transfer from authoritative sources. |
| FAQPage with 5+ Q&As | FAQPage schema with mainEntity array length ≥ 5 | 3.2x citation probability vs no FAQ. Each Q is a citation chance. |
| Unique @ids | Every JSON-LD @id is distinct | Duplicate @ids confuse Google's structured data parser. |
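Two of the Tier 2 checks above can likewise be sketched in a few lines. This is a simplified illustration (the regex assumes `name` appears before `content` in the meta tag, which real markup may not guarantee):

```python
import re

def check_max_snippet(html: str) -> bool:
    """Tier 2: robots meta allows unlimited snippet length (max-snippet:-1)."""
    return bool(re.search(r'name=["\']robots["\'][^>]*max-snippet:-1', html))

def check_unique_ids(nodes: list[dict]) -> bool:
    """Tier 2: every JSON-LD @id across all nodes is distinct."""
    ids = [n["@id"] for n in nodes if "@id" in n]
    return len(ids) == len(set(ids))

nodes = [{"@id": "#org"}, {"@id": "#person"}, {"@id": "#org"}]
print(check_unique_ids(nodes))  # False — "#org" appears twice
```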
How to use the results
- Run a scan on a single URL or whole site (sitemap)
- Findings tab shows every page with its grade + top issue
- Issue Breakdown tab aggregates issues — fix the most common ones first
- Per-page Fix tab shows all 21 checks for any page with the specific fix copy
- Export tab → "Claude Code prompt" gives you a paste-ready brief that fixes every issue tier-by-tier in a new Claude Code session
- Re-run after fixes. Score improvements are immediate; citation lift takes 1-3 weeks while AI systems re-crawl.
What to scan
| URL | Grade | Score | Pattern | T1 | T2 | Title | Desc | Recommendation |
|---|---|---|---|---|---|---|---|---|
| Run a scan to populate. | | | | | | | | |
Strategic recommendation
A senior consultant's read of your audit. What pattern is driving the failures, and what to do next.
Run a scan first. The strategist will analyze the failure pattern and recommend the next step.
Issues across all audited pages
| Tier | Issue | Affected pages | % of audited | Fix |
|---|---|---|---|---|
| Run a scan to populate. | | | | |
Per-page fix plan
Pick a page above to see its specific issues.
Fix queue
Pages queued for the n8n retrofit workflow. n8n will pick up pending rows, fix them, then mark done with before/after audit data.
| Queued | URL | Client | Fix types | Status | Score before | Score after |
|---|---|---|---|---|---|---|
| No pages queued yet. Use the "Send to retrofit" button on any FAIL/PARTIAL page. | | | | | | |
Scan history
Past scans saved to Supabase. Click any row to load it back into the dashboard.
| Date | Domain | Mode | Pages | Pass | Partial | Fail | Avg score | Label |
|---|---|---|---|---|---|---|---|---|
| Loading... | | | | | | | | |
Export results
Download in any format. The Claude Code prompt is ready to paste into a new Claude Code session.
Clients
Each client's WP credentials + business facts. The "Queue fix" button uses these to apply fixes via the Netlify Function. Stored encrypted in Supabase, never exposed to the dashboard.
| Slug | Business name | WP URL | Industry | Active |
|---|---|---|---|---|
| Loading... | | | | |