2026-04-03
Scrutia vs a professional audit: 91% concordance
Whenever you run an automated tool on a site, the question is always the same: is it worth anything compared to a real audit?
We decided to answer with data. We compared Scrutia's results with the official accessibility declarations of nine public websites — audits carried out by recognized expert firms on samples of 13 to 22 pages.
The key number: ±5 points
Across the five sites with a recent audit (2024–2025), the average gap between Scrutia's estimated score and the official score is 5 points. That is remarkably close for an automated tool that audits a single page in 2 minutes.
The tightest results:
- City A: 79% estimated (Scrutia) vs 81% official (gap: 2 points)
- City B: 89% estimated (Scrutia) vs 84% official (gap: 5 points)
- Health agency: 74% estimated (Scrutia) vs 82% official (gap: 8 points)
Detailed analysis: a large city website
We pushed the comparison the furthest on a large European city website. The official audit, carried out in late 2024 on 18 pages, found 81% compliance and 11 non-conformances.
Scrutia found 10 of those 11 non-conformances automatically:
- Decorative images not properly hidden from assistive technologies
- Non-explicit links ("Learn more" without context)
- Cookie modal not compatible with assistive technologies
- Language changes not marked up in the source
- Invisible content when CSS is disabled
- Horizontal overflow on a 320 px wide screen
- Text spacing that cannot be adjusted without content loss
- Form fields without associated labels
- Inconsistent tab order
- And more
The only criterion Scrutia could not catch concerned downloadable office documents — each PDF would need to be opened individually to check its tagging structure, which falls outside the scope of a page-level scanner.
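Checks like the first item in the list above lend themselves to deterministic code. A minimal sketch — our simplification, not Scrutia's actual rule — assuming each `<img>` has been reduced to a plain object (field names are ours; `alt` is `null` when the attribute is absent):

```javascript
// Flag images that are neither described nor hidden from assistive
// technologies. An <img> with no alt attribute at all makes screen
// readers fall back to the file name; a decorative image should carry
// alt="" (or role="presentation", or aria-hidden="true") instead.
function findUndecidedImages(images) {
  return images.filter(img =>
    img.alt === null &&
    img.role !== 'presentation' &&
    !img.ariaHidden
  );
}

// Example: only the first image is flagged — the second is described,
// the third is properly marked as decorative.
const flagged = findUndecidedImages([
  { src: 'deco.svg',   alt: null,   role: null,           ariaHidden: false },
  { src: 'logo.svg',   alt: 'Logo', role: null,           ariaHidden: false },
  { src: 'spacer.gif', alt: null,   role: 'presentation', ariaHidden: false }
]);
// flagged.map(i => i.src) → ['deco.svg']
```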
8 additional issues
Beyond the overlap, Scrutia flagged 8 extra non-conformances the official audit had not reported:
- Focusable SVGs inside aria-hidden elements, creating keyboard traps on 13 pages
- 18 links opening new windows without warning the user
- Duplicate HTML IDs in SVGs across every page
- An empty h1 on the homepage
- In-text links visually indistinguishable from the surrounding copy
Some are genuine problems the official audit may have classified as minor. Others reflect a stricter interpretation of certain criteria. Either way, they are concrete improvement leads for the engineering team.
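The aria-hidden keyboard trap above is a good example of an automatable check. A hedged sketch over a simplified DOM tree — plain `{ tag, attrs, children }` objects, and a deliberately minimal notion of "focusable"; Scrutia's real rule is certainly more complete:

```javascript
// Elements kept simple: native focusable tags, plus anything carrying
// a tabindex attribute (which is how the flagged SVGs became focusable).
const FOCUSABLE_TAGS = new Set(['a', 'button', 'input', 'select', 'textarea']);

// Walk the tree and collect focusable nodes under an aria-hidden="true"
// ancestor. Such nodes keep their place in the tab order while screen
// readers announce nothing: Tab appears to land on dead air.
function findSilentTabStops(node, hidden = false, out = []) {
  const attrs = node.attrs || {};
  const nowHidden = hidden || attrs['aria-hidden'] === 'true';
  const focusable = FOCUSABLE_TAGS.has(node.tag) || 'tabindex' in attrs;
  if (nowHidden && focusable && attrs.tabindex !== '-1') out.push(node);
  for (const child of node.children || []) {
    findSilentTabStops(child, nowHidden, out);
  }
  return out;
}

// Example: the SVG is focusable (tabindex="0") inside a hidden container.
const traps = findSilentTabStops({
  tag: 'div', attrs: { 'aria-hidden': 'true' }, children: [
    { tag: 'svg', attrs: { tabindex: '0' } },
    { tag: 'span', attrs: {} }
  ]
});
// traps.length → 1, traps[0].tag → 'svg'
```

The fix is either `tabindex="-1"` on the focusable descendants or removing `aria-hidden` from the container.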
Why the scores differ
The official score (81%) and Scrutia's confirmed score (60%) look far apart. But they do not measure the same thing:
A human audit decides everything. Experts give a firm verdict (compliant or not) for every criterion. Zero items left "to verify".
Scrutia is cautious. 35 criteria are marked "to verify manually" — they count neither as compliant nor as non-compliant. If we treat them as compliant (a reasonable assumption for a well-built site), the estimated score rises to 79% — just 2 points from the official one.
Scrutia is also stricter. The 8 extra non-conformances mechanically lower the confirmed score. A human expert might have classified some of them as minor or grouped them together.
That is why Scrutia now displays a range (60%–79%) instead of a single number. Reality sits somewhere between the two, and the official audit (81%) confirms it.
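The arithmetic behind such a range can be sketched as follows. This is our reading, not Scrutia's published formula: "to verify" criteria are excluded from the confirmed score and counted as compliant in the estimate. The article fixes only the 35 "to verify" criteria and the two percentages; the other counts below are illustrative values chosen to reproduce them.

```javascript
// Confirmed score: decided criteria only.
// Estimated score: "to verify" criteria treated as compliant.
function scoreRange({ compliant, nonCompliant, toVerify }) {
  const confirmed = Math.round(100 * compliant / (compliant + nonCompliant));
  const estimated = Math.round(
    100 * (compliant + toVerify) / (compliant + nonCompliant + toVerify)
  );
  return { confirmed, estimated };
}

// Illustrative counts (24 compliant, 16 non-compliant, 35 to verify)
// reproduce the article's range:
const range = scoreRange({ compliant: 24, nonCompliant: 16, toVerify: 35 });
// range → { confirmed: 60, estimated: 79 }
```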
What it means for you
A pre-audit in 2 minutes, not 5 days
A full audit from an expert firm takes several days and costs several thousand euros. It is justified and necessary for legal compliance. But between two official audits, how do you know whether a redesign introduced regressions? How do you prioritize fixes?
Scrutia gives you an immediate, actionable diagnosis. In 2 minutes, you know which issues are the most critical, with code fixes for each one.
Complementary, not competing
The goal is not to replace human expertise. Many WCAG criteria fundamentally require a human (screen-reader testing, visual analysis, document review). Scrutia automatically covers the rest and precisely flags which criteria still need manual verification.
The ideal workflow: Scrutia continuously (on every deployment, every sprint) + an expert audit once a year for the official declaration.
Reproducible results
Unlike a scanner that gives a different result on every pass, most of Scrutia's verdicts are deterministic — JavaScript code that inspects the DOM, not an AI interpreting it. Same page, same result, every time.
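Duplicate-ID detection, one of the issues flagged earlier, shows what a deterministic verdict looks like. A sketch of ours (not Scrutia's code), reduced to its core logic over the `id` attributes collected from a page:

```javascript
// Count every id, then report those that occur more than once.
// Duplicate ids break aria-labelledby, <label for>, and anchor links,
// because the browser resolves the id to only one element.
// Pure function: same input, same output, every run.
function findDuplicateIds(ids) {
  const counts = new Map();
  for (const id of ids) {
    counts.set(id, (counts.get(id) || 0) + 1);
  }
  return [...counts].filter(([, n]) => n > 1).map(([id]) => id);
}

// Example — an id repeated across inlined SVGs:
// findDuplicateIds(['icon-clip', 'icon-clip', 'main']) → ['icon-clip']
```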
Methodology
Scrutia scores are "estimated scores": they treat "to verify manually" criteria as potentially compliant. Official scores come from the accessibility declarations published on each site, based on audits by independent firms using screen readers (NVDA, JAWS, VoiceOver) on samples of 13 to 22 pages.
All comparison data is available on our validation page.
Want to compare your site with your last official audit? Run a free pre-audit on Scrutia.