Accessibility Testing: Why Manual Testing Is Required
- Last Edited April 19, 2026
- by Garenne Bigby
Automated accessibility testing is fast, cheap, and catches roughly 30-50% of the problems a real disabled user will hit on your site. The rest requires human judgment — does this form field actually make sense when a screen reader announces it? Does this page work if the user never touches a mouse? Does the color contrast matter on this particular background where the designer chose a photographic image? Those questions cannot be answered by a scanner. They have to be answered by a person.
This guide walks through why manual accessibility testing is non-negotiable in 2026, what it actually involves, and how recent federal regulations have made the case for human testing even stronger than it was when automated-only overlays were in vogue.
Manual accessibility testing is the practice of having real people — ideally including disabled users — work through your site using assistive technologies to find barriers that automated tools miss. Automated testing is still useful; it is the first pass, the regression-prevention layer, and the cheap way to catch obvious violations. But the ceiling on what automated testing can catch is well-documented: industry benchmarks put it at roughly 30-50% of WCAG issues, and the harder issues — logical tab order, meaningful alt text, usable error messaging, accessible forms in real user workflows — are the ones that decide whether a disabled user can actually use the site.
Why Automated Testing Alone Isn’t Enough
Automated scanners excel at rule-based checks: missing alt attributes, insufficient color contrast in primitive cases, form controls without labels, improperly nested headings. These are real issues and worth catching early. What scanners cannot do:
- Evaluate meaning. A scanner can confirm an image has alt text; it cannot tell you whether “product_photo.jpg” or “wooden desk chair with black leather seat” is the right description.
- Reason about context. A form label technically present in the DOM may be visually hidden, misaligned with the input it describes, or overridden by conflicting ARIA — all things a scanner may pass while a screen reader user is stuck.
- Walk through user flows. “Add to cart → checkout → payment” works or breaks as a sequence. Automated tools test pages; users complete journeys.
- Avoid false positives and negatives. Automated tools flag violations that are not violations, and silently pass issues a human would catch in seconds.
The 2025 FTC order against overlay vendor AccessiBe — a $1 million settlement over deceptive marketing that claimed automated tools could make sites WCAG-compliant — made the legal framing of this explicit. Automated accessibility remediation is marketing; manual testing is how compliance actually happens.
When to Perform Manual Accessibility Testing
Manual testing works best as a layered practice, not a one-time checkbox:
- Pre-launch on new features — before any significant new flow ships, have at least one human tester validate it end-to-end with a screen reader and with keyboard-only navigation.
- Regularly on high-traffic pages — homepage, navigation, search, checkout, signup, account settings. Anything that, if broken for a screen reader user, locks them out of your product.
- After design changes — visual refreshes and CSS updates are the fastest way to regress accessibility without any code review flagging it.
- Before regulatory deadlines — WCAG 2.2 was published October 2023, the DOJ ADA Title II web rule (April 2024) requires WCAG 2.1 AA compliance for state/local government starting April 2026, the HHS Section 504 rule (May 2024) extends the same requirement to HHS-funded organizations, and the European Accessibility Act entered force June 2025. Manual testing is the only way to verify real conformance.
1. Screen Reader Compatibility
Screen readers announce page content to users with low vision or blindness. Compatibility with them is the highest-leverage manual test and the one most often neglected.
Test with the major screen readers your audience actually uses:
- NVDA — free, Windows. The standard for cross-site testing.
- JAWS — paid, Windows. Dominant in enterprise and government environments.
- VoiceOver — built into macOS and iOS. Essential for Apple-heavy user bases.
- TalkBack — built into Android. Required for mobile web testing on Android devices.
- Narrator — built into Windows. Improving rapidly; worth including in a full test matrix.
For each, walk the page sequentially and check:
- Reading flow is coherent — content is announced in a logical order that matches visual reading order.
- Non-text content is described — images have meaningful alt text, icons communicate purpose, decorative images are skipped.
- Hidden content stays hidden — off-canvas menus, modal backdrops, and display:none content should not be announced until activated.
- Heading structure navigates — H1 → H2 → H3 hierarchy works when used as a navigation shortcut (most screen-reader users rely on this).
- Landmarks and skip links work — header, nav, main, footer, and “skip to main content” are announced and functional.
- Form fields announce labels correctly — the label, type, current state, and any error message are announced before the user types.
- Dynamic content is communicated — live regions, toast notifications, and AJAX updates reach the user via ARIA live announcements.
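The dynamic-content check above usually comes down to whether the page maintains a correctly attributed live region. A minimal sketch of the attribute logic, assuming a browser DOM for the usage portion; the helper name `liveRegionAttrs` is illustrative, not a standard API:

```typescript
// Minimal sketch: the ARIA attributes a toast/live-region container needs
// so screen readers announce dynamic updates. Helper name is illustrative.

type Urgency = "polite" | "assertive";

// Map an update's urgency to the attributes live regions conventionally use:
// role="status" for polite announcements, role="alert" for assertive ones.
function liveRegionAttrs(urgency: Urgency): Record<string, string> {
  return {
    role: urgency === "assertive" ? "alert" : "status",
    "aria-live": urgency,
    // Announce the whole region's text, not just the changed node.
    "aria-atomic": "true",
  };
}

// Usage sketch (browser only): create the region once at page load, then
// swap its textContent to trigger an announcement.
// const region = document.createElement("div");
// for (const [k, v] of Object.entries(liveRegionAttrs("polite"))) {
//   region.setAttribute(k, v);
// }
// document.body.appendChild(region);
// region.textContent = "Item added to cart";
```

Note the region must exist in the DOM before its content changes — injecting a pre-populated live region rarely triggers an announcement, which is exactly the kind of failure only a manual screen-reader pass catches.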
2. Keyboard-Only Navigation
Many users navigate entirely with a keyboard — not just disabled users, but also power users, users with temporary injuries, and users on hardware without a pointing device. Keyboard testing is quick: unplug your mouse (or just don’t touch it) and try to complete every critical user flow using only Tab, Shift-Tab, Enter, Space, and arrow keys.
Verify:
- Every interactive element is reachable — menus, buttons, forms, modals, dropdowns, tabs, carousels.
- Focus order matches visual order — Tab should move through the page in the order a sighted user reads it.
- Focus indicator is visible — a clear ring or outline shows which element has focus. Invisible focus is invisible navigation.
- No keyboard traps — a user should never get stuck inside a component with no way to Tab out.
- Skip navigation works — “Skip to main content” appears on first Tab and jumps past the main nav.
- Custom widgets behave like their native counterparts — a custom dropdown opened with Space should close with Escape; tab panels should use arrow keys to switch.
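The “behave like native counterparts” rule for custom widgets usually means implementing the roving-tabindex pattern correctly. A sketch of the arrow-key index logic, with wrapping at the ends as the WAI-ARIA Authoring Practices recommend for tab lists; the function name `nextTabIndex` is illustrative:

```typescript
// Roving tabindex: exactly one item in a composite widget (tab list, menu,
// toolbar) is tabbable at a time; arrow keys move focus between items.
// This computes the next focused index, wrapping at both ends.

type NavKey = "ArrowRight" | "ArrowLeft" | "Home" | "End";

function nextTabIndex(current: number, count: number, key: NavKey): number {
  switch (key) {
    case "ArrowRight":
      return (current + 1) % count;         // wrap last -> first
    case "ArrowLeft":
      return (current - 1 + count) % count; // wrap first -> last
    case "Home":
      return 0;
    case "End":
      return count - 1;
  }
}
```

In the widget’s keydown handler you would then set `tabindex="0"` on the newly active item, `tabindex="-1"` on the previous one, and call `.focus()` — and a manual keyboard pass is how you confirm the focus actually moves.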
Complementary Tools for the Automated Pass
Before you start manual testing, run the easy automated checks. The faster you catch obvious violations, the more time you have for the harder human checks:
- axe DevTools (Deque) — free Chrome/Firefox extension. The industry-standard automated accessibility scanner.
- WAVE (WebAIM) — free web-based and extension tool. Strong visualization of issues inline on the page.
- Lighthouse — built into Chrome DevTools. Good as a regression check in CI pipelines.
- Accessibility Insights (Microsoft) — free, combines fast pass + assessment workflow, good for full WCAG conformance reporting.
- Sitebulb, Ahrefs, Semrush — SEO crawlers with accessibility reports; useful for site-wide pattern detection.
Treat automated tools as a filter, not a verdict. Anything they flag is worth fixing; anything they pass still needs human testing.
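Treating automated output as a filter can be made concrete by triaging scanner results before the manual session starts. A sketch assuming the axe-core result shape — a `violations` array whose entries carry a rule `id`, an `impact` of minor/moderate/serious/critical, and a `nodes` list; the `triage` helper itself is illustrative, not part of any library:

```typescript
// Sort axe-core-style violations so the manual session starts from the
// highest-impact findings. Assumes axe-core's documented result shape;
// `triage` is an illustrative helper, not a library API.

interface Violation {
  id: string;                      // axe rule id, e.g. "color-contrast"
  impact: "minor" | "moderate" | "serious" | "critical";
  nodes: { target: string[] }[];   // CSS selectors of affected elements
}

const IMPACT_ORDER = { critical: 0, serious: 1, moderate: 2, minor: 3 };

function triage(violations: Violation[]): Violation[] {
  // Copy before sorting so the original scanner output stays untouched.
  return [...violations].sort(
    (a, b) => IMPACT_ORDER[a.impact] - IMPACT_ORDER[b.impact]
  );
}
```

Everything the scanner flags gets fixed; everything it passes still goes on the manual checklist.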
Debunking Common Accessibility Myths
Myth 1: All screen reader users are blind. Many have low vision (not total blindness), use screen readers alongside residual sight, or rely on them for reading comprehension support. Others have full sight but use screen readers for productivity — proofreading, accessibility development, or efficiency.
Myth 2: Accessibility only matters for permanent disabilities. Situational and temporary disabilities affect everyone: a parent holding a baby can only use one hand; a user in a noisy café needs captions; someone with a broken arm is keyboard-only for a month. Designing for permanent disability is designing for the whole population.
Myth 3: Automated scanners make manual testing unnecessary. See the FTC AccessiBe settlement — $1 million in 2025 over exactly this claim. Automated and manual testing are complementary, not substitutes.
Myth 4: Accessibility is expensive and slows projects down. Retrofitting after launch is expensive. Baking in accessibility from the design stage is a rounding error on typical project costs. Teams that ship accessible components from day one routinely outperform teams that bolt accessibility on at the end.
The Legal and Regulatory Picture in 2026
Manual accessibility testing stopped being a “nice to have” once regulators started attaching specific WCAG versions to enforceable deadlines. The current landscape:
- WCAG 2.2 — published October 2023 as the current W3C standard.
- Section 508 Refresh (January 2018) — WCAG 2.0 AA is the federal ICT standard. See our Rehabilitation Act guide for the full federal-law picture.
- DOJ ADA Title II Web Rule (April 2024) — requires WCAG 2.1 AA for state and local government websites. First compliance deadline April 2026 for jurisdictions with 50,000+ residents.
- HHS Section 504 Final Rule (May 2024) — WCAG 2.1 AA for HHS-funded organizations. Compliance deadlines May 11, 2026 (15+ employees) and May 10, 2027 (under 15).
- European Accessibility Act (June 2025) — applies to businesses with ≥10 staff and €2M+ turnover trading in the EU, including non-EU companies serving EU markets.
- ADA Title III — private-sector accommodations. Plaintiff-driven litigation continues, with plaintiffs’ firms specifically targeting sites that rely on automated remediation.
Every one of these requires WCAG conformance, and every WCAG success criterion has some portion that only human testing can verify.
What to Look For in a Manual Accessibility Tester
Good manual testers combine three things: fluency with assistive technology, familiarity with WCAG success criteria, and the patience to walk a flow slowly enough to catch subtle issues. Some practical selection criteria:
- Involve disabled users when possible. Usability testing with blind, low-vision, motor-impaired, and cognitively disabled users catches issues that expert non-disabled testers miss.
- Use more than one tester. Different assistive-tech configurations reveal different issues.
- Prefer outside testers for key audits. Internal staff know the site too well to experience it the way a new user does.
- Certifications help. IAAP CPACC and WAS certifications indicate baseline knowledge of the standards.
- Require written reports. Findings should be documented with severity, WCAG reference, affected URLs, and reproduction steps. “We found some issues” is not a deliverable.
Frequently Asked Questions
Can I skip manual testing if I use automated tools?
No. Automated tools catch roughly 30-50% of WCAG issues — the rule-based, programmatic ones. The remaining issues (meaningful alt text, logical flow, keyboard traps, screen reader behavior in dynamic UIs) require human judgment. The FTC’s 2025 $1M settlement against AccessiBe addressed exactly this gap.
How often should I perform manual accessibility testing?
Full site audits quarterly or annually, targeted flow testing before every major release, spot checks on high-traffic pages monthly. Automated regression tests in CI catch drift between manual audits.
Which screen reader should I use first?
NVDA on Windows is the most common starting point — free, widely used, and running on the platform most of your users are likely on. Expand to JAWS for enterprise audiences, VoiceOver for Apple-heavy audiences, and TalkBack for mobile coverage.
Do I need to hire disabled users as testers?
Not strictly, but it meaningfully improves results. Disabled users catch issues that even expert non-disabled accessibility consultants miss because their daily workflow is different. Services like Fable, Applause, and Knowbility can connect you with qualified disabled testers.
Bottom Line
Manual accessibility testing is where real compliance happens. Automated scanners are a useful first pass and a regression layer — they catch the easy 30-50% fast and cheap. But the remaining issues, the ones that decide whether a disabled user can actually use your site, need a human with an assistive technology and the time to use it properly. In 2026, with WCAG 2.2 in place, federal rules tying WCAG compliance to enforcement deadlines, and the FTC on record against automated-only compliance claims, manual testing is the only way to demonstrate that your site actually meets the standard. For a full view of the services behind this approach, see Dyno Mapper’s accessibility testing.