How to Involve Users in Web Accessibility Testing

Automated accessibility tools can catch many of the technical violations in a site: missing alt text, insufficient color contrast, unlabeled form fields, broken landmark structure. What they cannot catch is whether the site is actually usable for people who depend on accessibility features to navigate it. For that, you need testing with real users who rely on assistive technology every day.

User testing with people who have disabilities isn’t a nice-to-have in 2026. Under the ADA and its updated Title II web rule (finalized in April 2024), the European Accessibility Act (in force since June 2025), and the WCAG 2.2 criteria published by the W3C in October 2023, the standard for what “accessible enough” means has risen substantially. User testing is how you verify that you’ve actually cleared that bar.

This guide covers the practical workflow: when to bring users in, how to find qualified testers, how to structure sessions, what to pay, and how to combine user testing with automated tools for a complete accessibility program.

Why User Testing Matters

Automated accessibility testing (tools like axe DevTools, WAVE, Pa11y, Lighthouse, and Microsoft’s Accessibility Insights) is fast, repeatable, and catches roughly 30-40% of WCAG violations on a typical site. That’s genuinely useful, but it leaves a lot on the table. The other 60-70% of accessibility issues are judgment calls that only a human can make: Does this heading structure make sense when navigating by headings with a screen reader? Is this error message actually helpful to a user who can’t see the red border around the failed field? Can this interaction be completed using voice control, a switch device, or a keyboard alone?
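
To make the automated layer concrete, here’s a minimal sketch of a programmatic scan using Pa11y’s Node API. The URL and option choices are placeholders rather than recommendations; everything this reports is machine-checkable, and the judgment calls above never show up in its output.

```typescript
// Minimal automated scan with Pa11y. Illustrative only: the URL and
// options are placeholders, not recommendations.
import pa11y from 'pa11y';

async function scan(url: string): Promise<void> {
  const results = await pa11y(url, {
    standard: 'WCAG2AA',        // ruleset to test against
    runners: ['axe', 'htmlcs'], // run both bundled test engines
  });

  // Each issue is a machine-checkable failure: a rule code, a selector,
  // and a message. Judgment calls (is this heading order sensible?)
  // never appear here; that gap is what user testing fills.
  for (const issue of results.issues) {
    console.log(`${issue.type}: ${issue.code}`);
    console.log(`  at ${issue.selector}`);
    console.log(`  ${issue.message}`);
  }
  console.log(`${results.issues.length} machine-detectable issues found`);
}

scan('https://example.com').catch(console.error);
```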

User testing with real assistive-technology users is the only reliable way to answer those questions. It also reveals patterns that don’t show up in code review: the cognitive load of a poorly organized form, the frustration of inconsistent focus behavior, the subtle differences between “works with a screen reader” and “works well with a screen reader.”

When to Involve Users

Accessibility testing is much cheaper when done early. Rebuilding a form after launch because screen reader users can’t complete it is far more expensive than testing a prototype before development starts. The standard phases to plan testing around:

  • Discovery and requirements. Talk to users with disabilities about how they use similar products. What works? What’s frustrating? What would they build differently? This shapes the design brief before any code is written.
  • Wireframes and prototypes. Test low-fidelity mockups with assistive-technology users. This is where you catch fundamental problems (confusing navigation, missing information, broken mental models) that are expensive to fix once implementation is underway.
  • Implementation. Usability testing on staging environments before launch. Include keyboard-only users, screen reader users, magnification users, and users of voice control.
  • Post-launch. Ongoing testing as features ship. Accessibility regressions happen easily when teams don’t have a continuous feedback loop.
  • Major releases and migrations. Any significant change warrants a fresh round of testing — new frameworks, new design systems, or site redesigns can introduce accessibility issues even when individual components were tested separately.

Finding Qualified Testers

Finding assistive-technology users willing and able to do accessibility testing used to be a bottleneck; in 2026 there are several established options:

  • Specialized testing platforms. Fable, Applause Accessibility, and UserTesting all run panels of assistive-technology users available for remote usability sessions. Pricing varies but a typical 60-minute session runs $200-600 depending on the platform.
  • Disability organizations. National organizations (the National Federation of the Blind, the American Council of the Blind, the Hearing Loss Association of America, and regional equivalents in other countries) often maintain tester networks or can refer qualified testers.
  • Internal staff. If your organization employs people with disabilities, they can provide an invaluable first pass — but don’t rely on them exclusively, and always compensate them for testing work outside their normal role.
  • Accessibility consultancies. Firms like Deque, Level Access, and TPGi offer expert audits and user-testing services as part of broader accessibility engagements.
  • Community networks. Platforms like LinkedIn, disability-focused Slack communities, and accessibility-adjacent Twitter circles can surface testers, though quality control is your responsibility.

Building a Diverse Tester Pool

“People with disabilities” is not a homogeneous group. A blind user navigating with JAWS and a low-vision user magnifying the screen at 400% have overlapping but distinct needs. Someone who acquired a disability as an adult often uses assistive technology differently than someone who’s used it since childhood. Build your tester pool across at least the following dimensions (a simple way to track coverage is sketched after the list):

  • Disability type: visual (blindness, low vision, color blindness), hearing (deafness, hard of hearing), motor (switch use, tremor, limited mobility), cognitive (dyslexia, ADHD, memory issues, learning disabilities), speech, and multiple/co-occurring disabilities.
  • Assistive technology: NVDA, JAWS, VoiceOver (macOS and iOS), TalkBack (Android), ZoomText, Dragon NaturallySpeaking, Windows Voice Access, Apple Voice Control, switch devices, braille displays, eye trackers.
  • Expertise level: long-time AT users have internalized workarounds that newer users don’t know. Both perspectives are valuable. Experts spot subtle issues; newer users spot barriers experienced users have learned to route around.
  • Age and context: age-related accessibility needs (declining vision, arthritis, cognitive changes) differ from lifelong disability experience.
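
One lightweight way to keep recruiting honest against these dimensions is to track pool coverage explicitly. Below is a hypothetical TypeScript sketch; every type and field name is invented for illustration and comes from no standard or tool.

```typescript
// Hypothetical data model for tracking tester-pool coverage.
// All names are illustrative, not drawn from any standard or library.
type DisabilityCategory = 'visual' | 'hearing' | 'motor' | 'cognitive' | 'speech';

interface Tester {
  id: string;
  categories: DisabilityCategory[]; // co-occurring disabilities allowed
  assistiveTech: string[];          // e.g. 'NVDA', 'VoiceOver', 'switch device'
  yearsUsingAT: number;             // captures the expertise dimension
}

// Report which disability categories the current pool leaves uncovered.
function uncoveredCategories(pool: Tester[]): DisabilityCategory[] {
  const all: DisabilityCategory[] = ['visual', 'hearing', 'motor', 'cognitive', 'speech'];
  const covered = new Set(pool.flatMap(t => t.categories));
  return all.filter(c => !covered.has(c));
}

const pool: Tester[] = [
  { id: 't1', categories: ['visual'], assistiveTech: ['JAWS', 'braille display'], yearsUsingAT: 15 },
  { id: 't2', categories: ['motor'], assistiveTech: ['Dragon NaturallySpeaking'], yearsUsingAT: 3 },
];

console.log(uncoveredCategories(pool)); // ['hearing', 'cognitive', 'speech']
```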

For a general consumer product, aim for 5-8 participants spanning multiple disability categories per round. For a specialized product targeting a specific audience (e.g., a tax-prep tool for blind users), testing with 5-8 people from that specific audience may be more valuable than spreading across categories.

Running the Session

The mechanics of accessibility user testing are similar to standard usability testing, with a few important adjustments:

  • Let users bring their own setup. Their personal AT configuration is the one that matters. Test on their OS, their screen reader version, their custom keyboard shortcuts. Testing NVDA on your laptop tells you how your laptop handles NVDA; testing it on their laptop tells you how your site handles their real-world setup.
  • Ask them to think aloud. This is harder than it sounds when someone is concentrating on navigation. Prompt gently: “What are you doing now?” “Were you expecting that?”
  • Give realistic tasks, not accessibility-specific ones. Ask users to complete a real task (sign up, find a product, submit a form) rather than “test if the navigation is accessible.” The former surfaces real issues; the latter surfaces issues users think you want to hear about.
  • Schedule longer sessions. Tasks that take 5 minutes for a sighted mouse user may take 20 minutes with a screen reader. Plan for 60-90 minute sessions and include rest breaks.
  • Record with consent, and let users decline. Screen recording is standard; audio is useful for capturing think-aloud commentary. Some users are uncomfortable being recorded; respect that preference and take notes instead.
  • Test under realistic conditions. Don’t test only in quiet lab settings. If the product will be used on the go, test on mobile with realistic distractions.

Compensation and Ethics

Pay your testers fairly. Industry-standard rates for assistive-technology users doing professional accessibility testing are $100-300 per hour, often higher for specialists with rare AT expertise or deep technical backgrounds. This is skilled work: AT users have years of experience with tools most developers have never used, and their feedback requires translating their real-world navigation patterns into actionable fixes.

A few additional ethical considerations:

  • Informed consent. Explain what the session will cover, how data will be used, who will see recordings, and that they can stop at any time.
  • Accessibility of the testing process itself. Your consent forms, scheduling emails, and prototype URLs need to be accessible. Testing accessibility with an inaccessible recruitment process is a red flag.
  • Credit and attribution. When tester feedback directly shapes product decisions, consider named acknowledgment (with permission). Tester work is often invisible; making it visible helps normalize it in the industry.
  • Follow-through. Tell testers what changed because of their feedback. Nothing burns goodwill faster than running session after session where nothing visible changes.

Combining User Testing with Automated Tools

User testing and automated testing complement each other. A typical 2026 accessibility program combines:

  • Automated scans (axe-core, Pa11y, Lighthouse, WAVE) run continuously in CI/CD to catch regressions on every deployment; a minimal example of this layer is sketched after this list.
  • Manual expert audits (internal accessibility engineers or external consultants using axe DevTools, Accessibility Insights, and manual keyboard/screen-reader walkthroughs) before major releases.
  • User testing with AT users at prototype, pre-launch, and post-launch stages — the focus of this guide.
  • Ongoing accessibility monitoring (tools like Siteimprove, Deque Axe Monitor, Level Access continual auditing) for production sites.
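
As a sketch of that first layer, here’s roughly what a CI accessibility gate can look like using Playwright with axe-core (via @axe-core/playwright); the URL and WCAG tag selection are illustrative. Note the incomplete bucket: axe itself flags checks it can’t decide, which is exactly where manual review and user testing take over.

```typescript
// Illustrative CI gate using Playwright + axe-core. The URL and the
// choice of WCAG tags are placeholders; adapt them to your pipeline.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('no automatically detectable WCAG violations', async ({ page }) => {
  await page.goto('https://example.com');

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa'])
    .analyze();

  // axe reports checks it cannot decide on its own as "incomplete";
  // these are candidates for manual expert review, not auto-failures.
  for (const item of results.incomplete) {
    console.warn(`needs human review: ${item.id} (${item.nodes.length} nodes)`);
  }

  // Fail the build on any definite, machine-detectable violation.
  expect(results.violations).toEqual([]);
});
```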

Automated tools catch the easy violations, manual expert review catches the technical nuances, and user testing catches the experiential gaps — the things that are technically compliant but still make the product painful to use. Drop any of the three and your accessibility program has blind spots.

Connecting to WCAG and Legal Compliance

User testing isn’t strictly required by law, but it’s the most reliable way to verify that your site actually meets the standards that are required:

  • WCAG 2.1 Level AA is the de facto target in the US under the ADA (including the April 2024 Title II web rule, which phases in compliance for state and local governments in 2026-2027) and in the EU under the EAA.
  • WCAG 2.2 (published October 2023) adds nine new success criteria, including focus appearance, target size (minimum), and several cognitive-accessibility improvements; a rough target-size check is sketched after this list. Leading organizations are building to 2.2 to future-proof.
  • Section 508 (US federal agencies and contractors) references WCAG 2.0 AA but compliance programs increasingly target 2.1 AA or higher.
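
As one concrete taste of what “building to 2.2” means, here’s a rough browser-side heuristic for the new target-size criterion (SC 2.5.8 requires interactive targets of at least 24×24 CSS pixels at Level AA). It deliberately ignores the criterion’s exceptions for spacing, inline links, and equivalent controls, so treat its output as leads for manual review, not conformance verdicts.

```typescript
// Rough heuristic for WCAG 2.2 SC 2.5.8 (Target Size, Minimum).
// Ignores the criterion's exceptions; results are review leads only.
// Run in a browser context (DevTools console or a page.evaluate call).
function findSmallTargets(minPx = 24): HTMLElement[] {
  const candidates = document.querySelectorAll<HTMLElement>(
    'a[href], button, input, select, textarea, [role="button"]'
  );
  return Array.from(candidates).filter(el => {
    const r = el.getBoundingClientRect();
    return r.width > 0 && r.height > 0 && (r.width < minPx || r.height < minPx);
  });
}

console.log(findSmallTargets()); // elements worth a closer manual look
```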

The WCAG criteria themselves can be tested partially through automation (about 30-40% coverage), partially through manual expert review, and partially only through actual user testing — especially the cognitive-accessibility and understandability criteria that resist algorithmic checking.

Frequently Asked Questions

How many accessibility testers do I need?

For a general-purpose product, 5-8 testers per round covering multiple disability categories provides good signal. For specialized products targeting a specific audience, 5-8 testers from that audience is more valuable than spreading thin. Most projects benefit from multiple rounds over the life of the project rather than a single large round.

Can I skip user testing if my site passes WCAG 2.1 AA?

No. Automated WCAG checks cover only a fraction of the success criteria, and even full technical conformance is a baseline, not a finish line. Many sites pass automated WCAG tests but still have significant usability barriers for AT users; think of it as getting a passing grade on spell-check but failing a writing exam. User testing is how you verify that technical compliance translates to actual usability.

What does accessibility user testing cost?

Typical costs: $200-600 per 60-minute session via specialized platforms (Fable, Applause), or $100-300/hour in direct compensation for independently recruited testers. A full testing round of 5-8 participants typically runs $2,000-8,000 depending on platform, session length, and tester specialization.
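
To make the arithmetic concrete, here’s a back-of-the-envelope sketch; the helper and its rates are purely illustrative values within the ranges above.

```typescript
// Back-of-the-envelope cost estimate for one testing round.
// All rates are illustrative values from the ranges quoted above.
function estimateRoundCost(participants: number, perSessionUSD: number): number {
  return participants * perSessionUSD;
}

console.log(estimateRoundCost(5, 400)); // 2000: low end of the typical range
console.log(estimateRoundCost(8, 600)); // 4800: higher-end platform rate
console.log(estimateRoundCost(8, 900)); // 7200: assuming 90-minute sessions at 1.5x
```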

Do I need to test with every disability category?

No, but you should think about which disabilities are most likely to be affected by your product and prioritize those. A shopping site needs testing with screen reader users (for complex product listings and forms) and motor-impaired users (for checkout flow). A video-heavy site needs testing with deaf and hard-of-hearing users (for captions and transcripts). Start with the categories that apply most directly and expand over time.

How do I handle disagreements between testers?

Treat them as data rather than conflicts. Different AT users will have different preferences; the signal is often in the pattern across multiple testers, not in any single person’s feedback. If two screen reader users disagree on a navigation approach, that’s worth investigating — you may discover that one preference is more standard and the other reflects an individual workflow.

Bottom Line

User testing with people who have disabilities is the single most reliable way to verify that a product is accessible in practice, not just in automated-scan compliance. It’s also increasingly important as the legal environment tightens under the ADA Title II web rule, the European Accessibility Act, and WCAG 2.2.

Start early, build a tester pool that spans multiple disability categories and assistive technologies, pay fairly, and combine user testing with automated scans and expert audits. For a deeper look at the broader accessibility legal framework, see our guides on the European Accessibility Act, the Americans with Disabilities Act, and Section 508.
