Accessibility Testing Best Practices to Follow
- Last Edited April 20, 2026
- by Garenne Bigby
Good accessibility testing is not one test — it is a layered practice. Automated scanners catch about 30-50% of WCAG issues quickly and cheaply. Manual testing with real assistive technology catches the rest. User testing with disabled people catches the issues the experts miss. Real-world accessibility programs stack all three on top of a solid design foundation.
This guide walks through the practices that make accessibility testing effective in 2026 — the WCAG 2.2 standard, the automated and manual layers, the legal deadlines driving compliance this year, and the organizational habits that keep sites accessible once they become accessible.
Why Accessibility Testing Matters
Accessibility testing is how you verify that people with disabilities can actually use your site. It is distinct from UX testing, QA testing, and SEO testing, though it overlaps with all three. Disabled users make up roughly 15-25% of the adult population depending on how disability is measured — an audience larger than the one most marketing teams actively design for.
Beyond audience reach, the business case in 2026 is concrete: the U.S. Rehabilitation Act, ADA Title II and III, and the European Accessibility Act all require specific WCAG conformance for large swaths of the web — and the FTC’s $1 million order against overlay vendor AccessiBe in January 2025 made clear that automated widgets do not satisfy those requirements.
The POUR Framework (WCAG 2.2)
WCAG 2.2 — the current W3C standard, published October 2023 — organizes its success criteria around four principles, known as POUR:
- Perceivable — content is presentable to users in ways they can perceive (alt text for images, captions for video, adequate color contrast).
- Operable — interface components are operable (keyboard accessibility, enough time to read, no seizure-inducing content, clear navigation).
- Understandable — content and operation are understandable (readable text, predictable behavior, input assistance).
- Robust — content can be interpreted reliably by a wide range of user agents including assistive technology (valid HTML, proper ARIA, compatible markup).
Every accessibility test is ultimately checking one or more of these. Knowing which POUR bucket an issue falls into helps prioritize the fix.
Start with Automated Testing
Automated tools are the cheap first pass — fast to run, easy to integrate into CI, good at catching obvious rule-based violations like missing alt attributes, form controls without labels, and color contrast below threshold. They catch the issues that a human should never have to find.
The standard toolkit:
- axe DevTools (Deque) — browser extension and CI library. The industry-standard automated scanner.
- WAVE (WebAIM) — browser extension and web interface. Strong inline visualization of issues.
- Lighthouse — built into Chrome DevTools. Useful for regression tests and baseline scores.
- Accessibility Insights (Microsoft) — combines a quick pass with guided WCAG-conformance assessment.
- Pa11y and axe-core — open-source libraries for CI integration.
Run at least one automated tool against every page template and after every significant change. But remember the ceiling: automated tools catch roughly 30-50% of WCAG issues. The rest needs humans.
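To make the "rule-based" idea concrete, here is a toy illustration of the kind of check these scanners run. Real tools like axe-core parse the rendered DOM and apply hundreds of rules; this sketch (the function name and regex approach are ours, purely for illustration, and no substitute for a real scanner) flags images that lack an alt attribute entirely:

```typescript
// Toy rule-based check: flag <img> tags with no alt attribute at all.
// An empty alt="" is valid (it marks a decorative image), so only a
// completely missing attribute is treated as a violation.
function findImagesMissingAlt(html: string): string[] {
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  return imgTags.filter((tag) => !/\balt\s*=/i.test(tag));
}

const sample = `
  <img src="hero.jpg" alt="Red trail running shoe, side view">
  <img src="decor.png" alt="">
  <img src="chart.png">
`;

console.log(findImagesMissingAlt(sample)); // flags only the chart image
```

Note what the rule cannot do: it can prove the alt attribute exists, but only a human can judge whether its text is meaningful. That boundary is exactly why the automated layer tops out around 30-50%.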
Layer on Manual Testing
Manual testing catches what automated tools cannot: meaningful alt text, logical heading hierarchy, usable form error messaging, accessible dynamic content, coherent reading order. Our guide to manual accessibility testing goes deep on the full workflow; the short version:
- Walk through each user flow with a screen reader.
- Walk through each user flow using only the keyboard.
- Check color contrast on every text-on-background combination, especially low-contrast photo overlays.
- Verify dynamic content (live regions, toasts, modals, AJAX updates) announces correctly.
- Test at 200% browser zoom — content should not get cut off.
- Test with reduced motion preferences enabled.
Test with Real Assistive Technology
Assistive technology testing is where most accessibility issues actually surface. Test on the tools your audience uses:
- NVDA — free screen reader, Windows. The most common starting point.
- JAWS — paid screen reader, Windows. Dominant in enterprise and government.
- VoiceOver — built into macOS and iOS. Essential for Apple-heavy audiences.
- TalkBack — built into Android. Required for mobile testing.
- Narrator — built into Windows; improving rapidly.
Beyond screen readers, sample other assistive tech: voice control (Dragon NaturallySpeaking, Voice Control on macOS/iOS), switch access, screen magnification, and eye-tracking. Each surfaces different issues.
Keyboard-only testing is the fastest manual test anyone can run: unplug your mouse and complete every critical user flow using Tab, Shift-Tab, Enter, Space, and arrow keys. If you cannot do it, neither can your keyboard-dependent users.
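One keyboard bug that is easy to catch in code review is a positive tabindex value, which overrides the natural DOM tab order and routinely breaks keyboard flows. As a hypothetical pre-commit sweep (the function name and regex approach are ours, for illustration only), you might flag them like this:

```typescript
// Flag elements with a positive tabindex value. tabindex="0" (natural
// order) and tabindex="-1" (programmatic focus only) are fine; positive
// values hijack the tab sequence and are a common keyboard-navigation bug.
function findPositiveTabindex(html: string): string[] {
  const tags = html.match(/<[a-z][^>]*\btabindex\s*=\s*["']?\d+/gi) ?? [];
  return tags.filter((tag) => {
    const m = tag.match(/tabindex\s*=\s*["']?(\d+)/i);
    return m !== null && parseInt(m[1], 10) > 0;
  });
}

const markup = `
  <button tabindex="0">OK</button>
  <a href="/help" tabindex="3">Help</a>
  <div tabindex="-1">Programmatic focus target</div>
`;

console.log(findPositiveTabindex(markup)); // flags only the link
```

A static sweep like this complements, but does not replace, actually tabbing through the flow — focus traps and invisible focus indicators only show up when you try it.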
Visual and Content Checks
Several checks are hybrid human+automated — automated tools flag the issue, humans confirm the fix makes sense in context.
- Color contrast — 4.5:1 minimum for normal text, 3:1 for large text (WCAG 2.2 SC 1.4.3). Never convey meaning through color alone.
- Target size — interactive controls must be at least 24 × 24 CSS pixels (WCAG 2.2 SC 2.5.8, new in 2.2). Adequate spacing between adjacent targets.
- Typography — 16px body minimum, 1.4-1.6 line height, 45-80 character line length, readable fonts at zoom.
- Alt text — descriptive for content images, empty (alt="") for decorative images. “Red trail running shoe, side view” beats “running shoe” beats “IMG_0247.jpg.”
- Heading hierarchy — one H1, then H2s for sections, H3s for subsections. Never skip levels for visual styling.
- Link text — descriptive (“download the 2026 report”) beats generic (“click here”). Screen-reader users navigate by link list; generic link text tells them nothing.
- Forms — every control needs a programmatically associated label, clear error messages with instructions to fix, adequate input assistance, and predictable submission behavior.
- Data tables — use proper <table>, <thead>, and <th scope> markup so screen readers announce row/column context.
- Skip links — “Skip to main content” should be the first focusable element, visible when focused.
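The contrast thresholds above are computable, not a matter of eyeballing. This sketch implements the relative-luminance and contrast-ratio formulas from the WCAG 2.x specification (the same math contrast checkers use; the function names are ours):

```typescript
// WCAG 2.x contrast ratio between two sRGB colors (basis of SC 1.4.3).
// Each channel is 0-255; 0.03928 and the 2.4 exponent come from the
// spec's relative-luminance definition.
function luminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05); // ranges from 1:1 to 21:1
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21 (black on white)
console.log(contrastRatio([118, 118, 118], [255, 255, 255])); // #767676 on white, just over 4.5
```

The second example is why precise math matters: #767676 on white passes the 4.5:1 normal-text threshold, while the visually indistinguishable #777777 fails it.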
For the deeper connection between these practices and organic search, see our guide on web accessibility and SEO.
Documents, Media, and Non-HTML Content
Accessibility does not stop at HTML. Sites routinely ship PDFs, videos, and audio content that get ignored in testing and fail accessibility reviews:
- PDFs — tag documents properly (heading structure, reading order, alt text on images, form field labels). Run Adobe Acrobat’s accessibility checker or PAC 2024. Offer HTML versions of critical content where possible.
- Videos — human-reviewed captions (YouTube auto-captions are a starting point, not a finished product), transcripts, and audio descriptions for content that depends on visual information.
- Audio — transcripts at minimum. Bonus: transcripts are indexed by search engines, driving additional organic traffic to podcast and audio content.
- Word, Excel, PowerPoint — use built-in accessibility checkers before exporting or publishing. Structure with proper headings, alt text, table markup.
Test with Disabled Users
The single highest-leverage test nobody runs enough: real usability sessions with actual disabled users of varied disability types. Expert accessibility consultants catch issues automated tools miss; disabled users catch issues expert consultants miss. Both are necessary; neither substitutes for the other.
Practical options:
- Fable — on-demand usability testing with disabled participants.
- Applause — QA and accessibility testing at scale.
- Knowbility, Deque, TPGi, Level Access — consultancies that run formal audits with disabled testers.
- In-house — if you have disabled employees, their input on your own products is invaluable. Compensate them properly.
The Legal Landscape in 2026
Accessibility testing in 2026 is backstopped by specific regulatory deadlines:
- DOJ ADA Title II Web Rule (April 2024) — state and local governments must meet WCAG 2.1 Level AA, with the first compliance deadlines landing in April 2026 for jurisdictions with 50,000+ residents.
- HHS Section 504 Final Rule (May 2024) — HHS-funded organizations must meet WCAG 2.1 Level AA for websites and mobile apps, with deadlines May 11, 2026 (15+ employees) and May 10, 2027 (under 15).
- European Accessibility Act — in force June 28, 2025; applies to companies serving EU consumers, including non-EU companies, with an exemption for microenterprises (fewer than 10 employees and under €2M turnover). Technical standard EN 301 549 (WCAG 2.1 AA, updating to cover WCAG 2.2).
- Section 508 Refresh — WCAG 2.0 AA for federal ICT since January 2018. Federal agencies and contractors.
- ADA Title III (ongoing) — private-sector digital accommodations. Plaintiff-driven litigation continues, with overlay-reliant sites being common targets.
Building an Accessibility Program That Lasts
One-time audits do not keep sites accessible. Features ship, CSS changes, teams rotate — accessibility drifts. Programs that actually maintain accessibility share several habits:
- Executive sponsor. Someone senior owns accessibility outcomes and reviews progress quarterly. Without executive weight, accessibility always loses to shipping deadlines.
- Accessibility champions. One named person per product team responsible for accessibility reviews on PRs and design reviews.
- Automated CI tests. axe-core or Pa11y running on every pull request catches regressions before they ship.
- Scheduled audits. Full manual accessibility audits quarterly or annually, conducted by people outside the product team.
- WCAG training. Design, engineering, QA, and content teams all need baseline WCAG knowledge. IAAP CPACC and WAS certifications are good target credentials.
- Disabled user feedback loop. A channel for disabled users to report accessibility issues, with explicit SLAs for response and fix.
- Documented accessibility statement. Published statement of conformance level, known issues, and contact information for accessibility feedback. Required by some regulations, valuable signaling in all cases.
Frequently Asked Questions
How often should I run accessibility tests?
Automated tests on every pull request in CI. Manual spot checks on high-traffic pages monthly. Full manual audits quarterly or annually. User testing with disabled users at least annually and before major redesigns. The cadence scales with how fast your site changes.
Can I satisfy accessibility requirements with only automated testing?
No. Automated tools catch 30-50% of WCAG issues. Federal rules (DOJ Title II, HHS Section 504) require real WCAG conformance, which requires manual verification. The FTC’s 2025 AccessiBe settlement addressed exactly the claim that automated tools alone produce compliance.
What is the minimum accessibility standard I should target?
WCAG 2.1 Level AA is the current regulatory minimum for most US and EU requirements. WCAG 2.2 Level AA gives you headroom (it includes all of 2.1 plus nine additional success criteria) and is increasingly the standard vendors, consultants, and plaintiffs cite. Target 2.2 AA unless you have a specific reason to stop at 2.1.
How do I prioritize fixes when I find dozens of issues?
Prioritize by impact: issues that block users from core flows (checkout, signup, search) before cosmetic issues. Prioritize by frequency: issues on high-traffic pages and templates used across the site before issues on rarely-visited pages. Fix WCAG Level A issues before Level AA. Most audits rank issues by severity; follow the ranking.
Bottom Line
Effective accessibility testing in 2026 is layered: automated scanners for speed, manual testing for the issues scanners miss, assistive technology testing for the real assistive-tech experience, and user testing with disabled people for the issues experts miss. Do them all; do them on a schedule; and tie them to an organizational program that survives team changes. The 2024 DOJ and HHS rules and the 2025 European Accessibility Act give the work a hard deadline this year and next — the teams that started testing a year ago will be the ones that are compliant when those deadlines arrive.