Accessibility Audits Made Practical: A Project Manager’s Guide to WCAG Tools
The cost of fixing a defect triples with every phase it goes undetected. But Boehm’s curve estimation doesn’t account for the Unknown Factor: Nobody Does It Wrong on Purpose. We also need to factor in the time to research what the issue actually is and how to fix it.
To properly estimate the cost, we need to determine what we missed in the first place. That means for every element, we ask:
Which WCAG criteria apply to this?
How can we measure them?
Which ones are we failing here?
A helpful tool for this stage is the WCAG-EM Report Tool.
Which WCAG criteria apply to this?
To answer this, it’s best to open the WCAG 2.2 specification in one window and a blank document in another, and go through the success criteria one by one. Note down which ones definitely apply to your product, which might apply, and which definitely don’t, each with a short note explaining why (e.g., “SC 1.2.2 Captions: does not apply, no audio or video content”). That little note will save you from re-litigating the decision down the line.
Most legal requirements cite WCAG 2.1 as the benchmark, but it’s better to stick to newer editions (hello, WCAG 3.0, when are you dropping?).
How can we measure them?
Certain success criteria have a definitive pass/fail answer. Non-text content (SC 1.1.1): is a text alternative present or not? An easy yes/no question. Same with captions (SC 1.2.2): do they exist? Contrast issues! There’s a straightforward pass/fail criterion for those, and the WebAIM Contrast Checker to measure it.
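For reference, SC 1.4.3 gives the threshold a precise definition. The contrast ratio between two colors is

    (L1 + 0.05) / (L2 + 0.05)

where L1 and L2 are the relative luminances of the lighter and the darker color. Normal text needs at least 4.5:1, large text at least 3:1. Black on white works out to (1.0 + 0.05) / (0.0 + 0.05) = 21:1, the maximum possible ratio.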
For some success criteria, it’s up to subjective judgment.
Easy language is one such hot topic. A common benchmark is CEFR level B1; the European Accessibility Act, for instance, specifies that language for banking-related services must not exceed level B2. Still, any such level is somewhat arbitrary, and depending on your industry, you may need jargon that goes beyond it.
Despite being called “easy”, it’s arguably one of the most complex criteria to meet. There are ways to ensure texts are understandable, such as hosting focus groups or following language-specific rule sets, but for most web professionals, it comes down to them and ChatGPT versus the copy. The good news: even easy-language specialists say that ChatGPT has gotten really good at it. Does it replace focus groups and human review? No. But it’s better than not even trying in the first place. Pair it with a glossary of abbreviations and domain-specific vocabulary to be on the safe side.
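On the markup side, HTML can carry part of that glossary for readers and assistive technology alike. A tiny sketch (the abbreviation shown is just an example):

    <p>
      The audit follows the
      <abbr title="Web Content Accessibility Guidelines">WCAG</abbr>
      2.2 success criteria.
    </p>

Browsers show the title as a tooltip, and some screen readers can announce the expansion, though support varies.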
Which ones are we failing here?
Let’s be generous and think of technical failures only. For this, we consult the WebAIM Million, an annual report on the accessibility of the top 1,000,000 home pages. I dare say the sample is big enough to be representative of common issues.
WebAIM Million Methodology
The WAVE accessibility engine was used to analyze the rendered DOM of all pages after scripting and styles were applied. WAVE detects end-user accessibility barriers and Web Content Accessibility Guidelines (WCAG) conformance failures. All automated tools, including WAVE, have limitations—not all conformance failures can be automatically detected. Absence of detected errors does not indicate that a page is accessible or conformant. Although this report describes only a subset of accessibility issues on only 1,000,000 home pages, this report provides a quantified and reliable representation of the current state of the accessibility of the most influential pages on the web.
According to the report, our main perpetrators are (as a percentage of home pages affected):
Low contrast text 79.1%
Missing alternative text for images 55.5%
Missing form input labels 48.2%
Empty links 45.4%
Empty buttons 29.6%
Missing document language 15.8%
But I will cross low contrast off our testing list for two reasons:
The first reason: the WAVE tool has, as WebAIM mentions in its methodology section, limitations. Like every automated tool, it can only test what it can register. If your website has a text box with a transparent background in front of an image, or even a gradient background, it cannot check whether there’s enough contrast between the text and what sits behind it. And when in doubt about contrast, it just fails you.
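A minimal sketch of that blind spot (the image and markup are made up):

    <!-- Hypothetical hero banner: white text layered over a photo. -->
    <!-- There is no single background color to compare against, so an
         automated checker cannot compute a contrast ratio here. -->
    <div style="background-image: url('hero.jpg');">
      <h1 style="color: #ffffff; background-color: transparent;">Welcome to our site</h1>
    </div>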
The second reason is good news: color correction with CSS is easier than with make-up. Sure, we could argue that contrast affects the whole branding and should be overhauled holistically, but as stated above, a whole rebranding is out of scope (even for our hypothetical agile team with the best possible baseline, powered by the benefit of the doubt)… when all you really need to do is change one or two hex values.
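To put numbers on it, a sketch with made-up brand colors: on a white background, #777777 body text sits at roughly 4.48:1 and fails the 4.5:1 threshold for normal text, while #767676 sits at roughly 4.54:1 and passes. Nobody will spot the difference visually.

    /* Hypothetical before: body text slightly too light on white. */
    body {
      background-color: #ffffff;
      color: #777777; /* ~4.48:1 against white, fails SC 1.4.3 AA */
    }

    /* After changing a single hex value: */
    body {
      background-color: #ffffff;
      color: #767676; /* ~4.54:1 against white, passes AA for normal text */
    }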
That leaves missing alternative text, missing form input labels, empty links and buttons, and missing document language.
Alternative Text
Alternative text is not hard to fix, but it is more nuanced than color contrast: do we only need to write the text description itself, or is the whole alt attribute missing? Do we even need alternative text in this particular case, or is the image purely decorative? And then the biggest question:
What makes a good image description?
Sadly, there is no blanket answer. It depends on the purpose of the image, its content, and maybe even its placement. Luckily, we have the handy-dandy Alt Text Decision Tree to help you figure out when you need alt text, and articles like this one by the Nielsen Norman Group to figure out what to write as alt text.
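A few sketches of what each case looks like in markup (file names and descriptions are made up):

    <!-- Informative image: the alt text carries the image's meaning. -->
    <img src="team-photo.jpg" alt="The five-person support team at their desks">

    <!-- Purely decorative image: an empty alt attribute tells screen
         readers to skip it entirely. -->
    <img src="divider-flourish.png" alt="">

    <!-- The failure WAVE counts: no alt attribute at all, so screen
         readers fall back to reading the file name. -->
    <img src="IMG_20240101_123456.jpg">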
Plain Bad Development
Missing form input labels, empty links, empty buttons, missing document language… these are straight-up tech debt. They should have been done properly in the first place!
Fixing them shouldn’t be considered additional effort, because the work was simply skipped the first time around. Tech debt is estimated to account for about 20-40% of the value of the entire technology estate. Yikes.
Missing form input labels, empty links, empty buttons, and missing document language should all be part of the Definition of Done.
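For the record, each of these is a small, mechanical fix. A sketch of what “done properly” looks like (names and values are illustrative):

    <!-- Missing document language: declare it on the root element. -->
    <html lang="en">

    <!-- Missing form input label: tie a visible label to the input via for/id. -->
    <label for="email">Email address</label>
    <input type="email" id="email" name="email">

    <!-- Empty link: an icon-only link needs an accessible name. -->
    <a href="/cart" aria-label="Shopping cart">
      <svg aria-hidden="true" width="16" height="16" viewBox="0 0 16 16">
        <circle cx="8" cy="8" r="7" />
      </svg>
    </a>

    <!-- Empty button: same idea. -->
    <button type="button" aria-label="Close dialog">&times;</button>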
Summary: It’s a lot, but don’t freak out.
Don’t overwhelm yourself at the beginning. After a while, you’ll get a feel for it.