Common pitfalls and misunderstandings in accessibility auditing

Over the last five years I’ve had the honour of training over 70 accessibility auditors.

Everyone has a unique way of approaching auditing, and there’s always a certain amount of personal judgement that will make each auditor unique.

The WCAG guidelines have done an immensely useful service in standardising how to test websites for accessibility. However, the way they are worded and the way the Success Criteria/checkpoints are ordered are not ideal for people learning accessibility auditing.

In this article, I want to share my insights into the things many auditors initially misunderstand that impact the quality of their audits.

I’ll focus on the most frequently confused areas:

  1. Images – Coding them the right way with the correct description
  2. Videos – Knowing which checkpoint to test under
  3. Headings – The distinction between semantics and content
  4. Colour – Use of colour versus colour contrast
  5. Keyboard – Knowing which checkpoint is most relevant

Images

The sheer range of image types and functions on websites has grown ever more complicated in recent years with the use of icon fonts, SVG, interactive charts and infographics.

The last time I checked, I identified at least eight different types of images on websites that all come under Success Criterion 1.1.1 Non-text Content (A).

It’s always helpful to step back and ask two key questions.

  1. What is the purpose of the image?
  2. Does it have a description that conveys its meaning effectively?

For reference, a good basic introduction to writing alt text is ‘Simple recommendations for writing text descriptions that make a big difference’ by Leonie Watson. Here I’ll highlight the two most common errors that auditors make:

  • Image link descriptions
  • Icon fonts

Image link descriptions

The first error we often find is with images that are links. The difficulty is that an image can be a link by itself, or it can be wrapped inside link text, and the correct way to code each is different.

For an image that is a link by itself, you should give it a description that conveys the link destination, not a description of the image.

The expected format would be something like:

<a href="link.html"> <img src="image1.jpg" alt="link description"></a>

When an image is embedded within link text, you should give the image an empty alternative text (alt="") as the description is already taken care of by the link text.

Here the expected format would be something like:

<a href="link.html"> <img src="image1.jpg" alt=""> text link description</a>

If you add a description to the image as well, a screen reader will read the link description out twice, which is unnecessary noise, especially across a sequence of links coded this way.

Icon fonts

Icon fonts can be read out as nonsensical text by screen readers. For example, I’ve had open and close quotes read out as ‘Knife and Fork’ and ‘Trumpet’ respectively. To fix this, you should hide the icon fonts from screen readers using ARIA techniques, as you can’t use alt text on icon fonts if they are coded the standard way.

For example, to hide a right chevron icon, such as in an accordion, you would use aria-hidden="true".

The code would typically be:

<span class="icon-chevron-right" aria-hidden="true"></span>

When icon fonts have a function, for example a print icon, you need to add hidden text. I’ll explain that technique in a future article.

For now, the important thing to note is that auditing images for accessibility is one of the most complex checks to do effectively.

Videos

There are five specific checkpoints or Success Criteria for video and audio in WCAG, and it’s essential to identify which one is relevant for particular types of media.

As a starting point, you need to be clear that people who are blind have different requirements from people who are Deaf or hard of hearing:

  • People who are Deaf or hard of hearing need alternatives to speech in videos via captions or transcripts.
  • People who are blind need alternatives to essential visual information in videos via audio description or transcripts.

The W3C WCAG 2.0 guidelines detail all the variations of these requirements. Unfortunately, I often see auditors who get it half right by identifying an issue but putting it under the wrong Success Criterion.

For example, if the media is audio only, such as a podcast, it comes under 1.2.1 Audio-only and Video-only (Prerecorded) (A) and requires a transcript. If the video is a live web broadcast, it comes under 1.2.4 Captions (Live) (AA).

What often catches people out is videos that contain more than just dialogue, in which case you need to check whether an audio description option has been provided for people who are blind. A further complexity is that the requirements differ between WCAG levels A and AA:

  • if you just want to meet level A, you can provide audio description OR a transcript (1.2.3 Audio Description or Media Alternative (Prerecorded) (A)),
  • but for level AA you must provide an audio description – a transcript is not sufficient (1.2.5 Audio Description (Prerecorded) (AA)).
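One way to keep the level A/AA distinction straight is to encode it. The sketch below uses a hypothetical helper, audioDescriptionOptions, to express the rule above for a pre-recorded video that contains essential visual information – it is just a memory aid for the two Success Criteria, not a conformance tool:

```javascript
// Hypothetical helper: which alternatives satisfy the audio-description
// requirement for a pre-recorded video with essential visual information,
// depending on the WCAG conformance level being targeted.
function audioDescriptionOptions(level) {
  if (level === "A") {
    // 1.2.3 Audio Description or Media Alternative (Prerecorded):
    // either an audio description OR a full transcript is acceptable.
    return ["audio description", "transcript"];
  }
  if (level === "AA") {
    // 1.2.5 Audio Description (Prerecorded):
    // an audio description is mandatory; a transcript alone is not enough.
    return ["audio description"];
  }
  throw new Error("Unsupported WCAG level: " + level);
}

console.log(audioDescriptionOptions("A"));  // [ 'audio description', 'transcript' ]
console.log(audioDescriptionOptions("AA")); // [ 'audio description' ]
```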

Headings

Checking and testing headings on web pages is relatively straightforward. However, auditors often put the issues they find under the wrong Success Criterion.

What you need to be aware of is that WCAG treats the semantics of headings (whether they have been coded as an h1, h2, etc.) separately from their meaning and relevance:

  • If headings are missing, or not used effectively to convey the structure of the page (for example, if every heading on the page has been coded as an h1) the issue comes under the 1.3.1 Info and Relationships (A) Success Criterion.
  • If the headings are not relevant to the page content, or are duplicated multiple times, the issue comes under the 2.4.6 Headings and Labels (AA) Success Criterion, which requires all headings and labels on a page to be clear, descriptive and unique. This Success Criterion is an editorial one, whereas 1.3.1 Info and Relationships is a technical one.
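To make the 1.3.1 side of this concrete, here is a sketch of the kind of heuristic an auditor might script, using a hypothetical function headingStructureIssues that takes the heading levels found on a page in document order. It flags the patterns mentioned above (no headings, everything coded as an h1) plus skipped levels – a common related check I've added as an assumption – and is no substitute for a manual review:

```javascript
// Hypothetical heuristic for a 1.3.1 heading-structure check.
// `levels` is an array like [1, 2, 2, 3]: the h1–h6 levels in page order.
function headingStructureIssues(levels) {
  const issues = [];
  if (levels.length === 0) {
    issues.push("no headings on the page");
  }
  if (levels.length > 1 && levels.every((l) => l === 1)) {
    issues.push("every heading is an h1, so the structure conveys nothing");
  }
  // Assumed extra check: a heading level jumping by more than one
  // (e.g. h2 straight to h4) usually signals a broken structure.
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) {
      issues.push(`level skipped: h${levels[i - 1]} is followed by h${levels[i]}`);
    }
  }
  return issues;
}

console.log(headingStructureIssues([1, 2, 2, 3])); // []
console.log(headingStructureIssues([1, 1, 1]));    // one issue: all h1s
```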

Colour

Confusion around colour is less common but still happens more than you would think. There are two separate Success Criteria – one is focused on how colour is used to convey information, and the other is concerned with the contrast of the colours used.

The Success Criterion 1.4.1 Use of colour (A) is designed to help people with colour blindness or other visual conditions that mean they cannot easily distinguish between certain colour combinations, most commonly red and green.

1.4.1 requires that you do not use colour alone to convey information. For example, don’t just use red to indicate a negative number; make sure there is another way to show it, such as including a minus sign in front of the number. Colour use in graphs and charts is particularly challenging to make accessible – a topic for another blog.

So, it’s always important to ask yourself when auditing: if I take colour away from this page, does any important information become lost?

The other colour Success Criterion is 1.4.3 Contrast (Minimum) (AA). It is designed to help people with a range of visual impairments who struggle with low-contrast text and images of text.

The contrast ratio between foreground and background needs to be at least 4.5:1 for standard-size text and 3:1 for large text (at least 18 point, or 14 point bold).

While you can often tell by eye if the contrast between text and background is low, it’s important to test what the contrast ratio actually is. There is a wide range of tools you can use – some automated, some manual. What you need to watch out for is false positives, where the way the page has been coded confuses automated tools because they can’t determine the background colour. So it’s good practice to do a manual check as well.
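If you want to sanity-check a reported ratio yourself, the WCAG formula is easy to script. The sketch below implements the relative luminance and contrast ratio definitions from the WCAG spec; meetsAA is a hypothetical helper applying the thresholds above:

```javascript
// Relative luminance of an sRGB colour, per the WCAG 2.x definition.
// Colours are [r, g, b] arrays with 0-255 channels.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((v) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio: (lighter luminance + 0.05) / (darker luminance + 0.05).
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Hypothetical helper applying the AA thresholds from the article:
// 4.5:1 for standard text, 3:1 for large text.
function meetsAA(ratio, largeText = false) {
  return ratio >= (largeText ? 3 : 4.5);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(2)); // "21.00"
```

Black on white gives the maximum possible ratio of 21:1; anything your tool reports above 21 or below 1 means something has gone wrong in the measurement.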

Keyboard

Keyboard accessibility is fundamental to WCAG. The challenge is that there are a large number of keyboard-related Success Criteria, so you need to be clear which one each issue comes under.

The most critical one is 2.1.1 Keyboard (A). It checks whether you can at least access and activate all the links, form elements and widget controls (like accordions) on the page using just the keyboard. It takes priority over all the other keyboard checkpoints – for this test it doesn’t matter if the focus order jumps around or you can’t see where the links are, as those issues come under other Success Criteria.

2.1.2 No Keyboard Trap (A) checks whether you can become stuck while navigating around the page – for example, in the controls of a video player or a widget such as a date picker. A trap means you can’t move backwards or forwards in the page, and leaving it entirely is the only way out. However, in modal dialogs you actually want a benign keyboard trap that keeps focus in the dialog until you close it or select one of its calls to action. This is because tabbing out of a dialog would take you back to the page behind, which is typically obscured by the dialog on top.

2.4.1 Bypass Blocks (A) is a relatively simple test – it checks whether you can jump over the main navigation with a ‘skip link’ near the top of the page. One thing I have noticed is that auditors don’t always test that skip links work on every page – a link might be present but broken, because one page template has it correctly coded and another doesn’t.

Two Success Criteria that are also often confused are 2.4.3 Focus Order (A) and 2.4.7 Focus Visible (AA):

  • 2.4.3 Focus Order (A) checks that the order you experience when tabbing through the page makes sense. Does it go backwards or jump around the screen?
  • 2.4.7 Focus Visible (AA) checks whether you can always clearly see which link you are on as you tab through the page – is the focus indicator visible, and does it have sufficient contrast (at least 3:1)? This often breaks on button links.

The last area of confusion to flag is the fundamental difference between 3.2.1 On Focus (A) and 3.2.2 On Input (A), which both test for very similar things:

  • 3.2.1 On Focus (A) is checking as you tab through the page if the focus changes unexpectedly, for example, if you move to a new window or dialog which takes focus.
  • 3.2.2 On Input (A) is relevant when you interact with a control, checking if, for example, selecting a checkbox, or opening an accordion causes the focus to change unexpectedly.


In this article, I’ve given you an overview of some of the most common mistakes and pitfalls that impact the quality of auditors’ findings. These kinds of issues often only come to light during training or as part of an audit quality assurance (QA) process. While it might not always be possible, I strongly advise putting audit reports through a QA review before delivery, as it allows a sense-check on findings and picks up any misunderstandings.

In future articles, I’ll dive more into the complexities of coding images accessibly, the challenges of auditing on mobile, and the best way to QA audit reports.

In the meantime, if we can help you train your QA testers to become accredited accessibility auditors we’d be delighted to – please contact us.

What do you think?

We hope these insights are useful to you and would love to hear your experiences around auditing. Please share your comments below.

Want more?

If this blog has been useful, you might like to sign up for the Hassell Inclusion newsletter to get more insights like this in your inbox every month.
