
It needs to map back to a role

One of the coolest things about being someone who creates digital products is sometimes you get to create experiences that have never existed before.

The history of websites, web apps, and native apps is full of countless widgets that let you enter and manipulate content in new and exciting ways.

One thing that’s important to keep in mind when creating these new experiences is ensuring that everyone is able to use them. Unfortunately, many of these new experiences fail to work with assistive technology.

Even more unfortunately, the new experiences that don't work with assistive technology can be enthusiastically adopted by the mainstream. This means that access issues are then perpetuated at scale. I’d be remiss if I didn’t mention that this behavior is also a contributing factor to why disability representation is so low in the contemporary digital product space.

So, what can we do about this?

WCAG

All the rules in the Web Content Accessibility Guidelines (WCAG) are important. Some, however, are more important than others.

WCAG Success Criterion 4.1.2: Name, Role, Value is one of the most important rules of the whole document. If you are unfamiliar, it states that all interactive content needs to have the following:

1. A name

This is a unique string of text assistive technology can use to identify each piece of interactive content. “Unique” is the operative word, because it helps you differentiate between things like the save button and the delete button.

A button HTML element with an attribute type of “button” and a text string called “Save.” An arrow identifies how the text string provides the name portion of the button’s accessible computed properties.

People are quick to think about blind people and screen readers when it comes to unique, accessible names, but it also impacts other equally important forms of assistive technology. For example, an interactive element’s name is also utilized by voice control software.
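As an illustrative sketch, the simplest way to get unique accessible names is to give each control distinct visible text:

```html
<!-- Each button's visible text doubles as its accessible name.
     Unique names let assistive technology tell the two apart. -->
<button type="button">Save</button>
<button type="button">Delete</button>
```

Voice control software benefits here too: someone can say “click Save” and the software can match that phrase against the button’s accessible name.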

2. And a role

For the web, roles come from an HTML element’s semantics (implicit), or from ARIA (explicit). For example, using the <button> element for a button provides it with a role of pushbutton. Other digital experiences use similar approaches for their user interface toolkits.

Roles provide the “button” part of our save button. It’s why you don’t have to use the phrase “Save button” in your save button UI. The “save” part is provided by either:

  1. The button’s visible name, or
  2. Less ideally, an invisible label supplied by ARIA.
A button HTML element with an attribute type of “button” and a text string called “Save.” There are two arrows. The first arrow identifies how the text string provides the name portion of the button’s accessible computed properties. The second arrow identifies how the button element provides the pushbutton role portion of the button’s accessible computed properties.

There are a lot of other roles available. Ideally you’re not declaring roles that often, instead relying on semantic HTML to do the work for you.

Doing so has the added benefit of not accidentally using the wrong role, which would confuse the person using assistive technology—imagine if our save button told someone that it was a “save table cell”, and not a “save button.”
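A hypothetical sketch of that kind of confusion, contrasting the implicit role you get for free with a wrong explicit one:

```html
<!-- Implicit semantics: announced as something like "Save, button" -->
<button type="button">Save</button>

<!-- Wrong explicit role: announced as something like "Save, cell",
     which gives no hint that it can be activated -->
<div role="cell" tabindex="0">Save</div>
```

The second example also silently drops all of the keyboard and focus behavior the real `<button>` element provides.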

3. You may also need a value

Our save button does not require a value because you can’t enter content into it, or select it like an option in a group of checkboxes or radio buttons.

You can enter content into things like text input fields. Doing so provides the field with a value, with the value being the content entered. Values are used to programmatically communicate, “what choices did this person make with this piece of UI?”

A text input field with a label called, “What is your favorite kind of soup?” and user-provided input of “french onion.” There are three arrows. The first arrow identifies how the label string provides the name portion of the input’s accessible computed properties. The second arrow identifies how the text input itself provides the entry role portion of the input’s accessible computed properties. The third arrow identifies how the user-provided input provides the value portion of the input’s accessible computed properties.

If you use an <input> element with a type attribute set to text, the content you type into it will automatically translate into its value. Again, this is something you get with no additional effort on your part due to the element’s inherent semantics.
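As a sketch, that soup example needs nothing beyond a label and a native input (the `id` and `name` values are illustrative):

```html
<!-- Name comes from the label, role from the input element itself,
     and whatever gets typed in automatically becomes the value -->
<label for="soup">What is your favorite kind of soup?</label>
<input type="text" id="soup" name="soup" />
```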

A text input field built by using a semantically neutral <div> element with the contenteditable attribute applied to it can’t do this. You’d have to do a lot more work behind the scenes to get this approach to provide an accessible value.
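A rough sketch of some of that extra work, assuming you are stuck with the contenteditable approach (the class name and `id` are illustrative):

```html
<!-- Faux text input: the role and name must be wired up by hand -->
<p id="soup-label">What is your favorite kind of soup?</p>
<div
  class="input"
  contenteditable="true"
  role="textbox"
  aria-labelledby="soup-label"></div>
<!-- And that is before handling focus management, form submission,
     keyboard support, and everything else a native input provides -->
```

Even with this markup in place, you would still need scripting to make the faux input participate in a form, which the native `<input>` element does for free.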

A text input field with a label called, “What is your favorite kind of soup?” and user-provided input of “french onion.” Following it is code that uses a paragraph element to create the label and a div with a contenteditable attribute and class called “input” declared on it. An arrow pointing to the div demonstrates how it provides the faux text input with a role of section. The name portion of the accessible computed properties is null, and the value is unknown.

Not meeting 4.1.2

Without accurate, relevant names, roles, and values present, it can be incredibly difficult, if not impossible, for someone to use your digital experience. That’s exclusion. Hard stop.

New and exotic UI needs to map back to the existing roles made available to us, the same as existing, well-trodden patterns do.

For the case of a new experience, the role provides a frame of reference for how something is supposed to be operated. The roles we can pick from are comprehensive—I am hard pressed to think of an experience so out there that it couldn’t be mapped back to a relevant role.

If a relevant role can’t be determined, it probably signals that:

  1. The experience should be broken down into simpler sub-interactions, and
  2. The experience is so abstruse that nobody will be able to understand how it is supposed to work.

Can’t I just make a new role?

You can totally apply something like role="gumball" to an element. However, assistive technology does not understand that declaration, because “gumball” is not part of the internal dictionary of role keywords it is aware of.

Because of the missing dictionary item, the expected behaviors will not be communicated to the person using assistive technology to try and operate your experience.
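To make that concrete, here is a hypothetical example of an invented role:

```html
<!-- "gumball" is not in any assistive technology's dictionary of
     known roles, so this declaration communicates nothing useful
     about what the element is or how to operate it -->
<div role="gumball">Insert coin</div>
```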

You could also argue that new roles should be introduced as technology evolves. And guess what? They have! It’s just that the process to do this is purposefully slow and methodical, as there are a ton of considerations to factor in.

The platform

The origins of the roles themselves are worth noting as well.

Roles themselves map back to platform APIs. Assistive technology can work on more than just the content your browser loaded. It can work on the browser itself, the operating system the browser is installed on, and all the other apps installed on that operating system.

Another way to say this: a checkbox on an operating system preferences menu and a website checkbox made with an <input> element work in very similar ways, and that’s intentional.

Remember how I said roles take a while to be created? This is what I mean. The starting point is multiple organizations that produce operating systems creating a UI pattern, and then agreeing to adopt it.

Edicts still need to be carried out

There’s even more nuance to unpack for brand-new roles, in that assistive technology manufacturers need to add the logic and heuristics for processing the addition. Because of that, there is a delay between codification and implementation, to say nothing of bugs.

There is also the reality that some people are highly reluctant to upgrade their assistive technology, out of a very valid fear that an update will break an important thing they use to navigate through the world. As a person who both uses and crafts digital experiences, I’m sure you can appreciate this mindset.

A linear flowchart showing the progression of how new UI gets codified for use with assistive technology. First there is a new UI proposal, which leads to mapping it to an existing platform-native construct. An ARIA mapping is then created, and accessible API support is created. Assistive technology is then updated, and finally the assistive technology update is applied. There is a warning bounding box labeled, “Not guaranteed to happen” that wraps accessible API support, assistive technology support, and update applied, indicating that these steps don’t always happen.

Work with what you’ve got

You can’t invent new roles and expect them to work just by virtue of declaring them, or hoping that eventually operating systems catch up. You also can't try to twist an existing role into something it wasn't meant to do and expect people to understand your intent.

Keep how you are providing accessible, understandable names, roles, and values top-of-mind when you are creating brand-new experiences. This will help ensure that the cool new thing you’re building will be usable by everyone.

This is one of the reasons I’m bullish on the Shift Left methodology. Having disabled representation present early in the concepting phase of work can help identify these kinds of concerns early on, and have them addressed by someone who can speak with the authority of lived experience.