Dark Adventures in Mobile Accessibility


[Intro music]

>> MICHAEL BECK: Welcome to technica11y, the webinar series dedicated to the technical challenges of making the web accessible. [Music].

>> MICHAEL BECK: This month our presenter is Shell Little, the Mobile Accessibility Lead at Wells Fargo DS4B.

So hello everybody and welcome to this edition of technica11y. I’m your host Michael Beck, the Operations Manager here at Tenon. I hope everyone who attended CSUN is fully recovered. I know it took me a couple of weeks to get back in the saddle, so to speak, but it was wonderful to meet many of you out there and I look forward to meeting more next time. It was my first CSUN and it was quite a bit overwhelming at times, but I still had a blast and learned quite a bit. Speaking of which, you can catch Tenon’s own Karl Groves and Job van Achterberg at AccessU in Austin on May 15th through the 17th and at the Accessibility Camp Toronto on May 18th. As noted at the beginning, this month we have Shell Little from Wells Fargo DS4B with us. Good morning, Shell!

>> SHELL LITTLE: Good morning!

>> MICHAEL BECK: Shell is going to be delving into something we haven’t explored yet on technica11y and that’s mobile accessibility. As I’m sure most of you know, even an operations guy like me, the mobile space is difficult to work in. There are things that…ahem…”technically” pass WCAG but are really bad experience for users with a variety of disabilities. And so to avoid more bad puns from me take it away, Shell.

>> SHELL LITTLE: Thanks Michael. Let me share my screen real quick…fantastic!

>> MICHAEL BECK: Oh, a reminder to everybody, sorry, if you have any questions, please throw them in the chat or the Q&A thing in Zoom and we’ll get to them at the end. So, take it away!

>> SHELL LITTLE: Awesome, thank you. So thanks, Michael, for that introduction. So today we’re going to be talking about mobile accessibility. So the title of my talk is “Dark Adventures in Mobile Accessibility,” because, as Michael mentioned, mobile is a tricky space to work in, so, if you are excited to hear about mobile stuff then you’re in the right place.

So real quick before we get into introductions, I’m going to go through my roadmap for the day.

So start off with an Intro. Going to move into the section called “Why?”

Why is it so hard? Why is mobile accessibility this dark scary thing? From there, we’ll talk about the WCAG criterion, especially a focus on the 2.1 update. Then, the large bulk of my talk is going to be just practical examples, things I’ve seen in the wild, things I’ve read about, things that kind of drive me crazy. So that will be kind of fun to go through, and then we’ll wrap it up with a conclusion and hopefully I’ll have enough time for some questions at the end. As Michael said, if you have any questions, feel free to drop them in the Zoom.

So a little bit about me. My name is Shell Little. My gender pronouns are she and her and you can find me on Twitter @ShellELittle. That’s where you’ll find me and get a hold of me the easiest because my email is a black hole! So I’ll save everybody the time, feel free to follow me, ping me, or tweet at me. I enjoy interacting with people when it comes to my talks on Twitter so if you are a Twitter human feel free to jump on there and make some comments and I’ll get back to you after my talk is done. I work for Wells Fargo DS4B, so I work for Wells Fargo Wholesale: business to business, bank to bank kind of thing. If you bank with Wells Fargo personally, you probably do not and will not ever see the software that I work on. So I work on the accessible user experience team. My team lead is Gerard Cohen who was on technica11y a couple of months ago himself. So, myself, I’m the mobile and inclusive design lead for our team. I’ve been with Wells for a couple of years now and really got thrown into my mobile position but leaned into it and I really love it, even though it’s hair pulling at times! I’m living in Seattle and partnered and all my children have tails and I’m very happy about that! (Chuckles)

>> SHELL LITTLE: As a side note, I’m a video game enthusiast, so if you had a chance to see the stuff I had from the GAConf, it’s really fun.

So, moving on. When it comes to mobile, there’s this kind of misconception that I’ve heard in the wild and I’ve read about online, the workaround of, “Oh, it doesn’t work on our app but it’s fine because it works on the web!” So I just want to set the record straight and say the workaround of, “It’s accessible on desktop,” does not cut it anymore.

We have long since passed that time where we are able to say, “Oh, just go to your computer.” Just the way that technology has evolved, the way people are accessing the web, the way people are interacting with your services, it’s time to no longer use that scapegoat. It was rapid and fast, the way that technology is moving, but if we can all lean in and embrace that, I think the world will appreciate it, especially people in the mobile space.

So why, “Dark Adventures?” Why this kind of spooky scary analogy? For me, I think of dark adventures and lawlessness and almost kind of dark waters because a lot of times, there are questions that I have in mobile or people have in mobile and there are no answers. There are no standards for certain things where I have a big question and I have nobody to answer those for me. There’s no standard for it. There’s no best practice. And it’s kind of exciting but also kind of scary at the same time just because we’ve got uncharted territory.

So just in general, we’re talking about dark adventures more just about the fact that we’re wading past the standards, you know the safe zone. You know, how I think about it, when you’re encompassed in these standards, you’re in this safe zone where you have tons of literature, tons of people doing it, people are talking about it online, you can read up on articles. You can have a service like Tenon come in and help, but, when you’re in this mobile space, it’s a lot harder to find those resources.

So, let’s jump into our first section: “Why is mobile accessibility so hard?”

There’s plenty of reasons why mobile accessibility is really hard, but for me I kind of broke it down into three major points. First of all, mobile isn’t simple. And that’s the dang truth there.

HTML does not equal native code. They are different spaces, different beasts. And the WCAG standards and mobile standards have an interesting interaction with one another.

So what even is mobile? When you think about mobile, we think about cell phones typically. But do we think about tablets? Are we thinking about certain kinds of laptops these days? What really is mobile?

So, the W3C defines mobile as two different categories and I mostly agree. So we have got native applications: a native application runs as a software application and uses the device’s built-in features such as cameras, microphones, location, et cetera. You would get those applications off of Google Play or the iOS App Store. Versus something that’s a web app, which runs in a browser and has a common codebase across multiple platforms. And that does get messy, because we have different ways to access these things and they have different features and blah blah. So, mobile browsers are an interesting thing. You’re able to access the web through these mobile browsers. So, you’re accessing web apps that are made to be consumed on computers, but you’re actually doing it through your mobile browser, and oftentimes you’re also seeing people’s websites and information through another app. Pinterest is famous for this. Twitter also does it, Facebook does it, where you’re not launching your own personal browser, you’re launching an internal browser still wrapped within their app; it’s very interesting.

So that definitely muddies the water there.

And then also we’ve got native applications. So, we have native code: code that’s specifically written for iOS versus Android. And then we also have HTML-wrapped sites that are JavaScript-bridged, served up in a web app format. It’s an application someone can download, but really they are just consuming web code that’s wrapped and packaged to look pretty for a “mobile device.” So, it gets complicated; there are a lot of different ways you can access this information, a lot of different ways that you’re able to access the web. So, when you’re thinking about your users accessing your information, they could be coming from a mobile browser. They could be coming from a tablet that runs an OS that’s still technically mobile, even though the screen is giant because of how big tablets are these days. I personally run a Chromebook and I run WebEx on my Chromebook laptop. So, basically, TL;DR, what we think of as mobile is just very broad. It means a lot of stuff. Back in the day, it didn’t used to mean so much, but now, with the way technology has expanded, we are seeing something different.

So, next about the code. HTML is not the same as native code, and I think anybody who knows anything about development totally understands that HTML is not the same as native code. The HTML5, CSS, and ARIA we know doesn’t really help you when we talk about PHP, Python, native iOS. The things and tricks and tips that you know for building things accessibly in a web format kind of go out the window when it comes to native code.

So the way we design, develop and even the way that we test for these native specific native code is totally different than the way we do for HTML.

So some examples real quick. So, when we’re talking about iOS specifically: iOS headings, for example. There’s no hierarchy of headings; it’s either a heading or it’s not. You can’t dictate this is H1 through H6; they are either headings or they are not. So, when we’re talking about things like serving up an application to a smartwatch and someone saying, “Well, we have to have an H1 on this; this smartwatch has information on it, we have to serve up an H1,”…well, if it’s native iOS, there are no H1s. So even when we’re talking about testing or accessing these native applications, we’re even using different screen readers. And I’m not just talking about iOS having VoiceOver, but TalkBack versus NVDA and JAWS on the web. You could be accessing the exact same code, if we’re talking about, say, a JavaScript-bridge wrapped web application, with a totally different screen reader that has absolutely different behavior, different orientation, and it announces things differently. So if you’re thinking about it the way you would with JAWS or NVDA, for an Android-based app, TalkBack is different; you can’t think of them the exact same.
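To make the heading point concrete, here is a toy sketch (hypothetical helper names, not any real iOS API) of what gets lost when web heading levels meet the flat native model:

```javascript
// Hypothetical sketch: on the web a heading carries a level (h1-h6),
// but in native iOS a view either has the "header" accessibility
// trait or it doesn't. This toy mapper shows the flattening.
function toNativeTrait(webHeading) {
  const levels = ["h1", "h2", "h3", "h4", "h5", "h6"];
  // Every heading level collapses to the same boolean trait.
  return levels.includes(webHeading) ? { header: true } : { header: false };
}

console.log(toNativeTrait("h1")); // { header: true }
console.log(toNativeTrait("h4")); // { header: true } -- same as h1
console.log(toNativeTrait("p"));  // { header: false }
```

The hierarchy a screen reader user would use to skim a web page simply has nowhere to go.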

Zooming and enlarging text. So, obviously, on the web we Ctrl-plus to zoom in and expand our pages and make text bigger. But when it comes to native code, we’re doing away with pinch to zoom. I personally would love to see the death of pinch to zoom myself. So, what we’re doing now is relying heavily on the OS to handle the text size, so the users themselves can go into the accessibility settings and change how big they want their font, and the code, if done appropriately, should respond as is needed. So, with the way that we even zoom things and the way that we would build a native application, if you have a small box with content in it, you better code it in a way that if the text gets blown up 200%, nothing is going to break and it doesn’t pop outside of the box. It needs to be made knowing that that text could grow quite a bit.

And then hover states. If you’re expecting your users who are, say, potentially accessing a news site, to hunt down links because, “Oh, I have a hover state!”…technically that passes if you have your contrast up high enough and it’s not color alone, but you can’t expect your users to hunt for underlines on links, because there’s really no such thing as hover states when it comes to mobile. And the big thing is, in most situations you can’t open the code up and see what’s going on. You have to rely on testing tools to tell you the story. So, I’m not able to open up the code inspector and check things out when we’re talking about 100% native code. I can’t simply ask my browser, “What the heck is going on?” I have to run screen readers over it, I have to use my best judgment, and I have to do research and read into what’s going on, whether it’s iOS versus Android, and that right there is difficult, especially when we’re talking about UAT environments where maybe you don’t have access to the code. Maybe the development for your native stuff is done by a different party, maybe it’s a third-party company doing it for you, and you don’t have access to that. So, it can be very difficult.

Last but not least, the WCAG 2.0 was not written for mobile. Now, obviously, it’s a great improvement from Version 1. A lot more prescriptive. Made to be broad. Made to encompass as many types of technology as it could. But WCAG 2.0 came out in 2008. For context, for those of us who have to think back to 2008, the RIM BlackBerry Bold that was the sexy new cell phone, the best selling phone everybody was talking about it in 2008.

So if that’s the kind of phone we were talking about, there’s no way the WCAG standards could have had any knowledge of what was about to come, what cell phones would look like, or web applications, what even apps were about to look like, because 2008 is when Facebook’s mobile site, not even their application, launched. The Facebook application launched in 2009, so the year the WCAG standards came out was the first year that people were able to access Facebook on a phone at all. So that adds a lot of context when we’re saying, “I’m looking for standards for something super complicated in a double modal pattern that should never exist, and we also have it wrapped in a JavaScript bridge, on top of the fact that when I tap this button I’m sent to a 100% native page; what do we do about back button experience?” There’s no way the WCAG standard could have been equipped for that. Thankfully we have 2.1, so let’s move into Section No. 2 and let’s talk about the updates in WCAG 2.1 and how they apply to mobile.

So WCAG 2.1 has had several mobile-related updates, which is awesome to see. I follow the updates very closely. Had a couple of heartbreak moments with a couple of shifts to AAA that don’t make sense to me. You know, a lot of people were watching March Madness. I was watching the WCAG standards. (Chuckles).

>> SHELL LITTLE: So while there are plenty of standards…I can’t remember exactly how many are A and AA. I think it’s 12 standards, but I’m just going to highlight a couple. I have five on the screen. I think I have five. Yes, I have five: Orientation, Pointer Gestures, Motion Actuation, Target Size, and Reflow.

So first things first, let’s talk about Orientation. Basically, Orientation says, “Do not restrict the view of the content to a single display orientation such as portrait or landscape.” Basically, allow your users to choose their own adventure when it comes to whether they want to be in portrait or landscape. Now, if you have already read through 2.1 and you’re super well aware, this is going to be a review for you. So, Orientation is super important because this criterion came to exist because, let’s see, there are three different types of user groups who really rely heavily on this one, but people with physical disabilities, especially people who have mounted devices, have them in either landscape or portrait, and to ask the user to constantly switch the orientation of their phone is unreasonable. Now, there are exceptions; certain things require you to be in one mode or the other. Specifically, the standards mention a piano keyboard application where you play the keyboard; it wouldn’t make sense to have that work in a portrait mode. Orientation is super important to think about when we’re talking about working in the mobile space because a lot of times that doubles the wireframes for when you’re building something.
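As a rough sketch of the "support both, don't lock" idea: in a browser you would listen for `matchMedia("(orientation: portrait)")` changes; this pure helper (a hypothetical name, for illustration only) just captures the decision logic so the layout adapts instead of telling a user with a mounted device to rotate their phone.

```javascript
// Sketch: derive orientation from viewport dimensions instead of
// restricting the app to one mode.
function orientationFor(width, height) {
  // Treat a square viewport as portrait; otherwise the longer
  // dimension decides.
  return height >= width ? "portrait" : "landscape";
}

console.log(orientationFor(320, 640)); // "portrait"
console.log(orientationFor(640, 320)); // "landscape"
```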

So if you’re not thinking about creating things and optimizing them, typically for landscape mode, you’re missing out on a big chunk of work. Now, yes, if things are technically responsive you can probably get away with it, but a lot of times it requires some thinking and planning ahead, which would require wireframing.

So Pointer Gestures: this one involves fingers touching things, or maybe things that mimic stylus pens, different types of touch targets. So, Pointer Gestures: “Multi-point or path-based gestures can be used with a single pointer.” A big example people talked about a lot with this is maps. So, having to use multiple fingers to pinch and zoom and pull in and out, potentially having to drag in certain paths. Now, there are plenty of different workarounds and exceptions. Some examples would be games. I can think of, like, Fruit Ninja as a big one people talked about: you’re supposed to swipe your finger in a certain direction and without that the game would become unraveled. I would be interested in having conversations about how to make that game accessible, but that’s another talk. In general, you have to be able to lean on your UI to be able to do these things. Google Maps has plus and minus zoom buttons so you don’t have to pinch and zoom. Pointer Gestures are really important when we talk about legacy pages, which we’ll talk about in a little bit. When things can’t or aren’t currently made to appropriately reflow, we can’t force users to pinch and zoom, and, as I mentioned earlier in the talk, I would love for pinch and zoom to stop. So it can be a problem.
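The Google Maps pattern can be sketched in a few lines: plus and minus buttons that each need only a single tap, standing in for the multi-point pinch gesture. The function name here is hypothetical, not any real map API.

```javascript
// Sketch: single-pointer zoom buttons as an alternative to
// pinch-to-zoom. Each tap nudges the level and clamps it to range.
function clampedZoom(level, delta, min = 1, max = 10) {
  return Math.min(max, Math.max(min, level + delta));
}

let zoom = 5;
zoom = clampedZoom(zoom, +1);  // "+" button tapped -> 6
zoom = clampedZoom(zoom, -1);  // "-" button tapped -> 5
zoom = clampedZoom(zoom, +99); // can't zoom past the max -> 10
console.log(zoom); // 10
```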

For Motion Actuation, we’re talking about interactions that use device motion. We need to make sure those can be done with the UI. So, a great example, I remember when Facebook came out with 3D videos and I was on a plane and I was frustrated because I was like I look like an idiot waving my phone around on a plane full of people trying to watch a concert of some sort and trying to get the 360 experience. And eventually after they released it there was a way to swipe back and forth. If it wasn’t Facebook that came out with it originally, I can’t remember, there was another company that had, like, images where you were able to slide back and forth like panoramas. That’s a perfect example. Just allow your users to be able to swipe or put one finger, have arrows left and right, so they don’t have to wave their phone around because not everybody has that ability.
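The 360-degree video case can be sketched the same way: on-screen left/right arrow buttons as the UI alternative to waving the device around. In a real app the motion path would come from a `devicemotion` event listener; the names below are hypothetical.

```javascript
// Sketch: panning a 360-degree view with arrow buttons instead of
// device motion. The heading wraps around so it stays in [0, 360).
function pan(headingDegrees, direction, step = 15) {
  const delta = direction === "left" ? -step : step;
  return ((headingDegrees + delta) % 360 + 360) % 360;
}

console.log(pan(0, "left"));    // 345 -- wraps backwards past zero
console.log(pan(350, "right")); // 5   -- wraps forwards past 360
```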

Target Size. Now, oh my gosh, I can hear the cries, “But Shell, Target Size is AAA!” I know, and I don’t really care that much, so let’s talk about it! “Actionable items must be 44 by 44 CSS pixels.” Not too crazy. So the reason why Target Size is AAA is because there are certain actionable items for which it’s not practical to be 44 by 44. An example of that would be links in a sentence. You can’t make those ginormous because that doesn’t even make any sense. It’s also difficult because if you have multiple ways to access something, only one of them has to meet it, so that’s pretty good. But when we’re talking about it being AAA, understandable, but maybe we can just talk about icons or buttons. Because, when we’re talking about standards, both Apple and Google already have best-practice standards here, so if you’re following the Apple and Google standards, then you have already passed the expected touch target size. The Human Interface Guidelines from Apple recommend 44 by 44, and the Android Material Design guidelines say 48 by 48, with a recommendation of 7 to 10 millimeters in physical size. And we’re talking about log-in buttons, cancel buttons, sign-out buttons; it’s pretty practical to size those appropriately, and when you’re coming from the web over to a mobile browser, those buttons are tiny and it gets really difficult.
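A target-size check is easy to automate. This is a minimal audit sketch: in a real page you would feed it `element.getBoundingClientRect()`; here it takes a plain object so the logic stands alone.

```javascript
// Sketch: does a control's rendered box meet the 44x44 CSS pixel
// target size? Both dimensions must clear the minimum.
function meetsTargetSize(rect, min = 44) {
  return rect.width >= min && rect.height >= min;
}

console.log(meetsTargetSize({ width: 48, height: 48 })); // true
console.log(meetsTargetSize({ width: 44, height: 20 })); // false -- too short
```

Running something like this over every button and icon in a build is a cheap way to catch the itty-bitty targets before users do.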

And last but not least, Reflow. Code should be responsive; it’s pretty simple. Make sure content doesn’t require scrolling in two dimensions. If you have a page that isn’t responsive and you have people scrolling left and right, up and down, that doesn’t pass reflow.
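A quick reflow smoke test can be sketched as a pure check: if content overflows the viewport horizontally at a mobile width while the page also scrolls vertically, users are being forced into two-dimensional scrolling. In a browser the numbers would come from `document.scrollingElement`.

```javascript
// Sketch: detect the two-dimensional scrolling that Reflow forbids.
function hasTwoDimensionalScroll(scrollWidth, clientWidth, scrollHeight, clientHeight) {
  const horizontal = scrollWidth > clientWidth;
  const vertical = scrollHeight > clientHeight;
  return horizontal && vertical;
}

// A fixed 960px layout viewed on a 320px-wide phone:
console.log(hasTwoDimensionalScroll(960, 320, 4000, 640)); // true -- fails Reflow
// A responsive layout that wraps to the viewport width:
console.log(hasTwoDimensionalScroll(320, 320, 4000, 640)); // false
```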

So, even with the updates mobile accessibility is still a lawless wasteland and that’s because mobile accessibility is difficult. There are so many nitty gritty interesting interactions that it’s just so hard to write standards for. So, it’s not the fact that WCAG standards aren’t good or they are not enough, it’s because mobile is really difficult and it’s difficult to write standards for.

So, let’s talk about a scenario real quick in terms of responsiveness. So the rule being, “Content must be responsive.” So Reflow: it must have appropriate breakpoints. The issue I was talking about earlier: old legacy content does not wrap and break on mobile. I see that a lot in government, I see it a lot in higher education, and I see it a lot in older companies. You have websites. They are beautiful, they work, but they are not responsive. And if they are responsive, they are not responsive down to the breakpoint of a cell phone. That’s a big issue.

When we’re talking about it not being responsive, we’re breaking Reflow, which is no 2D scrolling, obviously with exceptions such as tables, pictures, maps, diagrams; there are plenty of examples. But that’s to 400%. Pointer Gestures: your user should not be forced to pinch and zoom. Maybe you’re able to get away with a double-tap zoom; maybe it can work out. Then we also have the 2.0 standard of Resize Text, which is being able to zoom text to 200%. So, if it’s not even responsive, it’s obviously not going to listen to what the OS has to say about changing text size. And then likely, if your old legacy content isn’t responsive, it probably will not work when changing orientation, or if it does, it’s just another view of the same giant website that people were scrolling around on.

So that’s a lot of fails right there just for a homepage that doesn’t respond and someone is accessing it through their mobile browser.

So you have two choices basically: make the page responsive or create a mobile version. Like, good luck. That’s a lot of stuff. It’s a lot to do. Especially when we’re talking about legacy pages when there’s just a ton of them; even if it’s on a roadmap to update, it’s still a lot of content and that’s very difficult. And when we’re talking about just making willy-nilly applications, users want to access the functionality they have on the web with mobile, and they want them similar if not the exact same. If you want to look at the iOS App Store or the Google Play Store and find really, really angry users, look for websites that have apps that do not have the same experience, do not have the same functionality. That’s when you get angry users. So, we’re not even talking necessarily specifically about accessibility. Having differentiation between the two different experiences…nobody wants that anyway. So your best bet is to make things responsive. Make them break appropriately. “Code responsibly,” as I’ve been told. And remediate them in that way, because sometimes just throwing native code at something is not the answer.

So let’s move into Section Three. This is the last major section let’s just talk about some practical examples. Now, before I get into this I will say I am not a developer. That is not my strong suit. I personally am neurodivergent myself. So the programming that I do is limited at best.

So if anybody has any incredibly technical questions, I’m going to send you to Google or maybe someone else. But a lot of my experiences have to come with designing for accessibility and designing in mobile spaces. But also I do have some technical know-how, as well. I just wanted to preface that so nobody is expecting some big sexy page of code or anything.

So the first thing we’ll talk about will be camera functions. Now, for camera functions, working in banking I kind of thought about two different things. Biometrics: that could be something like Face ID. Wells Fargo has its own internal face recognition. You can Google it; there are videos up showing what it looks like and how it behaves on YouTube. So, you know, having something that has camera functions, like fingerprint or Face ID, those different types of functionality, you get the picture. And then in banking, I know a lot of people love the fact that they are able to take pictures of their checks and send them in. I bank with several banks myself and they all have that functionality. There’s also receipt capturing for business trips, for expenses. But also a lot of the…I don’t want to say money-saving apps…but financial apps that help you balance budgets, some of them have receipt capture as well to help you balance your checkbook, so on and so forth. But, basically, anything that’s forcing your camera to be used: it could be scanning a ticket, a code, maybe a QR code.

So, basically, when it comes to camera functionality, from my experience, what I have found from remediating different types of camera functionality, from reading about them, from hearing from co-workers and people on the interweb, is Number One, “Information is key.” So, if it’s being displayed to your user, if something is happening and your user needs to know about it, it also has to be read by a screen reader. Any kind of tips, tricks, hints, anything that’s being communicated: “Move your camera closer,” “Move your camera further away,” “It’s too dark,” “It’s too bright,” that kind of content. As long as it’s displayed in a way that can appropriately be seen by people who have a vision-related disability and can also be accessed by a screen reader, that kind of information is crucial. Anything about moving your camera closer or moving it further away that matters for someone who is sighted is monumentally more important to a screen reader user, and screen reader users can use cameras successfully if we do our jobs right, plan ahead, and make sure these experiences are made with them in mind. So information is key.

“Simple is better.” So when it comes to things like cropping images or forcing users to take things at certain orientations, for example, the simpler you can make the camera functions, the better. It’s not too complicated. So I know for example some check picture things require you to crop. Some don’t. Now some users can’t crop. So if they can’t crop, what happens if they don’t? That kind of information. So if you can make it simple for them, then do. And then, when it comes to being simple, also automation. We’re talking about auto capture images, so it takes a picture for you. It also has auto flash and then anything basically auto, so, auto focus, auto flash, if you’re able to automate things and take the burden off of the user when it comes to camera function, that’s great. You should do that.
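Putting "information is key" and "simple is better" together: one way to sketch it is a single function that chooses the hint, so the same string drives the visible text and the screen reader announcement (an `aria-live` region on the web, or an accessibility announcement in native code). The condition names are hypothetical inputs from whatever camera framework is in use.

```javascript
// Sketch: pick one camera hint to both display and announce, so
// screen reader users get the same guidance as sighted users.
function cameraHint({ tooDark = false, tooClose = false, tooFar = false } = {}) {
  if (tooDark) return "It's too dark. Try adding more light.";
  if (tooClose) return "Move your camera further away.";
  if (tooFar) return "Move your camera closer.";
  return "Hold steady.";
}

console.log(cameraHint({ tooFar: true })); // "Move your camera closer."
console.log(cameraHint());                 // "Hold steady."
```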

And then, obviously, give them options, because sometimes auto capture is not good. If someone has a physical disability, maybe it takes them a little bit longer than the auto capture allows. It might be a barrier for them, so give people options. If they can’t get a good photograph one way, allow them to try doing it manually. I know the BECU app does that specifically: if you fail too many times, they give you the option to just do it yourself, which is appreciated.

Right, moving on we’re going to talk about Moving Content.

So, hopefully, I’ll get my slides up eventually, but at CSUN I gave a presentation on Pause Stop Hide and what happens with Pause Stop Hide. So, if someone has an attention-related disability, moving content is a big deal. It’s funny, I’m going to try to sum up an hour talk in one slide, but I’ll do my best.

So when we’re talking about Moving Content, we have micro interactions, moving ads, and timers, tickers, scrollers. So, when it comes to mobile, these things get exacerbated because the screens are so little, so something that’s moving now is taking up much more real estate on a phone screen, so this moving content can be so much more intrusive, especially when we’re talking about moving ads in mobile which I’ll get into right now.

So first point I have is, “Pause Stop Hide. Are you there? Hello?” So, apparently, collectively, everybody has ignored that we have a criterion called Pause Stop Hide: “Content that lasts for longer than five seconds must be able to be paused, stopped or hidden,” and that includes ads for sure.

So, ad blockers: totally a thing! But if I’m accessing a Web site, say, I’m on Twitter and I select a link and it sends me to, like, some sort of journalism site, you bet your bottom dollar there will be 8,000 moving ads on that that I have no control over and I’m unable to close or move away from.

Now in the talk I gave, I had a slide that said, “Your users shouldn’t have to be hackers to use your software.” So, yeah, there are 6,000 different workarounds, but in reality, shouldn’t we just follow the standard for Pause Stop Hide? So micro interactions are something that can be a barrier. I had a really big issue with the auto-complete feature that Google implemented. So, basically, in Gmail, as you’re typing it suggests the rest of your sentence, and it suggests that for you in line with what you’re typing, versus typing in a search bar, where you have drop-down options. Having it fill in next to you is a micro interaction. Technically, it’s not a fail for the standard but, boy oh boy, was that a barrier! An incredible barrier! And there are plenty of little micro interactions like that that, yeah, technically pass, but they are a really big problem, especially with how much people love micro interactions. And don’t get me wrong, some micro interactions are incredibly important. A, “Hey, I’m over here!” can be really important, but also a button jiggling saying, “Hey, I’m over here!”…if it doesn’t stop, it can end the path.
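The Pause Stop Hide rule itself fits in a one-line predicate, which makes it easy to bake into a design review checklist. This is just a sketch of the criterion's trigger condition, not a full conformance test:

```javascript
// Sketch: WCAG 2.2.2 Pause, Stop, Hide as a predicate. Content that
// moves automatically for longer than five seconds must offer a
// pause, stop, or hide control.
function needsPauseControl(movesAutomatically, durationMs) {
  return movesAutomatically && durationMs > 5000;
}

console.log(needsPauseControl(true, 3000));   // false -- under five seconds
console.log(needsPauseControl(true, 30000));  // true  -- must be pausable
console.log(needsPauseControl(false, 30000)); // false -- user-initiated motion
```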

So, in iOS there is a function to reduce motion: `isReduceMotionEnabled`, one word, no spaces. You are able to code things to have reduced motion for your users; we’re talking about parallax scrolling or things that cause nausea or dizziness. It also helps with, like I was saying, micro interactions. But is checking `isReduceMotionEnabled` really general practice? I wish it was, but the sad thing is that I, personally, who have an attention-related disability and find moving content to be a barrier, don’t use iOS. I’m an Android user, so Reduce Motion sounds great but I’ll never benefit from it. In general, though, thinking about these types of things, because native is so different from HTML…if something like Reduce Motion was used more, we could break down the barriers for people with disabilities like mine. Another big takeaway for moving content: “Do not tie data saving to whether I want moving content on or off.” In my presentation, I had an example from Pinterest. The only way I was able to turn off auto play, which is an incredible barrier for me, was specifically when I wasn’t on WiFi. I’m unable to sit in my home on WiFi and enjoy auto play being off, because to the developers it’s only a data-saving technique. They are not thinking beyond the data saving and thinking about how moving content is a barrier.

Now, technically, “technica11y,” these all passed the standard because the auto play has five seconds to do its thing, first of all, and second of all, if I can pause it, which a lot of times you are able to pause like moving video ads on Pinterest and different things like that you’re able to technically pause it or close it, it’s technically not a fail but it’s still a barrier, so having access to reducing the amount of auto play features is really great.

Twitter and LinkedIn are two great examples where you’re just able to toggle it off. I don’t want things to auto play: that’s gifs, that’s videos, that’s anything that’s moving, basically. I’m able to toggle it off, so that’s a great feature. If you do have a mobile app and you do have content that auto plays, I highly recommend thinking about creating a toggle, because I myself find that, depending on how many spoons I have for the day, it can be make or break for me.
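The toggle idea above can be sketched as one small decision function. This is hypothetical code, not any real app’s API; the names are illustrative. The point is that an explicit in-app toggle and the OS-level reduce-motion setting each stop auto play on their own, so the behavior is never tied only to a data-saving check:

```javascript
// Hypothetical sketch: decide whether media should auto-play.
// All parameter names are illustrative, not a real API.
function shouldAutoplay({ userAllowsAutoplay, reduceMotionEnabled, meteredConnection }) {
  if (userAllowsAutoplay === false) return false; // the in-app toggle always wins
  if (reduceMotionEnabled) return false;          // honor the accessibility setting
  if (meteredConnection) return false;            // data saving is one reason, not the only one
  return true;
}
```

With this shape, a user on home WiFi who has toggled auto play off still gets it off, which is exactly what the data-saving-only approach fails to do.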

Next is Back Button Behavior. This one is near and dear to my heart. I have a lot of feelings about it.

So, when we’re talking about back behavior, there are a lot of different “backs.” We have a Browser Back. We have an OS Back, like the Android hardware back, and then we also have a built-in Native Back. That’s three back buttons, and that’s a lot of different behaviors.

So, let’s talk specifically about a native back. If you have a native back, say in the upper left-hand corner of your window on a 100% native page, that back button will send users back one page. Now, say you’re at the end of the flow and the user just submitted. They were given an “Are you sure you want to submit?” screen, they agreed, they said yes, and they moved to a “Congratulations!” screen; the thing they wanted to do is totally done, high five! They are on that screen. What does “back” mean there? If they were to go back one page, it would send them either to an error or to accidentally resend what they had sent. If we’re talking banking, that could be another payment. If we’re talking about buying something, like movie tickets or concert tickets, that could be accidentally buying several tickets. So, a lot of times the way people handle it is just to get rid of Back or to disable the hardware back. But if you have a button in the upper left-hand corner that says Back on every screen except the one where it would technically navigate your user to an error, that’s a consistent navigation fail, because the navigation item is not consistently located and consistently available on every page.

So, disabling the hardware back can’t just be a scapegoat. It’s something that we can absolutely design around.

So, how we design around that, and how people have designed around that, is to ask yourself, “What does Back mean?” So, my users at the end of a flow, where do they actually want to go if they want to go back and a lot of times, it’s back home. It’s back to the beginning of the flow. Maybe it’s a flow that they can do more than once. Maybe it’s an order form or something that they can repeat multiple times and go through the process multiple times.

Maybe they are setting up an account. It could be a thousand things. So when you give the user a “Congratulations, you’re done,” Back could mean something other than “I want to go back one page.” So, ask yourself, “What does back mean? What does my user want in this experience?” Because when you’re disabling a hard back, so we’re talking about the Android hardware back, what if a user is in the exact same flow but on the web? Are you going to be able to disable the back button on the browser? Or are we just going to have to design around it? Because there are so many different experiences with a Back button, we have to be able to design around them. Especially since iOS has no hardware back; it relies heavily on a native back. So, say you did get rid of your native back but you did not disable the hard back: right there you’ve got two inconsistent experiences. Back buttons are pretty complicated, but I recommend that you just ask yourself what your user wants, because getting rid of a Back behavior because that Back will put them into an error is not the answer.

“Tool Tips.” This is a quick one because it’s pretty simple. So, tool tips can be really important. They hold information; they can contain help content and identifying information. But tool tips don’t really work on mobile. They are not information you can really access. And the problem with that is that tool tips would be even more useful in a mobile space, because you have less real estate. Getting rid of labels, moving away from labeling things appropriately and from identifying and differentiating between different icons, items, you know, actionable things, is really popular. “How can we save space? Get rid of the label. Let the users try to figure out what that means.” And that kind of experience is really difficult.

Now, technically, there we go again: say we’re talking about a JavaScript-bridge-wrapped native experience in an app, so it’s HTML code, and it has a tool tip on it. Technically, that passes the standard. But on a mobile device, for somebody who is sighted, that doesn’t really help them. If you are a non-sighted user and you’re using a screen reader and you roll over that icon, item, button, blah blah, you will get the tool tip information; it will be read to you and everything will be great. But if you’re a sighted user with a cognitive disability, for example, a chevron or a button that says “1” with no label next to it doesn’t really mean anything. Or an ambiguous up arrow with a “2” on it: like, what does that even mean? So, small icons, even though they are different and pass color contrast, are still not enough; we can’t just do away with labels. So we have to find a way to work labels back into the way that we differentiate buttons. Now, there are some things that have obviously transcended labels, such as the hamburger menu. There’s also the vertical three-dot menu that just means “More.” Certain icons have transcended, but, in general, we just can’t do away with labels left and right.

And then, yeah, Tool Tips, super important. But again it’s kind of a funky experience. And this isn’t prescriptive, like maybe there are examples where it doesn’t matter, where it’s good to go, but in general you can’t rely on that kind of background information if you have a set of users who don’t have access to that information.

Okay. Next, and I think we’re getting close to the end, Hidden Page Titles.

So, page titles versus H1s: the fact that they are hidden, and the fact that they help with wayfinding. This is more for iOS because, as mentioned previously, there is really no way to do an H1; it’s just heading or not heading. So page titles are really important on a native page, and sometimes there’s really no place to put a title, no way to make the big, broad, sexy “this is the H1”; sometimes pages are a little bit more convoluted. For example, a success page doesn’t really have anything on it other than maybe a notification or an icon or an image.

So a way to do that is `setTitle:@"Home"` or whatever you need. I spend a lot of time at work with our content writers determining the best page title for a particular native page. They are hidden, obviously, because it’s a page title.

It’s just about adding extra context for non-sighted users. What’s really important is wayfinding, which is why every page needs a unique page title. It’s obviously the way we do it on the web, but it also needs to be done in mobile, and people don’t really think about that. If we’re thinking in terms of H1s, it’s easy to forget about page titles, but for iOS the H1s really won’t help you out. It’s more of a “don’t forget” than anything else. Obviously it’s not ground breaking.

And I think this is my last one. We’ll see. Navigation.

So, just something interesting. This is not prescriptive or telling anybody what to do; it’s something I find incredibly interesting. So, Navigation. When I’m talking about navigation, I’m talking about swipe order, focus in the DOM, so focus order, meaningful sequence, and keyboard behavior.

So, here’s something I found fascinating. I had the pleasure of visiting Yahoo and getting a look at their Accessibility Team. They do really fantastic accessibility user experience testing. And we were told a story, my team and I, about a time they were so proud going into user testing. They really felt like they had covered everything. They had users with vision-related disabilities, and they felt ready to rock. And then a guy sat down and whipped out a Bluetooth keyboard, and the whole room went silent, and this guy used their app with a Bluetooth keyboard and it broke everything. Because people do use Bluetooth keyboards; it’s definitely a thing. Again, I was thinking about the work that I do with a Chromebook. I have a keyboard attached, and I’m practically using some native apps, so if I tab through things, what happens? A big thing I find with focus order and meaningful sequence when it comes to native stuff is that the focus can get lost so easily: behind modals, behind popups, or when you have multiple pages open. Say you open one window from another in a native app that supports that behavior. If you swipe too far, you’re still looking at Page 1, but technically you’re focused on Page 2, which is not visible. Just really interesting stuff like that. So if we only think about native in terms of screen readers, and don’t think about the people who are using keyboards, we’re missing something. An interesting story was about skip links. So, if a skip link does not show visibly, obviously that’s an accessibility violation.
When you’re thinking about native, you’re like, “If it’s announced to the screen reader user, that’s the main user; who else will need a skip link in this regard?” Pardon me, my animals in the background are making noise. But the main user who will want this is a screen reader user, so it doesn’t have to be shown visibly. But then, if someone gets a Bluetooth keyboard out and the skip link isn’t visible, they’re not going to hear that announcement and they will miss out. So that kind of blew my mind, and it’s like, “My gosh, people use Bluetooth keyboards with cell phones?!” More anecdotal than anything else.

So, wrapping up, because we are short on time. In conclusion, when it comes to making accessible design decisions, we cannot wait for the WCAG standards to catch up, and that’s not because the standards aren’t good enough. It’s not because the bar is low. It’s because making standards for mobile is really, really hard. It’s really hard to write a standard with a clear pass/fail goal and a repeatable, testable way to check it. So, you have to move beyond, into the dark water, and we have to determine best-practice standards for things like Back buttons, and for things like tool tips that aren’t visible: it’s a problem, but it’s not a violation. We have to figure out ways to work beyond the standards to create experiences for people like me, who have attention-related disabilities. Yeah, technically you pass really great standards, but the fact that I can’t read articles because I’m bombarded with moving ads is a problem.

Next, mobile accessibility is hard because the lines between web and mobile are way too blurred at this point. We’re serving up HTML code wrapped up, we’re accessing special mobile sites via browser. People are going to access your content in ways you have never dreamed of, they are going to be on an Xbox surfing the web. I’m having a hard time with Netflix on my smart TV. The way we access technology is way beyond cell phones and computers. We’ve got tablets. We’ve got SmartWatches. We have smart televisions.

There’s thousands of different ways that we’re able to access content. So, if we’re thinking very much web and mobile, it’s not the way to think about it anymore. It’s muddy, it’s dark, it’s scary, and we’re all in it together. (Chuckles)

When code truly is native, it must be treated with a different approach than web code. Simple. I’ve made that point quite a few times. If you’re going into, say, testing for accessibility violations in mobile and you’re looking for exactly what you would look for on the web, you’re going to have a bad time. A perfect example that I made a bunch of times: H1s and headings. It’s just a different experience. I think this is the last one, yes. Mobile accessibility is going to take time, creativity, and input from persons with disabilities, and I will repeat that, input from persons with disabilities, to get right. It’s going to take time. But there are awesome companies doing awesome things when it comes to accessibility where there really are no standards. For example, people who are blind are able to take selfies on iOS because of the way the phone audio-describes, the way it communicates to the user, and that’s amazing! Things like that: creative, innovative ways. I’m not an iOS fangirl myself, but I do absolutely recognize that there are some really great things happening. Speech-to-text technology is changing my life. It’s amazing. The way that we’re using smart devices, these are going places, and the people creating those devices have worked alongside people with disabilities in ways that are really fantastic. So there are really great things happening. It’s not all doom and gloom. There’s really good stuff going on, but it’s just going to take going a little bit beyond the standards. It’s going to take brainstorming and making a lot of mistakes first before we get the right answers.

Again, my name is Shell, and you can find me @ShellELittle on Twitter. I did put my email address out there; I won’t read it aloud because I don’t support anybody emailing me. LinkedIn is better. Thank you for your time, and I think we have a little bit of time for questions.

>> MICHAEL BECK: Yeah, thank you so much, Shell. It looks like…a lot of great stuff. I see you really spoke to Mallory; she was in the chat and cheerleading you quite a bit.

>> SHELL LITTLE: Oh yeah. :P

>> MICHAEL BECK: It was almost like a woman at a black Southern Baptist church screaming, “Preach on, brother!”

>> SHELL LITTLE: That’s great I’ll have to look at that. I can’t see it right now.

>> MICHAEL BECK: Yeah, does anybody have any questions? We’re just getting some great positive comments. Oh, someone may have a question.

>> SHELL LITTLE: Okay. I saw some pings and I was like, people will have lots of questions but I’m glad I got Mallory cheering in the chat.

>> MICHAEL BECK: Oh, yeah. What speech-to-text program do you use, if you use any?

>> SHELL LITTLE: So for speech-to-text: in my free time, I stream on Mixer, a plug for Mixer, and I rely heavily on the Google speech-to-text API, so anything that’s built into my device when it comes to searching, typing, texting. I gave a talk in Toronto at the a11yTO Conf, and an example I used there was the fact that Pinterest does not allow me to use the voice-to-text feature in their search, which is a really huge barrier for me. So if anybody knows anybody at Pinterest, let me know. Because, having dyslexia, I rely heavily on speech-to-text when I’m looking up recipes, so I’m not just looking up garbled words.

>> MICHAEL BECK: Is there any place that you would suggest developers go to learn more about mobile techniques?

>> SHELL LITTLE: Yeah, I’ve consumed a lot of the content from TPG. They have an iOS guide and an Android guide, and I highly recommend those two pieces of literature. Unfortunately, I’m not sure if they are paid or not, so that might not be incredibly beneficial, but it is worth it to me. As a non-developer, especially with iOS, I have to spec out a whole project, so basically break down every element of a project to deliver to the developers, and using the TPG guides, I was able to really learn a ton and successfully communicate the accessibility needs to our developers.

>> MICHAEL BECK: Okay, and Pamela points out that, for instance, like a library catalog may have extensive info on the desktop version but for mobile the details are usually significantly scaled back, that actually I know that very well as a former librarian myself. Would that be considered like a violation? Because she always thought that scaled down content for mobile was more accessible and not necessarily less.

>> SHELL LITTLE: When you’re talking about scaled down information, are you talking about, like, say like a description on a book has less content?

>> MICHAEL BECK: Yeah that generally happens. It also might have — it might pull off information — oh I’m trying to think here it’s been a while since I’ve looked at a mobile catalog. It may have less information but not necessarily vital. Like on the mobile catalog it may just have maybe a picture of the book or the title of the book and where it is and the call number and whatnot and maybe the availability across branches as opposed to on a regular web it may have a full-on description. It may have reviews of the book and all of that sort of thing.

>> SHELL LITTLE: Uh-huh, got you. Yeah, so I think optimizing stuff for a mobile experience is really important because, correct, you don’t want to bombard your user. But at the same time, that’s what expandable content sections are for, and that’s what “click here to learn more” or expand is for. So, maybe minimize the information that you’re giving upfront, but if the user really wants to find that extra info, it shouldn’t be on a different platform. It should be behind, you know, maybe a pop-up window that gives you all of that information, and users are able to scroll through. But if we’re talking about pulling out the fluffy stuff and leaving the really important content, I see no problem in that, as long as the content would be considered parallel and equal in what a user would get out of it.

>> MICHAEL BECK: Okay yeah Pamela just pointed out that maybe a title might have 15 subject tags and 10 authors on a desktop and on mobile it might just have the first 3 subject tags and the first 2 authors. So that…

>> SHELL LITTLE: Got ya.

>> MICHAEL BECK: That may not be as successful because that’s a little more vital.

>> SHELL LITTLE: That’s where I would say like having a dot dot dot or, like I said, click here to or tap here to read more would be really important in that situation.

>> MICHAEL BECK: Have you heard of hooking up phones to laptops to see the code? Mallory has heard of people hooking iPhones up to Macs and inspecting them in Safari, and hooking up Androids and viewing the source in Chrome.

>> SHELL LITTLE: I have heard of it, but I haven’t had the experience of doing it myself, nor have I seen it in person, but it sounds super interesting to me.

>> MICHAEL BECK: John points out TPG’s testing guides are freely downloadable, so go check that out.

>> SHELL LITTLE: Fantastic.

>> MICHAEL BECK: And one final question for you as we’re nearing the end of our time. Chris Frye has a question about iOS headings. He audits a lot of interfaces that use a ton of bold text to delineate categories, such as dates with several child elements under those dates, but on some screens there are weeks’ worth of items, so we are looking at a minimum of seven headings on small screen real estate. Is there a soft rule of thumb when considering how many headings are appropriate, or when developers should consider reorganizing the information?

>> SHELL LITTLE: Yeah, so, first of all, hi, Chris Frye. I’m not sure I fully understood; let me look through the question one more time. So, basically the question is that there’s too much content?

>> MICHAEL BECK: Yeah, if you’re hitting seven headings, that’s quite a bit. Is there a soft rule that you follow that would be like, hey, maybe we should cut it off here or go back to the drawing board and reorganize?

>> SHELL LITTLE: Yeah. If we’re talking about a shared codebase, so maybe it’s a web-based thing wrapped into or broken down to a mobile device size, and it’s not working at that size because there’s too much content, because it’s too cluttered, then honestly that sounds like a great reason to reorganize. If there’s that much content and some of it is unnecessary, users, potentially screen reader users, accessing it on the web will find it just as cumbersome as somebody doing it on mobile. So just reorganize and minimize content, because when we optimize for mobile, we’re optimizing all experiences: getting rid of extra clutter, getting rid of unnecessary paths that loop. I’ve found that when we have redone old legacy stuff and just cleaned out the content and given it a facelift, it makes the experience better all over.

>> MICHAEL BECK: Okay. I think that was it. Well, PJ has a quick question: where should we go before presentations to read about the upcoming topic? Well, you can go to technica11y.org. That’s technica11y.org. We usually have all of the bio information and topic information at least two weeks before the next talk. Speaking of which, next month we’ll have Michelle Williams, who is a Senior UX Researcher for Accessibility at Pearson, and she’ll be on to discuss what’s really needed to conduct accessibility user research and how that generally involves having an accessible ecosystem to work with. That will be on May 1st. Oh, it’s on May Day! Wow, instead of dancing around the maypole with Lord Summerisle and burning a Wicker Man that may or may not have Nicolas Cage in it, come listen to Michelle Williams here on technica11y on May 1st at 11 a.m. If you missed anything, keep an eye on Tenon’s website; we’ll let you know via social media when the recording is available, or subscribe to the channel on YouTube to get automatic notifications. And with that, I would like to thank Shell again for her excellent presentation, and all of you for joining us today, and we’ll see you all next month. Thank you!

Shell Little

About Shell Little

Shell Little is the Mobile Accessibility Lead at Wells Fargo DS4B.

The Interaction of Color Related Guidelines in WCAG 2.1


[Intro music]

>> MICHAEL BECK: Welcome to technica11y, the webinar series dedicated to the technical challenges of making the web accessible. This month, our presenter is Luis Garcia, Senior Product Manager for Accessibility at eBay.

Hello, everyone, and welcome to this March edition of technica11y. Thank you so much for joining us today. And say hello to the Facebook world: as you may have seen through the sharing, we are currently live streaming from our Tenon page. This month we have Luis Garcia, Senior Product Manager for Accessibility at eBay, and he’ll be talking about the various color-related guidelines in WCAG and how fixing one issue might just create issues in other parts of the page. As always, we’ll be answering questions at the end, so you can put them in the chat box beforehand, but we’ll get to those when Luis is done with his presentation. So, without any further ado, take it away.

>> LUIS GARCIA: So hey, everybody, thanks for coming out and signing up.

We’re going to be talking today about color alone, color contrast, color conflicts. All sorts of fun stuff that can happen when you’re doing color stuff with WCAG. So before we get started, I have the slide deck URL for anybody that wants to follow along at home or on your personal device. The URL is Garcialo.com/2019/technica11y.

And, as I was introduced, I’m Luis Garcia. I’m a Senior Product Manager for Accessibility at eBay if you want to follow me on Twitter I’m @garcialo. Let’s get started. So, a real quick overview of what we’ll be talking about. For the last few years I’ve been giving a talk where I explore exceptions to the WCAG success criteria and when I cover color related things, I show these two examples. So there’s two sentences. The first one says somewhere in this sentence is a link that fails 1.4.1 which is the use of color guideline or color alone guideline, and the thing with this one is the word, “sentence,” is a link. At least to me, it’s pretty clearly, you know, I can tell that that blue is a link.

And then I have a following sentence which says, “Somewhere in this sentence is a link that passes 1.4.1.” In this sentence, the word “sentence” is also a link, but it’s a little bit less noticeable to me that it’s a link. So there are some weird things going on with the contrast and the algorithm that we use for calculating sufficient contrast for colors. I would also say things like, I kind of hate the color alone success criterion. The color contrast algorithm doesn’t always work, as we see in the example above, but it’s what we have available to us. The only difference between nothing and text is color in the shape of letters. I say things like, “I’m going to make a talk about how much I hate the color alone guideline.”

And I’ve also encountered many things from wonderful people I’ve worked with: designers, developers, people trying to meet the letter of WCAG but not necessarily the spirit. This isn’t a nice talk that will make you feel all pepped up and gung ho about accessibility. It’s not about gotchas, like, “When you use color, look out for this and work through it that way.” It’s more of a warning about some things people might try to get away with, so you can be prepared to say, “No.” If they say, “Well, this technically passes,” you can be like, “Yeah, that does technically pass, but we really should not be doing that.”

So we’ll go ahead and look at our agenda.

So as the title of the talk suggests, we’re going to be looking at color alone. We’ll look at color contrast. Then we’re going to be looking at color conflicts. Color conflicts in this case isn’t necessarily where things butt heads. It kind of is, but it’s also about things that aren’t completely clear from the guidelines: given the circumstances, what should you do? Or there’s not enough guidance for how to make a judgment call.

But first, let’s cover the guidelines to see what exactly we’re talking about because I’m sure not everybody here is on the same page as far as, like, there’s people who are new, people who have been here a while doing accessibility so we just want to make sure we’re covering the basics.

So the color alone or use of color guideline, 1.4.1 in WCAG, says, “Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.”

And this affects users with low vision. This affects users with various types of colorblindness. And a note on that is that when we’re considering use of color, we want to make sure that we’re including all types of colorblindness. So I’ve seen several times where a designer, they are trying to do the right thing. And they are like, “Oh, we’re going to choose these colors because they work with people who have red-green colorblindness because that’s the most common type of colorblindness.” It’s, like, “Yeah, that’s good and I applaud that effort.”

But, just as people working in the accessibility industry, we are representing people that have disabilities and are typically like a small percentage of the population. So, just as we’re advocating for people we’re saying these people that are a small percentage, they are a protected class, they need to have representation so just as we’re saying that, we should also within that group not discriminate against people who might have a more rare type of colorblindness or a rare combination of disabilities. We want to make sure we’re completely inclusive and that we’re considering all the way down to the smallest populations, including people who might have grayscale type of vision, so, just doing stuff for people with red-green colorblindness that’s not going to be enough.

I’ll stop digressing, and we’ll take a look at some of the common ways people fail use of color.

So, one of the most common ways is input errors.

So, we have two inputs on this screen. One of them is in a normal state: the first name field, a text input. Then we have another text input for last name, where the text is in red and the outline of the input is in red to show that it’s in an error state. That’s typically how this one gets failed: people will just change the color or something, so you’re using color alone to indicate the error status; it’s obviously an issue.

We might have charts and graphs where we’re linking the data that’s in the legend with the data that’s presented in the bar chart or in our graph using only color. So, the example I have is I have some, you know, no actual values here. Just some rectangles that represent bar charts, a bar chart. And then there’s a legend for lions, tigers, and bears. And then the only association between the two is the colors.

Then we have the classic one, “Links in a sentence.” This one, simplified, is links in a sentence, but it’s really color alone that’s used to indicate the interactive text when in the same context as non-interactive text. So, it basically means that in the same space, you have things that are interactive, things that are not interactive, and we’re using color to show what’s interactive and this typically occurs in sentences and paragraphs. But it also happens in lists and in tables so let me just show you these examples.

So, in the sentence it says, “Somewhere in the sentence is a link that fails use of color” and the words, “a link” are in blue. And so, the only things that are blue are a link and then everything else is kind of like the typical like black color.

But this also occurs in a list. In this list I have three list items: the first list item, second list item, and third list item. The second list item is in blue, the first and third are in black, so the blue is there to indicate it’s a link, and that’s another way we can fail it in the context of a list.

And then finally, “Links in a table.” So, I have a 3 by 3 table with different types of pies. So, I have the columns are pie flavor, sale price, and normal price. And then going down the rows, I have different types of pies, I have an apple pie, a pumpkin pie, and a key lime pie; all very delicious pies, but in this chart, they have different prices.

So, the prices that I have underneath the sale price column are all linked and all of the ones that are in the normal price column are not linked. However, they have basically the same visual styling with the exception that the ones that are links are in blue.

So, this is a place where we would want to do something in addition to avoid failing the use of color guideline.

So, what are some things that we can use to make sure we’re not using color alone? Some of the things that we can use are going to be text, where we just provide a little bit more information via text; we’re a little bit more verbose. We can provide underlines. We can use patterns, shapes, and icons for things like charts and graphs. And we can use the position of content. We can use brightness or luminance; that’s a 3 to 1 contrast ratio. And we can use other text styling like bolding, italicizing and changing the size of our text.

Here are some examples, kind of looking back at the examples we just looked at, but giving them a little bit more context, a little bit more information, to avoid using color alone. I took that last name error field from before and added some text into the label. It says, “Please provide a last name.” It’s probably not the best error message, but it’s basically some text showing how we can convey that there’s an error state using more than just color. Then, similarly, we have an underline for a link in a sentence: we have a sentence where the word “link” is underlined and it’s also blue, so we’re not just using color but also text formatting. Continuing with that, we can use other types of text formatting, such as bold. I have another sentence that says, “The link is emboldened,” where the word “link” is a link. It’s blue and in bold, making it visually distinct from the non-link text in the sentence. However, if you plan to emphasize content on your site, then you’re back to having the same issue.

So, that issue kind of happens with underline, as well. If you’re using underlines to distinguish links or if you’re using bold to distinguish links, then make sure on the same page in the same contexts, you’re not using that styling to mean something else. Because otherwise, you’re back where you started and you’re back to having color alone being used as the only means of conveying that information.

Then I have charts again. But now I’ve put some friendly little shapes inside them. Inside of the first bar, there’s a square, the second one has a circle, the third one has a triangle. In the legend, the bullet points for the three data points, “lions, tigers, and bears,” each correspond to a shape. So now we’re using some shapes in addition to the color to associate the content in the bar chart to the content in the legend.

Alright, so, let’s take a look at where we’re at in the story of this webinar. So, we have completed color alone, and kind of got an idea of how that works. Next we’ll look at color contrast, then we’ll get to color conflicts.

So, for color contrast we’ll ignore the AAA guidelines because most people don’t use those, and we’ll only look at the two success criteria at the AA level. That will be “Contrast minimum,” or as I’ll say, “Text contrast,” and then we’ll look at the new “Non-text contrast” guideline from WCAG 2.1.

So, for text contrast, this one is fairly straightforward. It applies to text and images of text, and any text or images of text need to have a contrast ratio of at least 4.5 to 1, with some exceptions. So, larger text can have 3 to 1 contrast, a slightly lower requirement. Inactive UI components are exempt from the contrast minimum. Similarly, decorative content, invisible content, and incidental text are exempt, and logos are exempt from contrast minimums. And a side note about text contrast: there aren’t any contrast maximums defined in WCAG.
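To make those numbers concrete, here is a quick sketch (my own illustration, not part of the talk) of the WCAG contrast math: relative luminance per sRGB channel, then the ratio of the lighter to the darker luminance. The function names are mine.

```python
def relative_luminance(rgb):
    # sRGB channels 0-255 -> linear-light values, per the WCAG 2.x definition
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of the lighter luminance to the darker, each offset by 0.05
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black on white is the ceiling: 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# The classic "just passing" gray, #767676, is about 4.54:1 on white
print(round(contrast_ratio((118, 118, 118), (255, 255, 255)), 2))  # 4.54
```

The 0.03928 threshold and 2.4 exponent come from the sRGB transfer curve as written into the WCAG definition of relative luminance.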

This is important to note because for some people with dyslexia, reading black text on white, something like a really high contrast, can actually cause pain. So, if you’re looking at the webinar, you can see the text I have is black, but it’s kind of an off-black, and then my background isn’t white, it’s kind of a creamy off-white color. So that’s there to not have such a stark contrast between the background color and the foreground color. Oh, and another note: don’t neglect your image alt text when considering text contrast, in case your images fail to load on your page. You want to make sure everybody can read the alt text that shows up in place of your images. Yes, you can style alt text. If you’re following along at home, you can click through that link and see generally how you would do that.

Alright, so, “Text contrast” clarifications. So, if you have much larger text, that doesn’t mean you can go below the 3 to 1 contrast ratio. I’ve had some people ask me, “What if we make our text bigger? How much lower can we go below the 3 to 1?” Well, WCAG doesn’t really give us any avenue to do that. It’s 3 to 1 no matter how large the text is, even if it’s up on a billboard or the Jumbotron; 3 to 1 is the lowest contrast ratio that’s allowed in WCAG.
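Put another way, the required AA minimum depends only on whether the text counts as “large” (roughly 24 CSS pixels, or about 18.66 pixels when bold), and it never drops below 3 to 1. A tiny sketch of that rule; the helper is my own, not from the talk:

```python
def required_text_contrast(px: float, bold: bool = False) -> float:
    """Minimum WCAG AA contrast ratio for text of a given CSS pixel size.

    "Large" text is >= 24px (18pt), or >= ~18.66px (14pt) when bold.
    There is no size at which the requirement drops below 3:1.
    """
    large = px >= 24 or (bold and px >= 18.66)
    return 3.0 if large else 4.5

print(required_text_contrast(16))             # 4.5 -> body text
print(required_text_contrast(24))             # 3.0 -> large text
print(required_text_contrast(19, bold=True))  # 3.0 -> large bold text
print(required_text_contrast(500))            # 3.0 -> even billboard-sized text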

And we have the logo exemption for branding kind of stuff, but that doesn’t really extend to brand colors. I learned that lesson myself. People who have been working in accessibility, or who have done contrast work trying to make colors pass, will know that orange, yellow, some of those colors are kind of hard to make work against white or against black.

And so when I was at UT Austin, for those who don’t know, the colors for UT Austin are burnt orange and white, and on our Web site all of our links were orange. So that was difficult, because finding an orange that worked was a bit of a challenge. But we were able to get it done; we were able to get our branding people on board to find an orange that was burnt enough for us, but not so burnt that it looked brown.

So the exemption doesn’t apply to brand colors, just for the logos.

And low contrast is often used as an affordance for disabled content, which means that that’s kind of a visual cue to users if something has a low contrast and typically in gray, that’s going to mean that the control or the element or that section of content is disabled.

And the last bit of clarification: the success criterion for “Text contrast” actually says that inactive UI components are exempt from the contrast minimum, but really that’s equivalent to being disabled in HTML, or unavailable for user interaction. So, the differentiation there between inactive and disabled is basically the difference between having, let’s say, a submit button at the end of a form that will only become available for the user to use after you’ve completed the required fields, for instance.

So, I would say that that’s disabled because in HTML we would put the keyword disabled on that input, which means it would be unavailable for users to interact with.

However, inactive just means that the user can interact with it, but it’s not currently what’s active.

So, a good example for that would be if you have a tab list, like a group of tabs, you can switch between the different tabs. So the one that’s active is the one that you’re on.

But, then all of those tabs you could go to, those are inactive. That’s generally the way that I think of the difference between inactive and disabled. In WCAG, they still kind of use inactive and disabled in a similar way. But, just so you know, that’s what I mean when I use those two words. And I would say the inactive content, such as the additional tabs you might want to click on, I think those should meet the contrast minimums because how are you going to know which tab you want to go to if you can’t read the content?

Alright, so, moving on to non-text contrast. So, this is a newer guideline introduced in WCAG 2.1. And this one basically says that UI components and graphical objects that are needed to communicate information or functionality must have a contrast ratio of at least 3 to 1.
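For a rough sense of what 3 to 1 means in practice, here is a small check (again my own sketch, with hex colors I picked as examples) comparing two adjacent colors against the non-text minimum:

```python
def _luminance(hexcolor):
    # WCAG relative luminance from a "#rrggbb" string
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(int(hexcolor[i:i + 2], 16)) for i in (1, 3, 5))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def meets_nontext_minimum(color_a, color_b):
    """True if two adjacent colors meet SC 1.4.11's 3:1 minimum."""
    hi, lo = sorted((_luminance(color_a), _luminance(color_b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05) >= 3.0

# A mid gray border on a white page clears 3:1...
print(meets_nontext_minimum("#767676", "#ffffff"))  # True
# ...but a light gray one does not.
print(meets_nontext_minimum("#cccccc", "#ffffff"))  # False
```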

So, some clarifications on non-text contrast for clarifications…[chuckle]…so, some clarifications on non-text contrast.

So, if you have text in your UI components and your graphics, that text remains text and is still held to the text contrast standards. So, if you have regular text inside of your button, the text itself would still need to meet the 4.5 to 1 contrast ratio if it’s normal text, and it can be 3 to 1 if it’s larger text. And then logos are still exempt; if there’s a logo in your graphic, it’s still exempt from the contrast minimums.

Another thing is that non-text contrast does not always require the component boundaries to meet the contrast minimums. It also does not always compare the contrast between the component itself or the graphic itself to the background. So, when we do text contrast we are always comparing the color of the text to the color of the background of that text. So that’s not always the case with non-text contrast and we’ll look at some examples in just a second.

Lastly, it does not require that different states of the same component need to have a sufficient contrast minimum met unless they are next to each other. And we’ll actually go a little bit more into detail into that a little bit later on.

So, when do I check for contrast? Generally, it’s…what is required to identify the component and its state. So, if there’s information about a control that’s needed to tell that it’s a component or that it’s, like, interactive, or, like, what the state of something is, that’s when we want to make sure that we’re checking for the contrast. And then to identify the part or parts of the graphic that are important to understand what that graphic is.

Let’s just take a look at some examples that are in the guideline itself.

So, I have this in reading view. This is the 1.4.11 non-text contrast.

Here is an example of how, as I was saying, the boundaries of interactive controls don’t always need to be specified. So, both of these examples would pass, assuming the colors meet the contrast minimums. The button here on the left doesn’t have a border; it’s just a text button. But here on the right side, we have the word “button” in the center of a button, and we have some styling for the border.

In this case, the left one would pass because the text “button,” which is all that’s needed to let the user know that this is a button, that this is actionable, meets the contrast minimum for text, 4.5 to 1. And then similarly, on the right side, even if the boundary around the word “button” fails to meet the 3 to 1 contrast ratio, the word “button” itself could pass because it’s 4.5 to 1, so, the entire button wouldn’t necessarily have to meet that ratio.

In this next example, they are a little apart. So, we have this name input up here at the top and then we have this name input here at the bottom. And these are both examples of how we’re not always comparing the color of the component with the background. Sometimes, we’re going to be comparing the component with itself, or some other part of the component with the background.

So in this top example, the “Name” label is in a dark color, the page background is white, and the input field has a silver-gray border around the box.

In this case, the thing we would use to check the contrast for this non-text content is going to be the boundary box or the border and check the contrast of that against the background of the page. We’re identifying what this input is.

However, this one down here is pretty much the same thing. The “Name” label is in white instead of black because we have a dark background for the page. But, basically, the box is probably the same silver-gray color, and if we compared that gray to the dark blue background, it probably wouldn’t meet the contrast minimum. However, in this case, we can use the inner background of the text input and compare that to the dark blue background, and it will meet the contrast minimum. So it’s not always the case that we’ll be comparing the component in its entirety with the background.

Here is another example where we’re comparing the contrast of elements within the component itself instead of worrying about the background. So, this is a kind of gray check mark, and the rest of the component’s background is a purple color. So, we’re comparing the parts of the checkbox itself: the check mark, which conveys the checked state, against the purple background. We’re doing a comparison of those two colors, not of the purple against the page background.

I’ll just look at…let’s see. We’ll look at two more examples just to kind of give you an idea of some of the principles around this and then we’ll move on to the next section which is going to be a lot of fun.

So, this group shows some radio buttons: four different radio buttons in different presentation styles. The first one is not selected; it’s just a circle with no fill inside of it. All the rest of them are selected. The second one, the first that’s selected, has a complete fill within the border that’s been specified for it.

The third one is more of a flat design where it’s completely filled but the fill is kind of like the same color as the border or it’s a fill without a border.

And then the last one has the border of the radio button and then there’s a fill inside of it to let you know it’s selected but it doesn’t fill up the entirety of the radio button, so, there’s a little bit of like an inner kind of border. There’s like a border and then there’s some spacing between the border and the selected area fill.

So in the second and third examples, what we would be doing for checking the contrast is we would check the contrast of the fill in the second example with the background color.

In the third one, the flat design, the only color that we have really is the fill so we compare the fill with the background color. But in the last one, what we would probably do is compare the fill color with the inner background color of the input. So, this gray fill with this little white border kind of in between the border and the fill.

This last example that we’re going to look at kind of twisted my brain a bit the first time I looked at it. So there’s four different types of star ratings presented. The first two will pass. And the last two will fail.

So, these two at the top, they pass. The bottom two fail.

The rationale behind each of these is pretty straightforward after you look at this a little bit and read it a little bit.

So, the first of these star rating examples has five possible stars, but the ranking that’s been given is two out of five stars. So, the first two stars are completely filled in; they are black. And then the remaining three stars that aren’t selected have just the outline. So the thing we’re comparing here is, “How do we distinguish between the stars that are in one state versus another?” We’re going to compare the contrast of the filled state versus the unfilled state for the non-selected stars. And that’s going to be a sufficient contrast ratio between this black and the inner background color.

For the second one, if we look at the same and we kind of compare the same thing we have the same situation, two out of five stars selected.

If we look at the fill of the two stars that are selected and compare it to the fill of the three that aren’t selected, that’s not going to meet that contrast minimum. That yellow against that white, that’s not going to work.

So then how does this pass? Well, if we look at the border, we can see that the selected stars have a thicker border than the remaining stars, so, that’s how it passes. So, it’s not just comparing one aspect of it. It’s looking at the entirety of it and seeing if there’s a way we can distinguish these two stars from the other three. In my personal opinion, this isn’t really super clear. I guess I’m just not as good at seeing that some things are bold and some things aren’t when it comes to graphics, but this is sufficient according to WCAG.

And if we look at these latter two examples, they both fail.

The first of them that fail is pretty similar to that last one that worked. So, we have the two stars filled in with yellow, and the remaining stars, unselected, have no fill. But in this case, we don’t have a thicker border that we can use to delineate. Now that I look at it a little closer and compare this one with the one above it, yes, this one definitely doesn’t have the thicker border on those two stars. So the only difference between this and that one is color, and the contrast isn’t sufficient to distinguish them. That said, one of the things we can use is brightness or luminance, which is defined as a 3 to 1 contrast ratio. So if we were using a different color, then we could potentially have this one work.

So we are using a darker color in the first example where we had the black. So the black against the white.

Okay, so, the position conflicts. So, we have some navigation links up here and we’re using positioning to convey that these are links; they are separate from the normal content in the page. But, what if we have a more subtle separation for the links at the top? So this “Learn more…” is on its own line, then we have a paragraph here, and then we have “Learn more…” here. It is positionally different from this content; “Learn more…” will always be on its own line at the end of a paragraph. Is that enough positional separation to say that nothing else is required? What if “Learn more…” is more separate? Is that going to be enough? Like, we put it further down? We put more space between the text that has a similar treatment and the “Learn more…”? Is that going to be enough? That’s a little ambiguous. We don’t really have guidance in the guidelines to let us know.

And continuing on, I’ve not really seen people talk too much about visited links and unvisited links. Is this a failure of use of color? The new contrast guidelines for non-text contrast says, “There’s not a new requirement that visited links contrast with the default color.” But does that mean that that wasn’t a requirement before? Who knows?

And then also, in that same paragraph, it says the guideline “doesn’t require change of color that differentiated between states of an individual component” to “meet the 3 to 1 contrast ratio when they don’t appear next to each other.”

But does it need to meet the contrast ratio when they do appear next to each other? And if so, how much is next to each other? What’s that differentiation?

Then for Disabled Inputs, so this one meets the contrast minimum. This top text input. The bottom one doesn’t meet it. But what if the enabled input is just above the contrast minimum, the disabled one is just below, does it fail use of color because they are so close? Just like we have text in a sentence that we need to make sure there’s a sufficient contrast between those if we don’t want to use an underline. Do we need to do something else similarly for enabled and disabled inputs?

And then similarly, we start getting into switches and we add a lot more states. It’s not just enabled or disabled. There’s visual difference. They are showing us whether this is on or off. And in the case of switches, the only one that has color is going to be enabled and on. Usually, disabled is going to be gray. And then off will desaturate the colors in the control. So, we have three states that all have different kinds of gray. What parts do we need to differentiate? Both parts of the switch are needed to identify the component is a switch. The big bits on the right, if it’s on. The big bits on the left, if it’s off.

And the on and off. So, on and off is designated by the position. So, do we need to have a sufficient contrast between enabled on and disabled on to differentiate between those two states? And what about enabled off and disabled off? That’s even worse, because, like I said, those are all going to be in gray.

And this is the last bit.

So we’re running a little bit late, but we should be able to get to this. So this sentence here at the top has a link in it, and this is the classic case that everybody is aware of. When we have a link in this situation, it can pass if it has a 3 to 1 contrast ratio between the link and the surrounding text: the linked text needs 4.5 to 1 with its background, the surrounding text needs 4.5 to 1 contrast with its background, and then on hover and focus we need to provide an underline or similar styling. We’re used to balancing these issues out; it’s a pretty well known issue. But, what if we have a situation like this sentence where it says, “But, what if the entire sentence is a link?”

And we have some color within it. Does this fail? It seems like it would technically be okay: color is being used, but there’s no surrounding text for the link; the entire thing is a link, and we’re just using multiple colors within that link. And then what if there’s no difference in color between the link and the surrounding text? Can it fail the use of color success criterion if there’s no actual use of color? I would say technically it passes, because there’s no color being used, so it can’t possibly fail that. But it just seems like a bad idea to do these kinds of things. I’m not going to shame designers or other people who have maybe tried to get away with some of these things, and I’m not going to tell you which of them are things designers have tried to get away with and which are just ideas that popped into my head. But that’s pretty much some of the conflicts that can come up. Sorry I had to rush through that last bit. I promise, I said it much more eloquently the first time.
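The color-only link rules just above, 4.5 to 1 for both the link and the surrounding text against the background plus 3 to 1 between the link and the text, can be bundled into one check. This is my own sketch with example colors, not anything from the deck:

```python
def _luminance(rgb):
    # WCAG 2.x relative luminance from an (r, g, b) tuple, 0-255 per channel
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def ratio(a, b):
    hi, lo = sorted((_luminance(a), _luminance(b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

def undecorated_link_passes(link, text, bg):
    """Color-only link inside a sentence: both the link and the surrounding
    text need 4.5:1 against the background, and 3:1 against each other."""
    return (ratio(link, bg) >= 4.5
            and ratio(text, bg) >= 4.5
            and ratio(link, text) >= 3.0)

# Default-ish blue (#0000EE) links in black body text on white fail the
# link-vs-text comparison (only about 2.2:1)
print(undecorated_link_passes((0, 0, 238), (0, 0, 0), (255, 255, 255)))  # False
```

This is one reason underlines, rather than color alone, remain the safest affordance for links in body text.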

So, thank you for listening to me rant. Thank you for attending my talk. Any questions?

>> MICHAEL BECK: I did see one pop up in the chat earlier on. Let me get back to it.

So, are buttons addressed differently in that they don’t need a border or background to help identify them as buttons or actionable items, whereas links should be styled to help identify them as links without relying on color alone?

>> LUIS GARCIA: Yes. What’s the question?

>> MICHAEL BECK: Are buttons addressed differently that they don’t need a border or background?

>> LUIS GARCIA: So, I don’t think it’s necessarily that buttons and links are treated differently that way. There’s this kind of, so, that was another example I was thinking about putting in here. But I didn’t feel too confident about it. Like, do we need to have some visual distinction between things that are links and things that are buttons? I would say that in practice they are treated the same way, that the designers are going to treat them like interactive elements. And developers, like, people know that developers are just going to go use links as buttons and use buttons as links. To some developers, they’re really like they are interactive things. And not tied to any specific functionality. So, I would like there could be, you know, within a design system for there to be consistency between things that are links and things that are buttons, not between the two, but, a user should be able to differentiate what’s a link and what’s a button just by looking at it. But,, right now the guideline probably would allow you to have differentiation or it wouldn’t really matter so you don’t need to make that differentiation.

>> MICHAEL BECK: All righty. And do we have any other questions? Oh, there we go. Non-text contrast. Does the hover state of buttons have to meet the contrast ratios or is the fact that it’s in the hover state enough?

>> LUIS GARCIA: I think that they said that they don’t really need to, so they would need to on their own. Actually the hover state. Probably the focus state would need to. There is a bit on it. It’s one of the newer guidelines. I’m not as intimately familiar with it. But, I would encourage you to take a look at the guideline itself, because I’m sure it mentions it. Let’s see. State. So non-text information within controls uses a change of hue alone to convey the state. That’s a little different. We can look through this but basically do some reading, search for the word, “state.” If a focus state relies on a change of color, changing from one color to another has at least a 3 to 1 ratio there will be some content in here. You’ll be able to find your answer in here. I haven’t internalized this guideline well enough to give you the answer.

&gt;&gt; MICHAEL BECK: Got ya. So yeah, read through the WCAG guideline. I just put the URL for the tweet with the deck in the chat for anybody that wants to take a look at it again.

>> LUIS GARCIA: Cool, awesome.

&gt;&gt; MICHAEL BECK: What is the best combination for link styling in a content management system that allows people to use bold and underline while adding content to the page? Would you have to use color, bold, and/or underline to avoid conflicts?

>> LUIS GARCIA: Typically, I usually use underlines. So, yeah you can have conflicts with other text that’s underlined but since the beginning of the web, underline has been the affordance given to identify links. I would just kind of, like, within the CMS it gets a little bit weird. Because it’s like you don’t necessarily want to treat every single link and have underlines for every single link. That said if you did have underlines on every single link, some people just think it looks a little bit ugly. If you look at mine, I have these navigation links down here they are underlined. I don’t think they look too bad and people know that links are underlined by default the browser is going to underline links. And, you know, often the guidance I give to teams when there’s something that needs to be underlined I’m saying, I don’t tell them to add an underline to it. I tell them to stop removing the underline from the link that the browser puts on here.

If you get to the case, like, let me show this because I know from memory this is a thing. So, if you go to a footer, oops, it’s all the way down here. If you have something like this where you have multiple links right next to each other, the underlines are going to give you affordance of where that link begins and ends. You can’t just say it’s going to be the word because, “Accessibility,” is one word. The link is one word, “User Agreement,” that’s a two word link, “Privacy,” “Cookies,” these are all going to, like, the underline is what everybody knows as being a link.

>> MICHAEL BECK: Yeah. A follow-up question to the button answer. I know you said read the guidelines but Michelle would like to ask, “If there’s not a border and no background defining the button area, wouldn’t it look like…”

>> LUIS GARCIA: It will just look like text that’s on the page, yeah. So, it will look like what some people would say, “Oh, that’s a link!” It’s like, “Well, no, just because it’s a text doesn’t mean it’s a link.” Because there could be within a sentence, you might have a flyout for that text to give you a little bit more information about that term. So, if you go to Wikipedia, they now have on there the links they have the little flyout thing that comes up. That doesn’t necessarily mean if it was a toggle kind of functionality, then that would be a button, it wouldn’t be a link.

>> MICHAEL BECK: Okay. All righty. Our next presentation will be on, what is that first Wednesday in April?

>> LUIS GARCIA: It’s going to be the 3rd is the first Wednesday in April.

>> MICHAEL BECK: Yeah, 3rd, first Wednesday in April at 11 a.m. We’ll have Shell Little, the mobile Accessibility Lead at Wells Fargo. The title of her talk is, “Dark Adventures in Mobile Accessibility.” She’s going to talk about why the mobile space is so difficult to work in and give concrete examples of things that would technically pass in WCAG but are really bad experience for users with various disabilities with a focus on cognitive disabilities on mobile.

So thank you, all, again for joining. And thank you to Luis. And I hope to see some of you at CSUN. Stop by the Tenon booth. Karl and I will be there as well as some of the other Tenon family and I’ll see you next month. Thanks again.

About Luis Garcia

Luis Garcia works at eBay as a Sr. Product Manager for Accessibility. In his spare time, Luis talks to tech folk about accessibility at Meetups, Accessibility Camps, and on various online forums. In addition to pro bono and volunteer accessibility consulting work, he also participates in the W3C’s Silver Community Group, working on the next major revision to the Accessibility Guidelines.

PDF/UA and WCAG 2.1: Always Complementary


[Intro music]

&gt;&gt; ANNOUNCER: Welcome to technica11y, a webinar series dedicated to exploring the technical challenges of making the web accessible. This month, our presenter is a world-renowned expert in PDF accessibility, Adam Spencer. And now, our host and moderator, Michael Beck.

>> MICHAEL BECK: Hello everyone, and welcome to this February edition of technica11y. If we have any new folks with us, I’m Michael Beck, the Operations Manager at Tenon, stepping in for Karl Groves. We are very excited to have Adam Spencer of AbleDocs, one of the world’s foremost experts in PDF accessibility. Hello, Adam.

>> ADAM SPENCER: Hey, Michael. Thanks so much for having me.

>> MICHAEL BECK: Of course! Adam is going to address one of the greatest misconceptions in the world of document accessibility, namely the statement that, “My documents need to be WCAG compliant.” He’ll explore the similarities and differences in language that need to be understood when making PDF documents accessible and compliant for PDF/UA. Without any further ado, take it away, Adam.

&gt;&gt; ADAM SPENCER: Super. So, thanks again. And yes, some of you may know me from my previous life. I was head of accessibility services at Accessibil-IT for the last nine years, and in November, I stepped away from that position and moved over to taking over a new firm that was actually an old firm in Europe. So, we’ll be making a much bigger announcement about that in the coming days and weeks. So, forgive me if our slides aren’t as polished as they could be. Obviously, I’m sure you can appreciate that there’s a whole bunch of things to do when launching a new firm, but it’s business as usual. One of the things that I have always run up against when speaking about PDF accessibility is, “Why do I need to do this? Why do I care about PDF? What the hell is this PDF/UA thing and why can’t I make them WCAG compliant?” I have spent a lot of time over the last decade making sure people understand the similarities as well as the differences, and why it’s important to understand what you need to be asking for when you’re requesting your downloadable content to be accessible.

We’ll walk through this and I always have a challenge with webinars because it always feels so one sided. But, I assure you we will definitely have time for questions and comments at the end. And if you have any that you’d like to write into the chat panel, by all means we can work through them afterwards.

So, I think where we need to start is: let’s set the record straight. PDF/UA and WCAG 2.1 are not mutually exclusive. They are not mutually dependent, but they are always complementary. So that always becomes a founding place for us. Asking for something to be accessible in a PDF context does not mean that it is not WCAG compliant. What it does mean is that it’s compliant for a specific format, and that’s PDF.

So, why are we still talking about PDF? Well, this is always a great question, especially when you’re talking to a room full of web developers. Contrary to popular belief, PDF is still very, very relevant. PDF technologies predate the internet; we’ve been working on PDFs since the ‘80s, and since then it has become really the document format standard and is still used extensively today. There are over a billion PDFs uploaded to the internet every year; this is validated by Adobe on an annual basis. And that’s where you’ve got to understand that you can’t just be a web manager or content manager and say nothing is going to be PDF. And I will say this: I am not a PDF exclusivist, and I may be coining a new phrase there. One of the things we’re looking at is how people access content, and that’s often forgotten. I think we get into a very rigid mindset of, “I’m good at this and this is what I’m going to do.” And we’ve obviously had that kind of struggle between the web world and the document world over the last five or six years, about whether we have to eradicate all of our PDF documents. When the federal government in Canada was sued for discrimination against an individual with a print disability, their reaction was to remove all of that content. But that content is still very valuable. I always go back to a 300 page report. Can we make a 300 page report in HTML? Yes. Should we make a 300 page report in HTML? The answer is no; nobody’s going to read that type of content that way. That’s why I’m not going to sing “Kumbaya” and hold everybody’s hand about different formats. But it’s important to understand the use of different technologies for different applications and different methods of communicating. If you’ve seen me speak before, I make the joke that not everything should be a PDF, a web page, or a tweet. And that always kills in Washington D.C. or on the coasts, particularly with what’s going on in the U.S. lately.
So that’s why, you know, we can’t ignore the document problem or the document reality, I should say.

PDFs have been built upon a platform of trust in the content, and that's an important concept to understand. When you see a PDF, we can validate whether it's the original content or not. That's why there are so many subsets to the international standard for PDF technology: things like archiving, things like accessibility, things like engineering drawings. They're able to be trusted because of the way that the file is created. PDF is a very technical standard. It's not something that you just pick up and run with. And I think that's been one of the biggest challenges that people have faced when it comes to document accessibility. Everyone's looked for an easy button, and we're all guilty of it. But the reality is, it is a very technical piece of technology. And that's okay. But you need to understand what those differences are.

So, when we're looking at organizations who are generating PDF content, the reasons they're doing it are many. Particularly in Europe, documents have to be saved as PDF/A, which is the archive standard, in order to be posted online, because of that element of trust. We're able to see that with documents. And one of the cool things about PDF is that all of these subsets, all these substandards, PDF/A, PDF/X, PDF/UA, are complementary. They're pieces that may overlap and may not be fully compliant with each other, but they don't infringe on the ability to either meet those standards or still access the content.

And one big thing that I always reiterate: PDF is not owned by Adobe anymore. It is not their product. They are obviously the largest contributor to its development, but it has been an ISO standard; starting in 2004, there was an effort from Adobe and the ISO to make PDF technology an ISO standard. The International Organization for Standardization is really built upon the global market understanding what it is we're talking about, so that we have the same language: a PDF that's generated in the United States can be accessed by someone in Australia, Japan, or India; it doesn't matter. This is what we call a PDF. Unfortunately, there are a lot of organizations that have bastardized it in how they open it in a browser, let's say, and I won't name any names. I'm sure you've either had experience with it or come across challenges when accessing documents loaded in browsers, very popular ones at that. But the standard is a living document. It's refreshed; we just published PDF 2.0 in the spring. There's a group of people who get together and work through the year to make sure that we're meeting the needs of the document format, as well as of clients who come across new challenges, or new technologies that we can include in PDF. So that's a really big thing to remember.

Everything related to PDF is based on the ISO 32000-1 standard. There are going to be a few numbers that I talk about today, but don't worry. If it's a PDF and it's a compliant PDF, it conforms to ISO 32000; everything we do after that is a subset of that standard.

What is PDF/UA? UA stands for universal accessibility. It was a project started as early as 2004, when the reality was that PDF content was inherently not designed to be accessible. It was designed to be printed and transferred from one machine to another, even pre-dating the internet, so that one print shop could see what a designer had built and have it print off perfectly the next time. So when UA started being contemplated by Adobe and AIIM, which is a U.S. group that works in conjunction with the PDF Association, the goal was to increase the accessibility of content within a PDF context.

PDF/UA is also an ISO standard, it’s 14289-1. We’re currently working on Dash 2 and we hope to have that ratified later this year. It is not exciting to read by any stretch of the imagination. If you’re ever looking for an easy way to fall asleep on a long flight, I highly encourage you to take ISO 32000 and 14289, and if you can make it past meal service, you may want to join the committee. We would love to have you because it’s a very technical document.

And it's a normative technical standard. What does normative mean? It means that we're providing guidance, typically for developers, to better understand what it is they need to know when creating an accessible PDF. What PDF/UA is not is a guide like WCAG itself. It's not really telling you how to do it; it's telling you what you need to do versus what you can't do, and I think that's a very important distinction. It's a much more technical, yes-or-no, pass-or-fail kind of standard than WCAG. And this isn't a criticism of WCAG; WCAG is more of a, "If you're running into this, you should be looking at that," whereas PDF/UA is, "If you have this, you must do this, or you should do that, or you may not do this," and it's very technical.

It also provides consistent guidance for achieving accessibility within a PDF context, and there are three pieces to the UA standard that are important. We're not just focused on the document itself. What we're focused on is how a file is going to be created and accessed, how a piece of adaptive technology is going to access that content, and how a viewer like Adobe Reader, Adobe Acrobat, Nitro Reader, or even a web browser is going to present that content; it's got to be presented in a consistent way. Again, we get back to that consistent experience from a user's point of view as well as from the technology's point of view. We want as many applications as possible to be able to understand the file, interpret the file, access the content within the file, and provide it in a meaningful way, regardless of how they're accessing that document. And as I said earlier, it's an additive spec for ISO 32000. So, again, for a PDF/UA file to conform, it must conform to 32000. You can't have one without the other.

So, why do we need our own spec? Why can't we just leverage WCAG and say, "Bob's your uncle," and use the 32 pieces of guidance that are provided by the W3C for PDF? First of all, PDF is not web content. I'm never obligated to post a PDF to a web page. I can create it on my PC, put it onto a USB key, hand it off to Michael, and Michael can read the file. It's a very different mentality in the way that the document is created. The other thing to consider is that people don't really author web pages. When you're writing content, you're typically writing it in a different application, whether that be Word or Google Docs or anything like that, and posting it into a content management system. PDF is different. We leverage the design on the page, whether it's in Word, Google Docs, or InDesign, and then we create, or we distill, a PDF file. So that's one thing that's important: the way that PDF content is created is different from the way web content is created.

And those formats are not created equally. That's where we've got to look at the differentiations between WCAG and PDF. There are different capabilities within each file type, within each format, and those pieces need to be respected and understood in their given context. When we start to ignore those differences and try to harmonize everything, the reality is that's not how people interact with content. You have to remember that, and I think that's gotten lost over the last five years. Having seen the transition of who's making content accessible and who isn't, it's a very interesting life span of content. And I cringe every time I hear, "Well, we don't need our documents anymore." Not every organization can say that. You obviously need to cull content, but rewriting content that already exists is a costly process compared to making that existing content accessible.

So the people who were working around PDF recognized that those unique capabilities needed to be addressed in order to ensure equal access to content. Again, documents do things differently than web content. And the people we work with now obviously want to make sure that we're providing the best experience for everyone, regardless of the type of AT you may be accessing the file with. There's a whole bunch of subprojects going on on a regular basis, trying to figure out how to push the limits of what we can do within a document context.

And, "Authors, authors, authors." The reality is, it became very easy for people to File > Save as PDF, and again, there are billions of these things floating around the internet and sitting on desktops, and we want to make sure that authors understand that they can make their documents fully accessible and fully compliant. One of the challenges is that the guidance around that has been difficult. You can't learn PDF accessibility in a day, but you can learn it in a reasonable amount of time if you're prepared to be focused on making that happen, and I think that's one of the things that gets very quickly glossed over. It is hard. That's why services like AbleDocs exist. But understanding how authors can generate content in different applications and still publish it in an accessible way is really an important piece to understand.

So, going head to head. We are friends, not frenemies. I have been stressing this for at least eight years. We are not suggesting that one format should win or one format should die. I apologize, I'm in a departure lounge because my flight was late, so forgive me for the background noise.

So, this comment of "Friends, not Frenemies" is one we've got to embrace: how content is to be authored, how it's distributed, and how it's going to be accessed. I think too often internal design teams, internal web teams, and compliance officers don't understand the relationship between the two formats, so they pick one or the other because of a single directive, and that's just not a sustainable approach. We have to recognize that there is a much easier way forward, and bluntly, a much more cost-effective way forward. We were working with a client a few years ago that was planning on removing all of their PDF content and hiring people to reauthor all of that content into HTML because they needed it on their web page, and it was going to cost them an absolute fortune, whereas we could remediate the PDF content already there, make it accessible, make it search-engine friendly, make it catalogable, for a fraction of the cost. It boggled my mind that someone would even consider that. But again, the internal advice they were receiving from constituents was, "We like web pages because we can access them on our phones." Sure. But you can also access a PDF on your phone.

So, this is a very, very important piece. PDF/UA and WCAG 2.1 have no conflicts. They're complementary. Making your file WCAG compliant does not mean it isn't PDF/UA compliant, or vice versa. There are different considerations that PDF/UA looks at and that WCAG 2.1 looks at. It's understanding those differences that helps make a more accessible document, and that's something important to keep in mind. We'll get into what those details are in just a minute.

But keep in mind: PDF/UA doesn’t address everything, and I think this is a really important piece. We don’t address media. We don’t address actions. We don’t address scripting. We don’t address design. And we don’t address content. One of the things that the authors of PDF/UA are very conscious of is we don’t want to tell someone what we can or cannot do with their own content. What we do is provide guidance around how people should be making that content accessible. And when it is a content consideration, we actually recommend referencing WCAG. If you don’t know what to do for a video for accessibility, as an example, check what you should be doing in WCAG’s guidance around media, particularly captioning and the rest. So, those pieces — that’s again why we’ve always found it quite interesting there was such a reluctance to embrace PDF/UA by the W3C, historically, as well as governing bodies or people in the industry. We’re not saying that WCAG isn’t valuable in a PDF context, but there’s so many more things that need to be discussed above and beyond the 32 guidance techniques and that’s really an important thing. And I can tell you that there’s an initiative right now, that is happening as we speak, to unite the two and have WCAG directly reference PDF/UA, which is really exciting, and I would suggest, it’s not just exciting for those who care about PDF, but exciting for all content authors because now there will be an easier way to better understand how to make all content accessible in a digital context or I should say more pieces of content accessible.

And I think that’s a really great thing. There was a meeting that was held or a summit that was held in Edinburgh, Scotland at the beginning of December, where PDF experts around the world got together and discussed how we can build canonical references for this is how you make PDF accessible or this is how you make this piece of content accessible, and we’re going to be publishing that in the coming months, which is really exciting because it allows people to understand, “I’ve come across this… what do I do with it?” And that’s a really important piece. So, although the language may be a little different and although PDF/UA doesn’t address everything, it allows us to have a common bond between the different format types and the different types of content that people are trying to put in. We laugh in our internal teams about the pieces of content that we receive and how authors are always trying to push the limits of what they can do, whether that’s from a graphic or design standpoint, embedding new widgets into the format. It allows us to challenge ourselves and say, “Okay, how can we make that accessible?” There are a couple of us that were having a conversation in a meeting a few years ago about how we make 3-D CAD rendering live and how we also make it accessible. And the cool thing is, it’s possible. It’s not easy, but it’s possible. And I like seeing what else we can do and quite frankly, that’s not something that WCAG touches on. So, maybe we’ll be able to bring something to the forefront when that gets hashed out.

So, the details. Time-based media alternatives, tests, sensory content, and CAPTCHAs: UA does not address these items, and we always recommend referencing WCAG. We can't have those pieces exist within PDF/UA; it just isn't how the format works. So we don't have that.

Audio and video content: we do include syntactical requirements. So, we tell you how to tag something, but we don't tell you what to do with the actual content. That's something that WCAG does much better than UA, and that is not going to change. We will never address what that content's accessibility should be. A lot of that has to do with the fact that we don't have audio/video experts or people with audio/visual disabilities within the committee to help form that guidance, and we didn't feel it was necessary to reinvent the wheel. If it's already being covered in WCAG, let WCAG continue to manage it.

Video captions: we have no guidance around those. And JavaScript: this is interesting, because JavaScript is used heavily within a PDF context. However, we do not have a JavaScript accessibility standard included, and that's a really important part. Because we work so much with it and it can reference external content, we don't provide guidance around JavaScript. So for any of the JavaScript accessibility requirements, we do suggest referencing WCAG.

So, JavaScript and media content: again, PDF/UA is not a JavaScript or multimedia standard. And that’s not to say that WCAG is either, but there’s much better guidance around how to make that type of content accessible within WCAG rather than within PDF.

Control devices: we are control-device independent. The way that content is accessed within a PDF is very much reactionary, not pushing focus from the page to the user. It's all about interaction directed to the page rather than from the page, and that's a big differentiation. So, we don't cover any of that within our spec.

And design considerations: this is really interesting, and I will say that this is the area that is most challenged when content is made accessible. There are two schools of thought here. When a file is presented to be made accessible by many remediators in the world, our job is to follow the PDF/UA specification, and often, unless otherwise specified by the client, that will be it. So, for example, if there's low color contrast, or the type of content put in is an inaccessible piece of content, we won't come back and indicate to the client that they should change it. Now, what I will say is that varies country to country and organization to organization. However, it's really important to understand where these changes can be made. PDF isn't like HTML, where we can swap out a CSS stylesheet and change the coloring throughout the document. That needs to be done at source. Nobody authors a PDF document. You author a Word file or an InDesign file or a PowerPoint deck. And when you're looking at those pieces, you realize that the authoring environment, where that content is being created, is where the change has to be made. Unless we're provided with the source file, which is often not the case, it's very difficult to make those changes within the PDF. There are tools that make it easier, there's no question about that. But we can rely on different semantic equivalences.

For example, we can change the way that the page is presented based on certain print disabilities or user requirements. But, that has to be done in the viewer, not on the page. And this is a philosophical debate that goes back and forth. And I was actually having dinner with someone the other night and they said, “Yeah, we received a document back and the color contrast was awful.” And I said, “Great, you should have spoken to your authoring team and your design group and educated them on how to make a more accessible design.” And he said, “You know, that’s a really interesting point because we expect that to be done later in the process, whereas in the document world, we expect it to be done much further upstream.”

So, I think this becomes one of the interesting debates, and I don’t know that we will ever find true consensus on how that comes through, but that’s always the conversation. And I will say personally, we’re happy to make those alerts happen, but it becomes a much bigger challenge in a PDF context than an HTML context for those visual appearances.

So, what we look at is, for example, if there's a bar graph with very low color contrast or the sizing of the text is poor, we have been known to embed tabular content behind the image, which can be a much more accessible option than relying on low color contrast and grainy imagery. Although a table can sometimes pose challenges for some, it's actually a really easy way to access the content if the table is tagged correctly. And I think that's one of the things that we've seen over the years.

Just sticking on content: PDF/UA worked really hard to make sure that even complex content could be made accessible and navigated easily. There was a period of time where people providing training were saying simplify your content, make it easier, don't have complex tables. And I will push back on that quite strongly, because it's not the case that content authors can do that. When you consider things like financial tables in annual reports, you can't simplify that table. In fact, it would be illegal to simplify that table because of reporting requirements. But people were providing that guidance because they didn't know how to make the table itself accessible, or authoring tools weren't able to generate completely accessible structures. So, you're all of a sudden telling an author how to create content without understanding the way that content could be presented to a user if some additional steps were done to the document.
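To make the mechanics of a "complex but accessible" table concrete: in tagged PDF (much like HTML's `headers` attribute), each header cell gets an ID and each data cell lists the IDs of the headers that apply to it. The sketch below is illustrative only, not from the talk; it models the tag tree as a plain dict, whereas a real check would read the structure tree from a PDF library.

```python
# Sketch: verify that every data cell (TD) in a tagged table references
# only header cells (TH) that actually exist. A dangling reference means
# AT can't announce the right headers for that cell.

def check_table_headers(table):
    """Return the list of header IDs referenced by TDs but never defined."""
    th_ids = {cell["id"] for cell in table["cells"] if cell["tag"] == "TH"}
    dangling = []
    for cell in table["cells"]:
        if cell["tag"] == "TD":
            for ref in cell.get("headers", []):
                if ref not in th_ids:
                    dangling.append(ref)
    return dangling

# Hypothetical financial table: one TD points at a header that was never tagged.
table = {"cells": [
    {"tag": "TH", "id": "q1"},
    {"tag": "TH", "id": "revenue"},
    {"tag": "TD", "headers": ["q1", "revenue"]},
    {"tag": "TD", "headers": ["q2"]},  # "q2" was never defined
]}
print(check_table_headers(table))  # -> ['q2']
```

The point is that complexity lives in the tagging, not in the visible layout: the table can stay exactly as the reporting requirements demand, as long as every cell relationship is made explicit.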

That one always rubs me the wrong way because I always felt that it’s important to make sure that people understood as much as possible about how compliant content can be, even complex content, rather than ignoring all of those pieces because it’s hard or they didn’t know how to do it. So, that’s just something that I wanted to add in. And I’m always happy to have a conversation about design considerations any day of the week.

Dynamic XFA: XFA is an XML-based subformat that has been used for almost 20 years now and allows for dynamic forms to be created, things that are responsive to a user's input. And they can be very, very accessible. However, they are prohibited in PDF/UA because we can't control them the same way in a PDF context. That being said, they are not prohibited by WCAG. You can make an XFA PDF accessible in a WCAG compliance framework, but you can't do it in a PDF/UA context because of the way that the guidelines are set forward.

So, when do we refer to UA? Forgive me for all of this text. I’m not going to read this out. I’ll make it available. I don’t believe in reading out slides. [Chuckling]. So one of the things we really look for is interoperability, and it’s that document of trust. We want to make sure that a PDF is always reliable in the way that it renders, the way that it prints, the way that it’s viewed, the way that AT interacts with that content. And WCAG does not normatively address interoperability. It is always going to be our focus to make sure that regardless of what application is accessing the content within the page, we’re putting normative guidance around how that should be done. And then it’s up to the developers to make sure that their software can do that.

Again, it is all possible. However, some organizations don’t want to put the resourcing behind it, making sure that that’s done. And I think that really does a disservice to users because the number of times that a user will open a file and it’s still not accessible because the author in the group doesn’t know what to do with it, is really missing out on that interoperability factor. Because it has been around for so long, PDF remains that go to for accessing content or distributing content because of how it shows up. I know that you’re going to be able to see the exact same thing that I do or you’re going to be able to access that content in the same way that I’m going to, and I think that’s a really key aspect of why PDF/UA is so great and why PDF as a format is so great and really becomes a corner stone of what we’re trying to accomplish.

Fonts: fonts are a fascinating piece of PDF accessibility, not just from a design standpoint but from an embedding standpoint. Section 7.21 is so critical for the accurate conveyance of the author's intent with legibility. What we are able to do is smoothly render that content at any size, which is so important. And yes, there are some fonts that are still not available with that rendering, particularly in certain dialects. We have some of that challenge with Native languages, because they may not be TrueType, and that may cause a legibility problem going forward. The other cool thing is that there's some AT that can interact with embedded fonts and convert those embedded fonts. People who may have dyslexia can then swap in a new font that would be easier to read and access the content in a way that's much more approachable for them than what the author intended. So, even if an author put a serif font in place, which can be more difficult for some to read, a user can swap that out for a sans serif font and make it easier to access that content. I think that's a great opportunity for content authors to consider general accessibility, or a broader accessibility, when you've got the ability to allow the user to customize it in a way that works for them.

Headings as nav: most files have headings. And it’s really the only feature that allows navigation within the document. The way that we navigate content in UA is really strictly based around heading structure. So, we’re building that hierarchical order of content, and they’ve got to be logically ordered. And what that means is you can’t go from heading 2 to heading 5. The context doesn’t make sense for a user. So we’re looking for an H1 to H2, 3, 4, 5, 6, in that order. And the cool thing in PDF/UA 2 is we can do it in a nesting way that doesn’t limit the number of headings. We used to be limited to six. Now, we can effectively go unlimited with nesting, and that’s a really cool thing that allows the user to pull up their heading levels and say, “Where do I want to go in this document?”
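The "no skipping levels" rule described above is mechanical enough to check programmatically. As a rough sketch (my own illustration, not a PDF/UA reference implementation), given the sequence of heading levels in document order, any heading that is more than one level deeper than the one before it is an illegal jump:

```python
# Sketch: flag heading-level jumps that PDF/UA forbids (e.g. an H2
# followed directly by an H5). Moving back up to any shallower level
# is fine; only skipping downward breaks the hierarchy.

def heading_skips(levels):
    """Return (position, from_level, to_level) for each illegal jump."""
    skips = []
    for i in range(1, len(levels)):
        if levels[i] > levels[i - 1] + 1:  # deeper by more than one step
            skips.append((i, levels[i - 1], levels[i]))
    return skips

print(heading_skips([1, 2, 3, 2, 3, 4]))  # well-formed -> []
print(heading_skips([1, 2, 5]))           # H2 then H5  -> [(2, 2, 5)]
```

A real checker would walk the PDF structure tree and extract those levels from the H1 to H6 (or nested Hn) tags, but the ordering rule it enforces is exactly this one.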

Some authors will not use a table of contents. Maybe the document is only 20 pages and they feel they don't need that kind of navigation. But a user of AT does. So UA forces the document to use that heading structure, allowing an individual to navigate the file; WCAG doesn't mandate that. And again, on the web you can navigate using buttons and links and all sorts of other anchored referencing. PDF doesn't have that, so we've leveraged the heading structure for that type of nav.

Article threads: think of a magazine article. Instead of having to go from page 1 to page 25 and read all of the content in between, one of the things that PDF/UA allows is for articles to follow the logical structure. So you can move through an entire story as a combined piece rather than having to navigate "story continued at top of page 7." We can tag the file to follow that: rather than just the tag structure, we're following the logical structure, the semantic structure, rather than having to find that content somewhere else. So again, this is something really unique to PDF/UA, and it's really based around the print world, and I think you've got to remember that. That's where the foundations of PDF are. Different pieces of content may show up in a print layout order, but we're able to present them in a logical order within a PDF/UA context.

One big rule, and please take this away: a violation of PDF/UA should be considered a violation of WCAG 2.1 or 2.0. The only exceptions are sections 7.4.2, 7.4.3, 7.4.4, and 7.12; none of those are required by WCAG 2.1. Those are the subsections I went through earlier. If your file is not compliant in the PDF context, it is impossible for it to be WCAG 2.1 compliant. Those are the pieces that you've got to recognize. I have in parentheses the XMP metadata flag, and what I will say about that is it's a self-reported piece of metadata, added by the author of a PDF/UA compliant file, that identifies the file as being UA compliant. That is not validated by anyone. It's self-reported. So, an author can put that in and say, "I'm compliant." However, that doesn't necessarily guarantee it. It's not like a PDF/A file format requirement. So, that's one thing to keep in mind. Again, I'm happy to go into greater detail about what the UA XMP metadata flag is.
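For the curious, the flag Adam mentions lives in the document's XMP metadata as a `pdfuaid:part` property (namespace `http://www.aiim.org/pdfua/ns/id/`). The sketch below, using only the standard library and a hypothetical minimal XMP packet, shows what "detecting the claim" amounts to; crucially, as he says, finding the flag proves nothing about actual conformance.

```python
# Sketch: read the self-reported PDF/UA identification flag out of an
# XMP metadata packet. The flag is a claim by the author, not a
# validated statement of conformance.
import xml.etree.ElementTree as ET

PDFUAID_NS = "http://www.aiim.org/pdfua/ns/id/"

# Minimal hypothetical XMP packet claiming PDF/UA-1 conformance.
xmp = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:pdfuaid="http://www.aiim.org/pdfua/ns/id/">
      <pdfuaid:part>1</pdfuaid:part>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>"""

def claimed_pdfua_part(xmp_xml):
    """Return the self-reported PDF/UA part number as a string, or None."""
    root = ET.fromstring(xmp_xml)
    node = root.find(".//{%s}part" % PDFUAID_NS)
    return node.text if node is not None else None

print(claimed_pdfua_part(xmp))  # -> 1
```

In a real workflow you would extract the XMP stream from the PDF first (any PDF library can do this) and then, separately, run an actual conformance checker, because the flag and the file's real state can disagree.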

The nitty gritty: if you're looking for a detailed framework, I wasn't going to go through the table that compares WCAG to PDF/UA side by side and shows where those subsections can be found in each of the specifications; I thought it might be sleep inducing. But it is available [see https://www.aiim.org/Global/AIIM_Widgets/Community_Widgets/Achieving_WCAG], and it was done, I want to say, three or four years ago now by AIIM. You can see how to achieve WCAG following the guidance of UA versus following the guidance of WCAG. A few pieces have been updated since it was authored, but it's a great resource to say, "Can I do this? Yes. How do I apply it? Do I apply it using the compliance guidance provided by the ISO?" I thought that could be helpful.

And that’s me. And we’ve got 15 minutes for questions and comments and dialogue, if anyone would like that.

>> MICHAEL BECK: All right. Thank you so much, Adam. First question we have is from Elizabeth. She wants to know if you have an example of a complex table and “fix” for it that goes beyond just header IDs.

>> ADAM SPENCER: I do. And Elizabeth, if you want to send me an email, I’ll be happy to send you one. Part of the challenge is how you add those pieces in, and again, I’d be happy to walk you through that. I will say, and I try not to promote or discourage anyone from using any piece of software, all of the major PDF accessibility tools can do this. Some do it easier than others. [Chuckling]. So, that’s one of the challenges that you may run up against. And again, happy to have that conversation.

>> MICHAEL BECK: Perfect. And then Isabella would like to know how to make maps accessible in PDF, like a visual map on a brochure of office locations.

>> ADAM SPENCER: Good question. We do a lot of maps. I think over time we’ve probably done about maybe half a million maps in our time…

>> MICHAEL BECK: Oh, my.

>> ADAM SPENCER: Yeah, it's a lot. We have a client that is very map happy. There are really two approaches. We worked with some users at the CNIB a number of years ago to try to best tag a map, and there were a few approaches that we relied on. One is obviously tagging the map as an image, because it is an image. By PDF rules, we have to tag semantically: an image is an image; we don't have a map tag. The alt text approach, writing a description of that map, is a bit of a fine art. You have to give someone context for where things are, and that does take a little bit of understanding of the map itself: where things are, how they could be geo-located, how they can be referenced. So, we look at the bounding area. If you've got a large map, we're going to show the context. Let's just use the state of New York. If you've got the state of New York and the offices are located in Buffalo and in New York City, we would start at the macro level, indicating that this is an image of the map of the state of New York and indicating that there's an office in Buffalo. We would look to add the actual address if we could, as well as the address of the office in New York, depending on what was being highlighted. We have also been known to add a much more detailed text description behind the image of the map to provide greater detail, particularly for schematic drawings on top of maps when you're looking at planning documents. They just need more explanation for the user, and putting that all in alt text is a really tough usability piece, obviously, when you can't navigate or pause or rewind content in alt text. That's why we like to use the text behind the map. Also, always make sure that you're identifying highlighted pieces; things that are identified within a legend, you're identifying those on the map as well. Always tag the legend, always tag the title, give people as much context as possible, and if the map is very detailed, provide text behind the scenes and tag it.
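The layered approach Adam describes (a short macro-level alt text on the image, plus a longer structured description behind it) can be sketched as two small text builders. This is purely illustrative: the function names, offices, and addresses are hypothetical, and in a real document the second string would be placed as actual text content behind the map image, not generated at read time.

```python
# Sketch: compose a macro-level alt text plus a detailed long
# description for a map image, following the "context first,
# detail behind the image" pattern.

def map_alt_text(region, offices):
    """Short alt text: what the map is and where the offices are."""
    cities = ", ".join(o["city"] for o in offices)
    return f"Map of {region} showing offices in {cities}."

def map_long_description(region, offices):
    """Longer description to place as text behind the map image."""
    lines = [f"Offices shown on the map of {region}:"]
    for o in offices:
        lines.append(f"- {o['city']}: {o['address']}")
    return "\n".join(lines)

# Hypothetical example data matching the New York scenario above.
offices = [
    {"city": "Buffalo", "address": "123 Main St"},
    {"city": "New York City", "address": "456 Broadway"},
]
print(map_alt_text("New York State", offices))
print(map_long_description("New York State", offices))
```

The split matters for usability: the alt text stays short enough to hear in one pass, while the detailed list lives in navigable text the user can move through, pause in, and reread.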

>> MICHAEL BECK: Sarah asks: Is Adobe Acrobat really the only tool for editing PDFs?

>> ADAM SPENCER: Absolutely not, no. There are three really big ones. There's Adobe Acrobat. There's an organization called axes4; they have a tool that does a lot of advanced tagging work called QuickFix. You can also do PDF accessibility in a tool from CommonLook; they have a tagging add-in that works within Acrobat. Those would be the top three tools. There are other subsets of those tools that make it easier, but those are really the big ones. One thing to know is that you cannot generate a fully compliant PDF file from source. It's impossible. Microsoft can't do it. Adobe can't do it. There are things that can get you more accessible, but you have to do a finishing pass to make sure that your file is fully accessible and compliant. Always look into those. Sarah, if you want a full list, I'm happy to have that conversation with you.

>> MICHAEL BECK: Perfect. And almost like a companion question. What is your favorite tool for checking and verifying PDF accessibility?

>> ADAM SPENCER: I have my own! [Laughter] A lot of that comes down to the way we generate content and how many documents we make accessible; we need more advanced tools to deal with that. There have been tools released, but our tool is a proprietary one. Previously, we relied on QuickFix and the tools from axes4. I have a personal relationship with the guys there, but it’s really because of the tool.

>> MICHAEL BECK: Okay. What about ebooks? How difficult is it to make them accessible?

>> ADAM SPENCER: It is not. Ebooks are actually really easy to make accessible. The question is, “What format do you want to keep it in?” Are you looking at keeping it in PDF or EPUB? And I don’t want to be too commercial about this, but we have a tool that goes from an accessible PDF to fully accessible HTML and EPUB in about a click. So you have to decide how you want your content to be accessed and read. That’s the piece to really look at.

>> MICHAEL BECK: Is that a proprietary tool of your own?

>> ADAM SPENCER: It’s a service that we provide.

>> MICHAEL BECK: A service that you provide, okay.

>> ADAM SPENCER: And actually, we haven’t launched it publicly, but if you get in touch, we can run a demo and show you.

>> MICHAEL BECK: Okay. So Isabella, go ahead and get in touch with Adam, [email protected] You can see it on the screen. He can have that conversation with you. Do we have any other questions? Nope. Oh, yes, yes, we do. Elizabeth again. Is there a resource that goes into detail about the structure tags, like what can be a child of what?

>> ADAM SPENCER: On the record, there is not. Off the record, there is, and if you send me an email, I will send it to you. [Chuckling]. The PDF Association will be publishing that, but currently, it has still not been ratified. So, I’m happy to send that along.

>> MICHAEL BECK: Excellent. This was fantastic…

>> ADAM SPENCER: And the rules can get complicated.

>> MICHAEL BECK: Oh, okay. That’s good to know. In my previous life as a law librarian, we dealt with governments pushing more content out in PDF and them using PDF/A to make it official and just to make sure people haven’t screwed with it. Authenticate it, that’s the word I was looking for. That was interesting for me to hear the technical aspects of that.

>> ADAM SPENCER: And all of the content we produce for clients in Europe has to be PDF/A as well as PDF/UA. So we don’t have a choice.

>> MICHAEL BECK: I wish that some more domestic U.S. government bodies would do the same.

>> ADAM SPENCER: Agreed.

>> MICHAEL BECK: [Chuckling] So that’s it, that about wraps up this edition of technica11y. Thank you Adam. Thank you to our participants for joining us. We’ll have this episode up on our YouTube channel soon. If you missed something, be sure to check that out. Please, please, share this knowledge that we’ve been amassing with your colleagues. Next month, we’ll have Luis Garcia, the senior product manager for accessibility at eBay on to discuss the various color-related WCAG criteria and how fixing one might create issues in other aspects of a site’s accessibility. That will be on March 6th, and we hope to see you all then. Thanks again, Adam. And thanks to Sylvia from ACS, for the captions. And once again, thanks to all of you. Enjoy the rest of the day.

>> ADAM SPENCER: One quick thing, Michael. If you’re trying to go to our website, it is being relaunched under the new branding next week. So apologies for the sparse details on there.

>> MICHAEL BECK: There we go. Good to know. Thanks, everyone.

>> ADAM SPENCER: Thanks so much.

About Adam Spencer

Adam Spencer of AbleDocs is a world-renowned expert in PDF accessibility. As an active member of a number of ISO committees for PDF and PDF accessibility, as well as the Canadian Vice-Chair of the Standards Council of Canada for PDF-related technologies, Adam continues to be an active contributor to the development of the international standard known as PDF/UA, which ensures the accessibility and usability of PDF with adaptive technologies.

Accessibility Mechanics


[Intro Music].

>> MICHAEL BECK: Welcome to technica11y, the webinar series devoted to the technical challenges of making the web accessible. This month’s presenter is Jared Smith, Associate Director at WebAIM. And now our host and moderator, Karl Groves.

>> KARL GROVES: All right, hello. And welcome to the January 2019 edition of technica11y and Happy New Year to everyone!

We have our guest here, as the introduction said, Jared Smith. Jared is one of my heroes of accessibility. I’ve known Jared through the WebAIM list since 2003, and the WebAIM website, the WebAIM discussion list, and Jared in particular have been huge resources for me personally since that time. So I’m extremely happy to see him here. Now, Jared’s discussion on the technica11y webinar is going to be the interplay between page content, including ARIA, the browser parsing and rendering, accessibility APIs, and assistive technologies, and this is something I feel pretty passionate about. I have a presentation that discusses this, and I try to give it as often as possible, because I think this is core knowledge that everybody in web development needs to understand before, frankly, getting too far down the road of getting involved with accessibility tools and trying to do testing with assistive technologies and all of that sort of stuff. These foundational bits of knowledge that Jared is going to share are some of the most important things that anybody can know.

So I’m going to turn the control over to Jared and let Jared give us his presentation. Thanks!

>> JARED SMITH: Here we go. I just had to find the right button to unmute. Thank you, Karl, for the introduction. It’s good to be here. I really appreciate the invitation to present today. And the opportunity to talk about accessibility a little bit.

I’m going to try to move this thing — here we go.

All right. Now we’re cooking.

All right, but yeah, Happy New Year to everyone! I hope you had Happy Holidays and wish you all the very best in the coming year.

I am going to speak today about what Karl talked about, and I’ve titled this “Accessibility Mechanics.” Now, this is a term that I took from Naomi Watson, my good friend; she has presented on this a lot. And I tried to really think of a better way to describe this, and “Accessibility Mechanics” is really the best that I could come up with.

So, the mechanics, the internals of accessibility: the things that make everything we put together for accessibility generate a positive end user experience.

So, I’m just going to dive right into this and start by talking about what we generally refer to as being the web stack: where HTML is the foundational layer of this stack, and then we layer on top of that CSS, and then scripting on top of that.

And generally with the web stack, we want to keep these three layers distinct and separate as much as we can. So, if we look at these three layers individually, our foundational layer of HTML is where our structure and semantics happen. It’s also where most of our accessibility magic happens. That’s where we define headings and regions and alternative text and links and lists and buttons and form inputs and tables and so on and so on. We have the vocabulary of HTML, the language of HTML, that has quite a few things we can define to make things accessible. These are things that generally happen under the hood of our page to support optimal accessibility.

I sometimes tell developers, “full stack developers,” I’m doing some air quotes here, that if they really want to differentiate themselves from their peers, they should master HTML. And it’s interesting that most of the accessibility issues we encounter tend to come just from a lack of understanding of foundational HTML. I don’t think that HTML is difficult, but I think that a lot of developers are not that familiar with it. So, a great way to really support accessibility and to really differentiate yourself from others is to absolutely master the semantics of HTML. Excuse me.

Bruce Lawson has a great blog post, published just a few weeks ago, titled “The Practical Value of Semantic HTML.” I encourage you to go read that. It talks about the value of HTML in this web stack for full stack developers.

Also in this layer comes much of ARIA: Accessible Rich Internet Applications, the additional stuff we can add to HTML to expand its vocabulary. With HTML being the language, ARIA allows us to expand that vocabulary when it’s necessary. And that’s an important point. And it isn’t always stressed, I think, when we talk about ARIA and accessibility.

So I’ll be talking about ARIA a little bit as we go through this. Because it is an important tool, a very valuable tool.

But we start with good HTML. It’s important to understand the ARIA roles: as we expand the vocabulary or change the role of things within HTML, what that HTML element does or is defined as being or doing to the end user, we don’t want to override those native HTML roles. Or at least, if we do it, we have to be careful with it. For instance, `<input type="checkbox" role="radio">` doesn’t make a lot of sense, because functionally it would be a checkbox within the page but presented as a radio button to the end user, to a screen reader user, and that’s going to be really problematic, especially if it still functions as a checkbox within the page. The interactions between checkboxes and radio buttons are very different.
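As a sketch of the mismatch Jared describes (the `id` values and label text here are just illustrative):

```html
<!-- Problematic: this still behaves as a checkbox in the page,
     but a screen reader announces it as a radio button, implying
     radio-group arrow-key behavior that doesn't actually exist. -->
<input type="checkbox" role="radio" id="opt-a">
<label for="opt-a">Option A</label>

<!-- Safer: leave the native role alone. -->
<input type="checkbox" id="opt-b">
<label for="opt-b">Option B</label>
```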

And another example would be having a list of navigation links on your page. We can give a container for that list `role=navigation`; it would now be defined as a navigation region, which would facilitate navigation to that list of navigation items. However, if we added `role=navigation` to the list itself, say an unordered list, now the semantics of that list are gone. We have overridden that list and defined it as being a navigation region, so the benefits of having a list of items, a list of links, for instance, would then be lost. So we need to think about how we’re implementing ARIA and understand that it overrides those native roles, and sometimes those native roles might be useful or even necessary for accessibility.
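A minimal sketch of that difference (link text and URLs are made up):

```html
<!-- Role on a container: we gain a navigation landmark and the
     list semantics survive, so "list, two items" is still announced. -->
<div role="navigation" aria-label="Main">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/articles/">Articles</a></li>
  </ul>
</div>

<!-- Role on the list itself: the ul is no longer exposed as a
     list, so those list semantics are lost. -->
<ul role="navigation">
  <li><a href="/">Home</a></li>
  <li><a href="/articles/">Articles</a></li>
</ul>
```

In modern HTML, the native `<nav>` element gives you the same landmark without any ARIA at all.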

Also we need to be cautious because the role names in the ARIA specification sometimes are not intuitive. So, we need to be really careful. For instance, menu, `role=menu`. Very often I see people say, “Well, oh, I’ve got a navigation menu, a list of links here. That’s a menu. We’ll give that `role=menu`. Yay, accessibility, we have improved accessibility!” And that typically is not the case, because an ARIA menu, an application menu, is not a navigation menu. It’s not a list of links. The interaction is very different. It’s more like software menus, like File and Edit and so forth, so we would interact with that type of menu very differently than we would a list of links. Tab panels are another one, where sometimes you see a group of links that are visually presented as tabs, but each of those is a distinct link that takes you to a different page.

Very often, we will see ARIA tab panel markup defined for that list of links, and that actually is not a tab panel. An ARIA or application tab panel is very dynamic, where clicking on an item dynamically changes the content within that tab panel, as opposed to taking you to a separate page. Again, the interactions between links that look like tabs and an actual application tab panel are very different.

A third example is a data table. Giving a data table `role=grid`: sometimes you might say, “Well, yeah, a table is a grid of information; we’ll give it `role=grid` to improve accessibility.” Well, an application grid or ARIA grid is more like a spreadsheet, interactive via the arrow keys, and usually editable. So giving a plain data table with text content `role=grid` is probably going to destroy the accessibility of that data table.

So we need to be cautious with these. How do we know that we’re implementing our ARIA correctly? Well, use the specification. Use the ARIA Authoring Practices.

As W3C specifications go, ARIA is pretty friendly. It’s kind of human consumable, especially the Authoring Practices document, which is wonderful. It defines design patterns, has code examples, it outlines the keyboard interactions that are necessary, the proper ARIA to implement for different types of widgets and controls and things that we might want to build and enhance with ARIA.

So, following this Bible of ARIA implementation is so, so vital, especially the keyboard interactions. And that’s important because ARIA does not change browser functionality. Because we’re simply enhancing that vocabulary of HTML and the things that are presented to screen reader users, it doesn’t actually change anything in the browser itself.

So if you implement ARIA, generally you’re going to need to test in a screen reader to ensure that it’s been implemented correctly, especially those keyboard interactions for non-standard widgets. Now, this is all amazing; it’s cool what you can build with ARIA, but it’s so important that we build it all correctly, and I’ll come back to that point a little bit later.

Next in our web stack we have CSS, our Cascading Style Sheets, and this is where a lot of accessibility happens, too. The visual stuff all occurs in this layer: sufficient contrast and good visual design and white space and typography and all of these things that can really improve and enhance accessibility. Now, screen readers typically ignore CSS, but with some exceptions. There are a few others, but the big ones are `display:none` and `visibility:hidden`, which hide content from all users, including screen reader users, and the `::before` and `::after` pseudo-elements, which allow us to define content in our CSS that appears visually within the page. Some browsers and screen readers will read that generated content and some won’t, so we just need to be careful with that and test it.
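To illustrate those exceptions, a small sketch (the class names are made up):

```css
/* Hidden from everyone, including screen reader users. */
.hidden-everywhere { display: none; }
.also-hidden { visibility: hidden; }

/* Generated content: visually present, and some browser and
   screen reader combinations will announce it while others
   won't, so always test. */
.required-field::after { content: " (required)"; }
```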

So, this is also sometimes a little bit of a pain point in accessibility mechanics, because authors maybe don’t know how to style things, or maybe they feel like they are limited by the browsers and the CSS spec, or they don’t really understand CSS. So, they try to do things in other ways. CSS is a really, really powerful tool, but it’s also important that we understand the C in CSS: the cascading component of CSS. So, I have lots of layers in this presentation; this is another set of layers we would want to consider. At the very bottom, we have our browser default styles that define how things are going to look by default: paragraphs are going to be, whatever, twelve-pixel Times New Roman, black on white. The user can define their own default styles that override those browser defaults. As an author, you can define external or embedded styles to change what that paragraph looks like in your page. You can add inline styles that would override the external or embedded styles, to say, change this particular paragraph to make it look a little different.

And then above that, the author can define `!important` styles, and those are going to override any other styles that are defined in the lower layers. They say, this is most important; ensure that this style is applied.

And, at the very top, are user `!important` styles, meaning they can override any other styles that are defined. And that’s really important for us to understand: the user has two places in which their styles can be applied. One is just above the browser defaults, and the other is at the very, very top of this stack, meaning that the user always wins. They have the power to override all of your CSS. And that’s great for accessibility, because they can define colors that are most optimal for them. They can enlarge page content and change font faces, maybe to font faces that are best for them because of dyslexia or a reading disability. There are lots of things they can override within the page.

We just need to understand this cascade and that the user is always at the top of that stack, and, interestingly, very near the bottom of that stack as well. So they win. They can override most anything. We need to understand that, and focus maybe less on designing an end user experience. We need to give up the notion that we control what the user sees or experiences and focus more on enabling a good user experience. In other words, we provide a great default, but if the user overrides or changes things within the page, it’s still going to work and be highly accessible.

And then our top layer within this web stack is scripting, or behavior, where we progressively enhance things, make things better, and add functionality that’s not possible via HTML. This is also very, very powerful for accessibility. We need to make sure that we don’t disrupt the keyboard navigation. Very often overlooked when it comes to dynamic content, single page applications, and so forth: the keyboard interactions very often aren’t really considered and tested. So do keyboard testing, which is easy. Put your mouse away! Start to interact with the keyboard! We can use scripting to enhance that interactivity by setting focus so it programmatically follows visual focus for things like dialog boxes, error messages, and menus when they open, and, when they go away or are dismissed, setting focus back to something else that’s most logical within the page. Those types of things are excellent accessibility enhancements that can occur via behavior and scripting.

But we want to start with our HTML, add CSS to enhance that and make it look sexy, and add scripting to make it smarter and more performant and enhance that behavior and make it even more accessible. Here is a pretty basic example of how some of this can be used. At the bottom layer of HTML, we can define a button element and give it `aria-pressed="false"`, so this is a toggle button to turn filters on or off. Just standard HTML: we have this button and give it `aria-pressed="false"`. `aria-pressed` can be either true or false to indicate whether the button is pressed or activated at that point in time. We can then define our default styles for that filter button when the `aria-pressed` attribute is set to false; the square brackets there in our CSS allow us to define the styles for that particular state based on that ARIA attribute, as defined in our markup or as manipulated via scripting. We can also define our `aria-pressed="true"` styles; in this case, we change the border color and increase the width of that border. And then in our scripting, here is some basic jQuery where, when the button is clicked, we determine what its current state is, pressed or not, and then we toggle that state so it changes to the opposite value: true to false and false to true, and so forth.
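A rough reconstruction of that slide, with assumed names and styles, and using plain JavaScript in place of the jQuery shown on the slide:

```html
<button id="filters" aria-pressed="false">Filters</button>

<style>
  /* Attribute selectors key the visual state off the same
     aria-pressed attribute the screen reader announces. */
  #filters[aria-pressed="false"] { border: 1px solid #767676; }
  #filters[aria-pressed="true"]  { border: 3px solid #000000; }
</style>

<script>
  // Scripting only toggles the state; CSS handles the visuals.
  document.getElementById('filters').addEventListener('click', function () {
    var pressed = this.getAttribute('aria-pressed') === 'true';
    this.setAttribute('aria-pressed', String(!pressed));
  });
</script>
```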

So, this is a great basic example of isolating and keeping these three layers very distinct, and of the power we can take advantage of by defining our standard HTML, controlling states via scripting, and then letting CSS do what it does best: changing the visual appearance. Very often we change visual styles in our scripting, and that kind of morphs the scripting and CSS layers into each other. Sometimes we put our CSS in our HTML. Sometimes we define content in scripting or write content to the page. This approach allows us to really isolate these layers, and that is powerful for development, for progressive enhancement, and ultimately for accessibility.

I saw something that was very, very similar to this example in a popular framework just a little while ago, and to do this basic functionality of defining a button, clicking on it to change the visual styling, and toggling the state, was over 100 lines of code. And there wasn’t even ARIA in there. It wasn’t defining the `aria-pressed` state; it was just doing the visual stuff, because they didn’t start with a button, they started with <div> elements and added stuff to them. Just the concept of keeping the layers isolated is really powerful for development, for optimized, clean code, and for accessibility.

So, I now want to introduce another stack. This is what I call the accessibility stack. At the bottom, we have the webpage itself, and that’s really where those other three layers of the web stack come into play. They are all part of that webpage: our HTML, our CSS, and our scripting are all put together to generate this webpage. Above that we have the browser that interprets that webpage content, and above that we have assistive technology: screen readers, voice control, and so forth; software like JAWS, NVDA, VoiceOver, Dragon, and so forth.

So I wanted to talk a little bit about this stack. But before I get into more detail. I want to talk a little bit about WCAG and some of the terminology that’s used within web content accessibility guidelines particularly the term “label” and the term “name.”

Now, in the guidelines we are required to provide an accessible name for elements within the page. The name defines what to call this thing, how it’s essentially titled or labeled. And then we also have to have an accessible label. Now these terms sometimes get jumbled up. It can be a little confusing because of the way in which they are used within the guidelines.

So the “label”, as defined by WCAG, is something that’s presented visually, that visually defines or explains what something is. The “name” is what is presented to assistive technology. It’s sometimes called an accessible name and it can be visually hidden. So I’m going to go through a few examples of this.

So, here’s a text box that has adjacent label text of “First Name.” Okay, we can use the <label> element to associate that text, “First Name,” with the input itself, so that if a screen reader user were to navigate to that text box, it would be defined as the First Name text box.

So we look at name and label. According to WCAG, the “label” is the visible text of “First Name.” The name is also the text “First Name,” but it has been defined as the accessible name because of the <label> element. So this is where it gets a little confusing: the <label> element should not be confused with the WCAG label.

Now, in this case they happen to be the same thing, the same text: “First Name.” But the label is the visual text; the name, or accessible name, is the thing that’s been defined to be read by the screen reader. In this case, it happens to be defined via the <label> element.
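A minimal sketch of that text box (the `id` value is illustrative):

```html
<!-- The visible text is the WCAG "label"; the <label>
     association also makes it the accessible name. -->
<label for="first-name">First Name</label>
<input type="text" id="first-name" name="first-name">
```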

So you can see where the terminology can sometimes get a little confusing. Now, we can have name and label be different. An example would be a linked image where the text on that image is “Next,” but the alternative text for the image is “Continue.” We now have a mismatch between the visible label, which in this case is “Next,” and the accessible name, which in this case is defined by the `alt` attribute as “Continue.”

Now, the easiest way to avoid this would probably be to not use an image. If we were to just use a text button and style it, then we would ensure that the label, the visible text, and the accessible name are the same. But in this case, we have a mismatch, and this could cause some issues for a screen reader user. For instance, the screen reader would read “Continue,” but that user may see the word “Next,” or maybe be told to find the “Next” button within the page.

For a voice control user, they would probably use the word “Next” to try to activate this button via their voice, but because the word “Next” is not actually programmatically within the page, that’s not going to work. Only the word “Continue” is defined as the accessible name within this page, so we want to avoid this. And WCAG 2.1 addresses this via a new success criterion titled “Label in Name,” which reads, “For user interface components with labels that include text or images of text, the name contains the text that is presented visually.”

Okay, so really what this is saying is that the label, the visual text, needs to be part of what’s read by a screen reader, part of that accessible name. And I think this is really useful for accessibility. I can think of some exceptions where this maybe doesn’t make sense, but, for the most part, if you can see something visually on the page, it should be part of what’s read by a screen reader. This is especially important because we need to consider that most screen reader users have some vision. Very often we think of screen reader users as being blind, as only experiencing content audibly, but the majority of the time there actually is some vision. And some users with cognitive or learning disabilities may use a screen reader, maybe because of a reading issue or even a language barrier; they may prefer to hear content as opposed to reading it visually. So this is a great success criterion; it helps support better accessibility.

So that previous example of the “Next” and “Continue” buttons that would be a failure of this because the label, the visual text, was not included as part of the accessible name, in that case, the alternative text.

So there are rules by which browsers determine how to name particular elements. Consider this markup here. We have a search text box. Adjacent to it, we have visible text of “Search.” That text has been associated with the text box via the <label> element. Our input also has `aria-label="Search Terms"`, a `placeholder="Search WebAIM"`, and a `title="Search articles and blog"`. So, here we actually have four pieces of text that are all associated with this particular text box: we have the label text, we have an `aria-label`, we have a placeholder, and we have a title.
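Reconstructing that markup as a sketch (the `id` value is assumed):

```html
<label for="q">Search</label>
<input type="text" id="q"
       aria-label="Search Terms"
       placeholder="Search WebAIM"
       title="Search articles and blog">
<!-- Four candidate names: the <label> text, the aria-label,
     the placeholder, and the title. -->
```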

Well, what’s the screen reader to do with this? Oops! I jumped ahead and gave away the answer a little bit. There’s a way in which the browser is supposed to interpret these different types of associations to determine what the name should be. In this case, the name is defined via the `aria-label`; it would define this text box as being “Search Terms,” the `aria-label` text, because ARIA wins most of the time. Usually ARIA is at the top of that accessible name calculation. In other words, if we define an `aria-label` or `aria-labelledby`, that typically will override anything else that’s been defined as being a label for that particular element.

So, there is a W3C specification. It’s called the Accessible Name and Description Computation spec. This is something that’s implemented within the browsers to determine, “What do you call this thing? What is its name?” And it helps define what types of things we can define and which wins if multiple labels or names have been defined. It’s a little confusing. We don’t need to worry too much about the actual specification. We just have to know that the browser is going to figure out what to call this thing, what its name is, and how it does that is defined. It’s defined in the specification and there’s a hierarchy and logic to that.

So, we do need to consider, however, that when we try to define labels or descriptions for things, we need to understand the rules and how they are going to work, so we can ensure that what is read by a screen reader is appropriate.

So, first of all to define a label or description for something, it has to be labelable. That element has to be able to have a label. So links and form controls, tables and so forth, those are labelable elements. We can associate text to them and have it be read as that accessible name.

Things like divs, spans, and paragraphs are not labelable. So, for instance, if we gave `aria-label` or `aria-labelledby` to a <div> or <span> or <p> (paragraph), it should be ignored; it shouldn’t be read at all, because that element is not labelable. We can, however, make that thing labelable by giving it an appropriate ARIA role. For instance, if we give a <div> `role=region`, that <div> is defined as a region or landmark within the page and is now labelable; we can give it `aria-label="Filters"` to define a filters region within a single page application.
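A sketch of that contrast (content and label text are made up):

```html
<!-- Not labelable: an aria-label on a bare div should be
     ignored by the accessible name calculation. -->
<div aria-label="Filters">
  <p>Filter controls go here.</p>
</div>

<!-- With a landmark role, the div becomes labelable, so a
     "Filters" region can be announced and navigated to. -->
<div role="region" aria-label="Filters">
  <p>Filter controls go here.</p>
</div>
```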

Those associated labels and descriptions are also read as a stream of text when they become that accessible name or description. My slide there was supposed to say “read as a stream of text”; that’s what happens when I edit slides at the last minute. So when a text box has `aria-labelledby`, the content within that referenced element is simply read as a stream of text. That means it’s difficult to explore and navigate: if you hear a word in that label or description that’s unfamiliar, it’s hard to pause your screen reader and really explore it, because it’s really a separate element that’s just being injected as a string into that accessible name field that’s read by the screen reader. It’s also devoid of semantics. So, for instance, if we have headings or links or lists within that separate element that’s a label or description, those semantics are going to be removed, stripped out, when it’s read as that associated label or description.

So we need to keep those things in mind. A lot of what this means is that labels and descriptions should be short, succinct, really to the point, and should appropriately label or describe that element.

We also have to consider that ARIA labels and descriptions will be read even if they are styled with `display:none` or given `aria-hidden`. Now, I mentioned before that screen readers generally will respect `display:none` and `aria-hidden`, but if something is referenced as a label or description and it’s hidden with `display:none` or `aria-hidden` or other CSS, it will still be read. A common example of this might be form error messages, where we may have a piece of text below a form field that indicates there’s an error for that particular field, and we associate that error message with the field itself via `aria-describedby`.

In the default state of that form, when it’s not in an error state, those error messages will be hidden. But even though they are hidden with `display:none`, they would still likely be read by a screen reader, because they are defined as the description of that field. So, it’s another one of those rules that often results in a screen reader experience being rather different from the visual experience.
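A sketch of that error-message pattern (the ids and message text are made up):

```html
<label for="email">Email</label>
<input type="email" id="email" aria-describedby="email-error">
<!-- Hidden visually, but because it is referenced as the
     field's description, many screen readers will read it
     even in the non-error state. -->
<div id="email-error" style="display: none;">
  Please enter a valid email address.
</div>
```

One way around this, if you don’t want the message announced before an error actually occurs, is to add the `aria-describedby` association via scripting only when the error is shown.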

Okay, so let’s take a lot of this and start to pull it together and find ways we can test and analyze and take advantage of these different layers or stacks. So, again, putting this together: a webpage stack, with everything we put together to build our page, the browser on top of that, and assistive technology on top of that. We have what’s called the Accessibility Tree. The Accessibility Tree is something that is generated by the browser based on the page content or, more specifically, the DOM, the Document Object Model, of the rendered webpage. This is really powerful. In the old days, a long time ago, we didn’t have this accessibility tree. Essentially what happened is the browser just kind of read your code and generated stuff that it would send to the assistive technology. That meant that if we manipulated the page with scripting, the assistive technology usually wouldn’t get those updates; at least, the browser wouldn’t send those updates unless we forced it to or told it to, and it was really quirky. But now if we update the webpage dynamically with scripting, that will update the Accessibility Tree, which is then used to convey information to the assistive technology.

We can analyze that Accessibility Tree using just the tools within our standard browser. So I’m going to show this real quick. I’m going to do this with Chrome. Chrome has something called the accessibility internals page, at `chrome://accessibility`. Now, the very term “internals” suggests that this is kind of icky, maybe something you don’t want to dive into that much. But it is kind of interesting that you can access this internals page and enable some of the options or modes; I’ve turned on web accessibility and screen reader support. I need to make it bigger, it’s small. We have a page opened here, the WebAIM page, and I’ll go to Show Accessibility Tree.

This is what you get. This is the internal accessibility tree for the WebAIM.org homepage. Yikes. Yeah, this is the internal stuff. This is not very friendly. But you can maybe see here that there is a hierarchy or structure to this. We can start to explore this and find individual elements within our page. For instance, here is a heading, and it has a title of “WebAIM: web accessibility in mind.” Within that heading is a link, and within it is an image, and we can find the alternative text for that image. There’s a lot of these attributes. This is not a very good or friendly way to explore the Accessibility Tree, but it is a possibility to really dive right into exactly what is being generated and defined in that Accessibility Tree by the browser.

A better way to do this is probably via the developer tools. So I’m just going to inspect an element here. And pull over the developer tools window. And we have an option here for accessibility. And then within this accessibility panel, it’s going to show us the Accessibility Tree.

So, here we can start to explore the structure not just of the DOM or code of the page but what’s being generated into the Accessibility Tree, the things that are going to be presented to a screen reader for accessibility.

So I inspected this link. This link is within a heading. We can see the ARIA attributes that may be defined for it, and then the computed properties: these are the things that are defined by the accessible name and description computation.

In this case, the name is “Accessibility Training.” That’s what this link will be read as: the “Accessibility Training” link. And it has a `role` of link. That tells us this is probably going to be accessible. It’s a link, and it has the name “Accessibility Training.” We can also see how that accessible name is being calculated. Is it being defined by `aria-labelledby`? By `aria-label`? In this case, it’s being defined by the contents, the text within that link. If it had a `title` attribute, that would also be shown here, but it would be overridden by the content of that particular link.

So if you’re not quite sure what would be presented by a screen reader, especially if you have text and ARIA labels and titles and maybe placeholders and things like that, you can use the accessibility panel to go in and inspect that element, determine what the actual accessible name is, and ensure that the accessibility role is correct.
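The priority order being described can be sketched as pure logic. This is a deliberately simplified illustration, not the real algorithm: the actual Accessible Name and Description Computation has many more steps, but the ordering for a simple link is roughly `aria-labelledby`, then `aria-label`, then text content, then the `title` attribute.

```javascript
// Simplified sketch of accessible name priority for a link.
// Illustrative only; the real accname computation handles many more cases.
function accessibleName({ ariaLabelledbyText, ariaLabel, textContent, title }) {
  if (ariaLabelledbyText) return ariaLabelledbyText;     // highest priority
  if (ariaLabel) return ariaLabel;
  if (textContent && textContent.trim()) return textContent.trim();
  if (title) return title;                               // last resort
  return ""; // no accessible name at all: a problem for screen reader users
}

// Named by its text content; the title attribute alone would lose.
console.log(accessibleName({ textContent: "Accessibility Training", title: "Training" }));
// → "Accessibility Training"
```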

So this can be helpful. This can be a really useful tool in debugging and testing accessibility. Generally, it’s only necessary if you’re doing things like ARIA or dynamic content updates.

So, a little bit more about the Accessibility Tree, or how your browser interprets your webpage for accessibility. If we just have a standard `<button>`, it will be defined with a role of button and, in this case, a name of “Subscribe,” the text content within that button. Pretty straightforward.

This is also why we don’t need to, or wouldn’t want to, add `role=button` to this element. It already has a role of button. Giving it `role=button` with ARIA doesn’t do anything useful; it just introduces a potential point of breakage down the road. Maybe we change this button to some other element but leave the `role=button` in place, and now we have an accessibility issue.
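In markup, that redundancy looks like this; both buttons expose the same role and name, but the first carries an attribute that can silently get out of sync later:

```html
<!-- Redundant: a native <button> already has role "button". -->
<button role="button">Subscribe</button>

<!-- Sufficient, and one less thing to break later: -->
<button>Subscribe</button>
```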

If we have a link, this has a role of link and a name of “Register now,” with “Register now” being the text within that link.

However, if we were to give this link `role=button`, it now has a `role` of button and a name of “Register now.” So for a screen reader user, this would be announced as the “Register now” button.

But we have a potential issue here, because we maybe haven’t considered the keyboard interaction for this element. It is presented as a button to the screen reader user, and the keyboard interactions for buttons are a little different from those for links. Links are activated via the Enter key; buttons are activated via the Enter key or the space bar, and screen readers generally indicate to the user that they should use the space bar to activate. The screen reader in this case might read, “Register now button, press space to activate.” But if the user presses the space bar when this link has focus, it actually will not activate by default, because it is a link, not a button. It’s only presented as a button to the screen reader because we added the ARIA role.

So we could address this by doing key event detection, using scripting to detect whether the user has hit the space bar, and then activating this thing.

Don’t do this. Just use the proper element. If it’s a link, use the `<a>` element and make it a link; if it’s a button, use the actual `<button>` element, without trying to manipulate this via ARIA. We can do all of this via HTML, therefore we should. The spec would probably say “must,” but rules of HTML are meant to be broken in some ways. Use the proper elements.
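Side by side, the anti-pattern and the fix look like this (the `href` here is just a placeholder):

```html
<!-- Anti-pattern: announced as a button, but the space bar won't
     activate it without extra key-event scripting. -->
<a href="/register" role="button">Register now</a>

<!-- Use the right element for the job instead: -->
<a href="/register">Register now</a>          <!-- goes somewhere: a link -->
<button type="button">Register now</button>   <!-- triggers an action: a button -->
```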

By the way I do see the comments coming in, I can’t really jump to the comments. I’ll get to those at the end. I promise I’ll answer your questions.

So this is just where — I don’t know. This is just a pain point in ARIA implementation if we add roles or add ARIA without ensuring proper keyboard interactions.

This is something else we see quite often. Here we have a link, the same link, our “Register now” link, but it’s been given `aria-label="Opens in new window"`. In this case, the `role` is link but the accessible name is “Opens in new window.” Because the `aria-label` overrides the default name or text of this link, which is “Register now,” this would be read by a screen reader as “Opens in new window, link.” The text of “Register now,” which is really the important stuff, is lost. In fact, it’s gone. It’s inaccessible to a screen reader.

Why? Because we told it to. By adding the `aria-label`, the code tells the screen reader that the name for this link is not “Register now” but “Opens in new window.” This would also be one of those new WCAG 2.1 Label in Name failures, because the visibly shown text label, “Register now,” is not included within the accessible name of the link.
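The broken version and one possible fix, in markup (keeping the visible text at the start of the accessible name so speech-input users can still say "Register now"):

```html
<!-- Broken: aria-label replaces the visible text entirely; screen readers
     announce "Opens in new window, link" and "Register now" is lost. -->
<a href="/register" aria-label="Opens in new window">Register now</a>

<!-- One possible fix: keep the visible label inside the accessible name. -->
<a href="/register" aria-label="Register now (opens in new window)">Register now</a>
```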

Okay. Another thing that’s really interesting is something called the Accessibility Object Model, or AOM. This is something new that’s been introduced and is starting to be implemented in browsers. It allows us to script or manipulate or change things at the Accessibility Tree level, as opposed to only within the DOM itself. Typically, if we want to use scripting to set an attribute for an element, we would get that button by its ID and use `setAttribute` to change `aria-pressed` to true, for instance.

Using the Accessibility Object Model, we can instead set the ARIA attribute directly within the Accessibility Tree as opposed to in the DOM. Now, this is under construction. It’s new, it’s not supported by older browsers, but it is kind of interesting, because typically what happens is if we make changes in the DOM, that needs to be kind of reprocessed by the browser to generate an updated Accessibility Tree with those accessibility changes. With the Accessibility Object Model, we can make those changes directly at the Tree level.
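A small sketch of the two approaches, assuming a hypothetical toggle button with the id `mute`. The property form (`ariaPressed`) is ARIA attribute reflection, one piece of the AOM work; support varies by browser version, so treat it as illustrative rather than something to rely on everywhere:

```html
<button id="mute" aria-pressed="false">Mute</button>
<script>
  const btn = document.getElementById("mute"); // hypothetical button

  // Today: toggle state via a DOM attribute; the browser re-derives
  // the Accessibility Tree from the DOM change.
  btn.setAttribute("aria-pressed", "true");

  // With ARIA attribute reflection (newer browsers only),
  // the same state can be set as a property instead:
  btn.ariaPressed = "true";
</script>
```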

Now, why would we want to do this? There are a few cases. For instance, things like web components, things with a shadow DOM where we aren’t really — we don’t really want to make those changes directly in the DOM but it might make sense to enhance accessibility via scripting for that particular element directly within the Accessibility Tree. This can also be useful for inspecting accessibility or testing it. For instance, checking whether a particular role or attribute is valid for an element or supported. Does the browser actually add this particular role or attribute correctly in the Accessibility Tree? We can also, again, just directly manipulate that Accessibility Tree.

So there’s, I think, a lot of potential for this down the road when it is better supported. It’s also a little bit scary to me. Because we can change things, we can really break accessibility and make it kind of difficult to detect, because it’s not directly within the DOM of the page. It’s more in those icky internals within the browser that can be a little bit more difficult for us to test and analyze.

A lot of what this is going to require is better screen reader testing, better assistive technology testing to make sure that things are actually working the way that they are supposed to.

Okay. There’s another little interaction I want to talk about briefly, and that’s accessibility APIs. The accessibility APIs are how the browser communicates all of this accessibility information to the assistive technology itself, such as screen readers. Every operating system has its own API, or multiple APIs in some cases, which are just the specifications that define how these things are going to be communicated between the software and the assistive technology via that operating system.

So those APIs are going to define channels, things like role, what is this thing, what does it do, name, how do we title or tell you what this thing is, description, properties, whether a check box is checked or unchecked or required and so forth. Those can all be defined via these API channels. And those values are going to be determined from the Accessibility Tree that’s generated in the browser from the content and DOM of our page.

You can also use tools to inspect the accessibility API channels. Meaning we can actually see what is being communicated from the browser directly to the assistive technology.

So on a Mac, you can do that with the Accessibility Inspector in Xcode. On Windows, there’s the Accessibility Viewer from my friends at TPG, a great tool to tap right into those communication channels to see what’s being communicated. This can be really useful, especially for debugging where things have broken.

So, sometimes as a developer, you build something, say an ARIA widget, and it doesn’t work correctly in the screen reader. And you have to figure out, “Did I do something wrong? Is the browser not supporting this? Or is it the screen reader?”

So you can look at these different layers, these different points of where accessibility happens to try to determine where that’s at and we can look at the ARIA specification and determine whether we have used incorrect code or ARIA. If that’s the case, we need to fix it. We can inspect the Accessibility Tree. If those things are not being defined correctly, maybe the browser is not properly implementing the accessible name computation, then we would need to file a browser bug so it’s interpreting our code or DOM correctly into the Accessibility Tree.

We can also inspect the API channel. Is the browser communicating the proper stuff to the screen reader? If it’s not, file a browser bug. And if it is, and it’s still being read incorrectly by the screen reader, then file the screen reader bug, that’s probably where the issue is occurring.

Now, most of the time it’s the browser’s fault, at least, if you have implemented your code correctly and it’s not being read properly by the screen reader, usually that’s because the browser is doing something quirky. So, just remember that. Screen readers usually read what the browser presents. So very often, we curse at the screen reader saying, “Oh, it doesn’t support this!” But usually it’s the browser’s fault. We now have tools to determine that. We can analyze all of these layers to determine where the breakdown has occurred.

Okay, so that’s a lot that all comes together. Now, this is important, I think, for modern accessibility. It didn’t used to be; we didn’t used to have to think about all of these things, we just had to write proper code and it would work. But more and more, especially with ARIA and with complex applications, we need to think about these layers and interactions and do more and more debugging.

Now, there is one last critical layer, the most important of all, at the very top of that stack: the user. That’s what this is all about, ensuring a proper end user experience and good accessibility for our users. But we need to understand these layers to ensure that proper accessibility.

Okay. With that, I will say thank you and I’m going to pull up a quick slide here just with some credits and I will go to answering your questions.

There was a comment here that says it doesn’t help that Google’s design calls their style of menu links “tabs.” Designers can’t see the difference when frameworks name things like that. Yeah, I think the comment is, you know, we get this weird interplay between what we see, things that look like tabs and are called tabs but are not actual application tabs, or we call things a menu, maybe a mega menu, that is not the same as an application menu. Yeah, that’s a lot of where we run into these difficulties in implementing ARIA, because what it looks like may not actually align with the ARIA specification, with the actual design pattern.

Okay. The next question was wondering whether inline styles injected via JavaScript with `!important` would override a user’s `!important` styles in a user style sheet.

Okay. So I think the question there was whether — yeah, if you define styles via scripting, whether those would override the user — end user style sheet. No. They shouldn’t. They shouldn’t at all. The end user should ultimately have control over those styles.

There’s a comment that those injected style sheets can be interpreted as user `!important` styles, so maybe that isn’t the case. There may be places where those injected styles cause issues. Good point, good question. I don’t know the answer to that; I’ll have to do some testing.

Okay. Steve in the chat pointed out the link to the Accessible Name and Description Computation. Great. It’s not a terribly lengthy specification, and it’s really helpful to look through if you’re not sure what is going to take priority as you define things with ARIA, like `aria-label`, `aria-labelledby`, and `aria-describedby`. Good.

Okay. Sarah asks are hardware accessibility issues such as having to use mouse and can’t use keyboard in the accessibility API?

You might provide a little clarification if I’m answering the wrong question here, but not really. I mean, the accessibility API, what’s communicated from the browser to assistive technology, is going to be pretty agnostic of the input technology itself.

So, for instance, if you’re using a screen reader, the browser doesn’t know if you’re using that screen reader with a mouse or only with a keyboard. It’s just presenting the accessibility information. It would be up to that assistive technology itself to know how to — to create that interaction for you. For that particular element. And it’s interesting when it comes to screen readers because some interactions that happen with a webpage occur at the screen reader level and some occur at the browser level. Typing into a text box is you manipulating or typing directly into the browser. You hitting the H key or command-option-H on a Mac to navigate by headings is going to be handled by the assistive technology, which changes the interaction within the page.

So most of that — yeah, most of that interaction stuff, the hardware level, you know, mouse, keyboard, is going to be independent of those accessibility APIs. Hopefully that’s answering the right question there.

>> KARL GROVES: And Jared, that’s something I see a lot of people misunderstanding when I do training, which is that once the assistive technology is enabled, the assistive technology sort of takes over that decision of what it’s going to do with those keys when you hit them, based on the interaction mode that has been enabled at that time.

So you know, if you’re in browse mode versus interaction or forms mode, stuff like that, that’s where the assistive technology sort of trips you up. And as a matter of fact, sometimes when people are doing testing, especially with JAWS, I’ve seen JAWS do things like fire a click event on something that didn’t have a click handler on it, because it’s like, okay, we know what they are trying to do here, so we’re going to just take over.

>> JARED SMITH: Yeah. If we go back to that accessibility stack, that assistive technology was above the browser.


>> JARED SMITH: And that’s again where that user interaction flows through that assistive technology. And yeah, that can be a barrier, I would point out, especially on Windows. On a Mac, there’s a different keyboard interaction between the user and the browser itself. The screen reader keys almost always occur via a key combination: Control-Option and some key to cause the screen reader to trigger, say, heading navigation or region navigation. On Windows, the screen reader can toggle between different modes, meaning is the screen reader handling the keys or is the browser handling the keyboard interactions, and it can switch between those modes.

So, it gets a little complex. I think the biggest takeaway there is if you are a developer on a Mac testing only on a Mac and are implementing certain ARIA roles, be cautious. Follow the ARIA design patterns and you’re probably going to need to test on Windows with a Windows screen reader to ensure that the keyboard interactions are happening correctly.

>> KARL GROVES: Yeah. Well, all right, if there are no other questions, and I’ll babble a little bit so people can plop in some questions if they want, I just want to say thank you for this. I think this is going to be the kind of thing that we highlight a lot when people come to us with questions: hey, look at this awesome webinar that we had with Jared Smith.

Now, next month our guest is going to be Adam Spencer. Adam Spencer is my personal PDF guru. If I have PDF questions, that’s who I send them to. And he’s going to be talking to us about PDF accessibility.

And so that will be — Michael, what day is that?

>> MICHAEL BECK: That would be on Wednesday, February 6th at 11 a.m.

>> KARL GROVES: All right. Wednesday, February 6th. Adam Spencer. That one is also going to get streamed onto Facebook. We have found a way to shoot Zoom over to Facebook to stream your videos to that, as well.

And as Michael said in the comments section or in the chat section, once we get this thing downloaded and archived and edited and transcribed and all of that good stuff, we will be posting this on YouTube and on the Technica11y.org Web site. And thank you, all, very much for attending and I hope to see you next month.

Jared Smith

About Jared Smith

Jared Smith is the Associate Director of WebAIM. He is a highly demanded presenter and trainer and has provided web accessibility training to thousands of developers throughout the world. With a degree in Marketing/Business Education, a Master’s Degree in Instructional Technology, and almost 20 years experience working in the web design, development, and accessibility field, he brings a wealth of knowledge and experience that is used to help others create and maintain highly accessible web content. Much of his written work, including a broad range of tutorials, articles, and other materials, is featured on the WebAIM site.

Single switch usability.


[Intro music]



>> MICHAEL BECK: Welcome to Technica11y, the Webinar Series devoted to the technical challenges of making the web accessible. This month’s presenter is Thomas Logan, founder and CEO of Equal Entry.

>> MICHAEL BECK: All right, everyone. Welcome to the December version of technica11y. I’m Michael Beck, the Operations Manager at Tenon. This month we have Thomas Logan from Equal Entry who will be discussing single switch usability. Correct?


>> MICHAEL BECK: Yes. So, take it away, Thomas, all over to you.

>> THOMAS LOGAN: Thank you very much.

>> MICHAEL BECK: Before we begin if anybody could put any questions in the chat, we’ll get to them at the end of the presentation.

>> THOMAS LOGAN: Great. So, I’m just going to get my screen set up here. Make sure this comes through.

All right. So, as introduced, my name is Thomas Logan. I’ve worked in the accessibility space my whole career. I started as a Computer Science student at the University of North Carolina, and I worked with a student who was blind in the classics department, thinking about accessibility for ancient world maps. That was my first exposure to thinking about technology and how technology can provide access to information. And so, I’ve basically spent my whole career being interested in this topic: what can we enable via technology that prior to technology might have been very difficult?

So, today’s topic of considering single switch accessibility. I’m very appreciative to have this opportunity to talk to you all about this today.

I was very interested to get to work on this topic in a small consulting project this year where, basically, the task and the request was to consider how to improve a specific experience for someone that uses a single switch.

And I appreciated that opportunity as an area to focus on. I’ve been working in accessibility for over 15 years, and I find that a lot of the work, rightfully so, often focuses more broadly on standards like the Web Content Accessibility Guidelines, and most of my experience is working with screen readers and considering how people who are blind get programmatic access, underlying access, to information.

So, this opportunity to really think deeply about the single switch accessibility use case, really gave me that opportunity to explore and understand some of the Web Content Accessibility Guidelines better or through a different lens than I had in the past. So, I wanted to have this be a conversation and demonstration today. Probably a lot of you have not had projects working on single switch accessibility. Maybe some of you on the webinar today have, so, I kind of lead here with this is my experience focused working really on a project but I’m always eager to learn from everyone else’s experiences.

So, I wanted to start with this quote and key point when we’re thinking about single switches.

So, the concept is, “for some people their interactions with the world take place through a single switch. Imagine having to control everything you want using a light switch.”

So, I like that this is actually from a research paper I’ll be showing later in the presentation. But I liked that way to consider this: actually turning on that light switch. It’s a binary on or off. It’s a single switch we can use to turn a light on or off in a room and this is basically what we need to consider or think about for this technology use case. We have the ability to control a single switch.

How can we get every interaction that needs to be taken made available through just that single switch?

So, I think some of the ideas here, some of the ways that people can control switches: we can blink eyes. We can move our head. We can move a hand, maybe some people would have the dexterity to move their entire hand, maybe balled up as a fist, or maybe just only having one finger on a hand. Again, considering that someone may be able to move all five fingers or ten fingers.

The ability to control a switch — to control a single switch. We could potentially just use a single finger to control that single switch. Use an elbow, a foot, a toe, a knee, a tongue.

It’s definitely a broad spectrum here. And again, one of the points I want to stress in today’s presentation is that I wanted to look at the single switch use case because that was the design thinking for this project. But obviously, one of the things that’s interesting about considering use cases for people with motor impairments of the upper or lower extremities is that any combination of these can be present for an individual. Some individuals may be able to control five switches or eight switches; some may only be able to control one switch.

So, that’s just part of the design thinking that I think will also be interesting as we have this discussion.

So, one of the techniques, and I’ll be demonstrating this with different products, is scanning. Scanning is the way that, on a technology interface, we can have a single switch be able to access information. Scanning is basically moving through a list of items, one at a time, and then indicating a selection by tapping the switch.
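The core of scanning can be sketched as pure logic: a highlight index that advances on a timer and wraps around, with the one switch press selecting whatever is currently highlighted. The names here are illustrative, not from any real switch-control API:

```javascript
// Illustrative sketch of single-switch scanning: the highlight moves
// automatically through the interactive items and wraps around; the
// single switch press "selects" the currently highlighted item.
function createScanner(items) {
  let index = 0;
  return {
    current() { return items[index]; },
    advance() { index = (index + 1) % items.length; }, // auto-scan timer tick
    select() { return items[index]; },                 // the single switch press
  };
}

const scanner = createScanner(["Back", "Play", "Next", "Volume"]);
scanner.advance();             // highlight moves from "Back" to "Play"
console.log(scanner.select()); // → "Play"
```

In real switch-control implementations, the tick interval (how long each item stays highlighted) and the number of loops are exactly the configuration options the iOS demo below walks through.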

So on the screen I have — this is basically going to be the iPad demo.

But on iOS, we have this inside of the accessibility settings. The great thing about Apple’s iOS is that each new release gets more and more of these accessibility options and settings, though one piece of that is we have to look through this whole list to find the one we’re looking for. Switch Control is the accessibility feature on iOS that you can use to see this functionality. Inside of these configuration settings, I am only set to have a single switch. Again, this is the idea that many people may be able to control more than one switch, but if we start from the baseline, we can start with a single switch, and then we have a lot of different configuration options for that single switch.

But the concept is when we turn the single switch on, we now see a red highlight rectangle that starts moving between each interactive item that’s on the screen.

So, it skips over things that are just plain text. But anything that could be clicked on to be controlled has this highlight focus move through the screen.

And the number of times that loops through the screen can be controlled. The speed of that loop can also be controlled.

So, I have this going at 0.5 seconds. It’s actually like 500 milliseconds on each element that could be clicked on on the screen and that’s actually moving quite fast.

So that’s, again, part of understanding the individual use cases is someone may be able to move very quickly with the head or the tongue or the part of their body they are using to control the switch. Another user may need more time to actually hit and activate that switch. So that’s why all of these different options — I’m going to tap off the switch right now and hopefully do a good demo of that.

So, I’ve turned off the switch. But I could actually give myself more time for how long to control that switch by changing this auto scanning time to a different value, and that will slow down how quickly that highlight rectangle moves around the interface.

But I think one of the interesting points here is I, at least, very frequently if I’m doing accessibility testing, I typically use VoiceOver. And I have this model of sometimes swiping right and just swiping through every element that’s on the screen to make sure that it has an accessibility alternative and a name, a role, a value, those types of things.

But one of the things that’s cool with the switch is seeing that rather than having to swipe right, swipe left, and navigate through all of the elements in the screen using the switch access is a good way to actually test the logical ordering of elements on a particular screen that you’re looking at.

So that will be a part I’ll really stress when we go into the illustrative example of composing music: the position and the logical ordering that we choose for elements on the screen. As people who work in developing or designing software, this is a consideration that could be put more at the forefront of our design thinking when we want to consider all users that need to access the technology. This is also a point to make when we think about the WCAG standard saying everything has to be operable from the keyboard.

There are over 100 switches on a standard keyboard, and using them assumes the dexterity and ability to individually access all of those keys with ten fingers. With all of these different switches, we can frequently get very complicated interactions when we design for keyboard accessibility, and that’s one of the assumptions that we need to challenge, especially for software designed for the web and for desktop. Complex key sequences like holding Control-Option-U, or Control-Alt-Delete, another common one from the Windows days, those combinations and those requirements to control the interface through the keyboard can be much more difficult when you don’t have control of multiple switches to execute them quickly or easily.

So that is another thing I wanted to quickly show as a comparison. I’m on a Windows PC here, and this is one of the tools that’s been built into Windows at least since Windows Vista, maybe earlier: the ability to use a single switch with the virtual keyboard that exists on Windows. I think this is a good example of functionality that does come in by default. I can basically select any key on the keyboard through this scanning method. Similar to the scanning method we saw on iOS, on Windows this functionality actually groups keys together into groups of four, and I can use my space bar or some other single switch to pick a row of keys, wait until the key that I want gets highlighted, and then tap to start typing.

And so this keyboard has, up at the top, some auto-complete functionality. Again, think about typing this way, actually needing to complete a form field, say on the web, using just a single switch.

The more that this auto-prediction can work to guess and complete the words that need to be typed, the more it can help someone who needs to do input via this type of mechanism. But I would say, obviously, one of the things for me in getting to do this project was looking at what comes out of the box on Windows, on Mac, on iOS, and on Android. And I think for right now, on Windows, a lot of the more advanced functionality for switch control will require either purchasing third party software or obtaining Open Source software that can run on Windows and provides some of the more advanced features that you would need for controlling an interface.

Because built into Windows we just have this keyboard interface control.

I wanted to mention this video. I’m not going to play it in the stream today, but I do have an article that I’ll be showing in the presentation that has links to everything I’m demonstrating today. This is the Apple accessibility video from a few years ago featuring a woman named Sadie, and if you haven’t seen it, I highly recommend watching the entire video. The point of the video is that Sadie is a cinematographer and video editor, and Sadie in this case has the ability to control two switches with her head. She’s using an interface or a configuration on the Mac where, if she moves her head to the left and clicks that switch, she can use one piece of functionality for scanning, and when she moves her head to the other side and clicks the switch on that side of her head, she can use that to actually tap or select the highlighted item.

And so, in the video itself, one of the examples they show is doing complex mouse operations using those switches, where basically on the left we have iMovie or Final Cut, some type of video editing software, and she is able to use those single switches to click and select items, and then glide or drag them down into the timeline view for editing. So, alongside the keyboard emulation I was showing on the Mac, which we would also have on Windows, this is showing mouse emulation, and that’s something that needs to be available to single switch users: the ability to left click, double click, triple click, glide, and drag. All of those features could also potentially be exposed through named commands, basically a list of commands on the screen.

I wanted to comment on the Game Accessibility Guidelines.

So, I’ve been doing, and I think a lot of people have been, paying more attention to game accessibility as of late. And one thing that’s interesting about the Game Accessibility Guidelines that’s a little different from, say, the Web Content Accessibility Guidelines is that there is a good focus or a larger focus on considerations for people that need to have remapped controls or be able to use switches or alternative input devices to control games.

So one of the cool things lately is the Xbox Adaptive Controller. This is actually something that was developed and released this year, in 2018.

But the Xbox Adaptive Controller is a controller that comes with basically the left, right, up, down analogue controls, and then it has these two very large buttons, similar to what could be purchased as a single switch; there are two single switches on this adaptive controller.

And then the controller itself has a bunch of other switches that can be connected to the back. So this is showing that you could basically set up a system, or set up an interface, to map switches to the Y button, the X button, the B button, the A button, all of the different buttons that get used, because different games use different buttons.

This piece of hardware sort of shows this idea of flexibility for like how can we connect all of these different types of switches and these are examples of what other switches might look like to connect to the device to allow people to control them.

So, one of the things in the Game Accessibility Guidelines is this requirement to allow controls to be remapped and reconfigured. That's something I think is really important, and that I don't see commonly implemented in desktop and web software. We obviously do have lots of software that does these things, but there aren't very general ways for a lot of web or desktop functionality to easily have its controls remapped and reconfigured.

So, this is just a quote from one example of someone who would benefit from remapped and reconfigurable controls: "I was born with cerebral palsy. I can't walk and have very limited use of my hands. I love video games of all shapes and sizes and have been playing since I was 3 years old. I just want to thank you Blizzard for having near endless control in Overwatch. I don't know if this was your goal but because of your extensive options I was able to play every character in the roster and it feels great. Because of you, I made my first snipe in a video game today."

Snipe is a video game term for shooting someone and getting them with one hit.

That’s one of the features that Overwatch supports is this rich configuration UI where you don’t have to say that a particular button performs a certain action in the user interface.

And that’s something that, again, when we think about a single switch: if you think about a game that has lots of different buttons, we could think about making the single switch the ordering of the buttons that get cycled through for that switch. It could be a limited set really customized to a game.

And if we have this full remapping and customization, someone that even uses a single switch hopefully will be able to get that snipe.

So, we've got the Game Accessibility Guidelines, and we've got hardware like this.

So now I'm going to go into basically the key part of today, which is to take you through an illustration of some work that I did considering how to make Song Maker, a music composition app from Google Creative Labs, work for single switch users.

And before the video, I'm going to switch to this presentation. This is the link that I'll be sending out to the attendees today; it has links to all of the demonstrations and resources that are in the presentation today.

>> MUSIC LAB VOICE: Music is for everyone. That’s how we — why we started Chrome Music Lab.

>> THOMAS LOGAN: The basic premise of Song Maker, as it's explained there, is that music making should be for everyone. And in this example, it's showing a touchscreen where we can tap and input notes into a musical grid and then hit a big play button that starts playing the music on the screen.

And they have adapted this and made it work on a mobile device screen, on a tablet device screen, and on a desktop screen.

And so, the idea here, the whole purpose of Song Maker, was to create a very minimal interface for music composition; they really wanted it to be easy for everyone to use and to enjoy, and it was specifically for students. And they wanted to consider how we could make this particular application work well for someone who maybe can only control an interface with a single switch.

So, in the video they are showing what I would say is the standard way of controlling this interface: clicking inside of the interface and… [Music].

>> THOMAS LOGAN: Here to build up a rhythm. So, we have a way with a mouse or with a touch finger to control the interface.

So, the first thing to consider or the first thing that we were considering for figuring out how to make this available to a single switch user was, “What would be an alternative way to input the notes and control this interface if we couldn’t use the mouse and we could not use touch on the screen?”

So, this is one of the core concepts that I thought was a really cool design on the site: at the very top of the interface, there's this control which is basically a show or hide more controls button.

This, as far as I know, isn't a design pattern that's been standardized anywhere, but I liked the concept: it's fairly hidden from the start, almost like a skip navigation link on a Web site. But when you expand the control, we now have the ability, via a keyboard or another input, to tab to those controls and input notes. So now we don't have to use a mouse or touch to control the interface, and we don't have to expose a rich set of controls by default; the grid itself is actually keyboard accessible here too.
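The essence of that pattern, one small, always-reachable toggle that reveals the rest of the controls only on demand, can be sketched as state plus an aria-expanded mirror. This is a hedged sketch; the function and property names are mine, not Song Maker's, and in a real page this state would drive a button element and a hidden toolbar:

```javascript
// Minimal model of a "show more controls" disclosure button.
// In a real page this state would drive a <button aria-expanded="...">
// plus a toolbar that is hidden while collapsed; here we model just
// the state logic.
function makeDisclosure() {
  let expanded = false;
  return {
    toggle() {
      expanded = !expanded;
      return expanded;
    },
    // The value you would set on the button's aria-expanded attribute.
    ariaExpanded() {
      return String(expanded);
    },
  };
}

const moreControls = makeDisclosure();
moreControls.ariaExpanded(); // 'false': extra controls stay out of the scan order
moreControls.toggle();
moreControls.ariaExpanded(); // 'true': controls are revealed and tabbable
```

The design payoff for switch users is that the collapsed state keeps the extra controls out of the tab and scan order entirely until they're asked for.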

So, we could also input notes in the grid directly, but for someone using a single switch, as we were demonstrating, that basically uses the scanning feature, where we have to scan through every position where a note could be input.

So, if you were trying to compose a song and you needed to write out the melody line, having to scan through every interactive item in this grid would be more difficult than being able to actually find positions in the interface via these controls and then input notes there.

So that was part of the design thinking is, “Well, if we have these as buttons, we would have a way for someone then with a single switch without any configuration to be able to toggle and launch those controls and then input them using just a single switch.”

So, the other part of considering this was I thought — and again, this is something that probably goes into the design of a lot of games, but, also goes into a lot of design for web interfaces is how to simplify the interface.

So I would want someone who is trying out this experience from the beginning to have a good experience, to be able to get started, and then to build their skills or advance over time. One of the things in the documentation for teachers, considering how single switch users could interact with this control, is configuration settings: if we can configure this interface to have fewer bars, use a pentatonic scale (a pentatonic scale has notes that all sound good together but requires fewer notes), and maybe just one octave, we basically get a simpler interface that still allows someone to control it and have fun composing music, but with a smaller grid and a more focused area to get started with.

I think an analogue to this on the web could be a responsive design Web site where, at the desktop width, we might have lots of different controls and lots of navigation elements, but when we get down to the mobile view it might be a simpler interface to navigate. The assumption is that the simpler the interface, the fewer elements there are to navigate through, the more efficient someone using a single switch will be.

So, the other part that — I just want to show this video. But this is what I think is a very super genius feature inside of iOS that I have not seen in another technology implementation. But I share it as just something that, again, to spark the conversation and spark the consideration for what can be done on other software interfaces.

This particular tool called Guided Access allows you to select elements that you don’t want to receive focus or to be clickable.

And so this is just a short video, I’ll play it again. It was going pretty fast. I’ll slow this.

When you turn on Guided Access inside of iOS, it lets you actually draw around certain parts of the user interface and say, "I don't want keyboard focus or input focus to go to the elements that I'm drawing circles or squares around."

So, on the screen I'm drawing around parts of this user interface that I don't want to receive focus. At the end of that exercise, I have just the show/hide controls button and the play button available in the UI.

So, thinking about how I could make this the best experience, right now this seemed like the easiest approach to teach educators: if they had iPads in the classroom, this is actually a way we can say that this particular software interface, used in conjunction with Guided Access, can basically reduce all of the different tab stops on the screen down to just the control that shows or hides other controls, and the control that actually lets the user of Song Maker start or stop the music.

So that's, I think, ultimately essential if we really want someone who uses a single switch to be as efficient and successful as possible in the end scenario of the technology we're designing. In the default setup, if I had not turned on Guided Access, the switch control is going to be stopping at the tab stops inside of the Chrome UI: going into the back button, the refresh button, the address bar. It's going to go through other features in the UI, and every single one of those tab stops is going to take time. If you remember, at the beginning of the presentation I was showing that we could configure the auto scanning time to set how long the highlight for the scan should stay on a single element; I had set it to 500 milliseconds, but we changed it to one second.

If we think about 1 second without using this Guided Access tool, then it’s going to be one second waiting on this control, maybe one second waiting on this group of controls, one second waiting here. And again, as those seconds or that time adds up, and when we think about someone enjoying or having a successful experience, I think the amount of time it takes to have an experience with it is a critical number, a critical thing to consider for someone’s enjoyment and/or benefit of a piece of technology.

So again, this is all documented and shown on the screen. But I think one of the things that’s interesting about this exercise is it does allow you to have potentially a thought process about the experience you’re designing: what are the most important features or functions?

And so, one of the things that's still sort of an open issue to me in this design is the placement of the close button once we have opened up this list of controls. Again, I'm just going to simulate the auto scanning tool by hitting the tab key right now.

But the order of the actions that we put into this particular control matters: ideally we put them in order from the most frequently used action to the least frequently used action, because each time the scanning goes through the control, it's taking time; there's a period of needing to wait. So, if it's more common to enter a note, then we should move the note action to the beginning of this list of controls to make it a more efficient experience. Similarly, once we open this grid and we're entering notes, it's probably more common that we're entering a lot of notes and navigating within the grid than that we're opening and closing the display.

So, this particular control itself could be moved to the end of that list and that would then make this potentially a more efficient interface and take less time to complete.

So what I wanted to show here is a breakdown using 0.5 seconds, the same 500 milliseconds per stop that I started with. With Guided Access turned on, there are 7 interactive stops on the page, so it takes the user 3.5 seconds to go once fully through the interface with the seven controls on display.
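The arithmetic here generalizes: with auto-scanning, every extra stop costs a full dwell interval, and the ordering of controls determines the expected wait before the one you need. A quick sketch of both calculations (the function names and the example frequencies are mine, not part of Song Maker or iOS):

```javascript
// Total time for one full auto-scan pass: stops × dwell time.
function fullScanTime(stops, dwellSeconds) {
  return stops * dwellSeconds;
}

// Expected wait to reach an action, weighting each scan position's
// delay by how often that action is used (frequencies sum to 1).
function expectedScanTime(actions, dwellSeconds) {
  return actions.reduce(
    (sum, a, i) => sum + a.frequency * (i + 1) * dwellSeconds, 0);
}

fullScanTime(7, 0.5); // 3.5 seconds, as in the Guided Access setup

// Putting the most-used action first pays off:
const noteFirst = [
  { name: 'enter note', frequency: 0.8 },
  { name: 'play', frequency: 0.15 },
  { name: 'close panel', frequency: 0.05 },
];
expectedScanTime(noteFirst, 1); // ≈ 1.25 s on average; reversed order would be ≈ 2.75 s
```

Treating scan order as this kind of math problem is exactly the "most frequent first" argument: the same three controls at a one-second dwell cost more than twice as much per action if the rare ones come first.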

So I'm going to let this video just play; it's about a minute 48, but this will let you see the experience of inputting a melody and a rhythm with this particular setup enabled.


>> THOMAS LOGAN: So that right there is about 1:45, and I think that was just a cool experience to see: with this design and this combination of features, we can enable a student using a single switch to still be involved in the creative process. And as someone became more comfortable using this interface, we could expand the number of notes, expand what's available, and change which settings are available and which are not.

But it showed kind of this ideation process of how you work through these considerations. So, the next part that I think is interesting, and again this is just to spark discussion and thought processes, is this idea of creating a control for this interface. On the Mac, Apple has an accessibility keyboard panel editor where, for specific applications, we can choose only a certain set of buttons that we want shown. And I apologize, this is pretty small.

But this is just the left arrow, up arrow, down arrow, and right arrow buttons, and then it has the ability to make that be the keyboard.

So if I added, say, a button for the Enter key, I could basically build a set of keys; I could build this control for a specific interface and then have that be an interface that's available for that application or for that site.

Now, there are still more things to do, potentially, if we're considering the web and having this be a keyboard that can appear on the web. But it's cool to actually see that this functionality is built in, at least on the Mac OS: if you needed more custom controls, or a set of actions or functions, you can design and develop this group of keys. And again, the idea of having only five keys is that we could use a single switch to just move through those five keys on the screen and then use just those five keys when we're inside of that application.

So that’s one cool feature.

Another cool feature; I don't get to show this application enough, but I like to show this demonstration of desktop software. It doesn't have complete accessibility support, but it does have this remapping feature that I think could be very cool for a lot of software interfaces.

This is also a music composition application called Ableton Live. And one of the ideas of Ableton Live is to make it so any hardware interface can easily connect with it to control this software interface.

And one thing that they have built into it is they also make it very easy to control it from the keyboard.

So, if I hit the keyboard shortcut, the command key, all of the actions in the UI basically get a way for me to click on them and map a keyboard key to them. So, all of the interactive elements get this highlighting and I can say, "Oh, the play button, I want that to be controlled by pressing the number 1.

“The stop button, I want that to be controlled by pressing the number 2.

“I want the sound itself to be muted by pressing the number 3.”

So, I can choose any of these elements that are highlighted in yellow, I can click on them and then set a keyboard shortcut or modify the keyboard shortcut that currently controls it.
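The remapping idea generalizes: keep the user's mapping as plain data from keys to named actions, so it can be edited, saved, and reloaded, rather than hard-coding which key does what. A minimal sketch, with action names that are mine (hypothetical), not Ableton's API:

```javascript
// Named actions the interface exposes.
const actions = {
  play: () => 'playing',
  stop: () => 'stopped',
  mute: () => 'muted',
};

// The user's mapping is plain data, so it can be edited and persisted.
let keymap = { '1': 'play', '2': 'stop', '3': 'mute' };

// Dispatch a key press through the current mapping.
function handleKey(key) {
  const action = actions[keymap[key]];
  return action ? action() : null;
}

handleKey('1'); // 'playing'

// Remap on the fly, the way a key-map mode does:
keymap = { ' ': 'play' };
handleKey(' '); // 'playing'
handleKey('1'); // null: the old binding is gone
```

Because the mapping is data rather than code, the same dispatch function works for a single-switch setup where only one or two keys ever fire.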

So, when I do that…


…I can now use those keys on my keyboard to control the interface.

So again, building on top of that: if sets of single switch controls became more commonly used for certain types of experiences on the web or in desktop software, and we had this ability to remap the keys, then for a particular application, whatever it may be, we could change or modify it and say, "Well, this key should do this in that application."

I think that's just a really cool feature. And so, I wanted to show that as a design that could be applied more to other software interfaces: when we have programmatic access for tools like screen readers, we are working to make sure all interactive elements are programmatically accessible, and this sort of design feature shows a way that we can overlay other functionality on top of that to assign actions or keystrokes that are specific to, and configurable by, that user.

I made heavy use of this document, "Switch Access to Technology: A Comprehensive Guide," so this is a resource that I also wanted to advocate for. David Colven and Simon Judge did an excellent job; it's a very comprehensive guide. It starts by taking you through the different user interfaces for switching and letting you understand the different ways of controlling scans, highlighter movement controls, timing and input filtering: lots and lots of details. So, if you're interested in this area, I highly recommend taking a look at this document, and it pairs well, if you have an iOS device, with going through the Switch Control settings on iOS; reading it gives you a good understanding of why most of the settings that exist there exist, why they are there.

And then I guess I just wanted to — I see we have some questions. I wanted to make sure there was time here at the end to have a discussion and happy to share.

>> MICHAEL BECK: Oh, yeah, there’s plenty of time.

So, we’ll get to PJ’s first question. Well, first, thank you for the presentation. That was very interesting. A lot of it reminded me of — I don’t know if you’re familiar with the guitar player Jason Becker.


>> MICHAEL BECK: He was one of the big shredders of the mid '80s and he was unfortunately diagnosed with ALS shortly after he got a gig with David Lee Roth. While ALS is usually tantamount to a death sentence, he still composes music today with his father, and his story is amazing. There's a great documentary called Not Dead Yet which I highly recommend. A lot of this reminded me of him and how he has pushed the use of software in order to compose with a single switch; he can only use his eyes, he only has eye movement. And it just reminded me of that.

But our first question is from PJ. She asked, “With Guided Access how does someone leave the interface when most of the controls are hidden?”

>> THOMAS LOGAN: Yeah, so I think that’s one — that’s a great question, thank you, PJ.

>> MICHAEL BECK: I thought about that, as well.

>> THOMAS LOGAN: Yeah, and actually, I should definitely demo this. Because if you go and, you know, you get like, oh, this was a cool presentation, I’m going to try this out, this definitely had a similar sort of — there’s a little bit of a pairing of those two features that’s a concern. Sort of like sometimes when people get freaked out the first time they turn on VoiceOver and they are like, “How do I turn it off?”

It is sort of for me the thing with Guided Access — let me get to the Guided Access tool.

The core reason Guided Access was implemented was, I think, to give people a way to control device use, for example for their children: "I don't want these certain apps to be used."

The passcode is sort of one of the parts I was struggling with; that's usually the way Guided Access gets turned on and off, although it does have the Touch ID option, which would work for someone who has the ability to use a finger. Guided Access is set up so that when you turn it on, right here in Song Maker, you basically get a UI telling you that Guided Access started. When you try to turn it off, I'm using the triple-click shortcut; that's probably the normal way to turn it on and off, to add it to your accessibility shortcuts menu.

This accessibility shortcuts menu is accessible with the switch control as well.

But the combination is very difficult: if you have switch control on and you go to Guided Access, you have to enter your passcode to turn it off if you haven't set up Touch ID. I'm actually glad that you brought up this question, because this was something that I did want to pose to Apple: I was not getting switch access to that particular prompt, to actually use the switch to type in the passcode. Since I don't rely on the switch, I can type in my passcode, and that's actually how you turn this functionality on and off.

So, when I go into this feature, I could then say maybe I don’t want — I also don’t want this button to be there. But I do want these buttons to be there.

So, these are the options that you have there. There are some other options for turning the feature on and off, but the traditional way I do it is via the accessibility shortcut, and then I've been using the passcode to turn it on and off.

>> MICHAEL BECK: Okay. I hope that answered your question, PJ.

Our next one is from Sarah. Is the iOS switch mode using its own focus indication? Or more to the point, how important is visible display of focus on a web page/app for single switch users?

>> THOMAS LOGAN: So, on iOS at least, it is using its own focus. On the switch control, you actually can set the cursor color; there are five options: blue, red, green, yellow, and orange. So it's not using or displaying the keyboard focus indicator; it's using its own.

And I also had the large cursor turned on.

That being said, though, that’s something that, at least in my experience, this was one of the other challenges I had. I was super motivated to work on this project. I was like, “Oh this is exciting.” I want to figure out how to do this and have it work well for Android out of the box, iOS out of the box, Mac OS out of the box and Windows OS. That’s what I try to do for screen reader and some of the other features. But it was difficult that they are so different between the platforms of what is sort of built in for free.

I’m just answering Sarah’s question for iOS. I’m not confident that that would be the same answer on one of the other platforms. But on iOS it draws its own.

>> MICHAEL BECK: Okay. And one final one, have you seen Chris Hills’ book about switch control on iOS?

>> THOMAS LOGAN: Is Chris — I’ll have to ask if Chris was the one switch guy? I did also link — did Chris write —

>> MICHAEL BECK: He said it’s in the iTunes section.

>> THOMAS LOGAN: Okay, yes. So I haven't seen the book, but I do have that mentioned in the writeup I did of this: he has a switch control overview on YouTube that goes into really good depth on the different features and the different modes. And it sounds like we're being made aware that there's a book, too; I didn't know that. I'll definitely check that out. So thank you for letting me know.

>> MICHAEL BECK: And one more. Can individual user experiences be recorded to later be evaluated for possible best practices with an application? Or if that’s not possible, are preferences too individualized to use that data to enhance other users’ experience?

>> THOMAS LOGAN: Yeah, well, I think that comment of, can we record and sort of understand those features? I mean, I think that's the right type of design thinking. That's one of the things that interested me. I'm showing this music interface, and it was sort of fortunate that, by design, the goal was to keep it a simple interface.

But I do think there's something to be said for having an analytical understanding of the most-used features and functions for any type of experience. As far as I know, there are ways to use things like Google Analytics or other third-party software to gather that data; I don't know of a way built into the platform to gather it. But I think that's definitely additional work that should continue and/or be part of a process: if we really want future interfaces to be automatically better for single switch users, we have to have that type of consideration. It's almost like the software should be able to configure itself into an ordering of elements that are most used, based on tasks. As I said, when you have to think about the amount of time it takes to scan through each item, I do really like looking at that as a math problem, because it is mathematical: the time is literally how long the scan has to spend on each element. So I think that's definitely stuff that should be happening, and it may already be happening and I'm just not aware of it.

>> MICHAEL BECK: Okay. For other people who were listening, about the Chris Hills book: someone posted a link in the chat. It's called "Hands Free: Mastering Switch Control on iOS" by Christopher Hills and Luis Perez. I'll go ahead and toss a link out with that in the YouTube video.

>> THOMAS LOGAN: All right.

>> MICHAEL BECK: So, unless there’s any more questions? We would like to thank Thomas for his time and that was incredibly interesting. Thank you so much.

>> THOMAS LOGAN: Yeah, thank you.

>> MICHAEL BECK: And our next technica11y will be on January 2nd with Jared Smith of WebAIM. He'll be discussing the interplay between page content (including ARIA), browser parsing and rendering, accessibility APIs, and assistive technology, which is something incredibly important for developers to have a sense of whenever implementing and testing accessibility. Again, that's on January 2nd, 2019; it will be our first one of the New Year. So, thanks again so much to Thomas for his time and presentation, and thank you very much for joining us today.

>> THOMAS LOGAN: Thank you. Thank you very much.

Thomas Logan

About Thomas Logan

Thomas Logan got started in accessibility in 2002 at the University of North Carolina, when he worked with a graduate student who was blind and needed access to map information for research.

After completing a degree in computer science, he helped large companies and government agencies meet their accessibility goals for over a decade. Then he decided to start Equal Entry to improve public education about accessibility, and close the gap between what was being taught and what needed to be taught.

Thomas is from Raleigh, NC.

Making custom selects accessible.


[Intro music]

>> MICHAEL BECK: Welcome to technica11y. The webinar series focused on the technical challenges of making the web accessible. Our presenter this month is Gerard Cohen, Lead Accessibility Strategist for Wells Fargo Digital Solutions for Business.

>> MICHAEL BECK: Welcome everybody to this edition of technica11y. I'm Michael Beck, Operations Manager at Tenon, stepping in for our normal host, Karl Groves. Before we start with Gerard's presentation on making custom selects accessible, I would like to mention that our next webinar will be on Wednesday, December 5th, with Thomas Logan from Equal Entry; he'll be discussing his work on making the Google Song Maker app more accessible. Also, if you missed the last webinar with Nic Steenhout of A11y Rules, you can check it out on our Web site, technica11y.org.

One final thing before we get started: the Digital Accessibility Legal Summit will be held on December 5th and 6th in Washington, DC, and March 11th and 12th, 2019, in Anaheim. The basic topic will be accessibility lawsuits: how to handle them and how to avoid them.

One of the things that stands out for me in regards to this summit is that all of the presenters, who are on both sides of the aisle with lawsuits, so to speak (they would be defendants and plaintiffs), have been asked to provide something actually tangible that participants will walk away with, and not just give some fluff presentation.

It’s really geared towards the C-level executives and legal counsel for companies. I’m not sure if we have any of those in attendance today. But I’m sure everyone knows a person in those positions and can pass the information onto them.

It’s an important event not only for the growth of our industry but also helping to make high-level types who don’t normally think about accessibility, make them aware of the legal risks and obligations that they have towards their users and customers.

The Web site is accessibility.legal.

And so that brings us to today's presenter, Gerard Cohen. As noted in the intro, Gerard is the Lead Accessibility Strategist for Wells Fargo Digital Solutions for Business. And today he'll be telling us how he managed to break one of the myths of accessibility and created a custom select widget that was accessible. Over to you, Gerard.

>> GERARD COHEN: All right thank you very much, Michael. Let me go ahead and start sharing my screen here.

Just let me know if you can see that, Mike.


>> GERARD COHEN: Fantastic. Good morning, everyone, and happy Wednesday. I know you could have chosen to be anywhere else this Wednesday morning, so I definitely want to thank you for joining me today.

Let me go ahead and introduce myself for those of you who don’t know me. I’m Gerard Cohen and I am the Lead Accessibility Strategist for Wells Fargo Digital Solutions for Business.

The slides are posted at this Bitly URL that’s bit.ly/2OxNPro.

And you can contact or follow me on Twitter.

Today we're going to be talking about custom selects. Custom selects are one of the cardinal sins in accessibility; they are right up there with carousels and assistive technology detection. I don't think there's a much quicker way to stir up drama in the accessibility community than custom selects. Of course, I'm exaggerating to make a point.

But just to talk about this a little bit more, this is a very recent tweet from Sara Soueidan. It says: custom select drop-downs are pretty and all, but if they are not properly accessible then you should probably not be showing them off. Also, nothing beats the usability of a native select UI on mobile, no matter how pretty.

So this is just the most recent time this conversation happened; it happens almost monthly. So why is it such a big deal? Well, I'm going to borrow from a recent presentation from Karl, actually, to help explain this. Now, I have sped this video up for brevity; it's not really necessary to hear exactly what he's saying, but I want you to get the gist of just how much there is to this problem. So let's go ahead and watch this for a brief moment.


>> KARL GROVES: That’s what that does. Anybody here ever make a custom select element? Did you do all of that shit? Did you do all of that shit? Did you do all of that shit? (Chuckles).

>> GERARD COHEN: So the point being made there by Chipmunk Karl is that it takes a lot to provide a usable select widget. The problem is nobody ever does all of that stuff; it’s really, really hard to do. So, considering it’s such a big no-no, why did I do this? Well, I had some problems that I needed to solve. The biggest thing I needed to solve was that on iOS, in the spinner that comes up for selects, any long values would get truncated, and we have to be able to support user-generated content whose length we can’t really control, and we also have to display really long account numbers. Sometimes the only difference between account numbers is the last couple of digits; if those get cut off, there’s no way to know what is what. In general, the experience on mobile devices isn’t that great.

Some additional problems: supporting multi-select. The native multi-select takes up way too much space, it’s hard to operate with keyboards, and it doesn’t even announce as being able to have multiple selections. Lastly, we needed to provide better formatting for grouped options, and of course being able to provide a styled select was nice, too.

Speaking of styling selects: listen, if the only reason you need a custom select is for styling purposes, then just take everyone else’s advice and stick to the native one; it’s actually pretty easy now to style given just a little bit of CSS. Now, I can’t remember where I heard this, but for the longest time I had this long-held recollection that the reason we can’t style selects today is that it’s the host OS that’s providing the select. I remember reading that some time ago. I can’t find proof for it, but it’s something I learned very early on, and I could be wrong, but it makes sense if you consider that selects are the only components that are allowed to spill outside of the browser window.

So let’s talk about the official ARIA Authoring Practices for listbox, because this is usually the first place people will go. First of all, it just doesn’t work everywhere. And there are a few notes in the documentation that provide an explanation for this.

I’ll read a few of them. The first one: “Because the purpose of this guide is to illustrate appropriate use of ARIA 1.1 as defined in the ARIA specification, these design patterns, reference examples, and sample code intentionally do not describe and implement coding techniques for working around problems caused by gaps in support for ARIA 1.1 in browsers and assistive technologies.”

“Except in cases where the ARIA Working Group and other contributors have overlooked an error, examples in this guide that do not function well in a particular browser or with a specific assistive technology are demonstrating browser or assistive technology bugs. Browser and assistive technology developers can thus utilize code in this guide to help assess the quality of their support for ARIA 1.1.” What they are saying is, first of all, they don’t provide any guidance on how to work around issues you may discover in particular browsers or assistive technologies, because they are really a guide on ARIA 1.1 and they have to stay pretty close to the roots there.

And these guides also describe the state at which they would like things to be supported. So it’s kind of like a best wish.

It’s basically at a certain point a testing tool for browsers and AT vendors to help beef up support if something is not working. There’s actually one more note from the documentation I want to pull out.

“Currently this guide does not indicate which examples are compatible with mobile browsers or touch interfaces. While some of the examples include specific features that enhance mobile and touch support, some ARIA features are not supported in any mobile browser. In addition, there is not yet a standardized approach for providing touch interactions that work across mobile browsers.”

So here they are basically acknowledging that the patterns don’t support mobile browsers at this time. So that’s a problem for the issues that I was trying to solve.

So another big issue for me is that the ARIA pattern itself does not actually function as a form element. There’s no data being submitted in a traditional form sense. You need to figure out your own way to get the selection value and pass it along with your form. Along with that, there’s no guidance on how you make it required, how you perform validation, or how you mark something as invalid. Most of those ARIA states aren’t allowed on buttons.

Lastly, they don’t have any guidance on optgroups: how do you group related items? So these are some of the gaps in functionality that are just not addressed there.

So before I pull the curtains back, I have to admit that some of this is a little unconventional. Definitely the first rule of ARIA, to not use ARIA if you can use native elements, has been thrown out, and I’m heading into a bit of uncharted territory, but I can assure you that the end user experience is well worth it. Still, if you don’t want to see the monstrosity, now is your chance to opt out: you have the option of taking a red pill or a blue pill.

So I guess everyone is going to take the red pill, so, we’ll just go down the rabbit hole. So let’s talk about the major players here.

I’m going to be demonstrating code using plain old web technologies. So that’s just HTML, CSS and vanilla JavaScript. I wanted to be able to demonstrate everything without forcing you guys to be human JS compilers, but, honestly the real star of this is the HTML anyways.

I’ll be starting with a native select because I like progressive enhancement; I think it’s still a worthy cause. I moved to the suburbs where Internet connectivity is spotty; my browsing experience on my iPhone is pretty crappy. It’s also bad when taking the train into the city. I don’t want to sound like the get-off-my-lawn guy, but I don’t have time for a brochureware-type site to download and render entirely in JavaScript. It’s just not for me anymore. Some elements that are going to be used for this are a simple span for the trigger and an unordered list for the options and, of course, a lot of ARIA.

So the inspiration for this implementation of custom select came from a couple of places. First was actually the ARIA pattern we talked about earlier. From there, additional interactions were borrowed from native select instances in Windows and Mac browsers, mainly IE, Firefox, and Safari. The goal was to try to build something as close to a native select as possible. In most cases there’s a pretty close mapping, but in some cases the experience is augmented to add additional features that were missing. So I started where we are right now and tried to make it better where possible.

Let’s talk about the keyboard interactions I’ll be implementing. First of all, on the select trigger: space, up, or down will display the list.

Focus will move to the unordered list. An interesting thing I discovered was a difference between Mac and Windows in this case. Native selects on Windows in Internet Explorer will actually let you navigate the items without having to expand the list, just by using the arrow keys. In my implementation, I prefer to display the list so everything can be seen by sighted users and you can navigate without having to make a selection.

On the actual options list, up and down are used to navigate the items, but they don’t actually cycle. Navigating does not make a selection; we’ll talk about selection in a second.

Home will move to the first item in the list. End moves to the last item in the list. There’s a type ahead functionality where basically typing a character will move focus to the item that starts with that character. Space will select the item, close the list, and then focus on the trigger.

And escape will close the list and focus on a trigger without making a selection.
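
The navigation rules above can be sketched as a small pure helper. This is my own reconstruction, not Gerard’s actual code; the function name and shape are illustrative, and the real widget would call something like this from a keydown handler and then move focus to the matching option.

```javascript
// A minimal sketch of the option-list navigation rules described above.
// Returns the index of the option that should receive focus next.
function nextIndex(key, current, count) {
  switch (key) {
    case "ArrowDown":
      // Navigate down, but do not cycle past the last item.
      return Math.min(current + 1, count - 1);
    case "ArrowUp":
      // Navigate up, but do not cycle past the first item.
      return Math.max(current - 1, 0);
    case "Home":
      // Home moves to the first item in the list.
      return 0;
    case "End":
      // End moves to the last item in the list.
      return count - 1;
    default:
      // Any other key leaves focus where it is.
      return current;
  }
}
```

Keeping this logic pure makes the non-cycling behavior easy to verify separately from focus management.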

So, speaking of selection, the ARIA pattern has a notice about whether or not selection should follow focus. That is, should the value of the selection be updated as you’re navigating along the options, which is how native selects on Windows work?

Since this happens on Windows, you could stick with that, but I chose to make the selection a specific interaction, requiring a space key press for selection.

Another issue, again on Windows: when you press escape, it closes the drop-down and the last item you had focus on becomes your selection. Again, the selection was following focus, and this just doesn’t feel right; there’s no way to back out of a selection.

But if you feel that it’s important for you to maintain this, then please go for it.

Lastly, I want to point out that in Safari on a Mac, when the list is open, tab is essentially trapped, and on Windows tab will close the list and return focus back to the drop-down trigger. I liked that behavior, so I kept it in my implementation, as well.

So, of course, we can’t leave out mouse and touch interactions; these are actually pretty simple. Basically, tap or click to open and tap or click to select. And for assistive technologies using touch, swiping allows you to navigate the items.

So let’s talk about some of the things I discovered before I actually show you the demo. There’s nothing really super fancy about the JavaScript, to be honest. It’s really boring: just a bunch of event handlers setting properties and handling focus. Again, like I said, the star is actually the markup, because that’s what the browser is interacting with, what the user is interacting with, and what assistive technologies are interacting with, so JavaScript doesn’t really play a big part in the entire process. The code is actually pretty procedural and boring, to be honest, but I wanted to point out some things that I discovered on this journey.

So, the role of combobox versus button. The ARIA Authoring Practices show examples of using a button for the select trigger, and I can assume they did this because the combobox role, by definition, is a composite widget consisting of a text input field and a list. Despite this, I ended up choosing the role of combobox for a couple of reasons.

The first one is that Windows announces native selects as comboboxes and VoiceOver just announces it as a popup button, so there’s a match there with at least the role that’s being announced.

Another big thing for me was that combobox is actually recognized as a form element. So if an assistive technology user is trying to discover elements by type, say by bringing up a list of elements on the page, a button element will show up but won’t show up as a form element, and it may not be obvious to look there for that particular role.

Lastly, because of the combobox role, unfortunately one thing I found was that certain automated tests will complain because there’s no text box child, so just be aware that certain automated tests will actually call that out on you.

Another big issue: again, I’m trying to make sure this stays a form element, and aria-required is a property that is not allowed on buttons, so a button makes it less of a form element. You could get around this by just adding the word “required” to the label, but again, for me it was important that the role was listed as a form element. So this is why I chose to go with combobox even though it’s not a pure combobox, since it’s missing that text input field component.

So one thing that I was kind of surprised about was that the aria-selected state wasn’t announced everywhere. While navigating lists, there’s no notification of whether an item was selected or not. Of the three screen readers I tested, VoiceOver was the best, announcing the number of items selected; it will actually announce the selected items upon opening the list.

JAWS 2018 with IE11 announces the selected item upon first announcing the list, but it’s kind of a strange announcement: if you had selected dog, it would just announce “list, dog”; it wouldn’t say “dog, selected” or anything like that.

And on Windows 7 with NVDA and Firefox, it doesn’t announce anything at all, so it’s possible you’re expected to remember that selection from when you opened the trigger, but that doesn’t seem right to me. So what I did here was add a string of comma, space, “selected” to the aria-label for the option.
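
The workaround just described is tiny, and can be sketched like this. The helper name is my own; in the widget itself, the result would be written to the option’s aria-label on every selection change.

```javascript
// Sketch of the fix described above: since NVDA + Firefox on Windows 7
// did not announce aria-selected, the string ", selected" is appended
// to the option's accessible name instead.
function optionLabel(text, isSelected) {
  // The comma buys a short pause before "selected" in most screen readers.
  return isSelected ? text + ", selected" : text;
}

// In the real widget this would run on each change, something like:
// option.setAttribute("aria-label", optionLabel(option.textContent, true));
```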

Another thing was that Firefox, for whatever reason, wasn’t announcing the content of the combobox. And that was kind of a big deal, because it just announced the role and the expanded state but not a value. This happened in Firefox with both NVDA and JAWS, but not in IE, so I felt pretty confident it was just an issue with Firefox. Except when I got to testing on Windows 10, it was still an issue. So I’m not really sure what’s going on there.

In any case, again, I got around this by using my favorite fix: I forced the content using an aria-label on the combobox, so that the field name and the selected item would be announced. Unfortunately, this had a negative impact in that VoiceOver now announced that content twice, which is kind of unfortunate, but I have a thing where two announcements are better than no announcement, so I had to make sure that case was covered.

One thing about setting ARIA labels, especially on these interactive controls: obviously, you want to make sure that the aria-label matches the visual label, or at least that what’s visible is at the start of the aria-label. This is to help support users using voice technologies.

So aria-describedby gave me a few problems for the groups I’ll show you in a little bit. It appears as though aria-describedby wasn’t working at all on Edge; I tried with both NVDA and JAWS and got nothing. It worked fine with Firefox and IE11 on Windows 7 and 10. I’ll probably log a defect for that unless someone on the phone already knows something about it.

Aria-describedby on iOS with Safari did announce the describedby info, but there was a brief pause, which made it weird if you’re not used to waiting for it; it did announce, though. It wasn’t really long, like the seven seconds it used to be on Mac OS; it was probably a second and a half. A noticeable delay, but it does announce. JAWS and IE announce the aria-describedby content, but the instructions on how to use the JAWS shortcut to announce the extra content were announced first every single time, and then the extra content would announce automatically.

So I had an instance where I had tutor messages on and turned that off, but then in another instance where I had turned them off they were still announcing, so I’m not sure if something was going on with my particular instance of JAWS.

Some things for mobile. It’s super important that you render the list right after the trigger; otherwise swipes won’t enter into the list properly. This is really important for VoiceOver on Mac and iOS as well, so it’s just generally good guidance. Another tip is to listen for focus events on the document. This way you’ll be able to hide the list if a user swipes out of the list or somehow focuses on another item on the page; you can close the list so it won’t stay open.

One difference with VoiceOver on iOS was that even though I was programmatically setting focus to the actual list once it was opened, VoiceOver focus stayed on the combobox. So that was just a different interaction there that I wasn’t expecting.

And because of the extra ARIA labels that I needed to add to smooth out all of the other issues, unfortunately again, the combobox and the selected states would announce twice.

So form controls: I wanted to maintain that again with our custom widget. And the way I did this is I actually kept the native select under the hood.

And as selections are made with the custom widget, they are synced to the native select that was hidden.

And this way values are still submitted, or can be serialized, like normal form elements. It also allows you to do some pretty simple validation: you would basically validate the native select as you normally would, then sync the validity of that to the custom select and present an error. I didn’t actually work that out in my code example, but it’s easy enough to do anyway.
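
The syncing idea can be sketched with plain objects standing in for real `<option>` elements, so the logic is visible on its own. In the browser you would pass the hidden select’s `options` collection instead; the names here are illustrative, not from Gerard’s actual code.

```javascript
// Sketch of syncing the custom widget's selections down to the hidden
// native select, as described above. nativeOptions mimics select.options;
// each entry's .selected flag mirrors a real <option>'s selected property,
// so the hidden select submits these values with the form as usual.
function syncToNative(nativeOptions, chosenValues) {
  const chosen = new Set(chosenValues);
  for (const opt of nativeOptions) {
    // Select exactly the options the user picked in the custom widget.
    opt.selected = chosen.has(opt.value);
  }
  return nativeOptions;
}
```

Running this on every selection change keeps the hidden select authoritative, so native form submission and validation keep working untouched.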

A couple of comments on the actual design, on the user interaction of it. One recommendation is to try to prevent overscroll: when the list is open, you want to set overflow: hidden on the body. It’s really annoying on devices, specifically when you’re scrolling through a list that has a lot of options and you unknowingly get to the bottom and then scroll past it, because the page itself starts to scroll. That also happens if you’re scrolling with a mouse on a desktop or laptop, as well.

So setting overflow hidden on the body when the list is open will help prevent that and it’s just a nice thing for your users there.

Another thing is list positioning, and this was something that I didn’t implement in the code that I’m about to show you. But OSs do this really nice thing where the selected item will actually appear in the same place as the label of the select. So imagine you have a really long list, for example a list of states, which would be at least fifty. If you had an option down at the bottom selected and you opened that list, and it didn’t change the positioning for you, you probably wouldn’t realize that the option at the bottom of the list, hidden from view until you scrolled to it, was selected. So this is just a nice thing that browsers will do for you, and I highly recommend that you do it, as well.

Another thing was that I added a check mark to the selected option versus just changing a background color. A background change alone felt dangerously close to relying on color to designate a selected item, so I added a check mark to designate it.

Okay. So now we’re going to get into some demos. And these are some videos that I have prerecorded demonstrating on different browsers and screen readers.

First up I want to show NVDA and Firefox on Windows 10.

>> COMPUTERIZED VOICE: Custom select Mozilla Firefox. Custom. Cities. Choose a favorite animal.

Favorite animal combobox collapsed required clickable.

Space Expanded. Favorite animal options list. Dog not selected 1 of 6. Fish not selected 2 of 6. Horse not selected 3 of 6. Bird not selected 4 of 6. Ferret not selected 5 of 6. Cat not selected 6 of 6. F. Fish not selected 2 of 6. F. Ferret not selected 5 of 6. Space. Ferret favorite animal combobox collapsed required.

Space Expanded. Favorite animal options list. Dog not selected 1 of 6. Ferret selected 5 of 6. H. Horse not selected 3 of 6. Space. Horse favorite animal combobox collapsed required.

Please select cities combobox collapsed. Space. Expanded. Cities options multiple selections available list. Los Angeles not selected California 1 of 8. San Francisco not selected California 2 of 8. Space. San Francisco selected California 2 of 8. Oakland not selected. Space. Oakland selected California. Roseville not selected California Houston not selected Texas 5 of 8. Space.

Houston selected Texas 5 of 8 Austin not selected Texas space. Austin. Muleshoe not selected Texas 7 of L. Los Angeles not — Space. Los Angeles selected California 1 of 8. Los Angeles, San Francisco, Oakland, Houston, Austin, cities combobox collapsed. Space. Expanded. Cities options. Multiple selections available list. Los Angeles. Selected California 1 of 8. Space. Los Angeles not selected California 1 of 8. A, Austin selected Texas 6 of 8. Space. Austin not selected Texas 6 of 8. San Francisco —

>> GERARD COHEN: So a couple of things you’ll notice there: obviously the keyboard actions were in place, but there was also a lot of good information as far as selected states. Another thing, I’m not sure if you noticed, and you’ll maybe notice in the next couple of videos that I show, but one behavior that I picked up from native selects was that in grouped options the group label itself is not actually navigable or selectable. I carried that over into this widget, which was really nice, because you’ll notice that as NVDA was listing the item count, it wasn’t counting those group labels. So that was a really nice behavior that I was able to add in there.

So next I’ll show you VoiceOver, and I want you to notice how, in this case, you have the double announcements on the combobox trigger and on the selected state; again, this was because of the extra ARIA labels that I needed to add.

>> COMPUTERIZED VOICE: Welcome to Mac OS VoiceOver is on Safari new tab button vertical splitter custom select web content.

Main, main. Heading Level 1 custom select. Favorite animal. Choose a favorite animal choose a favorite animal favorite animal required combobox. Zero items selected. In favorite animal box list items selected dog text 1 of 6. Fish text 2 of 6. Horse text 3 of 6. Bird text 4 of 6. Ferret text 5 of 6. Not cat text 6 of 6. Fish text 2 of 6. Ferret text 5 of 6. Ferret, ferret, favorite animal required combobox main. Please select please select cities combobox. Cities options multiple selections available list box zero items selected in cities options multiple selections available list box zero items selected Los Angeles, California 1 of 8 San Francisco, California 2 of 8 Oakland, California 3 of 8. Roseville California 4 of 8. Houston, Texas 5 of 8. Texas Houston selected 5 of 8. Roseville California 4 of 8. Added to selection two items selected. Oakland, California 3 of 8. Added to selection three items selected. San Francisco, California 2 of 8. Added to selection four items selected.

California San Francisco selected. Elgin Texas 8 of 8 added to selection five items selected.

Texas Elgin selected you are currently on an escape button San Francisco Oakland Roseville Houston Elgin. San Francisco Houston Elgin cities combobox main. You are currently on a combobox cities options multiple actions available. California San Francisco selected 2 of 8. California Oakland selected 3 of 8. California Roseville selected 4 of 8. Texas Houston selected 5 of 8. Texas Elgin selected 8 of 8. San Francisco Oakland Roseville Houston Elgin San Francisco cities combobox main. San Francisco Oakland Roseville Houston Elgin San Francisco cities combobox main.

>> GERARD COHEN: You noticed some of those duplicate announcements there, and also the additional information upon navigating the list: it would immediately tell you the items that were selected already, which I thought was kind of a nice feature that the other assistive technologies didn’t do.

That was built in from the specific roles I was using; VoiceOver was doing that on its own, I didn’t do anything extra for it. It did sound a little long, but some users may find it helpful.

Lastly, real quick, if you’ll bear with me, I want to show you what this is actually like with VoiceOver on iOS, as well.

>> COMPUTERIZED VOICE: VoiceOver on. Custom select. Heading Level 1. Main. Favorite animal. Choose a favorite animal. Favorite animal. Choose a favorite animal. Shows Popup. Double tap choose a favorite animal. Favorite animal. Choose a favorite animal. Shows popup. Dog, list starred. Fish. Horse. Bird. Ferret. Not cat.

List end.

Ferret, bird, horse, horse. Favorite animal. Horse. Shows popup. Horse Favorite animal. Horse. Shows popup. Dog. List start.

Fish. Selected. Horse. Selected. Bird. Bird. Favorite animal. Bird. Shows popup. Cities. Please select cities. Please select. Shows popup. Please select. Cities. Please select. Shows popup. Los Angeles List start. Cali — San Francisco. California. Oakland. California. Selected. Oakland selected. Roseville. Houston. Custom select. Heading level — Houston. Austin. Selected. Austin. Selected. Muleshoe. Elgin. List end. Back button.

Bird, favorite animal — bird, favorite animal.

Bird. Shows popup.

>> GERARD COHEN: So I guess maybe VoiceOver hasn’t been to Texas, because it had a problem pronouncing Muleshoe and Elgin. Actually, I have more videos using other browsers and assistive technologies; those were just kind of the highlights, so if you’re interested, let me know and I can share them.

Okay. If you’re still with me, let’s take a look at some code. The actual example that I just demonstrated in the videos is available online. You can go to bit.ly/2yVMZjw.

Okay. The first thing I want to show you is the initial markup that’s used for the native select, before any of the JavaScript has kicked in. It’s just your plain select markup.

You have a label there, and then you have the select with the options, and there’s a div wrapper around the select. It’s important in my case to keep the DOM in this order with these classes, because the same class names are used for the custom select widget. So the default state before JavaScript, and the state after JavaScript has rendered everything, look exactly the same; you really can’t tell as you navigate the select whether you’re using the custom select or a native select.

So this is what you start off with. Again, you’ll notice on the select I have required on there, and that gets translated to the custom select as well. I’m not sure if you noticed, but the first select, favorite animal, was announcing as required.

So you can do validations on that.
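
The pre-JavaScript state described above might look something like the following. This is a reconstruction from the description, not the talk’s actual markup; the class names and ids are illustrative.

```html
<!-- Initial, plain native select: a label, the select with options,
     and a div wrapper. The class names are reused later by the custom
     widget so the before/after states look identical. -->
<div class="custom-select">
  <label for="animal" class="custom-select-label">Favorite animal</label>
  <select id="animal" required>
    <option value="">Choose a favorite animal</option>
    <option value="dog">Dog</option>
    <option value="fish">Fish</option>
    <option value="horse">Horse</option>
  </select>
</div>
```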

And so once the JavaScript kicks in this is what it turns into.

So just a little bit more markup here. Again, we start off with a span instead of a label for the actual label itself. In my experience with using an actual label element, there’s obviously not a real form control there; you could maybe attach it to the hidden select, but I had problems with IE11 and JAWS: it would double announce that label, and it would also associate that label with other elements. Really, really weird behavior.

So that’s the visible label. Then, the big deal: the combobox role. It’s just a span. It has a tabindex of 0 to make it focusable. It has the role of combobox. It has aria-autocomplete set to none; that’s just to be a little more true to the actual combobox role and make sure certain assistive technologies, or maybe some automated testing, wouldn’t complain that that value wasn’t on there. Obviously, I’m maintaining the aria-expanded state. Then I’m adding the aria-label, which, in the initial rendering, is the placeholder, which would be “choose a favorite animal”, comma, and then “favorite animal”, which is the actual label of the field. The comma is in there to add a nice pause between the two pieces: assistive technologies will treat that comma as a pause, so it makes a more human-sounding announcement. Then I have aria-owns, which references the ID of the unordered list itself. And finally, the aria-required, which I pulled off of the native select’s required attribute and translated to aria-required.

Then I had to use a data attribute to store the actual placeholder text. That way I can reference back to it if you deselect an option; in some cases, especially with the multi-selects, it’s nice to be able to put the placeholder content back.
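
Putting the trigger description together, the span might look roughly like this. The attribute values are reconstructed from the description above; the ids, classes, and exact label strings are illustrative.

```html
<!-- Visible label as a span, plus the combobox trigger span described
     above. aria-owns points at the (illustrative) id of the options UL. -->
<span class="custom-select-label" id="animal-label">Favorite animal</span>
<span class="custom-select-trigger"
      tabindex="0"
      role="combobox"
      aria-autocomplete="none"
      aria-expanded="false"
      aria-label="Choose a favorite animal, Favorite animal"
      aria-owns="animal-listbox"
      aria-required="true"
      data-placeholder="Choose a favorite animal">
  Choose a favorite animal
</span>
```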

Moving to the actual list itself: it’s just a UL with the role of listbox. In this case I added the aria-label there to say “favorite animal options”, so that you have some context when you first encounter the list; it would announce “favorite animal options”. It also serves a purpose for multi-select boxes: I add additional content there that says multiple selections are available. That’s information that’s not normally conveyed to assistive technology with a normal multi-select, so I felt it was important to provide it to the user. That way they know they can select more than one option.

Moving into the actual list items themselves: tabindex of -1, the role of option, and aria-selected equals false is the default state for all of these, unless you already have one that’s selected. And this was important. For maybe 75% of my testing I was only adding aria-selected equals true and removing aria-selected entirely; it’s just a thing I like to do. It wasn’t until I started testing on Windows 10 that I realized that, I think it was Edge, required the aria-selected equals false; otherwise, if aria-selected equals false wasn’t on the option, it would announce it as selected. So basically, aria-selected has to have a proper value on each one of the options. Later on, I actually found that in the ARIA authoring pattern: it does say to have aria-selected equals false on there. Then, underneath that, you’ll notice I have the actual native select. It’s visually hidden, it has aria-hidden equals true, so it’s hidden from sighted users and also hidden from assistive technologies, and it has a tabindex of -1 to make sure you can’t inadvertently tab to it even though it’s hidden. That’s just a cloned version of the original select I started with. This is the select I manipulate: selections made in the custom select get synced back down to the native select, so when the form is submitted, the selected values of that select get passed along with the form. Also, again, you can do validation based off of this native select that’s hidden behind the scenes, or you could serialize the form and send the values off via any other method that you’re using to submit the form.
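
The rendered list and the cloned, hidden native select described above might be sketched like this. Again, this is a reconstruction from the description; ids, classes, and values are illustrative.

```html
<!-- The options list: a UL with role listbox, each LI a focusable
     option carrying an explicit aria-selected value. -->
<ul id="animal-listbox" role="listbox" aria-label="Favorite animal options">
  <li tabindex="-1" role="option" aria-selected="false">Dog</li>
  <li tabindex="-1" role="option" aria-selected="false">Fish</li>
  <li tabindex="-1" role="option" aria-selected="false">Horse</li>
</ul>

<!-- Cloned native select: visually hidden, hidden from AT, and not
     tabbable. Kept in sync so the form still submits real values. -->
<select class="visually-hidden" aria-hidden="true" tabindex="-1" required>
  <option value="">Choose a favorite animal</option>
  <option value="dog">Dog</option>
  <option value="fish">Fish</option>
  <option value="horse">Horse</option>
</select>
```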

So, really not that exciting here; this is kind of just standard, in my opinion. The real fancy stuff, the stuff that I got excited about, came with the actual grouped options. This is stuff that I basically had to come up with on my own. Based on a lot of testing, this is how I got the options to be announced along with the parent label they were grouped under.

So again, starting with the same unordered list with the role of listbox, I have my aria-label there that references the label of the field, so in this case it was “cities”, and then I added “options” to it, and then I add the additional context to let the user know that multiple selections are available. Aria-multiselectable is set to true; even though that’s not announced by assistive technologies, I still have it on there. Then, of course, tabindex of -1 is on there because I’m managing focus entirely myself.

Then, after that, the list items. I’m treating the list items a little bit differently here: the actual group labels don’t have a role of option. In fact, I wanted to be very, very heavy handed; I did not want them to be announced, so I added a role of presentation and aria-hidden equals true. Also, they are not navigable by keyboard, so they operate the same as a native grouped select in that case. The actual list options are pretty much the same: tabindex equals -1, a role of option. And in addition here, this is where I have the aria-describedby that references the ID of the group label, the item above.

And then again, aria-selected equals false. And that’s how I was able to associate, in this case, the city names with the group names. It’s actually pretty simple, except for Edge, where aria-describedby wasn’t announcing, and I was surprised how well it worked with most of the assistive technologies: they picked it up right after the option. A lot of other examples I saw out on the Internet used nested lists, and in some cases they didn’t work, but honestly I didn’t really like the interaction there, because of the way it would list levels and additional information that I just didn’t feel was really necessary. This was a much simpler, more elegant solution.
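
The flat grouped-options pattern described above might look like this: group labels are presentational and hidden, and each option points back at its group label via aria-describedby. A sketch reconstructed from the description; ids and values are illustrative.

```html
<!-- Grouped options as one flat list: group labels are not options,
     not announced, and not keyboard navigable; each option references
     its group label so AT can announce "Los Angeles, California". -->
<ul role="listbox"
    aria-label="Cities options, multiple selections available"
    aria-multiselectable="true"
    tabindex="-1">
  <li role="presentation" aria-hidden="true" id="group-ca">California</li>
  <li tabindex="-1" role="option" aria-selected="false"
      aria-describedby="group-ca">Los Angeles</li>
  <li tabindex="-1" role="option" aria-selected="false"
      aria-describedby="group-ca">San Francisco</li>
  <li role="presentation" aria-hidden="true" id="group-tx">Texas</li>
  <li tabindex="-1" role="option" aria-selected="false"
      aria-describedby="group-tx">Houston</li>
</ul>
```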

And as I mentioned before, there's really nothing special to the JavaScript. When you look at the code, it's very procedural. I was very explicit with everything in these examples because I wanted you to understand exactly what was going on: every little step, every little property that I updated every time I moved focus. I wanted it to be very, very explicit so that you can see that. Obviously it can be refactored. It's not production code; this is not a UI library widget or anything like that. It's literally just to show the example of everything, so it's actually pretty boring. But one thing I did kind of geek out on was the code I'm using to perform the type-ahead. In a lot of examples I saw, there was just a lot of code that went into that. It is kind of a difficult process if you consider that it has to be able to cycle through all of the options. So I'm going to show you that real quick.

And this is where obviously ES6 was really handy because of a lot of really cool methods for arrays.

So the first thing that I’m doing here is, you’ll notice I have basically a variable named options and that is literally the options in the list. So the first thing I do is using the sum method which basically returns if there’s a match for a particular value. So in this case, the very first thing I’m doing is I’m just making sure that at least one item starts with the key that you just pressed. If not, it’s just an easy out. It will exit immediately versus going through the rest.

This is probably the only case where I did a little bit of optimization on the code. Everything else again was very procedural and I wasn’t following engineering best practices for performance and all of that stuff because I wanted it to be explicit. This is the only case where I started to do a little bit of optimization. I don’t know why. I just felt like I needed to do it. Working with arrays is just always — stresses me out.

If I meet that condition that, okay, there's at least one option that starts with the key that you pressed, the first thing I'm doing is checking to see if there's an item that's already focused, and I'm saving that in a variable here called focus. And that's because I need to know which index of the options array to start searching from. When you're doing a type-ahead, and I don't know if you saw it in the videos, but for example in the animals list I had two animals that started with F: there was fish and there was ferret. So if fish is already focused, my next press of F should start searching from that item and find the next one underneath it, which was ferret. And then on ferret, if I pressed F again, it should cycle back up and find fish.

So I needed to first find out if there was one focused already. If there wasn't one that was focused, then I knew I had to start from the top of the list. And that's, I don't know if you can see, basically Line 402: I'm using the findIndex method to search through the options starting from the top and return a value back rather quickly. If there was an item that was already focused, and this was the part that I geeked out on, what I did was create a new array, starting by slicing from the index that was already focused. For example, if index 3 out of 5 was focused, I would slice out items 4 through 5 first and stick that at the top of the list, and then I would slice out the front half and add that to the end. I'm not sure if I'm explaining that properly, but what that means is that now, instead of having to loop through the array twice or multiple times to find out, "Okay, I'm at the bottom of the list, I didn't find anything, I have to start at the top," it's just one flat list to search. And it will automatically produce the result of wrapping around, starting over from the top.

So I’m using slices to do that And then finally, again, just using index of and a couple — and find, which are just other really cool ES6 array methods to find the item in the list and finally just end up focusing it.

So it was pretty surprising with these few lines I was able to perform that function.

>> CYPHER: I know what you’re thinking because right now I’m thinking the same thing actually I’ve been thinking it ever since I got here. Why, oh, why, didn’t I take the blue pill?

>> GERARD COHEN: All right. So I know that was a lot of information, so thanks for sticking with me. I'm hoping that you got something out of it. As I mentioned earlier, it may have been a little unconventional, and you may think all of that wasn't worth it, but I had an opportunity to improve the experience for our users, and that made it worth the effort for me. At the end of the day, you know, custom selects: they are not going away. I know sometimes it's easier to just say no to something than to fully explain it and flesh it out, but I don't think telling designers and developers not to build custom selects is doing anything but making accessibility worse. So I'm hoping you can all take everything I've shown you here, and we can crowdsource it and make it better for everyone, so hopefully one day building custom selects won't be such a bad thing. We have a few minutes for some questions, but one last thing: I wanted to send a shoutout to my team who helped me test a lot of this stuff, so Richard for his JAWS help and Michelle Little for her NVDA help. #oxsquad. Yeah, just thank you for joining me today, thank you to Karl and Michael and everyone for attending and allowing me to share this stuff, and thanks to ACS Captions. My name is Gerard; you can catch me on Twitter. If you know any developers who want to learn how to write accessible code, I have a course on Pluralsight that they can check out. And we have time for questions.

>> MICHAEL BECK: Yeah, thank you Gerard very much. We have a few questions. One is, you may have answered this whenever you pulled the code up, but it’s from Perry: if the input is backed by a select element, how are you submitting multiple values?

>> GERARD COHEN: Yeah, actually, having that native select with multiple on the select, everything was just handled for me. That was one of the nice things about making sure I had the native select under the hood there.

>> MICHAEL BECK: Then PJ asked are there any considerations if the list has to be translated?

>> GERARD COHEN: I didn’t get into that. Yeah, I didn’t get into that. That’s not something that I thought about. But I mean —

>> MICHAEL BECK: The answer is no.


>> MICHAEL BECK: Anyone else have any other questions? I really don’t see much. So thank you all very much for attending. And like I said, the next one will be on December 5th with Thomas Logan. And spread the word for the — oh, geez, spread the word for the webinar. Let everyone know if you enjoyed it. And we will see you next month. Thanks, Gerard.

>> GERARD COHEN: Thanks, everyone, take care

Gerard Cohen

About Gerard Cohen

“Do you ask a dolphin how it swims, or an eagle how it flies? No, because that’s what they were made to do!”

Gerard K. Cohen loves front end engineering so much that he is on a mission to make sure that the web is inclusive to all users, making rich internet experiences available for all. He believes a great user experience includes performance and accessibility.

Gerard lives in Oakland with his wife and Betta fish, Squiggles, and when he is not sleeping or drinking Zombies at tiki bars, he helps raise awareness by speaking at Front End and Accessibility conferences around the country. He is also the author of “Meeting Web Accessibility Guidelines” on Pluralsight.

Making a Podcast WCAG 2.1 Compliant


>> MICHAEL: Welcome to technica11y, the webinar series dedicated to discussing the technical challenges of making the web accessible.

Our presenter this month is Nic Steenhout, host of the popular Accessibility Rules podcast.

And now, our host and moderator, Karl Groves.

>> KARL: Alright, everybody! This is Karl Groves from Tenon. This is our first technica11y webinar and I think it's really interesting to note who we have with us. It's Nicolas Steenhout and Nic and I have known each other for a while. The idea for technica11y came about from Job van Achterberg, who can't be with us. He's actually at Frontiers today. He's on the Frontiers committee. Job came up with this idea and I sort of ran with it. There are a lot of conferences out there and there are a lot of meetups and all that sort of thing, and a lot of the discussions are sort of introductory in nature. There are lots of reasons why they're introductory in nature, and that's because a lot of people need the introductory stuff; they need the introductions. But the idea that we had was that we wanted to dive into more discussions around the technologies and more discussions around making sure the information is directly actionable.

And, so, this is the first one of these. We’re going to have these every month, so if anybody has anything to share, that they’d like to talk about about accessibility and they want to get into the weeds a little bit, this is going to be a good format for you.

So without further ado, though, I’m going to introduce Nic and Nic is going to take over and drive the discussion here around podcast accessibility and a lot of the things you have to consider, including new WCAG success criteria and things like that. So, here we go, here’s Nic.

>> NICOLAS: Hello, everyone! I'm really happy to be the first guinea pig to give this series of webinars, but even more excited to talk about podcast accessibility. It's been a pet peeve of mine that there are so many great podcasts out there, but so few of them are accessible or provide even basic accessibility. Let's get started. Before I share my screen, I should say a couple of things about housekeeping. First is that we're going to take questions, but if you can send them through the Zoom interface, we will pile them up at the end rather than interrupt the flow, because with this format, it's a bit difficult. And the other thing is that, for people with vision disabilities, the slides don't really have visuals. It's mostly text, to help sighted people follow along. You are not going to miss anything from my slides that I'm describing, because it's just text. With that, I'm going to share my screen for a second. Let's see. Start broadcast.

So I’m sharing screen. Accessible podcast, so that’s what we’re talking about. If you are not here to listen to this, stay anyway, going to be super interesting. I think many of you know me. I’m easily found at Twitter @vavroom. I work for Nobility, based in Austin, Texas. We do accessibility, that’s what we do all the time. I have a personal blog site at http://incl.ca and, of course, I have my podcast website at http://a11yrules.com. I also a few months ago put together a site about podcast accessibility. There is a lot of resources and information there that follow with what I will be talking about today on podcast accessibility websites.

Quick overview. A few things to consider when we are talking about podcast accessibility. The first thing is that we have to provide transcripts. The website needs to be accessible, and a big aspect of an accessible website is the media player we are using. I will also be talking about WCAG 2.1 contrasted with 2.0, which was part of my accessibility journey in making the podcast accessible.

Do note, it’s not an in depth code review. This is not what this is about. There’s o many things to consider and talk about. I’m not going to go in depth about code and not going to review things that most people should be comfortable with in terms of accessibility when looking at WCAG 2.0 success critera. Accessibility: A11y. For those of you who don’t know, it means accessibility. There’s been a lot of debate about this shortcut, this hashtag is not really accessible. But it’s taken ground and known more and more. How did it become A11y instead of accessibility. It’s a because it’s a numeronym, a word that I am always tripping on. Basically, we’re looking at the first letter of the word and the last letter of the word and counting the number of letters in between. There’s eleven letters in between A and y. So we have A11y for accessibility. Something that a lot of people don’t know where it comes from. There you go.

Transcripts. Obviously, podcasts are an audio format. For a lot of people who can't hear the audio or access the audio for whatever reason, we should provide transcripts, so I'm going to focus on transcripts here. There was a study that was done for the show This American Life, for NPR. They transcribed about 500 of their episodes, and after they did that, they realized that they had a lot of inbound traffic coming through, whether unique visitors or search traffic. They realized that 7% of unique visitors viewed at least one transcript when looking at the site. Seven percent of traffic is considerable. There was over 4% of new traffic coming through, and over 6.6% of increased search traffic. That's also quite interesting.

They had nearly 4% of inbound links start appearing after the transcripts went up. In this day and age of trying to get more information out, more traffic, and more visitors, I think, just in and of themselves, these numbers are a really good argument to provide transcripts. But there is more. Beyond making the content accessible for people with disabilities, you end up with your audio content indexed and searchable. So if you have to refer back to an episode, you have it right there, easy to find. It's also more easily translated: if you have your podcast in English and people come to your site from France or Japan or anywhere else, they can run your content through Google Translate or other machine translation, and that will be easier for them to access. More people can access your content. There are people who would love to listen to your podcast, but maybe they are in an open plan office or on a bus, or any number of reasons make it difficult for them to access the audio. We still have people who have slow connections. We are not all blessed with gigabit internet connections, and we don't all have all-you-can-eat data either. So, there's a whole lot of reasons people will benefit from your transcripts, including you.

So, what is a transcript? It's basically taking anything that is spoken and translating that into the written word. There are a few things you need to keep in mind when looking at doing transcripts. Typically, when you purchase transcription services, you are going to be asked, "Do you want time stamps?" Time stamps are typically more expensive to get, but if you implement synchronized transcripts, they are mission critical. I will talk a little bit more about synchronized transcripts later on.

You can get the speakers' names included in the transcript, and if you have more than one speaker, it's really crucial to actually know who is saying what. Depending on the transcription service, you might have that done for free if there are no more than four speakers, or something like that. You will also be asked if you want your transcript to be verbatim. That means it includes all the "ums" or "you knows" or all of these vocal tics that might be present in the speakers, and typically that makes it harder to read. I think verbatim transcripts are more important when looking at legal or medical transcription, which is obviously not the realm we are moving in when talking about podcasts. Do yourself a favor and order the clean, or clean verbatim, option that includes only the words that are important, not all the rest. It makes it so much easier to read.

There are three main types of transcription you can look at. Two of them are human transcription, and the other one is machine transcription. With do-it-yourself human transcription, you do the transcript, and that can be time consuming. It's cheap if you don't value your time, and it can take a while when you're not used to it. On the other hand, you know the content better, so you are going to be more accurate about what is being said.

I’m using human transcription service. It’s costing me a dollar U.S. per minute of audio. So, it’s not cheap but it’s not super expensive. You have to shuffle around. I’m not going to recommend certain transcription services. They are out there and some of them are doing a good job. Between 97 and 99% but if you are doing a technical show, the accuracy goes down. It’s good to build a glossary of words that you are using in podcast. That’s going to help the transcription service to actually be more accurate.

A lot of people want to use machine or automated transcription. It's not a bad thing. Some people say it's better to have a machine transcript that's 80% accurate than no transcript at all. In some ways, I find that argument difficult to argue against; at the same time, I'm not entirely convinced. The benefit of machine transcription is that it's going to cost you about 10 cents per minute of audio. It can be a good approach to starting your transcription: you get the audio transcribed cheaply with machine transcription, then go back and fill in the blanks where it's not quite as accurate as you would want.

If we are looking at machine transcription, we may be thinking, "Better a poor transcript than no transcript for your podcast." This sentence has ten words, and we are looking at 80% accuracy. What happens if you're missing the first word and a word in the middle, and the sentence reads, "A poor transcript than no for your podcast"? Suddenly, it doesn't make a whole lot of sense. Or you can miss other words in the sentence: "Better a transcript than transcript for your podcast." There we dropped "poor" and "no." This gives you a feel for what happens when you have only 80% accuracy. You can get the gist, maybe, but you will be missing the crucial points of the whole thing.

If you do rely on machine transcription, I strongly urge you to go back, run through the transcript while listening to your show, and fill in the blanks and correct the text as you go.

One of the things I have seen happen a lot is, when a show has a transcript, they put it on a different page, and the link to the transcript is often hard to find. I'm going to suggest to you that, to get the search engine friendliness and for all these people to be able to benefit from the transcript, the transcript needs to be easy to find. Ideally, it's part of the podcast episode page. If you look at my website for the podcast, all the episodes have the player at the top of the page, and right below that, the transcript is available and easy to find. The text of the transcript is then associated with the show, and it's much better in several respects.

Now, please don’t provide the transcript as a downloadable document– not a PDF or a Word document or a text file. Just put it into your page. It’s not going to harm anything. That’s what web is for — to present documents. Avoid putting it on separate page. I think you will have a lot more people enjoying the transcript if it’s on the same page.

Now, one thing a lot of people don't do is say on the show that they have a transcript. That's something that pays to do. At the start, you can say there is a transcript available and where to find it. That way, if someone is looking for that and accessing the podcast through their podcast player of choice, they will know there is a transcript available and that it's easy to find. You might want to state it when you talk about show notes or anything else, ideally at the start of the show, so people actually know it's right there when the show begins. Make it front and center, not an afterthought. That's a good idea.

So, that was transcripts, a fairly quick overview. We could probably talk more about it, but let's look a little bit at the website, because, of course, once you have a transcript, you have to have a website for the show. There are a lot of different flavors of podcast hosting and solutions.

For the Accessibility Rules podcast, I was looking for something that I could make accessible easily and have control over. I chose the WordPress platform, which I've known for a while, so it wasn't too difficult to make that work for me. Then I went in search of the right plug-in. There are a lot of plug-ins for podcasting; some of them you have less control over rather than more. I chose the Seriously Simple Podcasting plug-in because the code is open and I was able to go in and do the modifications I needed, again, mostly from an accessibility point of view.

Now, we know that for a website to be accessible, we have to think about the usual suspects. We have to make sure it works for keyboard users, whether sighted or not: being able to tab forwards and backwards and activate all the interactive elements with the Enter key or with the space bar. We want to make sure the site works for screen reader users and users with low vision, and that includes contrast issues. There is a list of things, and I think the majority of people attending this webinar actually know about basic accessibility, so I'm not going to go in depth about that. In fact, I'll leave it at this: if you want to make an accessible website and don't know how, look at WCAG 2.0 to get you started.

When I did my website, I went live in early January aiming at WCAG 2.0 AA compliance. I was asked by someone on the WCAG 2.1 committee if they could have my site as an implementation site before the guidelines were accepted. So I, foolishly maybe, agreed to that and delved into the guidelines to see what would apply to my site, what I needed to worry about, and what did not apply. That was quite an interesting learning experience for me, because obviously I've been doing accessibility for a long time, but I was not at that time familiar with 2.1.

I will get back to 2.1 in a little bit with the specific criteria I was looking at. If you're building a website like I did, on WordPress, you have to start with the right theme. Make sure it's "accessibility ready," whatever that means. There are a lot of themes on the marketplace that say they are accessibility ready but are not exactly ready; still, it's a good start. And then you may want to look at Joe Dolson's most excellent WP Accessibility plug-in, which implements a lot of accessibility fixes for things that may not be fully accessible.

Then you may need to do additional custom work. For me, that included making 2.1 work. A lot of things in 2.1 did not apply to me, and I was not aiming for AAA-level accessibility, so that let me set aside a few aspects. Some things were unclear. One of the new success criteria, Identify Input Purpose, that's 1.3.5, is basically now understood as: make sure that your form input fields have autocomplete on them. It wasn't clear that that was it. I sought guidance from the people who were actually writing the standard, and there were some discrepancies in what they were saying. That can cause issues for you if you are not really familiar with accessibility. Hopefully, the new technical documents are going to help you along. And I should plug the fact that Knowbility has been putting out one new blog post about a 2.1 success criterion every week. We cover in depth what each one means and how it's going to impact you if you are looking at 2.1.

Some of the specific things I implemented when making my podcast site: 1.3.4, Orientation. Basically, the site needed to work in both landscape and portrait. That is important for people who have their devices mounted, for example, on wheelchairs, so they use their phone or iPad permanently mounted in one orientation. If your website requires you to be in one orientation rather than the other, it can cause issues. For me, Orientation tied in a little bit with the reflow aspect, 1.4.10: basically, being able to change font size with elements being placed where they need to be placed. That was work I had to do in the theme to make sure it worked. 1.3.5, as I said, Identify Input Purpose: that was work around making sure the forms, in my case particularly the contact form, had the autocomplete attribute associated when the right fields were used.
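In practice, meeting 1.3.5 mostly comes down to adding the right autocomplete tokens to the form fields. A sketch of what that might look like on a contact form (field names and ids are illustrative):

```html
<!-- autocomplete tokens let browsers and assistive technology identify
     the purpose of each field (WCAG 2.1 SC 1.3.5, Identify Input Purpose). -->
<label for="name">Name</label>
<input id="name" type="text" autocomplete="name">

<label for="email">E-mail</label>
<input id="email" type="email" autocomplete="email">

<label for="tel">Phone number</label>
<input id="tel" type="tel" autocomplete="tel">
```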

So the name, e-mail address, phone number, all of that can be used with autocomplete. I ended up removing the contact form for a technical reason: I could not make my contact form's error messaging actually work. I could not associate the error message with the erroneous field, and that's something I'll work on when I have spare time, which I haven't had much of lately. Something for you to consider: sometimes you might be able to make a form work with some of the new success criteria, but if it's not working across the board with even the old success criteria, there is no real point working on the new ones.

Other elements that were tripping me up were 1.4.11, which is Non-text Contrast. That's contrast on elements that are not necessarily text; logos are a good example of that, and some images that are critical to understanding the page. I was loading third-party content, namely Patreon, which is a platform that is not accessible (that's a discussion for another day), so I had to adjust things to make it work. 1.4.12, Text Spacing: the theme I had selected needed a little bit more work around line heights and kerning and that kind of stuff, so I had to move things around. And finally, 2.5.3, which is Label in Name: I was using a PayPal donate button, and I had to implement the label a little bit differently from the code PayPal gives you when you export it. I had to play a little bit with that. Nothing major, and I think it's within the reach of pretty much everyone who understands coding a little bit, but you have to do a little bit of thinking, and you might have to play around a little bit before you reach the point you're aiming for. And I would like to point out something that I always tell people: it is better to implement a little accessibility and keep implementing as you go, rather than hold back until you are done. Just like any website, you are never really done. There are always going to be things to improve and implement. Don't wait until you're, quote, unquote, done to make it happen.

So, the last aspect I want to cover about podcast accessibility is the player that you provide on your site. I had an issue with the default player that came with Seriously Simple Podcasting. It was a similar issue to what I found with a lot of podcasting plug-ins: they use players that are somewhat accessible, maybe not completely. Seriously Simple Podcasting uses the JW Player, which worked with the keyboard, but there was no way to get visible focus on the elements, at least not within the implementation on WordPress with the different layers of CSS, so it was difficult to make it work. You could navigate the player with the keyboard, but you could not see which element was active, and that makes it, not unusable, but very difficult to use for sighted keyboard users.
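The missing piece here is a visible focus indicator on the player's controls. In CSS terms, the fix can be as small as this sketch (the selector and colors are illustrative, standing in for whatever class the player actually uses):

```css
/* Give sighted keyboard users a visible outline on the focused control. */
.player-control:focus {
  outline: 2px solid #1a73e8;
  outline-offset: 2px;
}
```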

Under the hood, they were using some weird ARIA attributes and values, which, in my tests with VoiceOver and NVDA, meant that there were some things that were unexpected. While there were no major blockers, and you could work around it, it also did not create a friendly experience. And accessibility, for me, really is about being friendly and making it easy for people. So, I did some research, tried a few players, and ended up going with Able Player. Incidentally, that's the player used on W3C websites, so it's pretty good in many respects.

Some of the things I like about Able Player: it allows you to add timed transcripts. Earlier, I was talking about how, when you purchase transcriptions, you can get timed transcripts with time stamps throughout. When you have these timed transcripts and you use Able Player, you can get an area on your site where the transcript is synchronized: as the podcast is playing, the transcript moves along at the right cadence. I did not implement that, because I thought it might create some other issues, and it also doesn't allow people to scan the transcript quickly if they want to get a feel for the show or get to a specific part of the transcript. But it is a popular option.
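Timed transcripts of this kind are typically delivered as WebVTT files, a cue at a time. A tiny illustrative fragment (the timings and text here are invented, not from the show):

```
WEBVTT

00:00:00.000 --> 00:00:04.000
>> NIC: Hello, everyone! Welcome to the show.

00:00:04.000 --> 00:00:08.000
Today we are talking about podcast accessibility.
```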

The other aspect is that if you are doing video podcasts, vlogs more than podcasts, you can also provide an audio description file with Able Player. That associates the audio description with the video file. With a lot of players out there, you would have to jump through massive hoops to make that happen.

Able Player is a really solid thing, but it's also not necessarily designed for WordPress. I thought, "Okay, am I going to spend time making it work in WordPress, or is there a plug-in?" And lo and behold, there is a plug-in! It's older, but it works and it's secure; I'm keeping an eye on security and updates. So I was able to find a plug-in that allows me to use Able Player in the site fairly straightforwardly. I had to go in and use shortcodes in the podcast plug-in to remove the default player and be able to use Able Player. There are a few hoops to jump through, and that's probably something you want to plan for when you start building your site, instead of building it and seeing as you go, which was a mistake I made, maybe. While it was a bit of a mistake on the front end, I learned so much and actually had fun doing it! People tell me, "Nic, accessibility is such a chore." And I say, "You are a coder, right? You like a coding challenge, right?"


“Well, think about accessibility as a coding challenge.”

That, for me, was the case. It was a strategy and implementation challenge.

So, as I said, even with the plug-in, I needed to make it work with Seriously Simple Podcasting, and I did so. Let me recap quickly, and then I will be taking questions; we will have about 20 minutes for discussion. So: find a platform for your podcast. That might take you some time, because you have to test the different platforms. There are a lot of platforms out there, but most of them are not really accessibility friendly or not really easily modifiable. Make sure your platform is accessible. Ensure your media player is fully accessible. Provide transcripts and make them easy to find. That is really the end of the slides, and hopefully we can open up discussion and questions from here.

>> KARL: For those of you who do have questions, type them up in here in the chat. The way that Zoom webinars work is that they don’t allow anybody to talk except for presenters which I guess is by design and okay as opposed to having 55 people talking at once. But, it definitely does hinder some interaction. If you have questions, put them through here.

Nic, do you have any favorites in terms of people to do the transcripts, over others? I know you mentioned that a bit. There are a couple of services out there. For our live webinar here right now, we are using ACS Captions. I know you have done research on that.

>> NICOLAS: Yeah, I did do quite a bit of research on the best transcription services out there. For a while, I was using Rev.com, and they are good. They are quick. They are relatively accurate. But I have found that, a lot of the time, I did not get all that accurate a result, even when I was providing glossaries. Sometimes they were making mistakes even on my name. However, when you have high volume, they provide an API, so that's good.

Currently, I’m using an independent transcriber that is quick and much more accurate. Because she is independent, she is actually able to tell me, “Ni,c, something wrong with your audio in this particular episode,” or that kind of feedback for me is really important. And so anyone who wants the details, I’m happy to provide contact details after the show.

If you are looking at cheaper machine transcription, there is Temi. They are pretty much the best out there, and they are coming up with an API, so you can hook it up to your system if you have high volume as well.

>> KARL: The API thing is interesting, because we had a client who was doing a high volume of videos and stuff like that on their website, and the ability to sort of shoot off a file for transcription and get it right back was neat. That’s a pretty competitive space, right?
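The workflow Karl describes — submit an audio file to a transcription API, then fetch the transcript when the job finishes — can be sketched roughly as below. This is a hypothetical illustration, not any particular vendor’s API: the endpoint shape, field names (`media_url`, `glossary`, `status`), and response format are all assumptions; real services such as Rev or Temi each define their own interfaces.

```javascript
// Hypothetical sketch of automating transcription through an HTTP API.
// Field names and job statuses are illustrative, not a real vendor's API.

// Build the request options for submitting an audio file.
// Kept as a pure function so the request shape can be inspected
// without making a network call.
function buildSubmitRequest(apiKey, audioUrl, glossary = []) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      media_url: audioUrl,
      // Supplying a glossary of names and jargon helps accuracy,
      // as Nic mentions; many services accept something like this.
      glossary,
    }),
  };
}

// Poll the job until the transcript is ready. `fetchFn` is injected
// so the polling logic can be exercised without a live service.
async function waitForTranscript(fetchFn, jobUrl, { intervalMs = 5000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetchFn(jobUrl);
    const job = await res.json();
    if (job.status === "done") return job.transcript;
    if (job.status === "failed") throw new Error("transcription failed");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for transcript");
}
```

In practice you would pass the global `fetch` as `fetchFn` and wire `buildSubmitRequest` to whatever submission endpoint your chosen service documents.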

>> NICOLAS: Yeah.

>> KARL: There are a lot of services out there, and that’s why you can get it so cheap. It’s a competitive space. A lot of services use Mechanical Turk and send the files out to Turk workers. They charge an increased rate for better accuracy, and it’s worth spending the extra money to get it.

>> NICOLAS: Yeah.

>> KARL: Anybody else have any questions?

>> NICOLAS: So, I see a lot of questions coming through about my contact details; I think those have been given out by Michael. I also see a question about the use of sign language for deaf people who cannot read or write.

Yeah, you know, this is something that I struggled with: do I provide that or not? I think it’s fantastic if you are in a position to provide it. For podcast accessibility in general, though, I think you are going to find issues with making that happen.

So, I struggle a lot with telling podcasts, “Hey, you need to be accessible. You need to provide a transcript,” and I get a lot of pushback on that. I can’t really see myself going back to them and saying, “Great, you provided a transcript, fantastic; now please provide a video in sign language equivalent to your transcript, so people who are deaf and cannot read will be able to understand it.” I think that would be pushing the envelope to the point where people would wash their hands of it and not want to touch accessibility at all. That said, if you are able to provide it, by all means do it. More accessibility is better than less, absolutely. One pitfall to watch for: make sure the sign language you are using is actually relevant to the language of your podcast. If your guests are mostly American, use ASL; if they are in Canada, Australia, or the U.K., you have to pick the right flavor of sign language. You also have to figure out how to integrate the sign language with the podcast audio episode. I think that would get quite tricky technically, but it’s worth doing.

>> KARL: Are you aware of any services that do that? With the accessibility boot camps that I do, and I have one coming up in November in San Francisco, I’ve gotten contacts through online directories for sign language interpreters who do it in person.

I wonder, is there a service that would do that, let’s say, here on this Zoom or something like that? Is it possible to have a sign language interpreter do it live, the way we have ACS doing the captioning?

>> NICOLAS: There are a couple of aspects to that. First, I think finding an interpreter who would be willing to do it would be relatively straightforward; it’s just a question of finding the right person and having the budget for it. The last time I looked for interpreters in the United States, it was about $40 per hour for interpreting, and I think it’s gone up since. In the context of a recorded podcast, I think interpreters would probably be happy to go on even Skype and do a video call; you can record the screen and then use that to edit your video file. That’s relatively simple. For a live podcast or a live webinar like we are doing now, I think the barrier becomes the platform you use: how do you ensure that you are able to display the sign language live in a little vignette in the corner of the screen? I don’t know that Zoom does that, or any other platform. If one does, I would love to hear about it.

>> KARL: And the per-hour rate, when someone has to come on site and travel to your location for an event, gets pricey. You know, $40 an hour is not a big deal when it’s one hour; when they are there all day, it’s pricey. I imagine there has to be a market opportunity for somebody who can do it virtually like this, but I don’t know of any. That would be great to hear about, though. Anybody else have any questions?

While we are waiting for any more questions to come in, I’m going to mention that the next one of these is in November as well. Michael, do you have the exact date in front of you, or I can log on to Zoom and take a look. When you registered, Zoom, for whatever reason, registered you for every webinar we have on the schedule, so you are registered for all the rest. That doesn’t mean you have to attend; you can unregister if you would like.

>> MICHAEL: Going to be on November 7th.

>> KARL: November 7th. Awesome.

>> MICHAEL: And Gerard Cohen will be our presenter.

>> KARL: Gerard Cohen, another guy I know rather well. He’s a developer at Wells Fargo, and he’s going to be talking about custom form elements, specifically, I think, the select element. Now, again, keep in mind that technica11y is about deeply technical things. What I like about this, and I’m not going to give spoilers for Gerard’s talk, is that one of my personal pet peeves is that a lot of developers create custom form elements specifically so they can style them. Graphic designers come up with a visual treatment that they want forms to adhere to; next thing you know, the browser doesn’t support styling them accordingly, and as a consequence developers come up with custom form elements, and God only knows what those are going to look like. It’s usually radio buttons, checkboxes, and select elements that they do it with, because styling plain text inputs is kind of easy.

Select elements: everyone has select elements on their forms, and the mechanics of interacting with a select element via the keyboard and getting the necessary feedback to the user are nontrivial. I give an example of this in a talk called “What is this thing and what does it do?” We have to have some level of predictability in the user interface so the user understands how to use it. What’s great is that Gerard is going to give us a deep dive into that stuff; I think he’s going to give us an awesome idea of how to make sure select elements are accessible. That’s cool. We are going to button this up here. One other question.
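The keyboard behavior Karl is alluding to, Up/Down arrows, Home/End, and letter-key type-ahead, is part of what makes a custom select nontrivial. As a rough sketch, and without anticipating Gerard’s talk, the index arithmetic behind the WAI-ARIA listbox keyboard conventions might look like this. The function name is mine, and this is only the navigation logic: a real widget also needs `role="listbox"`/`role="option"`, focus management via `aria-activedescendant` or roving tabindex, and selection announcement.

```javascript
// Sketch of the keyboard-navigation logic for a custom select/listbox,
// following WAI-ARIA listbox conventions. Pure index arithmetic only;
// ARIA roles, focus management, and announcements are handled elsewhere.
function nextActiveIndex(key, current, options) {
  switch (key) {
    case "ArrowDown":
      return Math.min(current + 1, options.length - 1); // stop at last option
    case "ArrowUp":
      return Math.max(current - 1, 0); // stop at first option
    case "Home":
      return 0;
    case "End":
      return options.length - 1;
    default:
      // Printable character: simple type-ahead, jump to the next option
      // starting with that letter, wrapping around the list.
      if (key.length === 1) {
        const k = key.toLowerCase();
        for (let step = 1; step <= options.length; step++) {
          const i = (current + step) % options.length;
          if (options[i].toLowerCase().startsWith(k)) return i;
        }
      }
      return current; // unhandled key: active option is unchanged
  }
}
```

A widget would call this from its `keydown` handler, then move `aria-activedescendant` (and visual highlight) to the option at the returned index.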

>> NICOLAS: Yep, I just saw a question about which websites I know of that have accessible podcasts. There are a few of them, but they are difficult to find. I started a list on podcast-accessibility.com. I’m sending that to the chat. Oops, I sent that only to the transcriber.

Yeah, that’s difficult to know, because there is no centralized list anywhere. Every time I come across an accessible podcast, I put it on the list, which is on GitHub; I invite you to add yours. It’s interesting: TEDx videos have captions and transcripts. It’s not a bad site at all from an accessibility standpoint, though it could be better. Pretty good.

>> KARL: Cool. Now, as we are buttoning up: if you would like to give a talk here on the technica11y webinars, propose something. Give us a shout with any idea you have that you would like to share a good technical deep dive on. PJ mentioned a topic she would like to hear about in an upcoming webinar: best practices for making error messages accessible. If anyone wants to do a talk on that, give us a shout. Until then, thank all of you for coming. I want to thank Nic for giving us our first talk, and my colleague Michael for setting this up; God knows I’m too disorganized to get this started. And I want to thank ACS Captions for the captions. We will see you in November. Bye.

>> NICOLAS: Thanks, everyone.

About Nicolas Steenhout

Speaker, trainer, podcaster, and accessibility consultant, Nic is the host of the A11y Rules Podcast.