A conversation with Bram Duvigneau about screen readers
On 16 November 2018, Bram Duvigneau joined us at the Design Research Master at the Willem de Kooning Academy in Rotterdam. Bram and I had an open conversation for a small audience of web accessibility experts, fellow students, lecturers and other interested people. In our conversation we explored the difference between expert screen reader users and regular people who use a computer every now and then and who depend on a screen reader. I thought this would turn into a small summary, but alas, it is 1250 words long. If you know Dutch, you can find the transcript and the video of the conversation here.
Bram started with a nice in-depth demo of NVDA, an open source screen reader he uses on his Windows laptop. He showed things like language switching, and the basics of how you control your computer once a screen reader takes over. He also showed the different ways to navigate a website, including advanced features like listing all headings or listing all links. But he explained that non-expert users might not know these features exist, and that they often depend on listening to every item on a website, starting at the top. Of course he made us listen in awe to how incredibly fast his screen reader speaks. Again, he explained, many people listen to their screen readers at a normal conversational speed. He also showed how complicated, or even impossible, some pages can become.
I asked Bram if he had any ideas about how we could get screen reader UX to a basic functional level. And, while basic functionality is important, what I really want to know is if he has any ideas about how we can get beyond the functional: how can we get screen reader UX to a reliable, or even a pleasurable level.
Smartphone vs computer
Bram started to explain that many blind people prefer their smart phone over their desktop computer. There are a few reasons why, according to Bram:
- Touch devices do give you a better sense of layout and hierarchy, since you can actually feel where things are placed.
- Small screens force designers to get to the point. Chances are high that an interface on a phone is more focused, which means that it’s easier to navigate.
So it would be interesting to see if it is possible to somehow convey more visual cues through screen readers, like visual hierarchy or maybe even things like visual ambiance. And we should learn from the focused designs of smartphone apps.
Another thing Bram thinks we should do is study the WCAG. There are some excellent ideas and examples in that document, and it’s unfortunate that it’s mostly used as a technical checklist. If it were up to Bram, it would be mandatory learning material at every (digital) design school.
Another thing we discuss is the complexity of controlling a computer with a keyboard. Interactive elements can be activated with Enter, and often with Space as well. It’s complicated to explain to a layperson when the space bar does and doesn’t work. A mouse or a touch screen, on the other hand, is simple: you can just click on everything. Since most designers are mouse users, there is a lack of expert knowledge about keyboard interaction.
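The inconsistency we discussed can be sketched in a few lines. This is a hypothetical illustration, not something shown in the conversation: a native button activates on both Enter and Space, a link only on Enter, and a plain div with a click handler on neither key unless the author wires it up.

```javascript
// Which keypress activates which kind of control, in a simplified model.
// "button" and "link" mirror native browser behaviour; "div" mirrors a
// custom control whose author added only a mouse click handler.
function shouldActivate(kind, key) {
  if (kind === "button") return key === "Enter" || key === " ";
  if (kind === "link") return key === "Enter";
  // A plain div gets no keyboard activation for free: the author has to
  // listen for keydown and handle both Enter and Space themselves.
  return false;
}
```

This is exactly the kind of detail a mouse user never runs into: the page looks identical either way, but for a keyboard user the space bar silently does nothing on two of the three controls.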
We talk about the horror of cookie warnings. While I think they are annoying, for many blind people they are an invisible wall. Some of them, like the one on 2doc.nl, are extremely verbose and literally impossible to control with your keyboard. But Bram talked about examples where cookie warnings are invisible to screen readers, yet without interacting with them it’s impossible to interact with the rest of the page. These examples show that both design education and the design profession are clearly lacking. Designers and developers don’t understand how these things work, and the problems are not found during testing.
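The invisible-wall effect Bram described can be sketched with a toy model. This is my own illustration, not from the conversation: screen readers announce what is in the accessibility tree, and marking an element `aria-hidden` prunes it and everything inside it from that tree, while it stays visible and blocking on screen.

```javascript
// Collect the text a screen reader would announce from a simplified node
// tree: { ariaHidden: boolean, text: string, children: [...] }.
function announce(node, out = []) {
  if (node.ariaHidden) return out; // whole subtree pruned from the tree
  if (node.text) out.push(node.text);
  (node.children || []).forEach((child) => announce(child, out));
  return out;
}

// A cookie banner that a developer (wrongly) hid from assistive tech:
const page = {
  children: [
    { ariaHidden: true, text: "We use cookies", children: [{ text: "Accept" }] },
    { text: "Main article" },
  ],
};
// announce(page) → ["Main article"]: the banner and its Accept button are
// gone for the screen reader user, yet sighted users must still dismiss it.
```

The result is the worst of both worlds: the screen reader user hears only the article, but the page underneath the banner refuses all interaction.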
We talk about the checklist mentality that’s often used in web accessibility: ticking off items in the WCAG after the fact. Bram tells us to study the WCAG and, indeed, do the things you find in it. He says: just do the things in the WCAG, and once you’ve done that, we can check whether things really work for real people with real disabilities.
I’ve been working with the opposite idea in the past year, where I first observed, and then came up with tailor-made solutions. Very interesting to hear that according to Bram this may not be the best way to go.
I ask Bram where to start with educating designers about all that WCAG knowledge. I explain the problem I see: many of my students do care about accessibility during school. But once they work at an agency they are forced not to use their knowledge, for whatever reason. And within half a year they’ve forgotten all about it. And yes, Bram sees this as well, and he thinks it’s an industry problem. It’s a problem of agencies who see accessibility as an extra, and not as the basics. But it’s a problem of clients as well: as long as clients accept the reasoning of the agencies, this is not going to change. So I guess it’s time for some good clientship courses. And maybe we need to find a way to turn accessibility into something sexy. We didn’t conclude how to do that, though.
We talk a little bit about the idea you sometimes hear that Artificial Intelligence will solve everything. I explain the idea that I’ve been playing with of using image recognition to explain the visual hierarchy of a page. Bram rightly points out that design trends change all the time, and that keeping such a tool up to date would be quite complicated. Let alone letting it work in different cultural contexts.
We discuss the possibilities of designing a website that’s fun to use with a screen reader. Bram doesn’t know any. He does know a few experiments with carousels and live regions that were meant to be pleasant but had the exact opposite effect.
There was a very interesting question from the audience. Someone explained that as sighted users we have all kinds of visual cues, like colour and shape. Screen readers turn everything into neutral text. Are there ways other than text to convey these graphical cues, like for instance a ping sound?
Bram explains the system-wide sounds that some screen readers use, but he agrees that a lot can be improved in this field.
Which brings us to the people who make the screen readers. We talk about the default settings and wonder if these are too verbose, with too many technical details for non-experts. An extra problem for non-experts is that they don’t know where to find the settings, and when they do find them, the settings are often filled with jargon: who knows what happens when you change them? We agree that the defaults should be smarter. This could be a nice project for some students: to try and figure out some more clever default profiles.
We conclude this conversation with the idea that maybe we should design tailor-made solutions for different types of screen reader users, like we’ve been designing chairs for different types of people and different contexts. One way to do this would be by donating nerd time to people who don’t know how to build their own user scripts.