(Image Credit: Elizabeth Woolner/Unsplash)
Interactive visualizations aren’t accessible to people who rely on screen readers, which are used by millions of Americans with complete or partial blindness, motion sensitivity, or learning disabilities. University of Washington researchers developed VoxLens, a JavaScript plugin that makes online visualizations more accessible to screen-reader users. With VoxLens, users can hear a high-level summary of a graph’s data, listen to the graph translated into sound, and issue voice commands to ask data-related questions, such as the mean or the minimum value.
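To illustrate the kind of high-level summary and data queries described above, here is a minimal sketch in plain JavaScript that computes the mean, minimum, and maximum of a data series and phrases them as spoken text. This is an illustration only, not VoxLens’s actual implementation, and the function name is hypothetical.

```js
// Hypothetical sketch: the summary statistics a VoxLens-style tool
// might speak aloud for a chart's data series (not VoxLens's own code).
function summarizeSeries(labels, values) {
  const sum = values.reduce((acc, v) => acc + v, 0);
  const mean = sum / values.length;
  const min = Math.min(...values);
  const max = Math.max(...values);
  // Pair the extremes with their labels so the summary can name them.
  const minLabel = labels[values.indexOf(min)];
  const maxLabel = labels[values.indexOf(max)];
  return `The chart has ${values.length} data points. ` +
         `The average value is ${mean.toFixed(2)}. ` +
         `The minimum is ${min} (${minLabel}) and the maximum is ${max} (${maxLabel}).`;
}

// Example: a summary string that could be handed to a screen reader or speech API.
console.log(summarizeSeries(['Jan', 'Feb', 'Mar'], [12, 7, 19]));
```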
“If I’m looking at a graph, I can pull out whatever information I am interested in, maybe it’s the overall trend or maybe it’s the maximum,” said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want.”
Screen readers handle text on a screen well because text is one-dimensional information. “There is a start and an end of a sentence, and everything else comes in between,” said co-senior author Jacob O. Wobbrock, professor in the Information School. “But as soon as you move things into two-dimensional spaces, such as visualizations, there’s no clear start and finish. It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”
The team recruited 22 screen-reader users with partial or complete blindness to evaluate VoxLens. In the tests, participants learned how to use the tool and then completed tasks that involved answering questions about data visualizations. Compared with participants in an earlier study who didn’t use VoxLens, they completed the tasks with 122% higher accuracy and 36% less interaction time.
Participants with partial or complete blindness completed nine tasks in the VoxLens study. They showed 122% higher accuracy and 36% less interaction time than those who didn’t use the tool. (Image Credit: Sharif et al./CHI 2022)
“We wanted to make sure that these accuracy and interaction time numbers we saw were reflected in how the participants were feeling about VoxLens,” Sharif said. “We got really positive feedback. Someone told us they’ve been trying to access visualizations for the past 12 years, and this was the first time they were able to do so easily.”
However, VoxLens currently works only with visualizations created using JavaScript libraries such as D3, chart.js, and Google Sheets. The team is now working to extend it to other popular visualization platforms. The researchers also note that the voice-recognition system can be frustrating to use.
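For context, the sketch below shows a bar chart built with chart.js, one of the libraries the article names, paired with a commented-out activation call for a VoxLens-style plugin. The `voxlens(...)` call name, arguments, and options shown are assumptions for illustration, not the plugin’s documented interface.

```js
// A standard chart.js bar chart (chart.js's real API).
const ctx = document.getElementById('sales-chart');
const chart = new Chart(ctx, {
  type: 'bar',
  data: {
    labels: ['Q1', 'Q2', 'Q3', 'Q4'],
    datasets: [{ label: 'Sales', data: [120, 95, 140, 110] }],
  },
});

// Hypothetical activation of a VoxLens-style plugin on that chart.
// The call below is an assumed signature for illustration only:
// voxlens('chartjs', ctx,
//         { x: ['Q1', 'Q2', 'Q3', 'Q4'], y: [120, 95, 140, 110] },
//         { title: 'Quarterly sales', xLabel: 'Quarter', yLabel: 'Units sold' });
```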
“This work is part of a much larger agenda for us — removing bias in design,” said co-senior author Katharina Reinecke, UW associate professor in the Allen School. “When we build technology, we tend to think of people who are like us and who have the same abilities as we do. For example, D3 has really revolutionized access to visualizations online and improved how people can understand information. But there are values ingrained in it, and people are left out. It’s really important that we start thinking more about how to make technology useful for everybody.”
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell