XR Access Symposium
Published by Brian McDonald
On July 16, 2019, I was in New York City for the first XR Access Symposium. XR covers Augmented and Virtual Reality, an emerging field without many established guidelines. We have WCAG for websites, but there is no comparable standard for spatial computing. This event was put on to help guide the field toward a more accessible default, with talks, technology demonstrations by industry and research leaders, and intensive working groups focused on generating actionable plans for the unique accessibility challenges posed by XR.
I had a great time and learned a lot, and here are some of my top takeaways.
Talk 1: Richard Ladner
The talks started off with Richard Ladner, Professor Emeritus in the University of Washington Paul G. Allen School of Computer Science & Engineering. He began with a broad definition of accessibility: for him, accessibility means a product or service can be used by anyone, including people who have disabilities. I like this definition because it is broad and inclusive. Inclusion was a point he emphasized: accessibility is about inclusion, not just compliance. Accessibility should be the default, and not only for privileged people who can afford expensive devices. He also talked about what disabilities are, showing that there are broad categories that affect a large number of people.
Richard Ladner's list of disabilities included:

| Vision | Hearing | Mobility | Speech | Cognition | Behavioral | Multiple |
|---|---|---|---|---|---|---|
| Blind | Deaf | Ability to walk | Ability to speak | Dyslexia | Bipolar | Deaf-blindness |
| Low vision | Hard of hearing | Ability to use limbs | | Memory loss | ADHD | |
| Color blind | | | | | | |
A Few Disability Statistics

- 1 billion people worldwide have a disability - that's 15% of the world's population
- 217 million people are blind or have low vision
- 36 million people are blind
- 360 million people are deaf or hard of hearing
- 70 million people need a wheelchair
Clearly there are a lot of people with different needs, and we should build systems that allow them all to use XR. For Richard, the key is Ability-Based Design: designing to leverage the full range of human potential. That means making your product adapt to different types of users, rather than forcing users or third-party tools to adapt to it. He finished with some starting points we can use as guidelines, since there are no XR-specific accessibility guidelines yet.
Talk 2: Steven Feiner
The next talk was by Steven Feiner, an augmented reality pioneer and director of the Graphics and User Interface Lab at Columbia University. He shared a quote from Ivan Sutherland about "The Ultimate Display":
“The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked.” – Ivan Sutherland, The Ultimate Display, Proc. IFIP 65, 506-508, 1965
This display has not yet been created, but it serves as a useful framing of a future goal. Steven, however, wanted to update this VR dream to include AR. For him, the "ultimate display++" is not just 3D, interactive, and user-tracking, but also multi-user, indoor and outdoor, with navigation, visualization, and task assistance.
Talk 3: Yuhang Zhao
Yuhang Zhao is a mixed reality and accessibility researcher building intelligent interactive systems to enhance human abilities. She presented SeeingVR, a toolkit she helped create with a team from Microsoft: a set of 14 tools that make virtual reality more accessible to people with low vision by adding visual and audio augmentations to a VR application. The toolset integrates easily into the Unity engine, the most popular engine for creating VR experiences.
- 30-second teaser trailer: https://www.youtube.com/watch?v=izmKY17CDhg
- 7-minute video: https://www.youtube.com/watch?v=tr4Ejq5fHMc
- Accompanying paper: http://aka.ms/seeingvrpaper
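To make the idea of a visual augmentation a bit more concrete, here is a minimal sketch, not SeeingVR's actual code or API, of how one such tool (a high-contrast mode) might remap colors for low-vision users. It is written in TypeScript with hypothetical names and uses the standard WCAG relative-luminance formula; SeeingVR itself integrates with Unity.

```typescript
// Illustrative sketch only (not SeeingVR code): one way a "high contrast"
// augmentation could remap scene colors so objects stand out for
// low-vision users. Colors are RGB triples with channels in [0, 1].

type RGB = [number, number, number];

// Relative luminance using the standard sRGB/WCAG formula.
function luminance([r, g, b]: RGB): number {
  const lin = (c: number) =>
    c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Push a color toward pure white or pure black depending on its luminance.
// `strength` is a user-adjustable setting (0 = off, 1 = maximum contrast).
function highContrast([r, g, b]: RGB, strength = 0.8): RGB {
  const target = luminance([r, g, b]) > 0.5 ? 1 : 0; // brighten light colors, darken dark ones
  const mix = (c: number) => c + (target - c) * strength;
  return [mix(r), mix(g), mix(b)];
}

// Example: a light grey surface is pushed toward white so it separates
// more clearly from darker surroundings.
console.log(highContrast([0.8, 0.8, 0.8])); // ~[0.96, 0.96, 0.96]
```

In a real engine this kind of remapping would more likely run as a post-processing pass over the rendered frame, with the strength exposed to the user as a setting rather than hard-coded.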
Talk 4: Chancey Fleet
Chancey Fleet is an accessibility advocate and educator working at the intersection of disability and technology. She opened by saying she identifies as blind and nonvisual. Her big question: how do we make VR honor a nonvisual person?
Current technology is not doing a good job of this. The average tech demo has no information about accessibility options, and when questioned, creators often respond that "blind people are not the audience for this app," which is garbage. If people put effort and care behind their designs, they can unlock new futures that are better for everyone. People turned a blank slate of inaccessible glass into the indispensable iPhone, which is now an important part of many blind people's lives.
Disability inventions help everyone. The typewriter was originally invented to allow blind people to communicate with the world, and now I am using a keyboard to type this blog. Similar stories hold for optical character recognition (OCR), text-to-speech, and closed captioning. This message reminded me of something I saw from David Tisserand at Ubisoft.
When subtitles were not on by default, the majority of users went out of their way to turn them on. When subtitles were on by default, 5% or fewer of users turned them off. Clearly subtitles are helpful, and that is the sort of design where accessibility can be the default.
In the real world, Chancey uses quite a few apps to extend her reality.
- Seeing AI (Microsoft)
- Soundscape (Microsoft)
- Over there

She also takes a virtual walk in VR before arriving at a new place.
This stack is such a powerful augmentation that real-time visual interpretation plus GPS allowed her to drive a boat. Blind boating.
But there's a problem: Chancey can't use those familiar techniques in VR, because VR is currently not accessible. Mostly sighted developers design visual experiences and never put descriptive text in. Haptics are blunt, much rougher than the feedback she gets in the real world from a cane. And binaural audio, or 3D audio, lags behind video in terms of research and fidelity. These gaps need to be fixed to have a more inclusive future.
Talk 5: Glenn Cantave
The last talk was by Glenn Cantave, an activist, performance artist, and entrepreneur using immersive tech to highlight the narratives of the oppressed. I was surprised to see him, as in the past I have seen race and class absent from some accessibility talks. But I remembered listening to Glenn on the Voices of VR podcast, and was excited to see him here.
His talk framed accessibility as a question of who is allowed to participate. Oftentimes accessibility is thought of in terms of physical limitations, but structural ones are just as important. If you cannot access something, it does not much matter whether the reason is physical or social; you should still be able to access it.
Glenn created Movers and Shakers, which uses immersive tech to highlight the narratives of the oppressed. The problem he sees is that social protest and activism have not changed much since the women's rights and civil rights marches. Everything else in society has changed technologically, but how activists raise awareness is mostly the same.
Movers and Shakers ran a campaign advocating for the removal of the Christopher Columbus statue in NYC. As Glenn is ethnically Haitian, Columbus would have owned him. When hypocrisy is at the center of a system, those systemic issues make you feel othered. That's not the inclusive community we want, so they created an AR book on the real history of Columbus. Being openly honest can help increase engagement with black and brown students; they don't want to learn about the supposed greatness of their oppressors.
Glenn also talked about another project, the Monuments project. NYC currently has 120 statues of men (including slaveholders), 23 statues of animals, and 6 statues of women. The city is slowly addressing this with a few upcoming statues of women, but creating a more inclusive narrative will take some time. AR, however, can fix this without needing to wait for official permission.
Afterwards
After these presentations, we split off into individual groups to plan for the future of XR accessibility. The groups were: Authoring Tools, Content & Creative, Definitions & Measurement, Devices & Platforms, Education, Frameworks for the Future, Image & Video, Input Modalities, Mobility, Sensation & Cognition, Sound & Haptic Technologies, and Standards & Policy.
Then we came back together to briefly describe each group's takeaways. After the event, a guide of future needs and ideas was created.
People are still working to build on the event, collaborating to detail and answer some of the open questions raised and drive toward tangible outcomes. XR Access was a great event, and I anticipate being part of a working group for some time to help move XR to a more accessible place.

Brian McDonald is a Research Associate at the User Experience Center. He has worked as a designer at Van Stry Design and has participated in and won numerous healthcare-related hackathons. Brian holds a Bachelor of Science degree from Wentworth Institute of Technology, where he studied Industrial Design. He is currently pursuing his Master of Science in Human Factors in Information Design at Bentley University.