Happy to see a question like this.
I’ve been doing “normal” web accessibility (semantic HTML, WAI-ARIA, WCAG, EN 301 549 etc. - “normal” meaning “not 3D” here), and when I consider what can be done with 3D, it all comes down to the actual function of the 3D UI. As we know, we can do a lot of amazing things with ThreeJS and WebGL/3D/VR/AR/XR on the web in general, and some are quite simple to make accessible, while others are close to impossible, or even impossible.
So - before I dive into what an accessible 3D UI could be, I need to point out the following:
- A 3D UI can be as simple as a static image - sometimes decorative, sometimes informative and in need of alternative text.
- It can be “like a video” (where the user only passively observes) - and if not purely decorative, again requiring proper alternatives, often audio descriptions as well.
- Or we can have a 3D version of an advanced web interface, made with 3D objects instead of HTML elements.
- Or a totally innovative UI that does not even exist out there yet and is more sci-fi than not (for example, 3D scene orchestration with AI-recognized gestures, poses, voices etc.).
- I must also mention the game scenario, where the 3D UI is a full-blown actual game.
These are just a few examples - we could surely segment the possibilities into dozens or hundreds of categories - but they illustrate the range that defines how we can make things (more) accessible.
We basically need to consider how to make things perceivable, operable, understandable and robust (the POUR principles that form the basis of WCAG and EN 301 549, standards often required by legislation like the European Accessibility Act). But the WCAG / EN 301 549 success criteria are limited when it comes to modern tech, even more so when it comes to 3D, unless we build on POUR ourselves and transpose the HTML-oriented concepts to 3D ones.
Just for transparency: even by satisfying WCAG at any level, we can still end up with a totally inaccessible, or at least unusable, experience for some users…
A parallel semantic DOM synced with the visible UI is an excellent start, but that is just one aspect. It will help with screen readers, refreshable braille displays, voice control, keyboard support and a lot of other assistive tech that relies on semantics. There is way more to consider, though - we need to take care of the visuals as well: supporting zoom and reflow, making sure we provide good contrast for text and interactive elements, providing status messages for screen readers, and so on.
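To make the “parallel DOM” idea a bit more concrete, here is a minimal sketch of what I mean (names like `mirrorAsButton` and `announce` are my own invention, not an existing library): every interactive mesh gets a real, visually hidden `<button>`, so screen readers, keyboard and voice control get native semantics for free, and an `aria-live` region takes care of status messages (WCAG 4.1.3):

```ts
import * as THREE from 'three';

// Visually hidden but still exposed to assistive tech
// (the classic "sr-only" CSS pattern).
const srOnly = document.createElement('div');
srOnly.style.cssText =
  'position:absolute;width:1px;height:1px;overflow:hidden;clip-path:inset(50%);';
document.body.appendChild(srOnly);

// Live region for status messages (role="status" implies aria-live="polite").
const live = document.createElement('div');
live.setAttribute('role', 'status');
srOnly.appendChild(live);

function announce(message: string) {
  live.textContent = ''; // clear first so repeated messages are re-announced
  requestAnimationFrame(() => (live.textContent = message));
}

// Mirror one interactive 3D object as a real, focusable button.
function mirrorAsButton(
  object: THREE.Object3D,
  label: string,
  onActivate: () => void,
): HTMLButtonElement {
  const button = document.createElement('button');
  button.textContent = label;
  button.addEventListener('click', onActivate);
  button.addEventListener('focus', () => {
    // Sync focus back into the scene: highlight the object,
    // move the camera, show a focus ring, etc.
  });
  srOnly.appendChild(button);
  return button;
}

// Example: a clickable cube gets a DOM counterpart with the same action
// that the pointer/raycaster handler would trigger.
const cube = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshBasicMaterial());
mirrorAsButton(cube, 'Spin the cube', () => {
  cube.rotation.y += Math.PI / 4;
  announce('Cube rotated');
});
```

The important design choice is that the button triggers the exact same handler as the pointer/raycaster path, so the two modalities never drift apart; the buttons should also be ordered in the DOM to match the scene’s reading order, and their labels kept in sync when the scene changes.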
How to describe the visuals in text and audio can be the most difficult part if the UI is complicated. Finding the balance between too little and too much information is hard when we rely on a non-visual modality for a visually-first scene or situation. I see AI with computer vision doing an amazing job here and hope it will get even better (and less biased) in the near future. Some screen reader users can already have a dialog with visuals, but that may not be useful for all scenarios. Automatic AI audio descriptions or alternative text based on the 3D perspective are perhaps a possibility, but context is key and the line between too much and too little information is thin, so once again this is not something to leave unsupervised - best to design for it intentionally (I know there are some ideas to provide text alternatives inside 3D models, but I am afraid they lack context).
I’ve been curious about the possibilities for years now, and the things by @kpachinger look very promising - thanks, I must dig into them more. I am confident that some UIs made with ThreeJS and WebGL in general can already conform to WCAG and even be usable, but I think it is way more difficult to make an existing UI accessible and usable than it is when we start with accessibility and don’t try to “bolt it on” later. The ideal is to have a group of people with different disabilities who help us co-design and co-develop. This can and should be done more, but it is often not possible due to limited resources (and, mainly, poor awareness and lack of planning). We can look at the gaming industry and all the amazing accessibility efforts made there; I think we can all learn a lot from it.
In my opinion, a combo of 3D specialists, people with disabilities, UX designers and accessibility specialists, co-designing from the start, has the best chance of making relatively complex things as accessible as possible.
Sorry for this long post - I got a bit passionate, as I really want more focus on accessibility and love that I am not the only one. Back to the original question: it really does depend a lot on the UI in question. Specifics matter, as we have so many options in 3D, and even after 25 years we still struggle with 2D accessibility. But things improve all the time and we can do better - thanks for being a part of it.
I am very interested in the process - please let me know more, and thanks again.