Accessibility for 3D websites

I have to admit that accessibility was never a strong concern in my three.js work, because I mostly built personal fun projects, but now the time has come to address it.

For that I need guidance on how to make a 3D experience accessible, so any tips and techniques are very welcome.

My default approach is likely incorrect: I have everything exist in the canvas, so text, images, content, everything is rendered with three.js. This lets me place things exactly where I need them and makes animation much easier.

But, accessibility is an issue. Taking a look around the forums, I found out about the CSS2DRenderer, but didn’t quite understand it. Could I make a full experience using it?

I’ve used Drei in the past, which has an HTML utility, but right now it’s just plain JavaScript, HTML and CSS.

One possible approach I have been considering, which is also a suggestion in the post I linked, is having a mirror version of the website only “visible” for screenreaders.

So in this post, I would like to ask the community two things:

  1. Have you built a fully accessible website before? How did you do it? What technique (or techniques) did you employ?
  2. Is there a better way to integrate textual content into a 3D scene? By that I mean adding some text and positioning it at the left side of a cube, for example.
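For context on question 2, here is a minimal sketch of the CSS2DRenderer approach (based on the three.js addons API; it assumes an import map that resolves "three", and the scene setup is illustrative). Because the label is a real DOM element rather than pixels on the canvas, screen readers can reach it:

```html
<div id="scene"></div>
<script type="module">
  import * as THREE from 'three';
  import { CSS2DRenderer, CSS2DObject } from 'three/addons/renderers/CSS2DRenderer.js';

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(50, innerWidth / innerHeight, 0.1, 100);
  camera.position.z = 5;

  const cube = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial());
  scene.add(cube);

  // The label is a real HTML element, so assistive tech can read it.
  const labelEl = document.createElement('p');
  labelEl.textContent = 'A slowly rotating cube';
  const label = new CSS2DObject(labelEl);
  label.position.set(-1, 0, 0); // pinned to the cube's left side
  cube.add(label);

  const renderer = new THREE.WebGLRenderer();
  renderer.setSize(innerWidth, innerHeight);
  const labelRenderer = new CSS2DRenderer();
  labelRenderer.setSize(innerWidth, innerHeight);
  labelRenderer.domElement.style.position = 'absolute';
  labelRenderer.domElement.style.top = '0';
  document.querySelector('#scene').append(renderer.domElement, labelRenderer.domElement);

  (function animate() {
    requestAnimationFrame(animate);
    cube.rotation.y += 0.01;
    renderer.render(scene, camera);   // WebGL pass
    labelRenderer.render(scene, camera); // DOM label pass, tracks the cube
  })();
</script>
```

The CSS2DRenderer only positions DOM elements over the canvas; it does not give semantics to canvas-drawn content, so it covers labels and small text blocks rather than a whole page.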

Just to clarify:

are you looking for ways to publish your “personal fun projects”, so they are accessible worldwide via the internet?

No, I have created level A and partially level AA websites, but never a fully accessible level AAA one. Presenting 3D to users with vision deficiencies is tricky. Addressing color deficiency is easier, and photosensitive epilepsy is also not extremely hard to account for.

@vielzutun.ch this time it is a professional website for a client. My question is about making a 3D website accessible. For example, if all the text is inside the three.js canvas, screen readers will not be able to read it. So not deployment, but rather development.

@PavelBoytchev it will not be a very high-end product. In the projects that you did make accessible, how did you do it? A “hidden” HTML layer with the canvas’s content?

I don’t remember the specific details, as we used an online service that inspected the web pages and recommended what to do. As far as I remember, apart from addressing colors, sizes and alignments, we had to add ARIA attributes to the canvas and hide the fallback content with CSS, so it is not visible but screen readers can still access it.

More info on ARIA.


OK, I understand that you want to present information barrier-free to an audience which might include people with various disabilities.

Sorry I can’t help you with that.


You may appreciate my agnostic framework tool and exhaustive type notes. There are several presets to try which represent structured custom interfaces. These are functioning, interactive use cases which may help you: a game board, GUI with extruded image contours, a scripted presentation. I think it would help extend to peripherals even such as braille / JAWS.

Reminder: the previous paragraph directly addresses an SEO/SEM best-practice application.

To qualify my response: I have studied accessibility and worked in front-end production roles. I see governments standardizing their major departments… I don’t know every instance of ANSI, OSHA, IEEE, or AAA… who does? I believe the survival of best-practice resources (for consumption or authoring) is the result of (blog-style) CMS templates in a supportive community. So I made this tool to be easy to loop and hook and insert widgets into.

Reminder: stateless loaf of bread is self-aware.

So as not to detract from your previous messages, I shall postpone my next chunk at your disposal.

Happy to see a question like this.

I’ve been doing “normal” web accessibility (semantic HTML, WAI-ARIA, WCAG, EN 301 549, etc.; “normal” meaning “not 3D” here), and when I consider what can be done with 3D, it all comes down to the actual function of the 3D UI. As we know, we can do a lot of amazing things with three.js, and generally with WebGL/3D/VR/AR/XR on the web; some are quite simple to make accessible, while others are close to impossible, or even impossible.

So - before I dive into what could be an accessible 3D UI, I need to point out the following:

  • A 3D scene can be like a simple, static image: sometimes decorative, sometimes informative and in need of an alternative text.
  • It can be “like a video” (where the user only passively observes) and, if not purely decorative, again requires proper alternatives, often also audio descriptions.
  • Or we can have a 3D version of an advanced web interface, built with 3D objects instead of HTML elements.
  • Or a totally innovative UI that does not even exist out there yet and is more sci-fi than not (for example, 3D scene orchestration with AI-recognized gestures, poses, voices, etc.).
  • I must also mention the game scenario, where the 3D UI is a full-blown actual game.

These are just a few examples; we could surely segment the possibilities into dozens or hundreds of categories, but this is just to illustrate the range that defines how we can make things (more) accessible.

We basically need to consider how to make things perceivable, operable, understandable and robust (the POUR principles that are the basis for the WCAG and EN 301 549 standards, often required by legislation like the European Accessibility Act). But WCAG / EN 301 549 success criteria are limited when it comes to modern tech, even more so when it comes to 3D, unless we build on POUR ourselves and transpose the HTML-oriented concepts to 3D ones.

Just for transparency: by only satisfying WCAG, on any level, we can still end up with a totally inaccessible, or at least unusable, experience for some users…

A parallel semantic DOM, synced with the visible UI, is an excellent start. But that is just one aspect. It will help with screen readers, refreshable braille displays, voice control, keyboard support and a lot of other assistive tech that relies on semantics. But there is way more to consider; we also need to consider the visuals: supporting zoom, preventing reflow, providing good contrast for text and interactive elements, status messages for screen readers, and so on.
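A minimal sketch of that parallel-DOM idea (element names, classes and content here are illustrative, not taken from any specific project):

```html
<main>
  <!-- The rendered scene; hidden from the accessibility tree because the
       semantic copy below carries the same content. -->
  <canvas aria-hidden="true"></canvas>

  <!-- Visually hidden (NOT display:none), kept in sync with the scene. -->
  <div class="visually-hidden">
    <h1>Showroom</h1>
    <button type="button">Rotate to the next product</button>
    <p aria-live="polite" id="scene-status"></p>
  </div>
</main>
```

The important part is that the hidden copy stays synced with the canvas: whenever the scene state changes, the corresponding headings, buttons and live-region text must change too.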

How to describe the visuals in text and audio can be the most difficult part if the UI is complicated. Finding the balance between too little and too much information is hard when we rely on a non-visual modality for a visually-first scene or situation. I see that AI with computer vision is doing an amazing job here and hope it will be even better (and less biased) in the near future. Some screen reader users can already have a dialog with visuals, but that may not be useful for all scenarios. Automatic AI audio descriptions, or alternative text based on the 3D perspective, are perhaps a possibility, but context is key and there is a thin line between too much and too little information. So once again, this should not be left unsupervised; it is best designed for intentionally (I know there are some ideas to provide text alternatives in 3D models, but I am afraid they lack context).
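As a toy illustration of generating such a description from scene data (the data shape and function name here are hypothetical, not a three.js API; a real version would need the intentional, context-aware curation described above):

```javascript
// Hypothetical sketch: build a short left-to-right text description from
// scene data, e.g. to push into an aria-live region when the scene changes.
function describeScene(objects) {
  // Sort by x so the description follows visual reading order.
  const sorted = [...objects].sort((a, b) => a.position.x - b.position.x);
  return sorted
    .map((o) => `${o.name} at x=${o.position.x}, y=${o.position.y}`)
    .join('; ');
}

const summary = describeScene([
  { name: 'sphere', position: { x: 2, y: 0 } },
  { name: 'cube', position: { x: -1, y: 0 } },
]);
console.log(summary); // "cube at x=-1, y=0; sphere at x=2, y=0"
```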

I’ve been curious about the possibilities for years now, and the things by @kpachinger look very promising; thanks, I must dig into them more. I am confident that some UIs made with three.js, and WebGL in general, can already conform to WCAG and even be usable, but I think it can be way more difficult to make an existing UI accessible and usable than it is when we start with accessibility and don’t try to “bolt it on” later. Ideally we have a group of people with different disabilities who help us co-design and co-develop. This can and should be done more, but is often not possible due to limited resources (mainly poor awareness and lack of planning). We can look at the gaming industry and all the amazing accessibility efforts made there; I think we can all learn a lot from it.

A combo of 3D specialists, people with disabilities, UX designers and accessibility specialists, co-designing from the start, would have the best chance of making things as accessible as possible when things are relatively complex, in my opinion.

Sorry for this long post, I got a bit passionate, as I really want more focus on accessibility and love that I am not the only one. Back to the original question: it really does depend a lot on the UI in question. Specifics matter a lot, as we have so many options in 3D, and even after 25 years we still struggle with 2D accessibility. But things improve all the time and we can do better. Thanks for being a part of it.

Very interested in process, please let me know more :slight_smile: and thanks again.


Thank you so much, @BogdanCerovac . This is exactly the guidance I was looking for. I will take some time to look into the resources you mentioned.

what is the actual function of 3D UI

I actually laughed out loud when I read this because… yes! What is the actual function? My immediate response was “to make it look pretty”, but really, what function does it have in my page?

I will have to think it through, because this sentence challenged the perception I had of the relationship between the content and the visuals.

I must also mention the game scenario, where 3D UI is a full-blown actual game

This is very true. I made a full game with Three and R3F and found some unexpected roadblocks.
It’s here: https://gilneas-bank.lucaslamonier.com/

Having spent a huge chunk of my life playing games, my hand automatically positions itself over the WASD keys the moment my brain registers a game in front of me. Moving the camera with the mouse is also a no-brainer. But it wasn’t a very pleasant experience for my mother when I asked her to playtest.

So I had to rethink the controller to make the camera rotate with movement. And fine tuning it was an absolute nightmare.

This was an additional layer of accessibility: accounting not for disabilities, but for how much the user has played video games in their life.

Not going to go into making it accessible for old/weak hardware and for mobile because that is an entire conversation on its own. (Optimizing this project was wild, I learned a LOT and it was really fun in the end)

I think that it can be way more difficult to make an existing UI accessible and usable than it is when we start with the accessibility and don’t try to “bolt it on” later

Following the hook of “what is the actual function”, I believe this is a very nice follow-up. The project I am working on is at a reasonably early stage, so this will be very helpful.

All in all, today I learned a few tricks. Thank you.


Your qualification is informative. Hard skills translate into deliverables, unlike perfunctory soft skills. Ungrounded effort detracts from an inexhaustible search for progress.

Evergreen question: Do platforms exist, and awareness, and distribution? Yes, assuming the problem is to reinterpret a schema. Communities (such as Meta, Android, etc.) have certified roles established to leverage. The upside is that extension(s) only require another schema (of any overloaded kitchen-sink references). The downside is that custodians may have innumerable additional reporting practices to discover.

Authority to persist is the token of actuation on said platform. How sensational a UX is may be the result of the license holder’s permissions, and stated intent versus usage audit. Recent history has made great strides in memorandum capacity: permanence, locality, inference. Basically trust levels, or guidance if a retired Seal overhears Deep Blue advise GTA6 to harm a transient. I say this from the perspective of public news consumption about big contract seekers.

I’m sorry, but WHAT


Exactly. If I reply to someone else, accessibility constraints may prevent full comprehension by a third party. That third party may personally request specific augmentation (do they need hieroglyphs to enshrine the significance of comments?)… even publicly with no specific context.

But there is usually an upkeep cost associated to justify any further response. That was perhaps my point… defined boundaries. The incentive to promote proper accessible 3d.

I don’t think I can add any more than @BogdanCerovac has covered, but if it helps others, here’s how I tackled this problem when building my fully 3D site with accessibility considerations.

One of my main concerns (in addition to accessibility) was how to manage content delivery and editing if the site lived entirely inside a 3D scene. The solution I landed on was inspired by the idea of using a computer terminal in a video game, e.g. Grand Theft Auto.

The result is that the Open Studios website runs in two modes: 3D and HTML.

  • 3D is the main experience that should work on most modern devices.
  • HTML kicks in only if you have JavaScript disabled or your hardware can’t render the 3D scene.

The site is built with 11ty, which lets us separate the content layer from the presentation. That means the same content can be rendered either as a 3D experience (via an iframe, admittedly) or as a more traditional HTML output. To make the experience seamless, it uses a screenshot of the page as a texture on the virtual monitor, which allows the iframe to load in the background while you move up to the screen.

I’m a self-taught web developer from the late 1990s, back when I was mostly tinkering with Flash and VB. By the time I started my professional career in the early 2010s, accessibility had become a major priority and that’s something I’ve carried forward into my own business today with projects like this.

If anyone’s keen to know more about the site build I wrote a retro - Building A Cool 3D website: Tips, Tricks and Lessons Learned – Paul Brzeski


It is expensive to crawl the taxonomy and breadcrumbs of a realtime app (which probably relies on nonstandard tags). That’s not what search engines, or feedback form factors, do. Plus, you actually limit compatibility by promoting unreliable feature experiments. Some domains use keyword stuffing in manifest transcripts. These fly-by-nights promote features until blacklisted. Yet we still map the galaxy after Lilo meets Stitch.

In terms of a 3d experience (not just a UI structure), there are “things” that could either be curated (at app level) or accepted as “things”. Otherwise your Big Dipper is my cloud-shaped meatball.

~ Freckles “Cat$ha” Granade

The 11ty suggestion is awesome! Thank you. I will definitely look into it.

I’ve just checked it out and was very impressed by the platform game like experience, well done.

Semantically, the whole page is a navigation landmark with a single link (Lamonier), a main landmark with the canvas (where all the 3D magic happens), and then the Next.js route announcer, semantically used to announce route changes to screen readers (basically role="alert").

I’ve considered the accessibility of similar projects (well, games) before, but never got a real project to analyze through a WCAG, accessibility and universal design lens. I can, however, give some quick tips on how to make it more accessible, at least:

  1. As you mentioned, WASD is quite a standard, but I would suggest adding some visible instructions (perhaps the navigation could have an instructions sub-menu that describes this and potentially other important things). That would be an improvement; WCAG does not say how the keyboard should work, it just needs to work with a keyboard. Perhaps we could also have a “warp mode” that would allow people to jump directly to the sections we otherwise need to “walk” to.
  2. That being said, how do you operate this with a mobile/touch device? I’ve seen people use virtual joystick-like controls that could work here. I’ve just used my browser’s phone simulator after loading the page in desktop mode… I didn’t test with a real phone…
  3. Responsiveness: please support responsive viewports and zooming in. When I zoom in, only the Lamonier logo changes; the rest is unchanged. This can be difficult to support, but perhaps there are options to make things larger. Text especially should support zooming to 200% (as per WCAG 1.4.4). And as the game here is essential, I would like to think we could do something about it there as well. Ideally we should also support 320 × 256 px viewports (ref. WCAG 1.4.10).
  4. The canvas element, as mentioned, has no semantics, but in connection with my point no. 1, I think we could make all of it quite semantic, in parallel with the canvas: we visually show the canvas, but “behind” it we can have headings and anchors and buttons and modals and what not. I see that when the character walks to a marked section, a modal pops up. We could perhaps have a navigation to all these modals, for all users. So if they want to play, go for it. If they don’t want to play (or the canvas is just not an option for them), give them the same information. I guess it’s a win-win for everybody (people, SEO, AI, etc.) :slight_smile: .
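A rough sketch of that last point (all names and content illustrative): the sections the character walks to could also exist as plain landmarks behind the canvas, reachable without playing:

```html
<nav aria-label="Sections">
  <ul>
    <li><a href="#about">About</a></li>
    <li><a href="#projects">Projects</a></li>
    <li><a href="#contact">Contact</a></li>
  </ul>
</nav>

<!-- Same content as the in-game modal, visually hidden but in the
     accessibility tree (not display:none). -->
<section id="about" class="visually-hidden">
  <h2>About</h2>
  <p>The text the modal shows when the character reaches this spot.</p>
</section>
```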

And nice touch with the video fallbacks. I didn’t get to watch them, but if you can add some textual alternatives (or audio descriptions, if time allows), that can also help with understanding.
The time-based media section of WCAG, which covers media that bears information, has a lot of possibilities as well.

These are just some thoughts. And far from conformance, but it would at least be a start and improvement IMO. And specifically for games there are some quite good resources - like Game accessibility guidelines that will for sure show even more options for the game itself.

I like the concept and people more experienced in gaming would for sure add even more tips on how to make it even more accessible. I wish we would have more research and user insights available for this kind of exciting interface design, that would help us all…

Right, I also think a semantic “parallel” can be the solution.

I’ve quickly visited https://openstudios.xyz/ with a screen reader, and at first I managed to access the content, but then I was kind of “thrown out” and could not use the site. Visually I could see that the 3D had rendered and I was in the room. But code-wise, all the semantics were removed (using CSS display:none removes all semantics for assistive technologies and for the keyboard as well), so basically there was just a <canvas> for me.

I didn’t have time for more testing, but I suggest you don’t use display:none if you don’t want to take the semantics away (it works recursively, so setting it on a parent hides the whole tree).

Not sure if that was the intention (I hope it was not), but my suggestion is to keep both the visual and the non-visual parts (synced) at all time, so that people with assistive tech can access the semantics behind and people that don’t use assistive tech can use the visible UI.

There are multiple solutions for this “visually hidden, but accessible for assistive tech” - this is one of my favorite blogs about it: Inclusively Hidden | scottohara.me.
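For reference, a common form of that utility class (adapted from widely shared community versions, such as the one in the linked article) keeps the element out of sight but in the accessibility tree, unlike display:none:

```css
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  overflow: hidden;
  clip-path: inset(50%);
  white-space: nowrap;
}
```

Note this class is for text and landmarks; interactive elements hidden this way remain focusable, which is usually what you want for a parallel semantic layer.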

Google seems to care less about display:none as far as I can see, and that’s nice, but assistive tech and keyboard users will struggle, unless I missed some switch or something that would prevent the canvas taking over / display:none being added…


Thank you @BogdanCerovac !

Sounds like I have some work to do! I had thought that having the HTML fallback in the page was enough for screen reader technology, but it sounds like that was a false assumption.

I genuinely didn’t know display:none would have an impact like the one you described, as I just assumed no JavaScript would ever get loaded in that scenario. Is there any free/open source testing software I can use to simulate a screen reader for testing this?

Also if you can suggest some reading materials for me to improve my knowledge I would greatly appreciate it!


Assume the user: primarily blind but otherwise highly motivated. They have 1 second to rely on incoherent hierarchy.

  • scenario 1: perspective image gallery (circa 1996) with map hotspots (@ PennyHost)
  • scenario 2: military 3d vr headsets (ie HoloLens) in a $10 billion contract (@ 2 million per)

A CoPilot could scan the entire view in one pass… or convert individual code conventions in O(n). Assume 60 fps is appropriate for “normal” users. An assistive rate may (1) require much less latency
and (2) expect serious delays or failures.

~ MSDF “Two 7s” 77


You are welcome and thank you for caring.

The HTML fallback needs to be exposed, and display:none prevents that. It even prevents keyboard usage, and that is intentional: display:none hides the content from everybody (though obviously not from search engines, hehe).

In a nutshell:

There are a lot of possibilities for simulation, but IMO they only add an abstraction layer that can present other issues, so I just advise people to try out a real screen reader after understanding how HTML, CSS and JavaScript “render” the accessibility tree. I like to use Google Chrome’s accessibility view for a quick check, but then I also check with a real screen reader, like the open source NVDA.

I always recommend starting with the official docs, like the W3C WAI ones, which offer a good intro. The WebAIM ones are very good as well. For a quick review we can also use the web.dev ones. There are a lot of resources out there, especially intros, basics and some mid-level material; that is a good start. Starting with HTML and its semantics is often a very solid foundation. When HTML does not offer something specific enough, we reach for the WAI-ARIA 1.2 standard, but some parts of ARIA are not supported well and can even make our efforts worse; the “Using ARIA” doc explains it well.

(sorry was not allowed more than 5 links, but they are just a search away, please look for official W3C docs first)….

It takes time, but progress over perfection means a lot. Thank you for asking, it’s a journey and I find it highly rewarding, sure that others will as well :slight_smile: .

Making WebGL (more) accessible is perhaps one of the final frontiers (together with AR, VR and XR), and anybody who cares helps a lot. Thanks!
