Demo: Order Independent Transparency with Depth Peeling

Overlapping or intersecting transparent objects are some of the most problematic things to render in graphics. Techniques like alpha hashing, dithered rendering, and weighted-blended transparency can be fast but come with their own visual artifacts that may make them undesirable.

Depth peeling, on the other hand, can be fairly performance-intensive but produces correct transparency overlap, even for intersecting objects. Here are a few examples from a demo I wrote last year:

Rendering two intersecting boxes with and without depth peeling.

This works by progressively rendering each subsequent overlapping layer of the transparent objects and compositing them together - hence the name “peeling”. Here’s another demonstration on a more complex model with multiple layers of geometry, toggling between the depth peeling approach, typical transparency rendering, and fading the opacity. You can see that the model fades smoothly to a “fully opaque” model with no visual popping or artifacts:

Video demonstrating the benefits of depth peeling with a complex, multilayer model.
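Conceptually, the compositing step is front-to-back “under” blending: each newly peeled layer only contributes through whatever transmittance is left by the layers in front of it. A quick sketch of that math in plain JavaScript (a real implementation does this per pixel in a shader; the layer colors and alphas here are made up for illustration):

```javascript
// Composite depth-peeled layers front to back using "under" blending:
// each new layer contributes only through the transmittance left over
// from the layers already composited in front of it.
function compositeLayers(layers) {
  // Accumulated premultiplied color and remaining transmittance.
  const color = [0, 0, 0];
  let transmittance = 1.0;
  for (const { rgb, alpha } of layers) { // layers ordered nearest-first
    for (let i = 0; i < 3; i++) {
      color[i] += transmittance * alpha * rgb[i];
    }
    transmittance *= 1.0 - alpha;
  }
  return { color, transmittance };
}

// Two 50%-opaque layers: a red one peeled in front of a green one.
const result = compositeLayers([
  { rgb: [1, 0, 0], alpha: 0.5 },
  { rgb: [0, 1, 0], alpha: 0.5 },
]);
console.log(result.color);         // [0.5, 0.25, 0]
console.log(result.transmittance); // 0.25
```

Because each pass extracts the nearest remaining layer, the layers always arrive in the right order, which is why the result is correct even for intersecting geometry.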

For detailed models like architectural or CAD models that use different kinds of see-through views, this type of correct transparency can improve the clarity of the result.

Transparent drone CAD model rendered with depth peeling

One quirk is that you need to specify the number of “layers” to render ahead of time, which can have a noticeable impact on performance if too many are specified, so some tuning is required depending on the models being displayed. With enough layers, complex transparent 3d models can “just work” when quality and correctness are important and the performance cost is acceptable; otherwise it’s best used with a limited number of layers. With WebGPU it may be possible to automatically determine the required number of layers.
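The WebGPU idea amounts to stopping once a peel pass extracts no new fragments (on the GPU this would map to something like an occlusion query). A toy JavaScript model of that early-out, with made-up per-pixel fragment depth lists standing in for the depth buffers:

```javascript
// Toy model of depth peeling with early termination: each "pixel" holds an
// unsorted list of fragment depths, and each pass peels the nearest fragment
// strictly behind the previously peeled depth. Peeling stops as soon as a
// pass extracts nothing, rather than always running maxLayers passes.
function countPeelPasses(pixels, maxLayers) {
  let passes = 0;
  // Depth peeled so far per pixel (-Infinity = nothing peeled yet).
  const peeledDepth = pixels.map(() => -Infinity);
  for (let layer = 0; layer < maxLayers; layer++) {
    let peeledAny = false;
    pixels.forEach((depths, i) => {
      // Nearest fragment behind this pixel's previously peeled layer.
      const candidates = depths.filter((d) => d > peeledDepth[i]);
      if (candidates.length > 0) {
        peeledDepth[i] = Math.min(...candidates);
        peeledAny = true;
      }
    });
    if (!peeledAny) break; // occlusion-query-style early out
    passes++;
  }
  return passes;
}

// Three pixels with 1, 3, and 2 overlapping fragments: only three passes
// are needed even if far more layers were budgeted.
console.log(countPeelPasses([[0.2], [0.1, 0.5, 0.9], [0.3, 0.7]], 8)); // 3
```

The pass count ends up driven by the deepest overlap actually on screen instead of a hardcoded guess, which is the tuning problem described above.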

And with the same data and some shader work, the multiple layers can be used for some more complex effects, as well. Here’s another experiment from years ago rendering a colored glass effect using a similar technique and darkening the color based on depth between layers:

Video demonstrating rendering colored glass material using a depth-peeling technique.

You can check out the repository here if you want to see or use some of the depth peeling code.

Some other “order independent transparency” techniques that may be interesting to investigate in the future are per-pixel linked list transparency (or k-buffer transparency) and “Multilayer Alpha Blending”. Both techniques keep some variation of a per-pixel list of transparent fragments during rendering that is then sorted and blended. New features in WebGPU may make some of these techniques more viable, as well.


Really cool, one you might have missed is Moment-Based OIT. Alan Wake 2 shipped with this, and I personally had very good results with it.


Stencil does improve the performance :slight_smile:

A long time ago we discussed this here:

I never figured out if you were in fact able to achieve this using textures; my conclusion was that without the proposed modification it was not possible.

Edit

Looks like it is possible now? Three has the API to do the proper setup?


@Usnul

Really cool, one you might have missed is Moment-Based OIT. Alan Wake 2 shipped with this, and I personally had very good results with it.

Yeah, I’ve heard of moment-based transparency - it looks great. From the paper it seemed like you could still get some artifacts, like blurring at hard edge intersections and incorrect-looking overlap, but maybe those can be alleviated? It looks like a really nice approach for more general use cases, though, and faster. A lot of my work has typically been with transparency for things like CAD models or cases where that precision is wanted, so I’m not sure if it would be the best fit? If I wanted something with similar correctness now, I think I’d probably look at Multilayer Alpha Blending, with WebGPU becoming more available. Always trade-offs :sweat_smile:.

I think depth peeling is probably too slow and requires some unintuitive layer tuning to be considered “turn key” for anything.

@dubois

I never figured out if you were in fact able to achieve this using textures, my conclusion was that without the proposed modification it was not possible.

For the translucent glass demo I wound up rendering depth in a dedicated pass to a color buffer and (I think) using discard in the other passes rather than relying on z testing. It was definitely just a demo and not anything practical, though.

Looks like it is possible now? Three has the API to do the proper setup?

Yup! You can reassign the depthTexture field now for this and other compositing effects. I wrote this demo to show it functioning but otherwise haven’t used it in anything. I needed reassignable depth for rendering something else a couple of years ago, though I don’t recall what it was now.

Stencil does improve the performance :slight_smile:

Yes, I was thinking stencil may be good, and possibly use of the scissor test as well?

That’s true. MBOIT is not perfect; it’s similar to moment shadows (aka variance shadows) and other moment-based techniques. You can increase the number of moments to improve accuracy, and you can use more complex trigonometric moments instead of simple power moments, but this is a trade-off as usual.

From my experience, having tested a bunch of models and scenes, the technique is perfect for cases where 99% is good enough. If you want 100%, ray tracing is the way to go, I think. Other engines seem to agree, as “ray traced transparency” is a thing. Though ray tracing on the web is still a bit elusive, at least for real-time use cases.

It really depends. CAD is typically very geometry heavy and, at least in my experience, complex CAD models tend to use a lot of material variations. So, at least as I see it, drawing the scene fewer times is a big advantage of techniques like MBOIT.

The accuracy definitely is a strong argument, but I don’t think it’s as applicable as it seems at a first glance. What we want for CAD typically is clarity, and not necessarily accuracy. Pixels are going to drift, and small-scale detail will be lost in the final image due to undersampling anyway. Often when we draw CAD, we still want it to look pretty, so we use things like analytical lights with finite distances and environment maps, all of these are approximations.

Finally, MBOIT is pretty much perfectly accurate when you have few layers, and when you have a ton of layers it will not matter, because neither the screen nor the viewer is capable of perceiving those blends accurately anyway, so it’s more a matter of looking “close enough”. But for sure, in some cases you need that 100%, and often clients are not graphics experts, so given a choice they will say “I want 100% accuracy”, just to be on the conservative side.


As for results, I took the same robot for a spin. It’s not 1:1, as I didn’t bother removing the shadows and environment map.

Also, my TAA implementation isn’t tuned for full transparency; it can handle it, but you get a bit of visible ghosting when the whole scene is just transparency (no depth information to use - a known TAA issue).

And a video:

The cases where moment-based techniques break do exist and they are very real, but those cases are quite specific too


To be clear, I’m very aware that MBOIT can look nice, and the model in the demo is not really what’s in question :sweat_smile: Complex or noisy overlapping transparency is really not what I’m concerned about. It’s the simpler cases where the paper demonstrates very noticeable failures. And that’s okay! These are cases that a lot of use cases, like games, don’t run into or can work around. Again, I’d have to try it out, but if these kinds of cases fail then I’d probably reach for a different transparency solution for some of the problems in my domain.
