I know there are many threads about transparency problems on the internet, but I need a solution.
The Problem:
I create some mesh objects with a simple transparent material, and when the objects overlap, sometimes (OK, always) when I rotate the camera the objects are not displayed correctly.
The mesh objects have 2-4 units of space between them. When I increase the spacing it works, but then I have to rotate the objects, so I cannot increase the space between them.
Here is some information on what I have tested:
depthWrite: false
depthTest: false
transparent: true
alphaTest: 0.5
nearPlane : 100
renderOrder (it works, but it has many other problems; not the correct solution)
many more…
(from many threads I found on the internet)
I use three.js v101 (same problems with the newest v111).
See attachment (the close button, the text area and the big button all have transparent (alpha) values). When I rotate the camera, sometimes it looks correct and often it looks broken.
One way to solve such depth sorting issues is to explicitly define the rendering order via Object3D.renderOrder. It seems from your listing that you have not tested this approach so far. It would be interesting to know if using Object3D.renderOrder helps in this use case.
Yes, I tested it with renderOrder too. And it works (for this case only)!
But the problem is that the z-index is then ignored. That means when I have, e.g., two complex objects one above the other, the view is incorrect, because for 3D it is not the render order that is relevant but the z-index.
That's not something I can use meaningfully.
I'm not sure whether it is three.js or the WebGL renderer that causes the problem. I read that the WebGL renderer first renders all objects that have no transparency, and then all the transparent ones (very stupid).
I’m afraid this is the correct approach. You want to render opaque objects first from front to back in order to avoid overdraw as much as possible. Then you render transparent objects from back to front in order to do the blending right.
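That two-pass policy can be sketched with plain array sorts. This is a minimal illustration, not three.js code; `z` stands for a precomputed view-space depth, with larger values meaning farther from the camera:

```javascript
// Minimal sketch of the usual two-pass ordering, assuming each render
// item carries a precomputed view-space depth `z` (larger = farther).
const opaque = [{ id: 'floor', z: 8 }, { id: 'wall', z: 2 }];
const transparent = [{ id: 'glassA', z: 3 }, { id: 'glassB', z: 7 }];

// Opaque pass: front to back, so early depth testing rejects hidden fragments.
opaque.sort((a, b) => a.z - b.z);

// Transparent pass: back to front, so each blend composites over what is behind it.
transparent.sort((a, b) => b.z - a.z);

console.log(opaque.map(o => o.id));      // [ 'wall', 'floor' ]
console.log(transparent.map(o => o.id)); // [ 'glassB', 'glassA' ]
```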
BTW: Instead of sharing animated gifs or videos, it’s in general easier to help users when live examples are shared.
You generally should not write depth for transparent objects. depthWrite = true is why you get some blue (sky) pixels sometimes. It happens when those objects are rendered before the gray plane - which makes me wonder if those objects are really transparent, because then they should be rendered after the plane and it should work… Or did you also set the plane as transparent? A live example would really help.
Here is my use case on CodePen: when you rotate to the left, the transparent background shows the background from behind the second mesh. If I use depthTest/depthWrite, the image is hidden.
What can I do?
Note: I cannot use renderOrder! It ignores the z-axis of the other objects in my project.
^For that particular example, and cases where the texture has no semitransparency (everything is either 0% opaque or 100% opaque), you will want to use alphaTest without setting transparent=true. Then you shouldn't have to worry about depth settings or sorting issues.
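A minimal sketch of such a material setup, assuming a texture whose alpha is strictly binary (the parameter values are illustrative, and the THREE construction is commented out so the snippet stays self-contained):

```javascript
// Illustrative parameters: alphaTest discards fragments below the threshold,
// so the object can stay in the opaque pass with depth writes enabled.
const materialParams = {
  map: null,          // your fully-opaque-or-fully-transparent texture here
  alphaTest: 0.5,     // discard fragments with alpha < 0.5
  transparent: false, // keep the object out of the sorted transparent pass
  side: 2,            // numeric value of THREE.DoubleSide
};
// const material = new THREE.MeshBasicMaterial(materialParams);
```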
When two transparent planes are close and one is both in front of and behind the other, it's really hard to get them to behave as intended. It seems as though the fragments of a given primitive are all assigned the same depth value (e.g. the value of the fragment closest to the camera). The plane in front disappears instantly; it doesn't get gradually clipped behind the other when rotating the planes.
If it were possible to implement, what would be useful is a more granular form of renderOrder, like render order groups, to be able to say which objects should definitely be in front of others but otherwise use the depth buffer.
A material could perhaps have a renderOrderGroup property: if two materials being rendered shared the same group, the one with the higher renderOrder would be drawn in front; otherwise rendering would follow the depth buffer.
It seems like it should be possible to fix this problem automatically, though. Shaders can compute a floating-point camera-space z-value per fragment and use that to determine render order. Even if that were too slow to do for a whole scene, it should be fast enough for selected objects.
Thanks, @donmccurdy, but I do have semitransparency. The PNG I used in the CodePen demo just happens to have none, but in my real case I use it.
I couldn't get polygonOffset to work on that example. Only renderOrder and setting the background object to non-transparent worked, but neither of those is ideal. polygonOffset should work with a large enough value.
Looking at the sorting functions in three.js, it could have to do with the z value swapping when rotating the camera. The relevant functions are WebGLRenderList.sort() and the painterSortStable and reversePainterSortStable comparators.
It looks like those tests override part of the depth-buffer ordering. The sort checks the render list's groupOrder, then renderOrder, then the z value (I assume the camera-space z-value of the object), and lastly the id, which is the order the objects were added to the scene.
If you are OK with modifying the three.js library, you could add a custom condition to that sort function, checked before the others, that compares a custom value on the render items; you would set those values in your app, like the renderOrderGroups I mentioned above.
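A hedged sketch of what such a patched comparator could look like. The `renderOrderGroup` field is hypothetical, not a real three.js property; the function mirrors the shape of the stock transparent-pass comparator (reversePainterSortStable) and falls back to back-to-front depth sorting:

```javascript
// Hypothetical variant of three.js's reversePainterSortStable, extended with
// a made-up `renderOrderGroup` field. Items in the same group are ordered by
// renderOrder; everything else falls back to back-to-front depth sorting.
function groupedReversePainterSort(a, b) {
  if (a.renderOrderGroup !== undefined &&
      a.renderOrderGroup === b.renderOrderGroup &&
      a.renderOrder !== b.renderOrder) {
    return a.renderOrder - b.renderOrder; // forced order within a group
  }
  if (a.groupOrder !== b.groupOrder) return a.groupOrder - b.groupOrder;
  if (a.z !== b.z) return b.z - a.z;      // back to front
  return a.id - b.id;                     // stable fallback: insertion order
}

// Two UI planes in the same group keep their forced order even though
// depth sorting alone would flip them:
const items = [
  { id: 1, groupOrder: 0, renderOrderGroup: 'ui', renderOrder: 2, z: 10 },
  { id: 2, groupOrder: 0, renderOrderGroup: 'ui', renderOrder: 1, z: 1 },
];
items.sort(groupedReversePainterSort);
console.log(items.map(i => i.renderOrder)); // [ 1, 2 ]
```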
There's no satisfactory answer here yet (only workarounds that disable critical parts of the scene's regular rendering, which causes other issues), so I posted the question on Stack Overflow:
Codepen (based on above example, but with more depth/distance between planes, still same issue):
There’s no satisfactory answer here yet (only workarounds that disable critical parts of the regular rendering abilities of the scene, which causes there to be other issues)
I'm not sure what a satisfactory answer would be; there are only workarounds. There is no magic answer that makes transparent rendering just work in every scenario. It's a difficult problem that you have to solve within the limitations of the graphics API; even with newer DirectX and OpenGL versions it's still an issue (though newer raytracing APIs may change this). The best answer is artistry: select the combination of "hacks" that works best for your scenario. This is what modern games have to do, as well.
Here are some of the techniques available for rendering transparency, some of which require more effort than others within three.js:
Use alphaTest to clip transparent textures. You won't be able to achieve partial transparency here.
Render transparent objects with depth write enabled, which will lead to the background clipping you're seeing.
Render transparent objects with no depth write and back-to-front sorting. This is the most common approach but can result in objects "flipping" order when moving the camera, which can be undesirable.
Render partial transparency with a dither pattern while still writing to depth. This is also called "screendoor transparency". Transparent objects will render on top of each other coherently, but the dither pattern may be undesirable. Performance may also be affected negatively.
Weighted, blended order-independent transparency seems to be an approach that takes a weighted average of transparent pixels, but again it won't give you the correct overlap look. It will, however, avoid "popping" as you move the camera.
Depth peeling will give you correct transparency but is performance-intensive and requires rendering all the transparent objects in the scene multiple times to properly blend all layers of transparency.
Per-fragment sorted transparency will give you correct transparency but again is performance-intensive and may not even be feasible using current WebGL APIs. This involves creating a linked list of every transparent color and depth at every pixel and sorting them front to back before blending them for the final color.
I’m sure there are other approaches but the point is there’s a lot of options and there’s not one well accepted approach for rendering transparency correctly. You really just have to pick what works for your use case.
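To make the screen-door idea from the list above concrete, here is a minimal plain-JS sketch of ordered dithering with a 4x4 Bayer matrix. In a real renderer this comparison would run in the fragment shader; the pixel loop here only demonstrates the kept-fragment fraction:

```javascript
// 4x4 Bayer matrix, normalized to (0, 1): per-pixel thresholds for
// ordered ("screendoor") dithering.
const BAYER4 = [
  [ 0,  8,  2, 10],
  [12,  4, 14,  6],
  [ 3, 11,  1,  9],
  [15,  7, 13,  5],
].map(row => row.map(v => (v + 0.5) / 16));

// A fragment at pixel (x, y) with opacity `alpha` is kept (and writes depth)
// only if alpha exceeds the local threshold; otherwise it is discarded.
function screendoorKeep(x, y, alpha) {
  return alpha > BAYER4[y % 4][x % 4];
}

// At alpha = 0.5, exactly half of a 4x4 tile's fragments survive:
let kept = 0;
for (let y = 0; y < 4; y++)
  for (let x = 0; x < 4; x++)
    if (screendoorKeep(x, y, 0.5)) kept++;
console.log(kept); // 8 (of 16)
```

Because the surviving fragments are fully opaque and write depth, overlapping "transparent" surfaces resolve coherently through the ordinary depth test, at the cost of the visible dither pattern.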
I've been playing with dithering; I also randomly came across your screen-door transparency example, @gkjohnson. In old 2D games transparency was achieved this way, and the result looked smooth rather than dithered because the monitor blended the pixels.
I've tested some approaches to smooth/"undither" the result. I think this could be the best option in terms of cost, and a great alternative to the hard/sharp-edged alpha test, especially for subtle alpha transitions like the edges of vegetation or hair.
I got a basically perfect-looking result with supersampling, a modified blur for averaging, and FXAA at the end, but a doubled resolution isn't an option. I rendered the alpha masks into an alpha buffer to use for the averaging pass. Especially for wildly self-intersecting geometries like vegetation, this seems to be the only reasonable option. There was an engine (I'll see if I can find it again) that implemented transparency this way too, though it used MSAA and other techniques. I didn't pursue any approach using higher resolutions or subpixels further, as I rely on a multi-render-target setup, and either way it would get much more expensive.
Another option for some might be enabling alpha-to-coverage:
gl.enable(gl.SAMPLE_ALPHA_TO_COVERAGE);
With multisampling available it gives nice results, still with some minor visible bleeding but good enough; if multisampling isn't available it behaves like an alpha test.
You chose a path, but there is a big hole preventing you from reaching your destination: you tried to jump, but it's too wide; you tried to get a rope, but it's too short. Maybe the solution is to try another path!?
Your box menu looks very much like an HTML popup! Why not render it in a hidden DOM with HTML/CSS, capture the view as a bitmap and project it onto a plane in 3D as a texture? Then you raycast the mouse pointer and create fake mouse events on the DOM to trigger your actions…
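The mouse-forwarding step of that idea can be sketched as a pure mapping from the raycast hit's UV to DOM pixel coordinates. This assumes the plane's texture fills a `width` x `height` element and that the UV origin is at the bottom-left, as in three.js; the event-dispatch line is a hypothetical usage shown as a comment:

```javascript
// Map a raycast intersection's UV (0..1, origin bottom-left, as in three.js)
// to pixel coordinates of the hidden DOM element (origin top-left).
function uvToDomCoords(uv, width, height) {
  return {
    x: uv.x * width,
    y: (1 - uv.y) * height, // flip: DOM y grows downward
  };
}

// A hit in the middle of a 300x150 element lands at its center:
console.log(uvToDomCoords({ x: 0.5, y: 0.5 }, 300, 150)); // { x: 150, y: 75 }

// The resulting point could then feed a synthetic event (hypothetical usage):
// element.dispatchEvent(new MouseEvent('click', { clientX: x, clientY: y }));
```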