I’m trying to have an alpha-clipped plane in a scene with post-processing effects, however the post-processing effects render the entire plane rather than respecting the alpha clip. So if I add an effect like depth of field, SAO, outlines, etc., it renders incorrectly behind the transparent plane.
Setting up the material for alpha clipping:

```javascript
material.alphaTest = 0.5;
material.alphaMap = alphaMap;
material.side = THREE.DoubleSide;
```
This makes it transparent as expected; however, we can see that all post-processing is ignored behind the plane:
I did actually try this earlier and it doesn’t seem to help. Logging the loaded model shows the change propagated to the mesh object, but it doesn’t change anything (the depth texture still renders the entire plane, same as before).
Ok, I quickly modified that CodePen sample to use the EffectComposer instead of the renderer and stuck a BokehPass on it so we can clearly see the transparency issue: https://codepen.io/phipho/pen/jORjbwJ
Hopefully this’ll help pin down the problem!
EDIT: Oh, and using map vs alphaMap doesn’t make a difference (already tried that one too, heh).
I’ve also tried modifying the fragment shader with `if (alpha < 0.5) discard;`, but this still doesn’t affect the EffectComposer depth, even though it works everywhere else (e.g., the colors are discarded correctly, just not in the depth map of the EffectComposer render).
Hmm… BokehPass, in its render function, overrides materials in the scene with MeshDepthMaterial
This is a limitation/bug/feature of BokehPass, I think
At this point, the override material knows nothing about which maps were applied to which mesh, and even if you set bokehPass.materialDepth.alphaTest = 0.5, it won’t have any effect.
Yeah ok, I was hoping that wasn’t the case, as my custom outline shader requires the depth texture pass, as well as bloom, SAO, bokeh, etc.
I’ll just have to see if there’s a way to get the UV and alpha texture data into the depth texture fragment shader or so…
Appreciate you looking into it!
They could be post-processed individually too, before merging with the rest of the already post-processed scene. Like working with two render targets and then combining them in a final scene.
Anyway, for single render targets I can imagine a fundamental conflict between alpha transparency and depth maps, especially for alpha values that are not 0 or 1. What should a depth map encode for a semitransparent area? The depth of the semitransparent foreground object? The depth of the background object? Or a blend of both depths (which would actually be interpreted as a wrong depth)? I think some post effects would need volumetric (3D) data in order to process semitransparency well; depth maps are flat (2D) data.
Most likely some specific set of postprocessing effects will have a specific solution. Another set might have a different solution. A global universal solution may not exist. Or it may?
Yeah, I’m not sure about semi-transparency. I’m coming from doing a lot of shader work in Unity, and expected there would be two modes on a material: transparency (semi-transparent), which isn’t something I specifically need in this project, and alpha clipping (0 or 1), which is much more performant and what you typically use for things like vegetation, bushes, etc.
Understandably there are a lot of hidden complexities. Anyway, I’ll either adjust the art style to avoid that type of bush/vegetation or figure something out — it seems there may be a way to get the UV and clip map data into the DepthTexture shader, but it may not be worth it for this specific project.