I’ve been trying various anti-aliasing methods, but they all have flaws. FXAA/SMAA are a bit blurry and the quality suffers in some cases; increasing the pixelRatio works great, especially combined with those, but it’s really taxing on the GPU.
The SMAA effect does a double pass: first an edge-detection pass, then a filter applied to those edges. It works well but shows flickering edges at times.
What I was thinking of trying is an edge-detection pass whose output is used as a mask for a render at a higher resolution, like 2x or 4x. That way the high-resolution pass would only render the edges, so it should be much faster than rendering the whole frame. Then I’d blend those smoother edges back onto the original.
Is there a way to mask a render call so that only the parts visible in the mask are rendered? This is different from applying a mask after rendering. I imagine it would work similarly to the clipping planes in the shaders, but using an image buffer to decide whether a fragment should be rendered or discarded.
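A fragment-shader sketch of that idea, assuming the edge-detection result is available as a screen-sized texture (the uniform names `maskTexture` and `resolution` here are hypothetical):

```glsl
// Hypothetical sketch: discard any fragment the edge mask doesn't cover.
// Assumes maskTexture stores edge strength in its red channel.
uniform sampler2D maskTexture; // output of the edge-detection pass
uniform vec2 resolution;       // render target size in pixels

void main() {
  vec2 uv = gl_FragCoord.xy / resolution;
  float edge = texture2D(maskTexture, uv).r;
  if (edge < 0.5) discard; // skip everything that isn't near an edge
  // ... normal shading for the surviving (edge) fragments would go here ...
  gl_FragColor = vec4(1.0); // placeholder
}
```

Note that `discard` only skips per-fragment work; the whole scene still goes through the vertex stage and rasterization.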
Maybe. For example: if you put a plane with your mask texture at a distance like 1.01 * camera.near and render it first, with a shader that discards only the edge pixels, the rest of the z-buffer will be filled. You could then render your stuff and have the z-buffer do the masking.
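A sketch of that near-plane shader, using the same hypothetical `maskTexture` uniform. The logic is inverted relative to a straight mask: edge pixels are discarded so they stay open in the z-buffer, while non-edge pixels write the plane’s very near depth and occlude everything behind them:

```glsl
// Hypothetical sketch: depth-mask plane rendered first, at ~1.01 * camera.near.
// Edge pixels are discarded, leaving the z-buffer open there; everywhere else
// the plane writes a near depth value that occludes the scene behind it.
uniform sampler2D maskTexture;
varying vec2 vUv; // plane UVs, passed through from the vertex shader

void main() {
  float edge = texture2D(maskTexture, vUv).r;
  if (edge > 0.5) discard;  // leave edge pixels open for the scene render
  gl_FragColor = vec4(0.0); // color is irrelevant here; only depth matters
}
```

In three.js you could render this plane with `colorWrite` disabled on its material so that only the depth buffer is affected.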
Thanks for the suggestions. I can’t use WebGL 2 unfortunately; device support isn’t good enough yet. But I think I can do something similar in WebGL 1 by passing a higher-resolution render target into the effect composer as the second parameter. I might have to use multiple composers, though, now that I think about it.
Using a plane as a camera mask should let me render the scene without the edges, but I’m not sure about using the z-buffer, since I might have to render the high-resolution buffer behind it first. If I set the front-most buffer to transparent, whatever is behind it would be rendered first. The main thing is to avoid rendering the full-resolution buffer in its entirety.
It might be best to use the plane mask to render only the edges into a high-resolution render target and then blend that on top of a normal render buffer. I’ll try it without the edge-detection step first: I’ll just use a basic shape to mask a high-res buffer and try mixing it onto a standard-resolution one.
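The final composite could be a simple full-screen pass, sketched here with hypothetical uniform names. It assumes the high-res edge render was stored with alpha marking the covered pixels, so the blend is just a lerp between the base frame and the high-res edges (which the sampler downsamples via linear filtering):

```glsl
// Hypothetical composite pass: blend the high-res edge render over the base frame.
// Assumes the high-res target's alpha is 0 where nothing was rendered.
uniform sampler2D baseTexture;   // standard-resolution scene render
uniform sampler2D edgesTexture;  // high-res edge render (linearly filtered)
varying vec2 vUv;

void main() {
  vec4 base  = texture2D(baseTexture, vUv);
  vec4 edges = texture2D(edgesTexture, vUv); // GPU downsamples via filtering
  gl_FragColor = vec4(mix(base.rgb, edges.rgb, edges.a), base.a);
}
```

For a smoother result than plain bilinear filtering, the edge buffer could instead be averaged over several taps, but a single filtered fetch is the cheapest starting point.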