It is always MSAA. When you do post-processing, setting antialias to true has no effect. It only works if you render directly to the screen (the default framebuffer).
If you are using post-processing, then antialias: true will not work, whether or not you add any effects. You have three options:
This also applies if you are rendering to a target without using the EffectComposer, although then you can’t use option 2.
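To make the distinction concrete, here is a minimal sketch of a post-processing setup in which the flag is silently ignored. The helper name is mine, and `THREE`, `EffectComposer` and `RenderPass` stand in for the usual three.js imports; they are passed in only to keep the sketch self-contained.

```javascript
// Sketch: why antialias: true does nothing with post-processing.
function setupComposer(THREE, EffectComposer, RenderPass, scene, camera) {
  // antialias: true requests a multisampled default framebuffer...
  const renderer = new THREE.WebGLRenderer({ antialias: true });
  // ...but the composer renders the scene into its own (non-multisampled)
  // WebGLRenderTarget, so the flag never comes into play.
  const composer = new EffectComposer(renderer);
  composer.addPass(new RenderPass(scene, camera));
  return { renderer, composer };
}
```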
I’ve been doing a bit more research here, as this is not something I had considered before.
First, there’s some discussion here that suggests this is a valid approach:
That’s as I’d expect, since this is kind of the whole purpose of multi sampled buffers.
However, they find the same as I do, which is that this only gives good results when using a high number of samples (8 or higher). If I leave the number of samples at the default value of 4, the result looks terrible.
When passing the WebGLMultisampleRenderTarget into the EffectComposer, both the readBuffer and writeBuffer use this multisampled target; however, any passes that create their own render target still create a normal WebGLRenderTarget, which might be the reason for this.
I’ve asked for clarification here:
No idea if that forum is the right place to ask WebGL questions, though.
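For reference, the kind of setup being tested looks roughly like this. This is a sketch under my own naming; the target class is the multisampled render target discussed above, and `THREE`/`EffectComposer` stand in for the usual three.js imports so the sketch stays self-contained.

```javascript
// Sketch: hand a multisampled render target to the EffectComposer so its
// readBuffer and writeBuffer become multisampled.
function createMultisampledComposer(THREE, EffectComposer, renderer, samples) {
  const size = renderer.getDrawingBufferSize(new THREE.Vector2());
  const target = new THREE.WebGLMultisampleRenderTarget(size.x, size.y);
  // The default of 4 samples looked poor in testing; 8 or more was acceptable.
  target.samples = samples;
  return new EffectComposer(renderer, target);
}
```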
@mrdoob, if this requires some changes to the EffectComposer or to post passes to work properly, is that something you would support?
Yes, sounds good to me.
Great, I’ll do some more research and see if I can figure out what needs to be done.
@mrdoob One other question that’s been on my mind for a while: what do you think about making WebGL2 easier to use?
Something like:
const renderer = new WebGLRenderer({
useWebGL2IfAvailable: true,
});
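One way such a flag could work internally is sketched below. This is not the actual three.js implementation and the function name is mine; `canvas` is anything with a getContext method, normally an HTMLCanvasElement.

```javascript
// Sketch: try a WebGL2 context first, then fall back to WebGL1.
function acquireContext(canvas, useWebGL2IfAvailable) {
  if (useWebGL2IfAvailable) {
    // getContext returns null when the context type is unsupported.
    const gl2 = canvas.getContext('webgl2');
    if (gl2 !== null) return { gl: gl2, isWebGL2: true };
  }
  return { gl: canvas.getContext('webgl'), isWebGL2: false };
}
```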
@looeee @Mugen87 what I’m thinking about is https://github.com/mrdoob/three.js/pull/16895
Or is this the same as when I use antialias: true in the WebGLRenderer?
Maybe it’s better to discuss this topic on GitHub.
Not sure why you are referring to this PR
I was hoping I could get a quick informal answer (maybe / no way) before I go to the trouble of making a feature request there.
Sorry, looks like I misunderstood this.
That part is easy. But what happens when a project uses a feature that requires WebGL2 and the target platform only supports WebGL1? Do we have to add checks in every single feature and log warnings?
The current approach leaves that task to the dev.
If we could have slightly more of this, three.js would be even more awesome. I think there’s too much stuff where three wants to do something specific, and prevents me as a dev from doing something else.
There are three scenarios:
If we add the useWebGL2IfAvailable flag with default false, here’s how these would work:
Business as usual:
const renderer = new WebGLRenderer({ ... });
const renderer = new WebGLRenderer({
useWebGL2IfAvailable: true,
// ... other settings
});
if (renderer.capabilities.isWebGL2) {
// WebGL2 feature
} else {
// WebGL fallback
}
In this case you still need WEBGL.isWebGL2Available.
if (!WEBGL.isWebGL2Available()) {
console.error('This project requires WebGL2');
// Bail out early
}
// Otherwise continue
const renderer = new WebGLRenderer({
useWebGL2IfAvailable: true,
// ... other settings
});
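For completeness, the detection helper only takes a few lines. This is a sketch similar in spirit to the WEBGL.isWebGL2Available helper shipped with the three.js examples; `win` and `doc` stand in for the browser’s window and document objects so the sketch is self-contained.

```javascript
// Sketch: detect whether a WebGL2 context can actually be created.
function isWebGL2Available(win, doc) {
  try {
    const canvas = doc.createElement('canvas');
    // Both the constructor and a live context must exist.
    return Boolean(win.WebGL2RenderingContext && canvas.getContext('webgl2'));
  } catch (e) {
    return false;
  }
}
```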
I brought this point up before, but I will do so again. If we had a software implementation of WebGL, we could always use that as a fallback. Would it be slow? You bet, but it would ensure compatibility.
Another alternative is to re-design the API, build a low-level API for threejs that experts could use and have a top-level API that uses the low-level API internally. That way, the top-level API could comfortably offer cool features and handle supported rendering API compatibility internally.
Right now three.js offers something in between. You have the ability to write shaders and you can manipulate buffers directly, but you can’t access various crucial parts of the rendering pipeline.
I feel both approaches would address the compatibility issue, but the second would offer a lot more for developers, beyond compatibility.
Another point about software emulation of WebGL - it would enable shader debugging, which, in itself, is pretty awesome.
That sounds like a huge project.
I think it may be. But if parts of ANGLE or SWIFT can be ported using Emscripten, it might be a much smaller task.
It sounds more like a polyfill project rather than one for three.js (maybe that’s what you were implying), but either way I’m not sure I see the point. If the performance of WebGL2 is going to be so poor on unsupported devices that it’s unusable, then projects should prefer to fall back to WebGL1 or reject non-WebGL2 users in the first place. Who would use a software-based WebGL2 fallback polyfill in production or for development when you can pick your browser?
And I don’t know if I see the value in debugging, either. In my opinion, the thing that makes debugging shaders difficult is their parallelized nature and trying to understand what’s happening across the whole screen at once. I don’t think a software implementation can help with that. Debugging hardware-specific quirks is another nightmare, which a software implementation can’t fix and may just add its own quirks to, making it more difficult anyway.
If you don’t - you sure are one hell of an amazing shader god
Yeah, being able to step through a shader would be amazing. Is emulating all of WebGL in software really the only way to do that though?
Adding WebGL support to Renderdoc sounds like a better solution, for example.
I guess I’m not seeing the value gained versus the sheer amount of work needed to get a software implementation working such that you can guarantee it properly emulates every aspect of a GPU. I’m more of a console.log-to-debug type of dev, though, so changing pixel colors to see value ranges isn’t so bad to me. I wind up “stepping through” a shader by returning after I set a variable that I’m interested in understanding, and I can see it across the whole model, which gives a sense of how it’s changing, too. You can also see the value of a pixel by copying the data back to the CPU and seeing what it’s set to, if you want exact numbers.
I think it’s less that shaders are impossible to debug and more that the tools haven’t been implemented. Streamlining the above so you can easily change which variable is being displayed in real time, or get the value of a hovered-over pixel, seems like a much better investment of time to me if your goal is to debug shaders, and then you can deal with debugging hardware quirks as well. There are some neat tools from NVidia and console manufacturers for debugging graphics pipelines; they just don’t exist for WebGL (yet?). Maybe some day we’ll get some more robust WebGL developer tools from browser vendors.
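The read-back approach mentioned above can be sketched like this. The helper name is mine; readRenderTargetPixels is the three.js API for copying render-target data back to the CPU, and `renderer`/`target` are assumed to be a WebGLRenderer and a WebGLRenderTarget.

```javascript
// Sketch: read one pixel back from a render target for shader debugging.
function debugReadPixel(renderer, target, x, y) {
  const rgba = new Uint8Array(4); // one RGBA8 pixel
  renderer.readRenderTargetPixels(target, x, y, 1, 1, rgba);
  return Array.from(rgba);
}
```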
I’ve been toying with the idea of writing a THREE.REGLRenderer class (on top of http://regl.party/, 25kb gzipped). No idea if there would be any practical use for that (probably not) but I’ve heard writing your own renderer is a fun and hip thing to do…