I post-process my scene (rendering to textures) and finally render to the screen via a plane and an orthographic camera.
I need to find the color of the pixel under the mouse. I used the raycaster with the plane assigned and got the UV coordinates under the mouse, so I’m one step away from the actual pixel data. Surprisingly, though, the raycaster doesn’t seem to have any pixel-reading method at all.
Or should I access the renderer pixels via the WebGLRenderer’s canvas? If so, how?
I’m looking for an efficient method.
It looks like you want to use the method WebGLRenderer.readRenderTargetPixels(). This method is used in the following example: https://threejs.org/examples/webgl_read_float_buffer
THREE.Ray only works at the geometry level and has no relation to pixel data.
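A minimal sketch of how readRenderTargetPixels() can be wired up for this (the helper names are mine, not from the example; note that readRenderTargetPixels() uses WebGL’s bottom-left origin, while mouse events are top-left, so the y coordinate must be flipped):

```javascript
// Pure helper: map a mouse position (top-left origin, CSS pixels) to
// render-target pixel coordinates (bottom-left origin).
function mouseToTargetCoords(mouseX, mouseY, canvasWidth, canvasHeight, targetWidth, targetHeight) {
  const x = Math.floor((mouseX / canvasWidth) * targetWidth);
  const y = Math.floor(((canvasHeight - mouseY) / canvasHeight) * targetHeight);
  return { x, y };
}

// Usage sketch with three.js (assumes `renderer` is a THREE.WebGLRenderer,
// `renderTarget` is the WebGLRenderTarget you rendered into, and the target
// uses THREE.FloatType so a Float32Array is the right read buffer):
function readPixelUnderMouse(renderer, renderTarget, mouseX, mouseY, canvas) {
  const { x, y } = mouseToTargetCoords(
    mouseX, mouseY, canvas.clientWidth, canvas.clientHeight,
    renderTarget.width, renderTarget.height
  );
  const buffer = new Float32Array(4);
  renderer.readRenderTargetPixels(renderTarget, x, y, 1, 1, buffer);
  return buffer; // [r, g, b, a]
}
```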
It worked perfectly, thanks!
I had tried that method before and it didn’t work because I needed to change my render targets to float types; that was the key (obviously, in hindsight). I also had to adjust the coordinates passed to readRenderTargetPixels() in my case (my setup differs from the example) to make it work correctly.
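For anyone finding this later, that key change can be sketched like this (a minimal sketch, assuming the standard three.js API; makeFloatTarget is an illustrative name, not from the thread):

```javascript
function makeFloatTarget(THREE, width, height) {
  // FloatType is the key change: without it the target stores 8-bit
  // unsigned bytes, and readRenderTargetPixels() cannot return HDR floats.
  return new THREE.WebGLRenderTarget(width, height, {
    type: THREE.FloatType,
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
  });
}
```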
Hi. I’m interested in taking a principled approach to adding PBR using HDR environment maps, and in tuning tone mapping to reach the desired look and feel.
One thing that I would like to get a handle on with an HDR workflow is understanding how to implement a way to fetch the (linear color space, internal!) color under the mouse. The webgl_read_float_buffer demo is a wonderful resource and tells me a few things:
- It creates a float render target (hefty in memory and performance) and reads one pixel under the mouse from it. This seems… not ideal, but I’m certainly glad there is at least a conceptually simple way to achieve this task, and one that can easily be removed from the final graphics pipeline. However, it is tricky if I intend to use something like React Three Fiber, though I reckon whatever I’d need to do will be tricky with that regardless.
- It fails with “WebGL: checkFramebufferStatus:attachment type is not correct for attachment” on Safari 13, both on macOS and iOS. This suggests the functionality may be impossible on browsers that do not support WebGL2, even though HDR rendering is perfectly possible under WebGL1 (I guess by using RGBE textures and corresponding shaders).
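On the WebGL1 point: one fallback is to render into an ordinary unsigned-byte target with RGBE encoding and decode the HDR value on the CPU after reading the bytes back. A sketch of the decode, following the Radiance shared-exponent convention (my own helper, not from the demo):

```javascript
// Decode one Radiance RGBE pixel (four unsigned bytes) to linear floats.
// The alpha byte is a shared exponent; e === 0 means black.
function decodeRGBE([r, g, b, e]) {
  if (e === 0) return [0, 0, 0];
  const f = Math.pow(2, e - (128 + 8)); // shared exponent, Radiance convention
  return [r * f, g * f, b * f];
}
```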
Are there any possible ways (this question applies equally to OpenGL and WebGL) of sampling a single pixel in the middle of the rendering process? If it were possible to printf from a shader, that would certainly be powerful enough, and a very useful debug tool as well. What I’m starting to realize is that some shader-based approach is probably the ticket, because it seems the only easy way to get data back from the GPU is to put it into a render target or texture.
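One shader-based approach along those lines (a sketch under my own assumptions, not an established three.js utility): render a single pixel into a 1×1 float target using a pass that samples the scene texture at the mouse UV, then read just that pixel back. This avoids keeping a full-resolution float buffer around:

```javascript
// `THREE` is assumed to be the three.js namespace and `renderer` a
// THREE.WebGLRenderer. Returns a probe(texture, u, v) function that
// samples one texel on the GPU and reads the single result pixel back.
function makePixelProbe(THREE, renderer) {
  const target = new THREE.WebGLRenderTarget(1, 1, { type: THREE.FloatType });
  const scene = new THREE.Scene();
  const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
  const material = new THREE.ShaderMaterial({
    uniforms: { map: { value: null }, uv0: { value: new THREE.Vector2() } },
    // Full-screen quad: PlaneGeometry(2, 2) positions already span clip space.
    vertexShader: 'void main() { gl_Position = vec4(position.xy, 0.0, 1.0); }',
    fragmentShader: [
      'uniform sampler2D map;',
      'uniform vec2 uv0;',
      'void main() { gl_FragColor = texture2D(map, uv0); }',
    ].join('\n'),
  });
  scene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material));
  const buffer = new Float32Array(4);
  return function probe(texture, u, v) {
    material.uniforms.map.value = texture;
    material.uniforms.uv0.value.set(u, v);
    renderer.setRenderTarget(target);
    renderer.render(scene, camera);
    renderer.setRenderTarget(null);
    renderer.readRenderTargetPixels(target, 0, 0, 1, 1, buffer);
    return buffer; // internal linear-space [r, g, b, a]
  };
}
```

Calling the probe with the UV from the raycaster hit would give the pre-tone-mapping color under the mouse; whether a 1×1 FloatType attachment works on Safari 13 / WebGL1 is, of course, still an open question.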