I’d love to get some guidance on how to approach the following scenario: I need to be able to load an image texture over the network, manipulate its pixel data, and then use the resulting image as a texture property of a material.
The specific use case is transforming an RGB image to grayscale (using some specific maths) and then assigning the result to a material as a texture map.
From my reading so far, it seems like, in outline, one potential approach might be (rough sketch after the list):
Use a loader to load the initial texture
Pass the texture into a ShaderMaterial. (It only needs to be re-rendered once after loading - it doesn’t change dynamically thereafter.)
Render the result to a WebGLRenderTarget
Assign the target’s texture (target.texture) to the relevant property of the material
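For concreteness, here’s an untested sketch of what I have in mind. `renderer` and `targetMaterial` stand in for my actual renderer and destination material, and the Rec. 709 luma weights stand in for the real maths:

```js
import * as THREE from 'three';

// Assumed to exist already: `renderer` (WebGLRenderer) and
// `targetMaterial` (the material that should receive the processed map).

new THREE.TextureLoader().load( 'image.jpg', ( texture ) => {

	// 1. Render target sized to the source image.
	const target = new THREE.WebGLRenderTarget( texture.image.width, texture.image.height );

	// 2. Full-screen quad with a grayscale ShaderMaterial. The vertex
	//    shader writes clip-space positions directly, so the (required)
	//    camera argument below is effectively a dummy.
	const material = new THREE.ShaderMaterial( {
		uniforms: { map: { value: texture } },
		vertexShader: /* glsl */`
			varying vec2 vUv;
			void main() {
				vUv = uv;
				gl_Position = vec4( position.xy, 0.0, 1.0 );
			}
		`,
		fragmentShader: /* glsl */`
			uniform sampler2D map;
			varying vec2 vUv;
			void main() {
				vec3 rgb = texture2D( map, vUv ).rgb;
				// Rec. 709 luma weights; substitute the real maths here.
				float g = dot( rgb, vec3( 0.2126, 0.7152, 0.0722 ) );
				gl_FragColor = vec4( vec3( g ), 1.0 );
			}
		`
	} );

	const scene = new THREE.Scene();
	scene.add( new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), material ) );
	const camera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 );

	// 3. Render once into the target, then restore the canvas.
	renderer.setRenderTarget( target );
	renderer.render( scene, camera );
	renderer.setRenderTarget( null );

	// 4. The processed image now lives in target.texture.
	targetMaterial.map = target.texture;
	targetMaterial.needsUpdate = true;

} );
```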
Does this sound like the right way to be thinking about it? Any pointers to examples of how to efficiently process a 2D raster image as a texture and then wire it into a material would be great.
@prisoner849 - ooh, that’s very elegant. I wonder if I’d still need the two-stage approach though, as I want to use the grayscale output as a displacement map, so it would need to be grayscale already for the vertex shader (if I’m understanding how it works correctly).
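For reference, here’s an untested sketch of the one-pass alternative I’m weighing, with the same grayscale maths evaluated per vertex in a custom vertex shader instead of a pre-pass (`rgbTexture` stands in for the loaded texture, and the scale value is illustrative):

```js
import * as THREE from 'three';

// One-pass sketch: sample the RGB texture in the vertex shader, derive
// luminance there, and displace along the normal, so no render-target
// pre-pass is needed.
const displaceMaterial = new THREE.ShaderMaterial( {
	uniforms: {
		map: { value: rgbTexture },        // the loaded RGB texture
		displacementScale: { value: 0.5 }  // illustrative value
	},
	vertexShader: /* glsl */`
		uniform sampler2D map;
		uniform float displacementScale;
		varying vec2 vUv;
		void main() {
			vUv = uv;
			// Vertex texture fetch: same luma weights as the pre-pass version.
			float g = dot( texture2D( map, uv ).rgb, vec3( 0.2126, 0.7152, 0.0722 ) );
			vec3 displaced = position + normal * g * displacementScale;
			gl_Position = projectionMatrix * modelViewMatrix * vec4( displaced, 1.0 );
		}
	`,
	fragmentShader: /* glsl */`
		uniform sampler2D map;
		varying vec2 vUv;
		void main() {
			gl_FragColor = texture2D( map, vUv );
		}
	`
} );
```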
Hi @Mugen87, can you comment on why an orthographic camera is preferred here? Is this a known limitation of rendering 3D textures?
As you can see from my question, I was able to render a MIP view of a 3D texture when using an orthographic camera, but with a perspective camera it clips the BoxGeometry’s faces at different angles. I just want to know whether it is at all possible to render a 3D texture when using a perspective camera.