I am pretty new to ThreeJS and I am interested in generating panoramic pictures from ThreeJS scenes. I found this handy tool for that purpose: THREE.CubemapToEquirectangular
It works pretty well, but there is an issue that I just couldn't figure out so far, and it is fairly easy to reproduce. From the demo page, when I capture on my laptop the result is quite neat, but the same process on my phone leads to a very pixelated result (see attached images). Note that this only affects the exported images; the actual scenes look fine on both devices.
My laptop is an old one (Dell Precision M4600, though with a decent graphics card), and my phone is not that young either (Samsung Galaxy S6 Edge). The device pixel ratio is 1.5 on my laptop and 4 on my phone.
While it would be tempting to just tell me to update my devices, I did some thorough investigation, and the issue could be related to either:
the devices' pixel ratio (none of my attempts with setPixelRatio made a difference; see the snippet right after this list),
the canvas resolution (still related to pixel ratio, I guess; none of my scaling attempts made a difference),
render target filtering issues (some deterioration on the laptop when using nearest-neighbor min/mag filters, but no difference on the phone),
the fragment shaders? (no clue about this one, so no attempts).
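For reference, this is the kind of thing I tried for the pixel ratio and filtering points (the size here is just an example value from my tests, not the library's default):

renderer.setPixelRatio( window.devicePixelRatio );   // tried various values here, no visible difference

const cubeRenderTarget = new THREE.WebGLCubeRenderTarget( 2048, {
    minFilter: THREE.LinearFilter,   // also tried THREE.NearestFilter
    magFilter: THREE.LinearFilter
} );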
Here is my (summarized) understanding of how CubemapToEquirectangular works (with a rough code sketch after the steps):
A CubeCamera is created with a WebGLCubeRenderTarget of a given size, and some (linear) filtering options.
The CubeCamera's position, renderer and scene are updated to match those of the actual Three.js scene to capture.
In parallel, another scene is created, containing a simple quad (with a RawShaderMaterial) scaled to the size (resolution) of the expected output image, together with an OrthographicCamera.
The texture of the CubeMap render target is passed on to the quad.
The pixels of the scene with the quad are copied into a raw ImageData array with readRenderTargetPixels.
Finally, that ImageData is copied onto a canvas and exported as a PNG.
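In code, my mental model of those steps looks roughly like this (heavily simplified; the variable names are mine, not the library's, and cubeRenderTarget is the WebGLCubeRenderTarget from the snippet above):

// render the scene into the cube map, from the position of the actual camera
const cubeCamera = new THREE.CubeCamera( 0.1, 1000, cubeRenderTarget );
cubeCamera.position.copy( mainCamera.position );
cubeCamera.update( renderer, scene );

// quadScene holds the quad with the RawShaderMaterial, orthoCamera is the OrthographicCamera
quadMaterial.uniforms.map.value = cubeRenderTarget.texture;
renderer.setRenderTarget( outputTarget );   // a WebGLRenderTarget of the output width x height
renderer.render( quadScene, orthoCamera );
renderer.setRenderTarget( null );

// read the pixels back and export them through a 2D canvas
const pixels = new Uint8Array( width * height * 4 );
renderer.readRenderTargetPixels( outputTarget, 0, 0, width, height, pixels );

const imageData = new ImageData( new Uint8ClampedArray( pixels ), width, height );
ctx.putImageData( imageData, 0, 0 );   // ctx = 2D context of a canvas of the same width x height
const png = canvas.toDataURL( 'image/png' );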
I think the whole issue happens between step 4 (line 149 here) and step 5 (line 153). But maybe it could also be a problem of antialiasing not being supported on the phone…? I am desperate.
Hopefully someone can help me find an answer to this, and sorry for the long message.
I kept digging and the only point that I haven't tried to tweak yet is the vertex/fragment shader. So I did some reading on shaders and now have a bit of an understanding of how they work overall. But some help understanding some of the parameters and how the piece of code below works would be highly appreciated.
What does the variable uv correspond to? Is it a texture attribute associated with the vertices of the geometry that uses myMaterial?
The vertexShader is processed for every vertex of my geometry. Does that mean that if I have a quad, it will run 4 times (or maybe 6 times if it is triangulated), with the fragmentShader interpolating all the way between each pair of vertices? Or is it a pixel-per-pixel process (i.e. it would run 10000 times if I have a geometry of size 100x100)?
If my understanding is correct, the fragmentShader computes a texture value for each pixel based on the corresponding point (dir) on a cubeTexture that is passed as the value of map… Can this part be affected by pixel ratio issues (say, on a device with a high pixel ratio, should latitude and longitude reflect that)?
What does the 't' of the type in map correspond to? Texture? (I couldn't find any docs on it.)
Finally, all of this is obviously GPU-based. But isn't there a way to do it on the CPU instead? (I am assuming that if the issue is due to device GPU limitations, maybe moving to the CPU would solve it.)
These are the texture coordinates or uv coordinates. Since they are defined per vertex, they are part of the vertex data (similar to vertex normals and colors).
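For example, on a PlaneBufferGeometry you can inspect them directly, and with a RawShaderMaterial you declare the attribute yourself in the vertex shader (a minimal illustration):

const geometry = new THREE.PlaneBufferGeometry( 1, 1 );
console.log( geometry.attributes.uv );   // one (u, v) pair per vertex

const vertexShader = `
    attribute vec3 position;   // with RawShaderMaterial you declare the built-ins yourself
    attribute vec2 uv;
    uniform mat4 projectionMatrix;
    uniform mat4 modelViewMatrix;
    varying vec2 vUv;
    void main() {
        vUv = uv;   // interpolated across each triangle before reaching the fragment shader
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
`;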
Yes.
No.
This is deprecated code. A long time ago, it was necessary to define the type of a uniform. Later, it became possible to derive the type from the shader program. So it's sufficient if you do this:
uniforms: {
    map: { value: null }
},
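At render time you then just assign the texture to the value property, for example:

material.uniforms.map.value = cubeRenderTarget.texture;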
You can emulate this logic with JavaScript on the CPU. The problem is that it will probably be hopelessly slow (because you won't benefit from the massively parallel processing on the GPU).
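Just to give an idea of what that would mean: you loop over every pixel of the output image, turn it into a view direction and sample the cube map data yourself. A rough sketch (the face order and uv conventions here are only an example and would have to be matched to the actual cube map layout):

// faces: array of six { size, data } objects, where data is the RGBA Uint8Array read back
// from one cube face (e.g. via readRenderTargetPixels)
function cubeDirToFaceUV( x, y, z ) {
    const ax = Math.abs( x ), ay = Math.abs( y ), az = Math.abs( z );
    if ( ax >= ay && ax >= az ) {
        return x > 0 ? { face: 0, u: - z / ax, v: - y / ax } : { face: 1, u: z / ax, v: - y / ax };
    } else if ( ay >= az ) {
        return y > 0 ? { face: 2, u: x / ay, v: z / ay } : { face: 3, u: x / ay, v: - z / ay };
    }
    return z > 0 ? { face: 4, u: x / az, v: - y / az } : { face: 5, u: - x / az, v: - y / az };
}

function equirectangularFromCube( faces, width, height ) {
    const out = new Uint8ClampedArray( width * height * 4 );
    for ( let j = 0; j < height; j ++ ) {
        const lat = ( 0.5 - ( j + 0.5 ) / height ) * Math.PI;          // +pi/2 at the top, -pi/2 at the bottom
        for ( let i = 0; i < width; i ++ ) {
            const lon = ( ( i + 0.5 ) / width - 0.5 ) * 2 * Math.PI;   // -pi .. +pi
            // view direction of this output pixel
            const x = Math.cos( lat ) * Math.sin( lon );
            const y = Math.sin( lat );
            const z = Math.cos( lat ) * Math.cos( lon );
            const { face, u, v } = cubeDirToFaceUV( x, y, z );
            const f = faces[ face ];
            // nearest-neighbor lookup (the GPU would normally do bilinear filtering here)
            const px = Math.min( f.size - 1, Math.floor( ( u * 0.5 + 0.5 ) * f.size ) );
            const py = Math.min( f.size - 1, Math.floor( ( v * 0.5 + 0.5 ) * f.size ) );
            const src = ( py * f.size + px ) * 4;
            const dst = ( j * width + i ) * 4;
            out.set( f.data.subarray( src, src + 4 ), dst );
        }
    }
    return new ImageData( out, width, height );
}

For a 4096 x 2048 output, that inner body runs more than eight million times in plain JavaScript, which is why I would not expect it to be fast.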
Thanks a lot for clarifying all of this (although you killed my last hope to solve the issue)…
I don't get it… The scene is perfect on the phone, but the generated image is not. So the problem must be somewhere in the process of mapping the scene to a 2D texture.
What parameter(s) could affect the quality of a texture when copying it from a WebGLCubeRenderTarget to a RawShaderMaterial (of a PlaneBufferGeometry)? The antialiasing?
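To be concrete, the "copy" I am talking about is essentially just this assignment plus the quad render into the output target:

quadMaterial.uniforms.map.value = cubeRenderTarget.texture;   // cube map texture -> quad material
renderer.setRenderTarget( outputTarget );
renderer.render( quadScene, orthoCamera );                    // quad -> output render target

And the only antialiasing-related setting I know of is the antialias option passed to the WebGLRenderer constructor.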