Bad texture quality on mobile devices with CubeCamera

Hi everyone,

I am pretty new to ThreeJS and I am interested in generating panoramic pictures from ThreeJS scenes. I found this handy tool for that purpose: THREE.CubemapToEquirectangular

It works pretty well, but there is an issue that I haven't been able to figure out so far, and it is fairly easy to reproduce. From the demo page, when I capture on my laptop, the result is quite neat. But the same process on my phone leads to a very pixelated result (see attached images). Note that this only affects the exported images; the actual scenes look fine on both devices.

My laptop is an old one (Dell Precision M4600 - good graphics card though), and my phone is not that young either (Samsung Galaxy S6 Edge). The pixel ratio is 1.5 on my laptop and 4 on my phone.

While it would be tempting to just tell me to update my devices, I have investigated thoroughly, and the issue could be related to:

  • the devices’ pixel ratio (none of my attempts with setPixelRatio made a difference),
  • some canvas resolution (still related to pixel ratio I guess - none of my scaling attempts made a difference),
  • some render target filtering issue (some deterioration on the laptop when using nearest-neighbor min/mag filters, but no difference on the phone),
  • the fragment shaders? (no clue about this one, so no attempts).

Here is my (summarized) understanding of how CubemapToEquirectangular works:

  1. A CubeCamera is created with a WebGLCubeRenderTarget of a given size, and some (linear) filtering options.
  2. The CubeCamera's position, renderer and scene are updated to match those of the actual ThreeJS scene to capture.
  3. Another scene, containing a simple quad (with a RawShaderMaterial) scaled to the size (resolution) of the expected output image, and an OrthographicCamera are created in parallel.
  4. The texture of the CubeMap render target is passed on to the quad.
  5. The pixels of the scene with the quad are copied into a raw ImageData array with readRenderTargetPixels.
  6. Finally, that ImageData is copied onto a canvas and exported as a PNG.

I think the whole issue happens between step 4 (line 149 here) and step 5 (line 153). But maybe it is also a problem of antialiasing not being supported on the phone…? I am desperate.
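To make it concrete, here is roughly how I picture the pipeline in code (a simplified sketch based on my reading of the library; the variable names (renderer, scene, camera, quad, quadScene, orthoCamera, canvas, width, height) are mine, step 3's quad scene is assumed to already exist, and the exact calls may differ between three.js versions):

// Steps 1 & 2: a cube render target / camera updated from the actual scene.
var cubeRenderTarget = new THREE.WebGLCubeRenderTarget( 2048, {
	minFilter: THREE.LinearFilter,
	magFilter: THREE.LinearFilter
} );
var cubeCamera = new THREE.CubeCamera( 0.1, 1000, cubeRenderTarget );
cubeCamera.position.copy( camera.position );
cubeCamera.update( renderer, scene );

// Step 4: hand the captured cube texture to the quad's shader material.
quad.material.uniforms.map.value = cubeRenderTarget.texture;

// Step 5: render the quad scene with the orthographic camera into a flat
// render target of the output resolution, then read the pixels back.
var output = new THREE.WebGLRenderTarget( width, height );
renderer.setRenderTarget( output );
renderer.render( quadScene, orthoCamera );
renderer.setRenderTarget( null );

var pixels = new Uint8Array( 4 * width * height );
renderer.readRenderTargetPixels( output, 0, 0, width, height, pixels );

// Step 6: copy the pixels onto a 2D canvas and export it as a PNG.
var imageData = new ImageData( new Uint8ClampedArray( pixels ), width, height );
canvas.getContext( '2d' ).putImageData( imageData, 0, 0 );
var dataURL = canvas.toDataURL( 'image/png' );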

Hopefully someone can help me find an answer to this, and sorry for the long message.

Hi guys,

I kept digging, and the only thing that I haven't tried to tweak yet is the vertex/fragment shader. So I did some reading on shaders and now have a basic understanding of how they work globally. But some help understanding the parameters and how the piece of code below works would be highly appreciated.

var vertexShader = `
attribute vec3 position;
attribute vec2 uv;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
varying vec2 vUv;

void main()  {
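	// mirror the u coordinate horizontally and pass the uv on to the fragment shader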
	vUv = vec2( 1.- uv.x, uv.y );
	gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
`;

var fragmentShader = `
precision mediump float;
uniform samplerCube map;
varying vec2 vUv;

#define M_PI 3.1415926535897932384626433832795

void main()  {
	vec2 uv = vUv;

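	// convert the equirectangular uv into longitude/latitude angles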
	float longitude = uv.x * 2. * M_PI - M_PI + M_PI / 2.;
	float latitude = uv.y * M_PI;

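	// turn the angles into a 3D direction on the unit sphere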
	vec3 dir = vec3(
		- sin( longitude ) * sin( latitude ),
		cos( latitude ),
		- cos( longitude ) * sin( latitude )
	);
	dir = normalize( dir );
	gl_FragColor = textureCube( map, dir );
}
`;

myMaterial = new THREE.RawShaderMaterial( {
		uniforms: {
			map: { type: 't', value: null }
		},
		vertexShader: vertexShader,
		fragmentShader: fragmentShader,
		side: THREE.DoubleSide,
		transparent: true
	} );
  1. What does the variable uv correspond to? Is it a texture attribute associated with the vertices of the geometry that uses myMaterial?
  2. The vertexShader is processed for every vertex of my geometry. Does that mean that if I have a quad, it will run 4 times (or maybe 6 times if it is triangulated), with the fragmentShader interpolating all the way between each pair of vertices? Or is it a pixel-per-pixel process (will it run 10,000 times if I have a geometry of size 100x100)?
  3. If my understanding is correct, the fragmentShader computes a texture value for each pixel based on the corresponding point (dir) on a cubeTexture that is passed as the value of map… Can this part be affected by pixel-ratio issues (say, on a device with a high pixel ratio, should latitude and longitude reflect that)?
  4. What does the ‘t’ in the type of map correspond to? Texture? (I couldn’t find any documentation on it.)
  5. Finally, all of this is obviously GPU-based. But isn’t there a way to do it on the CPU instead? (I am assuming that if the issue is due to device GPU limitations, maybe moving to the CPU would solve it.)

Thanks.

These are the texture coordinates or uv coordinates. Since they are defined per vertex, they are part of the vertex data (similar to vertex normals and colors).
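A quick way to inspect them on a simple plane (the values in the comment are what a default PlaneBufferGeometry produces, one (u, v) pair per vertex):

var geometry = new THREE.PlaneBufferGeometry( 2, 2 );
console.log( geometry.attributes.uv );
// → BufferAttribute with itemSize 2 and count 4,
//   array: [ 0, 1,  1, 1,  0, 0,  1, 0 ]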

Yes — the vertex shader runs once per vertex (so 4 times for a quad), the varying vUv is then interpolated across the triangles, and the fragment shader runs once per covered pixel (fragment).

No.

This is deprecated code. A long time ago, it was necessary to define the type of a uniform (‘t’ stood for texture). Later, it became possible to derive the type from the shader program. So it’s sufficient if you do this:

uniforms: {
    map: { value: null }
},

You can emulate this logic with JavaScript on the CPU. The problem is that it will probably be hopelessly slow (because you won’t benefit from the massive parallel processing on the GPU).
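Very roughly, a CPU version of the fragment shader's logic would look like this (an untested sketch; sampleCube() and the faces data are placeholders for code you would have to write yourself, e.g. after reading the six cube faces back from the GPU):

function equirectangularOnCPU( faces, width, height ) {
	var out = new Uint8ClampedArray( 4 * width * height );
	for ( var y = 0; y < height; y ++ ) {
		for ( var x = 0; x < width; x ++ ) {
			// same math as the fragment shader, evaluated once per output pixel
			var u = 1 - ( x + 0.5 ) / width; // mirrored like in the vertex shader
			var v = ( y + 0.5 ) / height;
			var longitude = u * 2 * Math.PI - Math.PI + Math.PI / 2;
			var latitude = v * Math.PI;
			var dir = [
				- Math.sin( longitude ) * Math.sin( latitude ),
				Math.cos( latitude ),
				- Math.cos( longitude ) * Math.sin( latitude )
			];
			var rgba = sampleCube( faces, dir ); // pick face + texel, returns [ r, g, b, a ]
			var i = 4 * ( y * width + x );
			out[ i ] = rgba[ 0 ];
			out[ i + 1 ] = rgba[ 1 ];
			out[ i + 2 ] = rgba[ 2 ];
			out[ i + 3 ] = rgba[ 3 ];
		}
	}
	return new ImageData( out, width, height );
}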

Thanks a lot for clarifying all of this (although you killed my last hope of solving the issue)…

I don’t get it… The scene is perfect on the phone, but the generated image is not. So the problem must be somewhere in the process of mapping the scene onto a 2D texture.

What parameter(s) could affect the quality of a texture when copying it from a WebGLCubeRenderTarget to a RawShaderMaterial (on a PlaneBufferGeometry)? The antialiasing?
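For context, the parameters I have in mind are roughly these (the values are only examples, not necessarily what the library uses, and I don't know which combination, if any, makes a difference on the phone):

// antialiasing is requested when the WebGLRenderer is created
var renderer = new THREE.WebGLRenderer( { antialias: true } );

// pixel ratio, i.e. the internal resolution of the renderer's canvas
renderer.setPixelRatio( window.devicePixelRatio );

// size and filtering of the cube render target the CubeCamera renders into
var cubeRenderTarget = new THREE.WebGLCubeRenderTarget( 2048, {
	format: THREE.RGBAFormat,
	generateMipmaps: true,
	minFilter: THREE.LinearMipmapLinearFilter,
	magFilter: THREE.LinearFilter
} );

// resolution of the flat render target the pixels are read back from
var output = new THREE.WebGLRenderTarget( width, height );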

Any other brain willing to give thoughts on this issue? :sweat_smile:
Even if this can’t be solved, it would be really good to understand the source of the problem.