Exporting high-resolution images of the canvas

I need to create high-resolution images of what the user is seeing and allow them to download it.

This is my current setup:

    const highRes = () => {
        const renderer = new THREE.WebGLRenderer({
            antialias: true,
            alpha: true,
            preserveDrawingBuffer: true,
        });

        const width = 5000;
        const height = (width / 16) * 9;

        renderer.setPixelRatio(window.devicePixelRatio);
        renderer.setSize(width, height, false);

        const originalAspect = camera.aspect;
        camera.aspect = width / height;
        camera.updateProjectionMatrix();
        camera.updateMatrixWorld();

        renderer.render(scene, camera);

        const dataURL = renderer.domElement.toDataURL('image/png', 1.0);

        camera.aspect = originalAspect;
        camera.updateProjectionMatrix();

        renderer.dispose();

        const link = document.createElement('a');
        link.href = dataURL;
        link.download = 'scene.png';
        link.click();
    };

This way I create a temporary renderer with an arbitrary size and capture what is currently on display.

This works, for the most part, but one of my shaders is not working as expected. I have particle systems that are created using the following shaders:

    const particleVertex = `
    uniform float uTime;
    uniform vec3 uCameraPos;
    uniform float uZNear;
    uniform float uZFar;
    uniform float uSizeNear;
    uniform float uSizeFar;
    uniform float uFloatSpeed;
    uniform float uShouldScroll;

    // scroll
    uniform float uScrollOffset;
    uniform float uColumnHeight;

    attribute vec3 aOffset;
    attribute float aSeed;
    attribute float aRotation;

    varying float vDepth;
    varying float vSeed;
    varying float vRotation;

    void main() {
      // --- Animate particle entirely on GPU ---
      float phase = aSeed * 6.28318;
      vSeed = aSeed;
      vRotation = aRotation;

      vec3 animatedPos = position;
      animatedPos.x += sin(uTime + phase) * uFloatSpeed + aOffset.x;
      animatedPos.y += cos(uTime * 0.3 + phase) * uFloatSpeed + aOffset.y;
      animatedPos.z += aOffset.z;

      // --- Scroll offset ---
      float yWrapped = mod(animatedPos.y + uScrollOffset, uColumnHeight);

      animatedPos.y = mix(animatedPos.y, yWrapped, uShouldScroll);

      // --- Compute world position ---
      vec3 worldPos = (modelMatrix * vec4(animatedPos, 1.0)).xyz;

      // --- Transform to clip space ---
      vec4 mvPosition = modelViewMatrix * vec4(animatedPos, 1.0);
      gl_Position = projectionMatrix * mvPosition;

      // Use smoothstep for Z mapping
      float depthFactor = smoothstep(uZFar, uZNear, worldPos.z);
      gl_PointSize = mix(uSizeFar, uSizeNear, depthFactor);

      // Pass to fragment shader
      vDepth = depthFactor;
    }
    `;

    const particleFragment = `
      uniform float uFadeTime;
      uniform float uAlphaNear;
      uniform float uAlphaFar;
      uniform vec3 uColor;
      uniform float uShouldFade;
      uniform float uMaxAlpha;

      varying float vDepth;
      varying float vSeed;
      varying float vRotation;

      float pentagonSDF(vec2 p) {
        const float PI = 3.14159265;
        const float N = 5.0; // number of sides
        float a = atan(p.x, p.y) + PI;
        float r = length(p);
        float k = 2.0 * PI / N;
        return cos(floor(0.5 + a / k) * k - a) * r - 0.5;
      }

      void main() {
        vec2 uv = gl_PointCoord - 0.5;
        float c = cos(vRotation);
        float s = sin(vRotation);
        mat2 rot = mat2(c, -s, s, c);
        uv = rot * uv;

        vec2 p = uv * 2.0;

        float d = pentagonSDF(p);

        float edge = smoothstep(0.0, 0.03, -d);

        float depthAlpha = mix(uAlphaFar, uAlphaNear, vDepth);

        float phase = vSeed * 6.28318;
        float fade = 0.5 + 0.5 * sin(uFadeTime + phase);
        fade = mix(1.0, fade, uShouldFade);

        vec3 baseColor = vec3(1.0);
        vec3 finalColor = mix(uColor, baseColor * 0.9, vDepth);
        float finalAlpha = edge * depthAlpha * fade * uMaxAlpha;

        if (finalAlpha < 0.01) discard;

        gl_FragColor = vec4(finalColor, finalAlpha);
        // gl_FragColor = vec4(finalColor, 1.0);
      }
    `;

When I create an image using the big renderer, the particles are tiny. Here’s a comparison between a screenshot and a generated image:


The generated image has a transparent background, which is why it looks washed out.

I did some research and found that my shader is resolution-dependent; none of the solutions I tried improved it.

Questions:

  1. Is my approach to capturing images correct?
  2. Is there a way to change my secondary renderer logic so that the particles appear at the right size?
  3. If the answer to 2 is no, how do I adjust my shaders?

gl_PointSize is given in pixels, so when you change the resolution of your render, you need to change the size of the points. Are you doing so when setting uSizeFar and uSizeNear? You’ll need to make some decision there based on the frame buffer size you’re rendering to.
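As a sketch of that decision, assuming the uniform names from the shader above (`scalePointSizes`, `screenHeight`, and `captureHeight` are hypothetical names, and the size values are placeholders):

```javascript
// Scale the pixel-based point-size uniforms for the capture resolution.
// baseSizeNear/baseSizeFar are the values tuned for the on-screen canvas.
function scalePointSizes(baseSizeNear, baseSizeFar, screenHeight, captureHeight) {
  // Points cover the same fraction of the frame when their pixel size
  // grows by the same factor as the frame height.
  const scale = captureHeight / screenHeight;
  return { uSizeNear: baseSizeNear * scale, uSizeFar: baseSizeFar * scale };
}

// e.g. on-screen canvas 900 px tall, capture 2700 px tall:
const scaled = scalePointSizes(12, 2, 900, 2700);
// scaled.uSizeNear === 36, scaled.uSizeFar === 6
// particleMaterial.uniforms.uSizeNear.value = scaled.uSizeNear;
// particleMaterial.uniforms.uSizeFar.value = scaled.uSizeFar;
```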

No, I am not adjusting them; each is just a number. So with the current implementation, I do need to change the shaders when capturing the image.

In order to get the new value when rendering a larger image, is this how I should do it?

    const renderer = new THREE.WebGLRenderer({
        antialias: true,
        alpha: true,
        preserveDrawingBuffer: true,
    });

    const aspect = camera.aspect;
    const width = 5000;
    // two considerations about the height:
    // keeping the original aspect ratio, and
    // is it best practice to have integer sizes?
    const height = Math.floor(width / aspect);

    renderer.setPixelRatio(window.devicePixelRatio);
    renderer.setSize(width, height, false);

    // since I am not changing the aspect ratio anymore,
    // do I still need the following?
    // // camera.updateProjectionMatrix();
    // // camera.updateMatrixWorld();

    renderer.render(scene, camera);

    const dataURL = renderer.domElement.toDataURL('image/png', 1.0);

Additionally, I read something about tiling, which makes it possible to make ultra high resolution images. I’d like to know more about that, but couldn’t find anything solid.

You may be best off simply setting `renderer.setPixelRatio()` to something larger than the devicePixelRatio and multiplying both uSizeNear and uSizeFar by that pixel-ratio value to get consistent point sizes. Here's a minimal demo; you can see the points keep a consistent size when you increase the pixelRatio constant, which in turn increases the renderer's pixel ratio: https://codepen.io/forerunrun/pen/RNRRBbb

I’ve set up the “take snapshot” button to output 5 images, each with an increasingly bigger pixelRatio: 1, 2, 3, 4 and 5 respectively.
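The idea can be sketched like this (not the demo’s exact code; `particleMaterial`, `baseSizeNear`, and `baseSizeFar` are placeholder names):

```javascript
// Multiply the point-size uniforms by the same factor passed to
// renderer.setPixelRatio() so the points keep their on-screen size.
function applyCaptureRatio(uniforms, baseSizeNear, baseSizeFar, captureRatio) {
  uniforms.uSizeNear.value = baseSizeNear * captureRatio;
  uniforms.uSizeFar.value = baseSizeFar * captureRatio;
  return uniforms;
}

// const captureRatio = 4;                      // 4x the on-screen resolution
// renderer.setPixelRatio(captureRatio);
// applyCaptureRatio(particleMaterial.uniforms, 12, 2, captureRatio);
// renderer.render(scene, camera);
// const dataURL = renderer.domElement.toDataURL('image/png');
```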


Well, the uSizeNear and uSizeFar parameters are passed into your shader, so you’re not really “changing your shader”; you’re just changing its parameters. The same thing automatically happens for the other shader inputs that Three.js calculates for you, e.g. based on the aspect ratio of your back buffer.

Because the specification for gl_PointSize is in actual pixels (a value of 3 means exactly 3 pixels), when you change the size of your rendering buffer the points stay the same size: they’re still 3 pixels! There’s no meaningful automatic way to scale them up like the objects in your scene, because those other things are all specified in “world space”, which a camera transforms into pixel sizes; both are concepts independent of the buffer resolution.

What you want is to specify your particle sizes in a similar manner, i.e. to figure out a transformation from an abstract size to a pixel size. A simple way to do this would be to say, “I actually want to specify the particle size as a fraction of the buffer height, so if I say the size is 0.1, then I want them to always take up 1/10th of the screen height.” Once you’ve made a decision like that, it’s easy to work out what you need in code: uSizeNear = height * 0.1;

By the way, if you actually want these to be in the same 3D space as your camera, that is to say they should look like they have a real 3D size, and should scale with distance like any other object in your scene, what you actually want is to replace gl_PointSize = mix(uSizeFar, uSizeNear, depthFactor); with something that uses the camera, so maybe gl_PointSize = uSize * projectionMatrix[1][1] / gl_Position.w; Then your uSize would represent the number of pixels wide the object is when it’s right in front of the camera. You’d still need to modify that number of pixels based on the render resolution, but the distance calculation would now match your camera perspective.

That’s a pretty nice demo. Thank you.

This solves it.

I can now get my high-res screenshots.

@subpixel thank you for the tips on the shader. After I’m done with this project I’ll take a more thorough look at the tips and re-evaluate the shader. I’m new to GLSL, so advice is very welcome.

One thing came up as I was doing some research, which is tiling. How complex is it to render multiple, for lack of the correct word, slices of a bigger image?
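(For reference, three.js supports exactly this via `camera.setViewOffset()`. A rough sketch, assuming the existing `renderer`, `scene`, and `camera`, a 2D canvas context `ctx` to stitch tiles into, and arbitrary tile counts; `tileRects` is a hypothetical helper:)

```javascript
// Split a full-resolution image into a grid of tile rectangles.
function tileRects(fullWidth, fullHeight, cols, rows) {
  const rects = [];
  const w = fullWidth / cols;
  const h = fullHeight / rows;
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      rects.push({ x: x * w, y: y * h, width: w, height: h });
    }
  }
  return rects;
}

// Render each tile by offsetting the camera's view, then stitch:
// for (const r of tileRects(8000, 4500, 4, 4)) {
//   camera.setViewOffset(8000, 4500, r.x, r.y, r.width, r.height);
//   camera.updateProjectionMatrix();
//   renderer.setSize(r.width, r.height, false);
//   renderer.render(scene, camera);
//   ctx.drawImage(renderer.domElement, r.x, r.y);
// }
// camera.clearViewOffset();
```

Note that gl_PointSize still needs the same scaling as above, since each tile is rendered at the full-image pixel density.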
