I need to create high-resolution images of what the user is currently seeing and let them download the result.
This is my current setup:
```js
const highRes = () => {
  const renderer = new THREE.WebGLRenderer({
    antialias: true,
    alpha: true,
    preserveDrawingBuffer: true,
  });

  const width = 5000;
  const height = (width / 16) * 9;
  renderer.setPixelRatio(window.devicePixelRatio);
  renderer.setSize(width, height, false);

  const originalAspect = camera.aspect;
  camera.aspect = width / height;
  camera.updateProjectionMatrix();
  camera.updateMatrixWorld();

  renderer.render(scene, camera);
  const dataURL = renderer.domElement.toDataURL('image/png', 1.0);

  camera.aspect = originalAspect;
  camera.updateProjectionMatrix();
  renderer.dispose();

  const link = document.createElement('a');
  link.href = dataURL;
  link.download = 'scene.png';
  link.click();
};
```
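One detail I'm not sure about: since I call `setPixelRatio(window.devicePixelRatio)`, the drawing buffer is larger than the size I pass to `setSize` on high-DPI screens. As far as I understand, three.js sizes the buffer roughly like this (a sketch of my understanding, not three.js's actual code):

```javascript
// Sketch of how the drawing buffer ends up sized after
// setPixelRatio(ratio) followed by setSize(w, h):
// canvas.width  = Math.floor(w * ratio)
// canvas.height = Math.floor(h * ratio)
function drawingBufferSize(width, height, pixelRatio) {
  return {
    width: Math.floor(width * pixelRatio),
    height: Math.floor(height * pixelRatio),
  };
}

// On a 2x display, my 5000 x 2812.5 capture actually renders at:
// drawingBufferSize(5000, (5000 / 16) * 9, 2) -> { width: 10000, height: 5625 }
```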
This way I create a temporary renderer at an arbitrary size and capture what is currently on display.
This works for the most part, but one of my shaders misbehaves. My particle systems are created with the following shaders:
```js
const particleVertex = `
  uniform float uTime;
  uniform vec3 uCameraPos;
  uniform float uZNear;
  uniform float uZFar;
  uniform float uSizeNear;
  uniform float uSizeFar;
  uniform float uFloatSpeed;
  uniform float uShouldScroll;
  // scroll
  uniform float uScrollOffset;
  uniform float uColumnHeight;

  attribute vec3 aOffset;
  attribute float aSeed;
  attribute float aRotation;

  varying float vDepth;
  varying float vSeed;
  varying float vRotation;

  void main() {
    // --- Animate particle entirely on GPU ---
    float phase = aSeed * 6.28318;
    vSeed = aSeed;
    vRotation = aRotation;

    vec3 animatedPos = position;
    animatedPos.x += sin(uTime + phase) * uFloatSpeed + aOffset.x;
    animatedPos.y += cos(uTime * 0.3 + phase) * uFloatSpeed + aOffset.y;
    animatedPos.z += aOffset.z;

    // --- Scroll offset ---
    float yWrapped = mod(animatedPos.y + uScrollOffset, uColumnHeight);
    animatedPos.y = mix(animatedPos.y, yWrapped, uShouldScroll);

    // --- Compute world position ---
    vec3 worldPos = (modelMatrix * vec4(animatedPos, 1.0)).xyz;

    // --- Transform to clip space ---
    vec4 mvPosition = modelViewMatrix * vec4(animatedPos, 1.0);
    gl_Position = projectionMatrix * mvPosition;

    // Use smoothstep for Z mapping
    float depthFactor = smoothstep(uZFar, uZNear, worldPos.z);
    gl_PointSize = mix(uSizeFar, uSizeNear, depthFactor);

    // Pass for fragment shader
    vDepth = depthFactor;
  }
`;
```
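For reference, here is the vertex shader's size mapping ported to plain JS. The key point (as far as I can tell) is that `gl_PointSize` is an absolute number of device pixels, independent of canvas size:

```javascript
// JS port of the vertex shader's point-size mapping (same math as the
// GLSL smoothstep/mix above). The result is in device pixels.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0.0), 1.0);
  return t * t * (3.0 - 2.0 * t);
}

function pointSizePx(worldZ, zNear, zFar, sizeNear, sizeFar) {
  // Edges are reversed (zFar first) so depthFactor is 1 near, 0 far.
  const depthFactor = smoothstep(zFar, zNear, worldZ);
  return sizeFar + (sizeNear - sizeFar) * depthFactor; // mix(far, near, t)
}
```

A particle that maps to, say, 20 px covers those same 20 px whether the canvas is 1080 or 5625 px tall, so it shrinks relative to the frame in the big capture.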
```js
const particleFragment = `
  uniform float uFadeTime;
  uniform float uAlphaNear;
  uniform float uAlphaFar;
  uniform vec3 uColor;
  uniform float uShouldFade;
  uniform float uMaxAlpha;

  varying float vDepth;
  varying float vSeed;
  varying float vRotation;

  float pentagonSDF(vec2 p) {
    const float PI = 3.14159265;
    const float N = 5.0; // number of sides
    float a = atan(p.x, p.y) + PI;
    float r = length(p);
    float k = 2.0 * PI / N;
    return cos(floor(0.5 + a / k) * k - a) * r - 0.5;
  }

  void main() {
    vec2 uv = gl_PointCoord - 0.5;
    float c = cos(vRotation);
    float s = sin(vRotation);
    mat2 rot = mat2(c, -s, s, c);
    uv = rot * uv;
    vec2 p = uv * 2.0;

    float d = pentagonSDF(p);
    float edge = smoothstep(0.0, 0.03, -d);

    float depthAlpha = mix(uAlphaFar, uAlphaNear, vDepth);
    float phase = vSeed * 6.28318;
    float fade = 0.5 + 0.5 * sin(uFadeTime + phase);
    fade = mix(1.0, fade, uShouldFade);

    vec3 baseColor = vec3(1.0);
    vec3 finalColor = mix(uColor, baseColor * 0.9, vDepth);
    float finalAlpha = edge * depthAlpha * fade * uMaxAlpha;
    if (finalAlpha < 0.01) discard;

    gl_FragColor = vec4(finalColor, finalAlpha);
    // gl_FragColor = vec4(finalColor, 1.0);
  }
`;
```
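In case the shape math matters: the pentagon function is standard regular-polygon SDF math (negative inside, positive outside, zero roughly at radius 0.5). The same function in plain JS, in case anyone wants to sanity-check it:

```javascript
// JS port of the fragment shader's pentagonSDF: signed distance to a
// regular pentagon of "radius" 0.5 centered at the origin.
function pentagonSDF(x, y) {
  const PI = Math.PI;
  const N = 5.0;                   // number of sides
  const a = Math.atan2(x, y) + PI; // matches GLSL atan(p.x, p.y)
  const r = Math.hypot(x, y);
  const k = (2.0 * PI) / N;
  return Math.cos(Math.floor(0.5 + a / k) * k - a) * r - 0.5;
}
```

The center evaluates to -0.5 (fully inside), and `smoothstep(0.0, 0.03, -d)` turns the zero crossing into a soft edge.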
When I create an image using the big renderer, the particles come out tiny. Here’s a comparison between a screenshot and a generated image:
(The generated image has a transparent background, which is why it looks washed out.)
I did some research and found that my shader is resolution-dependent (`gl_PointSize` is specified in pixels), but none of the fixes I tried made a difference.
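One adjustment I'm considering (just a sketch; `particleMaterial` is a placeholder for however the material is referenced): before the capture render, scale the pixel-based size uniforms by the ratio of capture height to on-screen buffer height, then restore them afterwards.

```javascript
// Sketch: gl_PointSize is in device pixels, so to keep particles the same
// size *relative to the frame*, scale the size uniforms by the height ratio
// between the capture buffer and the on-screen drawing buffer.
function pointSizeScale(captureHeight, screenHeight, screenPixelRatio = 1) {
  // The on-screen buffer is screenHeight * screenPixelRatio device pixels tall.
  return captureHeight / (screenHeight * screenPixelRatio);
}

// Hypothetical usage, before renderer.render() inside highRes():
// const s = pointSizeScale(height, window.innerHeight, window.devicePixelRatio);
// particleMaterial.uniforms.uSizeNear.value *= s;
// particleMaterial.uniforms.uSizeFar.value *= s;
// ...render + toDataURL..., then divide both uniforms by s to restore.
```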
Questions:
1. Is my approach to capturing images correct?
2. Is there a way to change my secondary-renderer logic so that the particles appear at the right size?
3. If not, how should I adjust my shaders?


