CPURenderer issues (a CPU rasterizer)

I’ve made a CPU renderer that works with THREE, transpiling GLSL vertex and fragment shaders to optimized JS code. I need it to read single point/pixel values in workers, without using readPixels on quads rendered with noise materials.

It basically runs the vertex program on a quad or triangle, caches the varyings, and interpolates them per pixel when rendering the triangle or quad with the fragment program. It works so far; I’m testing it with the WebGLRenderer on the left and the CPU one on the right.
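For anyone curious how that interpolation step can look, here is a minimal sketch (hypothetical names, not the actual CPURenderer code): barycentric weights from signed edge functions, then a weighted sum of the cached per-vertex varyings.

```js
// Hypothetical sketch of per-pixel varying interpolation, not the actual
// CPURenderer code. Given a screen-space triangle and a sample point,
// compute barycentric weights via signed edge areas, then blend the cached
// per-vertex varying values with those weights (no perspective correction).
function edgeFunction(ax, ay, bx, by, px, py) {
  return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

function interpolateVarying(tri, varyings, px, py) {
  const [a, b, c] = tri; // screen-space vertices { x, y }
  const area = edgeFunction(a.x, a.y, b.x, b.y, c.x, c.y);
  const w0 = edgeFunction(b.x, b.y, c.x, c.y, px, py) / area;
  const w1 = edgeFunction(c.x, c.y, a.x, a.y, px, py) / area;
  const w2 = 1 - w0 - w1;
  // varyings = [v0, v1, v2], each a numeric array (e.g. a vec3)
  return varyings[0].map(
    (v0i, i) => v0i * w0 + varyings[1][i] * w1 + varyings[2][i] * w2
  );
}
```

A full rasterizer would also divide the varyings by w for perspective-correct interpolation; for a screen-aligned quad that step is effectively a no-op.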

But when I tested it with two different noise functions that I ported exactly from GLSL to JS, both produced results that only partially match. I suspect rounding errors in the noise; I can’t tell whether the float precision of JS differs from GLSL mediump or whether it’s GPU/WebGL dependent.

(rendering the same quad with the same shader)

In one test I use four vec3 varyings per vertex of a quad and apply fract to the interpolated y in the fragment shader. Other tests come out correct too, but at some point they seem to fail due to precision.
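In other words, the fragment side of that test reduces to something like this (a hypothetical sketch of what the transpiler might emit, not its actual output):

```js
// Hypothetical transpiled fragment program for the test: take an
// interpolated vec3 varying, apply GLSL-style fract() to its y component,
// and replicate it to all color channels.
const fract = (x) => x - Math.floor(x);

function fragmentProgram(varyings) {
  const y = fract(varyings.vPos[1]); // vPos is a hypothetical vec3 varying
  return [y, y, y, 1.0]; // gl_FragColor equivalent
}
```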

(computed varyings and sample positions seem correct)

Maybe someone has an idea what could cause the difference.

Edit: It really does seem to be a precision issue. JS uses 64-bit floats, so the sin-based random function accumulates errors. Some direct tests with sin also show differences pretty quickly.
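The classic GLSL one-liner hash is especially sensitive to this, because the huge multiplier amplifies the low bits of sin before fract discards the integer part. A rough way to see the effect on the CPU side is to force every intermediate through Math.fround, which rounds to 32-bit float; note that GPUs also use their own sin approximation, so even this is not guaranteed to be bit-exact:

```js
// Classic GLSL hash: fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453).
// A direct JS port runs in 64-bit double precision throughout.
function hashDouble(x, y) {
  const s = Math.sin(x * 12.9898 + y * 78.233) * 43758.5453;
  return s - Math.floor(s); // fract
}

// Same hash with every intermediate rounded to 32 bits via Math.fround,
// approximating GLSL highp float (still not an exact GPU match, since
// GPU sin implementations differ from Math.sin).
function hashFloat32(x, y) {
  const f = Math.fround;
  const d = f(f(x * 12.9898) + f(y * 78.233));
  const s = f(f(Math.sin(d)) * 43758.5453);
  return f(s - Math.floor(s));
}

// The large multiplier magnifies the double-vs-float difference:
console.log(hashDouble(3.7, 1.3), hashFloat32(3.7, 1.3));
```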


That sounds great! Are you planning on open sourcing it?

It still needs some work; once everything is fine I could open-source it. Small scenes can be rendered in real time, depending on the resolution/content, but its main purpose is rendering static images in workers or sampling single fragments, as in my case.

My issue has been fixed by using a “softer” hash function, at the cost of a more repetitive noise; using a texture lookup also avoids the precision differences.
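The post doesn’t say which hash was used; one alternative that sidesteps float precision entirely is a 32-bit integer hash (Wang hash shown here as an example, not the “softer” hash from the post). Integer math is bit-exact in both JS (via Math.imul and >>> 0) and GLSL ES 3.0 (uint, WebGL2), so CPU and GPU outputs match exactly:

```js
// Wang hash: pure 32-bit integer mixing, identical on CPU and GPU.
function wangHash(seed) {
  seed = ((seed ^ 61) ^ (seed >>> 16)) >>> 0;
  seed = Math.imul(seed, 9) >>> 0;
  seed = (seed ^ (seed >>> 4)) >>> 0;
  seed = Math.imul(seed, 0x27d4eb2d) >>> 0;
  seed = (seed ^ (seed >>> 15)) >>> 0;
  return seed;
}

// Normalize to [0, 1) like the fract-based hashes do.
function hash01(seed) {
  return wangHash(seed) / 4294967296;
}
```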