To convert from clip space to world space, I simply multiply my clip-space point by `mat4 clipToWorldMatrix = inverse(projectionMatrix * viewMatrix * modelMatrix);`. This strategy works with an orthographic camera, but it does not work with a perspective camera.
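For concreteness, the conversion looks roughly like this (a minimal sketch, not copied verbatim from the fiddle; the inverse can be computed on the CPU or in the shader):

```glsl
uniform mat4 clipToWorldMatrix; // inverse(projectionMatrix * viewMatrix * modelMatrix)

// Map a clip-space point (x, y, z each in [-1, 1]) back to world space.
vec3 clipToWorld(vec3 clipPos) {
    vec4 world = clipToWorldMatrix * vec4(clipPos, 1.0);
    return world.xyz; // with an orthographic projection, world.w stays 1.0
}
```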
A bit of background on what I am trying to do:
Overall, I am using a fullscreen quad as a “window” into a volumetrically rendered world. That is, from each fragment of the fullscreen quad I raycast into my data structure and render the collisions. The start and end of each fragment’s ray are two points located on the near and far clipping planes: the variables `worldCoordNear` and `worldCoordFar`, respectively. In the vertex shader I calculate `worldCoordNear` and `worldCoordFar` and pass them to the fragment shader. Then, in the fragment shader, I raycast between those points and color the fragment based on its intersection with my data structure (a single cube for now).
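The vertex shader work is essentially this (a sketch of my approach; the quad’s `position` attribute is assumed to already be in clip space, and the uniform name is illustrative):

```glsl
uniform mat4 clipToWorldMatrix; // inverse(projectionMatrix * viewMatrix * modelMatrix)

varying vec3 worldCoordNear;
varying vec3 worldCoordFar;

void main() {
    // Push this quad corner onto the near (z = -1) and far (z = +1) clip planes,
    // then take it back to world space with the inverse matrix (no divide by w;
    // this is the conversion described above).
    worldCoordNear = (clipToWorldMatrix * vec4(position.xy, -1.0, 1.0)).xyz;
    worldCoordFar  = (clipToWorldMatrix * vec4(position.xy,  1.0, 1.0)).xyz;

    gl_Position = vec4(position.xy, 0.0, 1.0);
}
```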
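And the fragment shader boils down to a segment-vs-box test; a minimal sketch, assuming a unit cube centered at the origin in place of my real data structure:

```glsl
varying vec3 worldCoordNear;
varying vec3 worldCoordFar;

void main() {
    // Parametrize the ray segment as p(t) = near + t * (far - near), t in [0, 1].
    vec3 ro = worldCoordNear;
    vec3 rd = worldCoordFar - worldCoordNear;

    // Standard slab test against the axis-aligned cube [-0.5, 0.5]^3.
    vec3 invDir = 1.0 / rd;
    vec3 tA = (vec3(-0.5) - ro) * invDir;
    vec3 tB = (vec3( 0.5) - ro) * invDir;
    float tNear = max(max(min(tA.x, tB.x), min(tA.y, tB.y)), min(tA.z, tB.z));
    float tFar  = min(min(max(tA.x, tB.x), max(tA.y, tB.y)), max(tA.z, tB.z));

    if (tFar >= max(tNear, 0.0) && tNear <= 1.0) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // hit: solid red
    } else {
        discard; // miss: show whatever is behind the quad
    }
}
```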
I’ve created a working implementation of this strategy using an orthographic camera: [done] three.js volumetric rendering onto fullscreen quad with orthographic camera (JSFiddle). You can see that the volumetrically rendered cube (solid red) is the same size as the mesh cube (green wireframe).
However, this strategy does not work with a perspective camera: [bug] three.js volumetric rendering onto fullscreen quad with perspective camera (JSFiddle). I see a few indications that my raycast positions are wrong: the cube looks too big, and it still renders as if the projection were orthographic. So where am I going wrong with the perspective camera that makes the cube come out the wrong size?