Reverse Z infinite projection

A while ago, about a decade now in fact, I came across an article by Nathan Reed, “Depth Precision Visualized”. It was a fascinating read, and by now the concept of reverse Z projection is a fairly uncontentious one. To borrow Nathan’s diagrams, here’s what the standard Z projection looks like (what three.js uses by default):

And here’s reverse Z:

This is why we have had a log-depth implementation in three.js, and why the new Camera.reversedDepth property has appeared (false by default).

A brilliant resource, “WebGPU samples”, has a great visual demo for this as well:

I’ve been using reverse Z in Shade from the start; it was one of the design parameters I set for myself, as that’s just a no-brainer.

Something that I marked for myself as a curiosity 10 years ago and moved on from was the bit that Nathan wrote about infinite projection. I was thinking to myself “yeah, cool, but what’s the point?” and I successfully forgot about that part.

Now here we are, years later, and I re-discovered that article again.

So, I will attempt to explain in practical terms why an infinite far clipping plane is actually good, and why I integrated it into Shade as a default.

Math is simpler.

Infinite far plane doesn’t make math more complex. It makes it significantly simpler.

Here’s how you typically re-construct linear depth:

float convert_depth_to_view(in float d){
    // remap device depth 0..1 to OpenGL NDC -1..1
    float d_n = 2.0*d - 1.0;

    float f = fp_f_camera_far;
    float n = fp_f_camera_near;

    float fn = f*n;

    float z_diff = f - n;

    float denominator = (f + n - d_n * z_diff );

    // positive view-space depth
    float z_view = (2.0*fn) / denominator;

    // distance past the near plane
    return (z_view - n);
}

Honestly - a pain. Part of this pain is the -1 … 1 clipping bounds. But in WebGPU, the clip-space depth range is 0…1, which makes our lives easier.

Here’s the equivalent for infinite far in WebGPU:

fn convert_depth_to_view( d: f32 ) -> f32 {
    // reverse Z, infinite far, 0..1 depth range: z_view = near / d
    return fp_f_camera_near / d;
}

Setting up actual matrices is also simpler.

Manual user parameter

Camera.far is a widely accepted and well-understood parameter. In fact, it is so well understood and accepted that when I was implementing this feature in my engine I kept asking myself, “Is it really OK to take the parameter away?” But in reality, as a programmer implementing some kind of 3D experience, you don’t care about this parameter; you just set it to the smallest value you can get away with.

Removing this parameter is an overall good. Here’s an example from Shade, with different levels of zoom:


This is a 30 meter-long building viewed from 5,000 meters away

You’ll have to take my word for it - but there is no visible z-fighting or precision loss, and you get it for free.

Problems

It would be remiss of me to not mention some of the issues I personally ran into implementing this.

  1. Since device depth 0 now maps to true infinity, you have to have custom logic to reconstruct the view direction for the background. If you just use the standard uv + depth -> position_ws logic, it’s going to fail spectacularly: NaNs everywhere, and you’re going to have a black sky box.
  2. Clustered techniques, such as clustered lighting or volumetrics, pretty much rely on a finite far plane. There are two ways to deal with this:
    • Use a custom fixed far plane just for those techniques. The effects of the clustered techniques will simply cut off at that point.
    • Reserve the last slice of the cluster for infinity. This makes the math a bit of a headache, but it should be a fairly elegant solution. I haven’t tried it myself yet, but I can see myself going for it in the future.
  3. Cascaded Shadow Maps. The CSM technique relies on slicing up a view frustum, and again, the fact that it goes out to infinity is an issue. It’s a bigger issue than it is for clusters; I won’t go into too much detail as it’s not relevant here, but trying to stretch a texture over an infinite distance, even if mathematically possible, will produce pointless results. So you are forced to use some kind of fixed cut-off here.
  4. Kind of related to the previous point: there is no infinite far for orthographic projection. So you’ll have to stick with a mixed system.
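For the first issue, here is a sketch of what the guarded reconstruction can look like (all names and parameters here are illustrative, not Shade’s actual code):

```javascript
// Sketch: uv + device depth -> view-space position, reverse Z with infinite far.
// Device depth 0 is true infinity, so it must be handled as a direction,
// not a position, or the division below produces Infinity/NaN garbage.
function reconstructViewPos(uv, deviceDepth, near, tanHalfFovY, aspect) {
  // remap uv 0..1 to NDC -1..1
  const ndcX = uv[0] * 2 - 1;
  const ndcY = uv[1] * 2 - 1;
  if (deviceDepth <= 0) {
    // background: return a unit view-space ray direction instead
    const dir = [ndcX * tanHalfFovY * aspect, ndcY * tanHalfFovY, -1];
    const len = Math.hypot(dir[0], dir[1], dir[2]);
    return { background: true, dir: dir.map((c) => c / len) };
  }
  // the one-liner from earlier: z_view = near / d
  const zView = near / deviceDepth;
  return {
    background: false,
    pos: [ndcX * tanHalfFovY * aspect * zView, ndcY * tanHalfFovY * zView, -zView],
  };
}
```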

These issues, however, are nice issues to have, because all of that extra view distance you’re getting is free. You didn’t have these issues before, because you couldn’t render anything out there.
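For the clustered case, reserving the last slice for infinity (the second workaround above) might look something like this sketch; the slice counts and the cutoff distance are illustrative, not values from Shade:

```javascript
// Sketch: exponential depth slicing for clustered lighting, with the last
// slice reserved for "infinity" (everything past a chosen cutoff).
function depthToSlice(zView, numSlices, clusterNear, clusterFar) {
  // everything at or beyond the cutoff lands in the reserved last slice
  if (zView >= clusterFar) return numSlices - 1;
  // slices 0 .. numSlices-2 cover clusterNear..clusterFar exponentially
  const t = Math.log(zView / clusterNear) / Math.log(clusterFar / clusterNear);
  return Math.max(0, Math.min(numSlices - 2, Math.floor(t * (numSlices - 1))));
}
```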


Just to clarify -

Are there certain changes that we three.js WebGPU programmers need to make to implement this improvement, e.g. always use a logarithmic scale, change camera.far, etc.? I currently work with big grids in my programs, so I need ranges from, say, 0.01 (for nearby shadows) to, say, 150 miles for distant features, like mountains or cities.

Or are these improvements that the three.js creators will have to implement?

Or both?


Hey Phil,

Are there certain changes that we three.js WebGPU programmers need to do to implement this improvement […]
Or are these improvements that the three.js creators will have to implement?

That’s a good question. It’s a bit of both. What you can do today, and what I would recommend, is to set camera.reversedDepth to true for every project moving forward. To do that you’ll need to initialize the Renderer with the appropriate flag:

new WebGLRenderer({ reversedDepthBuffer: true })

[edit] thanks to @grml for the instructions

As for the infinite far plane, this is something that can be done on your end by modifying the projection matrix directly, but you’d have to disable three.js’s automatic matrix management for cameras. I’m a bit rusty on that front, so I don’t know exactly how that’s done today. You could just set camera.far = Infinity and hope for the best, but I’d be a little surprised if it were as simple as that.

One thing to watch out for: any post-processing stack you might be using will likely expect depth NOT to be reversed, and will likely expect the far distance to be finite. So if you’re using things like SSAO, or anything else of that nature, expect further work.


I tried setting camera.reversedDepth = true as part of the following series of instructions:

let camera = new THREE.PerspectiveCamera(45, CamAsp, 0.1, 100000);
camera.rotation.order = "YXZ";
camera.reversedDepth = true; // error
camera.updateProjectionMatrix(); // required?

where CamAsp = window.innerWidth/window.innerHeight.

and got this error message:

Uncaught TypeError:
Cannot set property reversedDepth of #<Camera> which has only a getter

What the heck is a “getter”?


It’s not possible to change the setting per-camera, you need to set reversedDepthBuffer: true when you create the renderer instead.

A getter is a member function that gets called when you read a property of an object. get - JavaScript | MDN
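To make the error message concrete, here’s a minimal sketch of a getter-only property; the class here is a stand-in for illustration, not three.js’s actual Camera:

```javascript
// Minimal illustration of a getter-only (read-only) property,
// similar to how Camera.reversedDepth is exposed.
class Camera {
  get reversedDepth() {
    // computed on read; there is no matching setter
    return true;
  }
}

const cam = new Camera();
console.log(cam.reversedDepth); // reading the property calls the getter
```

In strict mode (which ES modules use), assigning to a getter-only property throws exactly the TypeError you saw; in sloppy mode the assignment is silently ignored. Either way, the value never changes.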


So is this the correct command?

renderer.reversedDepth = true;

I don’t get an error message, but that could mean nothing.

I tried:

renderer.camera.reversedDepth = true;

and that didn’t work.

renderer = new WebGLRenderer({ reversedDepthBuffer: true })

I don’t know about WebGPU, the documentation says nothing about this, or rather I couldn’t find it.

Edit: Does three.js WebGPU support reverse z-buffer?


So reversedDepth may not be implemented for WebGPU?

I see from the list of r182 milestones that fixing the reversedDepthBuffer for WebGL is first on their list.

I am testing reversedDepth on one of my programs with an object that has a significant flicker at a certain distance (e.g. 1 mile). The flicker is mostly solved using the logarithmicDepthBuffer. I assume that reversedDepthBuffer would have a similar effect?

For what it is worth, the far value of the camera does have one benefit: it cuts off the portions of objects beyond it. This is especially apparent if you have something like a checkerboard that extends out past the far value. Without a far value, you would see the entire checkerboard; with one, you would see only the portion within the far radius. But in other situations this cut-off might not be helpful, e.g. if it cut off part of a distant planet that is otherwise clearly visible.

A getter is used to retrieve a value; a setter is used to set one. Basically, if you just have a getter, the property is read-only.
