These two functions will give you the visible height and width in your scene at a given distance from a PerspectiveCamera.

This is useful if you want to create an object that spans the full visible width/height of the view, or to place objects at the screen edges, etc.

Here’s the code (ES6):

const visibleHeightAtZDepth = ( depth, camera ) => {
  // compensate for cameras not positioned at z=0
  const cameraOffset = camera.position.z;
  if ( depth < cameraOffset ) depth -= cameraOffset;
  else depth += cameraOffset;

  // vertical fov in radians
  const vFOV = camera.fov * Math.PI / 180;

  // Math.abs to ensure the result is always positive
  return 2 * Math.tan( vFOV / 2 ) * Math.abs( depth );
};

const visibleWidthAtZDepth = ( depth, camera ) => {
  const height = visibleHeightAtZDepth( depth, camera );
  return height * camera.aspect;
};

There are quite a few examples of this function around the web, but I haven’t seen any that compensate for camera positioning.

Note that I’ve only tested this when the camera is facing in the default direction (i.e. along the negative z-axis) and the depth considered is further along the axis, although I think it will work in other directions as well.
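For reference, here is a minimal usage sketch. The camera here is a mock object with only the properties the functions read (fov, aspect, position.z), so it runs without Three.js; with a real PerspectiveCamera you would pass the camera itself.

```javascript
// Mock stand-in for a THREE.PerspectiveCamera sitting at z = 0.
const camera = { fov: 45, aspect: 16 / 9, position: { z: 0 } };

const visibleHeightAtZDepth = ( depth, camera ) => {
  const cameraOffset = camera.position.z;
  if ( depth < cameraOffset ) depth -= cameraOffset;
  else depth += cameraOffset;
  const vFOV = camera.fov * Math.PI / 180;
  return 2 * Math.tan( vFOV / 2 ) * Math.abs( depth );
};

const visibleWidthAtZDepth = ( depth, camera ) =>
  visibleHeightAtZDepth( depth, camera ) * camera.aspect;

// Size of the visible area at depth -10 (10 units in front of the camera):
const height = visibleHeightAtZDepth( -10, camera );
const width = visibleWidthAtZDepth( -10, camera );
// With a real mesh you would then do, e.g.:
//   mesh.scale.set( width, height, 1 );
console.log( width, height );
```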

Nice! You might want to consider making a node module out of something like this.

It could be just a monkey patch for the camera, something like require('three-camera-world-size')(THREE), after which THREE.Camera would have your functions bound.

Maybe I can use a similar technique to figure out which plane (at which depth) that sizing matches with physical pixels on the screen? Basically this question: How do we size something using screen pixels?

Maybe I’d have to do a binary search: divide the depth by two and see which mid-plane is closer to the size of the viewport, then proceed into that area and recurse. This seems like an “I don’t know the math so I’m just going to search for it” sort of method.

I think there’s also a real way to figure this out without a testing technique like binary search, but I’m not sure what that might be yet.
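For what it's worth, since the height formula is height = 2 * tan(vFOV / 2) * depth, it can be inverted algebraically, with no search needed. A sketch (the camera here is a mock with only the fov property used, and viewHeight would be the drawing-buffer height in pixels):

```javascript
// Direct inversion of height = 2 * tan( vFOV / 2 ) * depth,
// solving for the depth at which the visible height equals viewHeight.
const depthAtVisibleHeight = ( viewHeight, camera ) => {
  const vFOV = camera.fov * Math.PI / 180; // vertical fov in radians
  return viewHeight / ( 2 * Math.tan( vFOV / 2 ) );
};

// Example: a 45-degree camera and a 768px-tall drawing buffer.
const camera = { fov: 45 };
const depth = depthAtVisibleHeight( 768, camera );
// Sanity check: plugging the depth back in recovers the height.
console.log( 2 * Math.tan( camera.fov * Math.PI / 360 ) * depth ); // ~768
```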

Here’s the binary search that (theoretically) finds the depth at which sizing matches pixels of the viewport:

function findScreenDepth( camera, renderer ) {
  const { near, far } = camera
  const { height: physicalViewHeight } = renderer.getDrawingBufferSize()
  console.log( window.innerHeight, physicalViewHeight )
  const threshold = 0.000001

  return _findScreenDepth( near, far )

  function _findScreenDepth( near, far ) {
    const midpoint = ( far - near ) / 2 + near
    const midpointHeight = visibleHeightAtZDepth( -midpoint, camera )

    if ( Math.abs( ( physicalViewHeight / midpointHeight ) - 1 ) <= threshold )
      return midpoint

    if ( physicalViewHeight < midpointHeight )
      return _findScreenDepth( near, midpoint )
    else if ( physicalViewHeight > midpointHeight )
      return _findScreenDepth( midpoint, far )
    else if ( midpointHeight == physicalViewHeight ) // almost never happens
      return midpoint
  }
}
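As a sanity check, the bisection should converge to the same depth as the closed-form inversion viewHeight / (2 * tan(vFOV / 2)). A standalone version (mock camera, no renderer, iterative instead of recursive, but the same idea):

```javascript
const camera = { fov: 45, near: 1, far: 10000, position: { z: 0 } };
const physicalViewHeight = 768; // stand-in for the drawing-buffer height

const visibleHeightAtZDepth = ( depth, camera ) => {
  const cameraOffset = camera.position.z;
  if ( depth < cameraOffset ) depth -= cameraOffset;
  else depth += cameraOffset;
  const vFOV = camera.fov * Math.PI / 180;
  return 2 * Math.tan( vFOV / 2 ) * Math.abs( depth );
};

// Bisection over [near, far], as in findScreenDepth above.
function findScreenDepth( camera, viewHeight ) {
  let { near, far } = camera;
  const threshold = 0.000001;
  while ( true ) {
    const midpoint = ( far - near ) / 2 + near;
    const midpointHeight = visibleHeightAtZDepth( -midpoint, camera );
    if ( Math.abs( viewHeight / midpointHeight - 1 ) <= threshold )
      return midpoint;
    if ( viewHeight < midpointHeight ) far = midpoint;
    else near = midpoint;
  }
}

const searched = findScreenDepth( camera, physicalViewHeight );
const exact = physicalViewHeight / ( 2 * Math.tan( camera.fov * Math.PI / 360 ) );
console.log( searched, exact ); // agree to roughly 6 significant figures
```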

But the results are not as accurate as I hoped. In the following codepen, you see the depth is logged to console.

(Open the demo on desktop, and make sure the viewable area is taller than in the embed below, or else the red line does not show.)

If you set the height value of the red box to match the height value of the window, you’ll notice the red box is slightly taller than the viewport, and you have to give it a size that is slightly smaller than the window for it to fit just right.

Maybe this is because of floating point error?

What would be the correct way to get this depth, if not with this binary search?

Nevermind, it works almost perfectly if I supply the exact floating point values programmatically, to the dimensions of the box:

The two lines are (basically) the same exact size. The teal line on the left is WebGL and the pink line on the right is a <div> absolutely positioned on top of the canvas (see element inspector).

Both lines have a height of 50px. Cool!

This works, but I think there might be a more precise way. I might be able to use this to map Three.js Object3Ds to DOM coordinate space…

Thanks @looeee, this is helping align some things with DOM.

For some reason though, things aren’t perfectly centered. In this example, you see the teal square behind the DIV element is not exactly lined up:

I believe that by default Three.js centers everything perfectly, and I thought DOM would too (with the CSS I’ve given it). Trying to find why this happens…

Sorry, I updated the last example: I removed the -5 X position from the teal square.

But still, in the previous example you can see the teal sticks out beyond the pink, but I’m expecting (hoping) for it to be precisely hidden behind the pink. Maybe there’s floating-point error?

If I translate the pink square up with -52% instead of -50% then I can manage to cover up the teal square:

Are you looking at the embedded codepen (i.e. here on the discourse site) ?
Embedded pens do some weird stuff with resizing. If I open it in a new window the squares match exactly.

EDIT: actually I also had to remove the

body {
  perspective: 800px;
}

So actually the CSS was making the pink square inaccurate, not the other way around.

Ah! Interesting! It does match perfectly without CSS perspective.

I thought that no matter what value of perspective is applied, positioning of DOM content on the z=0 plane should be exactly the same whether there's perspective or not.

Could this be an aliasing difference introduced when there is perspective? I need to research the browser implementations. I find that no matter which value of CSS perspective I apply, the offset is always the same as long as any value of perspective exists, so it makes me think there’s something about aliasing at play.

This is what I wanted to figure out next anyway: matching the CSS perspective with the Three.js perspective so that I can transform both the mesh and the <div> in unison. Maybe this will help point to what the issue is.

EDIT: Dang, setting antialias: true for the renderer and leaving the CSS perspective in place didn’t work, there’s no difference:

@looeee You won’t believe it: the div shifts because the <canvas> element is an inline element by default, and somehow that affects the div when perspective is applied. Making it display: block solves the problem! See:

Awesome, now I can move on! Thanks for your visibleHeightAtZDepth and visibleWidthAtZDepth functions.

Aha! After taking a look at this again, I realized that instead of using these functions and a binary search, matching the Three.js perspective to CSS3d perspective boils down to a short equation:

const perspective = 800;

// this equation
const fov = 180 * ( 2 * Math.atan( innerHeight / 2 / perspective ) ) / Math.PI;

// then align the Three.js perspective with the CSS3D perspective:
camera = new THREE.PerspectiveCamera( fov, window.innerWidth / window.innerHeight, 1, 10000 );
camera.position.set( 0, 0, perspective );
document.body.style.perspective = `${perspective}px`;
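The equation follows from the fact that CSS perspective: Npx places the eye N pixels from the z=0 plane, so tan(fov / 2) = (innerHeight / 2) / perspective. A quick round-trip check (innerHeight hard-coded here for illustration):

```javascript
const innerHeight = 1080; // stand-in for window.innerHeight
const perspective = 800;

// fov (in degrees) such that the z=0 plane maps 1:1 to CSS pixels
const fov = 180 * ( 2 * Math.atan( innerHeight / 2 / perspective ) ) / Math.PI;

// Inverting the equation recovers the CSS perspective distance:
const recovered = ( innerHeight / 2 ) / Math.tan( fov * Math.PI / 360 );
console.log( fov, recovered ); // recovered ~ 800
```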

Here’s a simple example. Note that the blue/pink color is due to the teal WebGL plane rendering on top of the pink DOM plane (zoom the camera to displace the WebGL render behind the DOM element):

At this point, the WebGL object and the DOM element can be moved in unison in the same 3D space. For example, let’s animate the positions and rotation of both (again if you zoom, you can see which one is the WebGL and which one is the DOM):

As a follow-on feature, we could also drive the DOM position with the OrbitControls.

@looeee I don’t understand why you subtract (or add) the camera’s z position from the depth before calculating the visible height. Is it not assumed that depth is given in view space? And in which space will it always be correct to transform to view space by subtracting z when z>depth and adding z when z<depth?

@trusktr What you did with the binary/bisection search was basically to invert the function. Looking at the stripped-down function (with depth assumed positive and from view space origin and in view space units):

height = f(depth) = 2 * tan( vFOV / 2 ) * depth

this is invertible just by moving things around a bit: