How to understand Vector3.project and NDC space

I’m having trouble understanding the Vector3.project method and exactly what NDC space is.

My understanding is that a Vector3 in NDC space is supposed to have all 3 coordinates in the range -1 to 1. So when I call vector.project(camera) where vector is a Vector3 and camera is a PerspectiveCamera, the result should be a Vector3 which has x, y, and z coordinates all in the range -1 to 1.

The below code snippet is what I’m trying to use to project an object’s world position into screen coordinates. camera is a PerspectiveCamera and object is an Object3D.

// Update every matrix I can think of, for both the camera and the object
camera.updateMatrixWorld();
camera.updateProjectionMatrix();
camera.updateWorldMatrix();
camera.updateMatrix();
object.updateMatrixWorld();

const width = window.innerWidth;
const height = window.innerHeight;

// Take the object's world position and project it into NDC
const pos = new THREE.Vector3();
pos.setFromMatrixPosition(object.matrixWorld);
pos.project(camera);

// Map NDC (each axis in [-1, 1]) to pixel coordinates, origin at top-left
pos.x = (pos.x + 1) * width / 2;
pos.y = -(pos.y - 1) * height / 2;

If I console.log() the pos variable just after calling pos.project(camera), I am getting values that look like nonsense. For an object that appears directly in the center of the scene on-screen, I’m getting values like pos.x = -1.10302... and pos.y = 6.50278.... For an object that appears in the center top of the screen, I’m getting values like pos.x = -2.03907... and pos.y = 13.10969.... Like I said above, my understanding of NDC space says these values should all be in the range -1 to 1, and an object directly in the center of the screen should have values pos.x = 0 and pos.y = 0. So I don’t know if my understanding of NDC space is wrong, or if there’s something going wrong with the way I’m calling this method.

My final values for pos.x and pos.y give screen coordinates that aren’t even close to lining up with the Three.js objects on-screen, probably because of whatever is going wrong with my use of Vector3.project. I don’t know what I’m doing wrong here. I’ve read everything I can find about projecting vectors into screen space, on StackOverflow and here, and it all tells me to do pretty much what I’m doing, but it isn’t working. Note that I’m already updating every matrix I can think of for both the camera and the object, so that shouldn’t be the issue, as it has been for some people with similar problems.

I’ve been beating my head against the wall about this for almost 3 whole days now and gotten nowhere. Am I missing something?

Happy to provide more clarification or code snippets on anything as necessary. Thanks!


I’m answering this from my phone, so I wasn’t able to digest the whole post. You will only get values from -1 to 1 if your vector is actually inside the camera’s frustum. A z value outside that range would mean the point is beyond the near/far planes, for example. Are you sure it’s not actually a super small number with an exponent?
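Roughly what I mean, as a minimal sketch (the camera values here are made up, and THREE is assumed to be in scope):

// A point inside the frustum projects into [-1, 1] on every axis;
// a point beyond the far plane projects to z > 1
const camera = new THREE.PerspectiveCamera(50, 16 / 9, 0.1, 10); // near = 0.1, far = 10
camera.position.set(0, 0, 5);
camera.updateMatrixWorld(); // keeps matrixWorldInverse current for project()

const inside = new THREE.Vector3(0, 0, 0).project(camera);
const beyondFar = new THREE.Vector3(0, 0, -20).project(camera);
console.log(inside.z);    // ~0.98, inside [-1, 1]
console.log(beyondFar.z); // ~1.01, outside: the point is past the far plane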

I can see the objects in the scene so that would mean they’re in the camera’s frustum, right?

The z components of the vectors seem to always turn out to be slightly less than 1.

Now that you mention it though, it would make a lot more sense for the object directly in the center of the screen if the x and y components were super small numbers with exponents, because they ought to be very close to zero. But there’s no exponent in the values, they’re just regular floating point numbers.


What are your camera and object positions? Try putting the camera at (0, 0, 10) or so, and then a vector at (0, 0, 0).

Ah, also, before projecting it might make sense to multiply the vector with the camera’s matrixWorldInverse.

That’s done with vector.applyMatrix4(matrix), correct? Where matrix is the camera’s matrixWorldInverse.

When I do that, I get x and y coordinate values in the positive or negative hundreds after the projection. The z value is still always slightly less than 1.
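(Worth noting for anyone reading along: Vector3.project already applies the camera’s matrixWorldInverse internally before the projection matrix, so applying it manually first runs the view transform twice, which would produce exactly these out-of-range values. Simplified from the three.js source, vector.project(camera) amounts to:)

vector.applyMatrix4(camera.matrixWorldInverse) // world -> camera (view) space
      .applyMatrix4(camera.projectionMatrix);  // view -> NDC, with perspective divide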

When I place the camera at (0, 0, 1) looking down the z-axis and place a vector at (0, 0, 0) and then project the vector onto the camera, the projected vector ends up being (0, 0, 0.99999…) which is what I would expect. This is without multiplying by the camera’s matrixWorldInverse. It seems to me like any vector located directly in front of the camera (on the camera’s local z-axis, or nearly so) should end up with a similar projection no matter how far from the camera it is or what direction the camera is pointed.
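For reference, the check looks like this (near/far picked arbitrarily):

const camera = new THREE.PerspectiveCamera(50, 16 / 9, 0.1, 100);
camera.position.set(0, 0, 1); // default orientation: looking down the negative z-axis
camera.updateMatrixWorld();

const v = new THREE.Vector3(0, 0, 0).project(camera);
console.log(v); // x = 0, y = 0, z inside [-1, 1]; the exact z depends on near/far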

I think it does need that multiplication, but I’m not sure. You can try rotating the camera a bit to see whether the values change. What I think is happening: when you have the camera on the z-axis, its matrix is the identity.

So if you rotate and still get the exact same value, something is wrong.

But when I apply the camera’s matrixWorldInverse to the regular position of the object I get projected x and y values in the hundreds, which is even further off than it was before. Maybe it’s some other matrix that needs to be applied?

That doesn’t make a whole lot of sense with the way you positioned them. Did you by chance set some scale on your camera by accident? Is it added to the scene? At this point it may be best to set up a fiddle.

Thanks, working on setting up a fiddle now.

Okay, I think I might finally be onto something. Here’s a JSFiddle illustrating the problem I’m having. Notice that you can’t see the object in the scene until you click and drag on the canvas or do something else to move the OrbitControls. The console.log() of all the coordinates demonstrates the same thing as I’m seeing on my main project, but I also tried putting a console.log() of the projected coordinates inside the render loop (this makes the browser tab freeze if you let it run for longer than a few seconds, so be aware of that if you try it).

It turns out that before moving the OrbitControls the projected coordinates look wrong, but after moving them they actually line up with what I expected. So I think this has something to do with updating the camera, which I thought those update calls were taking care of, but maybe not? In my main project all of this vector-projection code has been sitting outside the render loop, so I’m going to go back, try putting it inside the render loop, and see if that works.

Side note, my main project doesn’t seem to have this issue that the JSFiddle has where the camera doesn’t actually look in the direction you pointed it until you move the OrbitControls. Not sure what’s going on there.

Alright, it seems to be solved. I’m pretty sure the issue was that the console.logs I was using to look at the numbers were being called before the scene had finished rendering, which is why the numbers didn’t match what I was expecting. I should have just gone ahead and coded the rest of what I wanted instead of checking things with console.logs along the way. Everything works now that I’m updating the values inside the render loop. Thanks for your input!
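For anyone who finds this later, the working pattern looks roughly like this (toScreenPosition is just my name for the helper; renderer, scene, camera, and object come from the usual setup):

function toScreenPosition(object, camera, width, height) {
    const pos = new THREE.Vector3();
    pos.setFromMatrixPosition(object.matrixWorld); // object's world position
    pos.project(camera); // world space -> NDC; each axis in [-1, 1] when on-screen
    return {
        x: (pos.x + 1) * width / 2,  // NDC x -> pixels, left to right
        y: (1 - pos.y) * height / 2  // NDC y -> pixels, top to bottom
    };
}

function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera); // render first, so all matrices are up to date
    const screen = toScreenPosition(object, camera, window.innerWidth, window.innerHeight);
    // ...use screen.x / screen.y here, e.g. to position an HTML label over the object
}
animate();

The key point is that renderer.render updates the world and camera matrices, so projections computed after it reflect what is actually on-screen.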


I seem to be running into the same problem as you! But my goal is to get the object’s position on the canvas from its matrixWorld coordinates. I always get the wrong (x, y) on the canvas. My English is a little poor. It looks to me like you didn’t actually change anything in your code to fix this. Here is my code. Can you help me find what is wrong?

function getXYFromPoVAndRotaion(name, camera, width, height) {
    let obj = earthGroup.getObjectByName(name)
    console.log(obj)

    let pov = obj.position

    let worldPosition = new THREE.Vector3()
    obj.getWorldPosition(worldPosition)

    let modalMat = obj.matrixWorld;
    worldPosition.applyMatrix4(modalMat);
    return vec2WindomXY(worldPosition, camera, width, height)
}

function vec2WindomXY(posVec, camera, width, height) {
    let vector = posVec.project(camera)
    // vector.normalize();
    console.log(vector)
    let halfW = width / 2, halfH = height / 2
    return {
        x: Math.round(vector.x * halfW + halfW),
        y: Math.round(halfH - vector.y * halfH)
    }
}

Thanks for your answer.

You can take a look at this example, which converts the world coordinates of a mouse position or a point in space to the local coordinate system of a transformed object. So it accomplishes the conversion you’re attempting in your first function:

https://stackoverflow.com/a/78850821/16054918

// Inside the render loop:
// Compute the inverse of the object's world matrix
rotatingObject.updateMatrixWorld(); // Ensure the matrix world is up-to-date

// Create a new matrix and copy the world matrix of the geometry
let inverseMatrix = new THREE.Matrix4().copy(rotatingObject.matrixWorld).invert();

// Calculate the mouse position in local coordinates
let markerLocalPosition = mouse.position.clone().applyMatrix4(inverseMatrix);

// Return the updated mouse position to the desired location
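(A likely fix for the earlier function, for future readers: obj.getWorldPosition already returns the position in world space, so applying obj.matrixWorld on top of it transforms the point twice. A minimal corrected sketch, keeping the names from the post above:)

function getXYFromPoVAndRotaion(name, camera, width, height) {
    let obj = earthGroup.getObjectByName(name)

    // getWorldPosition already yields world coordinates,
    // so obj.matrixWorld must not be applied on top of it
    let worldPosition = new THREE.Vector3()
    obj.getWorldPosition(worldPosition)

    return vec2WindomXY(worldPosition, camera, width, height)
}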