How do we size something using screen pixels?



What I mean is: how do we position things (the camera and the objects we wish to draw) so that when we give something a size of 10 (e.g. a BoxGeometry with an X-axis dimension of 10), the box appears 10 screen pixels wide?

Another question might be: If we have a camera, how far from the camera on the camera’s Z axis is the plane in which a size of 1 on the X or Y axis corresponds to exactly 1 screen pixel? How might we figure this out?


How can we make Three.js scenes use DOM-style coordinates?
Functions to calculate the visible width / height at a given z-depth from a perspective camera
Units in Three.js

I found a way. Using @looeee’s visibleHeightAtZDepth and visibleWidthAtZDepth functions, I was able to do a binary search to find the plane perpendicular to the camera’s line of sight whose dimensions match the viewport dimensions (there’s a better way, but this works for now). On that plane, one Three.js unit is equivalent to one CSS pixel.
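For reference, the two helpers from @looeee’s linked article look roughly like this (a sketch, not the exact article code; `camera` is assumed to be a `THREE.PerspectiveCamera`, or any object with numeric `fov`, `aspect`, and `position.z`):

```javascript
// Visible height (in world units) of the plane at the given z-depth,
// as seen by a perspective camera looking down -Z.
const visibleHeightAtZDepth = (depth, camera) => {
  // Compensate for a camera that is not positioned at z = 0.
  const cameraOffset = camera.position.z;
  if (depth < cameraOffset) depth -= cameraOffset;
  else depth += cameraOffset;

  // camera.fov is the vertical field of view in degrees; convert to radians.
  const vFOV = (camera.fov * Math.PI) / 180;

  // Math.abs keeps the result positive on either side of the camera.
  return 2 * Math.tan(vFOV / 2) * Math.abs(depth);
};

// Visible width is just the visible height scaled by the aspect ratio.
const visibleWidthAtZDepth = (depth, camera) =>
  visibleHeightAtZDepth(depth, camera) * camera.aspect;
```

For example, with a 90° vertical FOV and the camera at z = 10, the plane at depth 0 is 20 units tall; the binary search just walks `depth` until that value equals the viewport height in pixels.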

For example, here’s a demo where the <div> element is positioned using top: 50%; left: 50%; transform: translate(-50%, -50%), i.e. using the DOM coordinate system, where the origin (0, 0, 0) is at the top left of the viewport and positive Y goes downward. When you start dragging, you’ll see the teal square that was hiding underneath the pink square. Both of them rotate in unison and stay perfectly aligned until you begin dragging the Three.js camera. The pink <div> is 50px wide and tall, and the Three.js Mesh is 50 units wide and tall, so the Mesh is effectively sized in pixels:
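The coordinate conversion itself is simple once the camera is set up so that one unit equals one CSS pixel: going from a DOM-style point (origin at the top left, +Y down) to a Three.js scene coordinate (origin at the viewport center, +Y up) is just a translation plus a Y flip. A minimal sketch, with a hypothetical function name and explicit viewport arguments:

```javascript
// Convert a DOM-style coordinate (origin at top-left, +Y down) into a
// Three.js scene coordinate (origin at viewport center, +Y up).
// viewportWidth / viewportHeight are the viewport's CSS pixel dimensions.
function domToScene(domX, domY, viewportWidth, viewportHeight) {
  return {
    x: domX - viewportWidth / 2, // shift origin from the left edge to the center
    y: viewportHeight / 2 - domY, // shift origin to the center and flip Y
  };
}
```

So in a 1024×768 viewport, the DOM origin (0, 0) maps to the scene point (-512, 384), and the viewport center maps to (0, 0).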


And what’s the practical use?


@prisoner849, the use case is, for example, what I described here (also listed above). Basically, it makes it easy to enhance traditional web content with WebGL.

The following sample shows DOM content mixed with WebGL, using a combination of Three.js’ CSS3DRenderer and WebGLRenderer:

Notice that the squares are DOM elements (you can see them in the element inspector), that the WebGL sphere casts its shadow onto those elements, and that the moving light also shines on them. If you run it a few times, you’ll notice that sometimes the sphere also intersects the elements, as if both were in the same 3D world.

I’m making an HTML API to make it super easy (abstracting away “mixed mode” behind the HTML interface). In my case, I’m not going to be using the CSS3DRenderer, as I have my own CSS3D renderer and I will be mapping the WebGL objects (Three.js) to the DOM coordinate space (rather than mapping DOM elements into Three.js coordinate space like CSS3DRenderer does), which is why I opened this thread here. I’m going to post my full working solution when it’s ready over at How can we make Three.js scenes use DOM-style coordinates? (that thread includes an HTML snippet of what mixing DOM with WebGL will look like).

Here’s a sample scene without any “mixed mode” (only WebGL) because mixed mode won’t be ready for a few weeks:

I am posting progress over at :smiley:



Are you familiar with these tutorials?

The stuff you’re doing seems pretty awesome, but you seem to be missing some “basics” (basic my ass, these are tough concepts :D). It might be worth getting more familiar with this kind of stuff.


I was missing some basics, but since posting my last comment in this thread, I figured out the simple formula after doing some basic geometry with the frustum on paper:
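For anyone following along, the geometry boils down to one right triangle: half the viewport height, the camera-to-plane distance, and half the vertical FOV. The plane where one unit equals one screen pixel sits at the distance where the frustum’s visible height equals the viewport height in pixels. A sketch of that formula (the function name is mine, not from the thread):

```javascript
// Distance from a perspective camera at which the visible plane is exactly
// `viewportHeight` world units tall -- i.e. where 1 unit == 1 CSS pixel.
// fovDegrees is the camera's vertical field of view.
function distanceForPixelPerfectPlane(fovDegrees, viewportHeight) {
  const halfFov = (fovDegrees * Math.PI) / 180 / 2;
  // Half the viewport height over tan(fov / 2), straight from the right
  // triangle formed by the camera, the plane, and the frustum edge.
  return viewportHeight / 2 / Math.tan(halfFov);
}
```

As a sanity check, plugging that distance back into the visible-height formula, 2 · tan(fov/2) · distance, returns exactly the viewport height, so no binary search is needed.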

It’s a fun, satisfying process. :blush:


I made this with it :smiley: :


Second vote for the Scratchapixel tutorials. Even though they’re incomplete and could really do with a spell/grammar check, they’re some of the best technical tutorials I’ve come across.