How can we make Three.js scenes use DOM-style coordinates?



For example, I’d like all positions of all objects to be measured from the top/left, as with DOM/CSS: X increases to the right, starting at 0 from the left edge of a parent/viewport, and positive Y goes downward, starting from the top of a parent/viewport.

I’d like for an object that is added directly to a scene with position 0,0,0 to have its origin (center/middle/whatever-it-is-called) be at the top-left.

Then I’d like its size to be in CSS pixels. For example, if a Sphere’s radius is 10, then the radius would span 10 CSS pixels (px) across the screen when Z=0 (obviously, if Z is not 0, the sphere gets bigger or smaller as it moves away from the Z=0 plane, just like with transforms in CSS). We’d see one quarter of the sphere, because its origin would be at the top-left.
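To make that concrete, this is the scaling rule CSS perspective applies (a sketch with made-up names; the `800` is just an arbitrary perspective value, not anything from the demo):

```javascript
// With CSS `perspective: p`, an element translated by Z on the Z axis is
// scaled by p / (p - z), so an object at z = 0 renders at exactly its
// authored pixel size, and grows as it moves toward the camera.
function screenRadiusPx(radiusPx, z, perspectivePx) {
  return radiusPx * (perspectivePx / (perspectivePx - z));
}

console.log(screenRadiusPx(10, 0, 800));   // 10 — exactly 10 CSS px at z = 0
console.log(screenRadiusPx(10, 400, 800)); // 20 — twice as big halfway to the camera
```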

Currently, when we resize a Three.js viewport, objects also grow or shrink. But I’d like objects to remain the same size, using absolute sizes (like CSS), so sizing is not relative to the viewport width/height. This is how sizing in CSS3D works by default when using units like px, in, etc.
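One way to get that pixel-true sizing (a sketch under my own naming; only the commented-out lines touch actual Three.js API) is to derive the camera distance from the viewport height in CSS px, so that one world unit spans one pixel on the Z=0 plane:

```javascript
// For a perspective camera with vertical field of view `fovDeg`, the
// visible height at distance d is 2 * d * tan(fov / 2). Solving for d so
// that the visible height equals the viewport height in CSS px makes
// 1 world unit == 1 px at z = 0.
function cameraDistanceFor(viewportHeightPx, fovDeg) {
  return (viewportHeightPx / 2) / Math.tan((fovDeg * Math.PI) / 360);
}

const d = cameraDistanceFor(600, 45);
console.log(d); // ≈ 724.26

// Assumed Three.js usage:
// const camera = new THREE.PerspectiveCamera(45, width / height, 1, 10000);
// camera.position.z = d;
// On resize, recompute d from the new height: objects then keep their
// absolute CSS-px size instead of scaling with the viewport.
```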

CSS3DRenderer exists, and I am currently studying it, but I don’t yet understand what all the numbers mean. However, CSS3DRenderer does the opposite of what I want: it converts DOM coordinates to match those of Three.js. I want to instead convert Three.js coordinates to match those of the DOM.

I made this “mixed mode” demo with CSS3DRenderer + WebGLRenderer:


As you might imagine, I’d like to align Three.js objects to DOM objects.

The way it works in that fiddle is using Three.js’ default coordinates, which can be nice for infinite space apps (games, etc).

But for making “web sites” (mostly-2D documents) that are augmented with 3D objects, it’d be easier to have DOM coordinates, so that it is easy to match Three.js positions to things in the page.

Once I get this working, I’m going to wire it up in Infamous (soon to be renamed) so that it is possible to describe “mixed mode” scenes like in the fiddle, but using HTML. This is what I currently plan for it to look like:


    <!-- Easily mix HTML and WebGL content together -->

    <i-scene>
        <i-node rotation="0 30 0" position="20 30 40">
            <h1> Mixed mode! This is just regular HTML content. </h1>
            <svg> ... </svg>
        </i-node>

        <i-node
            has="sphere-geometry phong-material"
            position="30 30 30"
        ></i-node>
    </i-scene>

The i-scene and i-node elements already exist, and can currently be used to easily manipulate DOM in 3D. I’m currently working on the WebGL parts of the API.

How do we size something using screen pixels?

@mrdoob Continuing from GitHub,

what he wants to do is convert CSS3D coordinates to three.js coordinates

I thought that’s what CSS3DRenderer already does: it converts from CSS space to Three.js space, so that DOM objects align with Three.js objects. I’d like to do the opposite: convert Three.js space to DOM space, and align Three.js objects with DOM objects while the DOM objects keep their native transforms.

For example, suppose I have a root “scene” element with a perspective, and all children in the scene have only transform: matrix3d() properties. I want to take that matrix3d() value (converted to a numerical Three.js matrix, of course) and apply it to an Object3D (e.g. a Mesh), so that when the Mesh is rendered with the same matrix as the DOM element, it appears in the same place as the DOM element (with positioning starting from the top/left).
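The matrix hand-off itself is straightforward, since CSS matrix3d() lists its 16 values in column-major order, which is the same order Three.js’ Matrix4.fromArray expects. A sketch (the parser is my own; only the commented lines are assumed Three.js usage):

```javascript
// Hypothetical helper: turn a computed `transform: matrix3d(...)` string
// into the 16-number column-major array that Matrix4.fromArray expects.
// No reordering is needed because both sides are column-major.
function parseMatrix3d(css) {
  return css
    .slice(css.indexOf('(') + 1, css.lastIndexOf(')'))
    .split(',')
    .map(Number);
}

const m = parseMatrix3d('matrix3d(1,0,0,0, 0,1,0,0, 0,0,1,0, 20,30,40,1)');
console.log(m[12], m[13], m[14]); // 20 30 40 — the translation column

// Assumed Three.js usage, bypassing position/rotation/scale:
// mesh.matrixAutoUpdate = false;
// mesh.matrix.fromArray(m);
```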

I think I might be able to achieve this by setting the matrix of a THREE.Scene, where the matrix will ultimately map all world matrices to the DOM space.
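The idea would be a single fixed transform on the scene that flips Y and moves the origin to the top-left corner, so that children can be positioned in DOM-style coordinates directly. A sketch with my own names (the scene-matrix lines are assumed Three.js usage, untested here):

```javascript
// Per-point version of the mapping: DOM coords (origin top-left, +Y down)
// to default Three.js world coords (origin at center, +Y up).
function domToThree(xPx, yPx, widthPx, heightPx) {
  return {
    x: xPx - widthPx / 2,  // origin moves from the center to the left edge
    y: heightPx / 2 - yPx, // +Y flips so it points downward from the top edge
  };
}

console.log(domToThree(0, 0, 800, 600));     // { x: -400, y: 300 } — top-left corner
console.log(domToThree(400, 300, 800, 600)); // { x: 0, y: 0 } — page center

// Assumed Three.js usage — bake the same mapping into the scene so every
// child's world matrix inherits it:
// scene.matrixAutoUpdate = false;
// scene.matrix.makeScale(1, -1, 1).setPosition(-width / 2, height / 2, 0);
```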


I think I might be able to achieve this by setting the matrix of a THREE.Scene, where the matrix will ultimately map all world matrices to the DOM space.

That may work. Give it a try :ok_hand:


Alright, I finally did it. Here’s what I was aiming for (see the HTML):


  • This is a simple demo to show that the “dom elements” intersect each other as well as with WebGL Meshes, and receive shadow and light.
    • Use case: mixing traditional web content with 3D content in an easy way (it’s simple HTML)
    • Try selecting the text, it’s regular HTML text.
    • The second square is contenteditable.
  • I had to figure out how to match Three.js coordinate space to CSS3D coordinate space:
  • Three’s Planes don’t cast shadows
  • Selective lighting is not yet implemented, so the lighting can’t easily be configured to look more realistic. Because of how the transparent meshes that are aligned with the DOM render, the shadows and shine are not as pronounced as on other, solid meshes. Selective lighting would let me easily adjust the lighting to make the “mixed mode” effect look more real.
    • Example of inconsistent lighting that might be fixable with selective lighting: (the shadow on the rotating plane is much lighter than on the background)
    • A workaround mentioned in that issue – of rendering two scenes – is completely non-ideal and creates unwanted complexity and unexpected rendering results.

@mrdoob What are your thoughts on a selective-lighting solution, and on Planes being able to cast shadows? It would make my implementation a lot cleaner: no twiddling with renderer settings, no unnecessary complexity, and no unexpected results or artefacts.