Occlusion Between CSS3D Renderer and WebGL Renderer

Hello fellow engineers!
I’m currently working on masking between CSS3D Renderer and WebGL Renderer~

After reading some previous posts, and since I’m not using React, I tried to learn from this example’s approach =>

So far I’ve found a few ways to do this, but since I’m going to have a lot of objects, I’d like to use method 2:

  1. Cast rays to determine whether any objects are blocking the label, and if so, simply hide the CSS element.
  2. Place a flat mesh with the same size and rotation in the place of the CSS3DObject, and use its blending mode.
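
For reference, method 1 would look roughly like this each frame (a sketch: occluders is whatever set of meshes can block a label, and the per-frame call is assumed):

const dir = new THREE.Vector3();

function updateLabelVisibility(label, camera, occluders, raycaster) {
    // Ray from the camera toward the label's position
    dir.copy(label.position).sub(camera.position);
    const distance = dir.length();
    raycaster.set(camera.position, dir.normalize());

    // If anything is hit before the label, hide the CSS element
    const hits = raycaster.intersectObjects(occluders, true);
    const blocked = hits.length > 0 && hits[0].distance < distance;
    label.element.style.visibility = blocked ? 'hidden' : 'visible';
}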

Here’s how I’m currently creating my CSS3DObject:

import * as THREE from '../js/three/build/three.module.js';
import { CSS3DRenderer, CSS3DObject } from '../js/three/examples/jsm/renderers/CSS3DRendererLocal.js';

export class CSSrender {
    constructor(main) {
        this.main = main;
        this.scene = main.scene;
        this.sceneCSS = main.sceneCSS; // This should be the CSS3D scene
        this.camera = main.camera;
        this.renderer = main.renderer;
        this.CSSrenderer = main.CSSrenderer;
        this.LD = main.Loadings;

        this.Dom = [];
        this.raycaster = new THREE.Raycaster();

    }
    init() {
        this.CSSrenderer.setSize(innerWidth, innerHeight);
        this.CSSrenderer.domElement.style.position = 'absolute';
        this.CSSrenderer.domElement.style.top = '0px';
        $(this.CSSrenderer.domElement).addClass("canvasCSS3D");
        $(document.body).prepend(this.CSSrenderer.domElement);

        this.addHtmlDom_getHtml(
            "labels_03", //obj.name
            { x: -1, y: 1, z: 2 }, //obj.pos
            0.05,//scale
            45,//obj.deg
            '.pinkVendor',//dom (querySelect)
        );
    }

    addHtmlDom_getHtml(objName, pos, scl, degrees, querySelect) {
        const Div = document.querySelector(querySelect);
        const divHTML = Div.outerHTML; // Get the element's full markup as a string
        // Use DOMParser to convert HTML strings into DOM elements!
        const parser = new DOMParser();
        const parsedHTML = parser.parseFromString(divHTML, 'text/html');

        const position = new THREE.Vector3(pos.x, pos.y, pos.z);
        const label = new CSS3DObject(parsedHTML.body.firstChild);

        label.name = objName;
        label.position.copy(position);
        
        const radians = (degrees * Math.PI) / 180;// Convert degrees to radians
        label.rotation.y = radians;

        label.scale.set(scl, scl, scl);
        this.sceneCSS.add(label);

        this.Dom.push({
            cssObj: label,
            cssDom: parsedHTML.body.firstChild
        });

        // Create a mesh with the same shape as the CSS3DObject.
        // For the hole to line up exactly, the plane should match the element's pixel size times the scale:
        // const geometry = new THREE.PlaneGeometry(Div.offsetWidth * scl, Div.offsetHeight * scl);
        const geometry = new THREE.PlaneGeometry(50, 50); // fixed size for testing
        const material = new THREE.MeshLambertMaterial({
            color: 0xffffff,
            transparent: true,
            blending: THREE.NoBlending,
            opacity: 0,
        });
        const mesh = new THREE.Mesh(geometry, material);
        mesh.position.set(pos.x, pos.y, pos.z);
        mesh.rotation.y = radians;
        this.scene.add(mesh);
    }
}

Here’s how my main.js creates the renderers:

class ThreeScene {
    constructor() {
        this.scene = new THREE.Scene();
        this.renderer = new THREE.WebGLRenderer({ alpha: true, antialias: !window.matchMedia("(pointer: coarse)").matches }); // alpha and antialias only take effect at construction time
        this.CSSrenderer = new CSS3DRenderer();
        // this.CSSrenderer.alpha = true; // set to true
        this.sceneCSS = new THREE.Scene();

        //......
    }
    async init() {
        await this.Loadings.init();
        this.createScene();
        this.creatSkybox();
        this.createLights();

        /* Create the renderers */
        this.createRenderer();
        this.CSSrender.init();

        this.createFloor();
        this.Camera.init();
        this.scene.add(this.camera);
        this.Buildings.init();
    }
    update() {
        this.Camera.update();
        this.Buildings.update();
        this.controls.update();
        this.creatSkyboxUpdate();
    }
    createScene() {
    }
    creatSkybox() {
    }
    creatSkyboxUpdate() {
    }
    drawSky() {
    }
    createFloor() {
    }
    createOcean() {
    }
    OceanUpdate() {
    }
    createLights() {
    }

    createRenderer() {
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        this.renderer.setClearColor(0xFFD1A4, 0.5); // setClearColor takes (color, alpha), not a THREE.Color built from two arguments
        this.renderer.shadowMap.enabled = true;

        // On mobile (coarse pointer), keep the pixel ratio at 1 and use cheaper shadows.
        if (!window.matchMedia("(pointer: coarse)").matches) {
            this.renderer.setPixelRatio(window.devicePixelRatio);
            this.renderer.shadowMap.type = THREE.PCFSoftShadowMap;
        } else {
            // this.renderer.shadowMap.type = THREE.BasicShadowMap;
            this.renderer.setPixelRatio(1);
            this.renderer.shadowMap.type = THREE.PCFShadowMap;
        }

        $(this.renderer.domElement).addClass("canvas3D");
        $(document.body).prepend(this.renderer.domElement);
    }


    onWindowResize(that) {
    }

}

window.addEventListener('load', function () {
    const app = new ThreeScene();
    app.init();
    animate();

    function animate() {
        requestAnimationFrame(animate);
        app.update();

        if (app.Render.finalComposer) {
            app.Render.finalComposer.render(); // EffectComposer.render() takes an optional deltaTime, not (scene, camera)
            app.CSSrenderer.render(app.sceneCSS, app.camera);
        }
    }
});

Despite these efforts, I’m still not achieving the desired occlusion effect. Am I missing something, or is my whole approach wrong?

Thank you for your assistance!

blending is the right choice. the raycasting approach just eats up cpu resources and looks bad, as it can’t achieve real occlusion where #2 can.

the trick boils down to creating a mesh plane that has the same size as the html. this plane cuts a hole into the canvas (via the blend mode). the white part in this image is the html background peeking through.

if you now render the html exactly at that position, underneath the canvas and css3d-transformed correctly, you have true occlusion.
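
a minimal sketch of such a hole-cutting plane (the element, scale, and cssObject names are placeholders; MeshBasicMaterial is used here so scene lighting can’t tint the hole):

const hole = new THREE.Mesh(
    new THREE.PlaneGeometry(element.offsetWidth * scale, element.offsetHeight * scale),
    new THREE.MeshBasicMaterial({
        color: 0x000000,
        opacity: 0,
        transparent: true,
        blending: THREE.NoBlending, // overwrite instead of blend: these pixels become fully transparent
        side: THREE.DoubleSide,
    })
);
hole.position.copy(cssObject.position);
hole.rotation.copy(cssObject.rotation);
scene.add(hole);

the webgl renderer also needs a transparent clear color (alpha) so the punched-out pixels actually reveal the css3d layer underneath.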

perhaps you can contact https://twitter.com/vis_prime. he is working on vanilla-drei, a project that makes drei react components easier to access for vanilla users.

3 Likes

Thank you for helping me confirm my thoughts!
Today, I took another look and realized that I hadn’t forced both renderers’ DOM elements to have the following settings:

{
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100vh;
}

So perhaps the positioning was incorrect.

However, after changing the HTML structure to:

<div id="container">
  <div id="css3d"></div>
  <div id="webgl"></div>
</div>

My CSS3D content doesn’t seem to render anymore. I might need to rethink this :smiling_face_with_tear:
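
(For reference, the stacking I believe this trick needs is roughly the following: the transparent WebGL canvas goes on top of the CSS3D layer so the hole shows through. The pointer-events part is my assumption, to keep the HTML underneath clickable.)

#css3d, #webgl {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100vh;
}

#webgl {
  z-index: 1;           /* WebGL canvas above the CSS3D layer */
  pointer-events: none; /* let clicks fall through to the HTML below */
}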

In case it helps, here are some old pens showing how:

Here it is applied in Lume:

Here’s the next problem we need to solve with more layering (the sphere is not visible behind the transparent element):

If your content uses only a subset of CSS features, a simple solution is to copy it to a texture via the SVG foreignObject trick (but this is slow for animation, since it requires uploading a new texture every frame).

We really need a custom shader instead of the simple blend mode trick, so we can make the lighting and shadow look better.

Note that if you want to support WebXR, regular CSS content will not transfer over to the headset (only WebGL, no CSS). For that, the texture trick will be needed (with the performance limitation in mind), or you can make the UI purely with WebGL content (or WebGPU once that is widely supported).

Curious to see what you come up with!

3 Likes

Thanks for the other examples you provided!
There are some ideas I really hadn’t thought of.

Oh! I think I’ve understood something now.
I had misunderstood this method: I thought that when an object set to NoBlending occupies the same position as the CSS3D content, the CSS3D content would be mapped onto that object.

Now I realize that setting an object to NoBlending is equivalent to cutting a hole in the WebGL rendering, and the CSS3D content just happens to sit within that hole.

But now I have another issue.
It seems my current approach hasn’t successfully created this ‘hole.’
The color of the ‘hole’ still appears to be affected by the scene’s ambient lighting, and when I look at it from behind, I can still see the shadows of the buildings.

However, I’ve looked at several examples, and it seems like all I need to do is set these properties, right?

const material = new THREE.MeshLambertMaterial({
  transparent: true,
  blending: THREE.NoBlending,
  opacity: 0,
  side: THREE.DoubleSide,
});

Or could some post-processing effect be interfering? I suspect so, especially since I’m using a bloom effect.

In my animate function’s update loop, I’m constantly executing these steps (sketched after the list):

  1. scene.traverse(darkenNonBloomed): traverse all objects in the scene and temporarily replace the materials of objects that don’t have the bloom effect with a black material, so they appear dark in the bloom channel.
  2. Render the bloom channel with the bloomComposer.
  3. Restore the previously replaced materials to their originals, so those objects appear normal again.
  4. Finally, combine the content of the entire scene with the rendered bloom result.
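
For reference, my loop follows the pattern of the official three.js selective-bloom example, roughly like this (bloomLayer, bloomComposer, and finalComposer are set up elsewhere):

const darkMaterial = new THREE.MeshBasicMaterial({ color: 0x000000 });
const materials = {};

function darkenNonBloomed(obj) {
    if (obj.isMesh && bloomLayer.test(obj.layers) === false) {
        materials[obj.uuid] = obj.material; // remember the original material
        obj.material = darkMaterial;        // draw it black in the bloom pass
    }
}

function restoreMaterial(obj) {
    if (materials[obj.uuid]) {
        obj.material = materials[obj.uuid];
        delete materials[obj.uuid];
    }
}

function render() {
    scene.traverse(darkenNonBloomed);
    bloomComposer.render();          // step 2: render only the glowing objects
    scene.traverse(restoreMaterial); // step 3: put the originals back
    finalComposer.render();          // step 4: composite bloom over the full scene
}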

Here is my work:
https://gotoo.co/demo/elizabeth/gotoo5th3D/HtmlWith3D/

(Currently, my HTML elements render in the right place, but the hole isn’t being cut.)

And here is my code:
https://github.com/ginnychen19/gotoo5th3D/tree/main/HtmlWith3D

1 Like

Wow, I realized that it really was a post-processing problem. Once I removed all the post-processing, the cutout worked fine!

1 Like

Nice, you got it! Yep, it just cuts a hole. Indeed, if the post-processing pass is not aware of the hole and applies colors over it, it will mess with the effect. There are ways around that, e.g. rendering the NoBlending object on top of the post-processed content, but then the effect will not apply to the HTML/CSS content. So there are difficulties to overcome when trying to mix the CSS layer with the WebGL layer.

If your HTML content uses a limited set of CSS properties, then what you can do is render the HTML to a texture first, then use that texture on a Mesh with a PlaneGeometry, and the content will fully interact with lighting, transparency, and post-processing effects.

To do that,

(1) first put the HTML inside of a <foreignObject> element inside of a <svg> element, like so:
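
A minimal version of that markup looks like this (the inner div needs the XHTML namespace declaration once the SVG is serialized and loaded as an image; the sizes and styles are placeholders):

<svg xmlns="http://www.w3.org/2000/svg" width="256" height="128">
  <foreignObject width="100%" height="100%">
    <div xmlns="http://www.w3.org/1999/xhtml" style="background: pink; font: 16px sans-serif;">
      Hello from HTML inside SVG
    </div>
  </foreignObject>
</svg>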

once you have that, then

(2) convert the SVG to pixels. Clues are here:

^ You might try rendering the SVG to a 2D canvas directly.

More SVG-to-canvas considerations here:

Finally, once you have the SVG rendered as pixels,

(3) Use a THREE.CanvasTexture to turn the canvas that contains the SVG pixels into a texture, and assign it to your material’s .map property.
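
Putting (2) and (3) together, a rough sketch (svgElement is the <svg> from step 1; the canvas and plane sizes are placeholders):

// (2) Serialize the SVG and rasterize it into a 2D canvas via an <img>.
const svgString = new XMLSerializer().serializeToString(svgElement);
const url = 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svgString);

const canvas = document.createElement('canvas');
canvas.width = 256;
canvas.height = 128;
const ctx = canvas.getContext('2d');

// (3) Use the canvas as a texture on a regular mesh.
const texture = new THREE.CanvasTexture(canvas);
const plane = new THREE.Mesh(
    new THREE.PlaneGeometry(2, 1),
    new THREE.MeshStandardMaterial({ map: texture })
);
scene.add(plane);

const img = new Image();
img.onload = () => {
    ctx.drawImage(img, 0, 0);
    texture.needsUpdate = true; // upload the fresh pixels to the GPU
};
img.src = url;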

Advantage of SVG-to-canvas:

  • the result lives inside the WebGL rendering rather than punching a hole in it, so it looks like a native object in the 3D scene with complete lighting and shading.

Disadvantage of SVG-to-canvas:

  • if you want to use CSS animations, the hole approach is fast but more visually limited with respect to WebGL effects, while the CanvasTexture approach is slow because every frame you have to update the canvas with the new pixels and set canvasTexture.needsUpdate = true, which means Three.js has to upload the texture to the GPU again every frame.

So it all depends on the use case and what the limits are for the various visual possibilities. You can animate a CanvasTexture as long as it fits within your budget: draw fewer things and you can spend the time on a simpler scene with CSS animations piped through canvas, or pipe the initial CSS content through only once without animation and keep it static (also missing out on certain CSS properties) so you can draw more other things.

2 Likes

the hole is quite un-intrusive and fast. the html is live, it can run animations, it has interaction, etc. i really don’t see many downsides to it at all. it behaves like an actual part of the scene: it even receives shadows, reflects the environment map, etc.

2 Likes

Yeah, but as the OP found, post-processing effects do not see the CSS content in the hole. So not all rendering features are available with the hole technique (fast); they only become available when importing the CSS content (with limitations) as a texture into WebGL (slow). Pros and cons; it depends on the needs.

1 Like

That’s interesting, I’ve not seen SVG-to-canvas used in this way before. Is it similar to the way html2canvas converts HTML content to a canvas? That’s typically my go-to when I need to create a CanvasTexture from HTML elements, and it’s pretty rapid.

1 Like

Yeah, if you specify the foreignObjectRendering: true option for html2canvas, it renders with that SVG trick, which is a lot less code than foreignObjectRendering: false, where it reads the CSS properties and re-implements a more limited subset of them as canvas 2D draw calls.
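
For example, something like this (a hypothetical usage sketch; the selector and material are placeholders):

html2canvas(document.querySelector('.pinkVendor'), { foreignObjectRendering: true })
    .then((canvas) => {
        const texture = new THREE.CanvasTexture(canvas);
        material.map = texture;
        material.needsUpdate = true; // recompile the material with the new map
    });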

2 Likes

Oh my, thank you for such detailed instructions this time!!

This is just a simple technical experiment; my ultimate goal is to embed other web works into it and, preferably, maintain interactivity!

I’ve seen these works before =>

This one uses glowing effects and interactive interfaces!
So at the time, I was wondering whether I had missed something that prevented me from achieving it.
In that case, it seems that in the future I should consider using your method, or directly create a plane, apply textures to it, and then use raycasting to handle the interactivity.

Additionally, I’ve seen this one before =>

I think it might be possible to use the cutout method directly in projects that don’t use glow effects.

Henry’s is indeed using the cutout method; that’s neat. Jesse’s is using textures with fade transitions between them, 100% three.js.

Yes indeed, the glow effect (post-processing) won’t interact with the HTML content. But transferring it to a texture via SVG will allow the glow!

1 Like

I’m really grateful for your diligent organization of these knowledge points :face_holding_back_tears: :face_holding_back_tears:

I had discussions with my colleagues before about how to achieve these effects.
Initially, I thought that if I used the post-processing written by this person, maybe I could avoid this issue.

At that time, I wasn’t quite sure whether the first example was 100% pure three.js, but now that it’s confirmed, I won’t be so puzzled anymore!

I also want to express my gratitude to everyone who participated in the discussion :smile:
It has given me a deeper understanding of the occlusion between CSS3D Renderer and WebGL Renderer :star_struck: :star_struck: :star_struck:

1 Like

is there a demo with something like, say, a slider or something? anything with interaction, or a css keyframe animation running. just to see if it’s fast enough for simple interaction. would love to try the svg thing. or even add it to drei/html occlude blendings.

I’m not entirely sure whether this counts as a similar case, but in a previous project where I needed to switch between day and night modes, I created a sky background using a canvas.

My sky background uses a canvas =>

// Define gradient colors and moon color
const initialColors = {
    colorTop: '#000000',
    colorBottom: '#1f1997'
};

this.skyColors = {
    colorTop: initialColors.colorTop,
    colorBottom: initialColors.colorBottom
};

// Create gradient
this.gradient = this.skyCtx.createLinearGradient(0, 0, 0, this.skyCanvas.height);
this.gradient.addColorStop(0, this.skyColors.colorTop);
this.gradient.addColorStop(1, this.skyColors.colorBottom);
this.skyCtx.fillStyle = this.gradient;
this.skyCtx.fillRect(0, 0, this.skyCanvas.width, this.skyCanvas.height);

// Update sky texture
const skyTexture = new THREE.CanvasTexture(this.skyCanvas);
this.scene.background = skyTexture;
// Function to switch to day mode
switchToDay() {
    gsap.to(this.skyColors, {
        colorTop: "#1e90ff",
        colorBottom: "#87ceeb",
        duration: 2,
        onUpdate: () => {
            this.skyCtx.clearRect(0, 0, this.skyCanvas.width, this.skyCanvas.height); // Clear canvas every time
            this.drawSky.bind(this);
            this.drawSky();
        },
        onComplete: () => {
            this.clearSkyMaterials.bind(this);
            this.clearSkyMaterials();
        }
    });
}

// Function to switch to night mode
switchToNight() {
    gsap.to(this.skyColors, {
        colorTop: "#000000",
        colorBottom: "#1f1997",
        duration: 2,
        onUpdate: () => {
            this.skyCtx.clearRect(0, 0, this.skyCanvas.width, this.skyCanvas.height); // Clear canvas every time
            this.drawSky.bind(this);
            this.drawSky();
        },
        onComplete: () => {
            this.clearSkyMaterials.bind(this);
            this.clearSkyMaterials();
        }
    });
}

When animations are needed, I push each generated canvas material into an array. Once I’m done, I dispose of all the old materials at once.

clearSkyMaterials() {
    // Loop in reverse so that splice() doesn’t disrupt the remaining indices
    for (let i = this.skyMaterials.length - 1; i >= 0; i--) {
        this.skyMaterials[i].dispose();
        this.skyMaterials.splice(i, 1);
    }
}

It appears to work well; I think the key difference is that your case needs to render DOM elements into canvases.

https://gotoo.co/demo/elizabeth/gotoo5th3D/cameraGSAP/

I made an example, but so far:

  • it only works in Chrome
  • Firefox is not rendering into the canvases yet
  • if you update it to r155+, it goes black (probably because of the new physically correct lighting defaults, with units now being actual meters)
  • only the actual DOM is interactive; if we want to interact in WebGL, we need to listen to canvas events and emulate the interaction on the real DOM (not sure if that’s possible on an input range slider yet, though)
  • the img.src loading needs a little more care, e.g. in case the input event fires twice before the next frame (e.g. in Safari), we want to avoid setting texture.needsUpdate to true while a new src is still loading, or the texture will be incomplete (a Three.js warning in the console, in Firefox); see the sketch after this list
  • hover styles do not transfer into WebGL, because the trick is to serialize the input inside the SVG (including its CSS styles, or it will render unstyled), and interaction states like hover cannot be included in the HTML markup. We’d need to create custom style classes, and on hover (that is, if we can even emulate the events from canvas events) we’d need to apply classes to the input so that the style state gets serialized
  • CSS animations should be doable; just use the serialization trick on every frame during the CSS animation. It may get expensive to re-rasterize and serialize a bunch of separately animated DOM surfaces at the same time. The texture upload takes 0.38 ms on my MacBook Air, so it won’t take many of those at once to eat up a whole animation frame!
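
The src-loading guard mentioned above might look roughly like this (a sketch; the names are placeholders):

let svgLoading = false;

function updateTextureFromSvg(svgUrl, ctx, texture) {
    if (svgLoading) return; // a previous src is still loading, skip this update
    svgLoading = true;
    const img = new Image();
    img.onload = () => {
        ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
        ctx.drawImage(img, 0, 0);
        texture.needsUpdate = true; // safe now: the image is complete
        svgLoading = false;
    };
    img.src = svgUrl;
}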

Looks good!

Yeah, canvas is a DOM element, but in your case the canvas does not have to be appended into the DOM tree.

A couple tips:

  1. The lines this.drawSky.bind(this); don’t do anything: bind returns a new function without modifying the original, and the result is discarded here. You would want this.drawSky = this.drawSky.bind(this) outside of the update loop, since you only need to bind once. However, since the demo works, apparently you solved that elsewhere, so you can simply delete those lines. :slight_smile:

  2. This can be faster without the reverse loop:

clearSkyMaterials() {
    for (let i = 0, l = this.skyMaterials.length; i < l; i++) {
        this.skyMaterials[i].dispose();
    }
    this.skyMaterials.length = 0;
}

And taking advantage of not going in reverse, you can even make it easier to read like this:

clearSkyMaterials() {
    for (const mat of this.skyMaterials) mat.dispose();
    this.skyMaterials.length = 0;
}

1 Like