How do I use YouTube IFrame with WebGLRenderer?

Up until now I’ve been putting video objects in my ThreeJS scene by building HTML5 video elements and then assigning them to the surface of a cube by passing the video node to the MeshBasicMaterial constructor:

    self.videoNode = document.createElement('video');
    self.videoTexture = new THREE.VideoTexture(self.videoNode);
    cubeMaterialArray.push(new THREE.MeshBasicMaterial({map: self.videoTexture}));

My ThreeJS scene uses pointerlock.js, so the whole scene is based on using a WebGLRenderer object for rendering:

    assignThreeJsRenderer(new THREE.WebGLRenderer( { antialias: true }) );

I'm switching over to using a YouTube video iframe instead of HTML5 video elements. I have a working sample that I found on CodePen, but that sample uses a CSS3DRenderer, so it creates a DIV to hold the YouTube iframe element and passes that DIV to an instance of CSS3DObject via that object's constructor:

    this.hostDiv = document.createElement( 'div' );
    this.youTubeIFrameObj = document.createElement( 'iframe' );
    this.hostDiv.appendChild( this.youTubeIFrameObj );
    this.threeJsObject = new CSS3DObject( this.hostDiv );

My problem is that I don't know how to integrate the two approaches. What do I pass to the VideoTexture constructor, i.e. the object that I then pass to the MeshBasicMaterial constructor? I believe VideoTexture expects an HTML5 video element and not a DIV containing an iframe. Can someone help me connect the dots here?

That is correct. The argument has to be an instance of HTMLVideoElement. You usually define the video element with its properties (e.g. playsinline or muted) in the DOM and then use document.getElementById() to query it. The result is then passed to the constructor of THREE.VideoTexture.
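A minimal sketch of that pattern, assuming the page already contains a video element with the hypothetical id `my-video`:

```javascript
// Assumes the page contains something like:
// <video id="my-video" src="movie.mp4" playsinline muted loop></video>
const videoNode = document.getElementById('my-video');
videoNode.play();

// VideoTexture samples the video element each frame it is rendered
const videoTexture = new THREE.VideoTexture(videoNode);
const material = new THREE.MeshBasicMaterial({ map: videoTexture });
```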

Right, but when you use the YouTube API, things work differently. You pass the YT API object the ID of a DIV element and it builds a YouTube video iframe for you in that DIV, just like the way ThreeJS builds its canvas in the DIV whose ID you give it. So I don't create the iframe myself, or its contents. The YT API does that.
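For reference, the pattern I'm describing looks roughly like this (the div id and video id below are placeholders):

```javascript
// Assumes the IFrame Player API script has been loaded:
//   <script src="https://www.youtube.com/iframe_api"></script>
// and that the page contains <div id="player"></div>.
// The API calls this global function once it is ready.
function onYouTubeIframeAPIReady() {
  // YT.Player replaces the div with an <iframe> that it builds itself
  const player = new YT.Player('player', {
    width: 640,
    height: 360,
    videoId: 'VIDEO_ID_HERE', // placeholder video ID
    events: {
      onReady: (event) => event.target.playVideo()
    }
  });
}
```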

I'll set a breakpoint in the code and inspect the iframe element the YouTube API creates. Perhaps I'll find an HTMLVideoElement among its child nodes that I can pass to VideoTexture. However, I'm already worried that the contents of an iframe that is hosted on another web site, YouTube in this case, will not be accessible to my code, and if that's true, I'm in trouble. I'm also in trouble if the iframe does not contain an HTMLVideoElement. If you have any creative ideas on how to make this work, I'd appreciate hearing them…
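A quick way to confirm the cross-origin worry from the console, a sketch assuming the YT-created iframe is the first one on the page:

```javascript
const iframe = document.querySelector('iframe');

// For a cross-origin iframe, contentDocument is null (and in some
// browsers touching the inner document throws a SecurityError), so
// the player's internal <video> element is unreachable from our code.
if (iframe.contentDocument) {
  const innerVideo = iframe.contentDocument.querySelector('video');
  console.log('inner video element:', innerVideo);
} else {
  console.log('cross-origin iframe: contents are inaccessible');
}
```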

Well, I may be out of luck:

This thread seems to indicate my current WebGLRenderer-based scene will not convert to a scene that uses a CSS3DRenderer because of depth sorting and perhaps other issues. I saw the idea about "leaving a hole" in the WebGL content and putting CSS3D-rendered content there, but my videos are displayed on the surfaces of cubes, sometimes rotated, so that won't work.
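For anyone curious, the "hole" technique usually means stacking the CSS3DRenderer's DOM element behind a transparent WebGL canvas and drawing an invisible plane in the WebGL scene where the CSS3D content sits. A rough sketch of the idea:

```javascript
// CSS3DRenderer output sits behind the WebGL canvas in the page
cssRenderer.domElement.style.position = 'absolute';
webglRenderer.domElement.style.position = 'absolute';
webglRenderer.setClearColor(0x000000, 0); // fully transparent clear

// A plane with a non-blending, zero-opacity material still writes to
// the depth buffer, so it occludes WebGL objects behind it while
// letting the CSS3D content underneath show through the canvas.
const holeMaterial = new THREE.MeshBasicMaterial({
  opacity: 0,
  transparent: true,
  blending: THREE.NoBlending
});
const holePlane = new THREE.Mesh(new THREE.PlaneGeometry(640, 360), holeMaterial);
```

The catch mentioned above still applies: the two renderers never share a depth buffer, so this only looks right for flat, screen-like surfaces, not for video mapped onto rotated cube faces.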

So it looks like I have to stick with self-hosting videos. That's really too bad, since it makes for a much harder sales pitch to my users: telling them they have to upload a bunch of videos instead of just dropping in a bunch of YouTube URLs.

I've been looking into this too, for my MMO game, to stream videos onto cinema screens or TV devices (more as a side feature than an important one). There was an API a while ago for getting the original video files, but YT removed it.

Apparently there still is an API that allows you to retrieve them, which is also what YT video-grabber services use. However, I'm not sure in what way, or with what kind of account (rather likely a paid one), this is possible. From a quick, rough look at the sources of such tools available on GitHub, there seems to be an official API, but I can't guarantee it.

There seems to be no larger alternative platform providing stream or video sources. For legal reasons, I decided to go with custom-hosted files as well as selected external streams.

For user-submitted content, however, a platform such as YT as the provider would be more legally secure, since apart from hosting the videos, they also check the content for copyright issues.

Seems like the CSS3DRenderer is the only way right now. Something like this:
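A rough sketch of embedding a YouTube iframe via CSS3DObject, along the lines of the three.js css3d_youtube example (the video ID is a placeholder and the helper name is made up here):

```javascript
import { CSS3DObject } from 'three/addons/renderers/CSS3DRenderer.js';

// Builds a CSS3D plane hosting a YouTube embed iframe.
function makeYouTubePlane(videoId, width, height) {
  const div = document.createElement('div');
  div.style.width = width + 'px';
  div.style.height = height + 'px';

  const iframe = document.createElement('iframe');
  iframe.style.width = width + 'px';
  iframe.style.height = height + 'px';
  iframe.style.border = '0px';
  iframe.src = 'https://www.youtube.com/embed/' + videoId;
  div.appendChild(iframe);

  return new CSS3DObject(div);
}
```

The returned object is added to a scene rendered by a CSS3DRenderer, not by the WebGLRenderer, which is exactly the integration problem discussed above.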

Is there a document that describes the differences, pros, and cons of the WebGLRenderer vs. the CSS3DRenderer, so I could tell exactly what I would lose if I converted my WebGLRenderer scene to one that uses CSS3DRenderer?