Multiple video textures in a PlaneGeometry grid

I don’t have working code yet, but is there a way to display multiple video textures in a grid layout, dynamically changing their position and size? There is a possibility that TextGeometries need to display over each video, so both geometries would need to be added to a separate group per video, and that group positioned.

The requirement is, in fact, many WebRTC streams rendering to different video textures in custom grid layouts, so that the final canvas stream can be recorded.

I have done something similar mixing camera and screen using raw WebGL, but I might have to use three.js, as positioning the video texture (with an alpha mask for background removal) over the top of an image texture proved to be complicated.


Is it possible to apply multiple VideoTextures to a single material / mesh? Likely. Is it necessary? Unlikely.

Since you want each of the grid cells to be individually distinguishable (to add text, masks, etc.), wouldn’t it be easier to create a single Group (it’s just an empty Object3D that allows you to change the position and size of all its children at once), put 4 PlaneGeometries inside, and then apply a separate video material to each one of them? Doing it all on a single geometry sounds like a confident step towards great over-complication :eyes:
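Something along these lines, as a minimal sketch (the function name and the way the video element arrives are just placeholders, not taken from your code):

import { Group, Mesh, MeshBasicMaterial, PlaneGeometry, VideoTexture } from 'three';

// One group per video, so text / mask meshes can be layered into the same
// group and the whole cell can be moved or scaled as one unit.
function createVideoCell( videoElement ) {
  const group = new Group();

  const texture = new VideoTexture( videoElement );
  const material = new MeshBasicMaterial( { map: texture } );
  const plane = new Mesh( new PlaneGeometry( 1, 1 ), material );

  group.add( plane );
  // add TextGeometry meshes / overlays here - they will follow the group

  return group;
}

// usage: scene.add( createVideoCell( someVideoElement ) );
// then position / scale each group to place it in the grid.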


I’m sorry, my question was confusing. I will add a PlaneGeometry for each video texture inside a group, to overlay individual text. I’m trying to figure out how to dynamically size each group and position it into a grid depending on how many videos are added. I’ve added sizing, but it needs to change depending on the number of videos, and they need to be positioned properly. Here is a proof-of-concept fiddle. It needs to work like CSS3 flexbox grids; the purpose is to replicate the flexbox grid view of video players in a canvas, to record as one stream.

https://jsfiddle.net/danrossi303/756pL1ck/2/

Here is an update with grid position code. The groups won’t scale or position into rows and columns. If you click to add twice, it should show the cameras side by side.

https://jsfiddle.net/danrossi303/756pL1ck/10/

Hey @danrossi,
I’m not really sure what controls you would be using, but here are the basics that you will need.

If you will be using OrbitControls (which is my guess), remember to update them and their target once the video has been added.

// recentre the controls on everything currently in the scene
const sceneBox = new Box3().setFromObject( this.scene );
sceneBox.getCenter( this.controls.target );

this.controls.update();

Next thing you’d like to consider is the width and height of the viewport.
There is a lovely snippet written by one of the three.js contributors (@looeee):

const visibleHeightAtZDepth = ( depth, camera ) => {
  // compensate for cameras not positioned at z=0
  const cameraOffset = camera.position.z;
  if ( depth < cameraOffset ) depth -= cameraOffset;
  else depth += cameraOffset;

  // vertical fov in radians
  const vFOV = camera.fov * Math.PI / 180; 

  // Math.abs to ensure the result is always positive
  return 2 * Math.tan( vFOV / 2 ) * Math.abs( depth );
};

const visibleWidthAtZDepth = ( depth, camera ) => {
  const height = visibleHeightAtZDepth( depth, camera );
  return height * camera.aspect;
};

The next steps, i.e. the calculations and functions, are up to you, but I’d use these two as a base.
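For example, here is a rough sketch of how they could drive a grid layout, assuming the planes sit at z = 0 (so depth is 0), each plane is 1 x 1 unit so it can simply be scaled to its cell, and the column count is passed in:

// Rough sketch only - lays the groups out in rows of `columns`,
// filling the area visible to the camera at the given depth.
function layoutGrid( groups, camera, columns, depth = 0 ) {
  const viewWidth = visibleWidthAtZDepth( depth, camera );
  const viewHeight = visibleHeightAtZDepth( depth, camera );

  const rows = Math.ceil( groups.length / columns );
  const cellWidth = viewWidth / columns;
  const cellHeight = viewHeight / rows;

  groups.forEach( ( group, i ) => {
    const col = i % columns;
    const row = Math.floor( i / columns );

    // centre of each cell, measured from the top-left of the visible area
    group.position.x = -viewWidth / 2 + cellWidth * ( col + 0.5 );
    group.position.y = viewHeight / 2 - cellHeight * ( row + 0.5 );

    // scale a 1 x 1 plane to fill its cell
    group.scale.set( cellWidth, cellHeight, 1 );
  } );
}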

There are no controls. It’s just a 2D display, but in a tiled grid. This requirement is a virtual mixer of multiple WebRTC streams into one canvas for re-streaming. I may have to use three.js for the screen and camera mixer, as I was unable to position the camera properly using raw WebGL shaders. My shader solution uses the mix function to blend the screen as the background texture with a scaled video texture, but it’s stuck in one corner. It might need something more involved, manipulating the UVs.
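Roughly, the direction I think the UV manipulation needs to go is something like this (just a sketch of the idea; the uniform names are made up, not my actual code):

// Sketch: scale / offset the overlay's uv so the camera can be placed
// anywhere over the background, instead of being pinned to one corner.
// Would be used with a ShaderMaterial on a full-screen quad.
const fragmentShader = `
  uniform sampler2D uBackground; // screen share
  uniform sampler2D uVideo;      // camera with alpha mask
  uniform vec2 uOffset;          // bottom-left corner of the overlay, in uv space
  uniform vec2 uScale;           // overlay size, in uv space
  varying vec2 vUv;

  void main() {
    vec4 bg = texture2D( uBackground, vUv );

    // remap the full-quad uv into the overlay's local uv
    vec2 localUv = ( vUv - uOffset ) / uScale;
    vec4 cam = texture2D( uVideo, clamp( localUv, 0.0, 1.0 ) );

    // only blend inside the overlay rectangle; cam.a carries the mask
    float inside = step( 0.0, localUv.x ) * step( localUv.x, 1.0 )
                 * step( 0.0, localUv.y ) * step( localUv.y, 1.0 );

    gl_FragColor = mix( bg, cam, cam.a * inside );
  }
`;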

Depending on how many are added, I need to dynamically size each video texture group and position it to wrap within the viewport dimensions. So if two are added, they need to display side by side. If 6 cameras are added, the code is supposed to work like flexbox and resize and wrap them 3 per row.
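In other words, the wrapping rule I’m after is roughly:

// e.g. 2 videos -> 1 row of 2, 6 videos -> 2 rows of 3
function gridShape( count, maxPerRow = 3 ) {
  const columns = Math.min( count, maxPerRow );
  const rows = Math.ceil( count / maxPerRow );
  return { columns, rows };
}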

The update adds new groups, but they aren’t scaling to the set width / height and aren’t positioning yet.

Are these methods to obtain the viewport dimensions to calculate scaling? I am obviously applying the pixel dimensions of the canvas. I can’t seem to figure out how to apply these. What is the depth?

I need to try and do something like this, but not using React. Just a simple method to place objects into a grid. It seems trivial, but there isn’t much out there.

I found an amazing project that does what I need. It uses Yoga for the flex positioning internally. It also seems their text renderer is better than the other SDF renderer I am using, and the project has performance optimisations for three.js as well.

https://troika-examples.netlify.app/#flexbox