Shared workers and webgpu for multiprojection

I would like to use three.js for an exhibition in a multi-projected room. I know that web technologies are probably not the best option and I have experience with more suitable software, but I’d really like to explore three.js usage for this kind of project.

So, the idea is to have different browser windows, each rendering part of the overall projection. The output should be a single sketch visualized seamlessly across the windows. Since I can choose the browser, I’d like to take advantage of WebGPU and the compute pipeline.

More specifically, the idea is to have a shared worker do all the processing for the overall scene (e.g. computing and updating the movement and appearance of meshes, moving the lights, etc.), and have each window render a portion of the sketch (probably using a separate camera, or a view offset of the same camera, for each one).
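For the per-window part, three.js’s `PerspectiveCamera.setViewOffset` maps well onto this; a minimal sketch, assuming a simple side-by-side layout (the tile sizes and window-index scheme below are illustrative, not from the original post):

```javascript
// Assumed layout: N windows side by side forming one wide virtual canvas.
// Each window renders only its horizontal slice of the shared camera frustum.
function viewOffsetFor(windowIndex, windowCount, tileWidth, tileHeight) {
  return {
    fullWidth: tileWidth * windowCount, // size of the whole virtual canvas
    fullHeight: tileHeight,
    x: tileWidth * windowIndex,         // this window's slice origin
    y: 0,
    width: tileWidth,
    height: tileHeight,
  };
}

// In each window (index read e.g. from the URL query string):
// const o = viewOffsetFor(index, 4, 1920, 1080);
// camera.setViewOffset(o.fullWidth, o.fullHeight, o.x, o.y, o.width, o.height);
```

All windows keep an identical camera transform; only the view offset differs, so the slices line up into one seamless image.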

Would this be feasible? Do you see any better approach? Is there any example I can refer to?

Thanks

bump! No ideas anyone?

This is mostly a gut reaction — I don’t have experience doing what you’re describing. But because WebGPU and WebGL shaders run on the (shared) GPU, I think that trying to render the same scene from multiple Web Workers may be a lot of work, for limited benefit. Also, each Web Worker will require its own GL context, essentially duplicating the scene data on GPU memory, and potentially hurting performance more broadly.

My guess would be that — if you’re going to do this with a single computer — then using a single renderer and a beefy GPU would be the way to go. Any non-essential work (like generating procedural geometry) can still be deferred to Web Workers.
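For example, procedural vertex data can be built in a plain Worker and handed back as a transferable buffer, so the main thread only pays for uploading it. A sketch (the `buildGrid` helper and message shape are made up for illustration):

```javascript
// worker.js — generate vertex data off the main thread (sketch).
// buildGrid is a hypothetical helper producing a flat xyz position array
// for a size × size grid on the ground plane.
function buildGrid(size) {
  const positions = new Float32Array(size * size * 3);
  for (let i = 0; i < size * size; i++) {
    positions[i * 3 + 0] = i % size;            // x
    positions[i * 3 + 1] = 0;                   // y
    positions[i * 3 + 2] = Math.floor(i / size); // z
  }
  return positions;
}

// Browser-only wiring:
// self.onmessage = (e) => {
//   const positions = buildGrid(e.data.size);
//   // Transfer the underlying buffer instead of copying it:
//   self.postMessage(positions, [positions.buffer]);
// };
//
// Main thread:
// geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
```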


Thanks @donmccurdy, I get your point. I still have to study SharedWorkers a bit more; I thought one could basically be a single process and hence have its own unique GL context. If, as you suggest, multiple web workers are spawned and each one creates a GL context, then I agree with you: this would not be efficient.

Regarding using a single renderer, it was my first choice. The problem is the setup: I have one PC with two GeForce RTX 3070s and an 11th-gen i9, and 7 projectors:

  • 2 for the floor
  • 2 for the east wall
  • 2 for the west wall
  • 1 for the north wall

To my great disappointment, I discovered that RTX cards do not support Mosaic, only “NVIDIA Surround”, which lets me group a maximum of three projectors into a single desktop. Hence my idea of opening several browser windows and managing the scene through a shared worker.

I guess that I’ll have to add some hardware in any case to merge all the screens in a single desktop, but it was an interesting use case.

“Would this be feasible?” Let’s start with a resounding “yeah, you can do pretty wild things these days with web technology”.

Now, if you want to overlap multiple renderers, you will be tempted to use multiple canvases in a single element, or multiple elements in a single page, and drive the synchronization between them using a single scene. These are good starting points. In practice, though, projectors used for video mapping require a lot of resolution, so you might find this renderer approach below resolution specs.
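The multiple-elements approach can be driven by one renderer using the scissor technique from the official three.js “multiple elements” example; roughly like this (the region list and names are assumptions for illustration):

```javascript
// One renderer, one canvas, several viewports: render the same scene once
// per region, each region with its own camera.
function renderRegions(renderer, scene, regions) {
  renderer.setScissorTest(true); // clip each draw to its scissor rectangle
  for (const { camera, x, y, width, height } of regions) {
    renderer.setViewport(x, y, width, height);
    renderer.setScissor(x, y, width, height);
    renderer.render(scene, camera);
  }
}

// Usage sketch:
// renderRegions(renderer, scene, [
//   { camera: floorCamera, x: 0,    y: 0, width: 1920, height: 1080 },
//   { camera: eastCamera,  x: 1920, y: 0, width: 1920, height: 1080 },
// ]);
```

Since everything goes through one GL context, scene data lives on the GPU only once; the cost is that one canvas must span all the outputs.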

I’ve participated in a couple of projects involving three.js for projection mapping on buildings. So far the best strategy has been setting up an ‘extended desktop’ for a horizontal array of ‘screens’ (actual projectors), and then routing one browser (Chrome) window (one window, one virtual camera) to each projector output. I don’t know if this is the best approach, but it just works.

Don’t have a clue about shared workers, maybe you can start this endeavour :muscle:

I don’t know if the output you’re going for is deterministic, meaning it shows the exact same output every time you load up your scene. If that’s the case, you could just open multiple windows and use a shared worker to manage animation states and whatnot.

I think it would be more feasible to think of the shared worker as an orchestrator or director of sorts that ‘tells’ all open browser windows what to do (enable animations, set camera positions, etc.) so everything runs in sync.
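A SharedWorker used that way might look roughly like this (the state fields and tick interval are made up for illustration; rendering stays in each window):

```javascript
// shared-worker.js — orchestrator sketch: every window that connects
// registers its MessagePort; the worker owns one authoritative state
// and broadcasts it so all windows animate in lockstep.
const ports = [];

function broadcast(state) {
  for (const port of ports) port.postMessage(state);
}

// Browser-only wiring (runs inside the SharedWorker):
// onconnect = (e) => {
//   const port = e.ports[0];
//   ports.push(port);
//   port.start();
// };
// setInterval(() => broadcast({ type: 'tick', time: Date.now() }), 16);
//
// In each window:
// const worker = new SharedWorker('shared-worker.js');
// worker.port.onmessage = (e) => applyState(e.data); // applyState: your own code
```

The key property is that simulation state has a single owner, so windows can never drift apart by integrating physics independently.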

@Antonio I’m glad to hear that someone did this in actual projects. Can I ask how you synced the browser windows? WebSockets?

@Haorld unfortunately the output is not deterministic: I have 3 RealSense depth cameras that track the position of people inside the room, and the visuals should change accordingly. And yes, I’ll try to implement exactly the scenario you propose.
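As a side note on syncing: for same-origin windows on one machine there is also the BroadcastChannel API, which needs no server at all. A minimal sketch (the channel name and message shape are arbitrary choices of mine):

```javascript
// One window acts as the clock master and broadcasts the shared time;
// the others follow. BroadcastChannel delivers a message to every
// same-origin browsing context listening on the same channel name.
function makeTick(now) {
  return { type: 'tick', time: now };
}

// Master window:
// const channel = new BroadcastChannel('projection-sync');
// setInterval(() => channel.postMessage(makeTick(performance.now())), 16);
//
// Follower windows:
// const channel = new BroadcastChannel('projection-sync');
// channel.onmessage = (e) => {
//   if (e.data.type === 'tick') sharedTime = e.data.time;
// };
```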