you can query the XR stuff in threejs to find out which eye is currently being rendered, and switch your frame based on that.
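A sketch of that eye query, assuming recent three.js WebXR conventions where `renderer.xr.getCamera().cameras[0]` is the left-eye camera and `cameras[1]` the right, and where a mesh's `onBeforeRender` is invoked once per eye with the per-eye sub-camera. The texture and mesh names are placeholders:

```javascript
// Decide which texture to show for the eye currently being drawn.
// Kept as a pure function so the decision logic is easy to test:
// cameras[1] is the right eye by three.js convention; default to left.
function pickEyeTexture(camera, xrSubCameras, leftTexture, rightTexture) {
  return camera === xrSubCameras[1] ? rightTexture : leftTexture;
}

// Hypothetical three.js wiring (leftTex/rightTex would be your streamed frames):
//   screenMesh.onBeforeRender = (renderer, scene, camera) => {
//     const xrCam = renderer.xr.getCamera();
//     screenMesh.material.map = pickEyeTexture(camera, xrCam.cameras, leftTex, rightTex);
//   };
```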
that should be pretty straightforward… streaming camera.position and camera.quaternion…
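The pose stream itself is tiny, seven floats per update. A minimal sketch of the message format; the field names and `socket` usage are assumptions for illustration, not an established protocol:

```javascript
// Pack a three.js-style pose (position + quaternion) into a compact JSON
// message for a WebSocket. The field names are made up for this sketch.
function encodePose(position, quaternion, timestamp) {
  return JSON.stringify({
    t: timestamp,                                  // ms, useful for latency measurement
    p: [position.x, position.y, position.z],       // camera.position
    q: [quaternion.x, quaternion.y, quaternion.z, quaternion.w] // camera.quaternion
  });
}

// Server-side counterpart: recover the pose from the wire format.
function decodePose(message) {
  const { t, p, q } = JSON.parse(message);
  return { timestamp: t, position: p, quaternion: q };
}

// Hypothetical usage inside the render loop:
//   socket.send(encodePose(camera.position, camera.quaternion, performance.now()));
```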
I’m pretty skeptical about what kind of framerate/resolution/latency you’ll get rendering on the server and streaming to a client. WebRTC video might help… it’s also going to tax the hell out of your server… but if the server only needs to render for a single user, I suppose it might work? Is there a reason you can’t render on the client?
yeah… video streaming has pretty high latency. It doesn’t matter for prerendered 3d video, since the viewpoint change is all done on the client… but the full round trip here (client sends camera pose to the server, server renders the frame, compresses it, streams it to the client, client displays it) sounds like it might induce a lot of VR sickness?
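That round trip can be made concrete by summing the stages. All numbers below are illustrative guesses for this sketch, not measurements from the system being discussed:

```javascript
// Rough motion-to-photon budget for remote rendering, in milliseconds.
// Every value here is an illustrative assumption.
const stages = {
  poseUplink: 15,      // client -> server network
  serverRender: 11,    // one frame at ~90 Hz
  encode: 8,           // video compression on the server
  downlink: 15,        // server -> client network
  decodeAndDisplay: 10 // decode + display on the client
};

function motionToPhotonMs(stages) {
  return Object.values(stages).reduce((sum, ms) => sum + ms, 0);
}

// A commonly cited comfort target for VR is roughly 20 ms motion-to-photon,
// so even these optimistic stage estimates overshoot it several times over.
console.log(motionToPhotonMs(stages)); // 59
```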
Hi @manthrax, thank you very much for the pointers. Please find some comments below:
I do share the concerns about VR sickness and QoE; that is something we have to pay attention to and be mindful of. For now the plan is to implement the system in a client/server manner and then run a full performance test to make a data-driven assessment of this topic.
Do you have any reference on how to copy 2 given images (one for each eye) directly into the rendered frame for VR using three.js? That is my main pain point at the moment: I don’t know which APIs to use to achieve that… The Scene object does not seem to have an API for pre-rendered stereoscopic frames…
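For reference, one approach that fits three.js’s stereo layer convention: the WebXR manager enables layer 1 on the left-eye camera and layer 2 on the right-eye camera, so a mesh placed on layer 1 is drawn only for the left eye. A sketch, where the plane meshes and textures are hypothetical names:

```javascript
// Map an eye name to the layer index three.js reserves for it
// (left eye camera sees layer 1, right eye camera sees layer 2).
function stereoLayer(eye) {
  if (eye === 'left') return 1;
  if (eye === 'right') return 2;
  throw new Error('eye must be "left" or "right"');
}

// Hypothetical three.js usage: two full-view planes, one per eye, each
// textured with the corresponding pre-rendered frame.
//   const leftPlane = new THREE.Mesh(planeGeom, new THREE.MeshBasicMaterial({ map: leftTex }));
//   leftPlane.layers.set(stereoLayer('left'));    // visible to the left eye only
//   const rightPlane = new THREE.Mesh(planeGeom, new THREE.MeshBasicMaterial({ map: rightTex }));
//   rightPlane.layers.set(stereoLayer('right'));  // visible to the right eye only
//   scene.add(leftPlane, rightPlane);
```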