Does anyone know or have experience with using Three.js and a C library like libmpv together for video textures?
WHY: HTML video tag playback leaves a lot to be desired. libmpv allows much finer control of playback without having to recreate a video decoder from scratch.
It seems like it should be possible, based on the "Render API" description in libmpv's wiki, by creating an OpenGL context, but thus far my tests have been unsuccessful (a rough sketch of the flow I'm attempting is below).
For clarity, this would be used in an Electron implementation, not bound by a browser's normal sandboxing.
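This is roughly what I've been trying, written as a sketch against a hypothetical Node native addon (`mpv-native`, `create`, `createRenderContext`, and `renderToFbo` are all made-up names; only the C entry points noted in the comments are libmpv's real API):

```js
// Sketch only: "mpvNative" stands for a hypothetical N-API addon wrapping
// libmpv's render API. The real C calls it would wrap are noted inline.
const mpvNative = require('mpv-native'); // hypothetical module name

// Create and initialize the player core (wraps mpv_create / mpv_initialize).
const player = mpvNative.create();

// Create a render context tied to the current OpenGL context (wraps
// mpv_render_context_create with MPV_RENDER_PARAM_API_TYPE set to
// MPV_RENDER_API_TYPE_OPENGL, plus an mpv_opengl_init_params whose
// get_proc_address callback resolves GL symbols from this context).
const renderCtx = mpvNative.createRenderContext(player);

player.command(['loadfile', 'video.mkv']); // wraps mpv_command

// Per frame: ask libmpv to draw into a framebuffer object that Three.js
// can sample from (wraps mpv_render_context_render with an
// MPV_RENDER_PARAM_OPENGL_FBO pointing at an mpv_opengl_fbo { fbo, w, h }).
function renderVideoFrame(fboId, width, height) {
  renderCtx.renderToFbo(fboId, width, height);
}
```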
I’m not familiar with the library; however, if you can render the video output to a canvas, you should theoretically be able to feed that to a DataTexture and use it on a material. I’m not sure about the performance, though.
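A minimal sketch of what I mean; `videoCanvas` is a stand-in for whatever 2D canvas the library's frames end up on:

```js
import * as THREE from 'three';

// `videoCanvas` is assumed to be a <canvas> the decoded frames are drawn onto.
const ctx2d = videoCanvas.getContext('2d');
const { width, height } = videoCanvas;

// Pull the current pixels off the canvas and wrap them in a DataTexture.
const imageData = ctx2d.getImageData(0, 0, width, height);
const texture = new THREE.DataTexture(
  new Uint8Array(imageData.data.buffer),
  width,
  height,
  THREE.RGBAFormat
);
texture.needsUpdate = true; // first upload to the GPU

const material = new THREE.MeshBasicMaterial({ map: texture });
```

A `THREE.CanvasTexture` pointed at the canvas would skip the `getImageData` step and let the browser upload the canvas contents directly, which is probably the cheaper of the two.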
Edit: after reading their docs, I found this:
libmpv enforces a somewhat efficient video output method, rather than e.g. returning a RGBA surface in memory for each frame. The latter would be prohibitively inefficient, because it would require conversion on the CPU. The goal is also not requiring the API users to reinvent their own video rendering/scaling/presentation mechanisms.
So even if you manage to render this to a canvas, every frame still has to be converted and re-uploaded to video memory to get from whatever the library outputs into a three.js material. I believe this would hurt performance for exactly the reason they mention in their docs.
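To make that cost concrete, here's what the per-frame update would look like with the DataTexture sketch from above; every line in the body is a full-frame copy:

```js
// Called once per displayed frame; each step touches every pixel.
function updateVideoTexture() {
  const frame = ctx2d.getImageData(0, 0, width, height); // canvas -> CPU readback
  texture.image.data.set(new Uint8Array(frame.data.buffer)); // CPU-side copy
  texture.needsUpdate = true; // schedules a CPU -> GPU texImage2D re-upload
}
```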