I know this is old but I’ve been trying to solve this as well…
R3F (react-three-fiber) has great resources on how to use the real-time scene directly as an envmap.
If you’re trying an effect or something, that would be the way to go…
However, if you need to render a real location or event using video data… you’re going to have a bad time…
A key thing about Image Based Lighting is that the HDRI needs a much fuller range of color/light information, even more than is humanly perceptible…
So while an 8-bit PNG looks great, it doesn't capture the full picture of how bright the light actually was.
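To make that concrete, here's a tiny (hand-rolled, illustrative) sketch of why 8-bit clipping matters for lighting. The functions are made up for the example; the point is just that anything brighter than "paper white" collapses to the same value:

```javascript
// Why 8-bit LDR loses lighting information: an HDR pixel stores real
// radiance (can exceed 1.0), while an 8-bit pixel clamps to [0, 255],
// so anything brighter than "white" is lost on the round trip.
function encodeLDR(radiance) {
  return Math.min(255, Math.round(radiance * 255)); // clamp + quantize
}
function decodeLDR(byte) {
  return byte / 255;
}

const sunHighlight = 6.0;            // HDR radiance, 6x brighter than white
const roundTrip = decodeLDR(encodeLDR(sunHighlight));
// roundTrip comes back as 1.0 — the 6x highlight energy IBL needs is gone.
```

That lost energy is exactly what makes reflections and bounce light look right, which is why IBL sources want float formats.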
For sites like Poly Haven, which have strict standards, the requirement is full 32-bit float color, the same kind of data you'd work with in a compositor or modeling software.
They also demand crazy high resolution, which only multi-cam 360 rigs can attempt…
Then we have the problems of video.
360 videos are already huge. A 2-minute uncompressed HDR 360 video I have is 17 GB.
It IS in 16-bit color,
but even Adobe only managed that in 2016, so most tools can't work with the file.
The next problem is transmitting the file. Decent codecs like H.264, or even AV1, are effectively 8-bit in practice (higher-bit-depth profiles exist, but tool and hardware support for them is spotty).
So you'd have massive files.
That’s kinda where im stuck now in finding output options that are viable.
The game-industry-standard codec Bink actually added full HDR (16-bit) support recently.
So if you have $9,000 for a license, you could potentially make that work…
HOWEVER
If we go into it knowing we don't need Pixar-level frames, we can try things that are more approachable and potentially viable.
When running real-time, 1K or 2K HDRIs are often more than enough, and you can actually generate the lighting in its own weird prefiltered texture (that's what PMREM does in three.js).
So off the bat, I'm thinking we'll probably be pretty alright using standard video, and since it'll be a video background, that would probably cover the shortcomings (untested assumption).
There ARE tools that take 8-bit images and do a decent job of upscaling them to 16- or even 32-bit, but I don't know enough about them.
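One simple flavor of that upconversion is inverse tone mapping: expand the 8-bit values back into a rough HDR range. The sketch below is a naive hand-rolled version (real tools use much smarter, often learned, models), just to show the idea:

```javascript
// Naive inverse tone mapping: assume the 8-bit value came through a
// Reinhard curve (ldr = hdr / (1 + hdr)) and invert it to guess radiance.
// Illustration only — not what production upconversion tools actually do.
function ldrByteToHDR(byte) {
  const ldr = Math.min(byte / 255, 0.99); // cap to avoid dividing by ~0
  return ldr / (1 - ldr);                 // inverse Reinhard
}

ldrByteToHDR(128); // mid-gray stays near mid-gray (≈ 1.01)
ldrByteToHDR(250); // near-white expands hard into HDR range (≈ 50)
```

The catch is that the expansion is a guess: two pixels that clipped to the same 8-bit value can't be told apart, so real highlight detail stays lost.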
One idea I had was to output frames into a TextureAtlas which super recently got KTX compression support in three…
So if your animation is super short, you could potentially do a flip book…
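A flip book out of an atlas is mostly UV bookkeeping. A minimal sketch, with a hypothetical square grid (a shader would then sample at `offset + uv * scale`):

```javascript
// Given a frame index and an NxN atlas grid, compute the UV offset/scale
// of that frame's tile.
function atlasTile(frame, grid) {
  const col = frame % grid;
  const row = Math.floor(frame / grid);
  const scale = 1 / grid;
  // Flip rows so frame 0 sits at the top-left in UV space (v grows upward).
  return { u: col * scale, v: 1 - (row + 1) * scale, scale };
}

// An 8x8 atlas holds 64 frames — about 5 seconds of animation at 12 fps.
atlasTile(0, 8); // { u: 0, v: 0.875, scale: 0.125 }
atlasTile(9, 8); // { u: 0.125, v: 0.75, scale: 0.125 }
```

The hard limit is obvious from the math: frame count grows with the square of the grid size while per-frame resolution shrinks the same way, so flip books only work for very short loops.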
The totally crazy idea I'm cooking up now is to render a standard 8-bit video in regular web-compatible formats and put that in the background.
Then do a ~1K irradiance (or whatever it is) light-data set of frames for the same time length but way smaller… like 25% of final resolution, and probably starting at 12 fps.
Blend and supersample the extra light data, then merge it with my standard video on the GPU.
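The blend-and-merge step could look something like this. Every name here is made up, and in practice this would be per-pixel in a fragment shader rather than on the CPU, but the arithmetic is the same:

```javascript
// Interpolate between two low-fps light frames so the 12 fps irradiance
// stream stays smooth against a higher-fps video, then modulate the base
// video pixel by the interpolated light value.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

function mergedPixel(videoRGB, lightA, lightB, t, lightStrength) {
  return videoRGB.map((v, i) => {
    const light = lerp(lightA[i], lightB[i], t); // temporal blend
    return v * lerp(1, light, lightStrength);    // modulate base video
  });
}

// Halfway between two light frames, full light influence:
mergedPixel([0.5, 0.5, 0.5], [1, 1, 1], [2, 2, 2], 0.5, 1);
// → [0.75, 0.75, 0.75]
```

With `lightStrength` as a dial, you could fade the HDR-ish light layer in and out and see how much it actually buys you over the plain 8-bit background.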
Also, frame-rate-wise, anything over 30 fps is overkill for the light data, so you'd have at least double the time to evaluate each frame…
It’s all totally experimental… and I’m rambling…