I am building a furniture configurator and would like users to be able to request a more realistic image than the three.js renderer produces. I found this thread, which offers exactly this function, but there is no more recent information. My idea is to export a GLB file and load it automatically into Blender. Has anyone implemented a similar feature and have any tips or solutions?
Personally, I’d first check whether three-gpu-pathtracer yields results realistic enough to accept - that would let you avoid quite a bit of complexity and stack mixing, since you could keep everything within JS and three.js.
If that’s not possible - you can combine GLTFExporter, Blender CLI rendering, and a glTF-to-Blender converter (or just write a Python script by hand that imports the GLB into a .blend file - see the sketch below).
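For the Blender side, something like this could work, run headless via `blender --background --python render_glb.py -- scene.glb out.png`. This is an untested sketch - the Cycles settings and paths are placeholders, and you still need a camera and lights, either exported in the GLB or created in the script:

```python
# render_glb.py - run headless with:
#   blender --background --python render_glb.py -- /path/to/scene.glb /path/to/out.png
import sys
import bpy

# Everything after "--" on the CLI is passed through to the script untouched.
argv = sys.argv[sys.argv.index("--") + 1:]
glb_path, png_path = argv[0], argv[1]

# Start from an empty scene so the default cube/light/camera don't leak in.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Blender ships a glTF 2.0 importer, so GLB files load directly.
bpy.ops.import_scene.gltf(filepath=glb_path)

scene = bpy.context.scene
scene.render.engine = 'CYCLES'    # path tracer, for the realistic look
scene.cycles.samples = 256        # quality vs. render time trade-off
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = png_path

# Pick up a camera imported from the GLB if the scene has none assigned
# (GLTFExporter can export cameras); otherwise create one here.
if scene.camera is None:
    for obj in scene.objects:
        if obj.type == 'CAMERA':
            scene.camera = obj
            break

bpy.ops.render.render(write_still=True)
```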
If neither of these works - you can hop on the AI train: render your scene in three.js at lower quality, save it as a PNG, then pipe it through Stable Diffusion / DALL-E / Midjourney (MJ doesn’t have an API, though) and ask the model to convert it into a realistic rendering with a proper prompt - rough sketch below.
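The img2img step could look roughly like this with Hugging Face diffusers - the model id, prompt, and strength are placeholder assumptions, any img2img-capable checkpoint works:

```python
# Rough img2img sketch: feed the low-quality three.js capture to
# Stable Diffusion and let it restyle it toward photorealism.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The PNG captured from the three.js canvas.
init_image = Image.open("threejs_capture.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="photorealistic furniture product shot, studio lighting, 4k",
    image=init_image,
    strength=0.5,         # how far the model may stray from the input image
    guidance_scale=7.5,
).images[0]
result.save("realistic.png")
```

The catch with this route is consistency - the model may change materials or proportions, which matters for a configurator where the image has to match what the user picked.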
I already experimented with three-gpu-pathtracer; it’s good, but not the kind of result I’m looking for.
I managed to implement something like this on my local machine, so it looks promising to get it onto a server with an API to send files and retrieve images.
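Roughly the shape I’m going for on the server - a simplified FastAPI sketch, assuming Blender is on PATH and a render script like the one above (not my actual code; error handling kept minimal):

```python
# Minimal endpoint: accept a GLB upload, render it with headless Blender,
# return the PNG.
import subprocess
import tempfile
from pathlib import Path

from fastapi import FastAPI, UploadFile
from fastapi.responses import FileResponse

app = FastAPI()

@app.post("/render")
async def render(file: UploadFile):
    workdir = Path(tempfile.mkdtemp())
    glb_path = workdir / "scene.glb"
    png_path = workdir / "out.png"
    glb_path.write_bytes(await file.read())

    # Blocks until Blender finishes; fine for testing, see note below.
    subprocess.run(
        ["blender", "--background", "--python", "render_glb.py",
         "--", str(glb_path), str(png_path)],
        check=True,
        timeout=600,
    )
    return FileResponse(png_path, media_type="image/png")
```

For real use, a job queue in front of Blender probably makes more sense than rendering inside the request, since renders can take minutes.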
That’s not an option for now, but it sounds very interesting as well :)
Thanks for your input, it pointed me in the right direction!!