Wow, just this specific feature opens up so many possibilities!! Amazing work as always!
Thanks! Yeah the HDR rendering is neat - there’s no requirement to do it with path tracing, either. Any floating point render target can be encoded to an HDR gainmap. The downside is that it’s a bit slow so it can’t happen in real time. We’ll have to wait for some canvas and WebGL upgrades for that.
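For anyone curious what the gainmap encoding involves conceptually: the gainmap stores the per-pixel ratio between the HDR and SDR renditions, typically as log2(HDR / SDR) remapped into [0, 1] along with the min/max boost needed to decode it. Here's a toy CPU-side sketch of that idea in plain JS — these function names are made up and this is not the library's actual encoder, just the core math:

```javascript
// Toy gainmap encode/decode over flat arrays of luminance values.
// gain = log2((hdr + eps) / (sdr + eps)), normalized by the content's
// min/max log gain; the min/max are kept as metadata for decoding.
function encodeGainmap(hdr, sdr, eps = 1e-5) {
  const logGain = hdr.map((h, i) => Math.log2((h + eps) / (sdr[i] + eps)));
  const min = Math.min(...logGain);
  const max = Math.max(...logGain);
  const range = Math.max(max - min, eps);
  return {
    map: logGain.map(g => (g - min) / range), // normalized gainmap values
    minLog2Gain: min,
    maxLog2Gain: max,
  };
}

function decodeGainmap(sdr, { map, minLog2Gain, maxLog2Gain }, eps = 1e-5) {
  return sdr.map((s, i) => {
    const g = minLog2Gain + map[i] * (maxLog2Gain - minLog2Gain);
    return (s + eps) * Math.pow(2, g) - eps;
  });
}
```

Decoding the gainmap against the SDR base image recovers the original HDR values, which is why the format degrades gracefully on displays that ignore the gainmap.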
It's amazing how the three.js BVH path tracer can render photorealistic scenes like these in a browser. A few years from now, renderings like this will be the norm in open-world three.js games, or in virtual exploration for showcasing architecture and real estate projects. Looking forward to that future.
This helps me a lot, and it's easy to use.
Awesome! Here is a use case for optics/optomechanics (i.e., 3D animated optics).
I’ve used three-gpu-pathtracer v0.0.23 to create an online demonstration featuring optical prisms. The result is mind-blowing: it accurately captures the multiple reflections and refractions of real-world optical elements. For example, below is the rendering of a panoramic telescope, an optical instrument with a right-angle prism, a Dove prism, and an Amici roof prism. The output is similar to what KeyShot 9 can achieve. I’ve published this project on GitHub.
I generated an animated glTF file in Blender and adapted the script to handle automated rendering after each frame update.
The rendering is amazing, but it is extremely taxing on hardware resources. When I run the demo, my MacBook Pro goes into 80-90 degree Celsius territory.
It would be great if you could add some optimization shortcuts so that regular folks can run it on older computers or mobile phones. Maybe some techniques, like screen-space effects or texture mapping, could achieve similar results with less demand on hardware.
Thank you. Great work!
Path tracing is an inherently GPU-intensive process.
The “optimizations” you described are basically what WebGL rendering is without path tracing.
Well, that is not completely true. There must be techniques and/or shortcuts to approximate path-traced results; otherwise Unreal and Unity would not exist. The techniques used in those real-time engines are lacking in WebGL and three.js. They can make a scene look better, but they require very advanced shaders that most 3D programmers cannot write.
Please google: “Does unreal use path tracing” and “Does unity use pathtracing”.
Specifically the parts about:
Yes, Unreal Engine does use path tracing, but it's not a real-time feature.
and:
Real-time Considerations:
While path tracing (in Unity) produces high-quality results, it is generally not a real-time rendering technique and may require significant rendering time, especially for complex scenes.
It’s been a while since there’s been an update for the project, but little by little the project will be moving over to WebGPU compute shaders. Path tracing in WebGL could produce some nice results but, as you can imagine, it proved impractical to maintain and not very scalable across platforms. Being designed for these kinds of tasks, WebGPU compute shaders open up the possibility of a litany of new optimizations and features, as well as hopefully being more reliable across platforms.
It’s not so exciting, but here’s a quick look at an initial compute implementation of a “megakernel”-style path tracer, meaning that all of the light bounces for a path are calculated in a single-pass shader.
While functional, these kinds of megakernel approaches are not great for GPU architectures, which fundamentally want sibling pixels to run the same operations, so sibling pixels diverging due to different numbers of bounces and paths are not ideal. More noticeably, tracing the full path in one pass can be very expensive, leading to the kinds of unresponsive framerates you’ve likely experienced with the WebGL path tracer.
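To illustrate the megakernel structure, here's a toy CPU-side sketch (hypothetical names, not the actual shader code): each pixel traces its entire path, bounce by bounce, inside one kernel invocation, so pixels with short paths still sit in the same dispatch as pixels with long ones.

```javascript
// Toy "megakernel" per-pixel loop: trace() returns { scattered: nextRay }
// on a surface hit, or an empty object when the ray escapes / hits a light.
// The whole path is resolved in one call, which is where the divergence
// and long-frame-time problems come from on real GPU hardware.
function megakernelPixel(ray, trace, maxBounces) {
  let bounces = 0;
  let hit = trace(ray);
  while (hit.scattered && bounces < maxBounces) {
    hit = trace(hit.scattered);
    bounces++;
  }
  return bounces; // number of bounces resolved for this pixel
}
```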
A “wavefront” path tracer model, however, structures path tracing to add rays and geometry intersections to queues and process them progressively. Camera rays initialize the ray queue, every object hit pushes its generated scatter ray back onto the queue, and rays that hit lights or the environment contribute to the final image. This means we can control exactly how many rays are run per frame, which should enable us to keep the page responsive while still making good progress and dedicating compute shader operations to the pixels that actually need the work.
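The scheduling idea can be sketched in plain JavaScript (a CPU-side toy with made-up names, not the actual WebGPU implementation): a fixed per-frame budget of rays is pulled off the queue, surface hits enqueue the next bounce, and escaped rays contribute to the image.

```javascript
// Toy wavefront scheduler: processes at most `raysPerFrame` rays per call.
// trace() returns { scattered: nextRay } on a surface hit, or an empty
// object when the ray reaches a light / the environment and terminates.
function processFrame(rayQueue, raysPerFrame, trace) {
  let contributed = 0;
  const budget = Math.min(raysPerFrame, rayQueue.length);
  for (let i = 0; i < budget; i++) {
    const ray = rayQueue.shift();
    const hit = trace(ray);
    if (hit.scattered) {
      // Surface hit: push the next bounce back onto the queue for a
      // later frame instead of tracing it immediately.
      rayQueue.push(hit.scattered);
    } else {
      // Ray escaped to a light or the environment: accumulate its
      // contribution into the image.
      contributed++;
    }
  }
  return contributed;
}
```

Because the per-frame budget is fixed, frame time stays bounded no matter how deep individual paths go — the remaining work simply carries over to later frames.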
It won’t look significantly different, but here’s a slowed-down view of the initial wavefront implementation progressively pulling rays off the queue and processing them:
There’s still a long way to go, but it’s exciting to see the potential of this approach come together. I have to shout out @TheBlek on GitHub, as well, for helping to move this work forward! If anyone else is interested in contributing, please feel free to reach out! There are a lot of separable problems in the mix.
Some small progress updates: the WebGPU implementation now has support for both environments & backgrounds, allowing for some basic environment lighting.
And the recently-added “ObjectBVH” for three-mesh-bvh is now being used to finally enable rendering scenes without merging everything into a single geometry, allowing for path tracing complex scenes like these instanced spheres.