Hi!
I’ve recently been exploring many different optimization techniques I could use in three.js.
That’s how I recently stumbled upon the GPU skinning behaviour. From my understanding, three.js falls back to CPU skinning (which is supposedly slower) whenever the bone count exceeds a threshold indirectly defined by the WebGL/WebGPU device limits on maximum uniform sizes.
My understanding of the implementation is that three.js stores the bone matrices in a texture at mesh.skeleton.boneTexture when skinning is handled by the GPU. Yet in all my tests on super lightweight, simple models it always seems to be null.
I’ve also read here and there that CPU skinning can sometimes be faster than GPU skinning in some engines. I’m not sure about that and would think it’s usually the opposite. So does anyone know whether the reason I have never seen any boneTexture being computed is that GPU skinning is no longer a thing in modern versions of three.js, perhaps because it wasn’t better than CPU skinning? Or do I need to manually toggle GPU skinning somehow?
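For reference, here’s roughly how I’ve been checking (a minimal sketch, assuming a SkinnedMesh in `mesh` and a WebGLRenderer in `renderer`; the exact behaviour may vary with the three.js version):

```js
// Sketch of my check (assumes `mesh`, `renderer`, `scene`, `camera` exist).
// `floatVertexTextures` reports whether the device can sample float
// textures from the vertex shader, which is what bone textures require.
console.log('float vertex textures:', renderer.capabilities.floatVertexTextures);

// The bone texture may only be created lazily, so check after a render.
renderer.render(scene, camera);
console.log('boneTexture:', mesh.skeleton.boneTexture);
console.log('bone count:', mesh.skeleton.bones.length);
```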
CPU skinning is a fallback for when the float texture extension was not available, a limitation from a decade ago, before WebGL2 existed. GPU skinning is the way to go. Doing it on the CPU for all vertices would be extremely costly at today’s poly-count standards.
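To give a feel for why the texture path scales, here’s a rough sketch of the idea behind a bone texture. This is not three.js’s exact internal code, just the packing scheme: each 4x4 matrix is 16 floats, i.e. 4 RGBA texels.

```js
import * as THREE from 'three';

// Rough sketch of a bone texture (not three.js's exact internals):
// every bone matrix occupies 4 RGBA float texels, so even hundreds of
// bones fit into a tiny texture instead of a size-limited uniform array.
function makeBoneTexture(boneCount) {
  let size = Math.ceil(Math.sqrt(boneCount * 4)); // 4 texels per matrix
  size = Math.max(size, 4);

  const boneMatrices = new Float32Array(size * size * 4); // RGBA per texel
  const texture = new THREE.DataTexture(
    boneMatrices, size, size, THREE.RGBAFormat, THREE.FloatType
  );
  texture.needsUpdate = true;
  return { texture, boneMatrices, size };
}
```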
Ok, thanks for the clarification. That means boneTextures are just an artifact left over from older implementations, and I should not worry about them.
Is there a way to check whether the three.js renderer is currently skinning on the GPU and not the CPU, aside from measuring performance?
EDIT: I think I managed to double-check it, since skeleton.boneMatrices is being used, which from what I could read online is typical of the WebGL2 and WebGPU behaviour.
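Concretely, this is what I looked at (assuming my SkinnedMesh is in `mesh`):

```js
// skeleton.boneMatrices is the flat Float32Array of 4x4 bone transforms
// (16 floats per bone) that gets uploaded to the GPU every frame.
const skeleton = mesh.skeleton;
console.log(skeleton.boneMatrices instanceof Float32Array); // true
console.log(skeleton.boneMatrices.length === skeleton.bones.length * 16); // true
```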
For some context, my performance tab looks like this when rendering animation frames:
The green bars at the bottom are from the GPU; the rest has to be CPU-only. The common function calls I see lagging the whole thing seem clearly related to vertex positioning and applying transforms. A whole animation frame here takes about 26 ms to compute, which is not good, especially on the device I’m currently testing with.
The main cost in that screenshot is computeBoundingSphere. In general that should only be done once, up front, and not every frame. If you’re seeing this cost on every frame, I would set a debugger breakpoint and try to figure out why the check for an existing bounding sphere is failing or not being made.
If needed, you could assign a 1–2x larger bounding sphere to accommodate the skinning transforms; it doesn’t need to be an exact fit to the (animated) skeleton on each frame.
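Something like this, as a sketch (the 2x factor is just an illustrative guess; tune it to your rig):

```js
// Compute the bounds once, up front, then pad the radius so the
// animated pose never pokes outside the sphere; no per-frame recompute.
mesh.geometry.computeBoundingSphere();
mesh.geometry.boundingSphere.radius *= 2; // illustrative safety factor
```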
Wow, I feel stupid now. It’s quite obvious once you read the performance stack again, and I know why it’s doing this. At some point I noticed clipping issues, and at the time I didn’t know the process was called “frustrum culling”, so I reset the bounding spheres to force them to update. It worked, but I didn’t have the FPS stats on at the time, so I didn’t notice it tanked them. I just had to toggle the frustum-culling boolean to disable it, especially since I don’t want these meshes to be culled anyway.
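For anyone landing here later, the whole fix was just this (a sketch, assuming the loaded model sits in `model`):

```js
// Disable frustum culling on the skinned meshes; I don't want them
// culled anyway, and it avoids touching the bounds every frame.
model.traverse((obj) => {
  if (obj.isSkinnedMesh) obj.frustumCulled = false;
});
```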
Well, that’s a great fix! I’m back at max fps x)
(deprecated !== depreciated) && (frustum !== frustrum)