Sorry, I can’t tell you how to improve performance without more information. 9 models and ~10MB of data could render very fast or very slowly, depending on various factors.
One useful test would be to see how many meshes you have. Try:
var numMeshes = 0;
scene.traverse(function (o) {
  if (o.isMesh) numMeshes++;
});
console.log('There are ' + numMeshes + ' meshes in this scene.');
Ideally, with 9 models, that would be about 9. If you have 100+, that could be part of the performance issue. Note that the test needs to be run after all of your FBX models have finished loading.
Reducing the number of those meshes to 1 would be ideal. Turning off scene.autoUpdate could be another way to remedy this. It all depends on what you’re trying to do.
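Merging usually means combining geometries that share a material into one mesh, since a merged mesh can only have one material. A minimal sketch of the grouping step in plain JavaScript (the groupByMaterial helper and the stand-in objects are illustrative, not part of three.js; in three.js you would then feed each group’s geometries to BufferGeometryUtils.mergeGeometries from the addons):

```javascript
// Group meshes by material so each group can become a single merged mesh.
// Geometries can only be merged when they share one material, so this
// grouping step comes first; the actual merge would use three.js's
// BufferGeometryUtils.mergeGeometries on each group's geometries.
function groupByMaterial(meshes) {
  var groups = new Map();
  meshes.forEach(function (mesh) {
    var list = groups.get(mesh.material) || [];
    list.push(mesh);
    groups.set(mesh.material, list);
  });
  return groups;
}

// Illustrative usage with stand-in objects (in a real scene these would be
// THREE.Mesh instances collected via scene.traverse):
var matA = { name: 'A' };
var matB = { name: 'B' };
var meshes = [
  { material: matA },
  { material: matB },
  { material: matA }
];
var groups = groupByMaterial(meshes);
console.log(groups.size); // 2 material groups instead of 3 draw calls
```

The payoff is fewer draw calls: three meshes sharing two materials collapse to two merged meshes, and with a texture atlas you could get down to one.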
Also, in the end, some devices will simply be incompatible with something you’re trying to do. A particular material might trigger a shader that compiles in a way that doesn’t play well with a certain device. That’s an overall drawback of WebGL.
If, after everything that was suggested to you here, you still face problems …
We were once in trouble over a major site that had to be fast on any device, from the weakest and oldest to the newest. The problem was that with our full effects chain, highest resolution, etc., the app would just crumble on some devices but run perfectly fine on others. The solution was this: https://twitter.com/0xca0a/status/1563958783805620225
This would be simple to port to vanilla three.js. What it does is allow the app to find its own sweet spot: it measures performance at runtime and steps through gradual quality changes (which you control) until the framerate is within a safe margin, so it doesn’t flip-flop.
The Twitter demo above, by the way, deals with instanced meshes and just adapts drawRange to framerate. But the adjustment could be anything, from changing the device pixel ratio to reducing the number of lights.
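The idea can be sketched in plain JavaScript without any framework. This hypothetical QualityStepper (names and thresholds are my assumptions, not the API of the linked demo) averages frame times, drops the quality level when the frame rate falls below target, and only raises it again when frames are comfortably fast, which gives the hysteresis that prevents flip-flopping:

```javascript
// Adaptive quality sketch: step a quality level (0 = lowest) up and down
// based on measured frame times, with hysteresis so it doesn't oscillate.
function QualityStepper(maxLevel, targetFps) {
  this.level = maxLevel;            // start at highest quality
  this.maxLevel = maxLevel;
  this.targetMs = 1000 / targetFps; // frame budget in milliseconds
  this.samples = [];
}

QualityStepper.prototype.tick = function (frameMs) {
  this.samples.push(frameMs);
  if (this.samples.length < 30) return this.level; // average over 30 frames
  var sum = this.samples.reduce(function (a, b) { return a + b; }, 0);
  var avg = sum / this.samples.length;
  this.samples.length = 0;
  if (avg > this.targetMs && this.level > 0) {
    this.level--; // too slow: drop quality one step
  } else if (avg < this.targetMs * 0.7 && this.level < this.maxLevel) {
    this.level++; // comfortably fast: cautiously raise quality again
  }
  return this.level;
};

// Usage: feed it real frame times from your render loop, then apply the
// level however you like (drawRange, pixel ratio, light count, ...).
var q = new QualityStepper(3, 60);
for (var i = 0; i < 30; i++) q.tick(25); // simulate 25 ms frames (~40 fps)
console.log(q.level); // dropped from 3 to 2
```

In a real render loop you would call q.tick(delta) once per frame and map each level to a concrete setting, e.g. level 0 means half pixel ratio and no postprocessing.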