Hello guys, when running my dual TilesRenderer scene (Google Photorealistic 3D Tiles plus a custom 3D tileset) with a shared lruCache for an extended period of time and aggressively exploring / zooming in and out, the internal tile metadata tree grows unboundedly in the JS heap. The lruCache correctly evicts GPU geometry/textures by byte budget, but the JS-side tree structure that represents every fetched tile is never pruned. There appears to be no public API (dispose-and-recreate aside) to garbage-collect subtrees that are no longer within the camera frustum or relevance radius, and manually severing child arrays corrupts traversal. Is there an intended pattern for long-running sessions with large, dynamically explored tilesets, or is periodic full renderer disposal currently the only escape hatch? Below I will provide images of the stats I'm logging and the objects that keep building up.
You can see that on a freshly opened scene, my heap has 120 MB allocated, with 124 MB in the LRU cache.
After playing and exploring, I zoomed out into the air and waited 2-3 minutes to give the GC enough time to collect what it can. The LRU cache is well within limits, but the heap has now reached 1 GB and stays there.
Here is a snapshot of the objects on a freshly opened scene; you can see the count of Array objects here is ~90k.
After some playing around, the Array objects have reached 2,084,432. I've seen them go up to 3M, and they would probably go higher if I kept trying.
This definitely sounds frustrating, especially if you’re trying to run long sessions.
From what you’re describing, the LRU cache is doing its job on the GPU side by evicting geometry and textures based on byte budget, but the JS-side tile metadata tree just keeps growing. So even after zooming out and giving GC time to run, the heap doesn’t drop because all those tile nodes and arrays are still strongly referenced in the internal tree structure.
It sounds like there’s currently no public API to prune or garbage-collect subtrees once they fall outside the frustum or relevance radius. And since manually cutting child arrays breaks traversal, that is not a safe workaround. So for long-running sessions with aggressive exploration across large tilesets like Google Photorealistic 3D Tiles plus a custom tileset, the metadata graph just accumulates over time.
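As an illustrative sketch (not the library's actual internals), this is the kind of pattern that produces exactly those symptoms: a byte-budgeted LRU that frees heavy payloads while a separate tree keeps strong references to every node ever fetched. The `ByteLRU` class and the node shape here are hypothetical stand-ins.

```javascript
// Illustrative only: an LRU that evicts heavy payloads by byte budget,
// while a tree structure retains strong references to every node.
class ByteLRU {
  constructor(maxBytes) {
    this.maxBytes = maxBytes;
    this.cachedBytes = 0;
    this.items = new Map(); // node -> byte size, in insertion order
  }
  add(node, bytes) {
    this.items.set(node, bytes);
    this.cachedBytes += bytes;
    // Evict oldest payloads until we're back under budget.
    for (const [oldNode, oldBytes] of this.items) {
      if (this.cachedBytes <= this.maxBytes) break;
      oldNode.payload = null; // the heavy data is freed...
      this.items.delete(oldNode);
      this.cachedBytes -= oldBytes;
    }
  }
}

const root = { children: [], payload: null };
const lru = new ByteLRU(100);
for (let i = 0; i < 1000; i++) {
  const child = { children: [], payload: { bytes: 10 } };
  root.children.push(child); // ...but the tree still references every node
  lru.add(child, 10);
}
```

After this loop, `lru.cachedBytes` is capped at the budget, yet `root` still retains all 1000 node objects (and their child arrays), so the JS heap keeps growing even though the cache reports it is within limits.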
Based on that, periodic full disposal and recreation of the renderer or tileset may indeed be the only reliable way right now to reset the JS heap growth, unless the library introduces a supported pruning mechanism for the internal tile tree. It might be worth confirming with the maintainers whether subtree eviction is on the roadmap, because for large scale dynamic exploration this kind of unbounded metadata growth is not really sustainable.
Curious to hear if there’s an intended lifecycle pattern for this, but from your data the Array-count explosion into the millions definitely points to retained references rather than GC failure.
the JS-side tree structure that represents every fetched tile is never pruned.
Where is this assumption coming from? Children of external tile sets are explicitly discarded when the tile set is no longer needed. If you can provide some more information about how you’re measuring these things and calculating “eviction-protected” and “evictable”, as well as what your “custom” tile set is, it might become clearer what’s happening. Simplifying things down to a single, basic tile set would probably make this easier to track down, too.
Thank you for your input. For the stats, I'm polling performance.memory every 500ms for the heap numbers; the rest come from the shared LRUCache and PriorityQueue:
setStats({
  jsHeapUsed: mem?.usedJSHeapSize ?? 0,
  jsHeapTotal: mem?.totalJSHeapSize ?? 0,
  jsHeapLimit: mem?.jsHeapSizeLimit ?? 0,
  hasHeapApi: !!mem,
  // LRU Cache Performance
  lruCachedBytes: lruCache.cachedBytes,
  lruMaxBytes: lruCache.maxBytesSize,
  lruTileCount: lruCache.itemSet.size,
  lruMaxTiles: lruCache.maxSize,
  lruUsedTiles: lruCache.usedSet.size,
  lruIsFull: lruCache.isFull(),
  // Task Queue Management
  download: {
    active: downloadQueue.currJobs,
    pending: downloadQueue.items.length,
    maxJobs: downloadQueue.maxJobs
  },
  parse: {
    active: parseQueue.currJobs,
    pending: parseQueue.items.length,
    maxJobs: parseQueue.maxJobs
  },
  process: {
    active: processNodeQueue.currJobs,
    pending: processNodeQueue.items.length,
    maxJobs: processNodeQueue.maxJobs
  },
  renderers: rendererStats
});
const unusedTiles = stats.lruTileCount - stats.lruUsedTiles;
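To answer the earlier question about how "eviction-protected" and "evictable" are calculated, here is a sketch of the same derivation, assuming `itemSet` holds all cached tiles and `usedSet` holds the tiles currently in use (and therefore protected from eviction); the function name is hypothetical.

```javascript
// Hypothetical helper deriving the displayed stats from the LRU cache's sets.
function deriveLruStats(lruCache) {
  const total = lruCache.itemSet.size;
  const protectedCount = lruCache.usedSet.size; // "eviction-protected"
  return {
    evictionProtected: protectedCount,
    evictable: total - protectedCount, // same quantity as unusedTiles above
  };
}
```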
Our mesh is a photogrammetry tiled mesh, so what I am doing is cutting a hole in the Google Maps tiles and positioning our mesh there. To make the tests easier, I've removed this part of the logic and the shared resources, so it's just one renderer rendering the Google tileset with default settings and no additional plugins.
After doing the tests with just one renderer and only the Google tileset, with all default settings and no plugins, there is no such build-up of objects. So the library is bulletproof; the problem is somewhere in my code. I will report back when I find the concrete reason.
cutting a hole in the google maps and positioning our mesh there
How are you cutting the hole? Are you using one of the official plugins like “ImageOverlayPlugin” to do so? It’s possible there’s an issue in one of the plugins that’s not disposing of references correctly but I’d need to see a repro to understand. Adding the pieces back one at a time will help make it more clear where the issue is.
I found the cause: it’s the UnloadTilesPlugin. I tested my setup with a single renderer, a double renderer, shared caches, and the other plugins toggled on/off, and that’s the one causing the object build-up. I am using it with the following settings:
<TilesPlugin
  plugin={UnloadTilesPlugin}
  delay={2000}
  bytesTarget={250_000_000}
/>
Now, after removing it, I'm running my full scene without any memory issues.
Thanks - would you be able to submit an issue with a minimal repro to the project along with instructions for how to test and evaluate memory? It sounds like the plugin is retaining tile references incorrectly.
cutting a hole in the google maps and positioning our mesh there
Also I’m still curious about this. Are you using a custom solution? Or the image overlay plugin? I’m curious as to why you might have chosen one or the other.
https://codesandbox.io/p/sandbox/n7t27z
Ok, so what I do is:
- Open the preview url in a new tab
- Wait for everything to load and take a snapshot from the memory tab in dev tools
- Start panning the map aggressively keeping the download and parse queue busy non-stop for like 30-40 sec.
- Then return to roughly the same place as you started in, wait a bit for garbage collection and take a snapshot again.
- I do this 2-3 times, and notice that every time the memory and the Array object count are a bit higher.
Then, if you remove the UnloadTilesPlugin and repeat the previous test in a new tab, everything should be fine.
As for the hole cutting, it’s a custom solution I built with Claude Code. I just wasn't aware at the time that this could be achieved with the image overlay plugin.
This has been fixed in the latest v0.4.22 release (this PR) if you’d like to test this.
https://codesandbox.io/p/sandbox/n7t27z
Also, I appreciate the repro, but for future reference a “minimal repro” is one that is usually in a single file and does not include any extra libraries. Unnecessary additions and extra files make it difficult to see exactly where the issue is, or to copy the repro to a context where changes can be tested.