webgl_unload_test.html (6.8 KB)
I have been struggling with graphics memory not being freed in an application I’m working on… and to that end, I wrote a test app (attached here) to try to reproduce the problem in its simplest form. It’s based on the FBX loader example. Click LOAD to load the model, UNLOAD to dispose all the resources and verify things are being freed. Uninterestingly, the model I used was the Jeep1.FBX file included with the assimp distro (not attached), and I changed the texture from a JPG to an up-sampled (2048x2048x4), uncompressed TGA that weighs in at 16MB.
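For context, the UNLOAD path does roughly this kind of teardown. This is a minimal sketch, not the attached test page verbatim: it assumes a three.js-style scene graph where nodes expose `.children`, and meshes expose `.geometry` and `.material` (with an optional `.map` texture). `geometry.dispose()`, `material.dispose()`, and `texture.dispose()` are the real three.js calls for releasing GPU resources.

```javascript
// Recursively dispose a three.js-style hierarchy: children first,
// then the node's own geometry, texture map, and material.
function disposeHierarchy(root) {
  root.children.forEach(disposeHierarchy);
  if (root.geometry) root.geometry.dispose();
  if (root.material) {
    if (root.material.map) root.material.map.dispose();
    root.material.dispose();
  }
}
```

In the real app you would also remove the root from the scene and drop any remaining JS references so the garbage collector can reclaim the CPU-side objects.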
Anyway, what I’m seeing is just as weird as the leaking memory issue, but in the other direction…
I click LOAD to load the model; the stats tracker shows ~7MB of memory in use, then spikes to ~40MB once everything is resident. Totally expected. Then it drops back to ~7MB after less than 30 seconds!!! I didn’t unload the model and it’s still rendering.
So my first question is: what’s actually being displayed? It doesn’t seem like it’s showing the real amount of video memory used because the texture would exceed what’s indicated all by itself. Has anybody else seen this happen?
Second, does anyone have a recommended method for debugging memory issues like this? I’ve tried watching the renderer.info.memory.textures / geometries counters… meh. The console also collapses repeated messages of the same type into clumps. Veeeery helpful.
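For what it’s worth, those counters can at least be sampled on an interval and logged only when they change, so the console-clumping stops mattering. A sketch, assuming a renderer shaped like three.js’s WebGLRenderer (`renderer.info.memory.geometries` and `.textures` are the real fields):

```javascript
// Returns a sampler that reads the renderer's resource counters and
// logs a line only when the values differ from the previous sample.
function makeMemoryLogger(renderer, log = console.log) {
  let last = '';
  return function sample() {
    const m = renderer.info.memory;
    const line = `geometries: ${m.geometries}, textures: ${m.textures}`;
    if (line !== last) {
      last = line;
      log(line);
    }
    return line;
  };
}

// Usage: setInterval(makeMemoryLogger(renderer), 1000);
```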
My background is C++ with OpenGL and DirectX and I’d be using memory breakpoints well before now… but JS is… fun. The issue I’m having with my real project (this test frees memory OK, but the real app does not) feels like some kind of reference-counting problem where I’m freeing things during a render (even though I’ve attempted to stop that), so the system thinks they’re still in use and never releases them. Asynchronicity. Hard to tell.
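In case it helps anyone reading: one way to rule out the freeing-during-a-render theory is to never call dispose() directly from event handlers, and instead queue the calls and flush them at a single known-safe point in the render loop. This is a hedged sketch, not my actual app code:

```javascript
// Defer dispose() calls so nothing is torn down while a frame is
// mid-flight; flush the queue only at a point known to be outside render().
const disposeQueue = [];

function requestDispose(resource) {
  disposeQueue.push(resource);
}

function flushDisposeQueue() {
  while (disposeQueue.length) {
    disposeQueue.pop().dispose();
  }
}

// Hypothetical render loop showing the safe point:
// function animate() {
//   flushDisposeQueue();              // previous frame has finished
//   renderer.render(scene, camera);
//   requestAnimationFrame(animate);
// }
```

If the leak persists even with disposal serialized like this, the cause is more likely a lingering JS reference keeping the objects alive than a mid-render free.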
Thanks in advance for any pearls of wisdom.