About dynamic impostors

Hi all,

I came across this very interesting article on dynamic impostors and decided it was worth giving this technique a shot.

Article TLDR :
Dynamic impostors are 2D sprites generated at render time to impersonate meshes far from the camera. They are supposed to improve performance while offering better realism and versatility (and, in the end, less work) than offline-generated sprites.

As it looked promising, I forked three.js and added the functionality to the examples, in case it was worth filing a PR. This is the result of my experiment:


As it turns out, the performance is worse than just rendering the original objects, at least in this specific scene on my laptop. The only thing I seem to do differently from the article is rendering each impostor to a separate render target, whereas they advise batching several renders onto one render target. But according to my tests, switching render targets only takes ~0.1 ms, and redraws happen rarely (that's the whole point), so I didn't bother.
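For what it's worth, the batching the article recommends could look something like this fixed-grid atlas. This is just a sketch: `makeImpostorAtlas` and the grid layout are made up for illustration, not three.js API or the article's actual code.

```javascript
// Sketch of atlas batching: instead of one render target per impostor,
// pack many impostor tiles into a single atlas texture. Each impostor gets
// a UV rectangle (for sampling) and a pixel viewport (for rendering into,
// e.g. via renderer.setViewport() before drawing the snapshot).
function makeImpostorAtlas(atlasSize, tileSize) {
  const tilesPerRow = Math.floor(atlasSize / tileSize);
  return {
    capacity: tilesPerRow * tilesPerRow,
    // UV rectangle (0..1 range) of tile `i`, usable as offset/repeat
    uvRect(i) {
      const col = i % tilesPerRow;
      const row = Math.floor(i / tilesPerRow);
      const s = tileSize / atlasSize;
      return { u: col * s, v: row * s, size: s };
    },
    // Pixel viewport of tile `i` inside the shared render target
    viewport(i) {
      const col = i % tilesPerRow;
      const row = Math.floor(i / tilesPerRow);
      return { x: col * tileSize, y: row * tileSize, w: tileSize, h: tileSize };
    },
  };
}
```

With a 1024px atlas and 256px tiles you get 16 impostors per render target, so many redraws share one target instead of each paying for its own.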

So here I am, wondering whether my implementation is bad, whether I just expected too much from this, or whether it's really only useful in a specific context. @Fyrestar I saw you mention this technique a lot, and you seemed satisfied with your own experiments, so I would be interested in your thoughts on the matter.


It's struggling for me too, at around 40 FPS, unfortunately. This kind of impostor is a good approach for animated meshes like characters/persons, though. I'll check the code in more detail when I'm less busy.

Another advantage of impostors that take snapshots like this is that you can work with a flat final image, which is the cheapest an impostor can get. But the memory cost can be high, or get out of hand if it scales badly, as can the cost of updating them. Another issue can be lighting incoherence: unless shading is separated out, the lighting is baked in for the specific angle it was rendered at, which matters if the implementation tries to share angles.

What bothers me most about the common impostor techniques is their snapping angles. Techniques that try to cover up the visible popping either need a lot of samples (e.g. 3 or 6 rows of 9 columns each), suffer from a sort of ghosting, or get expensive in memory or in their morphing. For really distant impostors, and for assets that don't occur in large numbers, some popping isn't really bothersome, but once you have a large amount, like trees in a forest, the popping can get quite distracting.

I recently posted an update on the Volume Hull Impostor technique I came up with a while ago and have now gotten to work. It only requires a small 8-bit image with a fixed number of samples per asset, plus an extra row or rows for further material maps such as roughness (which get packed), to produce a full 3D impostor of the original asset. It needs so little memory that basically every asset can have an impostor in memory by default, resolutions up to 512 aren't costly, and it works with both forward and deferred lighting. The impostor atlas can be compressed like a regular 8-bit image and stored or viewed as a regular texture.

It also basically works with any material: the geometry is in the texture, so it renders like a regular mesh, and individual lights affect each instance locally. They can also partially intersect each other, like real geometry.

At the origin is the impostor; once the Y axis is visible, the impostor is being rendered.

Forest test in my game rendering the distant trees as impostors:

(the update: Tesseract - Open World Planetary Engine - #40 by Fyrestar)


I ran an inspector on your scene, and you're getting poor performance because you're issuing over 400 draw calls per frame. The trucks are pretty slow because the windows, wheels, and body of each truck are each a separate StandardMaterial draw call, so that's about 150 draw calls. Then you're rendering each sprite one at a time, for an additional ~300 draw calls. You'd get the performance improvement you're expecting by using THREE.Points or some instancing method.




Your example with the forest is just too awesome! What is also cool with dynamic impostors is that you can re-render them with new lights, so I assume the technique makes a lot of sense in your planet simulation. I really hope to test your work live soon, and to see this Volume Hull Impostor in action :crossed_fingers:

What do you do about shadows? Can your impostors cast shadows?

About the visual popping: I understand it's possible to mitigate the problem by tweening the alpha channel between two textures. I'm going to try this if I can fix my performance issue.


Yes, that's a lot of draw calls. I will try to use instanced plane meshes instead of sprites (can we instance sprites?). Points would be easier, but I read here that they may cause some issues.

What is bugging me, though, is that the impostors actually reduce the render load by a lot in this demo compared to the original scene (~350 draw calls instead of ~1200, and ~65,000 triangles instead of ~900,000). Yet the original scene still renders much faster on my machine…

I ran a performance test and began to see where I screwed up (the redraw function is an individual impostor texture update):

In order to isolate the real mesh for rendering to the impostor render target, I traverse the whole scene, enable layer 31 on the object to impersonate and on all the lights, and disable layer 31 on everything else, then I render the scene with a camera set to layer 31. This was the most straightforward way I found to render the object with the right lights and transformations (including the lights' transformations), but I realize I've been naive…
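To make that concrete, here is a pure-JS model of the layer juggling, without a WebGL context. In actual three.js this would be `object.layers.enable(31)` / `object.layers.disable(31)` on each object during `scene.traverse`, and `camera.layers.set(31)` on the impostor camera; the helper names below are illustrative.

```javascript
// THREE.Layers is a 32-bit mask: an object is drawn only when its mask
// intersects the camera's mask. Layer 31 is the highest channel available.
const IMPOSTOR_LAYER = 31;

const enable = (mask, ch) => (mask | (1 << ch)) | 0;   // layers.enable(ch)
const disable = (mask, ch) => (mask & ~(1 << ch)) | 0; // layers.disable(ch)
const onlyLayer = (ch) => (1 << ch) | 0;               // layers.set(ch)
const visibleTo = (cameraMask, objectMask) => (cameraMask & objectMask) !== 0;

// The scene walk: the target and the lights get layer 31,
// everything else loses it.
function isolate(objects, target) {
  for (const o of objects) {
    o.mask = (o === target || o.isLight)
      ? enable(o.mask, IMPOSTOR_LAYER)
      : disable(o.mask, IMPOSTOR_LAYER);
  }
}
```

The cost is the full-scene traversal on every redraw, which is exactly what the profiling below points at.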

If, instead of rendering the whole modified scene, I only add a random light to the forged object and render it directly (`renderer.render( forgedObject, camera )`), I get a solid 60 FPS with a lot of room to spare:


And the profiling shows that the calls to updateMatrixWorld in WebGLRenderer.render became trivial. The problem now is that it looks completely wrong.

I will have to figure this out, but at least I have hope again that this technique is worth it. Thank you guys !

@Fyrestar How do you isolate the object to impersonate for rendering in your own implementation, and how do you ensure that it gets the right lights?


The IndexedVolume I mentioned in that thread (basically the spatial index) handles the rendering instead of THREE's default linear approach and makes all the LOD decisions: rendering meshes regularly, auto-instanced, or either of these as impostors, and, depending on occurrence and density, as clusters of impostors.

However, the VHI is basically a standalone module, so for the tests I just extended the Mesh class and use a callback that checks the distance and swaps the material and geometry of the mesh on the fly. That should be more compact and more performant than using layers.
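That distance-based swap could be sketched like this. It is a minimal standalone illustration, with a bit of hysteresis so the mesh doesn't flicker right at the threshold; the names and thresholds are assumptions, not the actual VHI code.

```javascript
// Returns a per-frame updater that decides between the real mesh and the
// impostor representation from camera distance. The hysteresis band means
// the switch back happens slightly closer than the switch out, so small
// camera movements around the threshold don't cause rapid toggling.
function makeLodSwitcher(impostorDistance, hysteresis = 0.1) {
  let current = "mesh";
  return function update(distance) {
    const enter = impostorDistance;                     // go impostor beyond this
    const exit = impostorDistance * (1 - hysteresis);   // go back closer than this
    if (current === "mesh" && distance >= enter) current = "impostor";
    else if (current === "impostor" && distance < exit) current = "mesh";
    return current;
  };
}
```

In three.js terms, the updater's result would pick which geometry/material pair to assign to the mesh before rendering, e.g. from an `onBeforeRender` callback.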

Like I said, the impostors are more of an actual mesh, rendered with the original material extended with the impostor code. The impostor consists of structure maps such as geometry, albedo, roughness, etc., so the final "billboard" does the work of a regularly rendered mesh. It's not as cheap as a plain solid sprite, but they integrate seamlessly, like meshes in disguise (literally impostors :grinning_face_with_smiling_eyes:). And since impostors are small and in the distance, rendered flat rather than across many triangles, that's not really an issue; they don't even need to interpolate the textures or mipmaps.

For the clusters I mentioned above, I use lower quality settings. You need to imagine how small objects get at a certain distance, where you can trade more error for more performance.

Yes, they cast and receive, but at what distance they become impostors depends on the asset; a lot are already outside the CSM range, where the volume ambient occlusion does the rough shadowing.

There are some morphing techniques too; it mostly becomes an issue when the two blended frames have different silhouettes. But even with visible popping, as long as it isn't a huge number of objects popping simultaneously at a certain angle, it isn't too bad. You could also prioritize the alpha discard value between the two frames around the axis, to avoid having two 50%-blended frames with completely different silhouettes.


@felixmariotto :+1:
Thanks for sharing that awesome article (Gamasutra has always been awesome, BTW) and opening this discussion, as performance optimization is my favorite subject!

While sprites and billboards have been well known for decades, generating them at runtime is an extremely useful concept and a reminder of the possibilities, especially for gigantic, complex-looking scenes (the kind I personally aim for). Every weapon in our arsenal is a valuable addition toward making a mission impossible …possible, with the element of illusion being the key here!


@dllb thank you for your interest!

I've worked on this a bit more and finally arrived at a satisfactory result (nowhere near @Fyrestar's implementation, I mean, but still stable and useful).

In the updated example I made to better illustrate the benefit of the technique, the load goes from ~6,500 render calls and ~45,000,000 triangles to ~200 render calls and ~150,000 triangles, plus ~3 impostor redraws per frame:


The heavier the models to impersonate (lots of distinct calls and triangles), the greater the benefit. The more static and far in the background, the better, too, since redraws are triggered when the camera angle has drifted too far from the last render angle.
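The redraw trigger boils down to an angle test like this (a sketch; the helper name and threshold are illustrative, not the exact code from the PR):

```javascript
// Re-snapshot an impostor only when the camera direction has drifted past a
// threshold angle since the last capture. Both direction vectors are assumed
// to be normalized {x, y, z} objects (e.g. from impostor position to camera).
function needsRedraw(lastDir, currentDir, thresholdRadians) {
  const dot =
    lastDir.x * currentDir.x +
    lastDir.y * currentDir.y +
    lastDir.z * currentDir.z;
  // Clamp to guard against floating-point drift outside acos's domain.
  const angle = Math.acos(Math.min(1, Math.max(-1, dot)));
  return angle > thresholdRadians;
}
```

Distant objects pass under the threshold for a long time as the camera moves, which is why only a handful of redraws happen per frame.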

I’ve filed a pull request, I hope it will be merged :crossed_fingers: :four_leaf_clover:


I hope I can use this soon! Oh, can I use it now? I think I can't use it yet :eyes:


I tried impostor.js on three.js r151 and I don't know how to fix it. On dev r151, is something wrong with the MeshBasicMaterial's transparent setting, or with the configuration of the WebGLRenderTarget's parameters?