How to extrude depth (thickness) of alpha video texture?

I wonder if anyone knows an efficient way to add depth to an alpha video texture?

This way an animated sprite would have volume in the scene, instead of being just a plane.

Creating ExtrudeGeometry every frame seems inefficient.

I saw someone asked about this here too, but there was no solution.


A lot of the following depends on whether the video of the animated sprite is known ahead of time or is an ad hoc video, and how big it is.

Have you looked at morph targets? I’ve never used them myself. Sounds like you’d have to pre-compute the morphing for any sprite animation, but it might be worth a look.

Another option is adding depth maps for the animated sprite. This is similar to a morph target.

Or convert the sprite into a point cloud, where each pixel is offset based on its color or a depth map. This might look cool from a side view.
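To sketch that point-cloud idea: the z-offset per pixel could come from luminance when no depth map exists. This is a hedged example, not code from the thread; `depthOffsets` and `maxDepth` are hypothetical names, and `rgba` is assumed to be flat RGBA data as returned by `ctx.getImageData(...).data`. The offsets could then feed the positions of a `THREE.Points` cloud.

```javascript
// Hypothetical helper: derive a per-pixel z-offset from luminance.
// rgba: flat RGBA pixel data (4 bytes per pixel), maxDepth: offset at full white.
function depthOffsets(rgba, maxDepth) {
  const offsets = [];
  for (let i = 0; i < rgba.length; i += 4) {
    if (rgba[i + 3] === 0) {
      offsets.push(0); // fully transparent pixel: no depth
      continue;
    }
    // perceptual luminance in [0, 1] (Rec. 709 weights)
    const lum =
      (0.2126 * rgba[i] + 0.7152 * rgba[i + 1] + 0.0722 * rgba[i + 2]) / 255;
    offsets.push(lum * maxDepth);
  }
  return offsets;
}
```

A white opaque pixel gets the full `maxDepth` offset; transparent or black pixels stay at 0.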

I want it more like an extrusion of the outline. Yes, I can get a crude outline from the video for each frame, in a 2D canvas or as a Three shape. I just need an efficient way to create a geometry from that, at close to 30 fps.

This type of extrusion:


What about an instanced mesh of small boxes, one per pixel? The instanced mesh count would be width * height. Each frame, you scan all the pixels: anything transparent gets a scale of 0 (to hide it); anything else appears as a box. I’ve done this before, but not for something animated or transparent. If you can share your image, I can put together a demo.

The benefit of this approach is you’re not creating any geometry beyond the original box. You’re just scaling instances of it, which is very efficient.

The alternative is to write a shader to do basically the same thing, but on the GPU.

I tried my suggestion. Setting scale to 0 doesn’t hide an instance, and there’s no way to set opacity for individual instances to make them transparent. So instanced meshes aren’t a solution.

You could preallocate instances for the whole x*y grid, and then just set the .count of the InstancedMesh to the number that’s actually being used per frame.

That’s a really good idea. The instanced mesh only includes pixels that have color. While scanning the image, build a data structure of the pixels with color. Once that’s known, create the instanced mesh with the correct count and set the transforms and colors.
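The scan step described above might look like this. A minimal sketch, assuming `rgba` is flat RGBA data from `ctx.getImageData(...).data`; `visiblePixels` is a hypothetical name. It collects only the pixels that have color, so `pixels.length` can later become the instance count.

```javascript
// Hypothetical scan: collect only non-transparent pixels with their
// grid coordinates and color, for later use as instance transforms/colors.
function visiblePixels(rgba, width) {
  const pixels = [];
  for (let i = 0; i < rgba.length; i += 4) {
    if (rgba[i + 3] === 0) continue; // skip fully transparent pixels
    const p = i / 4; // linear pixel index
    pixels.push({
      x: p % width,
      y: Math.floor(p / width),
      r: rgba[i],
      g: rgba[i + 1],
      b: rgba[i + 2],
    });
  }
  return pixels;
}
```

For a video texture you would redo this scan each frame after drawing the current video frame into an offscreen canvas.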

Another option is to just draw the “transparent” pixels well away from the scene, near the edge of infinity, so they’re never seen.

I’ll take another whack at this.

Yeah, to be more clear… you can adjust .count on the fly, up to the amount you set initially when you create the InstancedMesh. So for a 64x64 bitmap, you could make a single 64*64-count InstancedMesh; then, when the frame changes, write the positions of only the visible pixels and set instancedMesh.count to that count.
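A sketch of that per-frame update, writing directly into the mesh’s `instanceMatrix` buffer (in three.js, `mesh.instanceMatrix.array` is a `Float32Array` holding 16 floats per instance, column-major). The function name `writeInstances` and the `pixels` shape (`{ x, y }` per visible pixel, e.g. from a scan) are assumptions, not code from the thread; after calling it you would set `mesh.instanceMatrix.needsUpdate = true` and assign the return value to `mesh.count`.

```javascript
// Hypothetical per-frame update: write one translation-only 4x4 matrix
// per visible pixel into the preallocated instanceMatrix buffer.
// Returns the number of visible instances (to assign to mesh.count).
function writeInstances(matrixArray, pixels) {
  for (let i = 0; i < pixels.length; i++) {
    const o = i * 16;
    matrixArray.fill(0, o, o + 16);
    // identity diagonal: unit scale, no rotation
    matrixArray[o] = 1;      // m11
    matrixArray[o + 5] = 1;  // m22
    matrixArray[o + 10] = 1; // m33
    matrixArray[o + 15] = 1; // m44
    // translation lives in the last column (column-major layout)
    matrixArray[o + 12] = pixels[i].x;
    matrixArray[o + 13] = -pixels[i].y; // flip y so image rows go downward
    matrixArray[o + 14] = 0;
  }
  return pixels.length;
}
```

Because the buffer is preallocated at the full width*height size, no geometry is ever created or destroyed per frame; only the first `count` matrices are read by the renderer.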


Hmm, I was unable to instance a video texture plane >_<
No effect; it just draws the first one.

The idea isn’t to instance the video texture plane…

It’s to instance a bunch of stretched cubes… one for each visible pixel in your bitmap.

Here’s some working code. three-instanced-mesh-transparent - CodeSandbox

You didn’t share your image, so I just used a small 128x128 image that I had


yesss very nice.