So I found a cool example - http://meetar.github.io/threejs-shader-demos/llama.html - that uses a GIF as a displacement map. However, it uses THREE.js r58. I could tell because it has the line geometry.computeTangents(), which was removed from THREE.js a long time ago.
I’ve opened a question on stack overflow that explains this in more detail:
I’ve opened a question on Stack Overflow that explains this in more detail:
In r85 there’s no need to explicitly set up the uniforms. I only had to import the GIF, create the material, then add the gifCanvas as a displacementMap:
```js
// Decode the GIF with libgif-js (SuperGif); it draws frames into a canvas.
var supGif = new SuperGif( { gif: document.getElementById( 'gif1' ) } );
supGif.load();
var gifCanvas = supGif.get_canvas();

// ...

// Use that canvas as both the color map and the displacement map.
material = new THREE.MeshStandardMaterial();
material.map = new THREE.Texture( gifCanvas );
material.displacementMap = material.map;
```
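To actually see the animation, the canvas texture also has to be re-uploaded every frame. A minimal render-loop sketch, assuming the usual renderer, scene and camera from the rest of the app:

```js
function animate() {
	requestAnimationFrame( animate );

	// libgif-js keeps drawing the current GIF frame into gifCanvas,
	// so re-upload the texture each frame; map and displacementMap
	// share the same texture object here, so one flag is enough.
	material.map.needsUpdate = true;

	renderer.render( scene, camera );
}
animate();
```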
The GIF loader code I’m using has that annoying progress loading bar and is slower than I’d like. I just found some other code that A-Frame actually uses that is supposed to be faster; when I get a chance I’ll refactor.
Is there a gist or a sample that shows how to use this code with an existing scene & GIF? I have an existing ThreeJS app, and I’d like to add an animated GIF to it using the nunuStudio file listed in this thread to prepare the frame map from a GIF.
Note: if there’s a newer technique for playing a GIF’s frames as a displacement map in Three.JS, please let me know.
A GIF isn’t necessarily encoded with full frames; optimized GIFs have partial frames containing only the changed areas. This is why a canvas is needed as a buffer to compose the full frames.
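Purely to illustrate the composition step (with a made-up decoder that hands back each frame as a small patch canvas plus its position - not a real library API):

```js
var buffer = document.createElement( 'canvas' );
buffer.width = gifWidth;    // assumed known from the GIF header
buffer.height = gifHeight;
var ctx = buffer.getContext( '2d' );

function composeFrame( patch ) {
	// drawImage blends the patch over what earlier frames left behind,
	// so transparent pixels keep the previous content ("do not dispose").
	ctx.drawImage( patch.canvas, patch.left, patch.top );
	return buffer; // the buffer now holds the full composed frame
}
```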
Thanks Fyrestar. Have you seen a sample that shows the right way to pull and compose frames from a GIF on a canvas? If I can figure that out, I can iterate over the GIF’s frames on a working canvas, use that canvas to make the latest texture, and then assign that texture to a face of the mesh I’ll use in my ThreeJS scene to display the frames. I would handle the frame timing myself.
Set myTexture.needsUpdate to true in your render loop. For performance reasons this could be optimized with an atlas, depending on the size of the GIF, like the sprite sheets typically used for games, but that’s a bit more complicated and is limited by the number of frames and resolution - so to keep it simple and predictable, just use a canvas you update per frame.
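Very roughly, the atlas version would look something like this - a sketch only, assuming all composed frames have already been drawn into one tall atlasCanvas, stacked vertically, with frameCount frames. Playback then just shifts the texture offset instead of re-uploading pixels every frame:

```js
var atlasTexture = new THREE.Texture( atlasCanvas );
atlasTexture.minFilter = THREE.LinearFilter;   // no mipmaps, avoids PO2 resizing
atlasTexture.needsUpdate = true;               // uploaded once, not per frame
atlasTexture.repeat.set( 1, 1 / frameCount );  // show one frame at a time

function showFrame( i ) {
	// frame 0 sits at the top of the atlas; move the visible window to frame i
	atlasTexture.offset.y = 1 - ( i + 1 ) / frameCount;
}
```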
Edit: unless I missed something, it seems in the nunu example a GIF is used directly as the image. I never tested this, as I only remember that it wasn’t supported before. In case this works and you don’t need control over its speed, pause, loop, etc., it could work as a regular texture directly too, disabling mipmaps and setting needsUpdate to true just like with a canvas texture. Browser compatibility should be tested first, too.
Edit 2: I tested it and it doesn’t seem to work. You’ll also get an issue with non-PO2 images; the player should ideally be able to render at a given PO2 size as well. You could also just convert the GIF to a video instead, if that works for you.
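If converting to a video works for you, the texture side is simple - a sketch, assuming a video element with id 'vid1' and the same material as above:

```js
var video = document.getElementById( 'vid1' );
video.muted = true;   // most browsers require muted for autoplay
video.loop = true;
video.play();

var videoTexture = new THREE.VideoTexture( video );
videoTexture.minFilter = THREE.LinearFilter; // no mipmaps, so non-PO2 sizes are fine
videoTexture.magFilter = THREE.LinearFilter;

material.map = videoTexture;
material.displacementMap = videoTexture;
```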
Thanks Fyrestar! What terms would I search the Web with to read up on what an “atlas” is? That’s a new term for me.
The GIF in my case is 32 MB. I exported it with Photoshop’s legacy Save for Web export, which is supposed to do a good job of optimizing the size of the GIF. I’m not claiming that’s actually true; I don’t have the experience to make such a claim. I’m just telling you what my context is.
Ok, I was worried about that. I assume you’re referring to the size. The problem is that the device I’m targeting, at least inside the host software my app has to run in, can’t render videos due to a current bug in the platform. That’s how I ended up trying to use an animated GIF as a movie.
NOTE: I just set up a simple test using the GIF with libgif-js, outside of Three.JS, and it is choking on it. I’ll have to wait for the host platform to fix the video bug. The underlying hardware is a typical lightweight Android device.
I’ll have to assume at this point that the native video players that run on those underpowered devices are insanely optimized, perhaps with custom assembler or at least C/C++ code written close to the CPU/GPU architecture. So these devices can run video, but only if someone writes a heavily optimized software module to do it.