Is there a tool to optimize keyframes in glTF?

I know about DRACO. But I’m wondering if there is a tool that would make the .glb files smaller, specifically by optimizing the animation tracks. Typically I have a character with a bunch of different actions modeled in Blender (walk, run, stand, talk, etc.), and my Blender frame rate is set to 60fps because I can animate more smoothly that way. A lot of the time when I create a keyframe in Blender, I’ll select a bunch of bones and insert a keyframe for both location and rotation. From a playback perspective, though, it shouldn’t need all those keyframes: often a bone rotates but doesn’t change position, or vice versa. But it’s very difficult in Blender to edit an animation so that only the absolute minimum number of keyframes gets inserted. So I end up with megabytes of animation data in my character file.

So I was thinking: could some offline tool identify redundant keyframes and remove them? Similar to the way SVG optimizers work, where the decimal precision of <path> vertices can be reduced as long as the visible effect is smaller than some fraction of a pixel, a keyframe could be dropped if doing so changes the animation curve by less than some threshold.

Right now I have a gulp pipeline that processes all of the exported .glbs from the ‘source’ directory and writes them to the ‘asset’ directory after optimization (using gltf-transform as a gulp function). So it would be relatively easy to insert additional optimizations if such existed.

(BTW I dislike gulp, but all of the other node.js-based build/make tools I have looked at are specialized towards generating optimized JavaScript bundles as their main output type - which is the one thing I am not doing in this case; I’m just processing art assets.)

Since you’re already using gltf-transform, did you try the resample tool?

resample
Resample animations, losslessly deduplicating keyframes.

BTW I dislike gulp, but all of the other node.js-based build/make tools I have looked at are specialized towards generating optimized JavaScript bundles as their main output type - which is the one thing I am not doing in this case, I’m just processing art assets

You don’t have to use any build tools, gltf-transform is self-contained. @donmccurdy was kind enough to share this gist with me a while ago - it should get you started using gltf-transform without build tools.

The goal here was to apply draco if the model is above a certain size, and quantize otherwise. But you should be able to adjust it for other transforms without too much trouble.
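Not the gist itself, but here is a minimal sketch of that idea using gltf-transform’s scripting API. The file names and the 1 MB cutoff are illustrative, and the exact I/O and Draco-encoder registration details depend on your gltf-transform version (a recent v3-style API is assumed here):

```ts
import { statSync } from 'fs';
import { NodeIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';
import { draco, quantize } from '@gltf-transform/functions';
import draco3d from 'draco3dgltf';

// Draco compression requires registering an encoder with the I/O class.
const io = new NodeIO()
  .registerExtensions(ALL_EXTENSIONS)
  .registerDependencies({ 'draco3d.encoder': await draco3d.createEncoderModule() });

const SIZE_THRESHOLD = 1_000_000; // bytes; illustrative cutoff

const document = await io.read('source/model.glb');

// Heavier models get Draco compression; lighter ones are just quantized.
if (statSync('source/model.glb').size > SIZE_THRESHOLD) {
  await document.transform(draco());
} else {
  await document.transform(quantize());
}

await io.write('assets/model.glb', document);
```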


Yeah, the resample function (either in a script or via CLI) should get rid of the redundant keyframes for you. The one exception I’m aware of is morph target keyframes, which it currently skips.

If you want to go a bit further, meshopt compression applies to geometry and animation data. The gltfpack tool will add this (and lots of other optimizations) to glTF files; I’m working on making meshopt compression available through glTF-Transform as well (EXT_meshopt_compression · Issue #106 · donmccurdy/glTF-Transform · GitHub).
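For reference, a typical gltfpack invocation looks something like the following (the -cc flag requests the higher-ratio meshopt compression mode; check gltfpack -h for the current options):

```
$ gltfpack -i model.glb -o model-packed.glb -cc
```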

OK, I added the resample function - unfortunately it doesn’t appear to have much, if any, effect on the output file size. Maybe I am doing something wrong… dedup(), on the other hand, does have a small positive impact.

My build script does a lot of things, for example the tree models that are saved from the tree growth simulator are combined into a single .glb file, one per biome type, so that I don’t end up loading a bunch of separate assets for each biome.

The reason for using a build tool is that I have a lot of files to process, and I don’t want to have to maintain a giant shell script. I also get file watching, so that when I hit save in Blender, only the model files that changed get reprocessed, rather than having to re-convert everything. That part of gulp is nice - the part I don’t like is the ancient API based on Node streams and events; all of that async logic dates from a time before promises.

For watching files you could use chokidar; it’s quite simple to use.
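A minimal sketch (the rebuild callback is hypothetical):

```ts
import chokidar from 'chokidar';

// Watch the source directory and reprocess any .glb that changes.
chokidar.watch('source/**/*.glb').on('change', (path) => {
  console.log(`Rebuilding ${path}…`);
  // rebuildAsset(path); // hypothetical: re-run the gltf-transform steps
});
```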

My experience in migrating build scripts from Gulp to pure Node is that things usually end up simpler, but you do end up using a few small tools like chokidar. On the other hand, if you’re already set up with Gulp and it’s working for you then there’s no major reason to change.

If you can share a sample file you expect to contain duplicate keyframes, I’ll take a look. The interpolation would need to be linear or discrete/step, which should be what Blender’s baked keyframes are using anyway. dedup is probably consolidating the identical keyframe times (but not the duplicate values) here.

viridia/gltf-models (github.com)

I can upload the original .blend files as well if needed.

It would also help if I understood the process a bit better. For example, when I have an IK bone that is controlling a limb, does Blender export the keyframes for the IK bone, for the bones that are controlled by the IK, or both? I’m assuming that three.js is going to ignore the IK bones, since there are no vertices whose weights reference those bones. The question is whether I am paying the cost for those keyframes or not. If so, is there a way I could remove all bones with “IK” in the name?

Similarly, the IK bones are locked in a fixed orientation - they only change position, not rotation - but the very first keyframe snapshots both the position and rotation of every bone, even bones that never rotate.

Also, I can imagine that in some cases a keyframe could be dropped even if it is not, strictly speaking, a “duplicate” - for example, if there is a linear interpolation path between keyframe 1 and keyframe 3, and keyframe 2 sits right in the middle on that path, then dropping KF 2 would not change the shape of the path.
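In code, the test I have in mind looks something like this (a sketch with scalar values for simplicity; a real check would compare vectors and quaternions component-wise, with a tolerance suited to the units):

```ts
// Keyframe (t2, v2) is redundant between (t1, v1) and (t3, v3) if linearly
// interpolating the outer keyframes at time t2 reproduces v2 within a tolerance.
function isRedundant(
  t1: number, v1: number,
  t2: number, v2: number,
  t3: number, v3: number,
  tolerance = 1e-4,
): boolean {
  const alpha = (t2 - t1) / (t3 - t1);
  const interpolated = v1 + alpha * (v3 - v1);
  return Math.abs(interpolated - v2) <= tolerance;
}
```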

Part of why I am so focused on optimization is that I’m building a huge world with dynamic loading - the engine loads scenery and characters in the background as you travel around, so I’m trying to minimize the amount of network i/o. (The engine even pre-loads content accessible via portals based on the distance from the camera location to the portal - the closer you get to the portal, the more it loads.)

In my test the resample step is getting a ~45% reduction:

$ gltf-transform resample ./humanoid-male.glb ./humanoid-male-resampled.glb
info: humanoid-male.glb (2.7 MB) → humanoid-male-resampled.glb (1.52 MB)

$ gltf-transform dedup ./humanoid-male-resampled.glb ./humanoid-male-resampled.glb
info: humanoid-male-resampled.glb (1.52 MB) → humanoid-male-resampled.glb (1.47 MB)

If so, is there a way I could remove all bones with “IK” in the name?

I think the “Export Deformation Bones Only” option in Blender would let you skip these; otherwise three.js will still be animating them even though no vertices are influenced.
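If you’d rather strip them after export instead, a sketch along these lines should work, using glTF-Transform’s animation API (recent async I/O shown; disposing a channel removes it from its animation, and a follow-up prune() should then drop any samplers and accessors left unused):

```ts
import { NodeIO } from '@gltf-transform/core';
import { prune } from '@gltf-transform/functions';

const io = new NodeIO();
const document = await io.read('humanoid-male.glb');

// Drop animation channels that target bones with "IK" in the name.
for (const animation of document.getRoot().listAnimations()) {
  for (const channel of animation.listChannels()) {
    const target = channel.getTargetNode();
    if (target && target.getName().includes('IK')) {
      channel.dispose();
    }
  }
}

await document.transform(prune());
await io.write('humanoid-male-pruned.glb', document);
```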

for example if there is a linear interpolation path between keyframe 1 and keyframe 3, and keyframe 2 is right in the middle sitting on the path

This should be detected, yes. :+1:


All right, I got that working, but I have a few questions.

I’m getting the following warning on the console:

resample: Resampling required copying accessors, some of which may be duplicates. Consider using "dedup" to consolidate any duplicates.

However, I am using dedup():

const gltf = io.readBinary(buffer.buffer);
await gltf.transform(resample(), dedup());
return Buffer.from(io.writeBinary(gltf));

Also, for some reason when I tried running this as a Gulp task, adding resample() actually made the output file bigger. I figured that this was probably some error having to do with Gulp streams - so I sat down and, over the course of two days, wrote my own task runner (which I don’t have a good name for yet). I wanted something like Gulp, except (a) based on promises and modern Node APIs, and (b) with build definition files that are strongly typed - i.e. written in TypeScript.

Here’s a sample rule that uses glTF-Transform:

target(
  'characters',
  directory(srcBase, 'characters')
    .match('*.glb')
    .map(src =>
      src
        .transformAsync(async buffer => {
          const gltf = io.readBinary(buffer.buffer);
          await gltf.transform(resample(), dedup());
          return Buffer.from(io.writeBinary(gltf));
        })
        .pipe(write({ base: dstBase }))
    )
);

That’s OK, the resample() function just doesn’t know about the following stages and warns anyway… I should likely just automatically remove those duplicates, but it can be very slow on large models…

Also, for some reason when I tried running this as a Gulp task, adding resample() actually made the output file bigger

Hmm I don’t know why running as a Gulp task would change data… a simple read/write without any transform doesn’t have the same effect?

Correct. I believe it may have something to do with the fact that everything in Gulp is a Node.js stream; there’s no way to just load a file by name and write it back out again.

Is there any way to suppress that message? It messes up my nice clean terminal output :slight_smile:

You can override the logger with a different verbosity (or a custom logger implementation):

import { Logger } from '@gltf-transform/core';

const logger = new Logger(Logger.Verbosity.ERROR);

// (a) applies to everything read or written by this I/O instance
io.setLogger(logger);

// or (b) applies to transforms on this particular document
document.setLogger(logger);

Logger class documentation


So, on a somewhat tangential note: I’ve gone ahead and published my task runner (the one I used to run glTF-Transform). I’m calling it “overrun”: GitHub - viridia/overrun: Overrun is a framework for setting up asset pipelines in Node.js.

One of the example build configurations shows how to optimize .glb files using the technique you posted previously: Build configuration files | overrun


Awesome! I’ll give this a try sometime here, it looks like a good setup.

const gltf = io.readBinary(srcBuffer.buffer);

I wonder if this might be related to the issues with Gulp? At least when using fs APIs, the Buffer returned is often just a partial view into a larger ArrayBuffer, so you have to do srcBuffer.buffer.slice(srcBuffer.byteOffset, srcBuffer.byteOffset + srcBuffer.byteLength). I had to do that often enough that gltf-transform has a BufferUtils.trim(...) helper for it.
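Spelled out (same names as the snippet above; the slice bounds are the important part):

```ts
// A Node Buffer may be a view into a larger shared ArrayBuffer, so slice
// out exactly the bytes belonging to this file before parsing it.
const arrayBuffer = srcBuffer.buffer.slice(
  srcBuffer.byteOffset,
  srcBuffer.byteOffset + srcBuffer.byteLength
);
const gltf = io.readBinary(arrayBuffer);
```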