BestPractice: JPG attached to model or Load WebP to Model?

Hi, I’m using Adobe Dimensions for my model making.

I can attach JPG to my models but not WebP files.

Is it worth attaching blank materials and adding/mapping WebP images at runtime?

The difference in size isn’t that big, but I plan on optimizing everything to the best of my abilities as this will be my CV.

For the curious:

Project Link



I’ve not used Adobe Dimensions, but using WebP files instead of JPGs won’t improve runtime performance; it would only improve the app’s download time (which is still useful), since WebP files are better compressed and therefore smaller. You also need to remember that WebP is still not supported in iOS Safari, I believe.

The reason is that these sorts of files are decompressed before they reach the GPU, so GPU memory usage is directly related to the image’s resolution. You’d need to use GPU-compressed textures to reduce memory; the .basis texture format, for example.
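As a rough back-of-envelope illustration (my own numbers, not from this thread): an uncompressed RGBA texture costs width × height × 4 bytes on the GPU, plus roughly a third more for mipmaps, no matter how small the JPG or WebP was on disk.

```javascript
// Rough GPU memory estimate for an uncompressed RGBA8 texture.
// Mipmaps add roughly 1/3 on top of the base level.
function textureMemoryBytes(width, height, withMipmaps = true) {
  const base = width * height * 4; // 4 bytes per pixel (RGBA8)
  return withMipmaps ? Math.floor((base * 4) / 3) : base;
}

// A 2048x2048 JPG that is a few hundred KB on disk still occupies
// over 21 MB of GPU memory once decompressed and mipmapped:
console.log(textureMemoryBytes(2048, 2048)); // 22369621
```

GPU-compressed formats like those Basis transcodes to stay compressed in GPU memory, which is where the real savings come from.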


Thank you,

Would it be best practice to use the Basis Universal software to convert images to .basis files,

then follow this to implement the three.js basis texture loader?

Yes, that’s the article I started to follow when I ran into GPU memory problems with my project, which loads many textures (.basis textures plus multi-texture optimisation techniques: array textures, ShaderMaterial, etc.).
It’s some effort to get right, though (I think you still need to be on a Mac to compile the Basis encoder), so it’s only worth doing if you really want or need to reduce GPU memory in your app. In the thread above I’m looking at texture atlases to improve runtime performance.

It’s important that your original images have power-of-two dimensions (e.g. 64x64, 256x512; they don’t need to be square, but both dimensions need to be a power of two) when you run them through the Basis encoder. The encoder would otherwise return a black image, even though three.js can handle non-power-of-two textures when loading JPGs etc. I’m not sure if Basis has fixed this, though.
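Checking and rounding dimensions before encoding is straightforward; this is a small helper of my own, not part of Basis:

```javascript
// Check whether a dimension is a power of two (bit trick: a power of
// two has exactly one set bit, so n & (n - 1) clears it to zero).
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

// Round a dimension up to the next power of two.
function nextPowerOfTwo(n) {
  return 2 ** Math.ceil(Math.log2(n));
}

console.log(isPowerOfTwo(256));   // true
console.log(isPowerOfTwo(300));   // false
console.log(nextPowerOfTwo(300)); // 512
```

You could use this to decide which images need resizing (e.g. in an image pipeline) before handing them to the encoder.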


Thanks for all your advice. I think I can be more efficient with my time if I stick to compressed JPGs.

I have encountered an unintentional pixel-sorting glitch when animating WebP files.

Your project looks amazing. Theoretically, could you have multiple image resolutions that are loaded depending on how near the camera is?

My thought process is that on one website I worked on, I loaded different image resolutions depending on the screen size, enabling the optimum speed/quality trade-off.
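That approach can be sketched as a small helper; the tier names and breakpoints here are mine, purely for illustration:

```javascript
// Pick an image resolution tier from the device's screen width.
function pickImageTier(screenWidth) {
  if (screenWidth <= 480) return "small";   // phones
  if (screenWidth <= 1280) return "medium"; // tablets / small laptops
  return "large";                           // desktops
}

// e.g. request `photo-${pickImageTier(window.innerWidth)}.jpg`
console.log(pickImageTier(390));  // "small"
console.log(pickImageTier(1920)); // "large"
```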

No worries.
I hadn’t heard about that WebP issue; I’ll bear it in mind, as WebP felt like it was still worth it over JPG.

One of the examples I took inspiration from uses low-res to high-res swapping as you get closer with the camera (Visualizing Image Fields). My project only has 1,200 images, so it’s kind of on the border of where something like this becomes worthwhile, but it’s all good learning at the end of the day.
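The core of that distance-based swapping can be sketched as a simple threshold check; the distances and sizes here are my own placeholders, not from that example:

```javascript
// Choose a texture resolution from the camera's distance to the object,
// swapping to higher-resolution versions only as the camera gets close.
function pickTextureSize(cameraDistance) {
  if (cameraDistance < 5) return 2048;  // close: full detail
  if (cameraDistance < 20) return 512;  // mid-range
  return 128;                           // far: thumbnail is enough
}

console.log(pickTextureSize(3));  // 2048
console.log(pickTextureSize(10)); // 512
console.log(pickTextureSize(50)); // 128
```

In practice you would call something like this each frame (or on a throttled interval) and only re-request a texture when the tier changes.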


So cool! If you go down this road, I recommend batch automation with Photoshop to generate the various file sizes.

Can you not use this…?

Seems to work OK on Windows…

Doesn’t the Basis GitHub repo explicitly say Basis is for texture compression, i.e. power-of-two images? I think it’s probably best to fix the image to be a power-of-two texture before compressing to Basis, no?


If you have a glTF model and want to have Basis textures embedded in it, you can use this tool. The textures will have a .ktx2 extension, but it’s still the same compression technology as .basis.

In most cases it’s necessary to use power-of-two textures with Basis in WebGL 1.0, and you will probably still want to do so in WebGL 2.0. The glTF-Transform CLI can do that for you with the --power-of-two flag if needed.


I have found the three.js optimisation guide and noticed Basis textures were on it.

Doing the list I have gone from 4fps to 60fps+

I’m using GLB; I was going to use glTF-Pipeline and then use DRACOLoader.

Are you suggesting a client-side glTF transform is better for efficiency?
Surely I should convert the files offline and then just use a loader for them?

Also, here’s the WebP animation glitch (pixel sorting) for anyone who’s interested.

Are you suggesting a client-side glTF transform is better for efficiency?

No, Draco and Basis compression should be applied offline, not on the client side. glTF-Transform has a CLI you can use to compress the textures, similar to using the glTF-Pipeline CLI to do Draco compression.


Thank you, my dyslexia read it as Client rather than Command Line Interface. Cheers, everyone.

Could you let me know how you find Draco compression? When using it for certain things, I couldn’t get a desirable compression quality: either the mesh would compress really nicely but the textures would mipmap in a really harsh, sharp way that didn’t look natural, or, in preserving the textures, the mesh would become heavily triangulated and not really usable for scanned-type models. For other, textureless scenes it worked amazingly, though, reducing a friend’s model from 60 MB to 5.5 MB with little to no noticeable artifacts.

Pretty valid dyslexic moment, though, haha. As far as I’m aware, Draco is decompressed/decoded client-side, hence having to import DRACOLoader and set the path to the decoder.


Thanks @forerunrun. I didn’t see that binaries repo when I was looking; I had access to a Mac, but this works well and saves me swapping.

I’m also interested to hear about the Draco approach. I used OpenCTM a long time ago when I was given high-res .OBJs to work with; it worked very well, but none of them were textured.


Draco won’t change the textures themselves, but it does compress the UVs, which might create the problem you’re seeing. The --draco.quantizeTexcoordBits option in glTF-Pipeline might improve that; try 16 instead of the default 12. Alternatively, older versions of Blender had a bug that caused UVs to export incorrectly when using Draco compression.
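To put rough numbers on why the bit depth matters (my own arithmetic, not from the glTF-Pipeline docs): quantizing UVs to N bits snaps each coordinate to one of 2^N steps across the texture, so the worst-case snap distance in texels shrinks as N grows.

```javascript
// Worst-case UV quantization error, in texels, for a given bit depth
// and texture size: the UV range [0,1] is divided into 2^bits steps.
function uvErrorInTexels(bits, textureSize) {
  const steps = 2 ** bits;    // number of representable UV values
  return textureSize / steps; // snap distance expressed in texels
}

// On a 4096-texel-wide texture:
console.log(uvErrorInTexels(12, 4096)); // 1       (12 bits: ~1 texel of drift)
console.log(uvErrorInTexels(16, 4096)); // 0.0625  (16 bits: 1/16 texel)
```

That one-texel drift at 12 bits could plausibly show up as the harsh, misaligned sampling described above, especially on small planes with tight UVs.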


I tested it with a few gltf models, this one in particular

It’s not Draco-compressed now, but if you look at the example, the images in frames (which are just four-vertex planes) looked fine, and the UVs were fine when the camera was close to them. From far away, though, the mipmap seemed pixelated and too abrasive on the eye. I couldn’t figure out why, but I will try again with your suggestion and let you know.

I used the command line to compress at the default level, just for information’s sake.


Hey, I’ve tried quite hard to get this to work. After installing it, it then said I needed to install KTX, and to install that I needed to install CMake to build KTX, all in order to run the glTF-Transform command to optimise the glTF/GLB file.

Is this right? And is it better than BasisU (apart from the option to fix images to a power of two)? Is it better than BasisU because BasisU only compresses images, whereas glTF-Transform compresses textures and, I guess, other things?

And gltf-pipeline is causing an error ( Error: THREE.DRACOLoader: Unexpected geometry type.)

Going to try BasisU next.

By far this was the easiest to implement.
Straight from the three.js tips and tricks guide (The Big List of three.js Tips and Tricks! | Discover three.js).

I ran into a problem with Mesh Optimizer (gltfpack):

Despite having “keep mesh data” turned on, it removes some materials, which is no good as I reference and animate materials consistently throughout my project, unless I refactor everything and re-add the materials. Oh, and it’s distorted a lot of my objects.

Hey, glTF-Pipeline will work; you just forgot to put the -d option at the end, after your output path.
It should look like this on the command line:

gltf-pipeline -i G:/path/to/get/model.gltf -o G:/path/to/output/draco.gltf -d

The -d at the end specifies Draco compression.