The idea is to make a custom loader that runs in parallel, because JS parsing is quite a heavy thing. But when I tried simply placing the picture in the DOM, I didn't notice any significant improvement, just a change in loading order.
Here are two ways of loading via the DOM that I tried:
if ( img.complete ) {
    customLoader( img );
} else {
    img.onload = function () { customLoader( img ); };
}

function customLoader( img ) {
    var map = new THREE.Texture( img );
    map.needsUpdate = true;
    material.map = map;
    material.color = null;
    material.needsUpdate = true;
}
// here is another way
async function customLoader2( img ) {
    var bitmap = await createImageBitmap( img, { imageOrientation: 'flipY' } );
    var map = new THREE.CanvasTexture( bitmap );
    material.map = map;
    material.color = null;
    material.needsUpdate = true;
}
Where is the bottleneck here? Why is the loading waterfall better when these pictures are simply placed in the DOM, without WebGL?
So you mean this loader would run in a Web Worker?
TBH, I don’t fully understand the issue, but the actual loading process is not the problem. If you are loading resources from the backend, the browser automatically handles these requests in parallel. The real overhead in the context of textures is the decode overhead, and that happens when the texture is uploaded to the GPU (meaning when WebGLRenderingContext.texImage2D() is called).
Using ImageBitmap is more performant than normal image elements since the decode happens in the background (in a separate browser thread).
I don’t think you are interpreting the Lighthouse results correctly. When starting an arbitrary three.js example on my computer or smartphone (hosted from my local dev-server), it is more or less instantly ready. Even on older smartphones the parsing time of the library is definitely not 2 seconds.
We also need to download the library first and only then parse it. All of that is idle pipeline time before we even start downloading resources, even if you could parse it in 1 nanosecond on a supercomputer.
var lala = new Image(1, 1);
lala.src = "https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/textures/uv_grid_directx.jpg";
You can put this at the very top of your HTML, before all of your Three.js code and bundled JS so the images start loading immediately.
<head>
  <script>
    var lala = new Image(1, 1);
    lala.src = "https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/textures/uv_grid_directx.jpg";
  </script>
</head>
This way, when your JS bundle is finally downloaded, parsed, and ready to issue the image requests, the images will either already be in your browser’s cache or in the process of being downloaded. Might save you a few precious milliseconds!
This would fall into the “avoid chaining critical requests” suggestion in your profiler screenshot.
Have you looked into asset prefetch? You can read more about it here:
The goal of the spec is for a page to let the browser know that something will be needed or requested soon. So presumably you could prefetch your glTF or images in an HTML <link> tag or response header so they’re loaded while the JavaScript is being downloaded and parsed, too.
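As a sketch, such a hint could look like the following in the document head (the file paths here are placeholders, not from this thread):

```
<link rel="prefetch" href="models/scene.gltf">
<link rel="prefetch" href="textures/uv_grid.jpg">
```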
Other than Apple it looks like it’s pretty generally supported:
I mean all the examples in the repo with the huge three.js dependency. The current workflow is not good enough, but your suggestion looks like a smart hack; I need to benchmark it.
When I added the images to the DOM I still got a waterfall instead of parallel downloads. I’m just wondering whether there is a better way to do it? It looks like something is totally wrong with this JS world.
Devtools show that we are downloading resources in the last step, which is definitely not the best case since HTTP/2 supports parallel requests, and ordinary pages are faster than WebGL applications.
Yep, tried this too, but it downloaded the texture twice. The main reason I am not sure about my measurements is that I cannot use performance.now()/Date.now() and have never tried other benchmarking APIs.
Yep, tried this too, but downloaded a texture twice.
There must be something else going on, then, because the problem you’re describing is the whole point of the prefetch feature. Are you sure the second request doesn’t represent reading from cache or decoding an image? I’m also not sure how the feature interacts with other server response headers that could tell the browser not to cache a request’s data, so it could depend on your server setup, too.
It might be easier to discuss if you have a dead-simple example demonstrating the problem – such as a page with one script that spins in a while loop for 2 seconds or so at the start and then requests the content.
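A minimal repro along those lines might look like this (the texture file name is a placeholder):

```
<link rel="prefetch" href="texture.jpg">
<script>
  // Simulate ~2 seconds of script parse/execute time
  var start = performance.now();
  while ( performance.now() - start < 2000 ) { /* busy-wait */ }
  // Now request the content; check devtools whether it re-downloads
  var img = new Image();
  img.src = 'texture.jpg';
</script>
```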
OK, I just needed to change ‘prefetch’ to ‘preload’; after that it downloaded only once.
Problem solved.
Also, preload has better support across devices: https://caniuse.com/#feat=link-rel-preload
IMHO we should always use that tag in WebGL apps. Note that you won’t notice any significant improvement on a local server.
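For reference, the preload variant of the hint would be something like this (path is a placeholder; the `as` attribute is required for preload):

```
<link rel="preload" href="textures/uv_grid.jpg" as="image">
```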
Also, I added a 4× minified version of the texture to the preload and got a fantastic result!
But after that I ran into the problem that WebGLRenderingContext.texImage2D() is synchronous; the page freezes while the full texture is uploaded. But this topic goes deeper, and can only be solved if the client browser supports OffscreenCanvas and we run two three.js instances.
Oh interesting, I had no idea there were both preload and prefetch! I think I’d read about preload in the past. For others interested, here’s an article about preload: rel=preload - HTML: HyperText Markup Language | MDN
What do you mean by “added minified in x4 version in preload”? Do you mean you’re using GZip?
I was running into this with the 3DTilesRenderer I’m working on. I had some luck using ImageBitmap and ImageBitmapLoader but there are some quirks across browsers to be aware of. When it’s available it can improve texture processing / upload blocking time by quite a bit.
This trick is very application-specific, but pretty simple. Since we don’t have progressive loading for textures in hardware, we can do it in software: make two versions and download the minified one first (satellite view in Google Maps uses the same technique).
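The two-resolution idea could be sketched as a small helper like the one below. The names here are my own, not a three.js API; `loadTexture` stands in for any promise-based loader (e.g. a promisified THREE.TextureLoader), and `apply` is whatever assigns the texture to your material.

```javascript
// Kick off both downloads in parallel, show the low-res texture as soon as
// it arrives, and swap in the full-resolution one when it lands.
function progressiveLoad( loadTexture, lowUrl, highUrl, apply ) {
  let highResDone = false;
  const high = loadTexture( highUrl ).then( ( tex ) => {
    highResDone = true;
    apply( tex );
  } );
  const low = loadTexture( lowUrl ).then( ( tex ) => {
    // Skip the low-res version if the full one already arrived.
    if ( ! highResDone ) apply( tex );
  } );
  return Promise.all( [ low, high ] );
}
```

Since both requests start immediately, the low-res version never delays the full one; it just fills the gap while the big download is in flight.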