As far as I know, it is not possible to exchange individual textures in a THREE.DataArrayTexture. So far I have loaded huge textures into a THREE.DataArrayTexture at runtime and had no problems (over ten 2700 x 2700 textures). I use SharedArrayBuffers to communicate between the threads, and I can easily swap out the individual texture data sets before they go into the DataArrayTexture.
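For context, this is roughly what that setup looks like; a minimal sketch with illustrative sizes, assuming the WebGL implementation accepts SharedArrayBuffer-backed views for texture uploads (otherwise the layer data has to be copied into a regular typed array before upload):

```js
import * as THREE from 'three';

// Sketch: back a DataArrayTexture with a SharedArrayBuffer so that worker
// threads can fill individual layers while the main thread only flips
// needsUpdate. Sizes and layer count are illustrative.
const width = 2700, height = 2700, depth = 10;
const bytesPerLayer = width * height * 4; // RGBA, one byte per channel

const shared = new SharedArrayBuffer(bytesPerLayer * depth);
const data = new Uint8Array(shared);

const texture = new THREE.DataArrayTexture(data, width, height, depth);
texture.format = THREE.RGBAFormat;
texture.type = THREE.UnsignedByteType;

// ...workers write the decoded pixels for layer i at offset i * bytesPerLayer...

texture.needsUpdate = true; // re-uploads the whole array texture
```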
I think @makc3d once mentioned that a DataTexture hardly puts any strain on the main thread. The problem is creating the data for a DataTexture. However, I load and prepare the data in side threads so that the main thread remains free of this.
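The worker side of that is roughly the following; a sketch with my own naming and message format, not the actual code:

```js
// worker.js — illustrative sketch: decode an image off the main thread and
// write its pixels straight into the shared layer buffer, so the main thread
// never touches the image data itself.
self.onmessage = async ({ data: { url, sharedBuffer, layerOffset, width, height } }) => {
  const blob = await (await fetch(url)).blob();
  const bitmap = await createImageBitmap(blob);

  const canvas = new OffscreenCanvas(width, height);
  const ctx = canvas.getContext('2d');
  ctx.drawImage(bitmap, 0, 0, width, height);

  const pixels = ctx.getImageData(0, 0, width, height).data;
  new Uint8Array(sharedBuffer, layerOffset, pixels.length).set(pixels);

  self.postMessage({ layerOffset });
};
```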
I'll test how well this works with lots of small textures and a THREE.DataArrayTexture. So far I have only worked with a few but very large updates to a DataArrayTexture.
However, the fact that the entire DataArrayTexture is sent to the GPU with every update, instead of just the parts that need to be replaced, is certainly not optimal in terms of resources.
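For reference, this is what a single-layer upload looks like at the raw WebGL2 level; a sketch assuming direct access to the renderer's GL context and an already-allocated TEXTURE_2D_ARRAY, bypassing three.js entirely:

```js
// Sketch: update exactly one layer of an existing 2D array texture with raw
// WebGL2. Assumes `gl` is the WebGL2 context and `glTexture` was allocated
// beforehand (e.g. with gl.texStorage3D).
function updateLayer(gl, glTexture, layer, width, height, pixels) {
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, glTexture);
  gl.texSubImage3D(
    gl.TEXTURE_2D_ARRAY,
    0,                 // mip level
    0, 0, layer,       // x, y and layer offset
    width, height, 1,  // update a single layer
    gl.RGBA, gl.UNSIGNED_BYTE,
    pixels
  );
}
```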
Maybe WebGPU with compute shaders offers better options here too. I'll take a look at that.
P.S. I came across this. 256 textureArrays are used.
I don't remember that. Maybe in the context of text textures; I did advocate for using ImageData instead of creating a new HTMLCanvasElement for every texture.
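Something along these lines; a sketch of that idea from memory, not the original snippet:

```js
// Sketch: reuse one canvas for rasterizing text and hand the resulting
// ImageData to a DataTexture, instead of keeping a separate HTMLCanvasElement
// alive per texture.
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 256;
const ctx = canvas.getContext('2d');

function makeTextTexture(text) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#fff';
  ctx.font = '48px sans-serif';
  ctx.fillText(text, 16, 128);

  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // Note: unlike CanvasTexture, DataTexture does not flip Y by default.
  const texture = new THREE.DataTexture(imageData.data, canvas.width, canvas.height);
  texture.needsUpdate = true;
  return texture;
}
```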
Nice catch, thanks for that. Was able to reproduce.
I'll have a look at what's going on.
[EDIT]:
Turned on the pixel jitter function; seems fine now.
I believe this was happening because of undersampling in the usage pass. Basically, the technique relies on an extra pass to determine what tiles to load. That pass is done at a very low resolution, since we need to read pixel values on the CPU to count usage, and we don't want to have to read and traverse a ton of data on the CPU.
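The readback side of such a usage pass looks roughly like this; an illustrative sketch with made-up names (e.g. `tileIdMaterial`), not the actual implementation:

```js
// Sketch: render the scene with a "usage" material that encodes tile IDs into
// color, at a deliberately tiny resolution, then read it back and count tiles.
const usageTarget = new THREE.WebGLRenderTarget(64, 64);
const pixels = new Uint8Array(64 * 64 * 4);

function collectUsage(renderer, scene, camera, tileCounts) {
  scene.overrideMaterial = tileIdMaterial; // hypothetical material writing tile IDs
  renderer.setRenderTarget(usageTarget);
  renderer.render(scene, camera);
  renderer.setRenderTarget(null);
  scene.overrideMaterial = null;

  renderer.readRenderTargetPixels(usageTarget, 0, 0, 64, 64, pixels);
  for (let i = 0; i < pixels.length; i += 4) {
    const tileId = pixels[i] + (pixels[i + 1] << 8); // tile ID packed into R/G
    tileCounts.set(tileId, (tileCounts.get(tileId) || 0) + 1);
  }
}
```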
With compute shaders and atomics you can pretty much skip this extra pass and just collect usage statistics from your raw render buffers.
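As a rough illustration of what that could look like in WGSL (my assumption, not the actual implementation), the counting step reduces to one atomicAdd per pixel of a tile-ID buffer:

```js
// Rough WGSL sketch: a compute pass reads a full-resolution tile-ID buffer and
// bumps a per-tile counter with atomics, replacing the low-res usage pass and
// the CPU-side counting loop.
const usageWGSL = /* wgsl */ `
  @group(0) @binding(0) var tileIds : texture_2d<u32>;
  @group(0) @binding(1) var<storage, read_write> counts : array<atomic<u32>>;

  @compute @workgroup_size(8, 8)
  fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let dims = textureDimensions(tileIds);
    if (id.x >= dims.x || id.y >= dims.y) {
      return;
    }
    let tile = textureLoad(tileIds, vec2<i32>(id.xy), 0).r;
    atomicAdd(&counts[tile], 1u);
  }
`;
```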
Jitter basically moves the camera ever so slightly left and right to cover the region that a single pixel of this low-resolution pass covers in terms of screen pixels. It's a similar technique to temporal anti-aliasing (or temporal super-resolution). You end up with a bit of instability frame to frame, but as a trade-off you eventually get full coverage once all covered pixels have been "jittered" to. The aforementioned instability also leads to a bit of cache thrashing, because usage now changes frame to frame even if the camera/scene does not, just because we jitter the camera position. From my previous testing, however, the caches are robust enough to handle a bit of thrashing without significant performance impact.
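In three.js terms, the jitter itself can be as simple as nudging the camera's view offset by a sub-pixel amount each frame; an illustrative sketch similar to what TAARenderPass does, not the actual code:

```js
// Sketch: sub-pixel camera jitter for the low-resolution usage pass. Each frame
// the camera is offset by a fraction of a usage-pass pixel, so over a few
// frames every screen pixel inside that low-res pixel gets sampled.
const jitterPattern = [
  [0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75],
];
let frame = 0;

function jitterCamera(camera, usageWidth, usageHeight) {
  const [jx, jy] = jitterPattern[frame++ % jitterPattern.length];
  camera.setViewOffset(usageWidth, usageHeight, jx - 0.5, jy - 0.5, usageWidth, usageHeight);
}

// Remember to call camera.clearViewOffset() before the main render pass.
```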
Generally speaking, I don't think the "jitter" feature is worth the slight overhead it imposes, just because in most real applications the user will probably not be able to tell the difference. The render shader will always use the best tile it can, and with a dynamic scene your physical tile buffer will have pretty much perfect tiles at any given moment. I implemented the feature, tested it out a bit and just disabled it in the end when I made the demos.
PS:
Did I overengineer the heck out of this thing? - You bet!
I chose this asset because it's visually interesting and has a total of 16k worth of textures. It's also close to worst-case usage in terms of UVs; the asset has a ton of tiny UV islands with little to no spatial continuity.
Here's a 1k preview of the original texture, so you can see how chaotic it is:
If you zoom far enough out, you'll start to see UV seams; this is not a virtual-texture tech issue, but rather the bad UVs of the original texture. Here's what the model looks like in Blender:
Here's just plain three.js (using @donmccurdy's glTF viewer):
And here's Sketchfab:
Here's the virtual texture and the glTF viewer side by side, VT on the right:
Interestingly, if I try to load the non-virtual-texture version, Chrome keeps crashing half the time for me. I guess very few things are optimized to render 16k textures.
Blender also gave up the ghost when I tried to bake this texture, so I ended up using GIMP.
There is a limit on the canvas texture size. It's somewhere in the region of 16k, but I don't know exactly where. The car wreck looks very good. Loading the many textures also works very well for me, but I haven't tried to load them into the shaders yet because, to be honest, I still have no idea how to get a lot of textures into a shader efficiently.
Do you perhaps have a code snippet showing how to do that?
It seems like the ideal solution for our project, where we have to showcase substantial amounts of orthographic photography on GLB files (terrain generated from a LiDAR point cloud). Would you mind sharing your code or pointing me in the right direction? Your help would be greatly appreciated!
I appreciate the interest in my work, and thank you for the praise. The source is closed, and I don't plan on releasing it under any kind of permissive license.
If you would like to use this in your work, I'm open to licensing the tech, but not the code. If you are interested, please feel free to reach out to me in private.
Regarding building something like this from scratch yourself, I would recommend going through the various GDC publications on virtual textures; a fair number of those have come out over the years. There are also a number of GitHub repos with full or partial solutions, just not for the web, so you'd need to use that code as a reference.
I would recommend going through some of the other answers in this thread for more info as well.