I am interested in implementing 2D texture arrays. I’ve pretty much got the code part down (I think), but I am a little lost as to how to create the actual images.
Is this the only way to do this? If so, how would one create a file like this from a set of PNG’s?
Can raw images be downloaded separately and passed as an array in this line: var texture = new THREE.DataTexture2DArray( array, 256, 256, 109 );?
And on to the question that would allow me to test that: how does one create a raw image file from a PNG?
I tried saving as a 32-bit BMP (Photoshop will not let me save as 8-bit like the example), but when I looked at the saved file in Notepad++ it seemed to be a different format than the data from the three.js example.
I’ve never used the method employed by that example. I can’t even see what’s inside the .zip file because I get a binary file without an extension. Maybe you can reach out to the creator directly via Twitter (Divine Augustine) and ask how it was created?
I’ve had luck creating spritesheets in grid format with TexturePacker, which gives you lots of compression options. You could potentially use this via Texture.repeat and Texture.offset. Not sure if this applies to your case, but hopefully it helps you find a solution.
I’m actually moving away from the atlas method due to mipmap bleed issues on some models. Padding really isn’t an option when dealing with textures that take up a full grid slot.
At this point I’m really looking into how to convert a PNG into raw data that I can pass to a THREE.DataTexture object. I’m assuming… that I can use an array of these and pass it to THREE.DataTexture2DArray.
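One way to get from per-image pixel data to the single buffer a 2D texture array wants is just to concatenate the frames. A minimal sketch (my own helper name, not a three.js API): in a browser you would first obtain each PNG’s pixels by drawing it onto a canvas and reading ctx.getImageData(0, 0, w, h).data, then pack those per-frame arrays into one contiguous Uint8Array.

```javascript
// Pack per-frame RGBA pixel data into one contiguous Uint8Array, the layout
// a 2D texture array expects: depth slice i starts at byte i * frameSize.
// (Hypothetical helper; in a browser, each frame would come from a canvas
// via ctx.getImageData(0, 0, width, height).data.)
function packFrames(frames, width, height, channels) {
  const frameSize = width * height * channels;
  const packed = new Uint8Array(frameSize * frames.length);
  frames.forEach((frame, i) => {
    if (frame.length !== frameSize) {
      throw new Error('frame ' + i + ' has ' + frame.length + ' bytes, expected ' + frameSize);
    }
    packed.set(frame, i * frameSize);
  });
  return packed;
}
```

The result should then be usable roughly as new THREE.DataTexture2DArray( packed, width, height, frames.length ) — assuming the texture’s format matches the channel count you packed.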
I was looking for this and I got it working. Step by step, more or less:
1. Prepare the data.
2. Load the data.
3. Use the shader correctly.
For step 1: head256x256x109.zip is RAW data, 109 single-channel 256x256 frames, zipped. How did I get this data? I extracted frames from a video with ffmpeg, using fnExtractFrameToJPG (check the docs, it’s easy). After that I used jpeg-js and jpeg.decode to fill a buffer, which was then written to disk and zipped (just a usual zip, no fancy stuff). From there you can use the sample code and just change the dimensions, texture channels, and number of frames to what you want. The issue with this approach is that a 512x512 RGBA image will be ~1 MB. So I just zipped the image sequence and then used fflate to fill up a Uint8Array with it on the client side; instead of 5 MB, in my case I had a 500 KB zip. Also, be advised to pass a Uint8Array into the shader, not an ArrayBuffer.
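The client-side unzip step above could look roughly like this. Assuming fflate’s unzipSync, which maps file names to Uint8Arrays, the frames then need to be concatenated in order into the single Uint8Array the texture consumes (the helper name and file-naming scheme here are mine, not from the original sample):

```javascript
// After: const entries = fflate.unzipSync(zipBytes);
// Concatenate the per-frame entries (sorted by name, e.g. frame_000.raw,
// frame_001.raw, ...) into one Uint8Array for the 2D texture array.
function concatZipEntries(entries) {
  const names = Object.keys(entries).sort();
  const total = names.reduce((sum, n) => sum + entries[n].length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const name of names) {
    out.set(entries[name], offset);
    offset += entries[name].length;
  }
  return out;
}
```

This relies on zero-padded frame names so that a plain lexicographic sort matches frame order.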
I attached a slightly messy Node.js code sample that I used to extract the data from the video.
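The attached sample isn’t reproduced here, but the core conversion it would need can be sketched. jpeg-js’s decode() yields RGBA pixel data ({ width, height, data }), while a dataset like head256x256x109 stores one channel per pixel, so each decoded frame has to be reduced to a single channel (hypothetical helper; for grayscale input R, G, and B are equal, so keeping R is enough):

```javascript
// Reduce decoded RGBA pixels (from jpeg-js decode().data) to one byte per
// pixel by keeping the red channel — valid when the source is grayscale.
function rgbaToSingleChannel(rgba) {
  const out = new Uint8Array(rgba.length / 4);
  for (let i = 0; i < out.length; i++) {
    out[i] = rgba[i * 4];
  }
  return out;
}
```

The per-frame outputs would then be written to disk and zipped, as described in the previous post.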