[Image] The example code in Chrome
[Image] The example code in Firefox
Over the last month or so, I’ve been working to figure out a bug in my sky code. From a 50,000 ft level, I’m taking texels from a cube map and using those colors to look up information in a 32-bit floating point texture. However, the rows I’m getting back are slightly off. In this latest version, I’ve tried everything to get the code to align, but nothing works; the Firefox image above is the closest I can get. For context, there are two textures I’m looking up coordinates in. One is conveniently 128x64 and the other is 32x64 pixels. As a consequence, I can store both pairs of x-y coordinates in 24 bits total (7+6+6+5), which fits nicely into the RGB channels of a cubemap.
The hardest part is getting these x,y coordinates back out on the other side using simple math, because (since I’m using A-Frame) WebGL 1.0 lacks both texture size queries and, especially, bitwise operators. But I should still be able to recover all of my packed integer values without bitshifts, so it shouldn’t be a big problem, just a few multiplications and floors.
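To make that concrete, here is a minimal sketch (in Python rather than GLSL, so it is easy to poke at) of how a right-shift and a low-bit mask can be emulated with nothing but division, floor, and subtraction, which is all the shader has available. The helper names `rshift` and `low_bits` are just illustrative, not anything from my actual code:

```python
import math

def rshift(value, n):
    """Emulate (value >> n) using only division and floor."""
    return math.floor(value / 2.0 ** n)

def low_bits(value, n):
    """Emulate (value & ((1 << n) - 1)) via the shifted value."""
    return value - rshift(value, n) * 2.0 ** n

# 201 = 0b11001001: shifting right by 1 gives 100, the low bit is 1
print(rshift(201.0, 1))
print(low_bits(201.0, 1))
```

This is exactly the `floor(scaledBits * 0.5)` / `scaledBits - leftBits * 2.0` pattern in the shader below, just written out with arbitrary shift amounts.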
In fact, here is the decoding code in my shader,
//Get the stellar starting id data from the galactic cube map
vec3 galacticCoordinates = sphericalPosition;
vec3 starHashData = textureCube(starHashCubemap, galacticCoordinates).rgb;

//Red: top 7 bits are the dim star x, bottom bit is the low bit of the dim star y
float scaledBits = starHashData.x * 255.0;
float leftBits = floor(scaledBits * 0.5);
float rightBits = scaledBits - leftBits * 2.0;
float dimStarXCoordinate = leftBits / 127.0;

//Green: top 5 bits are the rest of the dim star y, bottom 3 bits are the low bits of the bright star x
scaledBits = starHashData.y * 255.0;
leftBits = floor(scaledBits * 0.125);
float dimStarYCoordinate = (rightBits + leftBits * 2.0) / 63.0;
rightBits = scaledBits - leftBits * 8.0;

//Blue: top 3 bits are the rest of the bright star x, bottom 5 bits are the bright star y
scaledBits = starHashData.z * 255.0;
leftBits = floor(scaledBits / 32.0);
float brightStarXCoordinate = (rightBits + leftBits * 8.0) / 63.0;
rightBits = scaledBits - leftBits * 32.0;
float brightStarYCoordinate = rightBits / 31.0;

vec4 starData = texture2D(dimStarData, vec2(dimStarXCoordinate, dimStarYCoordinate));
vec3 galacticLighting = drawStarLight(starData, sphericalPosition);
And in Python, this is how I’m storing the pixels to begin with,
index_r = ((closest_dim_star_x << 1) & 0b11111110) | (closest_dim_star_y & 0b1)
index_g = (((closest_dim_star_y >> 1) << 3) & 0b11111000) | (closest_bright_star_x & 0b111)
index_b = (((closest_bright_star_x >> 3) << 5) & 0b11100000) | (closest_bright_star_y & 0b11111)
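As a sanity check (a standalone sketch, not my production code), the packing above and the floor-based unpacking the shader is attempting can both be mirrored in pure Python to confirm they invert each other exactly when the bytes survive intact. This assumes the blue channel's top 3 bits are peeled off with a divisor of 32:

```python
import math

def pack(dim_x, dim_y, bright_x, bright_y):
    """Pack dim star (7+6 bits) and bright star (6+5 bits) into three bytes."""
    r = ((dim_x << 1) & 0b11111110) | (dim_y & 0b1)
    g = (((dim_y >> 1) << 3) & 0b11111000) | (bright_x & 0b111)
    b = (((bright_x >> 3) << 5) & 0b11100000) | (bright_y & 0b11111)
    return r, g, b

def unpack(r, g, b):
    """Mirror the shader's floor/multiply unpacking on float inputs."""
    scaled = float(r)
    left = math.floor(scaled * 0.5)       # dim star x (7 bits)
    right = scaled - left * 2.0           # low bit of dim star y
    dim_x = left
    scaled = float(g)
    left = math.floor(scaled * 0.125)     # top 5 bits of dim star y
    dim_y = right + left * 2.0
    right = scaled - left * 8.0           # low 3 bits of bright star x
    scaled = float(b)
    left = math.floor(scaled / 32.0)      # top 3 bits of bright star x
    bright_x = right + left * 8.0
    bright_y = scaled - left * 32.0       # bright star y (5 bits)
    return dim_x, dim_y, bright_x, bright_y
```

In double precision this round-trips for every in-range input, which at least rules out the bit math itself; any browser discrepancy would have to come in when the bytes pass through the texture and the GPU's floats.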
This is my first project really digging into binary tricks to reduce code weight. Apparently it works, and I’m super proud of that. However, it only works in Chrome, and that’s a problem for me. Are there differences between the floor functions in these two browsers? In the way texture coordinates are normalized between 0 and 1? Has anyone dug deeply enough into the internal rendering differences between the two to know what is causing this discrepancy, and do you have a suggested workaround?
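One hypothesis worth mentioning (a guess, not a confirmed diagnosis): the shader reconstructs each byte with `starHashData.x * 255.0`, and if the sampled value comes back even a hair below i/255 in the GPU's float precision, `floor` lands on i - 1, which would look exactly like rows being slightly off. The usual guard is to round before taking the bits apart. A quick sketch of the difference, with the perturbation standing in for float error:

```python
import math

def decode_byte_floor(v):
    """Naive decode: assumes v * 255 comes back exactly on an integer."""
    return math.floor(v * 255.0)

def decode_byte_rounded(v):
    """Robust decode: snap to the nearest integer before using the bits."""
    return math.floor(v * 255.0 + 0.5)

# A value sitting just below 100/255 decodes wrong with plain floor
# but correctly once rounded.
v = 100 / 255.0 - 1e-6
print(decode_byte_floor(v))    # off by one
print(decode_byte_rounded(v))  # correct
```

If this is the culprit, adding `+ 0.5` inside each `floor(... * 255.0)` in the shader would be cheap to try.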
PS - I know there is probably some kind of issue with my bright star bit encoding; that one DOES break. But I’m only looking at the dim star output here, which is what causes the issue above. The bright star bits are only included because they live in the same colors; all the bits for the dim star map’s x,y coordinates sit entirely in the red and green channels.