OI, you got a license for that T?

As I’m sure is clear from the title, today we’re talking about OIT, or Order-Independent Transparency.

I’ve been working on transparent object support for Shade for the past few weeks. Doing opaque stuff is easy, doing alpha-tested transparencies is a bit trickier, but no big deal. Transparent objects however… hard stuff.

First of all, to help us visualize the problem, let’s consider how we draw non-transparent stuff. Let’s use a toy example of 3 walls of different height and an observer looking at them head-on.

If the walls A, B and C are fully opaque (that is, non-transparent), we can draw them in any order and use a depth buffer (also known as a Z-buffer) to keep track of which one should be drawn to the screen. If we draw A first, say at a depth of 0.1 (normalized distance to the camera), then when we try to draw B at depth 0.2 the depth test will immediately see that 0.2 > 0.1, and the pixel will be “discarded”, that is - it will not be written. When we draw C at depth 0.3 - same thing as with B - discarded.

If we draw C first, our depth buffer will hold the value 0.3. Then we draw B - it passes the test as its depth is smaller (0.2 < 0.3), so C’s color is overwritten with B’s. Finally, A overwrites B because its depth is smaller still.
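
Here’s a minimal sketch of that depth-test logic with the toy wall depths (names and colors are just for illustration):

type Fragment = { name: string; depth: number; color: string };

// Simulate the depth test: a fragment is only written if it is closer
// than whatever is already in the buffer, regardless of draw order.
function rasterize(fragments: Fragment[]): { color: string; depth: number } {
  let buffer = { color: "background", depth: 1.0 }; // depth buffer cleared to the far plane
  for (const f of fragments) {
    if (f.depth < buffer.depth) {
      buffer = { color: f.color, depth: f.depth }; // passes the depth test, gets written
    }
    // otherwise the fragment is "discarded" - nothing is written
  }
  return buffer;
}

const walls: Fragment[] = [
  { name: "A", depth: 0.1, color: "red" },
  { name: "B", depth: 0.2, color: "green" },
  { name: "C", depth: 0.3, color: "blue" },
];

console.log(rasterize(walls));                // { color: "red", depth: 0.1 }
console.log(rasterize([...walls].reverse())); // { color: "red", depth: 0.1 } - same result, A always wins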

This is not terribly original stuff, we’ve been doing depth-testing for donkey’s years.

Transparency

I’m going to use the word “fragment” to refer to a pixel shaded by a material shader from now on.

With transparency, things are not as clear-cut. We can’t use a depth buffer, because there isn’t one final color - in fact, all objects contribute to the final color. In a sense we could use a depth buffer, but we’d need as many depth buffers as there are planes, or in our case - triangles. That’s not practical, so the depth buffer is pretty much useless here. Most transparency techniques explicitly disable depth writing. We can still read the depth buffer produced by opaque fragments to quickly discard hidden transparent fragments, but we don’t write to it. In three.js the corresponding controls are Material.depthTest and Material.depthWrite - pretty descriptive names, once you know what you’re dealing with.
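
For example, a typical transparent material setup looks something like this (just a sketch of the usual choices, not the only valid combination):

import * as THREE from "three";

// The usual recipe for a transparent material: keep reading the depth
// produced by opaque geometry, but don't write depth for the transparent
// surface itself, so transparent surfaces behind it can still blend.
const glass = new THREE.MeshStandardMaterial({
  color: 0x88ccff,
  transparent: true,
  opacity: 0.7,
  depthTest: true,   // read the opaque depth buffer, discard hidden fragments early
  depthWrite: false, // don't write depth for this transparent surface
});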

So, if we don’t write depth, how do we decide what should be visible? Well, there’s the alpha (opacity) value that a fragment produces, and a very simple formula, where S is the fragment we’re drawing and D is the current color in the render texture we’re drawing to:
R = S.rgb*S.a + D.rgb*(1.0 - S.a)

In fact, this is so common that it’s called “normal blending” in a lot of places, including three.js, where it’s the value of Material.blending, which defaults to NormalBlending. Another commonly used name is “alpha blending”.

Alpha blending is neat, but it has a massive flaw - it’s non-commutative; in other words, the order of blending matters. To illustrate, here are two circles with 0.7 alpha (30% transparency):

Here’s what happens if we overlap the circles with green on top:

And here’s the other way around:

Hopefully it’s clear that the blend produces an entirely different color depending on the order.
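
To make that concrete, here’s a quick numeric sketch of the blend formula applied in both orders (I’m using green and magenta as stand-ins for the two circle colors):

type RGB = { r: number; g: number; b: number };
type RGBA = RGB & { a: number };

// R = S.rgb * S.a + D.rgb * (1 - S.a)
function blendOver(src: RGBA, dst: RGB): RGB {
  return {
    r: src.r * src.a + dst.r * (1 - src.a),
    g: src.g * src.a + dst.g * (1 - src.a),
    b: src.b * src.a + dst.b * (1 - src.a),
  };
}

const white: RGB = { r: 1, g: 1, b: 1 };
const green: RGBA = { r: 0, g: 1, b: 0, a: 0.7 };
const magenta: RGBA = { r: 1, g: 0, b: 1, a: 0.7 };

// Green drawn last (green on top):
console.log(blendOver(green, blendOver(magenta, white))); // { r: 0.3, g: 0.79, b: 0.3 }

// Magenta drawn last (magenta on top):
console.log(blendOver(magenta, blendOver(green, white))); // { r: 0.79, g: 0.3, b: 0.79 }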

Now, you’re probably a clever reader and shouting “aha! just sort the objects!” at this point. And indeed, that’s a good strategy: we can sort objects by distance and get the right blend… except only if that’s actually doable. If you have two circles like this, it’s all well and good, but in 3D we typically have more complex shapes than this. Consider the following example of just two intersecting rectangles:

What you will see is something like this


or this

depending on how you sort the objects, because the part of each rectangle that’s closest to the camera is about the same distance away from the camera, so there’s no clear winner. Also - both of these are wrong, what you should get is this instead:

Think back to the diagram with the overlapping circles if you need to convince yourself that this is true.

So… how do we get this? We could sort all triangles, except, if you’ve followed me so far, you’ll see that the same issue happens with two intersecting triangles, just as it does with two rectangles.

If order matters, and sorting objects and triangles is not the solution, what can we do?

There are 3 broad categories of solutions:

  1. Slice the scene along the camera view and draw one slice at a time on top of another - this is your “depth peeling” family of methods. There’s more involved there, but those are the basics.
  2. Keep multiple layers of transparency for each pixel. There are 2 broad variants here: one with a fixed size, where you have, say, 4 layers and use some formula to decide which layer a fragment should be drawn to and what blending to use; the second is unbounded, where you store all layers, typically as a linked list that can be sorted during or after the main pass.
  3. Estimation and heuristics. Even here we have 2 broad categories. The first is discrete: we generate a random number to decide if the pixel should be drawn or not, and you end up with a dither of some kind as a result (see the sketch after this list). The second variant is continuous and uses estimation or statistics - for example, you can construct a curve such as Ax^2 + Bx + C and fit it as you draw, to represent something like absorption over depth.
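
To illustrate the discrete variant of the 3rd category, here’s a conceptual sketch (plain TypeScript standing in for what would really be a per-pixel hash in a shader):

// A fragment with alpha 0.7 survives ~70% of the time; the rest are discarded,
// which produces a dither that averages out to the correct coverage.
function stochasticCoverageTest(alpha: number, hash01: number): boolean {
  // hash01 is a pseudo-random value in [0, 1), on the GPU typically derived
  // from screen position (and frame index, for temporal accumulation)
  return hash01 < alpha;
}

// Averaged over enough samples (or frames), coverage converges to alpha:
function estimateCoverage(alpha: number, samples = 100000): number {
  let kept = 0;
  for (let i = 0; i < samples; i++) {
    if (stochasticCoverageTest(alpha, Math.random())) kept++;
  }
  return kept / samples;
}

console.log(estimateCoverage(0.7)); // ≈ 0.7, but noisy - hence the dither and the need for accumulation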

For those interested, three.js implements the discrete variant of the 3rd category:

Here’s what that looks like if you enable temporal accumulation:

pretty good, even if still a bit noisy.

It would be remiss of me not to mention @pailhead and his Depth peel and transparency implementation, link:

Order-Independence

Going a bit further with classification, techniques that do not rely on drawing objects/triangles in a specific order are broadly called “order-independent”, even if there’s a bit of sorting involved somewhere along the way.

This brings us to the reason I’m writing this article. I’ve been looking for a suitable technique for Shade, here’s what I considered:

  1. Sort objects.
  2. Sort triangles.
  3. Use a hash-based technique (the same variant that three.js implements)

I was unhappy with sorting, as it would need to be done on the GPU, and sorting on the GPU is a massive pain: it’s either very hard or requires a large number of passes, and in all cases it’s not fast.

I didn’t like the hash-based implementation, despite writing one - it was a bit too noisy for my liking. I have a better TAA implementation, so the amount of noise was much lower than in three.js, but it was still very noticeable. One point to mention here, for those who might be interested: you can get pretty good spatial and temporal stability with the technique - there are a number of good papers out there (sorry, I don’t remember them, but you can find a bunch with Google).

So as I was finishing up with alpha-tested transparencies, I stumbled on a mention of the transparency implementation in Alan Wake 2, at Remedy’s REAC presentation on meshlet rendering. They went with MBOIT (Moment-Based Order-Independent Transparency), which belongs in the 3rd category of techniques and is a not-so-distant cousin of Variance Shadow Maps (which, again, three.js implements). But where variance techniques use 2 moments, MBOIT extends the technique to N moments, with the most common variant, as far as I can tell, being the 4-moment one.

MBOIT

And here we are: after more than a week of pain, I’m at the stage where I have something to show, so I thought I’d write it up for the uninitiated and hopefully provide a gentle introduction to the topic.

My implementation uses 3 passes:

  1. generation of moments (geometry pass)
  2. resolution of moments (geometry pass)
  3. compositing of transparency (image pass)

The first pass is quite cheap, as I’m reading just the alpha-related parts of the material and no actual shading is performed. Resolution of moments requires full shading, and the last pass is similar to a post-process, where we just smash two images together with a bit of math.
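
Roughly, here’s what the first pass accumulates per transparent fragment in the 4-power-moment variant - a simplified sketch that leaves out the depth warping and the actual reconstruction done in the second pass (names are just for illustration):

// What a single transparent fragment adds to the moment buffers (additive blending
// into float render targets on the GPU); z is the warped depth of the fragment.
function momentContribution(alpha: number, z: number) {
  // per-fragment absorbance - this is also why alpha must stay strictly below 1
  const absorbance = -Math.log(1 - alpha);
  return {
    b0: absorbance,          // total optical depth
    b1: absorbance * z,      // 1st power moment
    b2: absorbance * z ** 2, // 2nd power moment
    b3: absorbance * z ** 3, // 3rd power moment
    b4: absorbance * z ** 4, // 4th power moment
  };
}
// The second pass estimates, from these sums, how much transmittance sits in front
// of each fragment and weights its shaded color by that; the third pass composites
// the result over the opaque image using exp(-b0) as the total transmittance.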

Here are a few screenshots of Shade vs three.js to highlight the difference:

Shade


three.js



Quite often this stuff doesn’t matter much. You have a t-shirt configurator say - you don’t even need transparencies. Or you have an archviz usecase with some building, you want windows to look right - just make sure that each window is a separate object and it will mostly look right.

Where transparencies start to be more critical are usecases with particle effects, non-planar glass (such as bottles or vases) and, of course, water.

As for me, Shade is intended to be a turnkey solution. I want to avoid, as much as possible, forcing the user to adapt their content, so graceful handling of transparencies in all usecases is an important point for me.

17 Likes

This reminds me of WBOIT and an attempt to recreate it in three.js (inspired by techniques from Unreal and other commercial engines). I’ve never seen the blending concept pushed further than early prototypes, even though it could definitely become a solid answer for complex real-time rendering.

5 Likes

Thanks for the mention!

2 Likes

Spent more time on the transparencies, managed to chase down most of the bugs. Here are a few more shots comparing to three.js


In three.js, although pleasing-looking, the transparencies are a bit off.

Here’s a more revealing angle


You can see clearly that three.js draws the glass after the potion, which makes the glass get drawn on top in the wrong places.

Something else interesting, and less obvious, happens here: because three.js does blending in LDR space, we lose some transparency on bright objects, such as this bottle neck shining through in Shade.

Glass has transparency of 75% in this scene, and potion has transparency of just 10%, so there’s very little light that can bleed through, but because of the environment map - there’s still some. By doing blending after HDR compression as three.js does - we lose that brightness information and blending produces way less intensity on occluded surfaces.

This was surprising to me - not that it doesn’t make sense, but I hadn’t really thought about what happens when you alpha-blend in HDR vs LDR (compressed range).
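
Here’s a quick numeric sketch of the difference, using a Reinhard-style curve x / (1 + x) as a stand-in for whatever range compression is applied (the numbers are made up, only the comparison matters):

// A stand-in range-compression curve (Reinhard-style); the exact operator doesn't matter.
const compress = (x: number) => x / (1 + x);

const bright = 5.0; // one channel of a bright HDR background (e.g. the environment map)
const dark = 0.02;  // the dark surface in front of it
const alpha = 0.9;  // 10% transparency, like the potion

// Blend in HDR, compress afterwards:
const blendThenCompress = compress(dark * alpha + bright * (1 - alpha)); // ≈ 0.34

// Compress to LDR first, then blend:
const compressThenBlend = compress(dark) * alpha + compress(bright) * (1 - alpha); // ≈ 0.10

console.log(blendThenCompress, compressThenBlend); // the LDR blend loses most of the bleed-through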

Everyone likes cars, right?


And one more interesting case

and if we move the camera just a bit, we get this

Anyway, I’m still a bit stuck with cases where alpha is exactly 1, that is - using the technique on opaque surfaces. I’m getting fully black pixels for some reason. For now I’m just detecting simple cases during load, like material alpha set to 1 and no transparency channel in the texture, and overriding the material transparency flag. More work for later; for now I’m happy with the result.

6 Likes

Fixed the issue of opaque surfaces appearing black. Thought my fix was a hack, but turns out the original paper had the same idea, so I don’t feel too bad.

The fix is to not have fully opaque surfaces, that is - alpha has to be in the range 0 <= a < 1. I got pretty bad numeric instability with the recommended 0.999 limit, but 0.997 with f32 textures works fine for me.
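
For the curious, the numerical reason (assuming absorbance is computed as -ln(1 - alpha), as in the paper) is simply that it diverges at alpha = 1:

// Absorbance of a single fragment, as used when accumulating the moments.
const absorbance = (a: number) => -Math.log(1 - a);

console.log(absorbance(0.997)); // ≈ 5.8 - large but finite, fine in an f32 target
console.log(absorbance(1.0));   // Infinity - poisons every moment it gets summed into

// so alpha gets clamped before it ever reaches the moment pass
const clampAlpha = (a: number) => Math.min(a, 0.997);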

Opaque surfaces still look a little suspect, but close enough if you don’t look too hard. The problem, I suspect, lies in the transmittance curve reconstruction - the original paper strongly suggests as much.

Here are the graphs from the paper (figure 2)

As you can see, the reconstruction is pretty hand-wavy, and for β values below 1 you get an underestimate of the absorbance. I’m using the β value from the paper, which is 0.25, so I’m in that domain too.

The graph is for 6 power moments; I’m using the 4-power-moment variant.

Trigonometric moments should help with this case, I expect, based on the graphs from the paper.

But I’m happy enough for now.

While testing the code, I accidentally ran into another usecase where OIT makes a massive difference: hair.

Here’s a shot of what a model with somewhat complex, multi-layered hair looks like in Shade:

It looks good, so good that it’s hard to believe this is a standard principled BRDF and not a specialized hair shader. Here’s the same thing in three.js

And just to hammer the point home about sorting, here’s a pretty hard model: a bunch of water bottles in a wrap. It’s hard because the bottles are not individual instances, so sorting instances doesn’t help here. It’s also hard because there is no correct order possible even if they were separate instances, due to the wrap containing everything. And finally, there’s water inside the bottles, which makes the whole thing more complex.




I’d say it’s both impressive and not. Not impressive because you can achieve this in many different ways, but to me it’s impressive because there is no sorting involved. Sorting is expensive, and GPUs are bad at it due to data dependencies and incoherent access patterns.

10 Likes

The subject of transparency is important to me because when rendering surfaces, I often use transparent textures to fine-tune my terrains. For example, when rendering an island, I would use “lo-res” vertex displacement to model the shape (e.g. 512x512) and would use a transparent texture along the shoreline (e.g. part of a 4096x4096 texture) to fine-tune the appearance of the shoreline.

I ran into a bit of a challenge when I added a smoke emitter (with transparent particles) to a semi-transparent island - partly because the island has a mixture of solid and transparent textures. I was able to find a solution that worked, but it seems a bit “iffy”.

1 Like

Transparency is expensive, as far as rendering goes - but it’s a really powerful tool as you said. And the whole “iffy” thing I think is okay, that is - graphics is all smoke and mirrors anyway, we’re trying to trick the viewer to believe some version of reality that doesn’t exist, as long as you achieve that - I think it doesn’t matter too much how you got there.

Is that foam on the water or is it a plane with transparency and some squiggles drawn on it? - doesn’t matter :slight_smile:

Probably the best example for me that made me accept the “iffy” is water. We can’t simulate large volumes of water, we can do it small scale, either in a small volume or with a small number of particles, but we can’t simulate, say, a river. Not in real time. And not a lake or an ocean. Heck, even a swimming pool is pushing it, and you’re going to have a pretty crummy interactive experience if 90% of your budget is spent on just the swimming pool’s water. And that’s not taking the light transport into account yet. So… hacks are fine in my view, especially on the art side.

We make the tools, artists use them and there’s rarely such a thing as “using the tool wrong”. Even if I do die inside a little when I see 1 million polygons being used on a single screw in a game :cry:

3 Likes

Beneath the wrapper of your turnkey solution, have you needed to chronicle the dependencies… for onboarding or yourself? For example, if I don’t understand Drafting II then I need Drafting I (whatever that entails). Presumably this scene graph also permits GI which “already has” samples. These are notional concepts to pose the same question: as you subscribe to deferential models of realism, is it common to reference an established compendium (i.e. OpenLOD the GLTF of Physics)?

I know you cite sources, but for example: if transparency adds density samples (i.e. dirty ice)… what is industry standard for a full simulation versus a smoke-and-mirrors solution?

1 Like

It’s a good question, and there are some standards, but in the world of graphics, when it comes to rendering - there are no standards. Not really.

Take even the standard principled PBR model. It’s “standard” in the sense that most engines loosely implement it, but even then there are so many deviations.

People talk, for example, about the “Unreal look”. That is - you can, at a glance, tell that a screenshot is from a game made in Unreal engine. Why? Because lighting model and material model are visually distinct. But, not “standard”.

There are renderers that strive for realism, there are those that strive for physical accuracy. But as long as we’re talking about real-time, we’re likely always going to forgo physical accuracy for the sake of performance.

The “Shade” way

Shade, my engine that is, makes a few core trades:

  1. It is a deferred engine. This has massive implications for the material model. It enables post-processing out of the box and allows me to reach high graphical fidelity in return, but material variation has to be kept low, and this is an architectural trade-off at this point.
  2. It is not aimed at shader authoring. There is currently 1 material, which is a PBR material close to three.js StandardMaterial. The reason for this is simplicity of implementation: I was burned by shader compile times in three.js and other engines such as Unity before, and I set out to design a graphics engine that would not have this issue. It’s actually pretty easy to do - you just make a shader that does everything, all the time. Three.js is quite different in that sense: it does everything, but not all the time. What I mean by that is - three.js materials are full of flags, there are probably something like 20 flags and a bunch of variable constants, which creates a combinatorial explosion of potential shader combinations. Each of these combinations needs to be compiled separately to be used. Shade allows all of these permutations with a single shader. Theoretically three.js can be more efficient for any given material than Shade, but real scenes rarely have just 1 material.
  3. The API surface is small. If a choice for the user can be automated - Shade will automate it. As an example
    • you can’t choose how many lights to have in Shade. Shade will work with however many lights you give it.
    • Shade does not let you configure shadows - they are there and they work out of the box.
    • You can’t turn rendering of an object on/off, if it’s in the scene graph - it will be drawn.
    • You can’t set object bounds, the engine does it for you.
    • You can’t choose vertex format, Shade enforces its own format. You can import whatever you have, but once it’s imported - it will be one “standardized” option.

FrameGraph really helps here, because as features are turned on/off either by the user or the engine itself - resources such as buffers and textures are managed automatically and only what will contribute to the final image will actually get executed.

I also implemented a very fat abstraction on top of WebGPU which does a lot of heavy lifting for the user. In many ways it’s similar to three.js; it differs in that the abstraction still acknowledges the fact that we’re wrapping WebGPU.

For example, here’s the shader that does blending pass for the MBOIT (subject of this discussion):


const resources = new ShaderResourceSetDescriptor();
resources.createGroup()
    .addTexture("moment_zero")
    .addTexture("moments_1234")
    .addTexture("transparent")


const body = CodeChunk.from(
    //language=WGSL
    `
@fragment
fn main(
    @builtin(position) coord: vec4f
) -> @location(0) vec4f{
    
    let coord_u = vec2u(coord.xy);
    
    let b_0 = textureLoad(moment_zero, coord_u, 0).r;
    
    let total_transmittance = exp(-b_0);
        
    let accum_transparent = textureLoad(transparent, coord_u, 0);
    let accum_color = accum_transparent.rgb;
    let accum_alpha = accum_transparent.a;
    
    if(is_nan(accum_transparent.r)){
        // not sure why we get invalid results, but we do
        // this removes the pixel from the final image
        // the final image ends up looking correct as far as I can tell
        // TODO investigate
        discard;
    }
    
    // 4. Calculate the renormalization factor from Equation (2) from section 3.4
    // This correctly scales the accumulated color.
    var renorm_factor = 0.0;
    if(accum_alpha > 0.0001){
        // accum_alpha is sum of alpha * transmittance at each pixel
        renorm_factor = (1.0 - total_transmittance) / accum_alpha;
    }
    
    let final_transparent_color = accum_color * renorm_factor;
        
    return vec4(final_transparent_color, total_transmittance);
    
}
    `, [
        chunk_is_nan
    ]
)

export const shader_oit_blend = ImageShader.from({
    descriptor: ShaderDescriptor.from({
        label: "OIT Blend",
        resources,
        body,
    }),
    targets: [{
        format: "rgba16float",
        blend: {
            color: {
                operation: "add",
                srcFactor: "one",
                // multiply opaque by total transmittance `exp(-b_0)` which is stored in alpha
                dstFactor: "src-alpha",
            },
        }
    }]
})

And here’s the use of that shader:

shader_oit_blend.draw({
    encoder,
    bindings: {
        transparent: t_resolved.obtainView(),
        moment_zero: t_optical_depth.obtainView(),
        moments_1234: t_moments.obtainView(),
    },
    colorAttachments: [{
        view: t_color_output.obtainView(DEFAULT_RENDER_TARGET_VIEW_DESCRIPTOR),
        loadOp: "load",
        storeOp: "store",
    }]
});

As you can see - it’s very obviously WebGPU, but with most of the verbosity taken out.

Internal documentation

As far as documentation goes, I mostly keep notes in the form of comments in code. When it comes to research, I typically create a markdown document to keep notes, and I store it in the repo as well. Here’s the original set of notes for Shade, for example:

NOTES.md

the renderer is optimized for large number of instances, and for using PBR shader (StandardMaterial)

Normalized Vertex Layout

  • Geometries that are used with StandardShader are expected to conform to a specific Vertex layout.
  • Tools are to be provided to deal with relevant conversion.
    • Conversion may be done automatically, with logged messages about it to make the user aware of the performance penalty
  • Where possible without sacrificing rendering performance - support non-normalized layouts.

GPU-resident renderer architecture

  • All geometry data (vertex+index) is written into a huge buffer and sits in GPU memory
  • All meshes are represented by a small descriptor that resides on GPU
  • :warning:*[IMPOSSIBLE]* All draw is indirect draw, indirect draw batches are prepared in a compute shader
    • Impossible on WebGPU, drawIndirect as well as drawIndexedIndirect only execute 1 draw command from the buffer and not the whole buffer. As a result you’re still stuck issuing commands on CPU
  • Culling is done on GPU

references:

Unordered References

Spatio-temporal variance-guided filtering:

IDEA v2:


Deferred shading

  • make use of Visibility Buffer to remove material cost of overdraw and better support GPU-resident rendering (see “The Visibility Buffer: A Cache-Friendly Approach to Deferred Shading”, 2013 JCGT)
    • visibility buffer structure: DEPTH, CLUSTER_ID, TRIANGLE_ID (?)

Geometry Cluster

  • collection of indices (index buffer)
  • references vertex buffer (a geometry) by index
  • A cluster instance also references Mesh that it was produced from

problem: Vertex shaders don’t allow STORAGE type buffers, which can have run-time defined array length; this means that a vertex shader can only accept fixed-sized data. This makes decoding of clusters inside the vertex shader impossible, as we don’t know ahead of time how many clusters we might have.

Draw algorithm

  1. Cull instances, takes list of instances as input, culls against various constraints, produces another, smaller list
  2. Prepare instances for render by breaking them up into fixed chunks of triangles (64/128), to do this:
    1. prepare buffer with clusters

Textures

  • virtual textures?
  • compressed textures, compression support in-engine
  • AI texture upscaling, enlarge texture by generating in-between data using a neural network
  • Toksvig mapping: Specular Showdown in the Wild West

Variable Rate shading

  • Proposal in WebGPU, not in the actual standard yet as of 19/05/2024
  • VRS is no good for deferred :frowning:
  • Adaptive Shading possible
    • “Adaptive Undersampling With DACS” Deferred Adaptive Compute Shading
    • maybe use a similar technique to post-process by creating N render targets of smaller resolution and combining them in the final pass (à la checkerboard pattern)

Compute swizzling for better cache usage

Shadow terminator

VRS links

On Standards

I like standards as much as the next guy, but standards tend to get outdated pretty quickly or grow too unwieldy. For example, GLTF was well-intentioned, but by now there are so many extensions that it’s no longer the thing it was intended to be. GLTF was created to very closely reflect WebGL, so you could avoid most of the data conversion and copying - just map the buffers and off you go. Heck, it even uses GL constants. But today we’re dealing with WebGPU, an entirely different beast, yet what do we have? GLTF.

I think standards are great for interoperability. Say I want to send something your way - if I send it in a standard format - you can work with it. Same the other way around.

How this relates to Shade: Shade tries to follow convention. That is, I try, as much as possible, to do things the way the rest of the industry does. For example, I use roughness instead of glossiness, because most of the industry is going in that direction. I use a continuous scale for metalness instead of a binary one, because most engines do it that way, etc.
As for standards, Shade currently implements support for GLTF input. I plan to add more in the future, but I don’t plan to add support for everything, there will be one preferred format. Likely USD.

Every engine has a philosophy and hard-to-change architectural parts. These things dictate how the engine works and what the data looks like inside. Three.js has that, Unreal has that. The day three.js and Unreal look the same will be the day that graphics as a field is solved and there are no more significant advancements.

3 Likes

So is this like… adding all the transparent fragments colors together, weighted by their depth and then also opacity.. and then dividing at the end by the sum of the weights?

Ha, yes and no. More like constructing a cumulative curve of absorption, then sampling that curve to figure out how much absorption the fragment has (kind of like opacity).

A,B and C are different fragments at the same X,Y screen position

Two passes: in the first pass we construct the curve, in the second we sample it to reconstruct where the fragment sits on that curve and how much absorbance it contributes.

In a nutshell - you’re right, but it’s a bit more involved. I suggest checking out the paper, it’s not a very tough read, even if you don’t get the math.

As long as you accept that logarithmic addition is equivalent to multiplication, you’re not going to get stuck.
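
In case that sounds cryptic, here’s the identity spelled out with arbitrary alpha values:

// Transmittance through a stack of fragments is a product of (1 - alpha_i)...
const alphas = [0.7, 0.1, 0.75];
const productForm = alphas.reduce((t, a) => t * (1 - a), 1);

// ...which is the same as summing per-fragment absorbances and exponentiating once.
// A sum doesn't care about the order of its terms - that's the order-independence.
const opticalDepth = alphas.reduce((d, a) => d - Math.log(1 - a), 0);
const sumForm = Math.exp(-opticalDepth);

console.log(productForm, sumForm); // both ≈ 0.0675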

3 Likes

Can you write more about this? What causes this design decision, and what benefits does it bring?

Mostly personal preference really.

I’ve seen things like layers, implemented as layers : uint32 typically.

I’ve seen things like flags

  • enabled : boolean
  • hidden : boolean
  • visible : boolean
  • suspended : boolean
  • cull_enabled : boolean

and more besides. Whenever I see settings like that - I feel disappointed and a bit frustrated.

If you implement some kind of a flag, something like visible : boolean, let’s say - you’ve just created an ambiguity in your API. If the user wants to remove an object from rendering, they can now either remove it from the scene or flip the flag. However, removing an object from the scene and flipping the flag are not going to have the same behavior.

For example, if you flip the flag and query “how many children does this thing have?” - you’re probably going to get an answer that doesn’t check for visibility of the child.

Then, what about the user’s perspective? Flipping the flag will typically cost nothing, but removing an element from the scene graph will typically involve updating a spatial index and potentially releasing some GPU resources. So in terms of performance overhead, these two are not equivalent.

Let’s say I have a scene of a city, and I have 2 versions, one for the past and one for the present. To make it easier to keep these two versions consistent, I just hide one of them but keep both in the same scene. Say the city has 100,000 objects. Now my perf tanks, because pre-render, I have to filter twice as many objects. To the user this is not clear, and I would argue they should not need to understand the difference.

Next, let’s consider development of the system itself. If I add a flag like that, I need to check for it everywhere. Say I write a picking system, and I forget to check the flag. The system works just fine, but it picks invisible objects as well.

Lastly, there’s semantics. Say you have an object, and it’s got .visible == false. Say I’m implementing a physics system and I check for collisions with meshes. Should I allow collisions with meshes that are not visible?.. It’s a rhetorical question, of course. But I hope the reasoning for my disdain of “multiple ways of doing the same thing” is clear.


To be clear, I don’t begrudge inventors of such mechanics. A user comes to you and says

but I want to make this thing invisible

you tell them

just remove it from the scene

And they argue

But I want to re-enable it later, and what am I supposed to do to restore it back to where it was?

And instead of saying, unhelpfully

Just keep a record of where it was, add a new data structure on your end if you need to

You instead capitulate.

Yes… indeed… It is a reasonable ask…

Heck, I’ve done that myself many times. But when there is no pressure and I have a choice - I really prefer not to.

4 Likes

Here is the program I was referencing. I was able to get the smoke to render properly by giving the particles’ sprite texture a renderOrder of 1, so that’s not as bad as I recall. And I had forgotten that I also had to view the scene through a transparent propeller - which also works with a renderOrder of 1. The breaking point came when I created a FadeFromBlack using a transparent sphere around my camera. Here the propeller does not appear until the opacity has sufficiently decreased. This appears in the opening sequence of my flight sim. (And I still need to add a semi-transparent gunsight.) These are all standard transparencies that anyone creating a flight sim/demo with three.js would want to have.

Transparent water would be a real challenge to the GPU. Creating non-transparent ocean waves is not so bad. The biggest challenge with my flight simulations is that, no matter how detailed you make the waves, they will quickly disappear as altitude and visual distance increase. That is where you need to start resorting to artistic enhancements. For example, in my ocean, I spread a random B/W texture over large areas to add different shades and reflectivity to the water and that really helped. I would also like to add more detail to nearby water (e.g. foam) and, as you indicate, one option is using transparent textures.

If I were to want to render clouds or trees, I would also need to be able to use transparent textures with all of the above.

So transparencies are very important to three.js programmers, especially those attempting to model the real world. So far I have been able to get them to work fairly well. But I always feel that I am only one transparency away from hitting the program limits.

MORE (edited Aug 20)

I forgot to add that one big factor that will help in the future is when people start using monitors or other devices with eye-tracking - which can take advantage of the fact that apparently (in the interest of efficiency), our brains only perceive a very small section of our field of view in high resolution. The rest is either low resolution, constructed from past information, or imagined (interpolated). This means that all the high resolution content displayed outside of this small area of perception is wasted. I expect that GPUs of the future will be able to use eye-tracking to create displays that better match what our brains perceive. This should significantly decrease the GPU workload.

1 Like

@phil_crowther
Style-transfer and upscaling empower the masses to remix via gestural labels. Did you review Microsoft Flight Simulator 24? At ground level the precision breaks down, but wow! :zzz: I see fake war/weather streams with tuned parameters, like an animated SDF. Maybe Palette Limiter 25 is the next best thing to true wave collapse.

~ Uncle Bokeh

A demo of the bottles scene if anyone is interested


Controls are WASD and mouse.

4 Likes

One of the very first complaints from clients receiving product configurators is “why is the transparent part buggy? Can you please fix it?” and the simplest answer is “no, that’s just how Threejs works (unless you can afford the cost of maintaining a custom transparency implementation)”. It’s never a good conversation.

That is to say, if we had this in Threejs directly it would be welcomed by many!

2 Likes