Why does a transparent object not appear behind another?

I wonder what’s a good rule of thumb for thickness vs backsideThickness. What would make it look the most realistic?

I wonder if rendering can be optimized by shrinking the camera frustum to encapsulate only the transmissive mesh? Or does it rely on pixels not just behind the mesh?

transmissionSampler at the very least allows all MTMs to use the same buffer without manual FBOs, and they’d still have chromatic aberration and anisotropic blur, distortion, too.
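For reference, that shared-buffer setup can be sketched in react-three-fiber roughly like this (prop names are taken from recent drei versions and should be treated as assumptions, not gospel):

```javascript
// Sketch: several meshes can each use MeshTransmissionMaterial with
// transmissionSampler, so they all read three's shared transmission buffer
// instead of drei rendering a separate FBO per material.
import { Canvas } from '@react-three/fiber'
import { MeshTransmissionMaterial } from '@react-three/drei'

export function Scene() {
  return (
    <Canvas>
      <mesh>
        <sphereGeometry />
        {/* transmissionSampler: reuse the renderer's transmission buffer; */}
        {/* the material's own effects still apply on top of it. */}
        <MeshTransmissionMaterial
          transmissionSampler
          chromaticAberration={0.05}
          anisotropy={0.1}
        />
      </mesh>
    </Canvas>
  )
}
```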

frustum recalc would be next level. we do this for drei/caustics with great results; it allows us to pack much more resolution into a tiny buffer. just a lot of math, if you figure this out please ping. but in theory it would really only need to encapsulate the host mesh bounds plus margins for IOR and thickness, though that will probably make it a lot more complex.
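The core of that math is small, even if the full version (margins for IOR/thickness, asymmetric frusta) is not. A back-of-envelope sketch, assuming a perspective camera and a precomputed bounding sphere (`tightFovDegrees` is a hypothetical helper, not drei API): the minimal vertical FOV that still covers a sphere of radius r at distance d is 2·asin(r/d), and a margin factor can absorb the extra screen-space spread that refraction introduces.

```javascript
// Hypothetical helper: minimal perspective FOV (degrees) that fits a mesh's
// bounding sphere, with a margin factor for refraction spill (IOR/thickness).
function tightFovDegrees(radius, distance, margin = 1.0) {
  const effective = radius * margin
  if (distance <= effective) {
    throw new Error('camera is inside the (padded) bounding sphere')
  }
  const halfAngle = Math.asin(effective / distance) // radians
  return (2 * halfAngle * 180) / Math.PI
}

// A sphere of radius 1 seen from 2 units away fits exactly in a 60° frustum,
// since asin(1/2) = 30° and the full angle is twice that:
console.log(tightFovDegrees(1, 2)) // 60
```

With that FOV (and the camera aimed at the sphere center) the entire buffer resolution is spent on the host mesh instead of the whole scene.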


What’s a scenario in which we want to supply a custom .background for the MeshTransmissionMaterial?

@drcmda do you know if this requires a particular version of three? I’m getting these:

FRAGMENT

ERROR: 0:2463: 'transmission' :  no such field in structure
ERROR: 0:2463: 'assign' : cannot convert from 'uniform highp float' to 'structure 'PhysicalMaterial''
ERROR: 0:2464: 'transmissionAlpha' :  no such field in structure
ERROR: 0:2464: 'assign' : cannot convert from 'const float' to 'structure 'PhysicalMaterial''
ERROR: 0:2465: 'thickness' :  no such field in structure
ERROR: 0:2465: 'assign' : cannot convert from 'uniform highp float' to 'structure 'PhysicalMaterial''
ERROR: 0:2466: 'attenuationDistance' :  no such field in structure
ERROR: 0:2466: 'assign' : cannot convert from 'uniform highp float' to 'structure 'PhysicalMaterial''
ERROR: 0:2467: 'attenuationColor' :  no such field in structure
ERROR: 0:2467: 'assign' : cannot convert from 'uniform highp 3-component vector of float' to 'structure 'PhysicalMaterial''
ERROR: 0:2491: 'ior' :  no such field in structure
ERROR: 0:2491: 'thickness' :  no such field in structure
ERROR: 0:2491: '+' : Invalid operation for structs
ERROR: 0:2491: '+' : wrong operand types - no operation '+' exists that takes a left-hand operand of type 'structure 'PhysicalMaterial'' and a right operand of type 'highp float' (or there is no acceptable conversion)

EDIT: Yep, updated from 0.139 to 0.149 and the errors went away. The never-ending keeping up! :slight_smile:

to inject an environment map for instance. or a color. it’s not really physical, you can skip it if the transmissive object is supposed to just “see” what’s behind as is.
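A rough sketch of what that looks like in r3f (the `background` prop accepts a texture or color in recent drei; the asset path and geometry here are made up for illustration):

```javascript
// Sketch: give the transmissive material its own background, so the glass
// refracts an envmap instead of whatever is actually behind it in the scene.
import * as THREE from 'three'
import { useTexture, MeshTransmissionMaterial } from '@react-three/drei'

function Glass() {
  const envMap = useTexture('/studio.jpg') // hypothetical asset path
  envMap.mapping = THREE.EquirectangularReflectionMapping
  return (
    <mesh>
      <torusKnotGeometry />
      {/* background: what the material "sees" instead of the real scene */}
      <MeshTransmissionMaterial background={envMap} />
    </mesh>
  )
}
```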

Yeah I am skipping it for my case, but more just curious what someone might achieve with it.

For some reason backside doesn’t work for me (either the object turns opaque, or I can see the object inside itself, f.e. a smaller sphere inside a larger sphere). I’ll post my code up soon.


that’s a shape with a custom background, an envmap, just to make it look more appealing.


Nice, thanks! I see what the code does, but seeing the applied art is useful!

I noticed that the refraction is based purely on the surface normal angle, which for a flat surface (f.e. the side of a cube) basically only translates the underlying image on the X and Y of the display. For example, with this diamond with flat surfaces, it simply translates the underlying image:

[Screenshot, 2023-02-19: flat-faceted diamond, with the background image merely translated]

It would be sweet to figure out how to smoosh or stretch the background content based on the angle through the object like a more realistic refraction. You know, similar to when you stick your hand in water, and you see it is not only displaced, but also skewed/smooshed.

Another example is a glass ball: the image we see is flipped in real life, but not with the current refraction. I think this one might be easier to achieve though: add a spherical option (or some name) that negative scales the world on render (useful mainly for a sphere-like object). Might not be perfectly realistic in all cases, but close enough for sphere-likes.
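For context, the bending itself comes from Snell’s law, which GLSL’s refract() already encodes; the flat-facet translation effect falls out of it naturally. A plain-JS port of that formula, just to illustrate why a head-on ray through a flat face only shifts the image while curved surfaces bend rays at varying angles (this is the standard GLSL formula, not drei-specific code):

```javascript
// Plain-JS port of the GLSL refract() formula (Snell's law).
// I: normalized incident direction, N: normalized surface normal,
// eta: ratio of indices of refraction (e.g. 1/1.5 entering glass from air).
function refract(I, N, eta) {
  const dot = I[0] * N[0] + I[1] * N[1] + I[2] * N[2]
  const k = 1 - eta * eta * (1 - dot * dot)
  if (k < 0) return [0, 0, 0] // total internal reflection: no refracted ray
  const s = eta * dot + Math.sqrt(k)
  return [eta * I[0] - s * N[0], eta * I[1] - s * N[1], eta * I[2] - s * N[2]]
}

// A ray hitting a flat face head-on passes straight through unbent, which is
// why flat facets only translate the image behind them:
console.log(refract([0, 0, -1], [0, 0, 1], 1 / 1.5)) // [0, 0, -1]
```

On a sphere the normal varies across the surface, so rays converge and the image can flip past the focal point, which is the effect the current screen-space approximation misses.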

@drcmda here’s what I have so far, ported to LUME’s component system, no new features yet:


@trusktr Which app/tool is this, in which you can change roughness, resolution, etc. with the help of a GUI? I’m new to 3D web development.

you can probably still do it by rendering objects manually. i.e. instead of r.render(scene, camera) do yourOrderedList.forEach(object => r.render(object, camera)) and, of course, disable automatic clearing.
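The ordering part of that is plain math. A hedged sketch of building yourOrderedList by sorting back-to-front from the camera (`sortBackToFront` is a hypothetical helper; the actual r.render(object, camera) loop with r.autoClear = false would then consume its result):

```javascript
// Hypothetical helper: sort objects back-to-front relative to the camera so
// transparent objects composite correctly when rendered one at a time with
// automatic clearing disabled. Objects are anything with a .position {x,y,z}.
function sortBackToFront(objects, cameraPosition) {
  const dist2 = (p) => {
    const dx = p.x - cameraPosition.x
    const dy = p.y - cameraPosition.y
    const dz = p.z - cameraPosition.z
    return dx * dx + dy * dy + dz * dz
  }
  // Farthest first, so nearer transparent objects draw over farther ones.
  return [...objects].sort((a, b) => dist2(b.position) - dist2(a.position))
}

const camera = { x: 0, y: 0, z: 5 }
const objects = [
  { name: 'near', position: { x: 0, y: 0, z: 4 } },
  { name: 'far', position: { x: 0, y: 0, z: -10 } },
  { name: 'mid', position: { x: 0, y: 0, z: 0 } },
]
console.log(sortBackToFront(objects, camera).map((o) => o.name)) // ['far', 'mid', 'near']
```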


For custom and limited cases, that will work great.

But I’m working on a framework, and so changing render order like that will likely break expectations with respect to the visual output that the high-level DOM tree structures provide (I’m making Lume, custom elements for 3D). F.e. transparency may suddenly render differently in other parts of the app due to the modified render order, etc.

The tricky goal is to make something generic that is good enough for most people, without requiring particular knowledge of rendering-order details, and that provides consistent visual output based on the input HTML/DOM tree. Essentially I’m trying to give it strong guarantees similar to the regular 2D DOM that browsers have today.

F.e. with a browser’s 2D DOM we can’t change details like the render order of DOM elements; we can only change high-level properties like Z position, etc., and then the browser gives us a guaranteed visual outcome based on the tree structure and the given (CSS) properties.

My goal is to make this HTML interface have a high level of rigidity with consistent visual outcomes. So if I start to change the render order of things for certain cases, this will introduce a level of instability in visual outcomes.


This is a difficult problem! If browsers ever come out with standard HTML elements for 3D, they will also need to have consistent visual guarantees.

Apple Safari’s new <model> HTML element (revealed in WWDC 2023 for visionOS in Vision Pro, see this video) is the first example of a 3D HTML element with strong guarantees on visual output.

Imagine that later they introduce more elements like <light>, <camera>, etc.

If browsers standardize on these 3D elements, they will all need to guarantee consistent visual output regardless of the details of the underlying renderers, and they won’t be able to let users specify options that arbitrarily break the visuals of other elements in a scene.

So what I’m trying to do with Lume elements is imagine what that sort of HTML rigidity (consistency guarantee) could be like.

I really like Tweakpane.

I also have an example of how to use it here: