I'm trying to get a sphere geometry to have a fog effect inside it: the closer to the sphere's center, the more intense the fog should be, and at the surface of the sphere the fog should be 0. I want the fog to obscure things behind it and to have a real sense of volume.
I'm trying to think of ways to solve this, and I assume I need to raymarch? Can I start from the fragment's world position, raymarch in the direction the camera is facing (I have an orthographic camera) for a max distance, at each step calculate the distance to the center of the sphere (in world space), and then accumulate fog based on that? I've tried this:
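Roughly like this in the fragment shader (simplified, and the uniform/varying names are just how I happen to have things set up):

```glsl
uniform vec3 uSphereCenter;   // sphere center in world space
uniform float uSphereRadius;
uniform vec3 uCameraDir;      // see below for how I get this
varying vec3 vWorldPosition;  // fragment world position, passed from the vertex shader

void main() {
  const int STEPS = 32;
  float stepSize = (2.0 * uSphereRadius) / float(STEPS);
  float fog = 0.0;

  for (int i = 0; i < STEPS; i++) {
    vec3 p = vWorldPosition + uCameraDir * stepSize * float(i);
    float d = distance(p, uSphereCenter);
    // density is 1 at the center and falls to 0 at the surface
    fog += (1.0 - clamp(d / uSphereRadius, 0.0, 1.0)) * stepSize;
  }

  gl_FragColor = vec4(vec3(1.0), clamp(fog, 0.0, 1.0));
}
```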
My cameraDir is the camera's position before calling lookAt(0, 0, 0) and applying any other translations.
This does not work, and I'm not sure why: I get a completely transparent sphere (fog = 0). Any help would be greatly appreciated. Does this approach even make sense?
Have you had a look at three.js webgl2 - volume - cloud? Are you trying to establish volumetric fog, scene depth fog, or screen space fog? If volumetric, you could probably work with the shader in the linked example; instead of using noise, use a radial gradient or SDF for the density evaluation.
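For example, something like this (just a sketch, assuming the march samples a point p in the volume's local space with the sphere centred at the origin):

```glsl
// drop-in replacement for the noise lookup in the density evaluation:
// a radial falloff that is 1 at the centre and 0 at the sphere's surface
float sphereDensity(vec3 p, float radius) {
  float d = length(p);                     // distance from the centre
  return 1.0 - smoothstep(0.0, radius, d); // fades out towards the surface
}
```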
So yeah, I'm definitely trying to get some volumetrics going, but only for a sphere geometry, and only for a sphere shape. I'm not looking to raymarch anything more complex.
I've looked into raymarching before and a lot of it goes over my head, so I was thinking I could get away with something simpler here.
My theory was that since my mesh is a sphere, and the shape I want is also a sphere, I could use the fragment's world position and the camera direction to march x steps, checking the distance to the sphere's center at each step and accumulating fog based on that. It's probably not super elegant or optimised.
But it does not work, and whatever I do I always end up with no fog accumulated.
Also, what is the difference between volumetric fog, scene depth fog, and screen space fog?
I'm happy to keep the fog restricted to the sphere itself, and the camera will never be inside the sphere, but the fog should intersect/obstruct the rest of the scene.
You can simply set the fog intensity depending on the angle between the view direction and the surface normal. The smaller the angle, the closer the ray goes to the center of the sphere, and the greater the influence of the fog. When they are almost perpendicular (the ray only touches the edge of the sphere), the influence of the fog is practically zero. And no raymarching is needed.
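In the fragment shader it can be as simple as this (a sketch; vNormal and vViewDir are varyings you would pass from the vertex shader, and the uniform name is made up):

```glsl
uniform vec3 uFogColor;  // whatever colour the fog should be
varying vec3 vNormal;    // world-space normal from the vertex shader
varying vec3 vViewDir;   // unit vector from the camera towards the fragment

void main() {
  float facing = abs(dot(normalize(vNormal), normalize(vViewDir)));
  // the chord a ray cuts through the sphere is 2 * radius * facing, so
  // "facing" is proportional to how much fog that ray travels through:
  // 1 when the ray passes through the centre, 0 at the rim
  float fog = pow(facing, 1.5);  // exponent just shapes the falloff
  gl_FragColor = vec4(uFogColor, fog);
}
```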
Good question, I guess fog is generally all "volumetric" in a sense; from my understanding of the differentiation, true "volumetric fog" would look like this: three.js webgl2 - volume - cloud
Typical in-built scene depth fog
Which can be extended in all sorts of ways, as @trueshko has suggested above; there's a great thread here…
And then by "screen space" fog I'm referring more to a shader pass that acts like a vignette…
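i.e. something along these lines as a full-screen pass (just a sketch; tDiffuse and vUv follow the usual EffectComposer ShaderPass conventions, uFogColor is a made-up uniform):

```glsl
uniform sampler2D tDiffuse;  // the rendered scene
uniform vec3 uFogColor;
varying vec2 vUv;

void main() {
  vec4 scene = texture2D(tDiffuse, vUv);
  // fade towards the fog colour near the screen edges, vignette style
  float fog = smoothstep(0.4, 0.9, length(vUv - 0.5));
  gl_FragColor = vec4(mix(scene.rgb, uFogColor, fog), scene.a);
}
```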
Any chance you can give a minimal visual that depicts the effect you’re going for?
The guy behind the video has said it was made "simply" with a sphere that has fog attached to it (in it, I assume), though I think since I want it to intersect properly with the rest of the scene I'm going to need a depth map as well. This might be trickier than I initially thought.
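The rough idea I have in mind for the depth part (untested sketch, all names are placeholders): render the rest of the scene into a render target with a THREE.DepthTexture, pass that into the fog shader, and stop marching once a sample point is behind what's already drawn:

```glsl
#include <packing>  // for orthographicDepthToViewZ

uniform sampler2D tSceneDepth;  // depth of the rest of the scene
uniform vec2 uResolution;
uniform float uCameraNear;
uniform float uCameraFar;

float sceneViewZ(vec2 screenUv) {
  float depth = texture2D(tSceneDepth, screenUv).x;
  // orthographic camera, so use the orthographic helper from <packing>
  return orthographicDepthToViewZ(depth, uCameraNear, uCameraFar);
}

// inside the raymarch loop, with p the current sample point in world space:
//   vec2 screenUv = gl_FragCoord.xy / uResolution;
//   float pViewZ = (viewMatrix * vec4(p, 1.0)).z;   // negative, further = more negative
//   if (pViewZ < sceneViewZ(screenUv)) break;       // p is behind the scene geometry
```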
Instead of doing distance-to-center-of-sphere for the fog, I tried sampling a noise value instead, and that works (ish), though it does not look quite as volumetric/depth-like as I imagined it would.
But that makes me really wonder why doing a simple inverse distance-to-center at each step does not give me a fog-like effect.
You seem quite familiar with general 3D graphics terms; is there another medium in which you can make a visualisation of what you intend to achieve, with a third-party program such as Blender, Houdini, or C4D? That may help establish the pipeline of processes required to achieve the same visual aesthetic in a three.js env…
It's not exactly the effect you're after, but a great article just came up on Codrops that explains the process of using TSL (with a very rough visual connection to the latest image you shared). This may help you get to grips with translating your vision into shaders, from which you should then be able to extend the relevant three.js fog shader chunks…