Perfect geometries in postprocessing

I would like to create perfect mathematical objects in postprocessing, so that curved geometries are really continuously curved. Here is an example with a very simple curved geometry, a sphere. The relative position is easy, but the orientation worries me. I don't have any vertices, just mathematically described points, seen from the camera's point of view, where the boundary surface of the sphere is. I have to transform these points into the world's reference frame so that I can correctly compute gloss effects and shadows. Can anyone tell me which matrix I need for this?

export const perfectGeometryVS = `

	out vec2 vUv;

	void main() {
		vUv = uv;
		gl_Position = vec4( (uv - 0.5) * 2.0, 0.0, 1.0 );
	}
`;

export const perfectGeometryFS = `

	#include <packing>	// three.js chunk that provides unpackRGBAToDepth()

	in vec2 vUv;
	uniform sampler2D tDiffuse;
	uniform sampler2D tDepth;
	uniform vec3 cameraPos;
	uniform vec3 spherePos;
	uniform float sphereRadius;
	uniform vec3 lightPos;
	uniform mat4 viewWorldInverse;
	uniform mat4 uProjectionInverse;
	uniform mat4 uMatrixWorld;


	vec3 computeWorldPosition() {
		// Reconstruct the world-space position of this fragment from the depth buffer:
		// screen UV + depth -> normalized device coordinates (NDC)
		float normalizedDepth = unpackRGBAToDepth( texture2D( tDepth, vUv ) );

		vec4 ndc = vec4(
			(vUv.x - 0.5) * 2.0,
			(vUv.y - 0.5) * 2.0,
			(normalizedDepth - 0.5) * 2.0,
			1.0);

		// NDC -> view space (inverse projection + perspective divide) -> world space
		vec4 viewPos = uProjectionInverse * ndc;
		vec4 worldPos = uMatrixWorld * (viewPos / viewPos.w);

		return worldPos.xyz;
	}

	
	bool RayIntersectsSphere(vec3 rayStart, vec3 rayDir, vec3 sphereCenter, float sphereRadius, out float t0, out float t1) {
		vec3 oc = rayStart - sphereCenter;
		float a = dot(rayDir, rayDir);
		float b = 2.0 * dot(oc, rayDir);
		float c = dot(oc, oc) - sphereRadius * sphereRadius;
		float d = b * b - 4.0 * a * c;

		// Also skip a single (tangential) point of contact
		if (d <= 0.0) {
			return false;
		}

		float r0 = (-b - sqrt(d)) / (2.0 * a);
		float r1 = (-b + sqrt(d)) / (2.0 * a);

		t0 = min(r0, r1);
		t1 = max(r0, r1);

		return (t1 >= 0.0);
	}
	

	vec4 perfectSphere(vec3 rayDir, vec3 rayOrigin, vec3 spherePos, vec3 diffuse, float rSphere) {

		float t0, t1;

		// Early out if the ray doesn't intersect the sphere.
		if (! RayIntersectsSphere(rayOrigin, rayDir, spherePos, rSphere, t0, t1)) {
			return vec4(diffuse, 1.);			
		}

		// Here I need a matrix operation, because the spherical geometry is always constructed from the camera's perspective.
		vec3 vPosition = rayOrigin - spherePos + t0 * rayDir;	//virtual vertex
		vec3 vNormal = normalize(vPosition);	//virtual normal
		
		vec4 LightPosition = viewWorldInverse * vec4(lightPos - spherePos, 0.);
		
		float intensity = 1.0;
		float shininess = 100.0;
		
		vec3 n = vNormal;
		vec3 s = normalize(LightPosition.xyz - vPosition);	
		vec3 v = normalize(-vPosition);
		
		vec3 r = reflect(s, n);
	
		vec3 color = vec3(250./255., 5./255., 20./255.);
		vec3 diff = color * max(dot(s, n), 0.0);
		vec3 spec = vec3(1.0) * pow(max(dot(r, v), 0.0), shininess);

		return vec4(diff + spec, 1.);
	}

	void main() {

		vec3 diffuse = texture2D(tDiffuse, vUv).rgb;	
		vec3 posWS = computeWorldPosition();

		vec3 cameraDirection = normalize(posWS - cameraPos);

	
		vec3 color = perfectSphere(cameraDirection, cameraPos, spherePos, diffuse, sphereRadius).xyz;

	
		gl_FragColor = vec4(color, 1.);
		
	}
`;


The main idea of rendering a surface is that you make it out of triangles, pass each triangle's vertices to the vertex shader, and then use barycentric interpolation of the attributes (like position, normals or UV) attached to each vertex to get their values at each pixel in the fragment shader.

Interpolation is done by the hardware and that’s what makes your life easier and supports parallel computing in the fragment shader.
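
As a tiny illustration (not from your code, and using the built-in attributes/uniforms that three.js injects into a ShaderMaterial), a varying written once per vertex arrives in the fragment shader already blended across the triangle:

// vertex shader: write the value once per vertex
out vec3 vNormal;
void main() {
	vNormal = normal;
	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// fragment shader: receive the barycentrically interpolated value per pixel
in vec3 vNormal;
void main() {
	gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}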

In your case I don’t see what you could interpolate.

You could start with one vertex where the center of your sphere is, use point geometry, and then mathematically calculate normals and illumination based on the projected offset of a pixel from the center of the sphere.
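
Something like the following minimal fragment-shader sketch could be a starting point (the uniform names are assumed, nothing here is taken from your app): the sphere is drawn as a single point sprite and the normal is reconstructed per pixel from gl_PointCoord.

// Fragment shader for a point-sprite "sphere impostor".
// The vertex shader only has to place the point and set gl_PointSize.
uniform vec3 uLightDir;	// assumed: light direction in view space
uniform vec3 uColor;	// assumed: base color of the sphere

void main() {
	// offset of this pixel from the sprite center, in [-1, 1]
	vec2 p = gl_PointCoord * 2.0 - 1.0;
	float r2 = dot(p, p);
	if (r2 > 1.0) discard;	// outside the circular silhouette

	// reconstruct the view-space normal of a unit sphere at this pixel
	// (gl_PointCoord has a top-left origin, hence the flipped y)
	vec3 n = vec3(p.x, -p.y, sqrt(1.0 - r2));

	float diff = max(dot(n, normalize(uLightDir)), 0.0);
	gl_FragColor = vec4(uColor * diff, 1.0);
}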

It’s possible to render 3D without vertex geometry, calculate everything mathematically in compute shaders (not supported by WebGL) and then render on a single triangle. That requires a lot of math and a crazy level of optimization of the shader code, and it is very difficult to maintain/change, since you don’t use conventional 3D models and everything is described by formulas.
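
For completeness, the "single triangle" part needs almost no geometry; a minimal sketch, assuming WebGL2 / GLSL ES 3.0 so that gl_VertexID is available (the draw call still has to issue three vertices):

// Vertex shader that builds one screen-covering triangle from gl_VertexID,
// so no vertex attributes are needed at all.
out vec2 vUv;

void main() {
	// gl_VertexID 0,1,2  ->  (0,0), (2,0), (0,2) in UV space
	vec2 p = vec2(float((gl_VertexID << 1) & 2), float(gl_VertexID & 2));
	vUv = p;
	gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);	// triangle extends past the screen edges
}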

I already have the mathematical sphere with my shader. Compared to what I do elsewhere in my app, the mathematical effort is very low. What is wrong is the reflection on the surface; I just need to do the right coordinate transformation. I'll probably have to get out paper and pencil again to work it out. I thought I could save myself the effort and just ask :grin:




A lot is possible with mathematics. Luckily I am a physicist. I can fly from far away down to the surface without interruption. I could load infrastructure, trees, woods, … and walk around on the ground. This is running on my tablet. I would like to publish the whole project under the MIT license once I've finished it. Unfortunately, I don't have any more detailed map material yet; paradoxically, performance would actually be better with higher-resolution maps, because I would then have to carry out fewer interpolation calculations for each coordinate point. Incidentally, I also use a mathematical sphere for the atmosphere, but there it's not about the surface, it's about ray tracing through the atmosphere. I also have clouds, but I deactivated them because they are too much for my mobile device. Clouds are very computationally intensive.

When I'm on the ground the resolution is 1 m, i.e. a vertex every meter. But that is easily adjustable; there are only two parameters I need to change if I want more or less. When I land, I currently have far more vertices than the resolution of the map material.

By the way, the sun is not a fake. I could also fly to the sun without interruption. It is a sphere with the radius of the sun at a distance of one astronomical unit, so everything is to scale.


The snapshots are amazing. I’d love to see it when it is done.

So you are a physicist with a fair knowledge of math, you have built this complex app, and now you are asking “what matrix” you should use to calculate how light reflects from the sphere? :wink:

This is indeed beautiful, I’d love to see more how it is done.

The way I know how to get normals of geometries computed at render time is by using SDFs and calculating the gradient by doing 3 samples with a small offset.

You can see a full shader in this example on Polygonjs: by_node_mat_raymarchingbuilder_usingarealights

You can see the complete shader in the right panel, but the relevant bit in this case would be:

vec3 GetNormal(vec3 p) {
	SDFContext sdfContext = GetDist(p);
	vec2 e = vec2(NORMALS_BIAS, 0);

	vec3 n = sdfContext.d - vec3(
		GetDist(p-e.xyy).d,
		GetDist(p-e.yxy).d,
		GetDist(p-e.yyx).d);

	return normalize(n);
}

where GetDist gives you the SDF value.
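
For reference, a minimal GetDist for a single sphere could look like the sketch below (SDFContext and the uniform names are assumed here just for illustration; the shader Polygonjs generates is more elaborate):

struct SDFContext { float d; };

uniform vec3 uSphereCenter;	// assumed: sphere center in world space
uniform float uSphereRadius;	// assumed: sphere radius

SDFContext GetDist(vec3 p) {
	// signed distance from p to the sphere surface
	float d = length(p - uSphereCenter) - uSphereRadius;
	return SDFContext(d);
}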

The SDF computation is done in world space. But if you want to use threejs lights, those are computed in camera space. So I use this to convert the gradient to camera space. After that I can use threejs light functions.

geometry.position = (VViewMatrix * vec4(pWorld, 1.0 )).xyz;
geometry.normal = transformDirection(transformDirection(_n, vModelMatrix), VViewMatrix);
geometry.viewDir = ( isOrthographic ) ? vec3( 0, 0, 1 ) : normalize( cameraPosition - geometry.position );

I’m not sure if that will completely solve your problem, as you seem to have a different setup, but maybe that can give you ideas.

Yes, I often have to work out on paper how I have to program something. If possible I would like to save myself the trouble here too, so I thought I'd just ask :sweat_smile:

Hm… :thinking: World coordinates are what I need. Since my camera is the reference system in the post-processing, the calculated sphere points are in the camera's reference frame.
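
If that is the case, the matrix is the camera's world matrix (the inverse of the view matrix). A minimal sketch, with an assumed uniform name (it would be fed with camera.matrixWorld from the JavaScript side):

uniform mat4 uCameraMatrixWorld;	// assumed: camera.matrixWorld = inverse of the view matrix

// a point constructed in the camera's reference frame -> world space
vec3 viewPointToWorld(vec3 pView) {
	return (uCameraMatrixWorld * vec4(pView, 1.0)).xyz;	// full 4x4, w = 1
}

// a direction or normal -> world space (rotation part only, w = 0;
// this also works for normals, since a camera transform has no scaling)
vec3 viewDirToWorld(vec3 dView) {
	return normalize(mat3(uCameraMatrixWorld) * dView);
}

With the intersection point and normal expressed in world space, a world-space light position can then be used directly for the diffuse and specular terms.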

How did I do this? I was thinking about making a homepage, because my code is very large now (many thousands of lines). My cloud shader alone, with its subcomponents, is about 2,000 lines long. And I still have so much to do with the clouds. Since I deactivated them for performance reasons, here is a screenshot of what the clouds look like; it depends on the selected cloud type and the sunlight.


My planetary generator is very neat, but I would have to polish it up to make it understandable, because it's really extensive and has a lot of formulas. Maps with a hundred times higher resolution would also look a lot better. Only the high-resolution maps around the area I'm in are processed in multiple threads, so I can move around freely without interruption. Currently I only have a resolution of 172k when I add up my 500+ maps; I would be happier with 1720k. However, a bit more RAM would be helpful, as I can only use 3 GB on my tablet even though it has 8 GB: Android alone occupies 4 GB and 800 MB are reserved. Unfortunately, I only have very low-resolution maps of Mars, so Mars unfortunately looks rather modest :slightly_frowning_face:
I'll need to check out more from NASA, because they have far better data.



I also only have the biosphere marker active on Mars because when I activate triplanar mapping the frame rate drops sharply again. On Mars you can see the low resolution very clearly. I urgently need to work on getting high-resolution maps.
