Coding jam: Pixel-perfect spheres without high-res geometry

Coding jam context: Interest for collaborative coding "jams"?
Discussion context: PointsMaterial + light support · Issue #17618 · mrdoob/three.js · GitHub

Idea: Pixel-perfect spheres rendered by raycasting on view-space quads in the fragment shader.

Demo: Perfect spheres

JSFiddle version: https://jsfiddle.net/EliasHasle/yrh4k72d/ (Try increasing the number of spheres while decreasing the radius. On my computer, in the small low-resolution result viewport of JSFiddle, 1 million spheres render at interactive/movie FPS, and 10 million spheres render at a few FPS. In full screen, 100 k spheres work fine, while 1 M spheres lag.)
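For illustration, here is a minimal GLSL sketch of the core idea (not the actual demo code, written as the fragment shader of a three.js ShaderMaterial, so three.js prepends the precision statement and standard uniforms). Each fragment of a camera-facing quad builds a view-space ray and intersects it analytically with the sphere; the varying names are illustrative:

// Per-fragment ray-sphere intersection (sketch).
varying vec3 vViewCenter;   // sphere center in view space
varying vec3 vViewPosition; // interpolated quad position in view space
varying float vRadius;      // sphere radius

void main() {
    // In view space the camera is at the origin, so the ray direction
    // is simply the normalized fragment position.
    vec3 rayDir = normalize(vViewPosition);

    // Solve |t * rayDir - vViewCenter|^2 = vRadius^2 for t.
    float b = dot(rayDir, vViewCenter);
    float c = dot(vViewCenter, vViewCenter) - vRadius * vRadius;
    float discriminant = b * b - c;
    if (discriminant < 0.0) discard; // this fragment misses the sphere

    float t = b - sqrt(discriminant);           // nearest intersection
    vec3 hit = t * rayDir;                      // view-space surface point
    vec3 normal = normalize(hit - vViewCenter); // exact view-space normal

    // Shading and the fragment depth are then computed from hit and normal.
    gl_FragColor = vec4(0.5 + 0.5 * normal, 1.0);
}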

Problems:

  1. Fragment depth calculation is ad hoc and wrong. The light source mesh consistently lands behind all sphere quads. Fixed! (A sketch of a consistent depth calculation follows this list.)
  2. I don’t understand the need for padding in the perspective branch. :confused: The whole point of radius-scaled view-space quads was to guarantee that the quads encompass the spheres.
  3. Branching shaders may harm performance.
  4. Quads have a large geometry overhead compared to points.
  5. Only a custom Blinn-Phong shading is implemented. Depending on custom code for all shading is not sustainable, but should be avoidable.
  6. Custom fragment depth disables early depth testing, but in principle many early tests would be correct. It may be possible to benefit from this by using WebGL2 and a layout(depth_greater) qualifier (or something like that) on the gl_FragDepth output declaration.
  7. Aliasing on the edges. “Drawing anti-aliased circular points using OpenGL/WebGL” (A Circular Reference) seems to provide a viable solution, but all attempts so far have resulted in black squares around the particles.
  8. The demo reminds me of m&m’s… :drooling_face::crazy_face::roll_eyes:
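Regarding problem 1: one way to get a consistent fragment depth (a sketch, not necessarily exactly what the demo does) is to project the view-space hit point from the sketch above with the same projectionMatrix used for the vertices, and map the result to window space:

// Requires the EXT_frag_depth extension in WebGL1:
// #extension GL_EXT_frag_depth : enable
vec4 clipPos = projectionMatrix * vec4(hit, 1.0);
float ndcDepth = clipPos.z / clipPos.w; // normalized device coordinates, [-1, 1]
gl_FragDepthEXT = 0.5 * ndcDepth + 0.5; // window-space depth, assuming the default [0, 1] depth range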

Possible extensions:

  1. Texturing from inverse equirectangular projection. This can be achieved by converting back from view space to object space (multiplying by a uniform inverse modelViewMatrix) and finding spherical coordinates, which can then be converted to UV. (A sketch follows this list.)
  2. Back and front renders.
  3. “Volumetric” rendering dependent on distance passed within sphere.
  4. Connect to as much as possible of the three.js ShaderChunk functionality for mesh materials, using the derived view-space positions and normals. Multiple lights, shadowing, and envmap reflection/refraction should be achievable.
  5. A simplified API for reuse.
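For extension 1, a hedged sketch of the UV computation (the uniform name inverseModelViewMatrix is an assumption; it would have to be supplied by the application, since three.js does not provide it by default):

// Equirectangular UV from the view-space hit point (sketch).
uniform mat4 inverseModelViewMatrix; // custom uniform, inverse of modelViewMatrix
const float PI = 3.141592653589793;

vec2 sphereUV(vec3 viewHit, vec3 viewCenter) {
    // Back to object space, then a direction relative to the sphere center.
    vec3 objHit = (inverseModelViewMatrix * vec4(viewHit, 1.0)).xyz;
    vec3 objCenter = (inverseModelViewMatrix * vec4(viewCenter, 1.0)).xyz;
    vec3 dir = normalize(objHit - objCenter);

    // Spherical coordinates mapped to equirectangular UV.
    float u = 0.5 + atan(dir.x, dir.z) / (2.0 * PI);
    float v = 0.5 + asin(dir.y) / PI;
    return vec2(u, v);
}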

Is it possible to achieve the same result with points instead of quads?
Advantages:

  1. Smaller geometry.
  2. Only have to change one attribute entry to move a particle.

Disadvantages:

  1. The number of discarded fragments cannot be reduced further, whereas the quad approach can, if desired, be extended to regular polygons with more edges that bound the circle more tightly.

Obstacles:

  1. Maximal point size is device-specific.
  2. Setting the right point size in the vertex shader. (A sketch follows this list.)
  3. Converting clip-space(?) point coordinates into view-space rays?
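For obstacle 2, the usual approach is something like the following vertex-shader sketch (radius and viewportHeight are custom inputs; note that it does not account for the extra perspective stretching toward the screen edges):

// gl_PointSize (diameter in pixels) from sphere radius, perspective
// projection and viewport height (sketch).
attribute float radius;        // custom per-point attribute
uniform float viewportHeight;  // custom uniform, in pixels

void main() {
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    // projectionMatrix[1][1] = cot(fov / 2); -mvPosition.z is the view-space distance.
    gl_PointSize = radius * projectionMatrix[1][1] * viewportHeight / (-mvPosition.z);
    gl_Position = projectionMatrix * mvPosition;
}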

Useful context: WebGL2 Drawing Without Data

Another approach would be to use instancing on the quads, as suggested by gman/greggman here. Update: Implemented in Perfect spheres. The code is much simpler, memory may be saved, and updating positions is easier, along with other possible benefits I am not yet aware of. But I think there is much more depth flickering at the level of whole spheres, indicating that the fragment depth does not work perfectly.
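For reference, the instanced setup can look roughly like this (a sketch with assumed attribute names, not the exact demo code):

// One shared unit quad, with per-instance center and radius attributes.
import * as THREE from 'three';

function makeSpheresGeometry(centers, radii) {
    // centers: Float32Array of length 3 * count, radii: Float32Array of length count
    const quad = new THREE.PlaneBufferGeometry(2, 2); // scaled per instance in the vertex shader
    const geometry = new THREE.InstancedBufferGeometry();
    geometry.index = quad.index;
    geometry.attributes.position = quad.attributes.position;
    geometry.setAttribute('instanceCenter', new THREE.InstancedBufferAttribute(centers, 3));
    geometry.setAttribute('instanceRadius', new THREE.InstancedBufferAttribute(radii, 1));
    return geometry;
}

// Usage with a ShaderMaterial containing the raycasting shaders:
// const spheres = new THREE.Mesh(makeSpheresGeometry(centers, radii), sphereMaterial);
// spheres.frustumCulled = false; // the default bounding volume does not cover the instances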

Improvements are more than welcome! The coding jam may eventually result in a collaborative GitHub repo.

Note: This is my first comment in this thread, but I have made many changes to the original post and demo.

I opened an issue on the flickering observed with instancing, but closed it again because of doubt. Does anyone here have an idea what may possibly cause differences in fragment depth calculations for instancing versus repeated quads?

I perceive a small improvement in the “flickering” after increasing the camera’s near plane distance. However, the problem still persists. I no longer see a difference between the instanced and the non-instanced versions, though, so I am leaning toward going instanced.

I have a hypothesis that the inconsistent depth may be a consequence of early depth tests run with interpolated depth against a depth buffer that is populated with custom depth. That explains the half-good results. I do not know of a workaround yet, except maybe upgrading to WebGL2 and seeing if layout specifiers for gl_FragDepth can do the trick. Then I can also test WebGL2 instancing with UBO.

Update: My source for the layout trick is LearnOpenGL - Advanced GLSL. It does not sound very promising:

Setting the depth value by our self has a major disadvantage however, because OpenGL disables all early depth testing (as discussed in the depth testing tutorial) as soon as we write to gl_FragDepth in the fragment shader. It is disabled, because OpenGL cannot know what depth value the fragment will have before we run the fragment shader, since the fragment shader might completely change this depth value.

By writing to gl_FragDepth you should take this performance penalty into consideration. From OpenGL 4.2 however, we can still sort of mediate between both sides by redeclaring the gl_FragDepth variable at the top of the fragment shader with a depth condition:

layout (depth_<condition>) out float gl_FragDepth;

So on the one hand, my hypothesis cannot possibly be right if the implementation is specification-compliant, and on the other hand, if it is right, WebGL2 cannot fix it, because the relevant layout specifier is only available in OpenGL 4.2 and above.

It could be some generative volume-filling pattern with spheres.

What do you mean? Are you talking about an application of the quad shader?

just an idea

Ideas are welcome! :slight_smile:

And, as always, bugs too! :wink:

That idea may be realized by many iterations of a fake physical simulation with “gravity”, damping and (soft) intersection constraints between particles, as well as coloring that depends on position (or correlates with particle density). Such a simulation is typically accelerated with a spatial index.
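To sketch what one such iteration could look like (findNeighbors stands in for a spatial-index query and is not implemented here; positions and velocities are arrays of THREE.Vector3):

// One iteration of the fake physics: gravity, damping and soft separation
// of overlapping particles (sketch).
function step(positions, velocities, radii, dt) {
    for (let i = 0; i < positions.length; i++) {
        velocities[i].y -= 9.81 * dt;        // "gravity"
        velocities[i].multiplyScalar(0.98);  // damping
        positions[i].addScaledVector(velocities[i], dt);
    }
    for (let i = 0; i < positions.length; i++) {
        for (const j of findNeighbors(i)) {  // hypothetical spatial-index query
            const delta = positions[i].clone().sub(positions[j]);
            const overlap = radii[i] + radii[j] - delta.length();
            if (overlap > 0) {
                delta.setLength(0.5 * 0.2 * overlap); // soft: only a fraction per iteration
                positions[i].add(delta);
                positions[j].sub(delta);
            }
        }
    }
}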

I also have my own application with bouncing balls, which will benefit from nicer shading. In that one I am also planning to make the collisions exact, and GPU-accelerate collision-free transitions.

I don’t feel that the spheres are quite ready for applications yet, though. First I want them to be bug-free and bound more tightly by the quads (which requires understanding problem 2 above). Also, I will need to pack them into a reusable module with a nice API (easy).

Wasn’t the context this topic? :smile:

The technique itself is a little case-dependent and, without more expensive raymarching on distance-field-based data, restricted to basic shapes. Another issue comes with depth: as in your demo with very densely distributed shapes, without altering depth they won’t naturally intersect with each other in the depth buffer.

For the API I would recommend just transforming the inputs to work like actual geometries, as I mentioned there:

I’d give more feedback but I’m sick and my cough makes typing feel like typing in a launching space rocket. :face_with_thermometer:

I had forgotten about that topic. Certainly one of the more interesting threads. :smile:

I do use the fragment depth extension, but it doesn’t seem to be 100% reliable.

Ehm, “working like actual geometries” sounds very ambitious. I focus only on making spheres as good as possible. :wink: I am thinking of something like:

class Quads extends Mesh {}

class Spheres extends Quads {
    constructor(count, position, radius, color) {
        // All optional except count and maybe position.
        // radius and color can be single values or arrays.
    }
    raycast(raycaster, intersects) {
        // Custom, based on mathematical spheres.
    }
    // etc., etc.
}
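Usage could then look something like this (hypothetical, just following the sketch above):

const positions = new Float32Array(3 * 10000).map(() => 100 * (Math.random() - 0.5));
const spheres = new Spheres(10000, positions, 0.5, 0xff4444);
scene.add(spheres);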

I must admit I am always a little fyrestarstruck by your comments. Hope you will get well soon!

@EliasHasle do you believe this is faster than InstancedBufferGeometry + SphereGeometry?

Update: This comment was written too long after I last worked on the project/experiment, and it looks like I did not remember everything correctly. Sorry.

I haven’t properly simplified the shader to optimize for speed (Update: or have I? I will have to check), but it can be roughly compared to a bunch of sphere geometries with Phong material. A fair comparison is difficult, since depth ordering (per pixel, i.e. depth testing, intersections) will more often fail with “point spheres” (Update: marked as “fixed” in the last version of the original post), whereas pixel-perfect shape and normals will fail for mesh spheres (or require very high-resolution meshes). I have not made any tests to compare performance, and the results could depend on hardware, drivers, etc. (Update: And possibly on usage, if any extra effort is put into ordering by depth to resolve remaining depth issues, e.g. when trying to introduce anti-aliasing using the alpha channel.)