GPGPU points into metaballs

This is more of an approach question that I'm thinking through, so excuse the lack of sample code (the project has a lot of GPGPU code), but please picture this scenario:

I have successfully used GPGPU to output a particle system: it takes geometry and applies some simplex noise to animate and liquify the particle field of a Points model, using a flow field which affects the vertex shader. All looks great so far.
Now I'm at the point where I have a fairly bare fragment shader (a ShaderMaterial on the Points model), and I want to achieve a metaball-type effect using raymarching: run one last fragment shader to turn the round particles into more of a liquid, where they sort of 'gloop' together based on proximity.

I've seen some pretty simple fragment shaders over on Shadertoy which use raymarching and SDFs to make metaballs and add some "gloopiness", for lack of a better term. These seem like a good starting point for my fragment shader; however (please excuse me if this assumption is wrong), it seems like I would have to compute the full scene SDF and raymarch everything for every particle/vertex, or run another render pass? That seems like a very computationally expensive process. Is there a better way to do this?
Apologies, but my brain is slightly fried from getting to this point, and perhaps my assumptions are all wrong or I'm missing something. I feel like someone must have already cracked this.


You're correct that you need to compute the entire scene SDF per pixel, but it turns out that isn't actually too bad for a handful of metaballs.
It sounds insane, but GPUs are pretty insanely fast.

Here’s an extreme example of per pixel sdf wielded by an actual magician: Shader - Shadertoy BETA

Every pixel in that scene is somehow evaluating (or finding ways to cull) the SDF for every bump on the ground, spheres in the character, the ground… the ground blobs… the sky… the clouds… everything.

(and shadows, and subsurface translucency, and motion blur and dithering etc. etc. )
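To make the "whole scene SDF per pixel" idea concrete, here's a minimal CPU sketch in JavaScript of what each fragment would be doing: a smooth-minimum union of per-ball SDFs, plus basic sphere tracing. All names (`smin`, `sceneSDF`, `raymarch`) are illustrative, not from any library; in practice this loop lives in GLSL.

```javascript
// Polynomial smooth minimum (after Inigo Quilez) — blends two SDFs so
// nearby spheres "gloop" together instead of hard-intersecting.
function smin(a, b, k) {
  const h = Math.max(k - Math.abs(a - b), 0) / k;
  return Math.min(a, b) - h * h * k * 0.25;
}

// Scene SDF: smooth union of every ball — this is the part that gets
// evaluated for EVERY ray step of EVERY pixel.
function sceneSDF(p, balls, radius, k) {
  let d = Infinity;
  for (const b of balls) {
    const db = Math.hypot(p[0] - b[0], p[1] - b[1], p[2] - b[2]) - radius;
    d = smin(d, db, k);
  }
  return d;
}

// Basic sphere tracing along a ray; returns the hit distance,
// or Infinity if the ray misses everything.
function raymarch(ro, rd, balls, radius, k) {
  let t = 0;
  for (let i = 0; i < 128; i++) {
    const p = [ro[0] + rd[0] * t, ro[1] + rd[1] * t, ro[2] + rd[2] * t];
    const d = sceneSDF(p, balls, radius, k);
    if (d < 0.001) return t; // close enough: hit
    t += d;                  // the SDF guarantees this step is safe
    if (t > 100) break;      // flew past the scene
  }
  return Infinity;
}
```

The expensive part is exactly what the question suspects: `sceneSDF` loops over every ball, inside a per-pixel marching loop.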

How's about 25,000 points :smirk: :smile: that I'm trying to liquify into a single mass, lol

Ahh I see… yeah, for that you want to use some techniques like in this article:

So I feel like I'm creeping toward a potential solution by forking the Raymarching Cubes demo and simplifying it here:

This seems to do all of the geometry in the shader, though, which isn't ideal for me, as I'd like to be using the vertices I've been manipulating from my Points() GPGPU instance.

Already this seems like it's going to get very laggy if I push it to thousands of vertices, though. I'm not sure this approach is going to work.

Ahh yeah, I don't think you understood what I was saying.
You're gonna have a bad time looping over 25k points in a shader, which is why you probably need a blending/fill-based solution like the one I linked above.
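The blending/fill idea can be sketched like this: each particle is splatted once into a field buffer (additive blending does this for free on the GPU), and the metaball silhouette is just a threshold of that summed field, so no pixel ever loops over all particles. A hypothetical CPU version, with illustrative names:

```javascript
// Splat each particle's smooth radial falloff into a 2D field.
// Cost is O(particles × footprint), not O(pixels × particles).
function splatField(w, h, particles, r) {
  const field = new Float32Array(w * h);
  for (const [px, py] of particles) {
    // Only touch pixels inside this particle's footprint.
    const x0 = Math.max(0, Math.floor(px - r)), x1 = Math.min(w - 1, Math.ceil(px + r));
    const y0 = Math.max(0, Math.floor(py - r)), y1 = Math.min(h - 1, Math.ceil(py + r));
    for (let y = y0; y <= y1; y++) {
      for (let x = x0; x <= x1; x++) {
        const d = Math.hypot(x - px, y - py) / r;
        if (d < 1) field[y * w + x] += (1 - d * d) ** 2; // smooth falloff, 1 at center, 0 at edge
      }
    }
  }
  return field;
}

// The metaball surface is just an iso-threshold on the summed field.
function isInside(field, w, x, y, iso = 0.5) {
  return field[y * w + x] >= iso;
}
```

Note how the threshold is what makes particles merge: a point that neither particle covers strongly on its own can still exceed the iso value once their falloffs sum.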

What I'm ultimately driving at is a kind of interactive 3D fluid simulation based upon the points I already have.
I read through the Codrops tutorial too, and I think that joining each point to every other point in the points fragment shader isn't feasible, but working out an SDF of the points every frame and then doing some kind of raymarching over the full model geometry should be less intense to process.
I found SDFGeometryGenerator in the docs, but there are basically no examples of it that I can find to learn from, with the hope of implementing it with a raymarcher.
Any suggestions would be super appreciated!

A 2D SDF seems pretty straightforward… rendering particles with a radial gradient and the right blending mode might do that directly.
Doing it in 3 dimensions invokes a bunch of other issues, since you can't just make a 3D texture with enough resolution to encapsulate the SDF for points of that density… so then you're looking at GPU acceleration structures, perhaps a sparse voxel octree or similar, in order to cull the set of particles you need to consider in your raymarching shader.
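The culling idea can be sketched with the simplest acceleration structure, a uniform grid: bin the particles once per frame, and then each SDF evaluation only considers particles in the 27 cells around the sample point rather than all 25k. A hypothetical JavaScript sketch (an octree refines the same idea for uneven densities; all names here are illustrative):

```javascript
// Bin particle indices into a hash map keyed by integer cell coords.
function buildGrid(particles, cellSize) {
  const grid = new Map();
  particles.forEach((p, i) => {
    const k = `${Math.floor(p[0] / cellSize)},${Math.floor(p[1] / cellSize)},${Math.floor(p[2] / cellSize)}`;
    if (!grid.has(k)) grid.set(k, []);
    grid.get(k).push(i);
  });
  return grid;
}

// Gather candidate particles within one cell of a query point —
// the SDF at that point only needs these, not the full set.
function nearbyIndices(grid, p, cellSize) {
  const out = [];
  const cx = Math.floor(p[0] / cellSize);
  const cy = Math.floor(p[1] / cellSize);
  const cz = Math.floor(p[2] / cellSize);
  for (let dx = -1; dx <= 1; dx++)
    for (let dy = -1; dy <= 1; dy++)
      for (let dz = -1; dz <= 1; dz++) {
        const cell = grid.get(`${cx + dx},${cy + dy},${cz + dz}`);
        if (cell) out.push(...cell);
      }
  return out;
}
```

On the GPU the same structure is usually stored in textures (cell offsets plus a sorted index list), but the culling logic is identical; the cell size should be at least the metaball radius so no contributing particle is missed.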


Hello! I think I once did something similar. I used 2 passes: a rough one in the vertex shader and then a more precise one in the fragment shader, where raymarching started from a distance calculated in the vertex shader. I simply added a plane to the scene, displayed over the whole viewport with a division step of about 10 pixels, and set my ShaderMaterial on that mesh.
GitHub - trushka/liquid-drop - I’m sorry, maybe my code is not very readable.
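If I follow, the two-pass idea is roughly: a cheap coarse march (the sparse vertex-shader pass on the subdivided plane) finds an approximate hit distance, and the precise fragment-shader march starts from just before that distance instead of from the camera. A minimal CPU sketch, using a single-sphere SDF purely for illustration (the real scene SDF would be the smooth union of all particles):

```javascript
// Illustrative stand-in for the full scene SDF: one unit sphere at the origin.
function sdf(p) { return Math.hypot(p[0], p[1], p[2]) - 1.0; }

function at(ro, rd, t) { return [ro[0] + rd[0] * t, ro[1] + rd[1] * t, ro[2] + rd[2] * t]; }

// Pass 1 ("vertex" pass): fixed coarse steps — cheap, and it only
// needs to land somewhere near the surface.
function coarseMarch(ro, rd, step = 0.5, tMax = 20) {
  for (let t = 0; t <= tMax; t += step) {
    if (sdf(at(ro, rd, t)) < step / 2) return t;
  }
  return Infinity;
}

// Pass 2 ("fragment" pass): precise sphere tracing, but starting from
// just before the coarse hit instead of from the camera.
function fineMarch(ro, rd, tStart) {
  let t = Math.max(tStart - 0.5, 0); // back off so we start outside the surface
  for (let i = 0; i < 64; i++) {
    const d = sdf(at(ro, rd, t));
    if (d < 0.001) return t;
    t += d;
    if (t > 100) break;
  }
  return Infinity;
}
```

In the mesh version, the coarse `t` would be computed per vertex of the subdivided plane and interpolated into the fragment shader, so each fragment's march only covers the last short stretch of the ray.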

This, plus some postprocessing for the 2D case, can also help make the particles melt into each other.


@manthrax so perhaps outputting the points as a b&w texture with an alpha channel, then running a custom ShaderPass to do the SDF/raymarched liquify effect?
Just thinking this through now: perhaps I can optimise this by using a 2D b&w texture and simply not bothering with any raymarching if the colour at the UV is black, only marching where it's white (or maybe even somehow using a normalised greyscale to get the depth of the point?).
Is this the sort of approach you mean?
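That early-out can be sketched as a mask lookup before the expensive work — assuming the particles have been rendered once to a low-res coverage texture, any black texel skips raymarching entirely, and a greyscale value could additionally seed a starting depth. A hypothetical sketch with illustrative names (`mask` stands in for a texture fetch, `expensiveMarch` for the raymarching pass):

```javascript
// Per-pixel early-out: consult the cheap coverage mask first, and only
// run the expensive march where particles actually rendered.
function shadePixel(mask, w, x, y, expensiveMarch) {
  const m = mask[y * w + x];
  if (m === 0) return null;  // black texel: no particles near this pixel, skip marching
  return expensiveMarch(m);  // white/grey texel: march, optionally seeded by the mask value
}
```

On the GPU the same thing is a texture fetch followed by an early `discard`/return in the ShaderPass fragment shader; with mostly-empty screens it can skip the march for the vast majority of pixels.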