Shader to create an offset, inward-growing stroke?

Well, it is a question that got answered so I think it’s in the right place.

Nice answer btw :smile_cat:

Sorry to bump this one again… I’m trying to figure out another part to this! :joy:

The results from @prisoner849 are awesome, but currently I’m having an issue with aliasing, especially on smaller nodes, or nodes farther away from the camera. When zoomed up close to a point, the smoothstep() function does fine at keeping things from being jaggy, but it gets pretty rough the farther away the particles are… Like this:

Any thoughts? Antialiasing is enabled in the renderer, but I’m assuming the per-pixel output of the fragment shader doesn’t receive antialiasing; only geometry edges would benefit from that?

Here’s the link to the pen that @prisoner849 created:

Ah, okay :slight_smile::beers:

Maybe this thread will be helpful:

I’ve tried it with FXAA:

Can’t say there is much improvement in the visuals. :thinking:

Hmm… Just made this :slight_smile:

Hmmm, interesting… I saw a few links which tackle antialiasing through the shader, but they all use standard derivatives:

I had trouble trying to enable the standard derivatives extension in three.js, which led me to this:

Still no luck :thinking:

Hey @prisoner849 or anyone else coming to this thread looking for answers; I figured out the antialiasing bit! (well, mostly)

The key was enabling the standard derivatives extension in the fragment shader and then applying the fwidth() function to a few of the spots in the return statement of the getShape function.

First, to enable standard derivatives, you need to be using a RawShaderMaterial, and then at the top of the shader, you put this statement:

#extension GL_OES_standard_derivatives : enable
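
For context, here’s a minimal sketch of wiring this up in three.js (assuming a WebGL1 context; the shader body is illustrative only). With RawShaderMaterial nothing is prepended to your source, so the directive can sit on the very first line, where GLSL requires it:

```javascript
// Assumption: three.js RawShaderMaterial on a WebGL1 context.
// RawShaderMaterial prepends no boilerplate, so the #extension directive
// can be the very first line of the fragment shader source.
const fragmentShader = `#extension GL_OES_standard_derivatives : enable
precision highp float;

varying vec2 vUv;

void main() {
  // fwidth() is only available once the extension is enabled
  float edge = fwidth(vUv.x);
  gl_FragColor = vec4(vec3(edge), 1.0);
}`;

// const material = new THREE.RawShaderMaterial({ vertexShader, fragmentShader });
```

With a regular ShaderMaterial you can instead set material.extensions.derivatives = true and let three.js inject the directive for you (and on WebGL2, derivatives are built in, so none of this is needed).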

Secondly, here’s the getShape() function in its entirety. Take a look at the return statement to see where I used fwidth().

float getShape(float thickness, float outer, vec2 uv) {
    uv *= 2.0;
    float a = atan(uv.x, -uv.y) + PI;
    float r = TWO_PI / vSides;
    float d = cos(floor(.5 + a / r) * r - a) * length(uv);
    return smoothstep(thickness - fwidth(d), thickness + fwidth(d), d)
         - smoothstep(outer - fwidth(d), outer + fwidth(d), d);
}
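
To make the math concrete, here’s a CPU-side JavaScript port of that distance field (illustrative only; fwidth() has no CPU equivalent, so a fixed `soft` value stands in for the pixel-derivative width):

```javascript
// CPU-side port of the shader's regular-polygon distance field, for
// illustration. PI/TWO_PI match the shader constants; `soft` is a fixed
// stand-in for fwidth(d).
const PI = Math.PI;
const TWO_PI = 2 * PI;

function clamp(x, lo, hi) { return Math.min(Math.max(x, lo), hi); }

function smoothstep(e0, e1, x) {
  const t = clamp((x - e0) / (e1 - e0), 0, 1);
  return t * t * (3 - 2 * t);
}

// Distance field for a regular n-gon, same formula as the GLSL above.
function polyDist(uvx, uvy, sides) {
  const x = uvx * 2, y = uvy * 2;            // uv *= 2.0
  const a = Math.atan2(x, -y) + PI;
  const r = TWO_PI / sides;
  return Math.cos(Math.floor(0.5 + a / r) * r - a) * Math.hypot(x, y);
}

// 1.0 inside the outline ring (between thickness and outer), 0.0 elsewhere,
// with a smooth band of width 2 * soft at both edges.
function getShape(thickness, outer, uvx, uvy, sides, soft = 0.01) {
  const d = polyDist(uvx, uvy, sides);
  return smoothstep(thickness - soft, thickness + soft, d)
       - smoothstep(outer - soft, outer + soft, d);
}
```

For a square (sides = 4), a point at uv = (0.425, 0) has d = 0.85, which lands inside a 0.8…0.9 ring, while the center has d = 0 and falls outside it.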

Another cool thing about this getShape function is that I don’t need to have separate functions (or conditional branching) to make different shapes! The same function that makes a triangle will work for any n-sided shape. In the above function, the variable vSides is just a varying that I’ve pulled in to the shader as an attribute (i.e. 3.0 for triangle, 4.0 for square, 50.0+ for a circle, etc.).

Anyways, I mentioned above that I mostly solved the anti-aliasing issue. If you look closely at the codepen below, anywhere a shape overlaps another you can see the aliasing on the outer edge of the black outline; I believe it’s because of the discard that’s being done to remove the extra pixels around the shape, and I couldn’t seem to antialias that using fwidth(). Not sure if there’s another method to get around it, but honestly I’m super happy with the results and will perhaps revisit it down the line. This is the line I’m referring to:

if (shape < 0.5) discard;

Anyways, thank you so much again, @prisoner849! I’m making great progress now :smile:

Here’s the final codepen which has some extra goodies like attributes controlling color saturation, brightness, thickness, scale, etc. They’re all randomly generated at runtime in the codepen, but it’s cool to see it all working :slight_smile:


Yeah, I had the same thought, but I was too lazy to change the shader :smile:

Now everything looks really cool :+1:

Have you considered altering opacity smoothly instead of discarding? Something like:

gl_FragColor.a = smoothstep(0.0, 1.0, shape);

(or with parameterized sharpening around the threshold value, like smoothstep(0.0, 1.0, 0.5 + sharpness * (shape - 0.5)))
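
As a quick illustration of what that remapping does (plain JavaScript sketch; the function names are mine):

```javascript
// Remap `shape` around the 0.5 threshold before smoothstep: larger
// `sharpness` compresses the transition band, giving a harder edge.
function smoothstep(e0, e1, x) {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}

function alphaFor(shape, sharpness) {
  return smoothstep(0.0, 1.0, 0.5 + sharpness * (shape - 0.5));
}
```

At shape = 0.5 the alpha is always 0.5 regardless of sharpness; with sharpness = 10, values only slightly off the threshold already clamp to fully opaque or fully transparent.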

A-ha! I still work on this project from time to time and will look at this tonight :slight_smile: Thank you for this! :+1:

Note that altering opacity may turn out to be more expensive than discarding, though. The bright side is that you can avoid branching. I had a fragment shader recently that used nested conditionals to color pixels differently, and it loaded my GPU at 100% all the time. After I switched out all the hard boundaries for sharpened sigmoid functions, the load dropped to around 30%, and it looked better too.

Yes! That is the only branching logic I have in the shader, and it has given me an eye twitch :slight_smile: If this solution produces a nicer look with better performance, I will be very happy!

No rest for crazy people! :smile:
Just tried that shaping function with THREE.PointsMaterial() + .onBeforeCompile (simplified version):

Still no enabled transparency, just alphaTest: 0.5

Nice and simple. But the edges still look like stairs. You need smoothing on the alpha channel too, over the outer contour. I tried to make it work just now, but I must admit that I don’t understand the code well enough (and don’t really have time).

Yeah. Here is a working example:

Simply setting transparent to true (with no alphaTest parameter) gives an odd-looking result, as farther points can overlap closer ones. To solve this issue, we need to sort the particles by distance from the camera. I found an implementation of this algorithm in this example and applied it to my points.
Added Stats to both the current and the previous codepens.
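
The back-to-front sort can be sketched like this (names are illustrative; in three.js you’d write the returned order into geometry.index each frame):

```javascript
// Back-to-front depth sort for transparent points (illustrative names).
// `positions` is a flat [x, y, z, ...] array; `camera` is an [x, y, z] point.
function sortIndicesByDepth(positions, camera) {
  const count = positions.length / 3;
  const dist2 = i => {
    const dx = positions[3 * i]     - camera[0];
    const dy = positions[3 * i + 1] - camera[1];
    const dz = positions[3 * i + 2] - camera[2];
    return dx * dx + dy * dy + dz * dz;
  };
  const order = Array.from({ length: count }, (_, i) => i);
  order.sort((a, b) => dist2(b) - dist2(a)); // farthest first
  return order;
}
```

Sorting farthest-first means the blend equation sees distant points before near ones, which is what standard alpha blending needs to composite correctly.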

Ah, that’s almost perfect! Maybe tweak the edge smoothness with a uniform, adjusted depending on screen resolution? Hm. I think you would transform d like this:

d = 0.5+sharpness*(d-0.5);

Hm, I don’t understand this. Isn’t sorting the particles by distance from camera what the vertex shader does with gl_Position.z? Is it a matter of z-fighting? In that case, could reducing camera range be enough in most cases?

Just comment out the line sortPoints(collective); in my last codepen and you’ll see the difference.
It seems that for objects with transparency enabled, the result depends on the order of vertices in the geometry, and for an indexed geometry it depends on its .index (which is what we change by sorting in the last codepen here).

Are you manipulating the drawing order of the vertices by ordering the index, to not rely on correct depth testing and alpha blending? Isn’t drawing order device-dependent, since vertex processing is in principle parallel?

Whoah, that’s a strange effect! (On my computer, at least.) The (supposedly transparent) surrounding squares partly overwrite neighbor particles. I haven’t yet understood how/why you use(d?) alphaTest here.

Sorting points like that at every animation step is not sustainable/scalable. Hm. I still don’t understand why it is necessary. I am encountering the same problem with my spheres, when I try to set opacity on them instead of discarding fragments (eventually to achieve antialiasing). It is as if points forcibly write the clear color on top of other points.

Agree :slight_smile:

If everything were so simple with transparency, we wouldn’t have such interesting topics as that Depth peel and transparency one.