I haven’t seen any good overviews of Signed Distance Fields, and specifically their application to ray tracing. So I thought I’d give it a go; some of you might find it useful.
First, some pictures.
The first set is rendered by ray tracing a 512x512x512 SDF texture; the second set consists of standard rasterized renders.
The ray-traced ones visualize the number of steps (texture samples) we take in order to get a hit. The scale uses the Turbo color map and goes from 0 to 128. We’ll come back to it.
What are SDFs?
Canonically, the best introduction is by Inigo Quilez (iq). But let’s unpack the definition first.
Signed Distance Field. The F is also sometimes interpreted as “Function”.
Let’s start with the “field” part, as that’s doing the most work. A field is a concept from physics where we take some phenomenon, such as magnetism, and describe it in relation to space. I’m sure most of you are familiar with a diagram like this:
A magnetic field exists everywhere; technically it’s unbounded and extends to infinity, it just has a weaker and weaker effect.
Another common example is a gravitational field; here’s a visualisation for the Earth and the Moon:
So, a field is just a measure of some value across space. Gravity, magnetism… or, in our case, distance. Visualising a distance field for a 2d circle, we would get something like this:
Where white is “far” and black is “near”
Now the last bit: the Sign. Adding a sign to the distance field means we allow negative distances. This is the biggest piece of complexity here: a negative distance is distance “inside”. If we take our circle, we could extend the field into its interior like so:
where red represents negative values.
It might not seem immediately useful, but if we highlight the area of the field where the sign transitions (from positive to negative), we find the surface of the object described by the field:
We can use this property to encode and find surfaces in 3d (and 2d) space. That’s pretty much it.
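In code, the circle field from the pictures above is a one-liner. Here’s a 2d sketch (sdCircle is just an illustrative name, following iq’s conventions):

float sdCircle( vec2 p, float r )
{
    // positive outside, zero exactly on the boundary, negative inside
    return length(p) - r;
}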
Now, with that out of the way, it’s worth mentioning that SDFs are often described by functions like these. Here’s the 3d equivalent, a sphere:
float sdSphere( vec3 p, float s )
{
    return length(p) - s;
}
This defines a sphere of radius s centered at the origin. There’s a more complete overview of these by iq on Shadertoy.
We’re not going to talk much about those here. Functions are cool, but they are limited: as you describe your scene, you’re essentially building a more and more complex function, and evaluating it costs progressively more.
The more interesting case is arbitrary 3d meshes, as that’s what we typically work with.
How to describe a mesh SDF?
There are a few different approaches, but generally we use a 3d texture and record the AABB (Axis-Aligned Bounding Box) of the mesh; the texture stores distances for points inside that box.
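As a rough sketch of what sampling such a texture can look like (the uniform names here are made up for illustration), we remap a world-space point into the AABB and take a single texture tap; hardware trilinear filtering interpolates distances between voxels for free:

uniform sampler3D uSdf;    // baked distances for the mesh
uniform vec3 uAabbMin;     // mesh bounding box (assumed uniforms)
uniform vec3 uAabbMax;

float sampleSdf( vec3 p )
{
    // remap the world-space point into the texture's [0, 1] coordinate range
    vec3 uvw = (p - uAabbMin) / (uAabbMax - uAabbMin);
    return texture( uSdf, uvw ).r;
}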
Most of the space will be nowhere near a surface, so another trick is to use sparse 3d textures, but that is outside the scope of this article.
There are also more obscure methods, such as approximating a mesh with a bunch of primitive SDFs (spheres, capsules) organized in some kind of acceleration structure such as a BVH. I found these to be mostly of academic interest; they are wildly impractical due to various limitations.
Building a Mesh SDF
Building SDFs from meshes is an interesting topic in itself. It boils down to 3 things:
- How to perform a union of 2 SDFs, which is just the min operator (see the sketch after this list)
- How to compute the SDF of a triangle
- How to determine if we’re inside or outside of a mesh
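The union part really is a one-liner; in GLSL it could look like this:

float opUnion( float d1, float d2 )
{
    // at every point, keep whichever surface is closer
    return min( d1, d2 );
}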
For the (unsigned) distance to a triangle, iq has you covered (dot2 is his squared-length helper, added here so the snippet is self-contained):
float dot2( vec3 v ) { return dot( v, v ); }  // squared length helper

// unsigned distance from point p to the triangle (a, b, c)
float udTriangle( vec3 p, vec3 a, vec3 b, vec3 c )
{
    vec3 ba = b - a; vec3 pa = p - a;
    vec3 cb = c - b; vec3 pb = p - b;
    vec3 ac = a - c; vec3 pc = p - c;
    vec3 nor = cross( ba, ac );

    return sqrt(
        // if p projects outside the triangle, take the closest edge...
        (sign(dot(cross(ba,nor),pa)) +
         sign(dot(cross(cb,nor),pb)) +
         sign(dot(cross(ac,nor),pc)) < 2.0)
        ?
        min( min(
            dot2(ba*clamp(dot(ba,pa)/dot2(ba),0.0,1.0)-pa),
            dot2(cb*clamp(dot(cb,pb)/dot2(cb),0.0,1.0)-pb) ),
            dot2(ac*clamp(dot(ac,pc)/dot2(ac),0.0,1.0)-pc) )
        :
        // ...otherwise take the distance to the triangle's plane
        dot(nor,pa)*dot(nor,pa)/dot2(nor) );
}
Determining inside/outside is trickier. The most common technique is to cast a bunch of rays in random directions and check whether we hit more “back” surfaces than “front” ones, by evaluating the triangle normal at each hit. If most hits are back faces, we assume the point is “inside” the mesh and give it a negative distance.
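Putting the three pieces together, here is a hypothetical sketch of evaluating a single voxel of the SDF texture: a running min of udTriangle over all triangles, with the sign decided by the back-face majority vote described above. castRay, randomDirection and the uTriangles layout are assumptions made for illustration; their implementations are not shown:

const int NUM_TRIS = 256;               // assumed triangle count
const int NUM_RAYS = 16;                // directions for the inside/outside vote

uniform vec3 uTriangles[NUM_TRIS * 3];  // assumed flat vertex buffer, 3 vertices per triangle

struct Hit { bool valid; vec3 normal; };

Hit  castRay( vec3 origin, vec3 dir );  // assumed: nearest triangle hit
vec3 randomDirection( int i );          // assumed: roughly uniform directions

bool insideMesh( vec3 p )
{
    int backHits = 0;
    for (int i = 0; i < NUM_RAYS; ++i)
    {
        vec3 dir = randomDirection( i );
        Hit h = castRay( p, dir );
        // a hit whose normal points along the ray direction is a back face
        if (h.valid && dot( h.normal, dir ) > 0.0)
            backHits++;
    }
    // a majority of back faces means the point is inside the mesh
    return 2 * backHits > NUM_RAYS;
}

float evaluateVoxel( vec3 p )
{
    // union of unsigned triangle distances is a running min
    float d = 1e30;
    for (int i = 0; i < NUM_TRIS; ++i)
        d = min( d, udTriangle( p, uTriangles[3*i], uTriangles[3*i+1], uTriangles[3*i+2] ) );

    // apply the sign: negative inside, positive outside
    return insideMesh( p ) ? -d : d;
}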
What can we do with SDFs?
Okay, so by now you’re probably asking:
- “Alex, you explained a bunch of math but what’s the point?”
Ray marching. We can march rays through an SDF to perform ray queries, but we also get something better: cone tracing. By marching a ray through an SDF in some direction, we can record the distance to the nearest surface along the way, in other words, how close the ray passed to anything. This information can then be used to answer incredibly hard questions in graphics, such as “how much light reached point X”. It’s an innocent-looking question, and it seems easy for hard (point) lights, but once you consider soft lights and penumbra, it gets very nasty. Ray marching just sidesteps all that complexity and gives you a direct answer, meaning you get an accurate penumbra value with just a single ray (cone).
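To make that concrete, here is a minimal sphere-tracing sketch, plus a soft-shadow estimate in the spirit of iq’s well-known softshadow() trick. sceneSdf() stands in for whatever field you are sampling (e.g. the 3d texture tap sketched earlier):

const int   MAX_STEPS = 128;
const float EPSILON   = 0.001;

float sceneSdf( vec3 p );  // assumed: distance to the nearest surface

// classic sphere tracing: the field value is the largest step we can
// take that is guaranteed not to skip over any surface
float march( vec3 origin, vec3 dir, float maxDist )
{
    float t = 0.0;
    for (int i = 0; i < MAX_STEPS; ++i)
    {
        float d = sceneSdf( origin + dir * t );
        if (d < EPSILON) return t;  // hit
        t += d;                     // safe step
        if (t > maxDist) break;
    }
    return -1.0;  // miss
}

// cone-traced soft shadow: track how close the ray passed to any surface
// relative to the distance travelled; k (assumed parameter) controls how
// sharp the penumbra is
float softShadow( vec3 origin, vec3 dir, float maxDist, float k )
{
    float res = 1.0;
    float t   = 0.02;  // small offset to avoid self-shadowing
    for (int i = 0; i < MAX_STEPS && t < maxDist; ++i)
    {
        float d = sceneSdf( origin + dir * t );
        if (d < EPSILON) return 0.0;  // fully occluded
        res = min( res, k * d / t );  // closest angular approach so far
        t += d;
    }
    return res;  // 1 = fully lit, values in between = penumbra
}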
For comparison, to get a noise-free result with ray tracing against triangles, you’d need to cast in the ballpark of 500 rays, or have some very aggressive denoiser working overtime.
Big deal, we can trace rays, so what? Well, here’s another useful bit: tracing rays through an SDF is an order of magnitude faster than doing so against triangles. This is not true for simple scenes with a handful of triangles, but once we go past 100,000 triangles or so, the cost of SDF tracing remains constant while the cost with triangles goes up and up.
What are the benefits of SDFs vs triangles?
In short, an SDF lets us describe surfaces in 3d space using a fixed amount of data. If you have a mesh with 1,000,000 triangles, you need to store and evaluate 1,000,000 elements of data. A 3d texture lets you pick whatever resolution you’re happy with; for example, a 128x128x128 texture at one byte per voxel is just 2 MB, regardless of triangle count.
Another aspect is that a 3d texture is trivial to sample. If we take a point in space and ask:
- What are the nearby triangles?

Answering that question with triangles is hard, because you essentially need to check every triangle. You could use an acceleration structure, but you’re not going to do better than O(log(n)). And that is to say nothing of memory access patterns. A 3d texture is O(1) to look up, has much better memory access patterns, and lets us exploit the texture units in GPU hardware.
Also, it will hopefully whet your appetite to know that Epic’s Unreal Engine relies on SDFs for its ray tracing.
Going back to the examples from earlier
To make the cost of ray casting easier to understand, let’s change the scale to go from 1 to 16 texture samples:
Most of the scene gets resolved in ~4 taps; some 15% of the pixels require more than 8 taps, and only about 5% require the full 16 taps.
Depending on your SDF representation, this is about as cheap as, or cheaper than, a standard SSAO pass, as those tend to require 32 samples or more.
Disclaimer
- All screenshots are taken in Shade
References
- Inigo Quilez :: computer graphics, mathematics, shaders, fractals, demoscene and more
- Volumetric Rendering: Signed Distance Functions - Alan Zucconi
- Signed distance function - Wikipedia
- 9bit Science: Raymarching Distance Fields