Hello! I’ve been looking for a way to draw my model with edges. The issue is that, while all of the various methods work well, none of them (to my knowledge) can ignore backfaces. My model can have backfaces in front of a front-facing face. In a render with no edges and a `MeshBasicMaterial({ side: THREE.FrontSide })`, it looks fine: backfaces are culled and I see only the front-facing faces.

I have used @gkjohnson’s Conditional Edges several times, and it always works well. This time though, my model is open and there can be situations where there are backfaces in between the camera and a front face:

Here there is a form on the right that is open and in front of the form on the left. We see through the form on the right because I use `THREE.FrontSide` on the mesh material. The conditional edges are drawn for all of the faces, no matter their winding.

I have attempted, unsuccessfully, to remove the backfaces from the mesh (by looking at the face winding) and pass only the front-facing faces. I probably overcomplicated things: I built a triangle from the face vertices, projected it to screen space, calculated the slope of the line from the first vertex to the second, and then determined which side of that line the third point falls on. It works on paper, but I end up with too many conditions and never a clear selection of front-facing versus back-facing. I have also looked at the face normal with respect to the camera world direction. I was also thinking of using `WebGLRenderer.readRenderTargetPixels` to sample the pixel color at each face midpoint and decide whether to include or omit the face, but that would require a prerender pass that I would like to avoid, since the scene changes over time.
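For what it’s worth, the screen-space test described above can be done without slopes at all. A sketch in plain JavaScript (the function and point names are illustrative, not from the project):

```
// Orientation of a projected triangle via the 2D cross product of two
// of its edges. No division, so vertical edges and equal slopes need
// no special cases.
function screenWinding( a, b, c ) {
	// > 0: counter-clockwise (front), < 0: clockwise (back), 0: degenerate
	return ( b.x - a.x ) * ( c.y - a.y ) - ( b.y - a.y ) * ( c.x - a.x );
}
```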

How could I approach this so that only the faces which are going to be rendered (via the material’s side prop) get edges?

Lines don’t have a facing direction, so there’s no way to do this out of the box. But you should be able to track the associated triangle face normal along with each edge and discard back-facing edges in a custom shader, so it works in real time.
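A rough CPU-side sketch of that idea, in plain JavaScript without three.js (function and variable names are assumptions, not from Conditional Edges; in a shader you would instead pass the normal as a per-vertex attribute and `discard` in the fragment stage):

```
// Minimal vector helpers
function sub( a, b ) { return [ a[0] - b[0], a[1] - b[1], a[2] - b[2] ]; }
function dot( a, b ) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function cross( a, b ) {
	return [
		a[1] * b[2] - a[2] * b[1],
		a[2] * b[0] - a[0] * b[2],
		a[0] * b[1] - a[1] * b[0]
	];
}

// triangles: array of [a, b, c] vertex triples (CCW winding = front).
// Returns only the edges whose owning triangle faces the camera.
function frontFacingEdges( triangles, cameraPos ) {
	const edges = [];
	for ( const [ a, b, c ] of triangles ) {
		const normal = cross( sub( b, a ), sub( c, a ) );
		const mid = [
			( a[0] + b[0] + c[0] ) / 3,
			( a[1] + b[1] + c[1] ) / 3,
			( a[2] + b[2] + c[2] ) / 3
		];
		// vector from the face toward the camera (not the raw camera position)
		const toCamera = sub( cameraPos, mid );
		if ( dot( normal, toCamera ) > 0 ) {
			edges.push( [ a, b ], [ b, c ], [ c, a ] );
		}
	}
	return edges;
}
```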


If I were you, I’d go this way:

- See whether Conditional Edges has an official or unofficial option for filtering front- or back-facing triangles.
- If that fails, look at the source code; it may be possible to add such filtering yourself. I’m not familiar with Conditional Edges, but most likely it “adds” some additional properties to the geometry and/or a custom shader, so the change could be in the JS code, the GLSL code, or both.
- If that fails, you can try your current approach of filtering front-facing and back-facing triangles. Here is a small demo of a cactus: white thorns are front-facing, black thorns are back-facing. The calculation is this:

If the dot product of the normal vector (in world coordinates) and the vector towards the camera is positive, then the angle between them is less than 90°, so the face is front-facing.

https://codepen.io/boytchev/full/xxmYWjY


```
var sign = v.set( nor.getX(p.index), nor.getY(p.index), nor.getZ(p.index) ).applyMatrix3( nMat ).dot( camera.position );
```

~~For something like this to work in the general case you’d want to use the camera forward vector rather than the position vector as a direction. With the camera forward vector, if `dot( cam.forward, faceNormal )` is > 0 then it should be discarded.~~

~~It might also make most sense to perform this kind of culling in screen / clip space and discard if the normal is facing `0 0 - 1` into the screen, so the camera perspective transform is accounted for. I believe there are cases that will not cull correctly if you just use the camera forward.~~


I think I both agree and disagree.

In the explanation I said “vector towards the camera”, but in the code I used the camera position (i.e. I forgot to subtract the vectors). **This is my mistake in the code and I will fix it soon.** Thanks for pointing it out.

As for using the camera forward vector, I do not agree. Whether a polygon is front-facing depends only on how it is oriented with respect to the camera position. Here is an illustration of two cases with front-facing polygons: the red angle *a* is less than 90° in both cases. If you use the camera forward vector against the normal vector, then one of the cases has an angle < 90° while the other has an angle > 90°, so this cannot be used to determine front- or back-facing.
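A small numeric sketch of this disagreement, in plain JavaScript with illustrative values (not taken from the codepen):

```
function sub( a, b ) { return [ a[0] - b[0], a[1] - b[1], a[2] - b[2] ]; }
function dot( a, b ) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

const cameraPos = [ 0, 0, 5 ];
const forward   = [ 0, 0, -1 ];    // camera looks down -z
const facePoint = [ 10, 0, 0 ];    // face far off the camera axis
const normal    = [ 0.8, 0, 0.6 ]; // tilted toward +x

// Position test: angle between the normal and the vector toward the camera.
// Here dot([0.8,0,0.6], [-10,0,5]) = -5, so the face is back-facing.
const positionTest = dot( normal, sub( cameraPos, facePoint ) ) > 0; // false

// Forward-vector test: dot([0.8,0,0.6], [0,0,-1]) = -0.6 < 0,
// which would classify the very same face as front-facing.
const forwardTest = dot( normal, forward ) < 0; // true
```

The camera at `[0, 0, 5]` actually sees the back of this face, so the position-based test gives the correct answer and the forward-vector test does not.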

As for perspective projection, currently I cannot think of a case when a front-facing polygon becomes back-facing after perspective projection (or vice versa). Perspective can distort a polygon, but should not flip it. Could you give an example?


You’re right, I was recalling another case where I believe I had to perform operations in clip space to avoid some issues. What you’ve suggested should work on the CPU and in a shader with the relative-position fix. Thanks for the correction!


Thanks to the hints here from @PavelBoytchev and @gkjohnson, I made a codepen a bit closer to my use case, where I iterate over a BufferGeometry position attribute.

Here is the method that does the work (it paints front-facing faces red and back-facing faces white):

```
// Assumes a non-indexed geometry: every 3 consecutive positions form a face
function calcFaceDirection( geometry ) {

	const position = geometry.getAttribute( 'position' )
	const colors = []

	const triangle = new THREE.Triangle()
	const n = new THREE.Vector3()
	const v = new THREE.Vector3()
	const color = new THREE.Color()

	for ( let i = 0; i < position.count; i += 3 ) {

		triangle.a.fromBufferAttribute( position, i )
		triangle.b.fromBufferAttribute( position, i + 1 )
		triangle.c.fromBufferAttribute( position, i + 2 )

		triangle.getNormal( n )
		triangle.getMidpoint( v )

		// vector from the face midpoint toward the camera
		v.subVectors( camera.position, v )

		// positive dot product = front-facing (red), otherwise back-facing (white)
		if ( n.dot( v ) >= 0 ) {
			color.setRGB( 1, 0, 0 )
		} else {
			color.setRGB( 1, 1, 1 )
		}

		// one color per vertex of the face
		for ( let j = 0; j < 3; j ++ ) {
			colors.push( color.r, color.g, color.b )
		}

	}

	geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) )

}
```

It looks like it is working. Let me know if anything strange stands out, especially in the `calcFaceDirection` method. I will attempt to work this back into my project to see if it can help with the initial issue I was having.

Thanks!

I realized this doesn’t work well when the camera has a matrix applied to its projection. Back to the drawing board.
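One possible direction from here, as a sketch only (not from the codepen above): test the winding in normalized device coordinates, which is what the GPU itself uses for culling, so any projection matrix is handled. The `mvp` matrix is assumed to be `projectionMatrix * viewMatrix * modelMatrix`, stored in column-major order as in three.js `Matrix4.elements`, and the vertices are assumed to be in front of the camera (clip-space `w > 0`):

```
// Project a 3D point to normalized device coordinates
function toNDC( mvp, [ x, y, z ] ) {
	const cx = mvp[0] * x + mvp[4] * y + mvp[8]  * z + mvp[12];
	const cy = mvp[1] * x + mvp[5] * y + mvp[9]  * z + mvp[13];
	const cw = mvp[3] * x + mvp[7] * y + mvp[11] * z + mvp[15];
	return [ cx / cw, cy / cw ]; // perspective divide
}

// Positive signed area in NDC = counter-clockwise on screen = front-facing
function isFrontFacingNDC( mvp, a, b, c ) {
	const [ ax, ay ] = toNDC( mvp, a );
	const [ bx, by ] = toNDC( mvp, b );
	const [ cx, cy ] = toNDC( mvp, c );
	return ( bx - ax ) * ( cy - ay ) - ( by - ay ) * ( cx - ax ) > 0;
}
```

Because the test happens after the full projection, it should agree with what the renderer actually culls even when extra matrices are applied to the camera projection.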