Best way to fill the inside of a 3D outline from the camera view

Hi,

I’m trying to apply an effect inside a 3D line shape.
I have a line drawn in space (a set of 3D points), and I would like to draw a different color / apply a shader to the pixels that fall inside the shape from the camera’s point of view.

See the bare-bones demo here.

My hunch is to use a shader that applies to the whole scene, process every point of my line, and check whether each pixel is inside or outside the line.
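Something like this rough sketch is what I have in mind (the helper names are made up, and the inside test could just as well live in a fragment shader):

// Project the outline's 3D points into normalized device coordinates each frame.
function projectOutline( points3D, camera ) {
    return points3D.map( p => p.clone().project( camera ) ); // x/y end up in [-1, 1]
}

// Standard even-odd (ray casting) test: is the 2D point (x, y) inside the polygon?
function isInsidePolygon( x, y, polygon ) {
    let inside = false;
    for ( let i = 0, j = polygon.length - 1; i < polygon.length; j = i ++ ) {
        const a = polygon[ i ], b = polygon[ j ];
        if ( ( a.y > y ) !== ( b.y > y ) &&
             x < ( b.x - a.x ) * ( y - a.y ) / ( b.y - a.y ) + a.x ) {
            inside = !inside;
        }
    }
    return inside;
}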

I’m not sure how to proceed, though. :(

Best,

P

You can do this using a Shape. See the following example - the very back row of thin shapes is what you’re looking for.
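Roughly like this (untested sketch; I’m assuming `points` is your ordered array of 2D outline points and that `scene` already exists):

// Build a flat, filled shape from an ordered list of 2D outline points.
const shape = new THREE.Shape();
shape.moveTo( points[0].x, points[0].y );
for ( let i = 1; i < points.length; i ++ ) {
    shape.lineTo( points[i].x, points[i].y );
}

const geometry = new THREE.ShapeBufferGeometry( shape );
const material = new THREE.MeshBasicMaterial( { color: 0xff00ff, side: THREE.DoubleSide } );
scene.add( new THREE.Mesh( geometry, material ) );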


Hi @looeee,

Thanks, I think I will try that, even though it’s not quite right for my case.
I have a generative 3D typography piece which looks like this:

[animated GIF of the generative 3D typography]

It’s a 3D outline in space (which is why I simplified it in the demo to 3D points on a circle with different depths).
I want to apply a shader to the inside of the outline, but I believe the quickest / least computationally intensive way is to build a mesh in the first place rather than processing the inside after its creation.

Is the best approach to fill an outline to use this, or that from here, or something else?

Best,

P

For triangulation you need the normal for each point of the 3D surface.

The original algorithm for implicit surfaces is based on this.
Addon for triangulation of implicit surfaces / forms with holes

For a sphere or a cylinder the normal is very easy to determine.

But it would also be enough to determine the normal approximately.

But also note the documented errors if the outline is too jagged. In your example I see sharp points, which are much more problematic than in my star example.
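For your circle-with-depth demo, a rough approximation might already be enough, something like this sketch (my own naming; it assumes the outline winds around the z axis, or lies on a sphere):

// Cylinder-like outline around the z axis: the normal is the radial direction in x/y.
function approximateCylinderNormal( point ) {
    return new THREE.Vector3( point.x, point.y, 0 ).normalize();
}

// Sphere-like outline: the normal points from the center to the point.
function approximateSphereNormal( point, center ) {
    return point.clone().sub( center ).normalize();
}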



I ended up (after a 10-day pause :) ) getting the 2D mesh from the geometry, then adding the face indices, then the depths.
Wireframe result: [screenshot]

fillUpFromVector3Array takes a list of 3D points and fills it in, as Shape would do:


export function fillUpFromVector3Array(array){
    // Triangulate the x/y footprint of the points with THREE.Shape,
    // then push each vertex back to the z depth of the original point.
    var letterShape = new THREE.Shape(array); // Shape only reads x/y of each point

    var geometry = new THREE.ShapeBufferGeometry( letterShape );
    var mWireframe = new THREE.MeshBasicMaterial( { color: 0x00ff00, side: THREE.DoubleSide, wireframe: true } );
    var mColor = new THREE.MeshBasicMaterial( { color: 0xfe00ef, side: THREE.DoubleSide, transparent: true, opacity: 0.4 } ); // filled alternative (opacity only works with transparent: true)

    var customGeom = new THREE.Geometry();

    // Copy the triangulated x/y positions and restore the original z per vertex.
    var points3D = geometry.attributes.position.array;
    for( var i = 0; i < points3D.length; i += 3 ){
        var indexArray = i / 3;
        customGeom.vertices.push( new THREE.Vector3( points3D[i], points3D[i + 1], array[indexArray].z ) );
    }

    // Reuse the face indices produced by the shape triangulation.
    var faces3D = geometry.index.array;
    for( var i = 0; i < faces3D.length; i += 3 ){
        customGeom.faces.push( new THREE.Face3( faces3D[i], faces3D[i + 1], faces3D[i + 2] ) );
    }
    customGeom.computeBoundingSphere();

    // Swap mWireframe for mColor to render the filled surface instead of the wireframe.
    return new THREE.Mesh( customGeom, mWireframe );
}
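
Used like this (the circle values are just placeholders; `scene` is assumed to exist):

// Points on a circle with varying depth, as in the simplified demo.
const outline = [];
for ( let i = 0; i < 32; i ++ ) {
    const angle = ( i / 32 ) * Math.PI * 2;
    outline.push( new THREE.Vector3( Math.cos( angle ) * 10, Math.sin( angle ) * 10, Math.random() * 5 ) );
}
scene.add( fillUpFromVector3Array( outline ) );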

It seems like Shape is meant to do that already, but it doesn’t, as seen in the webgl_geometry_shapes example: californiaPts is meant to have a random depth, but it is rendered flat?
I’m asking because I feel that what I’m doing might be a little convoluted?

Best,

P