[SOLVED] Drawing a circle in 3d from a centre point and raycasting to a plane

So I want to be able to draw a circle like you may do in Word or PowerPoint. From the user's point of view, they will click once to set the centre, and then a second time to set the radius.

I know the screen point that the mouse is on, and I will know the centre point x, y, z.

My plan was to create a plane at the centre point, whose normal I will also eventually know; this will more than likely not face the camera, but for now I am using a basic vector.

I then thought I should be able to get the point at which the mouse intersects this plane; the radius will then be the distance between that intersection point and the centre point, and then I can easily draw a circle. So here's my code:

        screenPoint.unproject(camera);

        let raycaster = new THREE.Raycaster();
        raycaster.setFromCamera(screenPoint, camera);

        // eventually will be worked out elsewhere, this is just temp
        let normal = new THREE.Vector3(0, 0, 1);

        let plane = new THREE.Plane();
        plane.setFromNormalAndCoplanarPoint(normal, centrePoint);

        let intersectPoint = new THREE.Vector3();
        raycaster.ray.intersectPlane(plane, intersectPoint);

        let radius = centrePoint.distanceTo(intersectPoint);

        let geometry = new THREE.CircleGeometry(radius, 60);
        // Remove center vertex
        geometry.vertices.shift();

        let material = new THREE.LineBasicMaterial({ color: 'red' });
        // To get a closed circle use LineLoop instead of Line
        let mesh = new THREE.LineLoop(geometry, material);

        mesh.position.set(centrePoint.x, centrePoint.y, centrePoint.z);

        scene.add(mesh);

I'm clearly doing something utterly wrong: I get a circle, but the radius is miles out. I found a few Stack Overflow answers which didn't help me (more than likely my lack of understanding); a lot of the answers I found seem to not really work in 3D, and instead assume a plane facing the camera. Any pointers would be very much appreciated.

If you already have this THREE.Vector2() in NDC, there's no need to unproject it; .setFromCamera() will do the job for you.
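For reference, this is the coordinate space .setFromCamera() expects. A minimal sketch (the helper name `toNDC` is made up for illustration) of converting a mouse event's client coordinates into normalized device coordinates:

```javascript
// Convert client (pixel) coordinates into the normalized device
// coordinates (NDC) that THREE.Raycaster.setFromCamera() expects.
// Both axes run from -1 to +1, and the y axis is flipped so +y is up.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
  };
}
```

Pass this result straight to `setFromCamera(ndc, camera)` without calling `unproject()` first; `setFromCamera` does the unprojection internally.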

When you have a big problem, and the whole thing isn’t coming together, break it into parts and make the parts work individually before bringing them together.

If this were my project, I would break it into three pieces:

  1. Creating the Plane
  2. Raycasting against the plane
  3. Creating a circle based on two vectors

For item one, make sure you understand how the plane is created, and how it will react to your raycaster. I’m not sure what centrePoint you’re using, but this should be somewhere along the -z axis in camera space. (The camera looks down along its own -z axis, so anything with a z value below 0 will be “in front of” the camera.)

It looks like you’re already raycasting correctly, but it can’t hurt to drop a simple Mesh (like a box or a sphere) at the intersect point to verify you’re getting the correct coordinates.

Finally, again, it looks like you're using the correct methods to create the circle, but you'll need to ensure not only that the positioning is correct, but also that the circle's rotation/orientation matches the plane it should lie in, as projected from camera space.
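To help verify step 2 in isolation, here is a plain-JavaScript sketch of the math that `raycaster.ray.intersectPlane(plane, target)` performs. The function and vector shapes are illustrative, not the three.js API; `dir` and `normal` are assumed to be unit-length:

```javascript
// Dot product of two plain {x, y, z} vectors.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect a ray (origin, unit direction) with a plane defined by a
// unit normal and a coplanar point. Returns the hit point, or null if
// the ray is parallel to the plane or the plane is behind the ray.
function intersectPlane(origin, dir, normal, coplanarPoint) {
  const denom = dot(normal, dir);
  if (Math.abs(denom) < 1e-8) return null; // ray parallel to plane

  // Distance t along the ray from the origin to the plane.
  const t = dot(normal, {
    x: coplanarPoint.x - origin.x,
    y: coplanarPoint.y - origin.y,
    z: coplanarPoint.z - origin.z,
  }) / denom;

  if (t < 0) return null; // plane is behind the ray origin
  return {
    x: origin.x + dir.x * t,
    y: origin.y + dir.y * t,
    z: origin.z + dir.z * t,
  };
}
```

If your plane's normal happens to be (near-)perpendicular to the ray direction, `intersectPlane` returns null, which is one case worth checking before computing the radius.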


In addition to @TheJim01's reply: people will help you faster if you provide a live code example.


There’s a CodePen example.

Move the mouse to show a temp circle (kind of a marker) of yellow color.
Mouse down creates a circle of the current radius of random color.


Thanks for the replies. I will take some time to digest your codepen example and see where I am going wrong.

I like the way you change the scale of the circle. For future knowledge, is there a reason you set the scale of the circle rather than just recreating the geometry with the new radius?

I just don’t want to use the technique of dynamic geometries for no reason, as it’s computationally expensive :slight_smile: Scaling is fast and easy to implement :slight_smile:


To expand on this, consider that you have a circle with even a small number of vertices (say, 32).

When the geometry is initialized, that buffer of vertices is sent to the GPU (32 * 3 = 96 float values).

If you updated the vertices individually to perform a scale, not only would you be performing at least two operations per vertex (computation and assignment), but the entire buffer would also need to be sent to the GPU again (all 96 float values).

On the other hand, if you change the scale, then you really only update the transformation matrix (16 float values). Yes, those 16 float values are sent to the GPU every time you update them, but it’s only ever 16 values. I’m sure you can now see why this would be the preferred way of scaling a mesh, especially as the meshes become larger and more complex.
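The arithmetic above can be sketched as a tiny helper (the function name is made up for illustration; the numbers match the explanation):

```javascript
// Rough count of floats uploaded to the GPU per update, comparing
// rebuilding a circle's vertex buffer against just updating the
// object's 4x4 transformation matrix via its scale.
function floatsPerUpdate(segments) {
  return {
    rebuildGeometry: segments * 3, // one xyz triple per vertex
    updateScale: 16,               // a single 4x4 matrix, regardless of size
  };
}
```

In three.js terms, the cheap path is simply something like `mesh.scale.setScalar(radius)` on a unit-radius circle built once, rather than constructing a new CircleGeometry every frame.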


How do I get the vertices of an InstancedBufferGeometry object, and how do I do vertex-point snapping with an InstancedBufferGeometry object?