Finding an element or bounding box of a mesh from a point cloud

I am making a tool that lets users manually add a physics body/element to a scanned model. When the user clicks on a mesh, the tool creates a physics box or sphere of unit dimensions, depending on what the user has selected in the UI.
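
For context, here is a simplified sketch of what my double-click handler does today. I am assuming cannon-es as the physics engine (that is what the debug renderer targets in my setup), and `scene`, `camera`, `renderer`, and `world` stand in for my real objects:

```ts
import * as THREE from 'three'
import * as CANNON from 'cannon-es'

// Assumed to exist elsewhere in the app (not shown here).
declare const scene: THREE.Scene
declare const camera: THREE.PerspectiveCamera
declare const renderer: THREE.WebGLRenderer
declare const world: CANNON.World

const raycaster = new THREE.Raycaster()
const pointer = new THREE.Vector2()

renderer.domElement.addEventListener('dblclick', (event) => {
  // Convert the click position to normalized device coordinates.
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1
  raycaster.setFromCamera(pointer, camera)

  const hits = raycaster.intersectObjects(scene.children, true)
  if (hits.length === 0) return

  // Unit-dimension static box at the clicked point
  // (cannon-es takes half-extents, so 0.5 per axis = a 1x1x1 box).
  const body = new CANNON.Body({
    type: CANNON.Body.STATIC,
    shape: new CANNON.Box(new CANNON.Vec3(0.5, 0.5, 0.5)),
  })
  const p = hits[0].point
  body.position.set(p.x, p.y, p.z)
  world.addBody(body)
})
```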

The scanned 3D models are created from point clouds.

I am asking my question here because most of what I have learned in the last 4-5 months has come from this forum and its awesome people.

I have researched many things, but I have not been able to find a proper direction, or even a possible direction, to explore.

I want to make adding physics elements better, easier, and less tedious.

One thing I have learned is that, given a point cloud, we can do instance segmentation (using machine learning).

I don’t know much about Machine Learning.

I don't want any desktop application as an intermediate step in the process; I want the whole pipeline inside my web app.

One challenge I still have is that I want to render 3D models such as .gltf instead of the point cloud.
And if I do manage instance segmentation in some way, I don't know whether it is possible to convert the segmented point cloud data back into a 3D model within my project.

Beyond this, is there anything you would suggest to make the tool better or smarter?
I will be very grateful for your feedback!

Current state:

When I double-click on the chair

(screenshot)

I get a cube of unit dimensions, which is the physics element.

The cube is rendered by the physics debug renderer from seanwasere.

To make it occupy the chair or its seat, I double-click on that green wireframe and scale it.

Now I want to make it so that when somebody clicks on a chair, the tool suggests a fitting shape automatically. It does not have to be exact, just something close!
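
To make the question concrete, this is the kind of suggestion I imagine, sketched with the same assumptions as above: take the world-space bounding box of whatever the ray hits and pre-size the physics box to it. The catch is that my scanned scene is usually one merged mesh, so `hit.object` would be the whole scan rather than just the chair, which is why I am looking into segmentation:

```ts
import * as THREE from 'three'
import * as CANNON from 'cannon-es'

declare const world: CANNON.World // assumed, as in the sketch above

function suggestBoxFor(hit: THREE.Intersection): CANNON.Body {
  // World-space axis-aligned bounding box of the clicked object.
  const bounds = new THREE.Box3().setFromObject(hit.object)
  const size = bounds.getSize(new THREE.Vector3())
  const center = bounds.getCenter(new THREE.Vector3())

  // cannon-es boxes are defined by half-extents; place the body
  // at the center of the bounding box.
  const body = new CANNON.Body({
    type: CANNON.Body.STATIC,
    shape: new CANNON.Box(new CANNON.Vec3(size.x / 2, size.y / 2, size.z / 2)),
  })
  body.position.set(center.x, center.y, center.z)
  world.addBody(body)
  return body
}
```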

Can anyone please help me with this?