Since your materials are the same, you could use BufferGeometryUtils.mergeBufferGeometries. However, this would turn all your separate polygons into a single geometry, so you’d lose the ability to translate/rotate/scale each one independently.
There’s no simple answer for this, sorry… if you keep separate objects they’re easy to reposition, but you pay for it with lots of draw calls. If you merge them, repositioning becomes hard.
The problem is doubly hard with THREE.Geometry, which is less efficient than THREE.BufferGeometry to begin with…
The general approach is to merge all of your geometries (they must share one material to be merged) while keeping track of each geometry’s vertex ‘offset’ into the merged buffer, so you can raycast against an individual piece or update its vertices later. This is a more advanced thing to implement, and would likely run to many hundreds of lines of code once selection and repositioning are included. I’m not aware of examples to look at for it, other than the mergeBufferGeometries() function itself.
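The bookkeeping described above can be sketched in plain JavaScript. This is a simplified illustration, not a full implementation: plain `Float32Array`s stand in for `BufferGeometry` position attributes, and the function names (`mergeWithOffsets`, `translateSub`) are hypothetical, not part of three.js.

```javascript
// Merge several position arrays into one buffer, recording a { start, count }
// range (in floats) for each source geometry so individual pieces can still
// be located and updated after the merge.
function mergeWithOffsets(positionArrays) {
  const total = positionArrays.reduce((n, a) => n + a.length, 0);
  const merged = new Float32Array(total);
  const ranges = []; // one { start, count } entry per source geometry

  let cursor = 0;
  for (const a of positionArrays) {
    ranges.push({ start: cursor, count: a.length });
    merged.set(a, cursor);
    cursor += a.length;
  }
  return { merged, ranges };
}

// Reposition one sub-geometry inside the merged buffer using its recorded
// range — this is the "offset tracking" that makes per-piece updates possible.
// (With a real BufferGeometry you would also set needsUpdate on the attribute.)
function translateSub(merged, range, dx, dy, dz) {
  for (let i = range.start; i < range.start + range.count; i += 3) {
    merged[i] += dx;
    merged[i + 1] += dy;
    merged[i + 2] += dz;
  }
}
```

The same ranges can drive picking: after a raycast hit on the merged mesh, the hit’s vertex index tells you which range (and therefore which original polygon) it falls inside.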
I am not sure if this is relevant or not, but I once had to create custom geometries where the number of vertices became a problem. I did not have the option to merge the geometries.
I realized that I could save a lot of vertices by placing more of them along sharp curves in my models and fewer in flatter surface areas.
This led me to develop a mathematical approach using a mixed arithmetic + geometric series in the function that builds the geometries. It made a huge difference, while avoiding any edge effects due to insufficient coverage.
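To make the idea concrete, here is a minimal sketch of a mixed arithmetic + geometric step series: sampling a curve parameter densely near a sharp feature at t = 0 and more sparsely further away, by growing the step as step_k = a + b·rᵏ. The constants and the function name are illustrative, not from the original post.

```javascript
// Generate parameter samples in [0, tMax] whose spacing follows a mixed
// arithmetic + geometric series: each step is a (arithmetic floor) plus
// b * r^k (geometric growth). Small steps near t = 0 give dense coverage
// where curvature is high; steps grow where the surface is flatter.
function mixedSeriesSamples(a, b, r, tMax) {
  const samples = [0];
  let t = 0;
  let k = 0;
  while (t < tMax) {
    t += a + b * Math.pow(r, k); // step_k = a + b * r^k
    samples.push(Math.min(t, tMax)); // clamp the final sample to the end
    k += 1;
  }
  return samples;
}
```

Feeding these samples into the geometry-building function yields far fewer vertices than uniform spacing for the same visual quality, while the arithmetic floor `a` guarantees a minimum density so no region is left uncovered.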