I already asked this on SO, but I have noticed that the discussions and replies here are quicker and more extensive, so I will try here too.
We have created a PCB Viewer in THREE.js and now we are looking to add some selection functionality to it. This is not a difficult task, and I already have this functionality working, but I am facing performance issues. Since we want all features of the PCB layers (shapes) to be individually clickable and selectable (their colors change when selected), I figured each shape needs its own THREE.js Mesh, so that we can control their Materials individually. Sure enough, it works as expected, but it has massive performance issues: instead of one combined mesh for all shapes on a PCB layer, we now have THOUSANDS.
I understand that having a lot of Geometries (Meshes) degrades performance. Does anyone have any tips on how this could be done in a more performance-efficient manner? For now it is enough for us to just change the colors of the individual shapes when they are clicked. Before my code changes we had all shapes in the same geometry on the same THREE Mesh. Can I somehow keep this simpler structure but still manipulate individual shapes/objects separately? To me it sounds far-fetched, but I am not too experienced with Three yet.
To this I did get a reply, from which I am now trying to proceed. It suggests the following example: three.js examples
The idea is nice, but the problem at the moment is that our individual objects could theoretically all be different shapes, meaning I cannot re-use the same “box” as they do in that example. Also, we do all the pre-processing in .NET code and only feed a list of vertices, indices, uvs and potentially vertex colors to JS to create the geometries (which I then plan to merge). The example makes use of position, scale and rotation for the selection “cache”.
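For reference, the geometry construction on the JS side is essentially just wrapping those flat arrays (variable names here are illustrative):

```js
// Building a BufferGeometry from the flat arrays handed over by .NET.
// 'vertices', 'indices', 'uvs' and 'colors' are plain number arrays.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(vertices, 3));
geometry.setAttribute('uv', new THREE.Float32BufferAttribute(uvs, 2));
if (colors) {
  geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
}
geometry.setIndex(indices);
```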
An idea I have is to keep a cache of the ranges of vertices belonging to each individual original geometry and, with the help of those and maybe the same “indexing” method for clicking as in the example, manipulate indices separately. I don’t know, though, how the ranges of vertices in the merged geometry can be calculated. Are the geometries simply laid out in the order of the array given to the mergeGeometries function? In that case I guess it would be easy to keep a start and stop index for every original geometry’s vertex range and then dig those out of the merged geometry on demand.
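From reading BufferGeometryUtils it does look like the attributes are simply concatenated in array order, so the range cache I have in mind would be something like this (a sketch, assuming indexed geometries):

```js
import * as BufferGeometryUtils from 'three/addons/utils/BufferGeometryUtils.js';

// Record each source geometry's index range while merging, so a range
// in the merged geometry can be traced back to its original shape.
function mergeWithRanges(geometries) {
  const ranges = [];
  let start = 0;
  for (const geometry of geometries) {
    const count = geometry.index.count; // indices are appended in array order
    ranges.push({ start, count });
    start += count;
  }
  const merged = BufferGeometryUtils.mergeGeometries(geometries);
  return { merged, ranges };
}
```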
Lastly, we currently use separate meshes and geometries for objects that need individual manipulation (performance was not an issue earlier because the number of geometries did not grow as quickly as it does now…). With that, we use the raycaster to identify which object is clicked. I suppose that I will not be able to use the raycaster anymore if I want to detect clicked objects/vertices on a merged geometry? Am I correct?
Thanks for your time. I am quite new to Three.js, but I am trying to learn quickly. The problem is that I need to get stuff done quickly too, so I have to learn as I go.
Does that not kind of defeat the purpose, if my intention is to keep the geometry count down? Or can I somehow make them “invisible” so that they don’t affect performance? Kind of like ghost geometries.
Have you looked at BufferGeometry.groups? You may define several groups within a large BufferGeometry, and each group will be rendered with its own material.
This might be useful if the data of each object is a compact range in the buffer (otherwise you may need more groups).
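A minimal sketch of the API (the counts are arbitrary; for an indexed geometry, start and count refer to indices):

```js
// Three ranges of one indexed BufferGeometry, each drawn with the
// material at the given index of the mesh's material array.
geometry.clearGroups();
geometry.addGroup(0, 3000, 0);    // indices 0..2999    -> materials[0]
geometry.addGroup(3000, 300, 1);  // indices 3000..3299 -> materials[1]
geometry.addGroup(3300, 6000, 0); // indices 3300..9299 -> materials[0]

const mesh = new THREE.Mesh(geometry, [baseMaterial, highlightMaterial]);
```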
Hmm, I did not know you could do that, I mean find intersections with objects which are not in a scene. This sounds interesting… So you are suggesting I first create all my individual geometries, cache them, merge them, and only add the merged geometry to my scene. Then I use raycasting against the cache to detect which cached geometry was clicked, change its color, re-merge the cache and re-add it to the scene?
No, you will have just 3 groups. One group for all data before the selected vertices. One group for the selected vertices. And one group for all data after the selected vertices. The first and the third group will contain all objects except one – this one will be in the second group. The group ranges are not static. You change them as the selection changes.
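In code it is just three addGroup calls that you redo whenever the selection changes (a sketch; the mesh is assumed to use a material array like [baseMaterial, selectionMaterial]):

```js
// Rebuild the three groups around the selected index range.
function setSelection(geometry, start, count) {
  const total = geometry.index.count;
  geometry.clearGroups();
  if (start > 0) geometry.addGroup(0, start, 0); // data before the selection
  geometry.addGroup(start, count, 1);            // the selected range
  if (start + count < total) {
    geometry.addGroup(start + count, total - start - count, 0); // data after
  }
}
```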
Aah, now I understand. That also sounds feasible-ish. The only problem is that we may also want to support multi-select, meaning the selected vertices will not be in one compact range… But I guess for a single selection it would be quite OK.
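Thinking about it, more groups would probably cover multi-select as well: one group per selected range, plus filler groups in between, something like this sketch:

```js
// 'selected' is a sorted array of non-overlapping {start, count} index
// ranges; everything outside them is drawn with the base material.
function applySelection(geometry, selected) {
  const total = geometry.index.count;
  geometry.clearGroups();
  let cursor = 0;
  for (const { start, count } of selected) {
    if (start > cursor) geometry.addGroup(cursor, start - cursor, 0);
    geometry.addGroup(start, count, 1);
    cursor = start + count;
  }
  if (cursor < total) geometry.addGroup(cursor, total - cursor, 0);
}
```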
I do like the sound of my interpretation of @chaser_code’s response, though. Of course I want the least costly option, so I am not sure which is better: groups or a geometry cache? I now almost have a working PoC of the example linked in the original post. It does seem quite slow to render, though; I am not sure if that is due to the additional scene and duplicated objects in this “ghost scene”, or due to me now performing the geometry merging in JS code rather than in .NET…
So I kind of have this working with the example’s method. I have a “proxy scene” which has ghost objects of the first scene, detecting clicks using readRenderTargetPixels. The problem now, though, is that when I do this onMouseUp, readRenderTargetPixels seems to pick the object that is “deepest”, meaning that if two objects were behind one another, it would find the one behind even when clicking the one in front… Not sure what is going on here.
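For reference, my read-back follows the pattern from the three.js GPU picking example (variable names are mine). The render target keeps its default depth buffer, so in theory the nearest fragment should win:

```js
// Render the proxy scene into a 1x1 target at the mouse position
// and read the object ID encoded in the pixel's RGB.
const pickingTarget = new THREE.WebGLRenderTarget(1, 1);

function pick(mouseX, mouseY) {
  const dpr = window.devicePixelRatio;
  camera.setViewOffset(
    renderer.domElement.width, renderer.domElement.height,
    (mouseX * dpr) | 0, (mouseY * dpr) | 0, 1, 1
  );
  renderer.setRenderTarget(pickingTarget);
  renderer.render(proxyScene, camera);
  camera.clearViewOffset();
  renderer.setRenderTarget(null);

  const pixel = new Uint8Array(4);
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2]; // object id
}
```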
Here is the solution I ended up with (a code sketch follows the list):

1. I render “simple” meshes to my real scene, where a lot of objects share the same geometries (mostly objects sharing the same Material).
2. I keep a “virtual scene” which I never render, on which I perform raycasting and check for intersections. In the virtual scene I have actual separate meshes for the individual objects.
3. When I find an intersection in the virtual scene, I clone that mesh and add it to the real scene with the “selection material”. I also apply a small offset so that it is slightly bigger than the original, to prevent z-fighting.
4. When objects are deselected, I remove the selection objects from the real scene; I use user attributes to identify which ones they are. Each selection object also has a “back link” to the mesh it was cloned from in the virtual scene, so I can also check whether I already have a selection of a certain object.
5. To know where in the real scene to add my selection object (to which group – we have quite an advanced tree structure…), I keep a user attribute on the virtual-scene mesh that points (by id) to the parent in the real scene, under which I add the selection object if it is selected.
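A condensed sketch of the flow (names like virtualScene, selectionMaterial and the userData keys are mine):

```js
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

function onPointerUp(event) {
  pointer.set(
    (event.clientX / renderer.domElement.clientWidth) * 2 - 1,
    -(event.clientY / renderer.domElement.clientHeight) * 2 + 1
  );
  raycaster.setFromCamera(pointer, camera);

  // Raycast against the never-rendered virtual scene.
  virtualScene.updateMatrixWorld(true);
  const hits = raycaster.intersectObjects(virtualScene.children, true);
  if (hits.length === 0) return;

  const source = hits[0].object;
  if (source.userData.selectionClone) return; // already selected

  const clone = source.clone();
  clone.material = selectionMaterial;
  clone.scale.multiplyScalar(1.001);     // slight offset to avoid z-fighting
  clone.userData.isSelection = true;      // lets me find and remove it later
  clone.userData.sourceMesh = source;     // back link to the virtual mesh
  source.userData.selectionClone = clone;

  // Attach under the real-scene parent recorded on the virtual mesh.
  const parent = realScene.getObjectById(source.userData.realParentId);
  (parent || realScene).add(clone);
}
```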
Hope this makes sense and that it will help someone else too.
If anyone has improvement suggestions, please let me know. Performance is always a top priority of mine.