I have been stuck for too long on a Raycaster issue and I am slowly going nuts.
For a platform game in the making I generate the level programmatically at the start of each new level. This is done by repeatedly cloning a loaded mesh which represents a game platform and then placing each clone in space: translating, rotating and even bending them along 3D curves to add some variance to the level. This all works fine.
Now I want to place some random things on those platforms. The platforms themselves (i.e. the loaded meshes) have some topology to them, so it is not totally straightforward to place these random things exactly on the platform surface.
So … I thought it would be straightforward to pick points in space (world coords) a bit above each platform and then cast a ray from each point directly downwards to see where it hits the platform mesh. Those intersection points would then be where the platform could be decorated with stones, trees etc.
The problem is: only the first 1 or 2 (out of 20) points intersect … after that I get no intersections at all. The further I get from the origin, the more likely it seems that the ray misses. See the picture, which shows the situation where only the first ray found the surface and the others did not.
Only when I get an intersection is the point adjusted downwards; otherwise it remains at the starting point. This way I can clearly confirm that the starting points are perfectly above the platform.
Also worth mentioning:
I have called updateMatrixWorld() on the platform objects.
I would very much appreciate some clever insights here!
Not sure, but try setting the Raycaster threshold via raycaster.params.Points.threshold. Also, is the platform segmented enough? Since you are targeting points, you can use Points instead of Mesh to debug your platform.
Are you trying to get the platform points with a Raycaster? If that's the case, you don't need a Raycaster; you already have that data in the platform's geometry, in platform.geometry.attributes.position.array. If the geometry is a primitive like PlaneGeometry, you'll need to de-index it with geometry.toNonIndexed().
Fact is: I just now found out where the problem was.
I got an idea while stepping through the three.js intersection code in Chrome. I noticed that the code uses both the boundingSphere and the boundingBox to minimise work, and if there is no boundingSphere it computes one. Fair enough.
But then I thought:
What if the modifications I make to the cloned geometry (moving vertices in local space, bending the platform along a curve etc.) leave the object as a whole in an inconsistent state? I do fiddle with very low-level properties, after all. So I set the geometry's boundingSphere and boundingBox to null after my edits, and the rays started hitting again.
It sounds like setting those to null made three.js recompute them.
You can recompute them manually with mesh.geometry.computeBoundingBox() and/or mesh.geometry.computeBoundingSphere() after you do your geometry edits, but it sounds like either way works. Glad you figured it out!
After a very reasonable suggestion in another thread that pasting GPT answers in the forums is against everyone's best interest, continuing to do so (and continuing not to validate the correctness of those answers) is pretty much malicious behaviour… Someone will come to these threads in the future and potentially spend a lot of time trying to learn from and follow these GPT answers.
I didn’t know that either. I’ve passed along this inaccuracy. Hopefully, next version gets it right.
Perhaps the training data it uses should be reviewed for inaccuracies. If it looks through some of the examples maintained for three.js over the years, I'll bet they're riddled with patterns that are no longer valid. Someone looking at these would be misled in the same way.
There has been some discussion about curating better examples. Perhaps, if we ever want better answers, it's needed.