So I’m curious to know what’s the best approach for this situation. You have FP controls, with the player model attached to the camera, moved just a little bit back to avoid the head clipping into the camera’s view. Collision detection works fine, but there’s a gap between a collider and the player, because of that offset. We could create a model without a head or upper body of course, but then how would we handle mirrors, something we inevitably want to include in our scenes? I assume this is something a ton of FPS games have tackled, so what’s the best approach? I know we could increase the near clipping plane of the camera, but that in turn would introduce clipping through walls whenever the player goes really close to one…
I’ll use this thread to illustrate the potential “issues” in my own collision detection approach (well, not really issues, more like potential performance problems in certain scenarios), because that’s more or less what the absence of a REAL collision detection system comes down to - and by real, I mean one that is accurate and, of course, feasible from an implementation point of view.
First, let me point out what the current collision detection systems are about, and why they all fall short of a real solution to the problem (with all due respect for the developers of such techniques). They are based on calculating bounding boxes / spheres, but those work well only for simple shapes, because of the inherent imprecision of the approach when it comes to more exotic shapes. The same goes for the so-called “physics engines”, which are nothing more than developments of the current - and partially ineffective, as you noticed - ways to tackle this, given a bombastic name to suggest more legitimacy. There are also convex hulls, which approximate the shape of an object perfectly, but only for convex geometries, not concave ones (the example there is 2D, but useful to get the idea).
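To make the limitation concrete, here is a minimal dependency-free sketch of the two classic bounding-volume tests (the `aabbOverlap` / `sphereOverlap` helpers and their layout are my own naming for illustration, not from any engine): they are extremely cheap, but for a concave or otherwise exotic shape the box or sphere covers a lot of empty space, so they report “collisions” where the actual surfaces never touch.

```javascript
// Axis-aligned bounding box overlap: boxes intersect when their
// intervals overlap on every axis.
function aabbOverlap(a, b) {
  // a, b: { min: [x, y, z], max: [x, y, z] }
  return a.min[0] <= b.max[0] && a.max[0] >= b.min[0] &&
         a.min[1] <= b.max[1] && a.max[1] >= b.min[1] &&
         a.min[2] <= b.max[2] && a.max[2] >= b.min[2];
}

// Bounding sphere overlap: spheres intersect when the center distance
// is at most the sum of the radii (compared squared, to avoid a sqrt).
function sphereOverlap(a, b) {
  // a, b: { center: [x, y, z], radius: r }
  const dx = a.center[0] - b.center[0];
  const dy = a.center[1] - b.center[1];
  const dz = a.center[2] - b.center[2];
  const r = a.radius + b.radius;
  return dx * dx + dy * dy + dz * dz <= r * r;
}
```

Both are constant-time, which is exactly why engines love them - and why they can only ever approximate an exotic shape from the outside.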
While not a physics engine, a BVH (bounding volume hierarchy) is a sort of “middle ground” approach to the issue: it divides the main shape into subdivisions and calculates bounding boxes / spheres on each of them. Thus, you end up with a collection of bounding volumes that tries to approximate the actual shape of the object. It does so reasonably well, but obviously there will still be zones that do not follow the object’s shape, depending on how many such subdivisions (and how much CPU power, obviously) you use. A somewhat similar approach would be to manually build your shape (which can be a concave one) as a collection of convex geometries, so you can test for intersections on those convex hulls instead, like @Dylan_Hodge tried to do here - but that is not quite feasible if the shape is irregular or even relatively random, since you would have to create lots of convex hulls and won’t really know how to structure them in such a case.
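The subdivision idea can be sketched in a few lines (a toy example - the node layout and helper names are hypothetical, not from any particular engine): an L-shaped, concave object is approximated by two leaf boxes, the root box is their union, and a query that misses the root box prunes the whole subtree without touching the leaves.

```javascript
// Union of two axis-aligned boxes: the smallest box enclosing both.
function union(a, b) {
  return {
    min: a.min.map((v, i) => Math.min(v, b.min[i])),
    max: a.max.map((v, i) => Math.max(v, b.max[i])),
  };
}

// Point-in-box test on every axis.
function containsPoint(box, p) {
  return p.every((v, i) => v >= box.min[i] && v <= box.max[i]);
}

// Two leaf boxes approximating an L-shaped (concave) object.
const leaves = [
  { box: { min: [0, 0, 0], max: [2, 1, 1] } }, // horizontal slab
  { box: { min: [0, 0, 0], max: [1, 3, 1] } }, // vertical slab
];
const root = { box: leaves.map(l => l.box).reduce(union), children: leaves };

function bvhContains(node, p) {
  if (!containsPoint(node.box, p)) return false; // miss: prune whole subtree
  if (!node.children) return true;               // hit on a leaf volume
  return node.children.some(c => bvhContains(c, p));
}
```

Note how a point inside the root box but in the concave “notch” of the L still tests false, because neither leaf contains it - that is exactly the extra precision the subdivisions buy you, at the cost of more boxes to build and traverse.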
What I use for collision detection is a simple and effective method illustrated in this fiddle, but again, one with potential shortcomings if the scenario is different. Because a minimum distance from the object is needed for concave objects in order to prevent near plane clipping, I clone the object invisibly, scale the clone up by that minimum distance, use a raycaster to find where the camera direction intersects the clone, and move the camera to that point if needed. It works for all controls, follows the object’s shape, and if I add the clone to the object I don’t have to synchronize transformations between them - but obviously it can become a potential performance problem if I had to clone lots of objects, no matter the approach (e.g. individual cloning, clone reuse, merging clones, etc.).
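Since the fiddle itself needs three.js, here is a dependency-free sketch of the same idea reduced to a sphere (the function name and layout are mine, for illustration only): “inflate” the object by the minimum distance and, if the camera got closer than the inflated surface, push it back out radially to that surface. With arbitrary geometry, the scaled-up invisible clone plus a `Raycaster` plays the role that the inflated sphere plays here.

```javascript
// Keep the camera outside an object's "inflated" surface.
// camPos, target: [x, y, z]; radius: object's radius; minDist: safety gap.
// Assumes the camera is never exactly at the target (dist > 0).
function clampCameraToSphere(camPos, target, radius, minDist) {
  const dir = camPos.map((v, i) => v - target[i]); // center -> camera
  const dist = Math.hypot(...dir);
  const limit = radius + minDist;                  // the inflated surface
  if (dist >= limit) return camPos;                // already far enough
  const s = limit / dist;                          // scale out radially
  return target.map((v, i) => v + dir[i] * s);
}
```

The raycaster version in the fiddle does the same clamping, except the “limit” is found per-direction by intersecting the view ray with the scaled clone, so it follows a concave surface instead of a sphere.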
Now, on the “gap” between concave objects and the camera, and why it’s needed. Imagine you have a shape whose surface profile looks like this, for example:
```
Near Plane  --------------------------------------
                        __
Near Clips  - - - - ___/  \___ - - - - - - - - - -
                ___/          \___          ______
Surface   _____/                  \________/
```
It goes without saying that you would like the camera to be positioned at the `Surface` level above, or even at the `Near Clips` level, in order to not have “gaps” between it and the object, but you have to place it even further from the object, at the `Near Plane` level illustrated above, since that is the minimum distance at which the camera’s near plane doesn’t produce unwanted clipping of the object’s surface.
A good way of setting that distance is to account for the surface level, the `camera.near` value, and the size of a face or segment of that object, since the latter identifies precisely how far above the regular / planar level the surface would rise if a face of that object were in full view with its normal perpendicular to the camera direction (in other words, the face would be coplanar with the camera direction and would be seen as a simple line because of its orientation). For the record, that is the meaning of the `Math.PI / 180` value in my fiddle above, since the sphere radius is 1 and it has 360 by 180 width by height segments (now that I think about it more, it should actually be the square root of the sum of the squared segment sizes, i.e. the longest side of a face triangle, since the hypotenuse has that length in the usual case of a right triangle face, but anyway).
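For reference, the arithmetic behind those values as a worked example (this is my own restatement of the numbers above, not code from the fiddle):

```javascript
// Unit sphere with 360 x 180 (width x height) segments, as in the fiddle.
const radius = 1;
const widthSegments = 360;
const heightSegments = 180;

// One segment spans 2 * PI / 360 radians horizontally and PI / 180
// vertically, so both segment sizes come out to about Math.PI / 180.
const segW = 2 * Math.PI * radius / widthSegments;  // ~ 0.01745
const segH = Math.PI * radius / heightSegments;     // ~ 0.01745

// The "safer" value: the diagonal (hypotenuse) of a face, i.e. the
// square root of the sum of the squared segment sizes.
const diagonal = Math.hypot(segW, segH);            // ~ 0.02468
```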
For smooth or convex objects you don’t have this problem and you can position the camera at the surface + `camera.near` distance, so a bit closer; but apart from simple man-made objects, you can hardly find such shapes in reality:
```
   Near Plane Zone  ---     ___
   Surface Clipped  ---    /   \
                          /     \
                         |       |
                         |       |
```
Bottom line: unfortunately there is no perfect solution or “best approach” to this, and whoever says otherwise is either looking to sell their idea or taking the easy way out by recommending less effective methods. You either hit performance issues in certain cases if you want to be precise, or you sacrifice precision everywhere to make it light on the system. Edge cases like convex geometries allow proper solutions that are both performant and precise, but most real shapes are not convex, and subdividing concave shapes into convex parts can be both computationally intensive and tricky for large and / or irregular patterns.
And yeah, a ton of FPS games have tackled that … by cheating their way through via the simpler and less effective methods above, or by implementing some kind of spatial indices. As already mentioned, such methods have nice names and all, but in effect they are just ways to avoid the performance bottlenecks that a properly accurate collision system would cause.