How to use THREE.RayCaster with a sphere-projected envmap?

i have a test-case here: ThreeJS Journey Lv 1 Fisheye (forked) - CodeSandbox

it’s the compute function at line 37 which receives the events, but i have no idea what to do from there. a cube cam has 6 cameras, so im guessing i have to raycast with all of them, but how to set up the raycaster is beyond me.

basically i have a sphere whose radius fills the screen, filmed by a fixed orthocam. the sphere has an envmap which is filmed by a cubecam which is synced to a dummy-camera steered by orbitcontrols.

i have the intersect of the sphere that reflects the envmap, and the uv. the goal is that pointer events work normally (the orange spinning box is supposed to be hoverable and clickable)

welp for starters, you should not bother with uv - instead just take sphere xyz and the normal and reflect the camera forward vector there, and that will give you direction into the cube map. I think.

now THEN you could convert that direction into spherical coords / uvs for the env map.
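
A minimal sketch of that conversion, assuming an equirectangular env map and one common axis convention (none of this is code from the thread):

import * as THREE from 'three'

// Direction -> spherical coords -> UV, using one common equirect convention
function dirToEquirectUv(dir, target = new THREE.Vector2()) {
  const u = Math.atan2(dir.x, dir.z) / (2 * Math.PI) + 0.5 // azimuth around Y
  const v = Math.asin(THREE.MathUtils.clamp(dir.y, -1, 1)) / Math.PI + 0.5 // elevation
  return target.set(u, v)
}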


My understanding is the effect basically works by rendering a sphere with an environment map. Visually, the sample from an environment map is driven entirely by the view direction reflected across the surface normal of the geometry the material is being rendered with.

So I believe this should work (a rough code sketch follows the list):

  • Raycast from the render camera to the sphere and get the surface normal of the point hit in world space of the sphere scene (you could make this faster by using Ray.intersectSphere instead of performing a geometry intersection)
  • Reflect the camera view ray across the surface normal.
  • Take the adjusted ray and transform that into the world space of the desk scene with the cube camera by multiplying the ray with the cube camera matrix world.
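
Put as code, the three steps might look like this - a rough sketch under the assumptions above; `sphere` (a THREE.Sphere matching the rendered geometry) and `cubeCamera` are placeholder names:

import * as THREE from 'three'

// Scratch objects reused across calls
const hit = new THREE.Vector3()
const normal = new THREE.Vector3()
const dir = new THREE.Vector3()

function reflectIntoCubeScene(raycaster, camera, sphere, cubeCamera) {
  // 1. Intersect the mathematical sphere (cheaper than raycasting the geometry)
  if (raycaster.ray.intersectSphere(sphere, hit) === null) return null
  normal.copy(hit).sub(sphere.center).normalize()
  // 2. Reflect the camera view direction across the surface normal
  camera.getWorldDirection(dir).reflect(normal)
  // 3. Rotate the reflected direction into the cube camera's world space
  return dir.transformDirection(cubeCamera.matrixWorld)
}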

I tried to modify the example you provided but CSB seems unusable to me. I can’t save or fork a copy without pro, the performance on the in-page console is very poor and tanks framerate, and it’s not clear when my changes have updated in the page (maybe due to the framerate). If you can provide something that’s easier to edit I can give it another go.


@makc3d @gkjohnson thank you so much for your time!

i think i have a better understanding now. so the 6 cameras inside a cubecam render the 6 faces of a cube, which get projected onto the half-dome of a sphere. so im guessing i would first have to figure out which face the mouse is over, and then raycast with that particular camera, or something …

If you can provide something that’s easier to edit I can give it another go.

that was my mistake, the box was unlisted. now you should be able to fork and save. if you want i can also upload to github. if this can be fixed i would be so happy, i have been waiting for a fisheye for years.

i think i have the normal, it looks correct given that in the middle it says 0/0/1. the reflect and the last part are a bit of a mystery still …

const { width, height } = useThree((state) => state.viewport)
const radius = new THREE.Vector2().distanceTo({ x: width, y: height })

const vec = new THREE.Vector3()
const normal = new THREE.Vector3()
const sph = new THREE.Sphere(new THREE.Vector3(), radius)
const compute = (event, state, previous) => {
  // objects contains all objects with handlers on them
  // cubecam is the cubecam used for filming
  // cameras are the 6 internal cameras of the cubecam
  const objects = state.internal.interaction
  const cubecam = cubeApi.current.camera
  const cameras = cubeApi.current.camera.children

  // Raycast from the render camera to the sphere and get the surface normal
  // of the point hit in world space of the sphere scene
  // (you could make this faster by using Ray.intersectSphere instead of performing a geometry intersection)
  previous.raycaster.ray.intersectSphere(sph, normal).normalize()

  // Reflect the camera view ray across the surface normal.
  const rayDirection = state.camera.getWorldDirection(vec)
  const reflectedRay = rayDirection.reflect(normal)

  // Take the adjusted ray and transform that into the world space of the desk scene
  // with the cube camera by multiplying the ray with the cube camera matrix world.
  reflectedRay.applyMatrix4(cubecam.matrixWorld)  

  // Raycast
  // If state.raycaster is set up correctly interaction will start to work ...
  const x = (event.offsetX / state.size.width) * 2 - 1
  const y = -(event.offsetY / state.size.height) * 2 + 1
  state.pointer.set(x, y)
  state.raycaster.ray.origin.setFromMatrixPosition(state.camera.matrixWorld)
  state.raycaster.ray.direction.copy(reflectedRay).normalize()

  const intersects = state.raycaster.intersectObjects(objects)
  console.log(intersects.length)
}

I don’t see why though, since you will have a direction from the reflect() call, and the only other piece of information is the camera position, which is most probably 0 anyway

oh right, so using raycaster with its ray via origin and direction. im still struggling with it, i guess the above code must be totally wrong.

There are literally hundreds of different ways one could solve this.

Here is one possible way:

  1. Align the sphere and the cube camera on the same center.
  2. Since the sphere radius equals the diagonal from the center of the client area to one of its corners, the mouse coordinates of the client area can be used (if conditioned properly) to draw a line parallel to the ortho camera's forward vector and find the intersection on the sphere exactly under the mouse pointer.

That line will go through two points on the sphere: one on the semi-sphere facing you, and one exiting on the opposite semi-sphere.

The vector from the center of the sphere to the point on the opposite side is the vector you want; it points to the object that corresponds to the view you see under the mouse pointer.

  3. To navigate and look around the scene, you move the sphere and the cube camera together, but you orbit the orthographic camera around the sphere (always looking at its center) in the desired direction!
    (You can place a dummy object at the center of the sphere, then parent the orthographic camera to that object and rotate that object instead, to have the orthographic camera orbit the sphere.)

BTW, you could also figure out that vector from the screen coordinates of the pointer alone using trigonometry - instead of finding the intersection.
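
For what it's worth, a sketch of that trigonometry-only route, assuming the sphere is centered at the origin, the ortho camera looks down -Z, and mx/my are pointer coordinates already conditioned to world units (all assumptions on my part):

import * as THREE from 'three'

function directionFromPointer(mx, my, radius, target = new THREE.Vector3()) {
  const d2 = radius * radius - mx * mx - my * my
  if (d2 < 0) return null // pointer is outside the sphere's silhouette
  // The line enters the near semi-sphere at z = +sqrt(d2) and exits the far
  // one at z = -sqrt(d2); center -> exit point is the direction we want.
  return target.set(mx, my, -Math.sqrt(d2)).normalize()
}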

I think this should be mostly correct, but since I haven’t tried it, it remains a theory, I might have made a mistake or two, or I might be missing something critical - there are no guarantees!

I mean, the obvious - if the initial direction returned by getWorldDirection is in the world frame, why is there a need to convert it to the world frame again after reflect()? - btw, I do not remember if the raycaster returns a local normal or a world one - if local, it needs to be converted instead.

then, ray.origin should be at the cube camera position - is state.camera a cube camera? or the ortho camera?

@drcmda here’s a working csb: link. I’ve marked areas that I’ve worked on with GARRETT NOTE:...

A few comments on the fix (a sketch combining them follows the list):

  • The wrong camera (and therefore ray and direction) was being used when intersecting the sphere
  • You have to make sure the sphere being intersected is using the same radius as the geometry being rendered
  • The ray direction must be flipped across the X axis after being reflected to accommodate the “flip” parameter on the cube map texture
  • A normal matrix must be computed from the cube camera matrix world before transforming the direction
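
Folding those four fixes into the earlier compute gives roughly the following - a sketch, not the exact sandbox code; `orthoCamera` stands for whichever camera actually renders the sphere, and `cubeApi` and `sph` (which must share the rendered geometry's radius) follow the earlier snippet:

import * as THREE from 'three'

const normal = new THREE.Vector3()
const normalMatrix = new THREE.Matrix3()

const compute = (event, state) => {
  state.pointer.set(
    (event.offsetX / state.size.width) * 2 - 1,
    -(event.offsetY / state.size.height) * 2 + 1
  )
  // Intersect the sphere with the camera that actually renders it
  state.raycaster.setFromCamera(state.pointer, orthoCamera)
  if (state.raycaster.ray.intersectSphere(sph, normal) === null) return
  normal.sub(sph.center).normalize()
  // Reflect the view ray, then flip X for the cube texture's "flip" parameter
  state.raycaster.ray.direction.reflect(normal)
  state.raycaster.ray.direction.x *= -1
  // Transform with the normal matrix derived from the cube camera's matrixWorld
  normalMatrix.getNormalMatrix(cubeApi.current.camera.matrixWorld)
  state.raycaster.ray.direction.applyNormalMatrix(normalMatrix)
  state.raycaster.ray.origin.setFromMatrixPosition(cubeApi.current.camera.matrixWorld)
}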

The last thing you might want to do is derive which cubemap camera the computed ray direction is in so you can assign it to raycaster.camera which is used for things like pixel-perfect Line2 intersections. You can also use the camera to initialize the ray at the near camera plane to ensure camera near clipping is taken into account.

Let me know if that’s what you were looking for or if you have any other questions


this is btw the point that does not get mentioned much - if one used Vector4 normal with w = 0 then they could use the same Matrix4 with no “normal matrix” bullshit. same goes to 3js shaders.
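
A quick sketch of what the w = 0 trick means in practice (`m` is any placeholder Matrix4): with w = 0 the translation column has no effect, so the same matrix that moves points also rotates directions:

import * as THREE from 'three'

const n4 = new THREE.Vector4(0, 0, 1, 0).applyMatrix4(m) // translation ignored
const n = new THREE.Vector3(n4.x, n4.y, n4.z).normalize()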

how can i thank you! this is perfect. i thought at first i’ll leave it at rendercubetexture as a low level primitive since the fisheye is hard to abstract but this was the last missing puzzle piece. this will make so many people happy, i’ve been wanting to have a fisheye ever since trying out threejs for the first time. :hugs:


this is btw the point that does not get mentioned much - if one used Vector4 normal with w = 0 then they could use the same Matrix4 with no “normal matrix” bullshit. same goes to 3js shaders.

A normal matrix is a little bit different (computed as the inverse transpose of the transform matrix), though I don’t recall all the math on why :sweat_smile:. I haven’t read the full write-up but this page has a good illustration of the issue that a normal matrix addresses - middle is just a direction transform, right is using the normal matrix:

Given that we’re using the camera transform, which likely won’t have non-uniform scale, it probably doesn’t matter - and truthfully I’d have to think about whether the better thing to do in this case is to use a normal matrix or just the upper 3x3 matrix with Vector3.transformDirection. It’s possible the normal matrix isn’t the best thing to use in this scenario.


About normal matrices … here is a snapshot of 6 slides from my WebGL course. Although you might not be able to read the text, the math notation is (almost) universal.

Here is a brief legend:

  • slide 1 says “matrix for normals”
  • slide 2 shows that almost all normals are wrong (red arrows) when scaling is not uniform
  • slide 3 introduces the tangent $T$ and the normal $N$ before applying the transformation matrix $M$
  • slide 4 shows the transformed $T \to MT = T'$ and $N \to MN = N'$, where $N'$ is wrong, so we need another matrix $X$ such that $XN$ is the correct normal (i.e. perpendicular to $T'$)
  • slide 5 shows the math needed to calculate the matrix $X$, which happens to be $(M^{-1})^T$
  • slide 6 shows that for uniform scaling $M^{-1} = M^T$ (up to a scale factor that normalization removes), so $X = M$ and we do not need an additional normal matrix

Add-on: the reason a different matrix is needed is that uniform scaling preserves directions (so transformed normal vectors remain orthogonal to the surface), while non-uniform scaling changes directions (so transformed normal vectors are no longer orthogonal).
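
For the record, slide 5’s argument compresses to a few lines (my reconstruction, not the slides verbatim):

$$N'^{T} T' = (XN)^{T}(MT) = N^{T} X^{T} M \, T$$

Since $N^{T} T = 0$ holds for every tangent $T$, the left side vanishes for all of them exactly when $X^{T} M = I$, i.e. $X = (M^{-1})^{T}$.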


@gkjohnson i have almost finished a complete abstraction, will publish open source today for others to use :slight_smile: here’s a draft: ThreeJS Journey Lv 1 Fisheye (forked) - CodeSandbox

and if you have time for a smaller math riddle, i would like to also allow it to just project onto a sphere without taking over the render and filling the screen, because that would allow for the “tiny world” use case. like so: ThreeJS Journey Lv 1 Fisheye (forked) - CodeSandbox

currently it’s renderPriority but i will think of some better name. would it be possible then to still enable events? in the compute you can get the scene camera like this:

  const compute = React.useCallback(
    (event, state, prev) => {
      // Raycast from the render camera to the sphere and get the surface normal
      // of the point hit in world space of the sphere scene
      // We have to set the raycaster using the orthocam and pointer
      // to perform sphere intersections.
      state.pointer.set((event.offsetX / state.size.width) * 2 - 1, -(event.offsetY / state.size.height) * 2 + 1)
      state.raycaster.setFromCamera(state.pointer, renderPriority ? orthoC : state.camera)

i’ve already put the check in there, renderPriority ? orthoC : state.camera i mean, but im guessing it can’t be that easy, probably something else has to be done now to allow reflect from any angle.

ive also changed this bit since it has to hit the actual sphere for this

      const [intersect] = state.raycaster.intersectObject(sphere.current)
      if (!intersect) return
      normal.copy(intersect.normal)

@drcmda asked for more details on this but I’ll answer here in case others have thoughts or are interested.

The last thing you might want to do is derive which cubemap camera the computed ray direction is in so you can assign it to raycaster.camera which is used for things like pixel-perfect Line2 intersections.

I think it should be enough to determine which vector component has the largest magnitude (i.e. x, y, or z) in the local camera space, and then depending on the sign of that value you can infer whether it lies in the left, right, top, bottom, forward, or backward facing camera. I’m not sure what else might be needed for pixel-perfect intersections with something like Line2 - I just recall that the raycast function needs that information.
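
A rough sketch of that face selection, assuming the px/nx/py/ny/pz/nz child ordering of THREE.CubeCamera (worth verifying against your three.js version):

// `dir` is the computed ray direction in the cube camera's local space
function pickCubeFaceCamera(dir, cubeCamera) {
  const ax = Math.abs(dir.x), ay = Math.abs(dir.y), az = Math.abs(dir.z)
  let index
  if (ax >= ay && ax >= az) index = dir.x > 0 ? 0 : 1 // +X / -X
  else if (ay >= az) index = dir.y > 0 ? 2 : 3 // +Y / -Y
  else index = dir.z > 0 ? 4 : 5 // +Z / -Z
  return cubeCamera.children[index]
}

// e.g. state.raycaster.camera = pickCubeFaceCamera(rayDir, cubeApi.current.camera)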

ive also changed this bit since it has to hit the actual sphere for this

  const [intersect] = state.raycaster.intersectObject(sphere.current)
  if (!intersect) return
  normal.copy(intersect.normal)

I don’t follow this - keep in mind that raycasting against your geometric sphere may be thousands of times slower than just intersecting the mathematical sphere, since all the triangles will be checked against the ray. You should just be able to make sure the THREE.Sphere is positioned and scaled at the same world location as the geometric sphere.
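
Something along these lines should keep them in sync - a sketch assuming uniform scale, where `geometryRadius` is the radius the geometry was built with (a placeholder name):

const worldScale = sphere.current.getWorldScale(new THREE.Vector3())
sphere.current.getWorldPosition(sph.center)
sph.radius = geometryRadius * worldScale.x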

thanks, i will implement this! i always wondered what the .camera prop was used for in raycaster, i had no idea it was for line2.


ok I see your point, but let me just say I do not remember using non-uniform scaling myself in maybe 10 years. maybe implicitly, when someone saved the model like that, and maybe in jsfiddles intended to confuse beginners, but in production :thinking: nope, do not remember