How to make a texture always face the camera?

UV coordinates don’t have a natural up direction, so I would not expect anything that works on a sphere to generalize. It is “coincidental” that the standard UV mapping for spheres follows polar coordinates.

Slightly off-topic: I once made a demo where I projected a photo from the camera onto geometry defined in the scene. Great fun, actually. The purpose was to mix virtual 3D models into the photo, and it worked!

No. material.envMap would not be used.

I cannot see the image you linked. There’s no assumption about the specific UV mapping of any geometry; the up direction is decided by the camera. It’s like the environment map reflection @marquizzo mentioned, though not exactly the same thing.

How about scale? Do you require that the texture fills the view space projection as perfectly as possible? (In the sphere case you showed, at most half of the texture is visible at a time.)

I can’t imagine how scale might help; perhaps you could provide some images/examples.
I think @marquizzo shed light on the right path, except that the environment map is not used, and neither is the CubeCamera.

If CubeCamera were an option, I would create a second scene with a cube or sphere as a skydome, bind the texture to its back side, put the CubeCamera inside posed with the target mesh’s orientation, take a picture, and then use the resulting texture as the target mesh’s … but unfortunately, it’s not an option …

I mean whether you have requirements on the scaling of the texture. I still consider your problem ill-specified.

Like the environment map, I think. The default repeat settings seem fine. Please feel free to let me know if there’s anything I can make clearer.

An environment map is globally oriented. “Facing the camera” implies you want something in view space, which is the local coordinate system of the camera.

The default repeat settings are with respect to the UV mapping of the model. If we are going to ignore the UV mapping, the requirements for scaling the texture must be clear.

Maybe using the scale of a bounding box or bounding sphere … ?

Yes, I thought about that option. It just depends on what is the desired behavior, which in turn is guided by the application. And I do not at all know the application.

Would you please describe what else needs to be added to my fiddle? And I’ll try.

I don’t know your use case.

I’m afraid I don’t know how to describe it further without citing the proprietary code, which I don’t have the right to share. Thank you for trying to help me.

I saw your question on the highlights for the week.
You can find a possible solution in my program at: Shader Example

In that program, I projected a “moving clouds” shader on a simple rectangle. The shader is not relevant to your question, but the rectangle is relevant since, to make the display work, I had to make the rectangle always face me.

In the code, you will find two sections, “Move and Rotate Camera Position” and “Move and Rotate Plane0” which contain the relevant math.

In the program, I am using the mouse to move the camera around the center. The camera always faces the center, so the plane is opposite the camera and must also rotate around and face the center point.
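The positioning math described above can be sketched like this (plain arrays instead of THREE.Vector3, and the function name is my own illustration, not from the linked program): the plane is placed on the camera-to-center line, past the center, so it always stays opposite the camera.

```javascript
// Place a plane opposite the camera, so that as the camera orbits the
// center, the plane orbits on the far side and can face the center.
function billboardOppositeCamera(cameraPos, center, distance) {
  // Direction from the camera toward the center.
  const dir = [
    center[0] - cameraPos[0],
    center[1] - cameraPos[1],
    center[2] - cameraPos[2],
  ];
  const len = Math.hypot(dir[0], dir[1], dir[2]);
  const n = dir.map((c) => c / len);
  // The plane sits `distance` past the center along the same line.
  // With three.js, plane.lookAt(center) would then orient it toward
  // the center, i.e. toward the camera.
  return [
    center[0] + n[0] * distance,
    center[1] + n[1] * distance,
    center[2] + n[2] * distance,
  ];
}
```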

I hope that is useful.

For that example, you could simply have added the plane as a child of the camera, with the appropriate offset. If you need a moving plane that still faces the camera, you can do what I did for each sphere in my spheres example. The principle is to add the vertex position in view space. For a single object, you could apply the modelViewMatrix to a uniform position, then add the vertex position before applying the projectionMatrix. (Or even project first and add in the projected space.)
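The view-space principle can be sketched as follows; the GLSL in the comment and the function name are my own illustration, not code from the spheres example. Transforming the object’s origin with the (column-major) modelViewMatrix just reads its translation column, and adding the raw vertex offset there makes the quad ignore the model’s rotation, so it always faces the camera.

```javascript
// Rough three.js vertex-shader equivalent:
//
//   vec4 viewCenter = modelViewMatrix * vec4(0.0, 0.0, 0.0, 1.0);
//   gl_Position = projectionMatrix * (viewCenter + vec4(position, 0.0));
//
// The same arithmetic in plain JS, with a 4x4 column-major matrix
// laid out like a WebGL/three.js matrix (elements 12..14 = translation):
function billboardVertexViewSpace(modelViewMatrix, localOffset) {
  const m = modelViewMatrix;
  // View-space position of the object's origin: m * [0, 0, 0, 1].
  const viewCenter = [m[12], m[13], m[14]];
  // Add the untransformed vertex offset in view space; the model's
  // rotation (in m[0..11]) never touches it.
  return [
    viewCenter[0] + localOffset[0],
    viewCenter[1] + localOffset[1],
    viewCenter[2] + localOffset[2],
  ];
}
```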

You are right. Making the plane a child of the camera would have been an improvement and saved some calculations. That particular project turned out to be a dead end because of performance issues. However, I was hoping that the equations would be useful for the purposes of this thread.

But as I read the discussion of your project, I was reminded that what I designed is commonly called a “billboard” and there are probably a million examples of how to create those.

So, in this case, my reply - while well intended - probably contributes nothing useful to this thread and can be ignored.

I think in general having multiple methods in the arsenal can be useful. :smiley:
(Edit: removed off-topic stuff)

(Edit: removed off-topic reply)

(Edit: removed off-topic stuff)

@ken.kin So, for an on-topic (but still blind) piece of advice:

  • I suggest computing (as needed) a (not necessarily smallest) bounding sphere for the geometry (geometry.computeBoundingSphere(), geometry.boundingSphere), and scaling its radius by the object scale (assumed uniform for now).
  • Pass the view-space center and radius of the sphere to the shaders. (E.g. using object.onBeforeRender.)
  • In the vertex shader, compute the view-space position of the vertex and pass it in a varying. (viewPos = modelViewMatrix * vec4(position, 1.0); note modelViewMatrix rather than viewMatrix, since position is in object space.)
  • In the fragment shader, map the view-space xy position by subtracting the sphere center, dividing by two times the sphere radius, and adding vec2(0.5), so that the sphere’s extent covers the [0, 1] range. (vec2 uv = (viewPos.xy - sphereCenter.xy)/(2.0*sphereRadius) + vec2(0.5);)
  • The resulting vector is a uv for the texture lookup. (vec4 color = texture2D(map, uv);)
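A minimal sketch of those steps; the names (mesh, material, sphereCenter, sphereRadius, _center) are my own illustration, not from any posted code. Note I use +0.5 rather than +1.0 in the UV step, so that the sphere’s view-space extent maps to the [0, 1] range.

```javascript
// JS side: pass the view-space bounding-sphere center and radius as
// uniforms each frame, via onBeforeRender. Roughly:
//
//   mesh.onBeforeRender = (renderer, scene, camera) => {
//     mesh.geometry.computeBoundingSphere();
//     const s = mesh.geometry.boundingSphere;
//     _center.copy(s.center)
//       .applyMatrix4(mesh.matrixWorld)            // object -> world
//       .applyMatrix4(camera.matrixWorldInverse);  // world -> view
//     material.uniforms.sphereCenter.value.copy(_center);
//     material.uniforms.sphereRadius.value = s.radius * mesh.scale.x;
//   };
//
// Fragment-shader side, with viewPos = modelViewMatrix * vec4(position, 1.0)
// passed from the vertex shader in a varying:
//
//   vec2 uv = (viewPos.xy - sphereCenter.xy) / (2.0 * sphereRadius) + vec2(0.5);
//   vec4 color = texture2D(map, uv);
//
// The same UV formula in plain JS, for checking the mapping:
function viewSpaceUV(viewPos, sphereCenter, sphereRadius) {
  return [
    (viewPos[0] - sphereCenter[0]) / (2 * sphereRadius) + 0.5,
    (viewPos[1] - sphereCenter[1]) / (2 * sphereRadius) + 0.5,
  ];
}
```

mesh.scale.x is used under the stated uniform-scale assumption; with a non-uniform scale, the largest scale component would be a safer choice.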

(Edit: removed off-topic reply)