How to make a texture always face the camera?

I’m trying to make the texture of a mesh always face an active perspective camera, no matter what the relative positions are, but it seems I can’t get the job done easily. My example is at

Please note that I simplified the problem by using the relativity of motion: I made the mesh rotate instead of moving the camera. I also have some restrictions on solving this; for example, CubeCamera may not be used, as the client machines might have performance issues.

I’m also asking the same question on, and @Marquizzo guided me to ask in the forum. My post on is at

I think it’s easiest to do with shaders. In principle, you can use texture2D with any vec2 as second argument, and if you use a good transformation of view space coordinates, you will be done. The coordinates can be passed as a varying from the vertex shader and be used in interpolated versions in the fragment shader.
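In code, that idea might be sketched roughly like this (a minimal, untested sketch; the `map` uniform name and the `0.15` scale factor are assumptions to be tuned, not taken from any real example):

```javascript
// Sketch: a shader pair where UVs come from view-space coordinates instead of
// the mesh's own UV layout, so the texture follows the camera.
const vertexShader = `
  varying vec4 vViewPos;
  void main() {
    // modelViewMatrix moves the vertex into the camera's (view) space.
    vViewPos = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * vViewPos;
  }
`;

const fragmentShader = `
  uniform sampler2D map;
  varying vec4 vViewPos;
  void main() {
    // Derive UVs from the interpolated view-space position; 0.15 is an
    // arbitrary scale that would depend on the object's size.
    vec2 uv = vViewPos.xy * 0.15 + 0.5;
    gl_FragColor = texture2D(map, uv);
  }
`;

// With three.js this would be used roughly as:
// const material = new THREE.ShaderMaterial({
//   uniforms: { map: { value: texture } },
//   vertexShader, fragmentShader
// });
```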

Or do you have any other requirements to the texture mapping?

@EliasHasle Thank you for your reply. There should be no other requirement as long as it can face the camera constantly. Would you mind building, or pointing me to, an example so I can better understand what you are talking about?

Hm, looking at your code, it seems you are preserving the standard (equirectangular) texture projection on the sphere, but using texture offset to make it face the camera. My solution would not preserve the equirectangular (or other predefined UV) projection. It would just “project” an image out of the camera and onto the object.

I think you may be near a solution (specific to your sphere geometry) by using texture offset, if you also use texture rotation and texture center appropriately.
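For the sphere case specifically, the horizontal part of that offset could be sketched like this (plain JS, untested; the sign and phase depend on where your sphere geometry’s UV seam sits, and the texture would need `wrapS = THREE.RepeatWrapping`):

```javascript
// Sketch: keep an equirectangular texture facing the camera on a sphere by
// shifting the texture's horizontal offset by the camera's azimuth around
// the sphere. cameraPos/spherePos are plain {x, y, z} objects here; with
// three.js they would be Vector3s and the result would go to texture.offset.x.
function textureOffsetX(cameraPos, spherePos) {
  const dx = cameraPos.x - spherePos.x;
  const dz = cameraPos.z - spherePos.z;
  // Azimuth of the camera around the sphere's vertical (Y) axis.
  const azimuth = Math.atan2(dx, dz);
  // Standard sphere UVs span the full circle, so one turn = 1.0 in U.
  return azimuth / (2 * Math.PI);
}

textureOffsetX({ x: 1, y: 0, z: 0 }, { x: 0, y: 0, z: 0 }); // → 0.25
```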

But really, if all you need is a texture that does not rotate, inside a wireframe sphere that does rotate, why not just add the rotating sphere as a child of the fixed sphere?

The wireframe is just to make the movement visible; otherwise it would look like a static object as its texture offsets to face the camera. You are right that the predefined UVs are preserved, and this is desired behaviour. I’m not targeting the sphere geometry specifically; it’s just what I’ve been able to achieve so far that expresses my attempt.

It’s more like what we can do with the Unwrap UVW modifier in 3ds Max, as the following picture shows:

In this picture I intentionally aligned the sphere’s poles to the horizontal plane and kept the texture’s up direction the same as the camera’s. I’m not sure if the offset/repeat/center/rotation parameters of the texture are sufficient to solve this, but I suppose they would be, as they ultimately affect the UV transform. So I think I’m basically trying to move the texture’s coordinates regardless of the geometry of the mesh it’s bound to.
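For reference, three.js folds those four parameters into a single UV transform; a plain-JS sketch of that composition (mirroring what `Matrix3.setUvTransform` does internally) shows how a UV pair is remapped:

```javascript
// Sketch: how offset/repeat/rotation/center combine into one UV transform.
// Rotation and repeat act about (centerX, centerY); offset translates last.
function transformUv(u, v, offsetX, offsetY, repeatX, repeatY, rotation, centerX, centerY) {
  const c = Math.cos(rotation);
  const s = Math.sin(rotation);
  const nu = repeatX * c * u + repeatX * s * v
           - repeatX * (c * centerX + s * centerY) + centerX + offsetX;
  const nv = -repeatY * s * u + repeatY * c * v
           - repeatY * (-s * centerX + c * centerY) + centerY + offsetY;
  return [nu, nv];
}

transformUv(0.25, 0.75, 0, 0, 1, 1, 0, 0, 0); // defaults → identity: [0.25, 0.75]
```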

Are you trying to reflect an environment map?

UV coordinates don’t have a natural up direction, so I would not expect any generalization from working on a sphere. It is “coincidental” that the standard UV mapping for spheres follows the polar coordinates.

Slightly off-topic: I made a demo once where I projected a photo from the camera onto geometry defined in the scene. Actually great fun. The purpose was to mix virtual 3D models into the photo, and it worked!

No. material.envMap would not be used.

I cannot see the image you linked. There’s no assumption of a specific UV mapping for any geometry; the up direction is decided by the camera. It’s like the environment-map reflection @marquizzo mentioned, though not exactly the same thing.

How about scale? Do you require that the texture fills the view space projection as perfectly as possible? (In the sphere case you showed, at most half of the texture is visible at a time.)

I can’t imagine how scale might help; perhaps you could provide some images/examples.
I think @marquizzo shed light on figuring out the right path, except that the environment map is not used, and neither is the CubeCamera.

If CubeCamera were an option, I would create a second scene with a cube or sphere as the skydome, bind the texture to its back side, put the CubeCamera inside and pose it with the target mesh’s orientation, take a picture, and then use the resulting texture as the target mesh’s … but unfortunately, it’s not an option …

I mean whether you have requirements to the scaling of the texture. I still consider your problem ill-specified.

Like the environment map, I think. The default repeat settings seem fine. Please feel free to let me know if there’s anything I can clarify further.

An environment map is globally oriented. “Facing the camera” implies you want something in view space, which is the local coordinate system of the camera.

The default repeat settings are with respect to the UV mapping of the model. If we are going to ignore the UV mapping, the requirements for scaling the texture must be clear.

Maybe using the scale of a bounding box or bounding sphere … ?
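One way to make that concrete (a hypothetical sketch, not from the fiddle): compute how large the bounding sphere appears on screen for a perspective camera, and use that to drive the texture’s repeat so the image roughly fills the object’s silhouette.

```javascript
// Sketch: the bounding sphere's apparent size as a fraction of half the
// viewport height. Returns 1 when the sphere's radius exactly spans half
// the visible height at that distance (so its diameter fills the view).
function projectedHeightFraction(sphereRadius, distance, fovDegrees) {
  const halfFov = (fovDegrees * Math.PI / 180) / 2;
  // Visible half-height of the view frustum at the sphere's distance.
  const halfView = distance * Math.tan(halfFov);
  return sphereRadius / halfView;
}
```

The inverse of this fraction could then be assigned to `texture.repeat` so the texture scales with the object’s on-screen size.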

Yes, I thought about that option. It just depends on what the desired behavior is, which in turn is guided by the application. And I don’t know the application at all.

Would you please describe what else needs to be added to my fiddle? I’ll try.

I don’t know your use case.

I’m afraid I don’t know how to describe further without citing the proprietary code which I don’t have the right to. Thank you for trying to help me.

I saw your question on the highlights for the week.
You can find a possible solution in my program at: Shader Example

In that program, I projected a “moving clouds” shader on a simple rectangle. The shader is not relevant to your question, but the rectangle is relevant since, to make the display work, I had to make the rectangle always face me.

In the code, you will find two sections, “Move and Rotate Camera Position” and “Move and Rotate Plane0” which contain the relevant math.

In the program, I am using the mouse to move the camera around the center. The camera always faces the center, so the plane, which sits opposite the camera, must also rotate around and face the center point.
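The relevant math can be sketched in plain JS (an assumed setup, not the program’s actual code): the camera orbits the origin at yaw angle `theta`, and the plane sits at the same radius on the opposite side, yawed so its +Z normal points back at the origin and the camera.

```javascript
// Sketch: positions for a camera orbiting the origin and a plane kept
// opposite it, rotated to face the center (and thus the camera).
function orbitPositions(theta, radius) {
  const camera = { x: radius * Math.sin(theta), z: radius * Math.cos(theta) };
  // The plane mirrors the camera through the origin; rotating its default
  // +Z normal by theta makes it point from the plane back toward the origin.
  const plane = { x: -camera.x, z: -camera.z, rotationY: theta };
  return { camera, plane };
}
```

In three.js the same effect could be had with `plane.lookAt(camera.position)`, which handles the rotation for arbitrary orbits, including vertical ones.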

I hope that is useful.