My scene is simple: a camera at (0, 0, 2) looking at a SphereGeometry of radius 1 at the origin.
But in my shader code I compute a second, inner sphere in addition to the existing "outer" sphere. What I want is a simplified kind of raymarching: for each vertex on the original sphere, I find the intersection points between the camera ray and both the outer and inner spheres, then march the ray step by step through the volume between them, sampling a color value from an albedo heightmap at each step.
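For context, the per-ray work described above boils down to the standard analytic ray–sphere intersection. A minimal sketch in JavaScript (a hypothetical helper mirroring what the shader does, not code from my project):

```javascript
// Analytic ray–sphere intersection for a sphere centered at the origin.
// rayDir must be normalized so the quadratic coefficient a = 1.
// Returns [tNear, tFar] (hit distances along the ray), or null on a miss.
function intersectSphere(rayOrigin, rayDir, radius) {
  // Solve |o + t*d|^2 = r^2  =>  t^2 + 2*b*t + c = 0 (half-b form).
  const b = rayOrigin[0] * rayDir[0] + rayOrigin[1] * rayDir[1] + rayOrigin[2] * rayDir[2];
  const c = rayOrigin[0] ** 2 + rayOrigin[1] ** 2 + rayOrigin[2] ** 2 - radius * radius;
  const disc = b * b - c;
  if (disc < 0) return null;        // ray misses the sphere
  const s = Math.sqrt(disc);
  return [-b - s, -b + s];          // near and far hit distances
}
```

The same quadratic, evaluated once per ray for the outer radius and once for the inner radius, gives the entry and exit points of the marching segment.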
The result is that any sphere heightmap can be extended inwards to form a 3D volumetric pattern. I already have a working version that looks like this:
But my problem is optimization. I don't want to recompute the intersection points for every vertex in the shader each frame, because the result is always the same for a given vertex (as long as the camera and sphere don't move). If there were a way to run the GLSL code once, calculate the intersection points, and save the result in something like a BufferAttribute, then at render time the fragment shader could read the intersection points from that BufferAttribute instead of recalculating them every frame.
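One way I imagine this could work is to do the precomputation on the CPU instead of in GLSL, since the math is identical and only needs to run once. A sketch, assuming a static camera and sphere; the attribute name `innerHit` and the function names are my own illustration, not an existing API:

```javascript
// Hypothetical CPU-side precomputation: for each vertex of the outer sphere,
// intersect the camera ray with the inner sphere once and pack the hit points
// into a Float32Array, ready to be attached as a per-vertex BufferAttribute.
// Only valid while the camera and the sphere stay fixed.
function precomputeInnerHits(positions, cameraPos, innerRadius) {
  const count = positions.length / 3;
  const out = new Float32Array(count * 3);
  for (let i = 0; i < count; i++) {
    const vx = positions[3 * i], vy = positions[3 * i + 1], vz = positions[3 * i + 2];
    // Normalized ray from the camera through this vertex.
    let dx = vx - cameraPos[0], dy = vy - cameraPos[1], dz = vz - cameraPos[2];
    const len = Math.hypot(dx, dy, dz);
    dx /= len; dy /= len; dz /= len;
    // Ray–sphere quadratic against the inner sphere (centered at the origin).
    const b = cameraPos[0] * dx + cameraPos[1] * dy + cameraPos[2] * dz;
    const c = cameraPos[0] ** 2 + cameraPos[1] ** 2 + cameraPos[2] ** 2
            - innerRadius * innerRadius;
    const disc = b * b - c;
    if (disc < 0) {
      // Ray misses the inner sphere: fall back to the outer vertex itself.
      out[3 * i] = vx; out[3 * i + 1] = vy; out[3 * i + 2] = vz;
      continue;
    }
    const t = -b - Math.sqrt(disc);   // near intersection
    out[3 * i]     = cameraPos[0] + t * dx;
    out[3 * i + 1] = cameraPos[1] + t * dy;
    out[3 * i + 2] = cameraPos[2] + t * dz;
  }
  return out;
}
```

In three.js the result could then be attached once with something like `geometry.setAttribute('innerHit', new THREE.BufferAttribute(hits, 3))` and read in the vertex shader as an attribute, passed to the fragment shader via a varying.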
Is this achievable in any way?