What is the best way of drawing a Geometry with a custom shader without using a Camera?
I am basically trying to render a square that covers the entire viewport (PlaneBufferGeometry), using my own shader (RawShaderMaterial), without using any projection (gl_Position is linearly bound to the vertex position).
I see there is a renderBufferImmediate
method, but I am not sure it is the right place to look: https://threejs.org/docs/index.html#api/en/renderers/WebGLRenderer.renderBufferImmediate
You always use a camera; for a fullscreen quad an orthographic camera is used. Here's a simple setup:
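Something along these lines (a minimal sketch; `renderer` and the shader sources are assumed to exist elsewhere):

```javascript
// Orthographic camera spanning NDC [-1, 1] on both axes,
// so a 2x2 plane at z = 0 exactly fills the viewport.
var camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
var scene = new THREE.Scene();

var quad = new THREE.Mesh(
  new THREE.PlaneBufferGeometry(2, 2),
  new THREE.ShaderMaterial({
    vertexShader: yourVertexShader,     // placeholder
    fragmentShader: yourFragmentShader  // placeholder
  })
);
scene.add(quad);

renderer.render(scene, camera);
```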
Thanks for your answer. I find it weird that you always need a Camera; in this special case you don't need to go through a projection matrix, since the relation between the vertex position and the viewport position is linear.
But fair enough, I will pass a fake Camera to WebGLRenderer.render but I won’t use the projection matrix.
You don’t need the projection matrix, but you need a camera.
Well, the point of the Camera is to provide a projection matrix in shaders, right? So if you don’t need a projection matrix, I don’t see why you would need to create a Camera.
It does more than just that: setting up clipping planes, the frustum to perform culling, etc. Rendering just a quad isn't the primary thing you do with THREE; you can also set quad.frustumCulled to false.
If all you need for your project is a quad with a shader, I would suggest using WebGL directly.
When I want to draw just a plain quad to run a fragment shader, all the stuff I need is:
var camera = new THREE.Camera();
var geom = new THREE.PlaneBufferGeometry(2, 2);
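For context, a full version of that minimal setup might look like this (a sketch; `renderer` is assumed to exist, and the fragment shader body is just a placeholder for the real effect):

```javascript
var camera = new THREE.Camera(); // base camera: identity matrices, no projection
var scene = new THREE.Scene();

var material = new THREE.RawShaderMaterial({
  vertexShader: [
    'attribute vec3 position;',
    'void main() {',
    // gl_Position is linearly bound to the vertex position:
    // no projection or model-view matrix involved.
    '  gl_Position = vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'precision highp float;',
    'void main() {',
    '  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);', // placeholder effect
    '}'
  ].join('\n')
});

var quad = new THREE.Mesh(new THREE.PlaneBufferGeometry(2, 2), material);
quad.frustumCulled = false; // the base Camera has no meaningful frustum
scene.add(quad);

renderer.render(scene, camera);
```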
In my case it’s for a RTT, for a complicated effect. It will not use clipping, culling or anything from the camera.
I also need a more classical scene with a camera, that will make use of the RTT result as a texture. So it makes sense using ThreeJS instead of WebGL directly.
Anyway, I am fine passing the camera to the render method, it just feels weird when you look at the code.
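For reference, the render-to-texture flow might look roughly like this (a sketch; `renderer`, the `rttScene`/`rttCamera` pair for the effect pass, the `mainScene`/`mainCamera` pair for the final scene, and the `tEffect` uniform name are all assumptions):

```javascript
// First pass: render the effect into a texture.
var target = new THREE.WebGLRenderTarget(1024, 1024);
renderer.setRenderTarget(target);
renderer.render(rttScene, rttCamera); // rttCamera can be a bare THREE.Camera

// Second pass: use the result as a texture in the classical scene.
mainMaterial.uniforms.tEffect.value = target.texture; // hypothetical uniform
renderer.setRenderTarget(null);
renderer.render(mainScene, mainCamera);
```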
trying to render a square that takes the entire viewport (PlaneBufferGeometry), using my own shader (RawShaderMaterial), without using any projection (gl_Position is linearly bound to the vertex position).
makes one wonder why you even need 3js :) Check this out:
I have been there too, and yes, it is possible, but I guess you are right: it is easier to avoid that mess by staying within 3js.
Just for the record, this is the 3D scene I was implementing:
I did use a Camera in the end for the RTT, even though it’s useless.
why don't you put it on the box surface instead? You would get automatic embedding and fewer pixels to calculate.
I'm sorry, I am not sure I understand.
Put what on the box surface? What do you mean by box surface? Do you mean the pool?
I mean put the whole thing in new THREE.Mesh( new THREE.BoxGeometry( your, box, dimensions ), new THREE.ShaderMaterial( here )); // <—
better yet, just put it on the plane, because realistically in 99% of cases you would only see the top side of this
I think I see what you mean. You mean instead of using a plane for the water and a box for the pool I could use only one box mesh?
I don't know, maybe I misunderstood, but from my quick glance at the code, and the fact that you are not using the camera, I concluded that you basically compute the entire screen.
Oh no I do use a camera to compute the end result, it is useful for the perspective.
But for the caustics computation I don’t need one, so I have a useless camera at this particular place in the code.
Nice! Thanks a lot for the link. I will definitely take a look at this.