Does normal vector length matter?

Recently, a colleague passed me this article by Arseny Kapoulkine, which discusses the quantization of floats, specifically of vertex normals. The article argues that there is no good reason for vertex normals, whose coordinate values range from -1.0 to 1.0, to be passed to the GPU as floats. By encoding them as signed integers instead, you cut down the amount of data that has to be pushed to the GPU.
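For concreteness, here’s a minimal sketch of the encoding step as I understand it, in plain JavaScript (the function name, the symmetric -127 to 127 mapping, and the rounding choice are my own assumptions, not code from the article):

```js
// Quantize float normals (components in [-1.0, 1.0]) to signed bytes.
// Each component is mapped to [-127, 127]; decoding divides by 127 and
// yields a vector that is only approximately unit length.
function quantizeNormals(floatNormals) {
  const quantized = new Int8Array(floatNormals.length);
  for (let i = 0; i < floatNormals.length; i++) {
    quantized[i] = Math.round(floatNormals[i] * 127);
  }
  return quantized;
}

// Example: a roughly unit normal and its 1-byte-per-component encoding.
const encoded = quantizeNormals(new Float32Array([0.267, 0.535, 0.802]));
// encoded -> Int8Array [34, 68, 102]; decoded -> [0.268, 0.535, 0.803]
```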

I tested this out and got a 12% average file size reduction on meshes of varying size and complexity (then I realized I didn’t need to save normals at all, but that’s a different story). I haven’t done any GPU performance tests, but there is no discernible difference in scene quality, just a smaller overall data size.

My colleague was wary, though, since this means the normal coordinates would be in the -127 to 127 value range, meaning they would not, in fact, be normalized. He thought this could affect the lighting calculations.
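To make the worry concrete: a Lambert diffuse term is dot(N, L), which scales linearly with the length of N, so any deviation from unit length scales the lighting by the same factor. A trivial sketch with made-up numbers:

```js
// dot(N, L) scales with |N|: if the raw integer values ever reached the
// lighting math unconverted, the diffuse term would be off by ~127x.
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

const light = [0, 0, 1];                // unit light direction
console.log(dot([0, 0, 1], light));     // 1     -- unit normal
console.log(dot([0, 0, 127], light));   // 127   -- raw Int8 value
console.log(dot([0, 0, 0.992], light)); // 0.992 -- slightly short, ~1% darker
```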

As I mentioned, I did not notice any difference in lighting in the few three.js scenes I tested, but I wanted to ask if there is anything behind this concern: can normal vector length influence lighting calculations?


You mean instead of Float32Array? Yeah, I guess normals do not require that much precision. This is something the user can do when using RawShaderMaterial.

I guess I could use an Int16Array for normals instead? At the moment I’m still exporting meshes as Geometry, not BufferGeometry, so normals are passed via the JSONLoader.
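For what it’s worth, BufferGeometry already accepts integer attributes: the third argument to THREE.BufferAttribute is a normalized flag that tells WebGL to map the signed ints back to [-1, 1] when the shader reads them. A sketch, assuming positions is a Float32Array and int16Normals an Int16Array scaled to the -32767 to 32767 range (setAttribute is the current API; older releases call it addAttribute):

```js
// Quantized normals as a vertex attribute. With normalized = true (third
// argument), the GPU converts the Int16 values back to floats in [-1, 1],
// so shader code still sees approximately unit-length normals.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute(
  'normal',
  new THREE.BufferAttribute(int16Normals, 3, true)
);
```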

Would such a change internally in three.js make any significant performance boosts?

Not sure… It should mean less GPU memory, but we’d have to convert from float to int16 at upload time. We definitely don’t want to stop using floats in user-land.

That is a great idea! I think this would make a real difference on some of the smaller devices being targeted, because they don’t have the best GPUs. Some don’t even have GPUs.

What happens when you pass the normals as signed ints and then normalize them in a highp variable?

You know what, I don’t know… That’s more of a performance question, so I’d say test it out yourself and find out!

I’m not sure how to test the performance there, but I was wondering about precision. Technically it’s like casting to a higher-precision float, so normalization should give a result as close to 1 as is possible at that precision.
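In case it helps frame the question, this is roughly what I mean, as a RawShaderMaterial sketch (raw shaders get nothing prepended, so every uniform and attribute is declared by hand; the fixed light direction is a made-up constant, purely for illustration):

```js
// Read the quantized normal attribute and renormalize it in highp before
// using it for a simple Lambert-style term.
const material = new THREE.RawShaderMaterial({
  vertexShader: `
    precision highp float;

    uniform mat4 projectionMatrix;
    uniform mat4 modelViewMatrix;
    uniform mat3 normalMatrix;

    attribute vec3 position;
    attribute vec3 normal; // fed from the integer attribute

    varying vec3 vNormal;

    void main() {
      // Renormalize in highp: the quantized normal is only
      // approximately unit length.
      vNormal = normalize(normalMatrix * normal);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    precision highp float;

    varying vec3 vNormal;

    void main() {
      vec3 lightDir = normalize(vec3(0.5, 0.8, 0.3));
      // Renormalize again: interpolation shortens the varying.
      float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);
      gl_FragColor = vec4(vec3(diffuse), 1.0);
    }
  `,
});
```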

Just get Fraps Advanced up, and run your test programs xD

True, but I’m still not sure that extra precision is necessary if it takes more processing power than the usual normalization step.