Optimizing GLSL code (slow performance as the data count increases)

Hi shaders gurus,

I would like to display some data on a 3D shape using a ShaderMaterial. I was able to plot the shape (location, scale and rotation) and the corresponding color. However, I don't think my code is optimized at all: as soon as I increase the number of data points to 100 or 1000 (see the commented lines in the fragment shader), performance drops significantly.

Here are the things I tried; some improved performance, but none let me scale up to 1000 data points:

  • Significantly reduced the vertex count of the mesh
  • Replaced if statements with step() and other custom functions, without success (the usual graphics lore about branching)
  • I also noticed that increasing the cube size (edge length) and zooming the camera out helps a bit… should I be writing the code as a function of camera position (which adds complexity)?

Any suggestions/comments, or even debugging in an online code editor, would be fantastic. I have spent almost two weeks trying to improve performance with a large data set.

Below is the codesandbox:

Hey, what exactly is it that you're trying to increase to 100-1000? Your uniforms are Kd, Ks and Ns…

The number of iterations in the fragment shader:

```glsl
for (int i = 0; i < 15; i++) {    // good performance
// for (int i = 0; i < 100; i++) {  // "not the best" performance
// for (int i = 0; i < 1000; i++) { // very bad performance
```

This simulates a large number of data points (refer to the data.js file).

  1. Material shaders are executed per pixel on the screen. The more pixels your object covers (i.e. the closer you zoom in on it, for example), the more times your shader runs.

  2. Could you explain what kind of effect you’d like to achieve? Maybe an example of already existing equivalent? It’s hard for me to imagine a use-case for a non-postprocessing / non-raymarching shader, in which you’d need to sample a texture 100-1000 times per pixel.


Thanks for your replies,

Here are pictures explaining the effect I want (pictures 2 and 3). Basically, I was able to plot the desired intersection on the main mesh (picture 1); however, once I raise the number of data points really high (1000, for example), performance suffers.

Is there a smart way to do this in shaders?

The data looks like this:

The legend looks like this:

Any takers, good samaritans?
I'm sure lots of people will benefit from whatever solution is proposed.
Thank you in advance for your time and effort.

Well, there are two obvious ways:

  1. Pack all your data into a texture and pass the data length as a uniform; the price is lots of texture-read operations.
  2. Hardcode the data in the shader; the price is having to recompile the shader every time the data changes.