How to test performance? What are the reasonable limits?

Hi, I’m working on my first three.js project. It is a front-end for a WordPress CMS: each post is represented by a 3D cube textured with its featured image. It can be tested online here:

https://tgp.e451.net/#debug

Click on any cube or scroll down to see how it works. The cubes come to the fore when clicked.

I’m afraid this whole setup works against performance:

  1. All cubes move independently, so their geometries cannot be merged and there are many draw calls.
  2. There are many textures operating at the same time.
  3. The textures have a high resolution (1024 x 1024) because, when they come to the fore, they are seen at a large size.
  4. There is a directional light that casts shadows; the cubes both cast and receive shadows, and the floor receives them.
  5. The scene is large, so the shadow.mapSize should be 2048 x 2048.
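(One thing I’ve read regarding point 1: three.js’s InstancedMesh can draw all cubes in a single call while each instance keeps its own matrix, so independent movement is still possible — though the per-cube featured-image textures would then need an atlas or texture array. A sketch of the per-instance matrix bookkeeping; the InstancedMesh calls themselves are left as comments so the sketch runs standalone, and the cube positions are made up:)

```javascript
// In a real scene the mesh is created once:
//   const mesh = new THREE.InstancedMesh(geometry, material, count);
// and per frame, for each cube i:
//   mesh.setMatrixAt(i, matrix); mesh.instanceMatrix.needsUpdate = true;

// setMatrixAt consumes a 4x4 matrix; a plain translation matrix in
// three.js's column-major element order looks like this:
function translationMatrix(x, y, z) {
  return [
    1, 0, 0, 0, // column 0
    0, 1, 0, 0, // column 1
    0, 0, 1, 0, // column 2
    x, y, z, 1, // column 3: translation
  ];
}

// One flat Float32Array holds every cube's matrix (16 floats each),
// which is what instanceMatrix stores on the GPU side.
function packInstanceMatrices(positions) {
  const array = new Float32Array(positions.length * 16);
  positions.forEach((p, i) => array.set(translationMatrix(p.x, p.y, p.z), i * 16));
  return array;
}

const packed = packInstanceMatrices([{ x: 0, y: 0, z: 0 }, { x: 2, y: 1, z: 0 }]);
console.log(packed.length); // 32 floats = 2 matrices
```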
This info comes from Spector.js:

Commands

  • uniformMatrix4fv: 65
  • drawElements: 63
  • bindTexture: 31
  • uniform1f: 30
  • bindVertexArray: 5
  • useProgram: 4
  • bindFramebuffer: 2
  • clear: 2
  • clearColor: 2
  • drawBuffers: 2
  • frontFace: 2
  • scissor: 2
  • uniform1i: 2
  • viewport: 2
  • uniform1iv: 1
  • uniform3fv: 1

Commands Summary

  • total: 216
  • draw: 63
  • clear: 2

Primitives

  • total: 2364
  • triangles: 2364
  • triangleStrip: 0
  • triangleFan: 0
  • lines: 0
  • lineStrip: 0
  • lineLoop: 0
  • points: 0

Frame Memory Changes

  • Buffer: 0
  • Renderbuffer: 0
  • Texture2d: 0
  • Texture3d: 0
  • Program: 0

Total Memory (Seconds Since Application Start: Bytes)

Buffer

  • 0: 10312

Renderbuffer

  • 0: 20971520

Texture2d

  • 0: 146800668

Program

  • 0: 117445

I can see it moving smoothly on my computer, but I don’t know how it will behave on other computers with less memory, CPU, GPU, etc…

My questions are:

  • How do you test your project for low-performance machines?
  • Is this project within reasonable limits or should I rule it out entirely? Any idea for improving its performance?
  • What are the reasonable limits?

Any insight appreciated. Thanks.

That’s not a question which can be answered globally, imo. The range of possible client devices is just too broad.

As a general, pragmatic strategy I’d suggest you define a frame rate, which you don’t want to fall below, like, for instance, 30 fps.

Maybe you can query the client’s capabilities and implement an adaptive approach in your app, successively dropping features in sequence from “mere eye candy” to “indispensable core navigation”, until your minimum fps threshold is no longer breached…

Features you might want to make “droppable” could include:

  • shadow casting and receiving
  • random object movements (sometimes barely noticeable creeping)
  • lower texture map resolutions (mip-maps?)
  • implementing LODs, for large scenes
  • cube movement
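A minimal sketch of that adaptive dropping, assuming a per-frame tick that knows its delta time. The feature names and disable callbacks are hypothetical placeholders, and the frame time is smoothed so a one-off spike doesn’t throw features away:

```javascript
// Features ordered from "mere eye candy" to closest-to-core; each entry
// is a callback that turns the feature off (hypothetical examples).
const droppable = [
  { name: 'shadows',     disable: () => { /* renderer.shadowMap.enabled = false */ } },
  { name: 'idle creep',  disable: () => { /* stop random cube movements */ } },
  { name: 'hi-res maps', disable: () => { /* swap 1024px textures for 512px */ } },
];

const TARGET_MS = 1000 / 30; // 30 fps floor
let smoothed = 16;           // exponentially smoothed frame time (ms)
let next = 0;                // index of the next feature to drop

// Call once per frame with the measured delta time in milliseconds.
function adapt(deltaMs) {
  smoothed = smoothed * 0.95 + deltaMs * 0.05; // ignore one-off spikes
  if (smoothed > TARGET_MS && next < droppable.length) {
    droppable[next].disable();
    console.log('dropped:', droppable[next].name);
    next++;
    smoothed = 16; // give the change a moment before dropping more
  }
}
```

The smoothing matters: a single slow frame (a garbage-collection pause, a tab switch) should not permanently cost the user shadows.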

As a “tangible” feedback:

my 2014 iMac 5K constantly runs its fans, with the GPU at a minimum load of 30% and the CPU at 20%, even on your landing page without any user input (mouse movements, clicks, keystrokes). My machine maintains 60 fps throughout, but this load while idling is definitely too much, imo.


Thank you very much for the answer. Very useful and clear. The only part that remains unclear to me is “query the client’s capabilities”. I found this advice recommending relying on FPS instead of GPU/CPU/memory usage. Is that right?

This gets me into a logical problem: my app has a tick function with the calculated delta time, and I could use this information to implement the LOD system. But to calculate the delta time I first need to load all the content and run the application in the browser, even on low-performance machines.

For example, to simplify, just related to texture resolution, something like:

// load hi-res textures at start

let lodProfile = 1

if (deltaTime >= 32) {   // 32 ms per frame ≈ 30 fps
    lodProfile++

    // dispose the currently loaded textures

    // load the textures corresponding to the new LOD profile
}

Is this the right approach?
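One refinement I’m considering for the check above: a single slow frame (a GC pause, a tab switch) would trigger a texture reload, so it seems safer to average the delta over a window and add a cooldown after each switch. A sketch, with the texture-loading hook left as a hypothetical placeholder:

```javascript
const WINDOW = 60;       // average over ~1 s worth of frames
const THRESHOLD_MS = 32; // ≈ 30 fps
let samples = [];
let lodProfile = 1;
let cooldown = 0;        // frames to wait after a switch

// hypothetical hook: dispose current textures, load the given profile
function applyLodProfile(profile) { /* dispose + reload textures */ }

// Call once per frame with the measured delta time in milliseconds.
function onTick(deltaMs) {
  if (cooldown > 0) { cooldown--; return; }
  samples.push(deltaMs);
  if (samples.length < WINDOW) return;
  const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
  samples = [];
  if (avg >= THRESHOLD_MS) {
    lodProfile++;
    applyLodProfile(lodProfile);
    cooldown = WINDOW; // let the new textures settle before re-measuring
  }
}
```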

Maybe https://www.npmjs.com/package/detect-gpu can help.


I can’t give you a piece of code on that, but three.js ships with “stats.module.js”, which gives you a live bar-graph-like history of the current and past frame rate. See this example on three.js. The frame rate is nothing more (or less) than the inverse of the time per frame, as you suggest yourself. You could look at its source code to see how it collects that information.
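At its core, stats.module.js just counts frames per elapsed second. A minimal version of that bookkeeping (timestamps are passed in explicitly here so it is easy to test; in the browser you would feed it performance.now() each frame):

```javascript
// Counts frames and reports fps once per second -- the same basic
// bookkeeping stats.js does internally.
function makeFpsMeter() {
  let frames = 0;
  let prevTime = null;
  let fps = 0;
  return function tick(now) {            // now: timestamp in ms
    if (prevTime === null) { prevTime = now; return fps; }
    frames++;
    if (now - prevTime >= 1000) {        // a full second has elapsed
      fps = Math.round((frames * 1000) / (now - prevTime));
      frames = 0;
      prevTime = now;
    }
    return fps;                          // last completed measurement
  };
}

// Simulated 60 fps: one frame every ~16.67 ms for two seconds.
const tick = makeFpsMeter();
let fps = 0;
for (let i = 0; i <= 120; i++) fps = tick(i * (1000 / 60));
console.log(fps); // 60
```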

LOD stands for “level of detail”, a technique that draws objects far from the viewer/camera in low(er) detail than objects close to the camera. There is an example on three.js on how to do that. It has nothing to do with the overall time consumption of the scene as a whole, but rather with per-object, distance-based detail selection.
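three.js exposes this as THREE.LOD, where you register alternatives with lod.addLevel(mesh, distance) and the renderer picks one per frame. The selection itself is just a distance-threshold lookup; a standalone sketch with made-up level names and distances:

```javascript
// Levels sorted by ascending distance threshold; each level is shown
// from its `distance` outward until the next level's threshold starts.
const levels = [
  { name: 'high',   distance: 0 },
  { name: 'medium', distance: 50 },
  { name: 'low',    distance: 200 },
];

// Pick the most detailed level whose threshold the camera has passed --
// essentially what THREE.LOD does when it updates against the camera.
function selectLevel(cameraDistance) {
  let current = levels[0];
  for (const level of levels) {
    if (cameraDistance >= level.distance) current = level;
    else break;
  }
  return current.name;
}

console.log(selectLevel(10));  // 'high'
console.log(selectLevel(120)); // 'medium'
console.log(selectLevel(500)); // 'low'
```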

You are right in your observation that my initial proposal bears the risk of overloading weak clients initially by relaxing only “after the fact”. You could of course go the other way around: start easy, and increase load/features as long as you don’t see a drop in frame rate below your self-defined threshold.


Thank you very much for the clarifications and the links. I have enough material here to research.

Thanks for sharing this information, it was very useful.