WebGL performance monitor with GPU loads

I wanted to measure the load on the CPU and GPU simultaneously, so I created this tool.
Here is a live example of how it works: https://munrocket.github.io/gl-bench/examples/webgl.html



Any feedback would be appreciated.
Source: https://github.com/munrocket/gl-bench
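
A minimal setup looks something like this (a rough sketch; check the repo for the current API, since the `begin`/`end`/`nextFrame` names may change between versions):

```js
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl');
const bench = new GLBench(gl); // GLBench comes from the UMD build or an ES import

function draw(now) {
  bench.begin();  // start CPU (and, where supported, GPU) measurement
  // ... render the scene here ...
  bench.end();    // stop measurement
  bench.nextFrame(now);
  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);
```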

10 Likes

How do you get the GPU% profiler to show up? It’s not displaying for me.

1 Like

I opened Chrome on desktop, which supports the EXT_disjoint_timer_query extension.
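
You can check support on your machine with a one-liner (a sketch; WebGL2 contexts use a different extension name):

```js
const gl = document.createElement('canvas').getContext('webgl');
const ext = gl && gl.getExtension('EXT_disjoint_timer_query');
console.log(ext ? 'GPU timer queries available' : 'no GPU% on this browser');
// for WebGL2 contexts, query 'EXT_disjoint_timer_query_webgl2' instead
```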

1 Like

Oh, I see. It didn’t show up in Firefox, but it does in Chrome.

For some reason this demo shows 445 FPS on my machine, do you know why? My monitors are set to refresh at 60 FPS, and requestAnimationFrame should cap out at that setting, yet I’m getting 445 on the widget.


Looks like vsync is turned off in your browser/OS, so your GPU is running constantly without synchronizing to rAF. Can you check the vsync flag in chrome://flags and about:gpu?
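
A quick way to verify is to log the delta between rAF callbacks (just a sketch):

```js
let last = performance.now();
requestAnimationFrame(function tick(now) {
  console.log(`frame delta: ${(now - last).toFixed(1)} ms`); // ~16.7 ms with 60 Hz vsync
  last = now;
  requestAnimationFrame(tick);
});
```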

(I am thinking about an explicit frame-iterator API, in order to measure outside of requestAnimationFrame and to exclude bugs when you have two measurements with the same name in the rAF loop. It would be more verbose but give you more control… and probably more bugs, ha-ha, because then you could increment the frame counter by hand two or more times and get a crazy FPS.)

Using Chrome 78 on Windows, the GPU shows 0% in all examples for me.

I have a pretty powerful GPU (a GTX 1080), and this is what I get:

Pretty sure it’s wrong :smiley:

Strange, and the GPU load is too big for a GTX 1080. Right now it’s really device specific.

Is this due to lack of vsync? I noticed that when I tested I was getting like 400fps but my GTX 1060 was at like 90% lol.

No, EXT_disjoint_timer_query was removed from browsers due to a memory exploit. Funny, but it still sometimes works and is still listed on webglstats.com
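
For reference, this is roughly how GPU time is sampled through that extension when it is still available (a sketch; results come back asynchronously, in nanoseconds):

```js
const ext = gl.getExtension('EXT_disjoint_timer_query');
const query = ext.createQueryEXT();

ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
// ... draw calls to be timed ...
ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

// poll on a later frame; the result is not available immediately
const ready = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
if (ready && !disjoint) {
  const gpuTimeNs = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT); // nanoseconds
}
```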

1 Like

FIXED! So, it works in web workers now. https://munrocket.github.io/gl-bench/examples/web-workers.html
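
The gist of moving rendering into a worker is OffscreenCanvas (a sketch; the worker file name is hypothetical):

```js
// main thread
const offscreen = document.querySelector('canvas').transferControlToOffscreen();
const worker = new Worker('render-worker.js'); // hypothetical file name
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render-worker.js
self.onmessage = (e) => {
  const gl = e.data.canvas.getContext('webgl');
  // run the render loop and the benchmark here,
  // posting samples back with self.postMessage(...)
};
```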

4 Likes

Works great in Chrome and FF on Android :+1:

Is there any way to show memory as well?

What I meant by “wrong” is not just the load but also the FPS count. I attached a JS profile to show that a frame was being dispatched every 16.6 ms (a stable 60 FPS), while the widget shows 602 FPS.

Yes, memory can also be shown; I need to change the design shape a little bit.

Also, I didn’t know that we have canvas inspection in DevTools.
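
In Chrome the heap size can be read from the non-standard performance.memory API (a sketch; it doesn’t exist in other browsers):

```js
// Chrome-only, non-standard API
if (performance.memory) {
  const used = performance.memory.usedJSHeapSize / 1048576;
  const limit = performance.memory.jsHeapSizeLimit / 1048576;
  console.log(`heap: ${used.toFixed(1)} / ${limit.toFixed(1)} MB`);
}
```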

You could do the same as with FPS/ms: just toggle between CPU/RAM on click.

Looks interesting. We could turn off GPU tracking in that mode, because the tracking itself idles the rendering pipeline a little bit and drops FPS.

Added heap memory size and an FPS chart; InstancedMesh is supported as well.

1 Like

I tried this out but wasn’t sure it was showing the correct load: fullscreening a custom shader didn’t seem to make much difference, though maybe it was my setup. I’m using three.js and one of the objects uses a custom shader material. How can I ensure that it’s catching all the processes?

Also, on web worker support: does this catch the load placed by web workers, or does it use web workers to do the processing?

I really like where this is going, btw. I tried to make something similar myself but ran into problems with it. Accurate benchmarking and load profiling is essential when making stuff like this, especially when building on a fast machine while making sure there are enough spare resources for slower machines.

  • Can you show a minimal example? That’s possible when you use another context or a WebGL extension with instancing/draw_buffers.

  • Yes. I created one web worker with three.js and gl-bench (without the DOM) that sends log messages to the main thread. On the main thread I create a new instance with the UI and logger handlers; see the sketch after this list.

  • You can try the Three.js Developer Tools or Spector.js, but either way all timers will be on the CPU. If you really want to profile the GPU, consider native tools with the ANGLE backend on Windows, as described here. The web API is limited now; very few PCs still support the EXT_disjoint_timer_query extension.
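
A rough sketch of that wiring (the message shape and updateWidget are illustrative, not the actual gl-bench log format):

```js
// in the worker: three.js + gl-bench without DOM, forwarding each sample
self.postMessage({ fps: 59.8, cpu: 32.1, gpu: 74.5 });

// on the main thread: a UI-only side that renders whatever arrives
worker.onmessage = (e) => {
  const { fps, cpu, gpu } = e.data;
  updateWidget(fps, cpu, gpu); // hypothetical DOM update
};
```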

P.S. [quote=“Cotterzz, post:18, topic:9970”]
fullscreening a custom shader didn’t seem to make much difference, though maybe it was my setup.
[/quote] Maybe you have a bottleneck in the vertex shader. Reducing the texture size doesn’t guarantee a performance boost.

Yeah possibly. I just wonder how you’re measuring the GPU if you no longer have direct access to the data, and with shaders it can be hard to measure because of the parallel nature of the GPU: per-pixel shading, IIRC, uses equal processing time up to a point, then slows down.

What would be really useful is knowing how much headroom I have left before hitting GPU/shader thresholds, as I’d like to keep adding stuff knowing there’s spare processing power.

Does that make sense? Sorry if I’m not explaining it well.

Also, how do I know I’m catching all the processes? Is it enough to enclose the render loop?