Artificial Browser Limitations and the Best Gaming Rig for WebGL?

Oftentimes, when enthusiasts talk about computing, they discuss how various parts of the system are ‘bottlenecked’ by certain hardware. Some aspects of system hardware matter far more for performance than others: in gaming, for example, single-core clock speed matters more than core count, and PCIe 4.0 is pointless in a world that makes no use of the extra bandwidth.

I’ve never written for desktop gaming environments, but I suspect that our world has a slightly different priority list when it comes to hardware. I know for a fact that JS will not make use of the full system memory, often limiting me to about 1-4 GB of my 16 GB of system RAM (I have first-hand experience with out-of-memory errors). There might also be limitations I’m unaware of for GPUs. For instance, I recall that our usable GPU memory was limited to something like twice our screen resolution? I don’t know if that’s still true, but if so, it leads to a weird reality…

So my GPU memory is bottlenecked by the size of my monitor? Weird. (If that’s still true.)
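For reference, a lot of these caps are directly queryable. A quick sketch (just the standard WebGL 2 parameter queries, nothing from any particular engine) that prints what the implementation on a given machine reports:

```js
// Print the queryable WebGL 2 limits on this machine. These are the
// hard caps the implementation reports, not necessarily total VRAM.
const gl = document.createElement('canvas').getContext('webgl2');

console.log('MAX_TEXTURE_SIZE:', gl.getParameter(gl.MAX_TEXTURE_SIZE));
console.log('MAX_RENDERBUFFER_SIZE:', gl.getParameter(gl.MAX_RENDERBUFFER_SIZE));
console.log('MAX_3D_TEXTURE_SIZE:', gl.getParameter(gl.MAX_3D_TEXTURE_SIZE));
console.log('MAX_ARRAY_TEXTURE_LAYERS:', gl.getParameter(gl.MAX_ARRAY_TEXTURE_LAYERS));
console.log('MAX_VERTEX_UNIFORM_VECTORS:', gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS));
```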

I think I also recall reading that WebGL tends to pass texture buffers back and forth between the CPU and GPU, while most native gaming environments I’ve read about tend to keep data on the GPU itself. I’m not sure this understanding is correct, but if it is, does that mean PCIe 4.0 and 5.0 would be far more important for our field because of the increased CPU-GPU bandwidth?
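To make the contrast concrete, here’s a rough sketch of the ‘keep it on the GPU’ pattern as I understand it: render-to-texture through a framebuffer, with `gl` assumed to be an existing WebGL 2 context.

```js
// Allocate a texture with no CPU-side data: the pixels live only in GPU memory.
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 1024, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

// Attach it to a framebuffer so the GPU can render into it directly.
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);

// Pass 1: draw the scene into `tex` (no pixel data crosses the PCIe bus).
// Pass 2: bind the default framebuffer and sample `tex` in a shader.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
```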

So, it raises the question: what artificial design limitations do we REALLY have as web developers, and what changes in hardware should we prioritize or find most exciting if we code in this space? What DOES a killer rig for WebGL look like?

Totally wondering because - well, it’s kind of fun to think about ^_^. I wondered if anyone else had ever given this a lot of thought, could poke holes in the things I remember reading, or knew of new things I could stow away in my attic of experience to help guide my crazy inventions for this world.

2 Likes

Note that WebGL always uses a native 3D graphics API when talking to the GPU. This might be OpenGL, DirectX, Vulkan or something else; which backend is targeted depends on the device.

The actual “WebGL overhead” comes from the ability to target many different 3D graphics APIs. That means each WebGL command has to be converted to a native counterpart. This is also true for shader compilation: you have at least one additional shader source transformation step compared to a real native 3D application. If you are interested in this topic, read about ANGLE. It’s the software component in Chromium-based browsers that is responsible for the above tasks.
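You can actually inspect this translation yourself. Chromium-based browsers expose the WEBGL_debug_shaders extension (availability varies by browser), which returns the backend-specific source ANGLE produced for a compiled shader. A small sketch:

```js
const gl = document.createElement('canvas').getContext('webgl2');

const vs = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vs, `#version 300 es
in vec4 position;
void main() { gl_Position = position; }`);
gl.compileShader(vs);

// On a DirectX backend this prints HLSL; on desktop GL, translated GLSL.
const ext = gl.getExtension('WEBGL_debug_shaders');
if (ext) {
  console.log(ext.getTranslatedShaderSource(vs));
}
```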

In any event, WebGL applications tend to be more CPU-bound than native ones. That means a proper WebGL engine has to optimize its usage of the WebGL API as well as possible. The general idea is to minimize the number of API calls and use performance-related WebGL features (like VAOs, UBOs or instancing) to speed things up.
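A minimal sketch of the “fewer API calls” idea with WebGL 2 (`gl`, `positionBuffer` and `offsetBuffer` are assumed to be created and filled elsewhere): bake the attribute state into a VAO once, then draw many copies with a single instanced call instead of one draw call per object.

```js
// One-time setup: record all attribute bindings into a VAO.
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);

// Per-vertex positions (a single quad, 6 vertices).
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);

// Per-instance offsets: divisor 1 means "advance once per instance".
gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuffer);
gl.enableVertexAttribArray(1);
gl.vertexAttribPointer(1, 2, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(1, 1);

gl.bindVertexArray(null);

// Per frame: one bind and one call draw 10,000 quads.
gl.bindVertexArray(vao);
gl.drawArraysInstanced(gl.TRIANGLES, 0, 6, 10000);
```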

Apart from WebGL, there is also a certain amount of overhead in browsers themselves. In recent years, browser vendors have introduced new web standards in order to improve the performance of graphics-intensive applications. Examples of such new APIs are ImageBitmap and OffscreenCanvas. A real “killer” WebGL application should definitely keep an eye on this stuff.
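For example, ImageBitmap lets you decode a texture image asynchronously before it ever touches the WebGL context. A sketch, assuming an existing context `gl` and run inside an async function (`'texture.png'` is just a placeholder URL):

```js
// Decode the image off the main thread, then upload it without a
// main-thread decode stall.
const blob = await (await fetch('texture.png')).blob();
const bitmap = await createImageBitmap(blob, { imageOrientation: 'flipY' });

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
bitmap.close(); // free the decoded pixels once they are on the GPU
```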

So, to sum up: having a good understanding of hardware and programming languages is good, but I would say it’s actually more important to use the latest browser features, optimize your usage of the WebGL API and use a proper 3D model format (glTF) in order to achieve good performance.
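With three.js, the glTF part is just a few lines (the model path is a placeholder for your own asset, and `scene` is your existing THREE.Scene):

```js
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('models/scene.glb', (gltf) => {
  scene.add(gltf.scene);
});
```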

7 Likes

ANGLE is definitely something I want to read more about in the future. I wonder if WebGPU will continue to build off of it. I’m always surprised to hear that WebGL is more CPU-bound, as I tend to find the CPU usage is pretty light. Maybe that’s because most of my work is in shaders. But if so, I’d be happy, as GPUs continue to make excellent advances while single-threaded CPU performance is lagging.

That said, multi-threaded CPU loads are definitely something to keep an eye on. I’ve been really sad that Firefox is taking so long to ship OffscreenCanvas. I think it’s done; they just never moved it out of the experimental stage for some reason? There’s also the interesting question of whether using it causes the GPU to do a context switch, which is supposedly really pricey - or whether multiple offscreen canvases even use multiple graphics threads behind the scenes, or all just report to the same thread once they’ve finished converting everything from WebGL to native graphics API calls.
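For reference, the pattern in question looks roughly like this (a sketch; `'render-worker.js'` is a placeholder filename): the main thread transfers control of the canvas, and all WebGL calls then happen in the worker.

```js
// Main thread: hand rendering control of the canvas to a worker.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);
```

```js
// render-worker.js: all WebGL calls now happen off the main thread.
self.onmessage = (event) => {
  const gl = event.data.canvas.getContext('webgl2');
  // ...render as usual; frames appear on the on-screen canvas.
};
```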

I think Apple are moving to ANGLE as well. At least, there are some mentions of it in the Safari release notes.

1 Like

Yep, and with that, WebGL 2 should also be supported by default :tada:.

1 Like

That’s awesome to hear - so that’s knowledge I can learn now that should have some longevity. I’m really glad Three moved over to WebGL 2 by default, and I’m hoping that my other favorite web library (A-Frame) follows along soon - or at least gives us the option of using WebGL 2. I could simplify my life so much by having that available ^_^. I have hacks for stuff now, but they’re not optimal…