When enthusiasts talk about computing, they often discuss how various parts of the computer are 'bottlenecked' by certain hardware. Some aspects of system hardware matter far more for performance than others: in gaming, single-core clock speed tends to matter more than core count, and PCI-E 4.0 looks pointless in a world that makes no use of the extra bandwidth.
I've never written for desktop gaming environments, but I suspect that our world might have a slightly different priority list when it comes to hardware. I know for a fact that JS will not make use of the full system memory, typically capping the heap at about 1-4 GB of my 16 GB of system memory (I have first-hand experience with out-of-memory errors). There might also be certain limitations I'm unaware of for GPUs. For instance, I recall that our usable GPU memory was limited to something like twice our screen resolution? I don't know if that is still true, but it leads to a weird reality…
My GPU memory is then bottlenecked by the size of my monitor? Weird. (If that's still true.)
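To put rough numbers on that half-remembered limit: WebGL does expose queryable maximums (`MAX_TEXTURE_SIZE`, `MAX_RENDERBUFFER_SIZE`), though they cap texture *dimensions*, not total GPU memory. Here's a sketch; the `textureBytes` helper is my own name, and the browser query is commented out since it needs a canvas:

```javascript
// Hypothetical helper: bytes needed for one uncompressed RGBA8 texture.
// (4 bytes per pixel; real drivers may add padding or ~33% more for mipmaps.)
function textureBytes(width, height) {
  return width * height * 4;
}

// In a browser you could query the actual per-dimension limits (sketch only):
//   const gl = document.createElement("canvas").getContext("webgl");
//   gl.getParameter(gl.MAX_TEXTURE_SIZE);      // e.g. 16384 on many GPUs
//   gl.getParameter(gl.MAX_RENDERBUFFER_SIZE); // often the same figure

// A single 4K RGBA texture:
console.log(textureBytes(3840, 2160)); // 33177600 bytes, about 33 MB
```

So on a modern GPU reporting 16384, the dimension limit is far beyond any monitor; if there ever was a "twice your screen resolution" rule, it doesn't seem to hold today.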
I think I also recall reading that WebGL tends to pass texture buffers back and forth between the CPU and GPU, while most gaming environments I've read about tend to keep data resident on the GPU itself. I'm not sure if this understanding is correct, but if so, does that mean PCI-E 4.0 and 5.0 would be far more important for our field because of the increased CPU-to-GPU bandwidth?
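As a sanity check on that bandwidth worry, some back-of-envelope arithmetic (the PCI-E figures are approximate nominal spec numbers, not measurements) for the worst case of re-uploading a full-screen buffer every single frame:

```javascript
// Rough per-frame upload cost if we re-send a full-screen RGBA buffer
// to the GPU every frame. Back-of-envelope only, not a benchmark.
const bytesPerPixel = 4; // uncompressed RGBA8
const width = 1920, height = 1080;
const fps = 60;

const bytesPerFrame = width * height * bytesPerPixel; // ~8.3 MB
const bytesPerSecond = bytesPerFrame * fps;           // ~498 MB/s

// Approximate nominal one-way x16 bandwidth per PCI-E generation:
const pcie3 = 15.75e9; // bytes/s
const pcie4 = 31.5e9;  // bytes/s

console.log((bytesPerSecond / pcie3 * 100).toFixed(1) + "% of PCI-E 3.0 x16");
```

If that arithmetic is right, even a naive 1080p re-upload every frame is only a few percent of PCI-E 3.0 bandwidth, so the extra lanes would probably only matter when streaming many large textures or reading pixels back (`gl.readPixels` is the classic stall), rather than for ordinary draw calls.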
So, that raises the question: what artificial design limitations do we REALLY have as web developers, and what changes in hardware should we prioritize or find most exciting if we code in this space? What DOES a killer rig for WebGL look like?
Totally wondering because, well, it's kind of fun to think about ^_^. I wondered if anyone else has given this a lot of thought, could poke holes in things I remember reading, or knows of new things I could stow away in my attic of experience to help guide my crazy inventions for this world.