Very large models, FF vs Chromium on Windows 10/11 - context lost

I've noticed that when loading very large models (3-4 GB), Firefox manages a lot better than Chromium-based browsers.
In FF I could load 3 models of 3-4 GB of position/index data without issues, while Chromium-based browsers only managed 1 (2 with ANGLE set to OpenGL); if I start loading another one, I get context lost a lot.
When I tried on Linux I did not get this issue with Chromium-based browsers; they managed 4 large ones.
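For reference, a minimal, library-agnostic sketch of detecting and surviving context loss (these are the standard WebGL canvas events; the helper name is made up):

```javascript
// Attach WebGL context-loss/restore handlers to a canvas-like EventTarget.
// Calling event.preventDefault() on "webglcontextlost" tells the browser we
// intend to recover, so "webglcontextrestored" can fire later.
function watchContextLoss(canvas, onLost, onRestored) {
  canvas.addEventListener("webglcontextlost", (event) => {
    event.preventDefault(); // allow restoration instead of a permanently dead context
    onLost();
  });
  canvas.addEventListener("webglcontextrestored", () => {
    onRestored(); // re-upload buffers/textures here
  });
}
```

This at least makes the losses observable instead of silently blanking the canvas.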

On a small laptop I was not able to load the model at all in Chrome/Edge, while in FF it worked; it even managed to load 2.

Yes, loading models this large might be unusual… but it still feels like a bug in Chromium…

Anyone else noticed this?

edge: Version 131.0.2903.70 (Official build) (64-bit)
chrome: Version 131.0.6778.86 (Official Build) (64-bit)
ff: Version 133.0 (64-bit)

I tried to make a small sample and wrote up results from some of the PCs I've tested on.

Not sure how helpful it would be, but I would recommend using some kind of LOD system.

The browser environment looks more and more like a native desktop environment, but there are a bunch of security considerations. For example, what if someone wanted to fill your GPU memory with gigabytes of garbage data and crash your graphics driver? That would be bad. Or what if someone issued a giant shader to the GPU driver to compile, and that caused resource contention for other applications running on your computer?

In essence, the browser treats any web application as potential malware, and you, as the developer of a web application, as a malicious actor. So, in this view, Chromium is arguably doing the right thing, while FF is being too "nice" to your application at the cost of the user's experience.

On today's hardware, I'd say that if you're loading more than 10 million triangles in a single mesh, you're probably doing something wrong.

A mesh with 10,000,000 triangles would at worst require 30,000,000 vertices and 30,000,000 index values. That's 360 MB for vertex data (3 float32 values per vertex) and 120 MB for index data, for a total of 480 MB. And I would say that even this is too much for browser use cases, as you don't know what hardware someone is running, and you can't easily push this amount of data over the network. I mean, you can, but it will cost you, and you're going to make the user wait a fair bit, so it's best to use a LOD system and work with the smallest mesh possible.

I don't mean going down to 10 polygons per mesh, but at 10,000,000 triangles you already have a polygon density of roughly 5 polygons per pixel at standard 1080p, which is pretty absurd (1920 × 1080 = 2,073,600 pixels).
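The arithmetic above can be sketched directly (worst case as described: every triangle gets 3 unique vertices, float32 positions, uint32 indices):

```javascript
// Worst-case GPU memory for a fully unwelded indexed mesh:
// each triangle contributes 3 unique vertices and 3 index entries.
function meshMemoryBytes(triangles) {
  const vertices = triangles * 3;
  const vertexBytes = vertices * 3 * 4; // 3 float32 components per vertex
  const indexBytes = vertices * 4;      // one uint32 index per vertex
  return { vertexBytes, indexBytes, total: vertexBytes + indexBytes };
}

const m = meshMemoryBytes(10_000_000);
console.log(m.total / 1e6);              // 480 (MB)
console.log(10_000_000 / (1920 * 1080)); // ≈ 4.8 triangles per pixel at 1080p
```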

You should aim to preserve visual curvature, so some polygons will be necessary to prevent a "polygonal" look on curved shapes. If you're working with CAD-like models, you should generally treat those as a starting point for optimization, and apply some kind of post-processing to reduce the polygon count to a reasonable amount.
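A distance-based LOD switch can be as simple as picking the last level whose distance threshold the camera has passed; this library-agnostic sketch mirrors how three.js's built-in `THREE.LOD` selects levels (the data shape here is my own):

```javascript
// levels: sorted ascending by minDistance, each holding the mesh to show
// once the camera is at least that far away.
function selectLOD(levels, cameraDistance) {
  let chosen = levels[0];
  for (const level of levels) {
    if (cameraDistance >= level.minDistance) chosen = level;
  }
  return chosen.mesh;
}

const levels = [
  { minDistance: 0, mesh: "high" },
  { minDistance: 50, mesh: "medium" },
  { minDistance: 200, mesh: "low" },
];
console.log(selectLOD(levels, 10));  // "high"
console.log(selectLOD(levels, 120)); // "medium"
```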

The same concept applies to textures too; I see some people use 8K images to texture a 1 cm screw in a large scene.

You might be doing something like a CAD application on the web, but if you do - I salute you, and take off my hat with condolences, as that is an uphill battle :sweat_smile:


Yes, it's CAD, and it lets the user do a lot of coloring/opacity/hiding/moving of items :joy:
And it's only Chromium on Windows desktop; Linux does not have these issues.

Many models are contained within a 100m × 100m × 100m space, and users move around a lot, and fast. So it can easily turn into a bad user experience with jank or waiting if I need to unload/load a lot. I know I will need to look more into loading/unloading, but I was hoping I could focus on more useful features using the data we have, and maybe point clouds etc. :upside_down_face:

So I'm rooting for Chromium on Windows desktop allowing users to load more onto the GPU if they have the hardware/memory for it and want to. Then we can do more on the GPU without loading/unloading, and just hide/switch LOD using draw ranges with multidraw. :tada:
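The hide/show-via-draw-ranges idea can be sketched like this: keep all parts in one merged index buffer, record each part's [start, count], and each frame emit only the visible ranges (merging contiguous ones). The resulting pairs map directly onto the firsts/counts arrays of the `WEBGL_multi_draw` extension. A sketch under an assumed data layout:

```javascript
// parts: [{ start, count }] in index-buffer order; visible: Set of part indices.
// Returns merged [start, count] ranges for multiDrawElements-style calls.
function visibleDrawRanges(parts, visible) {
  const ranges = [];
  parts.forEach((part, i) => {
    if (!visible.has(i)) return;
    const last = ranges[ranges.length - 1];
    if (last && last[0] + last[1] === part.start) {
      last[1] += part.count; // contiguous with the previous range: extend it
    } else {
      ranges.push([part.start, part.count]);
    }
  });
  return ranges;
}

const parts = [
  { start: 0, count: 300 },
  { start: 300, count: 600 },
  { start: 900, count: 150 },
];
console.log(visibleDrawRanges(parts, new Set([0, 1]))); // [[0, 900]]
console.log(visibleDrawRanges(parts, new Set([0, 2]))); // [[0, 300], [900, 150]]
```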

It will be fun to see what WebGPU gives us in the future; I'll probably want to load even more onto the GPU to use compute :joy: Maybe upload is better there too.

A very old video of the app is here; a lot has changed.

Looks interesting. Maybe consider virtual geometry, and consider doing automatic partitioning of your models. The user would not care that it's, say, 2 geometries instead of 1, as long as their workflow does not change; they don't even need to be aware of it.
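Automatic partitioning can start very simply: split the index buffer into chunks under a fixed triangle budget, and leave smarter schemes (spatial splits, meshlets) for later. A minimal sketch (the budget number is arbitrary):

```javascript
// Split a flat triangle index array into chunks of at most maxTriangles each.
// Each chunk can then be uploaded and drawn as its own geometry.
function partitionIndices(indices, maxTriangles) {
  const chunkSize = maxTriangles * 3; // 3 indices per triangle
  const chunks = [];
  for (let i = 0; i < indices.length; i += chunkSize) {
    chunks.push(indices.slice(i, i + chunkSize));
  }
  return chunks;
}

const chunks = partitionIndices(new Array(30).fill(0), 4); // 10 triangles, budget 4
console.log(chunks.map((c) => c.length / 3)); // [4, 4, 2]
```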

But yeah, CAD on the web is a rough task :sweat_smile:
