Let's solve pixelated SVG texture rendering in Three.js

Is it solvable?

Before starting, I’ve seen the other threads: using SVGLoader is not an ideal final solution for rendering absolutely any SVG content because SVGLoader cannot draw all SVG features, and features it supports often come out glitched and distorted. I’ve also tried changing mipmap modes, but that seems unrelated to the following.

The only currently-available approach that I know of, for 100% SVG support, is to render an SVG image into a texture. However, as many of you reading know, the result is pixelated and blurry compared to native SVG rendering.

I have a feeling this texture approach can be improved, but I’m not sure how yet.

In the following screenshot, I’ve put a Mesh with a PlaneGeometry having an SVG image as a texture (by passing the <img> to Texture.image) side-by-side with a DOM element containing the <img> linking to the same SVG.

  • On the left is the mesh plane.
  • The mesh plane is aligned to screen space, such that one unit of length along the surface of the plane is one CSS pixel.
  • On the right is a DOM element with the same width and height (in CSS pixels) as the mesh plane.
  • The width and height values of the mesh, and of the DOM element, are numerically identical.

Despite my WebGLRenderer having the correct pixel ratio set (my devicePixelRatio is 2), we can see that the SVG rendering on the left is more pixelated, and thus more blurry.
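For reference, here's a minimal sketch of how that screen-space alignment can be set up (the helper name is mine; it assumes an orthographic projection whose frustum is measured in CSS pixels):

```javascript
// With an orthographic frustum spanning the canvas's CSS size, one world
// unit equals one CSS pixel, so a PlaneGeometry(w, h) covers exactly
// w×h CSS pixels on screen (with renderer.setPixelRatio(devicePixelRatio)).
function cssPixelFrustum(cssWidth, cssHeight) {
	return {
		left: -cssWidth / 2,
		right: cssWidth / 2,
		top: cssHeight / 2,
		bottom: -cssHeight / 2,
	}
}

// Usage (browser, three.js):
// const f = cssPixelFrustum(canvas.clientWidth, canvas.clientHeight)
// const camera = new THREE.OrthographicCamera(f.left, f.right, f.top, f.bottom, 0.1, 1000)
```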

First question:

Why might the texture (which is aligned to CSS pixels) be more pixelated than the native SVG rendering? I would imagine that the texture system should take this into account. Or maybe I instead need to make the mesh twice as big numerically, then scale it back down by 0.5 (I’ll report back on how this turns out).

1 Like

Could this be an aspect ratio issue? Perhaps it could be improved by creating a texture to match the SVG aspect ratio rather than square?

Also, have you tried using the Path2D API?

1 Like

I updated my post to mention that I set the SVG as a texture by applying the <img> tag to a Texture.image.

I discovered something: If instead of setting Texture.image to the <img>, I draw the <img> to a <canvas> with ctx.drawImage using the same resolution as the <img> (times devicePixelRatio), then assign the <canvas> to Texture.image, the result is remarkably better.

This makes me wonder: what is it about the <img> element when used as Texture.image that results in that being lower resolution than when the <canvas> is used as Texture.image? Maybe there is something to fix in Texture?
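For anyone who wants to try it, here's roughly what the workaround looks like (a sketch, not final code; the `img.my-svg` selector is made up):

```javascript
// Sketch of the <canvas> workaround. The key is sizing the canvas backing
// store at CSS size × devicePixelRatio, so the SVG gets rasterized at
// device resolution instead of CSS resolution.
function backingStoreSize(cssWidth, cssHeight, pixelRatio) {
	return {
		width: Math.round(cssWidth * pixelRatio),
		height: Math.round(cssHeight * pixelRatio),
	}
}

// Browser-only part (guarded so the helper above runs anywhere):
if (typeof document !== 'undefined') {
	const img = document.querySelector('img.my-svg') // hypothetical selector
	const dpr = window.devicePixelRatio || 1
	const { width, height } = backingStoreSize(img.width, img.height, dpr)

	const canvas = document.createElement('canvas')
	canvas.width = width
	canvas.height = height
	canvas.getContext('2d').drawImage(img, 0, 0, width, height)

	// texture.image = canvas // or: new THREE.CanvasTexture(canvas)
	// texture.needsUpdate = true
}
```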

Unless I missed it, I don’t see Texture doing anything with image.width, image.height, image.naturalWidth, or image.naturalHeight, and just passing it to gl here: three.js/WebGLTextures.js at cba9457c1c8b2a68a2bb593a3a329545a10f91e4 · mrdoob/three.js · GitHub

				state.texImage2D( _gl.TEXTURE_2D, 0, glInternalFormat, glFormat, glType, image );

Given that both the <img> and the <canvas> render crisply natively, why might the Texture look crisp only with a <canvas> and not with an <img>?

1 Like

I think I ran into similar issues in the past. My issues were:

  • insufficient resolution
  • incorrect mip-maps
  • wrong sampling types (linear/near/mip etc.)
  • poor screen pixel/texel alignment
  • aspect ratio

Ultimately, I think that using SVG the way you do is a little weird. You can just use the browser’s native SVG rendering and get a great result; if your SVG is entirely static and will be seen at a specific, known size, you can just render it to a texture offline.

Rendering SVG in a WebGL context makes sense to me if you have a dynamic SVG, that is to say, you like the SVG format and modify your image in some way before it is rendered. I like SVG too, it’s really useful and has a lot of nifty tricks that HTML/CSS doesn’t. I guess if I were to look at SVG rendering in three.js, I would look more towards the loader side of things; otherwise it’s not really about SVG, it’s about pixel-perfect reproduction of plain old 2D textures.

Alternatively, if you’re just after vector graphics, there are other, simpler formats out there, as well as subsets of SVG. It all depends on your use case.

From the above conversation, you can’t possibly know what I will do with it. :stuck_out_tongue:

(I will be applying light/shine/shadow to surfaces (of geometries) that are covered by SVG graphics. This is not currently possible with the browser’s native SVG or CSS rendering.)

Did you know that when using CSS 3D transforms, the browser renders the CSS surface (including any SVG content) on the GPU? This shows that SVG can be rendered crisply as a texture on the GPU.

Did you know that the Adobe team created a CSS Shader (aka Custom Filters) spec, that made it possible to apply shaders to DOM content (including SVG) and prototyped it in Chromium?

Unfortunately the Custom Filters spec was discontinued:

https://lists.webkit.org/pipermail/webkit-dev/2014-January/026098.html

I am not trying to do something new or strange here. I’m just trying to figure out how to do what was already done many years ago, but this time with Three.js, and then making it easy for people to reproduce the same results, pixel-perfectly, just like browsers did in 2012-2014.

2 Likes

From the above conversation, you can’t possibly know what I will do with it. :stuck_out_tongue:

You’re right; it’s just that if you plan to rasterize an SVG at some point, and the SVG is never going to change, then that’s the weird part to me.

SVG is not very light-weight in terms of rendering pipeline, especially the filters that I assume you’re talking about. So pre-rendering it offline seems like a sensible choice to me, coming at it from game-dev perspective. You avoid the need for SVG support inside your app altogether, and you get better performance probably. The only downside is that your image will take up more space and therefore more network traffic, since it’s rasterized. I hope I’m being clear on my logic here.

Did you know that when using CSS 3D transforms, the browser renders the CSS surface (including any SVG content) on the GPU? This shows that SVG can be rendered crisply as a texture on the GPU.

Ha, yes, I am. It’s pretty cool what modern browsers do these days.

Did you know that the Adobe team created a CSS Shader (aka Custom Filters) spec, that made it possible to apply shaders to DOM content (including SVG) and prototyped it in Chromium?

Yes, I remember that spec. I thought it was cool, but at the same time too unstructured. I thought, “I really want to have it, but it doesn’t feel like something normal developers could ever use.” I started working with shaders myself somewhat recently, perhaps 6 years ago? And I feel like that’s a whole different world. Expecting designers or your average JS dev to work with that just didn’t seem in line with how the W3C group operates. And here we are today.

I am not trying to do something new or strange here. I’m just trying to figure out how to do what was already done many years ago, but this time with Three.js, and then making it easy for people to reproduce the same results, pixel-perfectly, just like browsers did in 2012-2014.

Such is life. I remember when it was unclear whether SVG would become part of the HTML spec; now we take it for granted. Or how various pieces of CSS seemed too complex to be true, like when Adobe joined the efforts and donated a lot of work on drop-shadow stuff for text and boxes. And today it’s extremely common to see the tech being used.

My point is: standards go the way they go; not even working group members have enough control to force them in a specific direction. Like how Mozilla pushed for asm.js and got rejected, or how Google pushed for PNaCl and got rejected. Arguably both good ideas. Or SIMD in JS, or threads, or just about anything else. So many cool and powerful things get abandoned or rejected for various reasons, often political ones.

PS:
“weird” is just for me. I see it as me having a different perspective. I did not mean to say that you’re wrong, or that your perspective is not valid. I hope I didn’t make you feel like this is a hostile space, or to diminish your truth. I believe myself to be an ally when it comes to diversity of ideas.

1 Like

If you are only considering the use case of 3D game development, then perhaps what I’m trying to do is not as good as pre-generating rasterized textures. But you also mention it: SVG is smaller, which can have advantages in certain cases.

Think about this other case: devs use SVGs in web apps all the time to make an image appear on screen, then they never update (animate) the SVG. I estimate that a majority of websites use SVGs in this fashion, simply for static images that don’t change. I’ve implemented production apps quite a few times using SVG graphics because the approach was simply easier for making responsive websites.

Static images generated from SVG (at runtime) end up on many websites because SVG makes responsive design easy.

These SVG-based images actually can change, just not often and not necessarily for animation. Usually it happens when someone resizes their browser (f.e. makes the window bigger by maximizing it): in this case the SVG should re-render so that it fits the new size in the updated layout. We don’t know beforehand what space the SVG will fit inside of; the size is arbitrary and unknowable in advance. If we want the best quality of static graphics for a responsive application, then using pre-generated raster graphics is simply not an option (some end user somewhere is going to experience pixelated or blurry graphics, the very reason this discussion topic is open).

I’m not trying to do anything new: I’m simply trying to achieve what browsers and other programs have been doing for a long time now. I’m just trying to achieve it with WebGL (specifically with the aid of Three.js).

That’s easily the case with canvas.getContext('webgl'): it’s a complicated API, and a vast majority of people don’t use it directly, but it allowed web apps to do what they never could before.

The main idea here is that something like Custom Filters would have enabled people to achieve things they weren’t able to do before. A limited set of developers with enough skill would be able to build higher-level abstractions on top of it to make certain things easy for designers, less-experienced programmers, or lazy expert programmers who like to reuse existing abstractions and can dig into them as needed.

Because we don’t have Custom Filters today, there are things that are simply impossible to achieve with CSS rendering for HTML content, no matter how hard we try. There are things Custom Filters would have enabled that we can never achieve unless we implement them ourselves from scratch in WebGL, but that means we must leave our DOM behind and go down an entirely separate path.

By implementing the things that Custom Filters would have been able to do, but in WebGL, we can no longer use all the features of the DOM in this new system; we cannot take existing DOM and simply add effects to it. We would instead have to completely port our application from regular DOM/CSS to the pure-WebGL alternative, and that’s simply not feasible: there is a huge number of applications on the web written in HTML, and it would be much easier to add shader effects to those existing apps with relatively minimal effort compared to rewriting all those apps in WebGL.

'Tis true. But that won’t stop me: I want to achieve certain things with WebGL (the equivalent of CSS Custom Filters with pixel perfect SVG render that scales to any responsive design, etc). :slight_smile:

No worries, I was not offended, I just didn’t know why you thought it was “weird” to use SVGs in that fashion because I’m merely trying to reproduce equivalent features browsers already have, then add the power of WebGL (similar to what CSS Custom Filters would have enabled, and more).


Another thing to think about is this: web developers never got something like CSS Custom Filters, which would have allowed them to customize their HTML/CSS apps with graphics as good as native apps. If they had, perhaps the web landscape would be radically different. Maybe we’d already have many 3D games written purely in HTML/CSS, without even touching WebGL APIs. These HTML/CSS apps could perhaps even be rendered inside WebXR experiences, which today is not possible without limitations and a bad developer experience. We would probably even have the ability to inject WebGL content into the HTML/CSS space.

This is all nothing new. Qt has had this for a long time already: the mixing of declarative 2D content with 3D content in one simple language called QML (a markup language, like a JSON-style alternative to HTML, but with dependency-tracking reactivity built in out of the box in a fashion similar to Vue, Svelte, and SolidJS).

However, this dream landscape doesn’t exist yet on the web, and all good 3D experiences on the web today are written with custom WebGL from scratch, which is very inaccessible to search engines, to blind people, and to average web developers, because DOM/CSS content does not natively mix with WebGL. So people end up writing apps one way or the other: either they write an HTML/CSS app that they cannot add magnificent effects to, or that has highly limited and poorly-performing 3D rendering abilities due to the glitches that all major browsers today have with CSS transforms (for example), or they write a WebGL application that cannot reuse the abundant HTML/CSS libraries in existence (for example, how do you take an existing jQuery plugin and stick the result inside a 3D scene?).

This is not the ideal win-win situation (look outside of pure game development here) that I imagine. The ideal landscape would allow people to write HTML/CSS applications that can optionally be styled with 3D effects, be entirely 3D apps, or be converted into 3D apps after the fact, all with high compatibility with existing tooling (jQuery, React, Vue, HTML parsers, CSS languages, CSS-in-JS libs, etc), high compatibility with all existing web content (for example, imagine you open the source code of an old website and make the old web page render inside a WebXR experience in which parts of the website’s UI explode into 3D pieces that float around your VR space), and high accessibility (all 3D content is indexable by Google, a screen reader can read all of it, a 3D turn-based app can easily be played by a blind or deaf player, etc).


This is what I’m imagining, literally just trying to re-create existing concepts (tools like Qt already have this stuff if you look outside of the web). :smiley:

No. They will all suffer the pain. 3D requires a sacrifice. They cannot ignore the GPU as this complex, nerdy step-brother that lives in the basement, and never learn to talk to it. Trying to automate this would require a visual editor to bridge the coding gap, and that would be as big of a paradigm shift for standard application devs as learning WebGL itself. I’m sorry, but this is real development and real coding. Their use case with 3D may start out simple, but it will quickly grow to include complex interaction and GPU-limiting things that, without GPU knowledge, will be impossible to predict. They will learn WebGL because we live in the world of large data now. If they want visualizations that are performant on large data models, then they have no other choice. amCharts is having to use a version of “point clouds” now, and guess what? Nobody in the whole computer does point clouds better than our friend the GPU. The browser’s single CPU thread is not going to do what the GPU can do. Because it is the only way, they will come to drink from our fountain and learn the ways of the 3D tribes. Many keyboards will be broken. Many mice will die. But those that make it will be better men/women.

I don’t think you get what I mean yet. I know that writing shaders is complicated.

However, even the most advanced WebGL expert today cannot write WebGL that will grab a DOM element and cause it to bend like a sheet of curled paper. Custom Filters allowed this, and it is not possible otherwise.

With Custom Filters, a shader expert could do this:

const el = document.querySelector('.some-element')

el.style = `
  filter: /* shader code here */
`

and it would apply directly to the element (f.e. make the element shaped like a wrinkled paper and add shine to it).

The point is, it is impossible to do the same thing today.

Also it seems you are not thinking about the benefits of abstraction here:

I also mentioned that people would abstract these features to make it easy even for beginners (just like jQuery plugins allowed total beginners to place fully working components into a page, and now these things today can be shipped as Custom Elements).

What I mean about abstraction is that a total beginner who knows only basic HTML could import a CSS library, and all they would have to do is simply add a class name to their element to get 3D shader effects.

A beginner could write something like the following if Custom Filters existed:

<link rel="stylesheet" href="/path/to/paper-curl.css" />

<style>
  .my-curled-sheet {
    width: 400px; height: 600px;
    --paper-curl-radius: 100px; /* User customization of paper curl shader. */
  }
</style>

<div class="paper-curl-bottom-left my-curled-sheet">
  This content has a page curl at the bottom left.
  ...
</div>

Can you see how insanely powerful this is?

Writing HTML code like that, taking advantage of a CSS lib, is really easy for a beginner. But more importantly the 3D shader effects can be applicable to any regular DOM element by simply adding class names. Only a CSS library author would have to write complex CSS shader code, then everyone would be able to re-use it in any website that ever existed.

Someone could, for random example, easily write a browser extension that implements a magnifying glass for accessibility.

This type of thing is not possible today! The fact that Custom Filters could modify existing DOM content is powerful, and no one can do this today.

I’m trying to make an alternative way to do this with HTML markup, and this SVG topic is but a small part of this overall process. Having this available via HTML would make it highly accessible to both web developers and their end users who will get new and better experiences.

Back onto the topic, I think what we need is an SVGTexture class that has options like resolution.

This would make it easy for a developer, for example, to decide what resolution is appropriate for their SVG texture, comparing a low-resolution render against progressively higher-resolution ones.

[screenshots comparing the texture rendered at increasing resolutions, from low upward]

For purposes of a demo, there could be an interactive slider control (f.e. dat.GUI) that determines the resolution at which to render the texture when dragged.

As far as I can tell, this is not possible to achieve using <img> (Image or HTMLImageElement), and we must use <canvas> (HTMLCanvasElement) to draw at our desired resolution.

The SVGTexture class would be a class that we can point anyone to (as an alternative to SVGLoader; the user can evaluate which works for their use case).

This new SVGTexture class would end all the questions people ask about “blurry pixelated SVG textures”, without being limited in which SVG features are supported (unlike the limited SVGLoader class).

1 Like

This seems reasonable. Maybe you could create a PR?

As of PR 19940 I am blocked from the repo. Happy to leave a link to the implementation here though.

1 Like

Good luck brother. I suppose people will use it for stuff like that. No shortage of ThemeForest/WordPress sites or CSS3/HTML dashboards with crazy transitions. My hope is that these companies actually convert to WebGL. React Fiber and Angular are now using Three in some of their builds. I hope more people switch to 3D development. I see a ton of GL shadertoys that I never thought possible. I think the devs are getting smarter these days. If you follow the money, then 3D games and consoles are dominating the young and old alike.

If you follow past trends in framework choices, companies have usually leaned toward the one with the most widgets. You would think this is counterintuitive to real business needs, but companies will actually choose Angular for more choices in the Material lib over a framework that fits their need and is easier to maintain. I think the bridge between standard application and 3D game is where things are trending. Another example of form over function is the success of Kendo/jQuery/Dojo. A large company I worked for once went with Dojo because it supported more UI components out of the box. Seriously. Whether a lib/framework runs in all supported browsers is a major deciding factor too.

An issue for the adoption of WebGL libs is going to be the GPU differences per device. It’s expensive to write conditional code based on browser or hardware differences in a large company codebase. The versioning becomes expensive and a nightmare to maintain. Given that, will companies that do go for WebGL, for performance reasons, choose an engine vs. an open-source rendering API? That is the question. I think a quicker path to companies adopting GL would be a full rewrite of an existing chart lib like D3. Companies want to see large data visualizations. Devs think writing data for an x,y axis is too hard in most cases lol.

I’ve worked in UI for a long time. I’ve worked at a lot of Fortune 100, 300, and 500 companies, and a few startups. I’ve been tasked with creating custom visualizations for everything from organizational charts for orgs with 150k+ employees, to globes, to choropleths of the country, to financial dashboards, etc. I don’t get a lot of requests for page curls, but I get a lot of requests to make large data sets work with a chart visualization. Many times a company will try to build an entire application around a chart-centric view of their data. They fall in love with one way of seeing an important set of data and just run with it, only to find out that it won’t perform with a larger data set, or that they need a very specialized developer to get the job done. On the latter point, your idea makes sense to me. Companies don’t want to rely on too many specialized devs on their teams. They are expensive, hard to find, and hard to replace. So to that point, yes, they will want canned transitions that are easy for standard devs to apply to regular content. But large data isn’t a trend. It’s a growing fact of life, and displaying it in meaningful forms is becoming hard to do on the CPU/JS thread.

Again, I think it’s a great idea for designers and lower-level devs. But I hope devs go with the better approach and learn WebGL. Having a WebGL API that works across most devices is going to be a bigger win for adoption IMHO.

Yessss!

This is also what would have happened already with CSS Custom Filters and whatever new APIs would have come later due to it.

That stuff would have been rad with CSS Custom Filters. For example placing the contents of an HTML <table> element into an interesting 3D rendition with lighting :sparkles:, while staying semantic and accessible.

I feel you though: the dataviz I do at work, and the page curls I do on my free time. :grin:

Mmmm hmmmm.

It’s just not feasible for everyone to know the lowest levels, let alone all levels. Some people even have no interest in it, or will never get to that level (maybe they’re a designer who found themselves dabbling with sprinkles of HTML/CSS/JS out of necessity, or they’re a data scientist who just needs something quick so they can get into “more important things”, etc)

Thanks for the encouragement. LUME will get there eventually. :grin:

Let’s not even start on tables. No UI dev doesn’t cringe when they hear the words “smart data grid”. It’s never just a table. The 3D version of that would be awesome, but the ability to handle the datagrid in WebGL would be called the “Datagrid Engine”. A cubic grid would be cool as hell, for instance: representing a grid per side, but all still sortable/filterable as a single multidimensional array. Recreating common data widgets is needed, and WebGL could do a bang-up job of it for sure. But making it work everywhere is still going to be a challenge.

Heh, yeah. It’s just a table until there’s too many bits of data, then they need more than just HTML.

Challenge accepted! :smiley:

@trusktr Did you come up with any more options for solving this problem? I’ve just run into this when I loaded a Lottie (via Three.js’s LottieLoader)… it looks great on screens with a devicePixelRatio of 2, but on screens with a devicePixelRatio of 1 it looks jagged and in need of anti-aliasing.

Hi @ericdjohnson I haven’t published a utility class like SVGTexture yet, but in the meantime here’s something you can adapt:

import { CanvasTexture } from 'three'

async function svgTexture(img, canvas, width, height) {
	const ctx = canvas.getContext('2d')

	await imgLoaded(img)

	// Note: for a crisp result, pass width and height already multiplied by devicePixelRatio.
	canvas.width = width
	canvas.height = height
	ctx.drawImage(img, 0, 0, width, height)

	const tex = new CanvasTexture(canvas)

	// If we remove the canvas before the above creation of CanvasTexture, the texture will not render. hmmmm.
	canvas.remove()
	img.remove()

	return tex
}

function imgLoaded(img) {
	return new Promise(res => {
		if (img.complete) res()
		else img.onload = res
	})
}

and then

	const img = document.querySelector('.my-svg')
	const canvas = document.querySelector('.my-canvas')

	const tex = await svgTexture(img, canvas, 200, 300)

	mesh.material.map = tex
	mesh.material.needsUpdate = true

1 Like

An over-engineered solution (aka rabbit hole) would be to use signed distance fields (SDF), or multi-channel signed distance fields (MSDF). It’s the same method used to render font glyphs with crisp edges, and it could be applied to SVG as well.

This 6-minute video explains the subject very well.

If you want to go down this path, the JS library svg-path-sdf could be a good starting point.

Or there’s msdfgen in C++, which can be integrated via WASM and web workers (rabbit-hole alert intensifies).
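To give a flavor of the technique (a sketch in JavaScript purely for illustration; in a real renderer this logic lives in the fragment shader): an MSDF shader reconstructs the signed distance as the median of the three color channels, then thresholds it with a smoothstep to get an anti-aliased edge.

```javascript
// MSDF decode: the signed distance is the median of the R, G, B channels.
function median3(r, g, b) {
	return Math.max(Math.min(r, g), Math.min(Math.max(r, g), b))
}

// GLSL-style smoothstep: cubic ramp from 0 to 1 between edge0 and edge1.
function smoothstep(edge0, edge1, x) {
	const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1)
	return t * t * (3 - 2 * t)
}

// Coverage (alpha) for one sampled texel; w is a screen-space filter width.
function msdfAlpha(r, g, b, w = 0.1) {
	return smoothstep(0.5 - w, 0.5 + w, median3(r, g, b))
}
```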

2 Likes

@Fennec this is amazing, but I definitely smell rabbits, lol! I’ll keep it in my back pocket.

2 Likes