My app needs to apply a texture composed of a base image and one or more PNG files with transparency. I load the base image into a canvas, composite the transparent PNGs onto it, and then use the result as a texture for my object.
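For context, the canvas compositing I'm doing is roughly the following (a trimmed sketch, not the full fiddle; `baseImage`, `overlayImages`, and `mesh` are placeholder names for my actual objects):

```js
// Composite the base image and the transparent PNGs on a 2D canvas.
const canvas = document.createElement('canvas');
canvas.width = baseImage.width;   // baseImage: an already-loaded HTMLImageElement (placeholder name)
canvas.height = baseImage.height;
const ctx = canvas.getContext('2d');

// Draw the opaque base first, then each transparent PNG on top of it.
ctx.drawImage(baseImage, 0, 0);
for (const overlay of overlayImages) { // overlayImages: array of loaded PNG images (placeholder name)
  ctx.drawImage(overlay, 0, 0);        // default source-over blending respects the PNG alpha
}

// Use the composited canvas as the object's texture.
const texture = new THREE.CanvasTexture(canvas);
mesh.material.map = texture;
mesh.material.needsUpdate = true;
```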
The naive approach I came up with in that fiddle seems to work, but I feel like there is bound to be a more elegant/performant solution out there, especially since the texture changes will be driven by user input and so will happen many times over the lifetime of the app.
What is a better solution?
You could write a custom shader that receives multiple textures and creates the composite in one pass, rendering to a render target that you can reuse instead of creating a new canvas texture every time. This might be more performant than doing multiple 2D draws.
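A minimal sketch of that idea (the uniform names, render-target size, and full-screen-quad setup here are only illustrative, not taken from the fiddle below):

```js
// One-time setup: a render target that is reused for every composite.
const renderTarget = new THREE.WebGLRenderTarget(1024, 1024); // size is illustrative

const compositeMaterial = new THREE.ShaderMaterial({
  uniforms: {
    baseMap:    { value: baseTexture },    // opaque base texture (placeholder name)
    overlayMap: { value: overlayTexture }, // PNG overlay with alpha (placeholder name)
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0); // full-screen quad; camera transform ignored
    }
  `,
  fragmentShader: `
    uniform sampler2D baseMap;
    uniform sampler2D overlayMap;
    varying vec2 vUv;
    void main() {
      vec4 base = texture2D(baseMap, vUv);
      vec4 overlay = texture2D(overlayMap, vUv);
      // standard "over" blend of the transparent overlay onto the opaque base
      gl_FragColor = vec4(mix(base.rgb, overlay.rgb, overlay.a), 1.0);
    }
  `,
});

// Full-screen quad in its own scene; the camera only exists to satisfy render().
const quadScene = new THREE.Scene();
quadScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), compositeMaterial));
const quadCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

// Assign the target's texture once; the texture object itself never changes.
mesh.material.map = renderTarget.texture;
mesh.material.needsUpdate = true;

// Call whenever the user changes an overlay: re-render into the same target.
function updateCompositeTexture() {
  renderer.setRenderTarget(renderTarget);
  renderer.render(quadScene, quadCamera);
  renderer.setRenderTarget(null);
}
```

The point is that the render target is allocated once and only re-rendered when the user changes something, so no new canvases or textures are created per change.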
Interesting. I don’t know how to do some of the pieces of that approach, but I bet I can figure it out. Thank you.
Like so, for example (fiddle link):
Fantastic - thank you so much - that looks easy to follow.
No problem. I had put the wrong blending formula in the shader code, though; it's fixed now.
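(For anyone reading later: the corrected formula isn't quoted in the thread, but the conventional "over" blend for a transparent overlay on top of an opaque base is the one below, which is what the `mix()` call in the sketch above computes.)

```js
// In the fragment shader (GLSL):
// gl_FragColor.rgb = overlay.rgb * overlay.a + base.rgb * (1.0 - overlay.a);
```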