My app needs to apply a texture composed of a base image and one or more PNG files with transparency. I load the base image into a canvas, composite the transparent PNGs on top of it, and then use the result as the texture for my object.
The naive approach I came up with in that Fiddle seems to work, but I feel there is bound to be a more elegant/performant solution out there, especially since the texture changes will be driven by user input, so they will happen many times over the lifetime of the app.
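For reference, the canvas-based approach described above can be reduced to a small helper along these lines (a sketch; `compositeLayers`, the layer names, and the texture size are my own, and the `THREE.CanvasTexture` wiring is shown in comments because it needs a browser):

```javascript
// Draw the base image, then alpha-blend each transparent PNG over it.
// Works with any drawable source (HTMLImageElement, ImageBitmap, canvas).
function compositeLayers(ctx, base, overlays) {
  const { width, height } = ctx.canvas;
  ctx.clearRect(0, 0, width, height);
  ctx.drawImage(base, 0, 0, width, height);       // opaque base layer
  for (const layer of overlays) {
    ctx.drawImage(layer, 0, 0, width, height);    // 2D "source-over" blend
  }
}

// Browser-only usage (assumed names):
// const canvas = document.createElement('canvas');
// canvas.width = canvas.height = 1024;
// compositeLayers(canvas.getContext('2d'), baseImage, [decalA, decalB]);
// const texture = new THREE.CanvasTexture(canvas);
// texture.needsUpdate = true; // set again after every recomposite
```

Each recomposite redraws every layer on the CPU and re-uploads the canvas to the GPU, which is the cost the answer below tries to avoid.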
You could write a custom shader that receives multiple textures and creates the composite in one pass, rendering into a render target that you can reuse instead of creating a new canvas texture every time. This is likely more performant than multiple 2D draw calls.
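A minimal sketch of such a shader, assuming three.js and a fixed cap on the number of decal layers (`MAX_DECALS`, `baseMap`, `decals`, and `decalCount` are names I made up; the render-target wiring is shown in comments because it needs a WebGL context):

```javascript
// Upper bound on decal layers, baked into the shader at build time.
const MAX_DECALS = 4;

const vertexShader = /* glsl */ `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const fragmentShader = /* glsl */ `
  uniform sampler2D baseMap;
  uniform sampler2D decals[${MAX_DECALS}];
  uniform int decalCount;
  varying vec2 vUv;
  void main() {
    vec4 color = texture2D(baseMap, vUv);
    for (int i = 0; i < ${MAX_DECALS}; i++) {
      if (i >= decalCount) break;
      vec4 d = texture2D(decals[i], vUv);
      color.rgb = mix(color.rgb, d.rgb, d.a); // standard "over" blend
    }
    gl_FragColor = color;
  }
`;

// Browser-only wiring (sketch):
// const target = new THREE.WebGLRenderTarget(1024, 1024);
// const material = new THREE.ShaderMaterial({
//   uniforms: {
//     baseMap:    { value: baseTexture },
//     decals:     { value: decalTextures },
//     decalCount: { value: decalTextures.length },
//   },
//   vertexShader,
//   fragmentShader,
// });
// Render a full-screen quad with this material into `target`
// (e.g. via an orthographic camera), then use `target.texture`
// as the object's map and re-render only when the user changes a layer.
```

The composite never leaves the GPU, so updating it on user input is just another render pass into the same reusable target rather than a CPU redraw plus a texture upload.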