How can I use `CanvasTexture` in WebGPU?

There is no example of this on the official website.

I tried using the original WebGL method, but nothing is displayed. Can anyone provide a rough guide on how to do this? Thanks a lot.

Try this code:

import * as THREE from 'three';
import WebGPURenderer from 'three/addons/renderers/webgpu/WebGPURenderer.js';

let renderer, scene, camera;

init();

function init() {

  scene = new THREE.Scene();

  camera = new THREE.PerspectiveCamera( 40, window.innerWidth / window.innerHeight, 0.1, 100 );
  camera.position.set( 0, 0, 4 );

  // draw into a small 2D canvas and use it as the texture source
  const canvas = document.createElement( 'canvas' );
  canvas.width = 128;
  canvas.height = 128;

  const context = canvas.getContext( '2d' );

  context.fillStyle = '#00ff00';
  context.fillRect( 0, 0, canvas.width, canvas.height );

  // the canvas holds sRGB color data
  const texture = new THREE.CanvasTexture( canvas );
  texture.colorSpace = THREE.SRGBColorSpace;

  const geometry = new THREE.PlaneGeometry();
  const material = new THREE.MeshBasicMaterial( { map: texture } );

  const mesh = new THREE.Mesh( geometry, material );
  scene.add( mesh );

  renderer = new WebGPURenderer( { antialias: true } );
  renderer.setPixelRatio( window.devicePixelRatio );
  renderer.setSize( window.innerWidth, window.innerHeight );
  renderer.setAnimationLoop( animate );
  document.body.appendChild( renderer.domElement );

}

function animate() {

  renderer.render( scene, camera );

}
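
One detail worth adding: CanvasTexture uploads the canvas contents when the texture is first used. If you draw into the canvas again later, flag the texture for re-upload, otherwise the change will not appear:

// redraw the 2D canvas at some later point...
context.fillStyle = '#ff0000';
context.fillRect( 0, 0, canvas.width, canvas.height );

// ...and tell three.js to re-upload it to the GPU
texture.needsUpdate = true;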

Live demo: three.js dev template - module - JSFiddle

You can just replace WebGPURenderer with WebGLRenderer to reproduce the same result.
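
Concretely, only the renderer line changes; WebGLRenderer ships with the core three module, so the addon import is not needed for it:

// drop-in replacement for the WebGPURenderer line above
renderer = new THREE.WebGLRenderer( { antialias: true } );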

Thank you very much, it runs now.

However, I have another question. When creating a UI in WebGL, we could previously use a dedicated UI scene and an OrthographicCamera as a separate UI layer, adding all UI meshes to that scene.
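
A rough sketch of what I mean (uiScene, uiCamera and uiMesh are placeholder names, not my exact code):

const uiScene = new THREE.Scene();

const aspect = window.innerWidth / window.innerHeight;
const uiCamera = new THREE.OrthographicCamera( - aspect, aspect, 1, - 1, 0.1, 10 );
uiCamera.position.z = 1;

// UI meshes live only in the UI scene
const uiMesh = new THREE.Mesh(
  new THREE.PlaneGeometry( 0.5, 0.2 ),
  new THREE.MeshBasicMaterial( { color: 0x222222 } )
);
uiScene.add( uiMesh );

function animate() {

  renderer.autoClear = false;

  renderer.clear();
  renderer.render( scene, camera ); // 3D content first

  renderer.clearDepth();
  renderer.render( uiScene, uiCamera ); // UI layer on top

}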

Do you have any suggestions for implementing UI in WebGPU, apart from using HTML?

Another problem is that in version 165, long text in the canvas does not wrap automatically.
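
As far as I know, the 2D-canvas fillText call has no built-in wrapping, so I assume the workaround is a manual helper based on measureText; a rough sketch, splitting on single spaces:

// rough sketch: break the text into lines that fit maxWidth, then draw each line
function fillWrappedText( context, text, x, y, maxWidth, lineHeight ) {

  const words = text.split( ' ' );
  let line = '';

  for ( const word of words ) {

    const test = line ? line + ' ' + word : word;

    if ( line && context.measureText( test ).width > maxWidth ) {

      context.fillText( line, x, y );
      line = word;
      y += lineHeight;

    } else {

      line = test;

    }

  }

  if ( line ) context.fillText( line, x, y );

}

It would replace a plain fillText call before the CanvasTexture is created (with texture.needsUpdate = true if the canvas is redrawn afterwards).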

Not sure I understand the question. Why can’t you use the same approach as with WebGLRenderer? I see no difference when running the fiddle with WebGLRenderer or WebGPURenderer.

Because some programs do not display correctly when converted from WebGL to WebGPU, such as this example:

async function animate() {

  await renderer.clearAsync();

  // postProcessing, UIScene and UICamera are set up elsewhere
  //await renderer.renderAsync(scene, camera);
  await postProcessing.renderAsync();

  await renderer.clearDepthAsync();
  //await renderer.clearStencilAsync();
  await renderer.renderAsync(UIScene, UICamera);

}

For example, in the animate function, if I continue using
await renderer.renderAsync(scene, camera),
both planes display correctly.

However, when I switch to
await postProcessing.renderAsync(),
one of the meshes fails to display.

I’ve been stuck on this issue for almost a day, which is why I’m seeking help.