1. Does three.js currently fully support the WebGPU API?
2. When will documentation for integrating three.js with WebGPU be released?
3. What are the plans and arrangements?
Please reply, I'm waiting online. Thank you!
The API is not yet fully supported. But that is also true for WebGL regarding WebGLRenderer.
Probably better to ask whether a specific feature is supported or not.
I guess you are referring to the documentation, right? There is no detailed roadmap for this, so it's not possible to provide a date.
The current plan is to enhance the renderer and the new material system such that typical three.js scene graph definitions work with WebGPURenderer like they do with WebGLRenderer. When this is done and the implementation stabilizes, documentation will follow.
Hi @Mugen87,
How can I know whether my three.js implementation will work with WebGPU?
I’m not aware of any functionalities specific to WebGLRenderer that we are using in the project, so I’m not sure how to know what features to check for support.
Thanks!
We try to add support for all features in the core except ShaderMaterial, RawShaderMaterial and usage of onBeforeCompile(), since these contain GLSL-specific code.
Is it possible to implement ShaderMaterial and RawShaderMaterial so that they support user-defined custom shader files based on the WebGPU or Vulkan API?
No, both classes will not be supported with WebGPURenderer. The node material will provide different (better) ways for implementing custom materials. Read How to utilize webgpu (is nodes the best option?) - #2 by Mugen87 for more information.
To clarify, you'll still be able to write raw shaders, but it will be through the node system. Here are some relevant code snippets.
Here is a JavaScript-based way to write shaders, where the shader code is defined in JavaScript (a single source that compiles to both GLSL and WGSL):
// Assumed imports (the exact path may vary by release):
// import { tslFn, texture, vec3 } from 'three/nodes';

// Custom ShaderNode ( desaturate filter )
const desaturateShaderNode = tslFn( ( input ) => {
	return vec3( 0.299, 0.587, 0.114 ).dot( input.color.xyz );
} );

// ...

// Custom ShaderNode ( no inputs ) > Approach 2
// ( uvTexture is a THREE.Texture defined elsewhere )
const desaturateNoInputsShaderNode = tslFn( () => {
	return vec3( 0.299, 0.587, 0.114 ).dot( texture( uvTexture ).xyz );
} );
“tsl” stands for Three.js Shading Language (TSL).
And here is a WGSL-specific way to write WGSL shader pieces:
// Custom WGSL ( desaturate filter )
const desaturateWGSLNode = wgslFn( `
	fn desaturate( color: vec3<f32> ) -> vec3<f32> {
		let lum = vec3<f32>( 0.299, 0.587, 0.114 );
		return vec3<f32>( dot( lum, color ) );
	}
` );

// ...

// Custom WGSL ( get texture from keywords )
const getWGSLTextureSample = wgslFn( `
	fn getWGSLTextureSample( tex: texture_2d<f32>, tex_sampler: sampler, uv: vec2<f32> ) -> vec4<f32> {
		return textureSample( tex, tex_sampler, uv ) * vec4<f32>( 0.0, 1.0, 0.0, 1.0 );
	}
` );
You then compose the nodes together.
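For example, assigning the nodes above to a material could look like the following (a sketch modeled on the official webgpu examples; the sampler() node and the exact call signatures are assumptions that may vary by release):

// Hypothetical usage sketch: plug one of the nodes above into a node material
const material = new MeshBasicNodeMaterial();

// TSL node: named inputs map to the function's parameters
material.colorNode = desaturateShaderNode( { color: texture( uvTexture ) } );

// ... or the WGSL node, whose named inputs map to the WGSL function's arguments:
// material.colorNode = getWGSLTextureSample( { tex: texture( uvTexture ), tex_sampler: sampler( uvTexture ), uv: uv() } );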
Having only one way to do it is nice: it means the future is all node-based and composable, making it easier to construct the shaders we want while avoiding hacks like string injection.
Anyone know if readRenderTargetPixels will be supported for WebGPU?
Need it for my GPU picking…
It seems that WebGPURenderer's support for EffectComposer and OutlinePass is not very good; many errors are reported.
I don’t think THREE.WebGPURenderer supports post-processing yet. Keep in mind that every shader effect must be rewritten in a different shading language, and for a different graphics API … this is a large effort.
I’d recommend looking at the official webgpu examples, to get a sense for what has been implemented in the new WebGPU API so far.
Post-processing works with WebGPU as of r156. I already use this extensively to control textures that I created with compute shaders. All you have to do is use a MeshBasicNodeMaterial and assign a texture to its colorNode. Then you can pass the material to a ShaderPass as usual, and pass that to the composer via addPass(). At the moment I have 9 post-processing shaders.
P.S. No warnings or errors in the console.
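Sketched out, the flow described above might look like this (a rough sketch, not verified code: the import paths, the computedTexture produced by a compute shader, and passing a node material directly to ShaderPass are all assumptions based on the description):

// Rough sketch of the described flow ( assumptions noted above )
import { MeshBasicNodeMaterial, texture } from 'three/nodes';
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { ShaderPass } from 'three/addons/postprocessing/ShaderPass.js';

const passMaterial = new MeshBasicNodeMaterial();
passMaterial.colorNode = texture( computedTexture ); // texture written by a compute shader

const composer = new EffectComposer( renderer );
composer.addPass( new ShaderPass( passMaterial ) ); // further passes added as needed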
Here is a nice post-processing result with a WGSL compute shader. I used post-processing to check whether it looked the way I imagined. This is a butterfly texture for IFFT calculations that I made in WGSL code.
You can create custom WGSL shaders using the wgslFn node and then assign them to a colorNode, positionNode or computeNode, depending on what the shader is intended for.
Texture sampling ( textureSample ), for example, only works in the colorNode: interpolated fragments between the vertices only exist in the fragment shader. In the vertex shader there is nothing between the vertices, so only textureLoad works in the positionNode. These are subtleties of WGSL that you have to consider with or without the node system.
The positionNode is, in a way, a kind of vertex shader node, and the colorNode is a kind of fragment shader node.
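To illustrate (a hypothetical sketch; heightMap and the displacement logic are invented for the example, which uses textureLoad because the vertex stage cannot interpolate with textureSample):

// Hypothetical sketch: vertex displacement in the positionNode via textureLoad
const displaceWGSL = wgslFn( `
	fn displace( tex: texture_2d<f32>, uv: vec2<f32>, pos: vec3<f32> ) -> vec3<f32> {
		let dim = textureDimensions( tex, 0 );
		let texel = vec2<i32>( uv * vec2<f32>( dim ) );
		let height = textureLoad( tex, texel, 0 ).r; // no interpolation in the vertex stage
		return pos + vec3<f32>( 0.0, height, 0.0 );
	}
` );

material.positionNode = displaceWGSL( { tex: texture( heightMap ), uv: uv(), pos: positionLocal } );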
One of the big advantages of the node system is that it takes over the management of all the bindings and locations in WGSL shaders. I really like that, because with larger shaders this is just annoying, and with several bigger shaders in a larger project it can quickly become confusing and error-prone if you have to take care of all the binding management yourself.
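For comparison, this is the kind of WGSL boilerplate the node system generates and manages behind the scenes (hand-written illustration; the names are made up):

// Hand-written WGSL needs explicit binding declarations like these:
@group( 0 ) @binding( 0 ) var myTexture : texture_2d<f32>;
@group( 0 ) @binding( 1 ) var mySampler : sampler;
// With wgslFn, parameters are bound automatically, so no @group/@binding bookkeeping is needed.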
At first I struggled to understand it, but now I love it and find it a big improvement.