Help Porting ShaderToy to Three.js

Hello Guys.

I am trying to port the following ShaderToy shader to Three.js: https://www.shadertoy.com/view/4sG3WV but I am failing miserably…

The problem is that I'm unsure of the sequence in which ShaderToy renders the buffers (I'm assuming A, B, Image), and also that when I render each buffer separately I get completely different results. I believe it has something to do with vec2 uv = fragCoord.xy / iResolution.xy; as I'm translating it to vec2 uv = vUv;

This is a sample of the code:

import { Camera, LinearFilter, Mesh, OrthographicCamera, PlaneBufferGeometry, RGBAFormat, Scene, TextureLoader, Vector3, Vector4, WebGLRenderer, WebGLRenderTarget } from 'three'
import { BufferAShader } from './shaders/BufferAShader'
import { BufferBShader } from './shaders/BufferBShader'
import { BufferImageShader } from './shaders/BufferImageShader'

class App {

    private width = window.innerWidth
    private height = window.innerHeight

    private renderer = new WebGLRenderer()
    private loader = new TextureLoader()
    private mousePosition = new Vector4()
    private orthoCamera = new OrthographicCamera(0, this.width, 0, -this.height, 0, 10000)
    private sceneA: Scene
    private sceneB: Scene
    private sceneC: Scene
    private counter = 0

    constructor() {

        this.renderer.setSize(this.width, this.height)
        document.body.appendChild(this.renderer.domElement)

        this.renderer.domElement.addEventListener('mousedown', () => {
            this.mousePosition.setZ(1)
            this.counter = 0
        })

        this.renderer.domElement.addEventListener('mouseup', () => {
            this.mousePosition.setZ(0)
        })

        this.renderer.domElement.addEventListener('mousemove', event => {

            const x = (event.clientX / this.width) * 2 - 1
            const y = -(event.clientY / this.height) * 2 + 1

            this.mousePosition.setX(x)
            this.mousePosition.setY(y)

        })

    }

    private targetA = new BufferManager(this.renderer, { width: this.width, height: this.height })
    private targetB = new BufferManager(this.renderer, { width: this.width, height: this.height })
    private targetC = new BufferManager(this.renderer, { width: this.width, height: this.height })

    private bufferA: BufferAShader
    private bufferB: BufferBShader
    private bufferImage: BufferImageShader

    public start() {

        const resolution = new Vector3(this.width, this.height, window.devicePixelRatio)
        const channel0 = this.loader.load(require('./images/wallpaper.jpg'))

        /**
         * Target 1
         */
        this.bufferA = new BufferAShader({
            iFrame: { value: 0 },
            iResolution: { value: resolution },
            iMouse: { value: this.mousePosition },
            iChannel0: { value: this.targetA.readBuffer.texture },
            iChannel1: { value: this.targetB.readBuffer.texture }
        })

        this.sceneA = new Scene()
        const meshA = new Mesh(new PlaneBufferGeometry(this.width, this.height), this.bufferA.material)
        this.sceneA.add(meshA)

        /**
         * Target 2
         */
        this.bufferB = new BufferBShader({
            iFrame: { value: 0 },
            iResolution: { value: resolution },
            iMouse: { value: this.mousePosition },
            iChannel0: { value: this.targetB.readBuffer.texture }
        })

        this.sceneB = new Scene()
        const meshB = new Mesh(new PlaneBufferGeometry(this.width, this.height), this.bufferB.material)
        this.sceneB.add(meshB)

        this.bufferImage = new BufferImageShader({
            iResolution: { value: resolution },
            iMouse: { value: this.mousePosition },
            iChannel0: { value: channel0 },
            iChannel1: { value: null }
        })

        this.sceneC = new Scene()
        const meshC = new Mesh(new PlaneBufferGeometry(this.width, this.height), this.bufferImage.material)
        this.sceneC.add(meshC)

        meshA.frustumCulled = false
        meshB.frustumCulled = false
        meshC.frustumCulled = false
        meshA.position.set(this.width / 2, -this.height / 2, 0)
        meshB.position.set(this.width / 2, -this.height / 2, 0)
        meshC.position.set(this.width / 2, -this.height / 2, 0)

        this.animate()

    }

    private animate() {
        requestAnimationFrame(() => {

            this.bufferA.uniforms[ 'iFrame' ].value = this.counter++

            // Buffer A pass: reads its own previous output and Buffer B's
            this.bufferA.uniforms[ 'iChannel0' ].value = this.targetA.readBuffer.texture
            this.bufferA.uniforms[ 'iChannel1' ].value = this.targetB.readBuffer.texture
            this.targetA.render(this.sceneA, this.orthoCamera)

            // Buffer B pass: reads the output of Buffer A
            this.bufferB.uniforms[ 'iChannel0' ].value = this.targetA.readBuffer.texture
            this.targetB.render(this.sceneB, this.orthoCamera)

            // Image pass: rendered straight to the screen
            this.bufferImage.uniforms[ 'iChannel1' ].value = this.targetA.readBuffer.texture
            this.targetC.render(this.sceneC, this.orthoCamera, true)

            this.animate()

        })

    }

}

class BufferManager {

    public readBuffer: WebGLRenderTarget
    public writeBuffer: WebGLRenderTarget

    constructor(private renderer: WebGLRenderer, { width, height }) {

        this.readBuffer = new WebGLRenderTarget(width, height, {
            minFilter: LinearFilter,
            magFilter: LinearFilter,
            format: RGBAFormat,
            stencilBuffer: false
        })

        this.writeBuffer = this.readBuffer.clone()

    }

    public swap() {
        const temp = this.readBuffer
        this.readBuffer = this.writeBuffer
        this.writeBuffer = temp
    }

    public render(scene: Scene, camera: Camera, toScreen: boolean = false) {
        if (toScreen) {
            this.renderer.render(scene, camera)
        } else {
            this.renderer.render(scene, camera, this.writeBuffer, true)
        }
        this.swap()
    }

}

(new App()).start()

The fragment shader is pretty much a copy-and-paste from ShaderToy, replacing the relevant parts like gl_FragColor, gl_FragCoord, main() etc… and I'm using this for the vertex shader:

varying vec2 vUv;

void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
}
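
For reference, the BufferAShader / BufferBShader / BufferImageShader classes used above essentially just wrap these shaders and the ShaderToy-style uniforms in a ShaderMaterial. A hypothetical, stripped-down sketch (not the actual classes, which are in the link below):

import { ShaderMaterial, Vector3, Vector4 } from 'three'

// Hypothetical buffer-shader wrapper: the ported ShaderToy code and the
// ShaderToy-style uniforms are fed into a plain ShaderMaterial.
const bufferMaterial = new ShaderMaterial({
    uniforms: {
        iFrame: { value: 0 },
        iResolution: { value: new Vector3(window.innerWidth, window.innerHeight, window.devicePixelRatio) },
        iMouse: { value: new Vector4() },
        iChannel0: { value: null },
        iChannel1: { value: null }
    },
    vertexShader: `
        varying vec2 vUv;
        void main() {
            vUv = uv;
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        uniform sampler2D iChannel0;
        varying vec2 vUv;
        void main() {
            // ShaderToy's fragCoord.xy / iResolution.xy becomes vUv here
            gl_FragColor = texture2D(iChannel0, vUv);
        }
    `
})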

I have uploaded the whole source code here: https://we.tl/JecAgsGlPA

(As I am a new member I am not able to upload attachments, so I hosted the source on WeTransfer.)

That's the way ShaderToy computes uv coordinates in the fragment shader. three.js uses the uv attribute of the underlying plane geometry instead (passed on by the vertex shader). The result is effectively the same. You can verify this if you visualize the uv coordinates in three.js like so:

gl_FragColor = vec4( vUv.x, vUv.y, 0.0, 1.0 );

And in ShaderToy via:

fragColor = vec4( uv.x, uv.y, 0.0, 1.0 );

You should see the following in both cases:

Yes, I see that gradient of colors. To illustrate better, this is what I am seeing versus what I should be seeing:

[screenshots: current result vs. expected result]

final-buffer.frag

uniform sampler2D iChannel0; // this is an image
uniform sampler2D iChannel1; // this is the output of buffer-a.frag
varying vec2 vUv;

void main() {
    vec2 uv = vUv;
    vec2 a = texture2D(iChannel1,uv).xy;
    gl_FragColor = vec4(texture2D(iChannel0,a).rgb,1.0);
}

buffer-a.frag

varying vec2 vUv;

void main(){
    gl_FragColor = vec4(vUv,0.0,1.0);
}

The same code works on ShaderToy: https://www.shadertoy.com/view/MdGcWd

Any chance you could provide a live demo?

Here you are: https://codepen.io/RafaelMilewski/pen/NYZpMr

In this demo the UV coordinates are float values in the range [0, 1]. If you render these values into a 32-bit RGBA buffer (a render target with format RGBA and type UnsignedByte), you lose precision since you can only store 8 bits (256 possible integer values) per color channel. This loss becomes visible when you use the sampled uv coordinates for a texture fetch.

You can fix the issue by adding the parameter type: THREE.FloatType when creating the render target. The underlying texture is then a float texture that can hold your uv coordinates and retain their precision.
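
Applied to the BufferManager above, that looks like this (a minimal sketch):

import { FloatType, LinearFilter, RGBAFormat, WebGLRenderTarget } from 'three'

// A float render target: the uv values written by buffer-a.frag keep their
// precision instead of being quantized to 8 bits per channel.
const target = new WebGLRenderTarget(window.innerWidth, window.innerHeight, {
    minFilter: LinearFilter,
    magFilter: LinearFilter,
    format: RGBAFormat,
    type: FloatType,
    stencilBuffer: false
})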


Well, that indeed would take someone with low-level WebGL knowledge to figure out! Many thanks! :+1:

Now my port is 99% complete; there is only one issue left though… ShaderToy's iMouse, as described in the help section, should be a vec4 with Z indicating whether there is a click… but it does not specify whether the values are normalized or not… and then the problem is that the Y is "reversed"…

I have tried the following:

// this approach doesn't work at all

this.renderer.domElement.addEventListener('mousemove', event => {

    let x = (event.clientX / this.width) * 2 - 1
    let y = -(event.clientY / this.height) * 2 + 1

    this.mousePosition.setX(x)
    this.mousePosition.setY(y)

})

And the following:

// this approach does work but Y is inverted
this.renderer.domElement.addEventListener('mousemove', event => {
  this.mousePosition.setX(event.clientX) // x works perfectly
  this.mousePosition.setY(event.clientY) // y is inverted....
  // this.mousePosition.setY(-event.clientY) // if I invert this it doesn't work....
})

Here is another code pen to demonstrate:

Thanks for the help! :+1:


That should work:

this.mousePosition.setX(event.clientX)
this.mousePosition.setY(this.height - event.clientY)
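
Putting it together with your existing listeners (a sketch reusing the field names from your code above; z stays the simple click flag from your original handler):

// ShaderToy's iMouse is in pixel coordinates with the origin at the bottom-left,
// so the DOM's top-left clientY has to be flipped against the canvas height.
this.renderer.domElement.addEventListener('mousemove', event => {
    this.mousePosition.setX(event.clientX)
    this.mousePosition.setY(this.height - event.clientY)
})

this.renderer.domElement.addEventListener('mousedown', () => {
    this.mousePosition.setZ(1) // click flag, as in your original code
})

this.renderer.domElement.addEventListener('mouseup', () => {
    this.mousePosition.setZ(0)
})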

After overthinking it too much I ended up missing the most obvious solution :smile: Thanks again!

Just to summarize this for anyone looking into porting complex shaders from ShaderToy to three.js:

Here is the original shader:

And this is the equivalent in three.js:

Cheers :slight_smile:


Thank you. This is awesome.

Just in case someone needs the full code for iMouse.xyzw, here it is:

const iMouse = new THREE.Vector4();
let iMousePressed = false;

function onMouseMove( event ) {

    // Alternative: normalized device coordinates (-0.5 to +0.5) for both components
    // iMouse.x = ( event.clientX / window.innerWidth ) - 0.5;
    // iMouse.y = ( ( event.clientY / window.innerHeight ) - 0.5 ) * -1.0;

    // Alternative: normalized coordinates in the range 0 - 1
    // iMouse.x = event.clientX / window.innerWidth;
    // iMouse.y = ( ( event.clientY / window.innerHeight ) - 1 ) * -1.0;

    // No mapping - pixel coordinates (width and height).
    // Depending on your setup you may also want to flip Y as discussed above:
    // iMouse.y = window.innerHeight - event.clientY;
    iMouse.x = event.clientX;
    iMouse.y = event.clientY;

    if ( event.buttons > 0 ) { // a button is pressed

        if ( !iMousePressed ) { // it wasn't pressed last frame
            iMouse.z = iMouse.x;
            iMouse.w = iMouse.y;
            iMousePressed = true;
        }

    }

    if ( event.buttons === 0 ) {
        iMousePressed = false;
    }

}

// And remember to add the event listener in your code
window.addEventListener( 'mousemove', onMouseMove, false );


Hey @milewski. This example does not display on iOS devices. My guess is that it's because of the multi-pass rendering of the buffers.

Is there a way to fix this?

Which browser?

All browsers on iOS. So far I have checked Safari, Chrome, and Firefox, and they all display the same.