GPUComputationRenderer

Hi! I have based a lot of my code on the GPUComputationRenderer class and the three.js flocking/birds example. Everything worked great until I found out that I need to support some Android devices that don’t support the OES_texture_float extension. Is it possible to modify the birds example and/or GPUComputationRenderer to use UnsignedByte or some other lower-precision type? Any help would be much appreciated. Thanks!

You could take a look at the Textures constants page. If you scroll down to the Types section, you’ll see that there are plenty of data types to choose from, including THREE.UnsignedByteType. Not sure how this would affect the flocking/birds example, since I haven’t looked at its code, but it’s worth a shot!

Here’s a helper function from a 2017 project where I needed to perform some GPGPU computation and ran into iOS Safari issues as well, so maybe it can be of some help.

function getTextureType(renderer) {
	if (/(iPad|iPhone|iPod)/g.test(navigator.userAgent)) {
		// iOS devices only support HalfFloatType
		return THREE.HalfFloatType;
	} else if (renderer.extensions.get('OES_texture_float')) {
		// Most devices support FloatType
		return THREE.FloatType;
	} else {
		// Fall back to UnsignedByteType everywhere else
		return THREE.UnsignedByteType;
	}
}
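
If you’re creating the data textures yourself, the chosen type is simply the fifth argument to THREE.DataTexture. A minimal sketch of wiring it up (data, WIDTH and HEIGHT are placeholders from my setup):

var type = getTextureType(renderer);
var texture = new THREE.DataTexture(data, WIDTH, HEIGHT, THREE.RGBAFormat, type);
texture.needsUpdate = true;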

This is golden, thanks! It seems to be the right track, but I get this error when I switch to UnsignedByteType: WebGL: INVALID_OPERATION: texImage2D: type UNSIGNED_BYTE but ArrayBufferView not Uint8Array

Do I have to create my BufferGeometry using Uint8Array instead of Float32Array? Do you know how that works?

Edit: Saw that Three.js actually has support for different BufferAttributeTypes. I will try that. https://threejs.org/docs/#api/core/bufferAttributeTypes/BufferAttributeTypes
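
Something like this, maybe? A rough sketch (numPoints and geometry are placeholders; in newer three.js versions addAttribute() is called setAttribute()):

// A BufferAttribute backed by a Uint8Array instead of a Float32Array.
// Passing true as the third argument marks the data as normalized,
// so 0–255 is read as 0.0–1.0 in the shader.
var data = new Uint8Array(numPoints * 4);
var attribute = new THREE.BufferAttribute(data, 4, true);
geometry.addAttribute('reference', attribute);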

I have come a step further on this, but I’m struggling a bit to find the best way to convert data between a Uint8Array and a Float32Array. A Uint8Array holds integer values from 0 to 255. Anyone got an idea on this?

You should be able to simply iterate through the Uint8Array and assign its values to the Float32Array; every 8-bit integer value (0–255) is exactly representable as a 32-bit float.

var uint8A = ...; // your existing Uint8Array
var float32A = new Float32Array(uint8A.length);

// Copy each byte value into the corresponding float slot
for (var i = 0; i < uint8A.length; i++){
    float32A[i] = uint8A[i];
}

See the MDN Float32Array page for more details.

There’s also the alternative of using the static TypedArray.from() method, like this: Float32Array.from(uint8A), but you may run into browser compatibility issues, since it was only introduced in ECMAScript 2015.
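
And if your byte values represent normalized texture data, you’d probably want to rescale during the copy rather than assign the raw values. A rough sketch:

// Map 0–255 bytes to 0.0–1.0 floats and back.
function bytesToFloats(uint8A) {
	var float32A = new Float32Array(uint8A.length);
	for (var i = 0; i < uint8A.length; i++) {
		float32A[i] = uint8A[i] / 255;
	}
	return float32A;
}

function floatsToBytes(float32A) {
	var uint8A = new Uint8Array(float32A.length);
	for (var i = 0; i < float32A.length; i++) {
		// Clamp to [0, 1] before scaling to avoid overflow
		uint8A[i] = Math.round(Math.min(Math.max(float32A[i], 0), 1) * 255);
	}
	return uint8A;
}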
