Points with custom point geometry instead of square/circle

Hi!

I’m implementing a custom image viewer (something like Pixplot), but my dataset of texture atlases is made up of irregular image cells: not squares, but rectangles. This would require setting up custom geometry at the vertex level, but I’ve already started implementing everything with THREE.Points, which, to my knowledge, does not have a vertices attribute.

Simplified, my question is: is there a way to display a point cloud with custom rectangles as points (the specific dimension data is stored in a JSON file, which I am already reading successfully) instead of squares of the same size, or should I switch everything over to BufferGeometry? I am currently using THREE.Points with custom shaders.

I’d really appreciate any tips!

Thanks!

Tian

As far as I remember, points are always squares. You can set their size per vertex, and then inside each square draw whatever shape you need in the fragment shader: a circle, a triangle, or a rectangle.

Your rectangle should fit into the square, so set the square size to the larger of the rectangle’s dimensions; the rest of the square will be transparent.
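A minimal fragment-shader sketch of that idea, assuming the rectangle’s proportions are passed in as a uniform (`uAspect` is a made-up name; it would hold the rectangle’s dimensions divided by the larger dimension):

```glsl
// Fragment shader for THREE.Points: keep only a centered rectangle
// inside the square point sprite; discard the rest.
uniform vec2 uAspect; // e.g. vec2(1.0, 0.6) = dims / max(dim) — hypothetical uniform

void main() {
  // gl_PointCoord runs from (0,0) to (1,1) across the square point
  vec2 p = abs(gl_PointCoord - 0.5) * 2.0; // distance from center, in [0,1]
  if (p.x > uAspect.x || p.y > uAspect.y) discard; // outside the rectangle
  gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // or sample a texture here instead
}
```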

I drew stars in point geometry:



hi!

This makes perfect sense, thank you! It seems that no matter what I set gl_PointSize to, it renders the same size of square every time, except when I go to really small values. But the problem is, it seems to scale/fit the texture to the square. For example, if I set gl_PointSize to 200 or 500, I get the same result, in which the textures are scaled to fit.

Does this make sense to you?

It’s because the fragment shader gets texels using gl_PointCoord.

Yes, true. Do you know a workaround?

Cut the desired rectangle, using discard in shaders.


Yes, you know the size of the point in pixels and the size of the texture in pixels, so you can calculate some scale factors and use them when you sample the texture. Provide them to the fragment shader as uniforms. That is, use scaled coordinates when sampling the texture instead of raw gl_PointCoord.
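A sketch of the CPU side of that calculation: the square point spans the image’s larger dimension, so the per-axis scale is each dimension over the maximum. Function and uniform names here are illustrative, not from the original code:

```javascript
// Sketch: compute the UV scale factors for displaying a non-square image
// inside a square point, so the image keeps its aspect ratio.
// Dimensions are in pixels; names are illustrative.
function uvScaleForPoint(imgWidth, imgHeight) {
  const maxDim = Math.max(imgWidth, imgHeight);
  // The square point spans maxDim pixels; the image only covers a fraction
  // of the square on its shorter axis.
  return { x: imgWidth / maxDim, y: imgHeight / maxDim };
}

// Example: a 180x90 image inside its bounding square
const s = uvScaleForPoint(180, 90);
console.log(s); // { x: 1, y: 0.5 }

// In three.js you would pass this to the ShaderMaterial as a uniform, e.g.:
// material.uniforms.uScale = { value: new THREE.Vector2(s.x, s.y) };
```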

As prisoner849 suggested, do not set any color outside the texture, so it doesn’t repeat.


Why not InstancedBufferGeometry? One draw call.



I’m not that comfortable with shaders yet, so this might be a stupid question: scaling the point coord by factors based on the texture dimensions would mean I’m essentially creating uniquely sized texture samples, wouldn’t it? Or would it still be better to scale the UV with the largest dimension of the textures, to just get one big square, and then discard everything else?

and this is all really helpful, I appreciate it a lot, thank you!

You still sample your texture in the range [0,1] on each axis, wherever the centers of your screen pixels fall inside that range. When your texture is visually stretched, several screen pixels translate to very close UV values that sample the same texel from the texture. You need to counteract this by shrinking your sampling grid, depending on the ratio of the point size to the texture size in pixels.
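In shader terms, the shrinking described above can be sketched like this, assuming the per-axis scale factors come in as a uniform (`uScale` and `uMap` are made-up names):

```glsl
// Sketch: sample the texture with shrunk, centered coordinates so a
// non-square image is not stretched across the square point.
uniform sampler2D uMap;
uniform vec2 uScale; // dims / max(dim), e.g. vec2(1.0, 0.5) — illustrative

void main() {
  // Expand the [0,1] point coord away from the center by 1/uScale...
  vec2 uv = (gl_PointCoord - 0.5) / uScale + 0.5;
  // ...and discard anything that now falls outside the [0,1] texture range.
  if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0)))) discard;
  gl_FragColor = texture2D(uMap, uv);
}
```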

You won’t create any new textures or “unique samples”.

Here is the result you want:

This is a great solution, but it does not work in my case, because my image samples are cut out of a texture atlas. So if I just scale the image, more of its neighbouring cells will show up in the view window. Is it possible that I am somehow scaling the image uniformly when I cut it out of the atlas? I’m confident this is not the case, but this behaviour makes me think something might be wrong at that step. I have a texture atlas of non-uniform images and a JSON file with the offsets and image dimensions. I manage to extract the images fully, but the problem remains that they are smushed into a square. Here is how the projection looks at the moment.

Put that information in additional buffer attributes and process it in shaders :thinking:

Already got that! If I didn’t, the images wouldn’t show up at all this way. Here is my current relevant code.

The first is the relevant part of the fragment shader, and in the second you can see how I define the values to scale the UV with and sample the right image (texture.repeat + texture.vOffset). textureChiechi is a 2048 x 2048 texture atlas in which the non-uniform sample images are packed. You can see that the image is cut out correctly, because it is wholly visible, but I cannot figure out which part of the shader is scaling it down to a square. Is it because of how the repeat is defined, or because I leave gl_PointCoord as is?

In the third picture are the values of the repeat buffer. The first of every pair is constant, because all the images are 180 px in length, and the second varies, because the height of the images is non-constant. So the transformation is definitely correct: the UV is scaled down to every image with the repeat and vOffset, the image is cut out and then displayed. I don’t see where the image is then transformed into a uniform square. It is cut out of the texture as a rectangle, as the values in the buffer above show, so when does this scaling to a square happen?
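For what it’s worth, the situation described here involves two separate transforms: the atlas cut-out (repeat + offset) and the aspect correction. Mapping gl_PointCoord straight into the cell stretches the rectangular cell across the whole square point, because gl_PointCoord itself always spans the full square. A sketch of the per-image values, under the stated assumption of a 2048 x 2048 atlas and 180 px-wide cells (function and field names are illustrative, not from the original code):

```javascript
// Sketch: per-image attribute values for sampling a cell from a
// 2048x2048 atlas while keeping the image's aspect ratio.
const ATLAS = 2048; // atlas side length in pixels (assumed)

function cellAttributes(xPx, yPx, wPx, hPx) {
  const maxDim = Math.max(wPx, hPx);
  return {
    // portion of the atlas this image occupies, in [0,1] UV space
    repeat: { x: wPx / ATLAS, y: hPx / ATLAS },
    offset: { x: xPx / ATLAS, y: yPx / ATLAS },
    // fraction of the square point the image should cover per axis;
    // apply this to gl_PointCoord FIRST (discarding outside), and only
    // then map into the cell with offset + uv * repeat
    aspect: { x: wPx / maxDim, y: hPx / maxDim },
  };
}

// Example: a 180x90 cell at the atlas origin
const a = cellAttributes(0, 0, 180, 90);
```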

I’m really sorry if this is verging on annoying, but I really just need an extra pair of expert eyes!

It’s hard to tell why it’s not working in your code, but it works in mine :sunglasses:

Here is an implementation of an atlas, if you will; hopefully you can take it from here:


But you are not cutting images from the atlas, you are just displaying it as an image itself, no?

Imagine that image is your atlas, I’m displaying a part of it, defined by offsets.

But what does text_size mean exactly in your case? Is it just random?

Perhaps it would be better to ask: once I cut the image from the texture atlas, how would I scale it to its original size? That is the only seemingly simple thing I would like to achieve.

Point size depends on the video card’s limit, commonly around 512 px. InstancedBufferGeometry has no such limit, and it’s still one draw call.
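The instancing suggestion above can be sketched on the data side without three.js: each image becomes one quad instance with its own position and size, stored in flat typed arrays that three.js would then wrap as instanced attributes. Names here are illustrative:

```javascript
// Sketch: build per-instance attribute arrays for an
// InstancedBufferGeometry approach, which sidesteps the
// gl_PointSize limit entirely. One quad instance per image.
function buildInstanceAttributes(images) {
  const n = images.length;
  const translate = new Float32Array(n * 3); // world position per instance
  const scale = new Float32Array(n * 2);     // quad width/height per instance
  images.forEach((img, i) => {
    translate.set([img.x, img.y, img.z], i * 3);
    scale.set([img.w, img.h], i * 2);
  });
  return { translate, scale };
  // In three.js these would become attributes on an InstancedBufferGeometry
  // that shares one PlaneGeometry's vertices, e.g.:
  // geo.setAttribute('translate', new THREE.InstancedBufferAttribute(translate, 3));
  // geo.setAttribute('scale', new THREE.InstancedBufferAttribute(scale, 2));
}

const attrs = buildInstanceAttributes([
  { x: 0, y: 0, z: 0, w: 180, h: 90 },
  { x: 1, y: 2, z: 0, w: 180, h: 240 },
]);
```

In the vertex shader, each quad vertex is then multiplied by its instance’s scale and offset by its translate, so every image can have any pixel size.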

but what does the text_size mean exactly in your case? is it just random?

Your atlas in that example is 75x100 pixels, so everywhere in the code where you see 75 and 100, these are the atlas dimensions; text_size is the reciprocal of those. Nothing is random.

once I cut the image from the texture atlas, how would I scale it to its original size.

Atlas normally contains images at their original size, packed together for various reasons, so you just cut them out of the atlas as they are.