This Isn’t Just a Blog — It’s Your Own 3D Room

When I first started building this service, the biggest challenge was balancing UX, placement freedom, and performance. These three are constantly fighting each other.

One thing I realized early on is that when an avatar interacts with furniture, it takes time to walk over to it. It might only be a few seconds, but modern users won’t wait for that. If someone just wants to check a bookshelf or play a video, making them watch their character walk across the room first breaks the flow.

On top of that, giving users full placement freedom means more furniture on screen at once, which makes both saving scene state and rendering it a lot harder. Techniques like LOD and instancing help, but I felt they wouldn’t be enough long-term as the platform scales and new features get added.

So I made a decision early on: one piece of furniture per category. Instead of a room cluttered with objects, each piece of furniture has its own dedicated interaction. A bookshelf is your blog. A TV plays your videos. A music player handles your music. Every piece of furniture has a clear purpose, and you interact with it directly.

This constraint actually simplified a lot on the technical side. Scene data only needs position, rotation, and scale per slot, so it stays small; loading other users’ rooms is fast, since it’s just fetching a placement list; and the rendering budget stays predictable. All GLB assets go through meshopt compression, room tiles are generated by a tilemap-based system using InstancedMesh, and the renderer adapts to GPU capability automatically.
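To make the “small scene data” point concrete, here is a minimal sketch of what a per-slot placement record could look like. All names (`FurnitureSlot`, `RoomScene`, `payloadBytes`, the category values) are my own illustrative assumptions, not the service’s actual schema:

```typescript
// Hypothetical per-slot scene data: one slot per furniture category,
// each storing only the transform plus a reference to a cached GLB asset.
type Vec3 = [number, number, number];

interface FurnitureSlot {
  category: "bookshelf" | "tv" | "musicPlayer"; // one slot per category
  assetId: string;     // which GLB to load (served meshopt-compressed)
  position: Vec3;
  rotationY: number;   // yaw in radians; furniture sits on the floor
  scale: number;       // uniform scale keeps the payload tiny
}

interface RoomScene {
  roomId: string;
  slots: FurnitureSlot[];
}

// Loading someone else's room is just fetching this list and
// instantiating each cached GLB at its recorded transform.
function payloadBytes(scene: RoomScene): number {
  return new TextEncoder().encode(JSON.stringify(scene)).length;
}

const demo: RoomScene = {
  roomId: "r1",
  slots: [
    { category: "bookshelf", assetId: "shelf_01", position: [2, 0, -1], rotationY: Math.PI / 2, scale: 1 },
  ],
};
```

Because every room is bounded by the number of categories, the payload stays at most a handful of records regardless of how a user decorates.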

Raycasting is limited to wall visibility checks and direct interactions like dragging or clicking objects. Everything else uses simple Box3 AABB collision testing.

Beyond that, pathfinding runs on a Web Worker to keep the main thread free, and we use camera-direction-based wall culling with deferred geometry creation so hidden walls don’t generate tile meshes until the camera actually faces them.
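The wall-culling half of this can be sketched with a single dot product: a wall is visible only when its outward normal points against the camera’s forward direction, and its tile mesh is built lazily the first time that happens. The types and function names below are assumptions for illustration, not the actual implementation:

```typescript
// Hedged sketch of camera-direction wall culling with deferred
// geometry creation. A wall faces the camera when its outward normal
// points against the camera's forward direction (negative dot product).
type Vec3 = [number, number, number];
const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

interface Wall {
  normal: Vec3;   // outward-facing wall normal
  built: boolean; // has the tile mesh been generated yet?
}

function isWallVisible(wall: Wall, cameraForward: Vec3): boolean {
  return dot(wall.normal, cameraForward) < 0;
}

// Deferred creation: only generate the wall's tile mesh the first
// time the camera actually faces it; hidden walls stay meshless.
function updateWall(wall: Wall, cameraForward: Vec3): boolean {
  const visible = isWallVisible(wall, cameraForward);
  if (visible && !wall.built) {
    wall.built = true; // real code would build the InstancedMesh here
  }
  return visible;
}
```

Walls the camera never turns toward never pay their geometry cost at all.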

This way, performance is guaranteed by the structure itself, not by per-feature tuning, even as the platform expands.

For visuals, I did consider custom shaders to make things look nicer, but decided against it. When artists upload their own furniture later, overriding their materials with custom shaders would break their intended look. So we keep the original GLB as-is.

That’s how www.uniroom.world came to be.


Interesting concept and the idea of turning a blog into a personal 3D space is pretty cool. One thing I’m curious about though is how the platform handles performance as rooms get more complex. If users start adding a lot of objects, videos, audio sources, and lighting changes, it could get heavy pretty quickly depending on how assets are managed.

Another thing that would be interesting to know is how the GLB pipeline is handled. Keeping GLB fidelity without shader overrides makes sense if the goal is to support artists uploading their work later, but it might also limit optimization options. In larger scenes, some level of material control or batching sometimes helps keep things performant.

It might also help to show a bit more about the technical side of how things are structured. For example how scenes are stored, how furniture placement works under the hood, and how you handle loading other users’ rooms. Overall it’s a nice idea though and it feels like something between a personal blog and a small social virtual space.


Thanks!

For performance, we originally planned fully free furniture placement, but after weighing UX against performance we settled on one piece of furniture per category. The optimizations are built around that constraint. Furniture uploads will be limited to approved artists with low-poly models only in the early stage, and all GLB assets go through meshopt compression. Room tiles (floor, walls, ceiling) use InstancedMesh to keep draw calls low, and the renderer adapts to GPU capability automatically (pixel ratio, shadow quality, antialiasing, etc. are adjusted per device tier).
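As a rough illustration of the per-tier adaptation, a settings table along these lines is plausible. The tier names and concrete numbers here are hypothetical, not the service’s actual values:

```typescript
// Hypothetical device-tier table: pixel ratio, shadow map size, and
// antialiasing adjusted per GPU tier, as described above.
type Tier = "low" | "mid" | "high";

interface RendererSettings {
  pixelRatio: number;
  shadowMapSize: number;
  antialias: boolean;
}

function settingsForTier(tier: Tier, devicePixelRatio: number): RendererSettings {
  switch (tier) {
    case "low":
      // Render at 1x and skip AA entirely on weak GPUs.
      return { pixelRatio: 1, shadowMapSize: 512, antialias: false };
    case "mid":
      return { pixelRatio: Math.min(devicePixelRatio, 1.5), shadowMapSize: 1024, antialias: true };
    case "high":
      // Cap pixel ratio even on high-DPI screens to bound fill cost.
      return { pixelRatio: Math.min(devicePixelRatio, 2), shadowMapSize: 2048, antialias: true };
  }
}
```

In three.js these values would feed `renderer.setPixelRatio`, the shadow map resolution of each light, and the `antialias` flag at renderer creation.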

For the GLB pipeline, we don’t override shaders or materials. Instead we focus on the delivery side with meshopt compression and CDN caching. Shadow casting is selective too: wall and floor tiles only receive shadows, never cast them.
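The selective-shadow policy is simple enough to state as a tiny function. This is a sketch of the policy as described, not the actual code; in three.js the flags would map to each mesh’s `castShadow` / `receiveShadow` properties:

```typescript
// Sketch of the selective shadow policy: room tiles receive shadows
// but never cast them, while furniture does both. Skipping casting for
// tiles keeps the shadow map pass from redrawing the whole room shell.
interface ShadowFlags {
  castShadow: boolean;
  receiveShadow: boolean;
}

function shadowPolicy(kind: "tile" | "furniture"): ShadowFlags {
  return kind === "tile"
    ? { castShadow: false, receiveShadow: true }
    : { castShadow: true, receiveShadow: true };
}
```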

For how things are structured, furniture placement is coordinate-based with AABB collision detection via Box3. Raycasting is only used for direct user interactions like clicking and dragging; everything else goes through the bounding box system. Furniture is stored as position/rotation/scale plus metadata, so loading another user’s room is just fetching a list of placements and assembling the scene from cached GLB assets.
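A placement check built on that bounding-box system might look like the sketch below: a candidate spot is valid only when its box clears every existing placement. The names here (`canPlace`, `Placement`) are illustrative assumptions, not the real API:

```typescript
// Hedged sketch of coordinate-based placement validation via AABB
// overlap, in the spirit of the Box3-based system described above.
type V3 = [number, number, number];
interface Box { min: V3; max: V3; }
interface Placement { id: string; box: Box; }

// Boxes overlap iff their ranges overlap on all three axes
// (touching faces count as overlap here).
const overlaps = (a: Box, b: Box): boolean =>
  [0, 1, 2].every(i => a.max[i] >= b.min[i] && a.min[i] <= b.max[i]);

// A new piece of furniture may only go where its bounding box
// clears every existing placement in the room.
function canPlace(candidate: Box, existing: Placement[]): boolean {
  return existing.every(p => !overlaps(candidate, p.box));
}
```

With at most one placement per category, this stays a linear scan over a handful of boxes, so there is no need for a spatial index.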