New Year Collaboration: Join Crateria City Development

Happy New Year, Three.js community!

As we start 2026, I’m inviting designers, developers, and 3D enthusiasts to voluntarily collaborate on Crateria City, a detailed game map available here: Crateria City on RenderHub

You can also explore the live demo here: Crateria City Live Demo

Why Participate
Collaborating is a unique opportunity to:

  • Gain Hands-On Experience: Work with a browser-based Virtual Experience Engine built with Three.js and Web Physics

  • Expand Your Skills: Improve modeling, interactivity, optimization, and immersive visualization skills transferable to game development, architecture, or virtual design

  • Collaborate and Network: Connect with other designers and developers, share ideas, and contribute creatively to a growing virtual world

  • Showcase Your Work: Contributions can be used in your portfolio or for personal learning

Reference Engine
The engine’s architecture and workflows are documented in Virtual Expereince Engine.pdf (5.0 MB) so participants can experiment independently.

Participation Terms
Participation is completely voluntary. Contributors are free to join or leave at any time. There is no obligation on either side, and no recourse against me regarding contributions, decisions, or outcomes. This collaboration is purely for learning, experience, and creative exploration.

Who Can Contribute

  • Designers and architects familiar with 3D tools like Blender, SketchUp, 3ds Max, or Rhino

  • Developers interested in interactive 3D web applications

  • Anyone passionate about immersive virtual environments

Let’s Make 2026 Creative and Interactive
By participating, you can help turn Crateria City into a fully interactive, living virtual world, while gaining practical skills and experience in a voluntary, flexible, and self-directed way.


Add Your Own GLTF Models

This feature provides a blank browser-based virtual environment where architects, designers, and developers can add their own GLTF models and view them directly in a real-time Three.js experience.

Participants can use any GLTF asset from Sketchfab, or export their own models from SketchUp or other 3D tools, convert them to GLTF using Blender, and then drag and drop the model directly into the scene.

All files are handled locally in the browser.
No files are uploaded, stored, or sent to a server. The drag-and-drop process is purely client-side, making it suitable for private testing, experimentation, and learning.
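To make the client-side workflow concrete, here is a minimal sketch of a drag-and-drop handler of the kind described above. The function names, the `scene` variable, and the GLTFLoader wiring (shown as comments) are assumptions for illustration; the live viewer's actual code is not published here. The key point it demonstrates is that a dropped file becomes a local blob URL, so nothing ever leaves the browser.

```javascript
// Pure helper: keep only glTF files from a list of dropped file names.
function pickGltfFiles(fileNames) {
  return fileNames.filter((name) => /\.(gltf|glb)$/i.test(name));
}

// Browser-only wiring (skipped outside a browser, e.g. when run in Node).
if (typeof window !== 'undefined') {
  // import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
  window.addEventListener('dragover', (e) => e.preventDefault());
  window.addEventListener('drop', (e) => {
    e.preventDefault();
    for (const file of e.dataTransfer.files) {
      if (pickGltfFiles([file.name]).length === 0) continue;
      // A blob URL points at memory inside this tab; no upload occurs.
      const url = URL.createObjectURL(file);
      // new GLTFLoader().load(url, (gltf) => scene.add(gltf.scene));
    }
  });
}
```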

Because the environment is intentionally minimal, contributors can focus on:

  • Testing scale, orientation, and spatial placement

  • Validating assets for real-time web performance

  • Understanding how GLTF models behave inside a Three.js workflow
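The scale check in particular catches the most common export mistake (wrong unit scale, e.g. millimeters instead of meters). As a plain-JS sketch of what `new THREE.Box3().setFromObject(model)` computes for a loaded model, the helper below derives a bounding-box size from raw vertex positions; a city block that comes out a few centimeters wide is immediately visible in the numbers.

```javascript
// Compute the axis-aligned bounding-box size of a mesh from its flat
// position array [x0, y0, z0, x1, y1, z1, ...], as three.js geometry
// attributes store it. Returns [width, height, depth].
function boundingBoxSize(positions) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < positions.length; i += 3) {
    for (let axis = 0; axis < 3; axis++) {
      min[axis] = Math.min(min[axis], positions[i + axis]);
      max[axis] = Math.max(max[axis], positions[i + axis]);
    }
  }
  return max.map((m, axis) => m - min[axis]);
}
```

In three.js itself you would call `box.getSize(new THREE.Vector3())` on a `Box3` built from the loaded `gltf.scene` rather than touching raw buffers.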

Access the blank virtual environment here:
https://theneoverse.web.app/#threeviewer&&construct

Participation is fully voluntary and self-directed, allowing contributors to explore, iterate, and learn at their own pace.

Resources:

Sample levels:

sandyards market.glb (3.5 MB)

vistadistrict.glb (1.8 MB)

crateria city terrain 12k.glb (7.2 MB)

speed way.glb (1.5 MB)

underground bunker.glb (1.3 MB)

lake lagoon.glb (2.6 MB)

Sample vehicles:

normadic.glb (1.5 MB)

furnariscafatigtvehicle.glb (921.2 KB)

Sample characters:

blue shirt.glb (120.7 KB)

avatar biker leather top jeans male.glb (289.6 KB)

avatar red rider female.glb (394.7 KB)

Building Virtual Worlds with GLTF Assets in a Browser-Based Virtual Experience Engine

This post introduces a browser-based Virtual Experience Engine designed to make building immersive 3D environments more accessible for architects, designers, engineers, and developers.

The goal is to allow contributors to focus on designing spaces, levels, characters, and vehicles, rather than spending time setting up complex JavaScript workflows. The environment is intentionally minimal so participants can explore how GLTF assets behave in a real-time Three.js context.

GLTF Structure Shown in the Demo

In the video demo, a GLTF model is not just a visual mesh. Each level is composed of multiple functional components:

  • Physics bodies (phy-bodies)
    These meshes are used by Cannon.js as colliders, defining physical interactions within the scene.

  • Plane axes
    Used for avatar positioning, spawn points, and scene orientation.

  • Visible meshes
    The rendered geometry that appears in the scene. These meshes may be baked or textured to optimize real-time performance.

This separation allows contributors to better understand how structure, physics, and visuals work together in a Three.js-based virtual environment.
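One way this separation can be expressed in code is by classifying nodes of the loaded scene graph by a naming convention. The prefixes below (`phy-` for collider meshes, `axis-` for plane axes) are assumptions for illustration, not the engine's published convention; `gltf.scene.traverse()` in three.js walks the node tree the same way this sketch does.

```javascript
// Split a loaded level's node tree into functional groups by name prefix.
// Nodes named "phy-*" are treated as physics colliders, "axis-*" as
// spawn/orientation markers, and everything else as visible geometry.
function classifyNodes(root) {
  const groups = { physics: [], axes: [], visual: [] };
  (function walk(node) {
    if (node.name.startsWith('phy-')) groups.physics.push(node.name);
    else if (node.name.startsWith('axis-')) groups.axes.push(node.name);
    else groups.visual.push(node.name);
    (node.children || []).forEach(walk);
  })(root);
  return groups;
}
```

With Cannon.js, each entry in `groups.physics` would then back a `CANNON.Body` (e.g. shaped from the mesh's bounding box), while only `groups.visual` meshes are added to the render scene, so colliders stay invisible but physically active.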

Supported Design Tools

Any 3D modeling or design software can be used to create assets or levels. Architects and designers can work with tools they already know, such as:

  • 3ds Max

  • SketchUp

  • Rhino

  • Maya

  • Blender

  • AutoCAD

  • Or any other 3D or CAD application

Models can be exported to GLTF (for example, using Blender) and then used directly in the engine.

Drag-and-Drop Workflow

Participants can:

  • Use GLTF assets from Sketchfab

  • Export their own models from their preferred design tools

  • Drag and drop GLTF files directly into the browser-based environment

All processing happens locally in the browser. No files are uploaded, stored, or sent to a server. This makes it suitable for private testing, experimentation, and learning.
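The local-only claim can be verified end to end: a dropped file can be read into memory with the standard File API and handed straight to the parser, with no network call anywhere in the path. The binary `.glb` container begins with the ASCII magic `glTF` (little-endian `0x46546C67`), so even format validation is a pure in-browser byte check. The `FileReader` wiring below is a guarded sketch; the GLTFLoader call is shown as a comment because the viewer's actual integration code is not published.

```javascript
// Quick client-side sanity check: a binary glTF (.glb) file starts with
// the 4-byte magic "glTF" (0x46546C67 read as a little-endian uint32).
function looksLikeGlb(buffer) {
  if (buffer.byteLength < 12) return false; // magic + version + length
  return new DataView(buffer).getUint32(0, true) === 0x46546c67;
}

// Browser-only: parse a dropped File entirely in memory (hypothetical
// wiring; skipped when FileReader is unavailable, e.g. under Node).
if (typeof FileReader !== 'undefined') {
  // import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
  const parseDroppedFile = (file) =>
    file.arrayBuffer().then((buf) => {
      if (!looksLikeGlb(buf)) return; // not a binary glTF container
      // new GLTFLoader().parse(buf, '', (gltf) => scene.add(gltf.scene));
    });
}
```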

Purpose of the Environment

Because the environment is intentionally simple, contributors can focus on:

  • Testing scale, orientation, and spatial layout

  • Validating assets for real-time web performance

  • Understanding how GLTF models function inside a Three.js workflow

Access & Resources

Blank virtual environment:
https://theneoverse.web.app/#threeviewer&&construct

Resources used in the demo (including downloadable GLTF models):
https://discourse.threejs.org/t/new-year-collaboration-join-crateria-city-development/88918/2

Reference documentation (Virtual Experience Engine Book):
Virtual Expereince Engine.pdf (5.0 MB)

Participation is fully voluntary and self-directed. Contributors are encouraged to explore, iterate, and learn at their own pace while experimenting with real-time 3D environments in the browser.