Load an external three.js script inside a main scene: the next step for metaverses.

Hey experts. I’m wondering how 3D/VR web content can be nested, so that metaverses/webspaces can load external procedural 3D/VR content (generated by code, not 3D assets). For example, an NFT 3D gallery could load several three.js-based NFTs and display them as interactive/procedural 3D in a limited region of the gallery. That sounds like the future, right?

I’ve searched, and it seems far from being a reality. One approach is to use CSS3D tricks to display an interactive HTML page, but as far as I’ve seen this doesn’t bring real 3D into the main scene; it’s just a flat interactive texture.

So, what’s missing in the current technology state?

Somehow it has to be done, so let’s look for the least bad way. Maybe export the external HTML/JS in such a way that the main scene can call a method and pass in its renderer, camera, and local position/orientation to the external scene? The external three.js script would then work like a parametric component rather than a global scene, so it would need two modes of running: standalone (as currently) and as a function. Could the security problem of loading external sources be solved by accepting only trusted sources (reading the code to validate it) and ensuring immutability by storing it in repositories like IPFS? Or even then, would it be a risk?
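A minimal sketch of what that two-mode “parametric component” idea could look like in plain JavaScript. All names here are hypothetical, and plain objects stand in for three.js scene nodes; this is an illustration of the shape, not a real API:

```javascript
// Hypothetical shape for an artwork that runs standalone OR embedded.
// The host passes its own anchor node, camera and renderer; the artwork
// only ever touches what it was given.

// createArtwork(host) -> { update, dispose }
function createArtwork(host) {
  const state = { t: 0, spawned: [] };

  // Build content under the node the host supplied, so the host
  // controls where in its world the artwork appears.
  const child = { name: 'boid-0' }; // stand-in for a Mesh
  host.anchor.children.push(child);
  state.spawned.push(child);

  return {
    // The host calls this from its own render loop.
    update(dt) { state.t += dt; },
    // The host calls this when unloading the artwork.
    dispose() {
      for (const c of state.spawned) {
        const i = host.anchor.children.indexOf(c);
        if (i !== -1) host.anchor.children.splice(i, 1);
      }
    },
  };
}

// Standalone mode: if no host imported us, fabricate our own "host".
function runStandalone() {
  const host = { anchor: { children: [] }, camera: null, renderer: null };
  return createArtwork(host);
}
```

The key design choice is that the same factory backs both modes; standalone mode just builds a default host instead of receiving one.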

It is something that has to happen definitely, can you experts speculate how it could be? Thank you.

I think you’re on the right track with your thoughts about this. If I understand you correctly, you are talking about the 3D equivalent of the iframe, which in 2D copes with security issues by allowing websites either to be displayed in an iframe or not (via response headers such as X-Frame-Options, rather than CORS proper). I.m.o. this mechanism could also be used in 3D.


in react everything is a component. components are self-contained units that can be loaded at runtime. for instance, here is a gallery: Image Gallery - CodeSandbox

it contains textures, but it wouldn’t make much of a difference if these were “windows” into procedural content. here’s an example of that: Magic mirror - CodeSandbox

the “MagicMirror” component in that example uses createPortal, which allows you to mount stuff into a foreign node. that can happen dynamically, by routing, procedurally, lazy loading, etc. on the common web outside of webgl this is everyday practice.

for instance this bit:

  <Heli />

“Heli” is a helicopter that’s floating. just mounting it in there confines it to the mirror; mounting it outside makes it part of the scene. it could be in both places.
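The “mount it in the mirror or in the scene” idea maps to how a scene graph re-parents nodes. In real three.js, `Object3D.add()` already moves a child from its old parent to the new one; this small mock shows the same mechanic without depending on three (all names are illustrative):

```javascript
// Mock scene-graph nodes: a name, a parent pointer, and a child list.
function makeNode(name) {
  return { name, parent: null, children: [] };
}

// add() re-parents: it detaches the child from its previous parent
// first, mirroring what three.js Object3D.add() does.
function add(parent, child) {
  if (child.parent) {
    const i = child.parent.children.indexOf(child);
    if (i !== -1) child.parent.children.splice(i, 1);
  }
  child.parent = parent;
  parent.children.push(child);
}

const scene = makeNode('scene');
const mirror = makeNode('mirror'); // the portal's own sub-world
const heli = makeNode('heli');

add(scene, mirror);
add(mirror, heli); // heli is confined to the mirror
add(scene, heli);  // now it's part of the main scene instead
```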

in plain threejs this won’t be easy, because there is no common ground. if you wanted to share something, it would have to make big assumptions about the project’s structure, as well as awareness of the environment, and reactivity. this is generally what blocks three from having a unified ecosystem.


Is there no way to wrap it in a function and pass the environment, interaction, etc. as parameters, so it is implementation agnostic?

Is it possible with just three.js, and how? I think the lack of interoperability is not a problem for now, because the use case of NFT platforms should be very similar across platforms, and in the worst case developers can follow the standard of each platform where they want to upload.

Unfortunately I am not familiar with React. Do I understand correctly that a React component can be loaded from an external source and embedded in the main scene, and that this component can carry its own lighting and interactivity? Can this be done with React?

The idea is: you are in a 3D gallery, from the browser or in VR, and instead of seeing a static/animated piece, you see an interactive piece, say a boid system whose target position you can change with your hand. That’s the kind of experience that will make creative coders the kings of design in the metaverse, don’t you think?

So, what is needed to develop this?

try it. in my experience, sharing in oop-based environments is not a thing. even if you make an injection- and rule-based catalogue of what people need to adhere to, it will never fill with content, because you’re bumping against the limitations of oop.

Unfortunately I am not familiar with React. Do I understand correctly that a React component can be loaded from an external source and embedded in the main scene, and that this component can carry its own lighting and interactivity? Can this be done with React?

that’s generally what frameworks allow. and yes, see: Image Gallery (forked) - CodeSandbox

basically i’m saying this: if you wanted what you describe in plain threejs, you’d be creating a framework. even if you don’t call it that, by the time you’re done, you will have made one. it creates common ground, and the most natural evolution of this concept is a component.

PS. i don’t want to disturb your thought process or take the wind out of your sails. by all means, go ahead and explore. i just thought i’d show it to you, because this stuff is very common in generic web dev and now applies to threejs as well.

This example sort of implements the ‘CssObject3d trick’ Daniel mentioned, making external content visible by supplying just a link. No design-time knowledge of what is at the other end is necessary. But of course the 3D scenes should not be constricted to the iframes; they should be displayed within the parent’s 3D space.

So maybe there is a difference between sharing components, as in the React example, and showing 3D websites in the Metaverse.


I do not understand your position. I’m not telling you that your way is not the right way; I’m just asking. I don’t have enough experience in web development to do this, nor the time. The most I can do is a minimal version for use on my own website, but I am looking for a solution to share with the developers of 3D galleries out there. It is something that will be done; I just wanted to open the debate.

I don’t know what limitations you are referring to in an object-oriented language like JavaScript.

I am not against learning React; I know it is everywhere these days. I asked you how to do it with just three.js because you say “in plain threejs this won’t be easy”, which to me means that there is a way.

I asked you how to do it with just three.js because you say: “in plain threejs this won’t be easy.” to me that means that there is a way.

i have no answer to this. i can only point out that in order to share, you need common ground: simple technical stuff like which camera is being used, which renderer, how events work, up to more complex structural stuff and state management.

the only solution i can think of is a framework, that is, a common agreement between the parts of your app. the question you will have to come to terms with is: do you want to make your own, or do you want to use an existing one?
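As a sketch of what such a “common agreement” might look like in practice (all names are hypothetical, not an existing API): the host defines a tiny lifecycle contract and rejects modules that don’t satisfy it.

```javascript
// The contract every shared module must implement. A real host would
// also specify what arguments init() receives (camera, renderer, ...).
const CONTRACT = ['init', 'update', 'dispose'];

function validateModule(mod) {
  return CONTRACT.every((fn) => typeof mod[fn] === 'function');
}

function createHost() {
  const modules = [];
  return {
    // Accept a module only if it adheres to the contract.
    register(mod) {
      if (!validateModule(mod)) {
        throw new Error('module does not satisfy the host contract');
      }
      modules.push(mod);
      mod.init();
    },
    // Drive every registered module from the host's render loop.
    tick(dt) { for (const m of modules) m.update(dt); },
    // Tear everything down when leaving the space.
    unloadAll() { while (modules.length) modules.pop().dispose(); },
  };
}
```

The validation step is what turns “big assumptions about the project’s structure” into an explicit, checkable agreement.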

ps, just to try out the concept, which i find interesting, i built it out further: Image Gallery (forked) - CodeSandbox. i could fill this now with everything anyone ever made, because their stuff and my environment agree.

Yes, of course. And sure, doing it on top of React makes sense. So back to my question: it’s all currently there and just needs to be developed?

In such a case, if no independent party takes over, each metaverse/gallery/webspace will do its own private implementation, and it will be a hassle for artists, who won’t be able to port their work between webspaces. It shouldn’t be like that.

I think that three.js as a library should provide a way to allow this kind of communication in a standardized way.

I think that three.js as a library should provide a way to allow this kind of communication in a standardized way.

Not quite so, imo. If you consider browser-based access to a metaverse, the common specification should be there, just like on the 2D web, where it resides in the DOM/CSS handling. The real issue is that the DOM has not been extended to 3D with (immersive) VR in mind.

I’m not saying it wouldn’t be helpful for threejs to have some support for this, but what about babylonjs and other environments?

this is going in circles. it is not possible. threejs is oop and therefore does not have an ecosystem. no two libraries can ever interoperate; they all come with their specific rules and assumptions.

frameworks were made to provide a layer for interop and ecosystems, because they standardize a common interface, or “frame”, around parts that do not necessarily know one another. you can either use an existing one or write one yourself. imo threejs should never provide this, because that’s not its domain.

it seems quite inevitable that the metaverse will use these existing interop layers, because they’re the only solution for sharing content, even across systems (dom & three) and platforms (mobile, web, native, vr). one of these, for instance, is being made here: https://twitter.com/sougenco/status/1479420605640638474

Correct, I’d be up for collaborating on building something like this ‘webTHREE.js’. It would take long, but be very useful.


building something like this ‘webTHREE.js’

How do you envision this?

It’s a bit cloudy. I guess a place to start researching is tools like CodePen and JSFiddle that allow code processing in the window. An editor could be constructed around something similar, connected to an already-built framework of preconstructed THREE.js modules. Modules written there could be verified to follow syntax rules and, if so, exported as ES6 modules by appending the appropriate module export methods. Maybe a node-based visual panel could outline what’s already contained in the scope, e.g. camera plugs into scene plugs into renderer, beginning with a default set of nodes created from a core framework. The Web3 side of it, I guess, would be minting your JS module ‘plugin’ to a blockchain network, which then lands in your wallet. The wallet could then be connected to a platform built from the same default node graph, which could build a JS object containing references to the default nodes as well as nodes collected/minted in your wallet. This JS object could be posted to a database entry and retrieved whenever this particular wallet connects, returning the object with references to all the appropriate nodes.

Nodes could of course include the option to ‘drag in’ events and event listeners, e.g. ‘pointerdown’, ‘mousemove’, ‘resize’, etc…
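One hedged sketch of how those “dragged in” events might be wired up (names are made up for illustration): the host owns the single real DOM listener and forwards events to whichever embedded modules opted in.

```javascript
// A tiny event bridge between a host scene and embedded modules.
function createEventBridge() {
  const listeners = new Map(); // event name -> Set of handlers

  return {
    // An embedded module "drags in" the events it cares about.
    on(name, handler) {
      if (!listeners.has(name)) listeners.set(name, new Set());
      listeners.get(name).add(handler);
    },
    // Modules can detach again when unloaded.
    off(name, handler) {
      const set = listeners.get(name);
      if (set) set.delete(handler);
    },
    // The host calls this from its one real DOM listener
    // (e.g. canvas.addEventListener('pointerdown', e => bridge.dispatch('pointerdown', e))).
    dispatch(name, event) {
      for (const h of listeners.get(name) || []) h(event);
    },
  };
}
```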

Bear in mind the above is just a five-minute random approach; it’s a very broad but also specialist mixture of knowledge that would take a lot of thought, logic and development to make not only practical and intuitive to use, but to make work at all…

What are your thoughts?


It is not clear to me what your end goal for a webTHREEJS would be: is it the use of shared ‘active assets’ that can be loaded at runtime, or a framework for a sort of Threejs-based Metaverse (which would of course include the first goal)?
Or maybe even a project to serve as a demo for the W3C for future browser development?

@awonnink, @drcmda, @forerunrun, you’ve got me wrong. It is not three.js’s responsibility to create a framework for the metaverse; I wasn’t saying that. We are probably 2 to 5 years away from something like that being specified by the W3C or whoever. But I think three.js should look beyond standalone mode and include a way to combine scenes into a main scene, or at least include it as an example. This would make it easier to create webspaces and to start experimenting with metaverses, at least in their less interoperable version. Step by step.

I imagine it like this (keep in mind that my web experience is limited):

  • The external scene is exported as an object with metadata (name, description, capture, author…) and functions in which the script is written using its arguments (region of space, camera, renderer…). It could have a setup function, an update function, and others for interaction?
  • The HTML references all the external three.js scripts.
  • In the main scene code, the objects containing the external three.js are imported and added to the scene.
  • In the update/render/interact functions of the main scene, you call update/render/interact on these objects.

Does this make sense?
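A minimal sketch of those steps in code, assuming hypothetical names for the lifecycle functions and the context object (nothing here is a real three.js API):

```javascript
// One possible reading of the proposal: an external piece is a plain
// object with metadata plus lifecycle functions, and the main scene
// drives them from its own loop.
const externalPiece = {
  meta: { name: 'Boids', author: 'someone', description: '...' },
  setup(ctx) {            // ctx: { region, camera, renderer, ... }
    this._ctx = ctx;
    this._angle = 0;
  },
  update(dt) { this._angle += dt; },
  interact(event) { /* react to pointer/VR input forwarded by the host */ },
  teardown() { this._ctx = null; },
};

// Main scene: set each piece up, drive updates from the render loop,
// and tear everything down when the space unloads.
function runMainScene(pieces, frames, dt) {
  const ctx = { region: { x: 0, y: 0, z: 0 }, camera: null, renderer: null };
  for (const p of pieces) p.setup(ctx);
  for (let f = 0; f < frames; f++) {
    for (const p of pieces) p.update(dt);
  }
  for (const p of pieces) p.teardown();
}
```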

I am doing some quick-and-dirty prototyping on what might be feasible in a metaverse scenario, so the subject very much has my interest.

But maybe the discussion should first be about what the actual use case is, before diving too quickly into technical details? So far, metaverses like those from Facebook and Microsoft are hosted by their respective companies. That is, for example, the reason you can use your own avatar in different locations (different ‘3D websites’) on one platform, but can’t use the same avatar on the other platform.

So in my view a Metaverse should be public domain and browser-based, like the 2D web, making it possible to develop and host your own 3D website. Different from a 2D site, many 3D sites will allow, and therefore should support, bringing your own avatar and interacting with other people on that site.

The next step might be to allow your 3D website to be presented within another 3D environment. You can look at this parent environment as a search engine, or perhaps a digital twin (think, for example, of a shopping center where each shop has its own site added to the virtual twin). Both for ‘bring your own avatar’ and for the ‘digital twin’ you would need the external script loading discussed here.


Cool! Do you have a repo set up on GitHub? It would be great to start looking at what’s possible. I’m thinking about how web3 is an abstracted protocol that runs on top of HTTP, giving public access to a virtual “directory” (wallet) that belongs to any browser user. That user may have modules that can plug directly into your scene, depending on the module properties, in a node-based sense: they could be post-processing modules, animation modules, GLSL shader imports; the scope is broad. In essence it would let a user view their 3D environments on a platform that delivers a core THREE.js scene setup, e.g. a scene, a camera, a renderer, a common scale. I guess it would save users having to set up these core boilerplates again and again for each project.

Great news that you want to work on it, @awonnink! In case you or anyone else wants to talk about this on Discord, add me: DaniGA#9856. Also, as @forerunrun says, I’m willing to help if you share the repository, or with anything else.

@forerunrun I think it’s impossible today to predict in detail how the blockchain part will work for metaverses. Ethereum (the main blockchain for NFTs) is going to change this year (hopefully) with the launch of Ethereum 2, and the common token standards ERC-721 and ERC-1155 are too simple to build the whole NFT industry on top of; they make it too complex to develop advanced things and very difficult for others to adopt your implementation as a standard. These standards are based on the idea of leaving the metadata elsewhere, with contracts having to know how to interpret that JSON (meaning you can host your metadata on centralized services such as Drive or Dropbox, so it’s not the most decentralized foundation for a whole industry). There are better alternatives within the EVM (I take the Ethereum Virtual Machine as a reference because all the other blockchains want to be interoperable with it), but this issue is on standby since ETH developers are focused on the merge, sharding, rollups… (ETH2). Ethereum’s plan is to stop using Ethereum 1 (the current chain) because it will become increasingly expensive; the next version will be layer-2-centric. It is reasonable to think that after that there will be a new wave of protocols/standards. For example, it is possible that we will not connect to web3 sites with a wallet but with a blockchain profile, where we have our assets in several wallets and we grant permissions to external smart contracts about what to do with those assets (ERC-725, for instance, is fucking epic). So, I mean, it is too early for the web3 part of metaverses to be worth focusing on, IMO; it is very likely that anything built there will quickly become obsolete. This does not detract from the fact that the first implementation of NFT avatars will use ERC-721 (each company is likely to release its own version first and eventually standardize), but I don’t think that is their long-term future.

Anyway, one can use web3.js or ethers.js together with MetaMask to ask the person whether to connect their wallet to the site to see their NFTs. Then, with that tokenID, you can query the tokenURI where the metadata is stored (name, description, image…) and do whatever you want with it. There are several NFT galleries on the 2D web that, instead of an image, allow you to put an HTML/JS piece (in a sandboxed iframe, without external calls), which is what you would use for three.js-based NFTs, I think.
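As a small, hedged illustration of the tokenURI step: many NFT metadata URIs use the `ipfs://` scheme, which browsers won’t fetch directly, so viewers commonly rewrite them to an HTTP gateway before fetching the JSON. The gateway host below is just one common public choice, not the only option:

```javascript
// Rewrite an ipfs:// token URI to an HTTP gateway URL; pass through
// anything that is already fetchable (http, https, data:, ...).
function resolveTokenURI(uri, gateway = 'https://ipfs.io/ipfs/') {
  if (uri.startsWith('ipfs://')) {
    // Some URIs look like ipfs://ipfs/<CID>; strip the redundant prefix.
    return gateway + uri.slice('ipfs://'.length).replace(/^ipfs\//, '');
  }
  return uri;
}

// Usage (the actual network call is left to the caller):
//   const res = await fetch(resolveTokenURI(tokenURI));
//   const meta = await res.json(); // { name, description, image, ... }
```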

In fact, this thread comes from an exploration I started a few months ago into procedural/interactive/generative NFTs. However, even for 2D artwork, galleries like oncyber.io or spatial.io do not yet run these HTML-based NFTs; their limit is running GIFs or videos for the moment. So I started talking to their developers to try to move things along, and the answer was always “yes, we want to”, but I didn’t get the impression they saw it as a priority at all. Still, they know generative art, so it’s a matter of time. Eventually I got tired of waiting and started looking for the solution on my own, and my conclusion was that I need help, and here I am!

I would focus on personal/branded webspaces: like your webpage, but as a 3D space (where you can walk virtually). I want to learn A-Frame for this. And in there, to create a way to include 3D scenes. The target audience of NFT galleries is not programmers but ordinary people. Developers will find that they cannot customize the scenery, and that to avoid having something “default” you have to pay too much money for a scenery NFT to customize your space. Then developers/creative coders will start creating their own webspaces (mainly as a gallery/portfolio using web3 commerce/tokenomics), and this will end up with influencers and brands having their custom versions as well. Once webspaces are here, the next step would be to make them multiplayer, so each viewer is in the same session at the same time (a web-space-time?). Then you will have your personal metaverse. And yes, there’s a huge business opportunity for developers here.

At that point it makes sense to talk seriously about how to standardize avatars. But at the moment there’s not even much avatar-ready hardware out there (with facial gesture/body recognition), although Facebook is supposed to release this this year with their Project Cambria. The point is that until that hardware is widespread (at least in the early-adopter market), avatars are not going to be relevant. Once you can see each other’s laughter in VR with some kind of realism, NFT avatars are going to explode in popularity. This will be a fascinating year for anyone interested in these topics :slight_smile: