Since this question is not directly related to three.js, you might have more success asking it on Stack Overflow.
I figured that of all the groups of WebGL programmers on the web, someone in the Three.js community would be most likely to know.
Plus I want to render stuff in Three, so there will be intricacies related to that, but maybe those would be separate questions. On the other hand, maybe some people here would be interested in using this in their Three projects?
Okay, sounds valid
The idea of an augmented reality API is to make it possible to spatially register virtual objects in the real world. I don’t think you can emulate this with a simple video stream and some JS code, at least not if you want to work without markers, on arbitrary surfaces.
There is the WebXR spec that is available in Chrome Canary and should be coming soon to Chrome on mobile.
It doesn’t quite work the same as ARCore/ARKit: instead of detecting planes, you use the Hit Test API to raycast against the world for your anchors.
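Roughly, the flow looks like the sketch below. This is hedged: the spec is still in flux, so feature and method names may change from what Canary ships today, and `placeReticle` is just a hypothetical helper standing in for your own Three.js scene update.

```javascript
// Sketch of the WebXR hit-test flow as it exists in Chrome Canary at the
// time of writing; the API surface may change while the spec is in flux.
// placeReticle() is a hypothetical app-side helper, not part of WebXR.
async function startAR() {
  // Request an AR session with hit testing enabled.
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
  });

  // The viewer space is used to cast from the device's point of view;
  // the local space is used to express returned poses in world coordinates.
  const viewerSpace = await session.requestReferenceSpace('viewer');
  const localSpace = await session.requestReferenceSpace('local');
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time, frame) {
    // Each frame, ask what the forward ray hit in the real environment.
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      const pose = hits[0].getPose(localSpace);
      // pose.transform.position is where a reticle or anchor could go.
      placeReticle(pose.transform.position);
    }
    session.requestAnimationFrame(onFrame);
  });
}
```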
Since it’s in Canary at the moment, it can potentially break at any update, so I usually disable auto-updates once I get a working version. Also, it’s unlikely to be supported on iOS, because they are pushing for AR Quick Look instead.
Why not? Would it be too slow in JS?
Interesting! So if I wanted to detect a surface, I’d have to do a raycast sweep across the whole view and generate planes from those points? (That seems wasteful if the underlying engine already has that info.)
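Roughly what I have in mind, as a sketch: sweep hit tests across the view, then estimate a plane from the returned points. The points here are plain `{x, y, z}` objects standing in for hit-test poses, and the normal estimation uses Newell’s method, which works for roughly planar point loops; this is just an assumption about how one might do it, not how the browser does.

```javascript
// Estimate a plane (centroid + unit normal) from a loop of sampled points.
// In practice the points would come from XRHitTestResult poses; here they
// are plain {x, y, z} objects for illustration.
function estimatePlane(points) {
  // Centroid: average of all sampled points.
  const centroid = { x: 0, y: 0, z: 0 };
  for (const p of points) {
    centroid.x += p.x;
    centroid.y += p.y;
    centroid.z += p.z;
  }
  centroid.x /= points.length;
  centroid.y /= points.length;
  centroid.z /= points.length;

  // Newell's method: accumulate a normal over consecutive point pairs.
  const n = { x: 0, y: 0, z: 0 };
  for (let i = 0; i < points.length; i++) {
    const a = points[i];
    const b = points[(i + 1) % points.length];
    n.x += (a.y - b.y) * (a.z + b.z);
    n.y += (a.z - b.z) * (a.x + b.x);
    n.z += (a.x - b.x) * (a.y + b.y);
  }
  const len = Math.hypot(n.x, n.y, n.z);
  return { centroid, normal: { x: n.x / len, y: n.y / len, z: n.z / len } };
}
```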
I don’t know the exact motivation for using hit tests rather than planes. It works more like it detects a mesh of the environment and casts against that. I believe they don’t give you direct access to the mesh for security reasons, which may also limit the number of hit tests you can perform.
The spec/system is still being worked on, so this has the potential to change over time.
What use case do you specifically want planes for? The hit test works well for placing items in a room but planes would be better if you were substituting carpets/tiles/paint etc…
It would defeat the purpose if we can’t reliably render things within the boundaries of a surface (e.g. water falling off the edge of a table).
I am looking to use web tech (Three.js) to create a demo over the next couple of weeks that reliably detects surfaces and places objects. From the videos I’ve seen, it looks like that’s possible, so I’ll tinker with it.
So far I’m having no luck getting any WebXR demos working in Chrome for Android. I’ve enabled every flag that has “WebXR” in it. Maybe there have been API updates and the demos are outdated, but I haven’t checked the console for errors yet.
Any thoughts on WebXRForArCore? Looks like it isn’t updated much lately.
It’s not ideal, but you can get the user to point out the boundaries. As far as I’m aware, you can’t just detect a surface yet.
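Once the user has tapped out boundary points on a surface (via hit tests), a plain 2D point-in-polygon test can keep effects inside that boundary. This is a sketch under assumptions: the surface is horizontal, so points are reduced to `{x, z}` pairs on the surface plane; the standard ray-casting test does the containment check.

```javascript
// Ray-casting point-in-polygon test on the surface plane.
// `boundary` is the loop of user-tapped points, `point` the spot to test;
// both are {x, z} pairs, assuming a horizontal surface.
function insideBoundary(point, boundary) {
  let inside = false;
  for (let i = 0, j = boundary.length - 1; i < boundary.length; j = i++) {
    const a = boundary[i];
    const b = boundary[j];
    // Toggle `inside` each time a horizontal ray from the point crosses an edge.
    const crosses =
      (a.z > point.z) !== (b.z > point.z) &&
      point.x < ((b.x - a.x) * (point.z - a.z)) / (b.z - a.z) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

In the table example above, you would skip spawning water particles wherever `insideBoundary` returns false.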
I use Chrome Canary with the WebXR Hit Test flags turned on and WebVR turned off: https://play.google.com/store/apps/details?id=com.chrome.canary&hl=en_AU
Regular Chrome for Android is still a little way off.
WebXRForArCore was one of the first implementations, and I don’t think it’s been maintained as actively.
I think so. All the stuff mentioned in the ARCore documentation, like motion tracking, environmental understanding, and light estimation, is not something you want to implement in JS for performance reasons.
3D reconstruction from real-time stereoscopic images using GPU. Hmmmm, maybe it can be adapted for WebGL.
@calrk Thanks, Chrome Canary worked!