Looking into your recommendation: the pipeline still requires Xcode, but then again I'm on a Mac; I may need to test this on Linux. When I tried to install the Python tools, I ran into having to install PySide, which requires a certain configuration to get working. Will keep attempting, still learning!
Getting warmer, things are beginning to take shape; found the right README file:
After many failed attempts with pip and pip3, I keep running into an `import Image` error during the build. I've read through related problems on Stack Overflow; it's either something with my environment, or I need to hear from others who have gotten this Python script working so we can discuss what went wrong. (That error usually means the Pillow/PIL package is missing from the Python environment the script runs in.) Possible revisit; for now Xcode is a viable solution!
Hey, is there anything new here? I'm also looking for a way to convert glTF models to USDZ, ideally on a webserver. I have an app that can export a dynamically generated model, which should then be converted to USDZ and sent back as a link.
I found a Python tool to convert models via the terminal, but is it possible to run that Python script on a webserver somehow?
You would need your own webserver with a storage location for your glTF files; then, from a PHP script or some other trigger, run the conversion through a Python/PHP interface and serve the resulting USDZ file over HTTP.
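Something like this minimal sketch, swapping PHP for a Node/Express server as one possible setup. It assumes Apple's usdzconvert script is installed and on the PATH; the route, port, and ./models directory are invented for illustration:

```ts
import express from 'express';
import { execFile } from 'node:child_process';
import path from 'node:path';

const app = express();
const MODELS = path.resolve('./models');

// POST /convert/chair converts models/chair.gltf to models/chair.usdz.
// No input validation here; a real server should sanitize the name.
app.post('/convert/:name', (req, res) => {
  const input = path.join(MODELS, `${req.params.name}.gltf`);
  const output = path.join(MODELS, `${req.params.name}.usdz`);

  execFile('usdzconvert', [input, output], (err) => {
    if (err) return res.status(500).send(String(err));
    // Hand back a link the client can open in AR Quick Look.
    res.json({ url: `/models/${req.params.name}.usdz` });
  });
});

// Serve the converted files over plain HTTP.
app.use('/models', express.static(MODELS));
app.listen(3000);
```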
Here’s a newer glTF to USDZ converter, which I believe works without Xcode. In the Background section the authors mention some of the differences between it and Apple’s conversion tools.
Jumping on board, I'm interested in solutions for this too.
I did have the idea of running the usdzconvert Python script as a child process on a Node server, but I haven't tested it. What I'm curious about: say we had a scene with, for example, a chair, and the chair's legs were chrome metal using Three's standard metalness, reflectivity, and envMap (the usual approach). I'd be interested to know how someone would generate a PNG of this, as I understand the USD format takes images as textures only.
My end goal here is to load up a model, for instance a chair, and have the user choose, say, leather or denim for the chair and wood or chrome for the legs, then output this as a USDZ to view in AR.
It'd be amazing to be able to take all materials in a scene and bake them into one single image (toDataURL), but I'm sure that's not possible?
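As far as I know three.js has no built-in material baker, so a true UV-space bake would take custom work; the closest quick option is capturing the rendered frame itself as a PNG. A rough sketch of that capture (the scene setup is elided; preserveDrawingBuffer plus toDataURL are the key bits):

```ts
import * as THREE from 'three';

// Capture the rendered view as a PNG. Note: this is a screenshot of the
// frame, not a UV-space bake, so the chrome reflections are only correct
// from this one camera angle.
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  preserveDrawingBuffer: true, // required so toDataURL can read the last frame
});
renderer.setSize(1024, 1024);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 100);
camera.position.set(0, 1, 3);

// ...add the chair model, lights, and environment map here...

renderer.render(scene, camera);
const png = renderer.domElement.toDataURL('image/png'); // base64-encoded PNG
```

For the USDZ route, the more usual approach is to author each option (leather, denim, wood, chrome) as its own set of PBR texture maps up front and just swap them, rather than trying to bake at runtime.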
Edit: I have looked into AR.js, and I don't know how anyone finds it useful; in all types of lighting, the jitter of the object is enough to make someone nauseous.
AR.js is an interesting project. As I played with it, I was thinking it's mainly used for prototyping ideas, though you could use it independently of an app store, unless I'm missing the intent. AR.js is very useful, though, for learning what's possible with JS.
Thanks for sharing your repo, will play with this more later. So intrigued!
I don't think AR.js uses any WebXR APIs; it's marker-based. Something built on the new WebXR APIs would presumably have less jitter, among other advantages.
Thanks! A year ago this couldn't be done without owning a Mac and having admin rights on it. Today it seems much easier, with more tools from other people instead of just the Xcode one from Apple.