Within our European project RePAIR (https://www.repairproject.eu/), we are looking for a developer to help us build a user interface. The project focuses on reassembling broken 3D artifacts: we have high-resolution 3D models and are working on an algorithm for automating the fresco reconstruction, and we want a user interface that lets people interact with the system. Your work will be showcased at EXPO 2025.
For EXPO 2025, our aim is to develop an interface that communicates with a semi-automated virtual 3D fresco reconstruction system (meaning candidate placements of the pieces can be retrieved via API calls, and we manage them) and allows visualization of and interaction with 3D models of the fragments. The interface should be usable either via a web browser or through an immersive device (e.g., Meta Quest). You have total freedom of implementation, subject to the solution being portable across the two devices indicated.
The interface needs to be finished by the end of April, so we have two months (March and April) for development.
Please reach out if you are interested or have further questions.
What format are the models in, and do they have textures mapped to them?
When you say high-resolution models, can you be more specific, i.e., polygon counts and texture sizes?
When you say candidate solutions via API calls, does the API already exist, or does it need to be developed? By candidate solutions, do you mean you want a system that matches potential fragments and then allows the user to adjust the suggested solution, or something similar?
How are the models catalogued? Is there a database of them, and if so, what type of database are you using, where is it hosted, and what data does it contain for each model?
You mention Meta Quest headsets; do you require the same functionality as on the website?
Do you have a Meta Quest Store account?
I’m a creative technologist who has been working with multiple tech stacks for over 20 years.
With extensive experience in WebGL, Three.js, Babylon.js, and immersive technologies (WebVR, WebAR, and Metaverse development), I am confident in my ability to create a seamless and engaging experience for both web browsers and immersive devices like Meta Quest.
Looking forward to collaborating on this innovative project!
Hi everyone, I am very happy to see the amount of attention this got; the answers and your work are impressive. I will reply to individual messages one by one, but I wanted to clarify some details here for everyone:
The work contract is administered by the University of Venice. There is an open call: you will have to apply for the position, and we will hold a short interview with all of the candidates and pick (unfortunately only) one. The call is open until the 21st of February, and the interviews will take place on the 24th of February. This gives us time to review your work, and we will do all the interviews in one day. The budget is €6,000 gross (before taxes).
We have a ready-to-use dataset, available on Zenodo. We also have the models on our own disks, and we are free to choose whichever architecture allows for the best workflow. The models are high-resolution for archaeological fragments, but should not be too heavy overall: roughly 100k vertices and 200k faces per model, with a 6000x6000-pixel texture map. Downscaling for visualization is acceptable; we just need to preserve quality (in the end, the pictorial details are what matter). Details about the dataset have been published in an article here, and we have a very basic project page with a few links if you are curious.
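For what it's worth, loading a fragment and downscaling its texture on the fly could look like the sketch below (Three.js/TypeScript; the glTF format, the file path, and the 2048-pixel cap are just assumptions on my side, nothing here is fixed):

```typescript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// Downscale a texture's source image to at most `maxSize` pixels per side,
// e.g. 6000x6000 -> 2048x2048, trading texture memory for visual fidelity.
function downscaleTexture(texture: THREE.Texture, maxSize = 2048): void {
  const img = texture.image as HTMLImageElement | ImageBitmap;
  if (!img || Math.max(img.width, img.height) <= maxSize) return;
  const scale = maxSize / Math.max(img.width, img.height);
  const canvas = document.createElement('canvas');
  canvas.width = Math.round(img.width * scale);
  canvas.height = Math.round(img.height * scale);
  canvas.getContext('2d')!.drawImage(img, 0, 0, canvas.width, canvas.height);
  texture.image = canvas;       // swap in the smaller image
  texture.needsUpdate = true;   // re-upload to the GPU
}

new GLTFLoader().load('/fragments/fragment_001.glb', (gltf) => { // hypothetical path
  gltf.scene.traverse((obj) => {
    if (obj instanceof THREE.Mesh) {
      const mat = obj.material as THREE.MeshStandardMaterial;
      if (mat.map) downscaleTexture(mat.map);
    }
  });
  scene.add(gltf.scene);
});
```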
Regarding communication with our semi-automatic system: we are developing a puzzle-solving algorithm. At the moment we are working on small groups of pieces, say 10-20. The communication between the interface and our system could be a simple exchange of messages: the interface sends a list of fragment ids, and our system returns the positions and orientations of those fragments, so it should be relatively straightforward. The goal is an interface that shows 10-20 fragments, lets the user play with and manipulate them, and at the same time (or via some trigger) makes use of the solutions proposed by our system.
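To make the exchange concrete, something like the following would be enough on the interface side (a TypeScript sketch; the endpoint name, JSON shape, and quaternion convention are illustrative, not a spec of our system):

```typescript
import * as THREE from 'three';

// A candidate solution: one pose per fragment id.
// Quaternion convention here is (x, y, z, w), matching THREE.Quaternion.set.
interface FragmentPose {
  id: string;
  position: [number, number, number];
  orientation: [number, number, number, number];
}

// Ask the solver for a candidate arrangement of the given fragments.
// '/api/solve' is a placeholder endpoint name.
async function requestSolution(fragmentIds: string[]): Promise<FragmentPose[]> {
  const res = await fetch('/api/solve', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ids: fragmentIds }),
  });
  if (!res.ok) throw new Error(`Solver returned ${res.status}`);
  return res.json();
}

// Apply the proposed poses to fragment meshes already in the scene.
function applySolution(poses: FragmentPose[], meshes: Map<string, THREE.Object3D>): void {
  for (const pose of poses) {
    const mesh = meshes.get(pose.id);
    if (!mesh) continue;
    mesh.position.set(...pose.position);
    mesh.quaternion.set(...pose.orientation);
  }
}
```

In practice, requestSolution could run behind a "suggest" button or some other trigger, and the returned poses could be animated rather than applied instantly.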
About the Meta/web part: we would like our guests (people coming to the Expo) to actually be able to use the interface via a web browser or via a head-mounted display (the Meta Quest is the one we have right now). Ideally it should be the same interface; slight differences are not an issue, as long as it is a nice experience.
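As a pointer, with WebXR a single Three.js scene can serve both targets from the same codebase; a minimal setup sketch (the details are of course up to you):

```typescript
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // the same renderer serves both desktop and headset
document.body.appendChild(renderer.domElement);
// Shows an "ENTER VR" button only when an XR-capable device (e.g. a Quest) is present;
// in a plain browser the scene simply renders to the page.
document.body.appendChild(VRButton.createButton(renderer));

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.01, 100,
);

// setAnimationLoop (rather than requestAnimationFrame) is required for WebXR sessions.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```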
Apart from this, I will try to answer everyone who replied or asked for more information, and I hope to see you at the interview stage.
Ok, as specified before, the work can be done remotely and the interview will be held online. There is no need to be physically in Venice or anywhere in Italy, so I am not sure about the eligibility criteria. The only related constraint is that you will be paid by an Italian university. If that is not possible for you, sorry, I did not mean to make you lose time.
But thanks anyway.