I am exploring the need for a specialised three.js code generator that can help early-level developers, or anyone new to coding, start writing three.js code. I understand what Llama3, Claude, and the like generate, but I am looking for something more specialised, created by fine-tuning the Llama3 model with a tree-based approach for much more accurate, fast, and shippable code generation. I would love to hear any suggestions on what it must incorporate that you feel is missing in other LLMs. Any examples of prompts that didn’t yield good results would be extremely helpful, as would any features you would like to see generated for use in your workflow.
There is one here: ChatGPT - Three.js Mentor, although I’m not sure how specialised it is. I have to admit, I have never used AI with respect to Three.js. BTW, on several occasions someone posts AI-generated Three.js code here and asks for help, because it is only almost working.
I’m curious whether it is possible (and worthwhile!) to integrate AI into this forum.
One of the issues with AI and early-level developers is that a lot of Three.js info on the web contains code for older and already incompatible Three.js releases. How could AI be tuned to avoid this?
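One practical mitigation is to filter the training corpus for API calls that were removed in recent releases (for example, `THREE.Geometry` was removed from core around r125 in favour of `THREE.BufferGeometry`). A minimal sketch of such a filter follows; the list of removed names is illustrative, not exhaustive, and a plain substring check is only a first approximation:

```javascript
// Sketch: flag training snippets that use APIs removed from recent
// three.js releases. The list below is illustrative, not exhaustive.
const REMOVED_APIS = [
  'THREE.Geometry', // removed from core (~r125); use THREE.BufferGeometry
  'THREE.Face3',    // removed alongside Geometry
];

// Returns the removed APIs referenced by a code snippet.
// Note: substring matching is crude; a real filter would parse the AST.
function findDeprecatedUsage(code) {
  return REMOVED_APIS.filter((api) => code.includes(api));
}

console.log(findDeprecatedUsage('const geo = new THREE.Geometry();'));
```

Snippets that trigger any match could be dropped from the dataset, or updated to the current API before training.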
Thanks so much for replying and sharing your thoughts! I am trying to avoid incompatible code by training the model only on code that works with the latest releases. Debugging three.js code is something to consider too, but it would take a while to achieve really good debugging assistance.
Meta’s Llama3 is open source and can be fine-tuned on your home PC without the need for a data center. The main problem is the dataset: to train your model, you need gigabytes of accurate data in the form […{ instruction, input, output }]. AFAIK, there is none specifically built for three.js, so the main challenge is to build one.
100% agree! That is the intention. Maybe there is a way to incentivise developers and artists to contribute to the dataset. Initially, it should do better than boilerplate generation but stop short of full shaders.
I think the biggest problem with this is a language barrier: the user knows what problem they are trying to solve, but lacks the knowledge to put it into the right wording.
Writing a good prompt requires you to use the right words to describe your problem. Since you’re specifically targeting “new developers”, it’s safe to assume they aren’t skilled in the correct lingo. This quickly leads to the AI blatantly generating garbage that the user doesn’t understand.
Thanks for replying. I am collecting code from people willing to provide snippets, and I definitely want to cover shaders to some extent. It would still require developer oversight to make it work from the get-go. Would you be open to testing it in its current state and providing feedback? That would help greatly.