Maybe slightly off-topic for this forum, but since most users would probably ask an AI for help with code, like I did, I thought it could be posted here.
This is being posted in the Discussion category, so feel free to share your experiences, but I'm not really looking to start a worldwide debate about this (that's something more qualified people can do).
Is this limited to chatbots only, or are there more constructive approaches to using AI that don't make faulty suggestions, in the field of medicine or similar, where it is never really able to provide incorrect information?
I had an AI suggest using a certain function from a 3rd party library, and after I looked it up, that function just didn't exist in that particular library.
Do we really have to sift through the information provided by any AI in order to make good use of it? Shouldn't the bot present impartial and constructive suggestions?
I definitely wouldn't trust it to ever follow those unofficial Three Laws of Robotics, but this is just my opinion.
Just so you don't get me wrong, I would suggest that everybody give AI a try and maybe put it to the test in any way you can think of.
I would not recommend it for three.js, at least.
The library has been in perpetual change for over a decade. AIs mix all the versions and outdated resources, with very low chances of giving you something you can rely on.
But the case is a bit special; standard JS is probably better supported.
WebGL / WebGPU are more "experimental", mixing techniques, and with scarcer resources.
I have gotten the impression that many "newbies" start out aiming too high, punching above their weight class, so to speak. They see something fancy, want to do that "too", and ask ChatGPT or something similar. When they fail with AI, they eventually turn to the human intelligence in this forum.
The reason they fail is that they lack the background knowledge and the vocabulary required to ask the right questions. Plus, they aren't able to discern a garbage reply from a potential lead to the solution.
While I don’t deny that AI may be supportive to the advanced, I definitely discourage the use of AI for the beginner.
Coming to your initial question: "trust" is something which has its place in an intimate personal relationship, or maybe in religion, for the so inclined. What's required in 3D graphics, however, is pure knowledge, human intelligence and lots of perseverance.
Ironically, I wouldn't recommend it for beginners due to the same issues mentioned by @Oxyn and @vielzutun.ch, mainly mixing too many three.js versions. Also, if you don't phrase your questions correctly (build the right prompt), it can throw you off. You need to use the right terminology and jargon, which most beginners lack.
It’s decent for intermediate to senior devs since you’ll instinctively notice when it starts hallucinating or just pick and correct what you need. It’s also great for boilerplate. A prompt I use often: “With three.js, I need a scene with THREE.Points spread over a 100-radius sphere.”
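For reference, the core of the boilerplate that kind of prompt produces is just the point-distribution math. A minimal hand-written sketch (my own, not actual AI output; the point count is arbitrary, and the three.js wiring is shown only in comments):

```javascript
// Spread `count` points uniformly over a sphere of the given radius,
// packed into a flat array ready for a BufferGeometry position attribute.
function spherePointCloud(count, radius) {
  const positions = new Float32Array(count * 3);
  for (let i = 0; i < count; i++) {
    // Uniform distribution on a sphere: random z in [-1, 1], random azimuth.
    const z = Math.random() * 2 - 1;
    const theta = Math.random() * Math.PI * 2;
    const r = Math.sqrt(1 - z * z);
    positions[i * 3] = radius * r * Math.cos(theta);
    positions[i * 3 + 1] = radius * r * Math.sin(theta);
    positions[i * 3 + 2] = radius * z;
  }
  return positions;
}

// In three.js this would typically be wired up roughly as:
//   const geometry = new THREE.BufferGeometry();
//   geometry.setAttribute('position', new THREE.BufferAttribute(spherePointCloud(1000, 100), 3));
//   scene.add(new THREE.Points(geometry, new THREE.PointsMaterial({ size: 2 })));
```

The naive alternative (three uniform random coordinates) would fill a cube, not a sphere surface, which is exactly the kind of subtle mistake worth checking for in generated boilerplate.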
Using it with VS Code Copilot feels like magic (when it doesn’t hallucinate). It pulls code straight from your mind, sometimes even multiple lines at a time.
I still don't know if I love or hate it, but one thing's certain: it will replace all of us sooner or later.
I must say that I am glad to see all your down-to-earth replies, mainly focused on three.js library usage.
@Fennec what you call hallucinating, some might interpret as pathological lying, and the description of the latter on Wikipedia seems to support that. Definitely one human characteristic that they have nailed in adding to AI.
@vielzutun.ch the word trust was used in the sense that AI might eventually start controlling certain aspects of our lives, and that is when we might need to determine whether to trust it or not. Recently, at least a couple of top diplomats suggested that AI should never be allowed to control nuclear weapons, and if their fears were not based on what is being discussed here, then I am not sure what they were based on.
Maybe we should just keep this discussion archived for some future use (give it some time to mature).
Frankly: I'm not very confident that a collective "we" worth mentioning exists at all. And even less confident that such a "we" gets to have a say in such matters.
As far as I’m concerned, I’ll never trust a system I don’t understand. I’m aware that my scepticism may well put me in a minority position.
I see more and more contributions to this forum that are clearly AI-generated responses.
The quality varies wildly, but I try to keep in mind that some folks are coming from other languages and cultures, and are just using gpts to help them engage with this forum.
Just for those who would like to see one possible representation of AI, check the video below.
I guess it was meant to be beautiful, but what does not kill it just makes it stronger. The recent news does show that AI has managed to replicate itself.
For those who have not seen the whole movie yet, try to see it and focus on what Ultron can do in info space instead of focusing on the special effects.
In other words, when AI frees up users’ cognitive resources, they typically don’t use their extra time and brain power to problem-solve or create. Rather, they tune out by watching Netflix or perusing social media — content served up by AI algorithms.
@GitHubDragonFly, please don't tempt me, my conspiracy theories have nothing to do with Syfy. If I even drop hints, I guarantee I'll get banned from this forum. I'm slowly reeducating my private, foreign and local, authorities' agents on other accounts. Three.js isn't part of the program.
I just want to raise concerns about privacy. You shouldn’t trust these AIs with your data. Use VMs or isolate your IDE and code directories with limited access. Setting the right chown, chmod, and chgrp permissions is a good start. Adding a firewall config is also a good practice.
If you’re using Windows, you’re not concerned. According to their privacy policy, you’re perfectly fine.
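As a concrete starting point for that kind of lockdown on Linux/macOS (purely illustrative; the `aiwork` directory name and the `devs` group are made up for the example):

```shell
# Keep everything the AI-enabled editor may touch in one dedicated directory.
mkdir -p "$HOME/aiwork"
chmod 700 "$HOME/aiwork"       # owner-only: no group or world access
# To share with a trusted group instead (group name is hypothetical):
#   chgrp devs "$HOME/aiwork" && chmod 750 "$HOME/aiwork"
```

Pointing the editor's workspace at that directory, rather than your home folder, at least bounds what a misbehaving extension can read.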
AI evolved a bit - not only in terms of generation quality but also how you’d use it to code in the current “meta” (skipping the privacy part @Fennec mentioned, I feel like trying to save privacy in a world of smartphones / Google / Meta / remote work is not a worthwhile effort.)
Is using ChatGPT useless for coding, esp. three.js? 100%, most of the time.
Is using Cursor / Copilot / Copilot Workspaces useless? No; 95% of the time they are super useful and make you code the boring parts 10x faster.
The main difference being that both Cursor and Copilot read through your code and suggest stuff based on it (kinda like how TypeScript suggests types by inference). That solves the biggest issue of ChatGPT suggesting random outdated versions of three.js: Copilot will follow the APIs, architecture, and code style you're using.
That’s exactly the problem. How much of your code can they access? Are they limited to the current directory, or can they scrape the entire system? The “I have nothing to hide” approach is fine, but this isn’t about your last vacation photos. It could be proprietary work you’re not planning or allowed to share.
Ideally, you’d run your own local AI server, but not everyone has four 4090s lying around. So, setting the right permissions or working with VMs isn’t a far-fetched idea.
Likely the same amount GitHub for Windows can. Or Windows 11 can. Or VS Code, which you gave permission to access the local filesystem, can.
I see your point, but I don't mean the "nothing to hide" approach; imo relying on privacy policies feels a bit naive too, and stalling progress / performance because of too much worrying is just counterproductive. Neither OpenAI nor China likely cares about someone's WebGL game or side project; both have the funds to make their own. And if that side project is on GitHub, Microsoft and their subsidiaries (ex. OpenAI) already have access to it anyway.
There are more fun things to do than being paranoid about whether Copilot reads my SSH keys before sleep (and if there's something that's really meant to be hidden, like client secrets¹ / bank account passkeys / your fav monkey NFT / crypto-stuff, that's ofc worth being paranoid about, and it shouldn't be stored on a device connected to the actual Internet, outside of a VPN, at all.)
¹ - that's one case for which I have a separate non-AI device, if a client explicitly says they don't want AI to touch their code. But so far that has happened only once; all other clients not only permitted AI but even shared their own Copilot access.
I’m a big fan of ChatGPT, I used the Threejs Mentor today, and I asked it this question.
In GLSL shaders, the UV coordinates are often multiplied by 2 and then 1 is subtracted. This resizes the screen so that the UV range is between -1 and 1, instead of 0 to 1. What is the generally accepted terminology to describe this process in short terms?
I was happy with the answer.
The process of transforming UV coordinates from the range [0,1] to [-1,1] is commonly referred to as "normalizing" or "remapping" the UV coordinates to clip space or NDC (Normalized Device Coordinates) space. Some common terms used to describe this transformation include:
“Remapping UVs to NDC space”
“Converting UVs to clip space”
“Mapping UVs from [0,1] to [-1,1]”
“Shifting and scaling UVs to NDC”
“Transforming UVs to screen space” (though this can sometimes imply other transformations)
This transformation is often used in shaders for screen-space effects, fullscreen passes, and procedural textures. It helps align texture coordinates with coordinate systems commonly used in computer graphics.
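In GLSL the whole transformation is the one-liner `vec2 ndc = uv * 2.0 - 1.0;`. As a quick sanity check, here is the same remap written out in plain JavaScript:

```javascript
// Remap a UV coordinate from the [0, 1] range to the [-1, 1] (NDC-style)
// range, mirroring the common GLSL idiom `uv * 2.0 - 1.0`.
function uvToNdc(u, v) {
  return [u * 2 - 1, v * 2 - 1];
}

// Corner and center checks:
//   uvToNdc(0, 0)     -> [-1, -1]
//   uvToNdc(1, 1)     -> [ 1,  1]
//   uvToNdc(0.5, 0.5) -> [ 0,  0]
```

Scaling by 2 stretches [0, 1] to [0, 2], and subtracting 1 recenters it on the origin, which is why (0.5, 0.5) lands exactly at (0, 0).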
The Threejs Mentor GPT looks promising as a guide for beginners, given its use of somewhat knowledgeable terminology (I guess). Is the model fine-tuned on three.js resources?
Haven't tried it yet, but I've seen some OK results being generated with v0.
I completely agree with the above sentiments on privacy. I don't see a complete problem with open-source development on a VM… With correct and concise directives, AI can serve as an enhancement in this scenario, but of course it still takes understanding to oversee what the code is actually doing (like giving a task to a junior dev who knows how to use Google, search the correct terminology and find answers on Stack Overflow; it's highly unlikely the result is getting pushed to production without a peer review inspection)…
For closed-circuit / proprietary dev, however, I would err on the side of caution about having a bot potentially scrape entire hard drives' worth of data, especially if contractual non-disclosure paperwork is in place around a project. The best bet would be a locally hosted llama.cpp / ollama server on a VM through secure ports, with locally distilled models focusing on a provided subset of high-quality resources, and not to blindly post code directly from a repo but rather supply generalized, stripped-back, basic snippets that detach from the original source idea but give enough context to generate what is needed.
I think the usefulness of AI and agents really depends on how they can be used, "driven" and orchestrated as non-hindering tools by the user, while understanding that any outcome is completely unregulated and detached from the idea of accountable risk taking, consequences, responsibility, and sometimes "not knowing" an answer rather than spewing a hallucinated mess: behaviours that don't necessarily serve the best purpose in systems that need to use definitively logical and secure processes for absolute outcomes.
It’s definitely useful for some things, even if it’s to center a div
If you’ve been programming and teaching for more than 55 years, you’ve come across a lot of “new and groundbreaking” things.
Even an assembler was an ingenious thing (my first program Module Import / Usage - #15 by hofk ), if you had only looked at binary codes before. After some time of “3rd generation higher programming languages”, the really big thing was to come. It was called the 4th generation and we were promised that we would only have to provide tasks and general algorithmic descriptions. The rest would be done by the technology and programmers feared that they would no longer have any work to do. This is probably not quite the case, and although many new technologies make work easier, the complexity has increased massively compared to earlier times. We need “modern” programmers.
I use artificial intelligence where it saves me time and makes some things possible in the first place. Not for highly complex problems, but for specific tasks. Three.js doesn’t stand alone, but works in conjunction with other “things”.
2 examples:
Decades ago I once learned regular expressions and used them a bit. Now, after a long time, I needed more complex expressions again. It would have taken me a lot of time and effort to familiarize myself again. A task of seconds for the AI.
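As a sketch of what such an expression can look like (a made-up example, not the one from the actual project):

```javascript
// A moderately complex regular expression: capture major/minor/patch and an
// optional pre-release tag from a version string (illustrative, not full semver).
const VERSION = /(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?/;

const m = "three.js 0.160.1-beta.2".match(VERSION);
// m[1] = "0", m[2] = "160", m[3] = "1", m[4] = "beta.2"
```

Writing this from memory means re-deriving non-capturing groups, optional groups and character classes; asking an AI to produce and explain it is indeed a matter of seconds.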
I have a three.js project where I have to load a lot of (also larger) video and audio files. With only some videos it worked immediately; the audio files caused problems. I had my suspicions, but it would have taken a while to clarify them without AI. One question to the AI, with a program section attached, and the problem was solved immediately, in my notation and with hints. Who tests with many large videos? So I asked what problems are known. I received a detailed explanation and can now continue in a targeted manner.
Apart from analog computing, quantum computing and artificial neural networks, the understanding of "0 and 1" is the basis of (digital) programming, and if you use AI sensibly there, that's very OK.
P.S. I wanted to do it 50 years ago, but due to time constraints I only made it in 2014 for Windows ( cpusimulation.hofk.de ) and in 2021 for three.js (CPU simulation 8Bit), without AI: a custom processor.
I am kind of trying not to get any of you started with any subject but just to brighten up your day the first 11 seconds of the following video are for all of you.
Don't even get me started on SkyNet, introduced 40 years ago, and what it could do.
In my opinion, once AI reaches a certain level of sophistication, employers might just decide to pay for the service instead of keeping the developers.
The human race should also consider keeping AI away from the power grid, so it could be shut down once it becomes dangerous.