you can’t run html without a server, you never could, unless you hack the browser to allow local file urls. in the past you used python http servers and such. if you didn’t, then all you used html for was plain text. having opinions about stuff is fine, but i don’t think that would qualify as informed.
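for reference, the python route is a single command run from the project folder, and any simple static server does the same job; a minimal example:

```
# serves the current folder at http://localhost:8000
python -m http.server 8000
```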
writing that whole chapter cost you more time than typing npm create vite and seeing for yourself where the benefits are.
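for the record, that experiment is only a handful of commands (the project name below is just a placeholder; the scaffolder will ask which framework and variant you want, and plain Vanilla JavaScript is enough for three.js):

```
npm create vite@latest my-three-app
cd my-three-app
npm install three
npm run dev
```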
Those were all examples from the book, and I bought the book in order to see how to make those samples. And they were only provided the npm way. So yeah, without the book I would have just kept going without npm, no problem. The CDN way.
I did run this stuff. And seeing no benefits, just a bunch of added complexity and a change in how I would have to work from now on if I were to program like the book’s examples, I became depressed.
Maybe I’m not informed. I don’t know. I only know what I know.
It’s a big universe, and web development especially, with its servers and Three.js, has a level of complexity that is way beyond what we have evolved for.
because browsers became paranoid about reading local files (which is needed to load 3d models or textures). firefox was the last browser that allowed it, but that ended when somebody abused it to read attachments saved by a messenger app on android, by sending an html attachment. so now you need to run a server
This is the most confusing thread I have ever read on web development. How are the assets loaded without a server? What does the browser make requests to? Why didn’t you buy a book that has different examples?
If it is any consolation, I have not yet gotten to the point of using NPM.
I write my programs using an html editor.
I can still run some locally - ones that don’t require me to load objects or textures.
Where that is not possible, I simply upload the program to my server and then run it from there.
I have not yet run into an example where something more is needed.
But I am probably missing out on some helpful features like error checking (beyond what Chrome provides), etc.
Can we all agree at least on one thing? His nginx server has absolutely nothing to do with npm. It’s as if we were discussing tug boats and types of smoked tea.
I don’t use npm either.
But it’s probably more convenient for threeJS maintainers and for users of large projects.
The whole repo has become quite large, so the use of external tools to remove some hassle is understandable. All these folks are doing it for free; we should not complain about this, imo.
As long as I can hack my way into it and revert it back to a module-only, no-compiler structure, I’m totally fine spending a few minutes to customize it. Could be waay worse (like imposing TypeScript and other bloated frameworks).
A couple of hours ago I posted a question on this forum, asking whether it is possible to develop three.js displays while retaining my existing workflow with XAMPP as a localhost webserver.
What you describe is what I am looking for: a self-sufficient development environment, not requiring an internet connection.
when you install three.js from npm, it downloads these files. you can get the very same files directly from github. once you have them on your computer, you can disconnect from the internet.
Disclaimer: read this at your own risk. Many developers would say this approach is outdated and causes more problems than solutions. I tend to disagree, as in some specific cases I find this approach more suitable to my (subjective!) needs.
Here is what I do:
Case 1 (when a local webserver is not needed)
- I download the whole three.js package and keep it in a separate folder. It is used only as a repository folder: if I need some file and I have no internet, I can still get it.
- In my project folder I copy only three.min.js.
- If I need some addon (e.g. OrbitControls.js), I take the file from my repository folder, demodule it and put it in the project folder (there is a free tool for demoduling, but I rarely find myself using more than 4-5 addons).
- In my project HTML file I use normal <script> tags to load three.min.js and OrbitControls.js (see the sketch after this list).
- Deployment is just uploading the project folder somewhere online.
- Pros: you do not need internet to develop things.
- Cons: no local modules/textures/models; no tree shaking; the size will be bigger, as the whole three.js is included; difficult to use external libraries that have only a module version; if you want to update the three.js version, you have to copy the new files yourself.
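For illustration, a minimal sketch of what such a Case 1 page might look like, assuming three.min.js and a demoduled OrbitControls.js (one that attaches itself to the global THREE object) sit next to the HTML file; the scene itself is just an example:

```html
<script src="three.min.js"></script>
<script src="OrbitControls.js"></script>
<script>
  // everything lives on the global THREE object, no modules and no server needed
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
  camera.position.set(0, 0, 3);

  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(innerWidth, innerHeight);
  document.body.appendChild(renderer.domElement);

  const controls = new THREE.OrbitControls(camera, renderer.domElement);

  // MeshNormalMaterial needs no texture, so no file requests are made
  const mesh = new THREE.Mesh(new THREE.SphereGeometry(1, 32, 16), new THREE.MeshNormalMaterial());
  scene.add(mesh);

  renderer.setAnimationLoop(() => {
    controls.update();
    renderer.render(scene, camera);
  });
</script>
```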
Case 2 (when a local webserver is needed)
- I download the whole three.js package and keep it in a separate folder. Note, I do not do this for every project; I do it only once (and I redo it when there is a new three.js release that I need to use).
- In my project folder I copy only three.module.js.
- If I need some addon (e.g. OrbitControls.js), I take the file from my repository folder and put it in the project folder (no need to demodule it).
- In my project HTML file I use <script type="module"> tags and import in my JS code (see the sketch after this list).
- Deployment is just uploading the project folder somewhere online.
- Pros: you do not need internet to develop things, and you have modules, textures and 3D models.
- Cons: the size will be bigger, as the whole three.js is included; if you want to update the three.js version, you have to copy the new files yourself.
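And a minimal Case 2 sketch. Assumption: a reasonably recent three.js release, where the copied addons import the bare specifier 'three', hence the import map (with older releases you could point the imports directly at ./three.module.js); the texture filename is made up:

```html
<script type="importmap">
  { "imports": { "three": "./three.module.js" } }
</script>
<script type="module">
  import * as THREE from 'three';
  import { OrbitControls } from './OrbitControls.js'; // copied from the repository folder

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
  camera.position.set(0, 0, 3);

  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(innerWidth, innerHeight);
  document.body.appendChild(renderer.domElement);

  const controls = new OrbitControls(camera, renderer.domElement);

  // textures now load, because the local webserver answers the file requests
  const texture = new THREE.TextureLoader().load('uv_grid.jpg'); // hypothetical texture file
  const mesh = new THREE.Mesh(new THREE.SphereGeometry(1, 32, 16), new THREE.MeshBasicMaterial({ map: texture }));
  scene.add(mesh);

  renderer.setAnimationLoop(() => {
    controls.update();
    renderer.render(scene, camera);
  });
</script>
```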
Case 3 - you have a big project, you use a lot of external libraries, you want tree shaking, you want to rebuild three.js, you want to pack all files in a bundle, you want to contribute to the ever-growing world of JS libraries, etc.
It is better to switch to some modern web development workflow, because it will automate a lot of the tasks, save you a lot of time and significantly reduce the chance of messing things up.
Here is an analogy of what I do: if I want to send one greeting card, I write the greeting by hand. If I want to send 1000 greeting cards, I use a printer and print them. Of course, other people can do it differently; some will prefer to handwrite 1000 greetings, others will advise always using a printer, even for one greeting card.
Thank you for your overview of cases, this allows me to make an informed choice.
I have an additional question about deployment.
For comparison:
I use JSXGraph for 2D graphics. JSXGraph is a single .js file.
On my website I refer to each instance of a graphical applet as a ‘graphlet’. The multiple graphlets on a page require a single resource: the single JSXGraph file. That is, the browser has to download that file only once.
In the JavaScript, each graphlet instance is declared with an IIFE.
JSXGraph declares the object ‘JXG’ in the global namespace; all JSXGraph objects/properties are within the JXG namespace.
My 3D visualizations will contain only a few elements: an example would be a sphere (with meridians and latitude lines), and a dot moving over the surface of the sphere. There will be multiple graphlets on a page.
So for my use-case it is better if, for all the 3D graphlets on a page/website, the visitor’s browser has to download the required .js files only once. In other words, a larger upfront download that is fetched only once would appear to suit my use-case best.
I’m guessing the workflow/deployment as described in ‘case 2’ will work like that.
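To make it concrete, here is a rough, untested sketch of what I mean: several graphlets on one page, all sharing a single download of three.module.js (the container ids, sizes and the sphereGraphlet helper are made up; with modules, a plain function gives the same scoping an IIFE gives me in JSXGraph):

```html
<div id="graphlet-1" style="width:300px;height:300px"></div>
<div id="graphlet-2" style="width:300px;height:300px"></div>

<script type="importmap">
  { "imports": { "three": "./three.module.js" } }
</script>
<script type="module">
  import * as THREE from 'three'; // fetched once, shared by every graphlet on the page

  // one self-contained graphlet per container element
  function sphereGraphlet(containerId) {
    const container = document.getElementById(containerId);
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(50, container.clientWidth / container.clientHeight, 0.1, 50);
    camera.position.z = 3;

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(container.clientWidth, container.clientHeight);
    container.appendChild(renderer.domElement);

    // a wireframe sphere stands in for the meridians/latitude lines
    const sphere = new THREE.Mesh(
      new THREE.SphereGeometry(1, 24, 12),
      new THREE.MeshBasicMaterial({ wireframe: true })
    );
    scene.add(sphere);

    // a small dot moving over the surface of the sphere
    const dot = new THREE.Mesh(new THREE.SphereGeometry(0.05, 8, 8), new THREE.MeshBasicMaterial());
    scene.add(dot);

    renderer.setAnimationLoop((t) => {
      const a = t / 1000;
      dot.position.setFromSphericalCoords(1, Math.PI / 2 + 0.5 * Math.sin(a), a);
      renderer.render(scene, camera);
    });
  }

  sphereGraphlet('graphlet-1');
  sphereGraphlet('graphlet-2');
</script>
```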
The size of the things that I publish online is relatively small, and I have never been concerned with file sizes, download time, hosting quota, etc. My biggest criterion is how convenient the project is for me to develop and for the users to use.
In the past I’ve made some experiments, and the difference between one-monolith-file and a-cohort-of-many-smaller-files appeared to be insignificant at the scale of my projects. After all, the browser cache quickly eliminates all load-time differences. Thus, the effort spent deciding which one is better exceeds the effort of just using either of them.