If you have a stupid simple HTML+CSS+JS project that nevertheless uses a build tool for some reason, a contributor should be able to clone the repo, open the README, read the instructions on how to build it, and not encounter "first, install bun" (or whatever). In 90+% of cases, those instructions don't actually need to be anything more complicated than "First, open the build script in a browser tab. Now drag and drop the project folder onto the build tool." You can double up and make that build script your README if you want—call it README.html.
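(For the curious: a rough sketch of what that README.html could look like, assuming the "build" is something as simple as concatenating scripts. The Chromium-only getAsFileSystemHandle() API is used for brevity and only the top level of the dropped folder is walked; the names and the bundling logic are illustrative, not a real tool.)

    <!-- README.html: drop the project folder onto this page to "build" it -->
    <p>Drop the project folder anywhere on this page.</p>
    <script type="module">
      addEventListener("dragover", e => e.preventDefault());
      addEventListener("drop", async e => {
        e.preventDefault();
        // Chromium-only for brevity; webkitGetAsEntry() is the more portable route.
        const dir = await e.dataTransfer.items[0].getAsFileSystemHandle();
        const chunks = [];
        for await (const [name, handle] of dir.entries()) {
          if (handle.kind === "file" && name.endsWith(".js")) {
            chunks.push(await (await handle.getFile()).text());
          }
        }
        // The "build": concatenate and offer the result as a download.
        const blob = new Blob([chunks.join("\n")], { type: "text/javascript" });
        const a = Object.assign(document.createElement("a"),
          { href: URL.createObjectURL(blob), download: "bundle.js" });
        a.click();
      });
    </script>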
It's baffling how many folks decided to focus on JS development because they wanted to draw on their familiarity with (and desire for) developing for the browser, and out of recognition that being able to create browser-based apps and deploy to users without making them install anything is a huge boon... and then went on to make a bunch of shoddy tooling that, despite itself being nominally written in JS, still requires you to download a bunch of other stuff to run it and doesn't leverage the standard JS runtime that everyone _already_ has installed.
Nobody here is arguing against code reuse (pointing devices).
> Yeah, so they could have just answered my original question.
That's not a real question, just you trying to be a know-it-all and failing miserably.
Conflating dependency management with npm is your mistake, especially now with ESM. You can just do
import React from "https://esm.sh/react"
No NPM needed

No, I'm not. You just seem to think people are trying to be smart asses when they aren't. Don't try to quell curiosity with your cynicism. HN is a place for curiosity, let it be that way.
In the esm.sh case, esm.sh is a CDN that is a proxy to NPM (and a build tool).
The original commenter answered my questions. They use JS and TS with Bazel, which likely acts as a package manager for them, fetching packages from the npm registry - or the infra it uses does.
EDIT, since meiraleal edited their comment while I was replying, here is their original comment:
> conflating dependency management with npm is your mistake, especially now with ESM. You can just do
>
> import React from "https://esm.sh/react" No NPM needed
>
> What languages do you use that don't have a package manager?
>
> Yeah, so they could have just answered my original question.
>
> That's not a real question just you trying to be a smart ass, wrongly, and insisting on it.
No, esm.sh in this case is just an external source for a JS file that happens to mirror NPM because those are the JS files people want to download. It could be github.com, both, multiple sources, using import maps, or just local. There is no need for a central package manager to manage dependencies, nor a build step, in modern JS with ESM, and that's way better than installing thousands of npm packages on your local machine for a hello world project.
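For what it's worth, the no-registry setup being described can also be expressed with an import map, so application code keeps using bare specifiers while you decide per-dependency where the file actually comes from (the local ./vendor path here is a made-up example):

    <script type="importmap">
    {
      "imports": {
        "react": "https://esm.sh/react@18",
        "my-lib": "./vendor/my-lib.js"
      }
    }
    </script>
    <script type="module">
      import React from "react"; // resolved via the import map - no npm, no build step
    </script>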
> for a hello world project.
I was coming at this from the perspective of real-world projects, sorry. The repo I work on most has over 4000 npm dependencies (including dev dependencies). There are even more in other repos from other teams that it depends on. If we had spread these out across a handful of sources (github, esm.sh, someone's random host, etc) I don't think an app of our size would ever see a day without some downtime*.
In real-world applications, spreading dependencies across domains results in multiple points of failure instead of one (the npm registry). You also have a much harder time with security (especially around automated security tools and auditing), dependency versioning becomes a bit trickier, managing updates is harder, ESM packages are far from ubiquitous, etc. I do understand the sentiment; it'd be nice to import from wherever and have the same guarantees that a centralized package manager provides, but that's not how things are right now.
Maybe if you don't work in a large legacy app, or your systems don't need the security tools that have been built up around package managers, or you can afford increased load times from unbundled/unoptimized dependencies, the ESM dream works. But most significant apps can't go the "Vanilla JS" route. Sadly, "Vanilla" JS isn't as great as it could be, so leaning on tools to get the job done makes sense, and a centralized registry (with well-managed mirrors!) is a really great tool.
There are no dynamic imports or downloads. You build from source.
Dependencies are merged into our monorepo very sparingly - Angular is "batteries included", so you don't need to import hundreds of thousands of extra dependencies and transitive dependencies to build a functional site like you need to do with other frameworks. Left-pad issues simply do not happen this way.
And from what I've read (I haven't used it myself) Bazel has some internal and external dependency management built in. And there are many JS related build rules for Bazel (https://bazel.build/docs/bazel-and-javascript), and at least one of these looks like it would enable npm.
> Dependencies are merged into our monorepo very sparingly
Oh that must be nice. The project I'm working on (a browser extension) currently has 4083 dependencies (including dev dependencies).
https://prodimage.images-bn.com/pimages/9781962572439_p0_v1_...
Just because the canonical approach at your org (or anywhere) is to use the NPM style of late-fetching dependencies to do an end-run around Git, that doesn't mean that you, personally, can't take advantage of Git's branching model and its support for multiple remotes for your own, local benefit.
If I cared to have personal projects that still involve sitting at a computer. These days I just prefer to go outside.
To be clear, the issue isn't npm/yarn, it's the sheer number of external dependencies that I loathe.
If that could be pulled off, it would be a great way to isolate the build process and prevent malicious packages or build tools from mucking with your system.
Does not look very realistic for projects of any considerable size though.
To the original commenter: I don't recommend using Node. I said use the World Wide Wruntime instead. Even so, there's a difference between an app that might incidentally use an API somewhere down the line and programming directly against that interface.
Look at the early versions of the TypeScript compiler. Here's 1.8: <https://github.com/microsoft/TypeScript/blob/release-1.8/src...>
tsc doesn't care whether it's running on Node or in the browser or somewhere else. I think the TypeScript team originally bootstrapped it with the Windows Script Host JScript implementation (available pre-installed on all Windows machines) and didn't start using Node until later on.
The only thing the compiler cares about is that something implements the ts `System` interface, so the compiler can invoke e.g. the read-file operation (which belongs to `sys`, i.e. `sys.readFile`) and that it does what it's supposed to do. And it can use the method/capability for listing directory contents, the one for getting the arguments, and so on. That's all defined and implemented straightforwardly in a short file sys.ts.
Figure out what services (capabilities) your program actually needs and sketch out an interface that permits all those things. Then write your program against that interface. It'll make your program more readable, anyway.
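A loose sketch of what that looks like in practice - not TypeScript's actual `System`/`sys` definitions, just the shape of the approach, with illustrative names:

    // The program only ever talks to a small capability object.
    async function build(sys) {
      const source = await sys.readFile(sys.args()[0]);
      await sys.writeFile("out.js", source); // stand-in for the real work
    }

    // A Node-hosted implementation of those capabilities.
    const nodeSystem = {
      readFile: async p => (await import("node:fs/promises")).readFile(p, "utf8"),
      writeFile: async (p, text) => (await import("node:fs/promises")).writeFile(p, text),
      args: () => process.argv.slice(2),
    };

    // A browser-hosted implementation of the same interface.
    const outputs = new Map();
    const browserSystem = {
      readFile: async p => (await fetch(p)).text(),
      writeFile: async (p, text) => { outputs.set(p, text); },
      args: () => new URLSearchParams(location.search).getAll("arg"),
    };

    // build(nodeSystem) on the CLI, build(browserSystem) in a web page.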
> Does not look very realistic for projects of any considerable size though.
Define "realistic" (or "considerable size"). All of Firefox's non-core (i.e. not Gecko) components and the build phase involving them that produces the final release package (~90MB[1]) could probably work this way. (If anyone actually cared about it[2].) Maybe even VSCode, too.
Could you do this with any given project you stumble across on GitHub today? Maybe not, but that's because your average NPM programmer turns out programs that use entirely Too Much Fucking Code for what ultimately is just a todo app or whatever. I once benchmarked create-react-app's "hello world" exercise from a blank slate a few years ago. The numbers involved ended up with `node_modules/` taking up something like just under HALF A GIGABYTE of disk space. For a "hello world" app. That's nuts.
1. <https://ftp.mozilla.org/pub/firefox/nightly/2024/09/2024-09-...>
(Not everyone who's trying to use something on a work machine is doing it because it's a requirement of their job, either.)
Do you understand what "required" and "needed" mean?
If you just plan on brainlessly lecturing, feel free to go dump on someone’s else’s thread, friend.
What Node APIs are proprietary?
Standard APIs work in Chrome or Firefox or Safari.
If you target specifically Node so that you can use the non-standard features that only Node has, then you can only use Node. Proprietary.
Target the standard APIs and you can use anything.
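A tiny illustration of the distinction (the paths and URL are invented): the fs import only runs where Node's module system exists, while fetch() is the web-standard way and is portable to anything that ships it.

    // Node-only: depends on Node's fs module and globals.
    import { readFile } from "node:fs/promises";
    const conf = JSON.parse(await readFile("./config.json", "utf8"));

    // Standard: fetch() is a web API (now also shipped by Node, Deno, Bun),
    // so this works anywhere the resource is reachable over HTTP.
    const conf2 = await (await fetch("https://example.com/config.json")).json();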
You can't make this up.
A few tips:
* If you have a lot of JS files and want excellent performance you may still be able to skip a build step by leveraging `modulepreload`
* If you want some typing while still running without a build step, you can use JSDoc to embed TypeScript annotations for your JS[0] (there's a small sketch of this after the links below).
* If you really want to have a mostly HTML site but don't want to manually duplicate code, I've had decent success with a technique I call "tuplates"[1][2]. Basically it's a very simple templating tool where the output is intended to be committed to git. It uses comments so it's language agnostic.
* If you go web components I highly recommend these best practices[3]. A lot of them were confusing for me at first but just keep them on your radar as you're learning.
[0]: https://www.typescriptlang.org/docs/handbook/jsdoc-supported...
[1]: Description: https://github.com/anderspitman/tuplates-js/blob/master/READ...
[2]: Latest implementation: https://github.com/anderspitman/tuplates
[3]: https://web.dev/articles/custom-elements-best-practices
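On the JSDoc tip above, a minimal sketch of what that looks like - a plain .js file that editors and `tsc` (with `checkJs` / `// @ts-check`) will type-check without producing any build output; the types here are made up for illustration:

    // @ts-check

    /** @typedef {{ id: number, name: string }} User */

    /**
     * @param {User} user
     * @param {string} [greeting]  optional, defaults to "Hello"
     * @returns {string}
     */
    export function greet(user, greeting = "Hello") {
      return `${greeting}, ${user.name} (#${user.id})`;
    }

    // greet({ id: 1 }) gets flagged by the checker: property "name" is missing.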
// template.js:
//
// m4_include(`my/script.js')
usage: m4 --prefix-builtins template.js > filled.js
m4 is a nifty old tool, and it might be on your machine already. But props for arriving at your own solution.

The main difference between tuplates and what you're doing is that the tuplates comments remain in the output. Which means you can run tuplates again on the same files and it simply replaces everything between your tuplate_start() and tuplate_end comments. This has a couple of nice properties:
1. Your files are always valid code. There's no such thing as a template file that doesn't compile or run.
2. You can commit everything to git
I forgot I have an example from a production site (https://lastlogin.net) if you're interested in seeing it in action: https://github.com/lastlogin-net/lastlogin.net
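If it helps to picture it, a file using this scheme looks roughly like the following - the generated content sits between the markers and is committed along with them, so re-running the tool just swaps out the middle (the exact marker syntax is approximated here; see the linked README for the real thing):

    <!-- tuplate_start("fragments/header.html") -->
    <header>...whatever was generated on the last run lives here, in git...</header>
    <!-- tuplate_end -->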
I made a way to insert HTML into other HTML via a small JS snippet, and you can add dynamic data by placing it in data-attributes and using {} in the HTML template as a stand-in variable.
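Something along those lines can be done in a handful of lines - this is just a guess at the shape of it (file names and attributes invented), not the poster's actual snippet:

    <!-- index.html -->
    <div data-include="card.html" data-name="Ada" data-role="Engineer"></div>
    <script type="module">
      // Fetch each referenced template and fill {placeholder}s from data-* attributes.
      for (const el of document.querySelectorAll("[data-include]")) {
        const html = await (await fetch(el.dataset.include)).text();
        el.innerHTML = html.replace(/\{(\w+)\}/g, (_, key) => el.dataset[key] ?? "");
      }
    </script>

    <!-- card.html (a separate file) -->
    <article><h2>{name}</h2><p>{role}</p></article>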
I have to say, it's been a few years since I last did front-end dev work, and I was really pleasantly surprised by how much better CSS and JavaScript have become. All in all, it's been a really positive experience.
Generally, small parallel requests perform better than large single requests in a modern situation - modern browsers, HTTP/2+, etc. - in my experience, but there can be a lot of caveats to that statement.
However, the @import statements in the CSS are not downloaded in parallel, last time I checked - and probably never will be, because of how CSS works - so using import statements inside CSS instead of building a single CSS file is actually the worst possible solution, I think (OK, there is probably something even worse that you could build if you really wanted to, but it's the worst "reasonable" solution).
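Roughly what that difference looks like (file names made up): with @import, each file in the chain is only discovered after the one importing it has downloaded and been parsed, whereas plain <link> tags are all visible to the browser up front.

    /* main.css */
    @import url("layout.css");   /* discovered only after main.css arrives */

    /* layout.css */
    @import url("theme.css");    /* ...and this only after layout.css arrives,
                                    so the chain costs one round trip per level */

    <!-- vs. the parallel-friendly version -->
    <link rel="stylesheet" href="main.css">
    <link rel="stylesheet" href="layout.css">
    <link rel="stylesheet" href="theme.css">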
So the trick is: if you can keep the build to seconds, you don't change your workflow. Which means you might be able to pick one transpiler like Less, throw a watch on it, and still be fine.
As long as, that is, you make a cultural decision not to go fucking nuts with the CSS. You need a low threshold for triggering tech debt cleanup.
If you're a solo dev, sure, you're free to do anything you want and nobody will balk or complain about you causing them extra work or having to clean up a mess later. If you're on a small project and the CSS will fit onto a couple sheets of A4 paper, you also won't have enough volume of code to see major benefit because you could just rewrite an entire UI of that scope from scratch in a few hours. But on a team working with a larger codebase Tailwind solves real problems in maintaining styles across developers and over longer periods of time.
Merits of semantic classes and class hierarchies are very debatable. And CSS dropped the ball with style composition (which e.g. SCSS fixes). And media/container query ergonomics aren't ideal. But essentially hardcoding styles inline is a very weird reaction to these. A bit like "let's get rid of variables and hardcode values in functions because OOP inheritance caused a mess".
I’ll take its idiosyncrasies which take less than an hour or so to understand (spread across a few days of usage) as opposed to raw CSS which is a totally different sort of madness.
I can see that Tailwind does prevent architecture astronauts going crazy. But that's a cultural issue, not technological.
It’s a fundamentally different way of working. It enables styling by a different set of principles than pure css.
Merely looking vaguely like inline styles does not make it so.
And recently I rewrote our CSS to use modern syntax and nesting and variables etc. - no more Sass!
It worked great but then customers started to complain and send messed up screenshots.
Turns out a lot of Android phones are using an older browser and a couple of customers were locked into old versions of Chrome by corporate policy.
Fortunately this was all nesting rules, so renaming the CSS file to .scss and running it through Sass fixed the issues.
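For anyone wondering what the trip-up looks like, a minimal example (selectors invented): browsers that predate native CSS nesting simply drop the nested rule, while the Sass-flattened output works everywhere.

    /* Modern nested CSS - the inner rule is ignored by pre-nesting browsers */
    .card {
      color: #333;
      & .title { font-weight: bold; }
    }

    /* What Sass compiles it down to */
    .card { color: #333; }
    .card .title { font-weight: bold; }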
> adds more complexity
I was surprised how well module imports work in modern browsers.
I've never used css preprocessors before, so that wasn't an issue for me. I also have never nested CSS files, so that also wasn't an issue for me.
Another pain point is not having type safety on my front-end calls to my backend. With TS I could use Zod on the front end as well and have type checks in place before I make those API calls. While JSDoc obviously tries to ensure at compile time that things are the correct shape, that isn't the same level of assurance.
It's a bit more verbose, but overall I find it refreshing.
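For reference, the kind of runtime check described above, sketched with Zod (the schema and endpoint are made up). It's plain JS as written, but in TS the same schema would also provide the static type:

    import { z } from "zod";

    const UserSchema = z.object({ id: z.number(), name: z.string() });

    const res = await fetch("/api/user/42");
    const parsed = UserSchema.safeParse(await res.json());

    if (parsed.success) {
      console.log(parsed.data.name);   // shape is guaranteed from here on
    } else {
      console.error(parsed.error);     // the backend sent something unexpected
    }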
But inevitably the complexity of my projects increases. My patience with the focus, refresh, review cycle crashes. My first stop is usually Vite, just for hot reload at first, but then for raw imports, and I do like nested CSS. When I find myself beginning to roll my own reactivity, I tend to read up on the latest state of Vue and Svelte and drop whichever one feels right into the mix. It goes on.
All with the best intentions.
* Dependency management (use libs so you don't have to write/maintain that code yourself)
* Code generation (we use that for added type safety: jOOQ the SQL eDSL, and OpenAPI 3 for generated clients + server-side DTOs)
* Compilation (compile higher-level languages with strong type-safety guarantees, e.g. Kotlin, Elm, Haskell, Rust, TypeScript, to lower-level or dynamically-typed/quirky languages, e.g. JS, WASM, byte-code, binaries)
Three pretty important advantages!
Sure builds come with added complexity, so it is a trade off. But when carefully picking technologies this can greatly pay back the cost of complexity.
Now, if you compile, say, JS to JS (or Less to CSS) with brittle build tools such as npm (shell commands in a string in a JSON file called package.json), and low-quality dependencies, then the advantage of your build step is probably negative.
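i.e. the sort of thing being referred to - build and deploy logic living as shell one-liners inside package.json (the script contents here are invented for illustration):

    {
      "scripts": {
        "build": "rm -rf dist && lessc src/main.less dist/main.css && node scripts/bundle.js",
        "deploy": "npm run build && rsync -a dist/ user@host:/var/www/app"
      }
    }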
Do you not run a script to deploy your software in whatever way? If you do, then what's the difference if the script builds some things as well instead of just uploading?
All on the server!
Do I get nerd creds now? <3