This I don't get. This is interpreted code, and most web backends integrate Node, so why not just ship a Node module? Why in God's name ship Bun?
The whole web development scene has some of the worst software engineering I've ever seen. The Ruby scene is an exception: there we have Tilt and Nokogiri for these kinds of things.
I try to avoid that strong a choice of language (although believe me - while this was going on I was very, very angry indeed). I think the reason was precisely to remove the "npm hell" for users and to avoid requiring any dependencies for using Tailwind.
The irony being, of course, that in some situations Bun turned out to be _more_ of a problem than shipping npm modules would have been.
I was also advised to "use Tailwind as an NPM module for now". Only I couldn't, because the only "pure-JS" variant I could find was months old, and the current builds all work through that Bun-based 100MB binary.
The idea was good. The execution by the maintainers - and this is my subjective opinion, again - is not. An experimental JS runtime supported by one person is not a good fit for this.
Wasn't CSS invented so we wouldn't have to tell every HTML element "it's blue, centered text, and bold font"? Yet people willingly write class="items-center font-bold text-blue-600" and run it through a compiler.
Tailwind is far from the first framework to require it. Bootstrap has things like .center, and its entire basis is layout classes for the grid. It's responsive by default, but not semantic.
I think CSS has failed because people want far more control over appearance (for branding and aesthetics) than the people devising it anticipated.
The motivation for semantic markup has also reduced because people write much less HTML. You might have some markup in what you edit, but the layout and base styles are usually generated by a system (e.g. a CMS). Even most people serving static files use a site generator.
I don't think doing layout by other means (formatting specifiers? spacer elements?) would have been that much different, given that it's constraints all the way. The difficult bit is that the layout is not fixed - and it's a pain whichever way you do it; doing things well in Interface Builder.app was also a struggle.
Yes, "center things" is ridiculous because you have to do it "4 different ways" with different tradeoffs - and there are some areas where things are still very painful (text-box-trim only just becoming available). But "failed"? That is a bit harsh.
Where do you expect that compiler to come into this? Or is it me that doesn't understand Tailwind and this is actually not Tailwind, just some random CSS with the same name that a compressed database puked up for me?
So it is not so much a compiler as a preprocessor, the idea being to output just the utility classes that you actually use - with "all" the utility classes being that multi-MB CDN version. If only it worked.
The whole article is about getting the Tailwind compiler and its dependencies up and running on a 12-year-old CPU, so it must be important somehow...
So I'm wondering whether this compiler/preprocessor stuff is actually something people run into and my experience is deceptive, or if it's something happening to few people, on the margins, that I'll likely never experience.
Because it matters to me, since I'm not going to spend time digesting the Tailwind docs and whatnot and will forever stay a casual, disinterested user that cribs some stuff I don't understand from search interfaces. If I can't expect this to continue working as it has I'll have to figure out a way to ditch the Tailwind stuff I'm already using.
And yes, being able to jettison a pre-processor for frontend things is very necessary, and unless the designers have accounted for this (I only know of create-react-app having an "eject" feature, and even then only just) you are in for a heap of fun if you need to resurrect a 5-year-old app with its dev environment.
I wrote my website in C, so I don't know about this modern web process stuff.
Since it's already in component files you can use the templating already present in that context to fill in values from colour schemes and so on, without the hazards of cascading.
Instead your application is now packing a 100MB binary just for your CSS framework - even when you are already using Node.
I see no benefit. Imagine if every component of your application shipped its own runtime. Imagine if ERB in RoR packaged its own version of Ruby. That would be crazy, right?
I use Tailwind on all my projects, though, and not once have I thought "oh boy, development is so slow because of Tailwind, I sure wish it was implemented in a compiled language!"
But eventually I didn't have a choice, as I inherited a web app that has all of the newfangled build components that web apps come with. I love that we're coming full circle back to MVC with server components.
After getting used to it, I ended up liking Tailwind, mostly because it breaks the cascading part of CSS. There are so many unique parts to webpages these days that I think it makes sense to keep the styles close to their components, as opposed to global cascading components.
I still have qualms with Tailwind. My classic CSS sensibilities are offended, but whatever. The part that I still don't like is really what this post boils down to: a massively complex build system that creates footguns in the weirdest places.
That being said, Tailwind that's set up well in coordination with a complex design system really does feel like it's a win. Seeing that in action was an aha moment where I was able to see value that made some of the tradeoffs worth it.
If you're using Vue, you even get a <style> block that can be injected into any component so you're still working in the same context. It's all delightful and optional and you can still do whatever you want.
What about a system that could take a really long TW className, let you give it a single name, and still let you append more TW after it for here-and-there adjustments?
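Tailwind itself has a directive for roughly this: `@apply` collapses a long utility string into one class, and you can still tack extra utilities onto the element afterwards. A minimal sketch (the class names here are just examples, not from any particular project):

```css
/* Name a long Tailwind utility string once via @apply */
.btn-primary {
  @apply items-center font-bold text-blue-600 rounded px-4 py-2;
}
```

Then `<button class="btn-primary mt-2">` still lets you append one-off utilities like `mt-2` on top of the named bundle.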
Can you elaborate on this a little bit — was there a lot of Figma tooling, plugins to swap variables between systems, etc?
For me, not having a build-step means that, yes, I miss out on typescript, but the upsides are easier to read, easier debugging and lower cognitive burden during both writing and reading the application.
It means I will never, like the author of the article, spend X weeks and then a further Y dollars just to build something "modern".
My "modern looking" applications are developed, and then tested, on a machine from 2010 (1st-gen i7) that has 16GB of RAM and oodles of slow spinning-rust disks, and it isn't painful at all.[1]
[1] It is, in fact, quite nice when I see a client use their system from their own PC and it's just a little bit snappier than I am used to.
It's the JSX+webpack perversion that ruined it for us all.
Classical ASP.NET, Spring with vanilla js.
We stopped inlining style attributes for a reason - is this just how the next generation needs to learn?
(Full disclosure, I haven't worked with Tailwind since 2020 or so. Though I was obviously not too favorably impressed by it, I don't recall it having problems like this then, which if anything I'd expect to have been exacerbated by the Apple CPU architecture transition then ongoing.)
This is literally what bootstrap did/does. You could also trivially do this yourself with just a tiny bit of discipline -- it's part of why variables were valuable in SASS/SCSS.
Why must we re-invent CSS to get semantic class names? Like the parent, I have yet to hear an explanation for Tailwind that doesn't sound like "someone who didn't fully understand CSS re-wrote it, and a generation of bootcamp coders adopted it without knowing why, or what came before."
Every generation has to invent sex and politics for itself, or at least imagine for a while in its 20s and 30s that it did. Why not the same in another preparadigmatic field like computing?
> Every generation has to invent sex and politics for itself, or at least imagine for a while in its 20s and 30s that it did. Why not the same in another preparadigmatic field like computing?
Because it's not "preparadigmatic"? There was a perfectly good paradigm before it was tossed out and re-written in Javascript (and then again in some other language, apparently). There have certainly been some revolutionary paradigms in my career (e.g. the web itself), but this "reinvention" of basic front-end tech doesn't qualify.
This stuff holds back the industry. It's part of why software engineers over the age of 30 are considered "old".
(Even the damned alchemists have their ball-and-stick models! And sure, we have S-expressions, had them for something like seventy years, and do we use them? Do we, hell...)
It's how you get people thinking that the web was revolutionary, and not a product of decades and generations of work toward the concept of a global communications network. But the idea that this inchoate condition holds back the industry doesn't seem to me to hold much water. The first boilers blew up a lot too, before the underlying principles were understood, and mere prolonged survival quickly came to be seen as no mean qualification in a steam engineer. How much did that "hold back" the building of railroads, from where the trains were to where the money was? That, if you care to know, is the overarching metaphor with which I like to describe this industry - though I concede the machines we build are not nearly so hazardous to we ourselves.
If I had to boil down my entire analysis of this industry to something expressible in a single adjective, the only word to fit would certainly be "irresponsible." But I'll also mention at this time that I topped out at a high-school diploma on a sub-3.0 GPA, so if as the holder of a doctorate you find you begin to become bored or uncomfortable talking with me, experience strongly suggests the option of impugning any or all of my intellect, discipline, character, and decency of motivation in speaking is always available as a resort.
And given a somewhat thoroughly developed analysis of this extremely young industry's place in the span of human history to date, why not talk about that when I can spare the time? I feel like I'm probably not the only person here who finds such ideas of interest.
Unless I do the front-end for my own app - and then the order of preference is server-rendered HTML, HTMX, Web Components, vanilla JS. Stuff I am sure I can maintain with ease 100 years from now. For CSS I would use something simple such as Bootstrap.
I kind of agree with the author about using tools you know are reliable as opposed to chasing fads. Of course you must and should learn and use new things, but proceed with care and carefully consider both the upsides and the downsides.
Most people ask "why ship Bun (whatever that is), why not just be an npm module?"
I am baffled as to why we have forgotten the lost art of spitting out something from a build and then using that thing. As in, "make" producing a CSS file. Or a JavaScript file. Or multiple files. Why does npmness have to force itself into every nook and cranny of our software and consume it all?
In my webapp I use several small CSS and JavaScript libraries, and for building those I use docker containers. The npm horrors live in the docker containers, I try not to look in there, but whatever happens there, a bunch of css and js files come out. Reproducibly. Reliably. And things don't break if there is a headwind or a tailwind (ahem) today on the internet.
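The pattern is roughly this (the docker invocation in the comment is illustrative of my setup, not a transcript of it; here a trivial stand-in plays the role of the containerized build):

```shell
# The build step -- whatever horrors it contains -- runs in isolation
# and emits plain files. In practice it would be something like:
#   docker run --rm -v "$PWD:/src" -w /src node:20 npm run build
# A trivial stand-in plays that role here:
mkdir -p dist
printf '.btn { font-weight: bold; }\n' > dist/app.css
# From here on, the application only ever sees the emitted file:
cat dist/app.css
```

The key property is that the artifact is just a file: the application (and its deployment) never needs npm, Bun, or anything else from the build environment.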
What kind of horrors did you encounter that led to this abstraction?
This is bad, and it should not be an acceptable situation, but a relatively short support cycle is a choice you made by buying their product. I'm not sure it's right to then blame others for following along.
And while on a formalistic, nitpicky level it is a "what are you complaining about with your old box" - in actuality I do find the idea of requiring a CPU upgrade to run a CSS pre-processor (a CSS pre-processor! Come on! Not an H265 encoder. Not some sophisticated animation system. Not an AI blob. A tokenizer for HTML...) absolutely, completely excessive.
And I know why that decision came about - it is because building portable binaries for the Mac is a pain in the butt. Well, guess what - if you made the call to ship a multi-platform runtime, then backwards compat is part and parcel of it: Apple's LLVM versions, the linker, the dylibs, and whatnot.
So no, I understand that you are "right" formally, but the situation this brought me to - I still find bad, and the choices made by the chain of maintainers - I still find inconsiderate.
Trust me, I get very confused and frustrated whenever I have to figure out python deps or kubernetes, but I accept it’s going to be difficult since I’m not familiar with the field.
While keeping in mind that this isn't a field that I have a ton of practice in, I can confidently assert that a parser for HTML input that outputs CSS classnames does not need all of the following:
1. A recent Node (+ dependencies)
2. Pnpm
3. Rust (+ dependencies)
4. Bun build environment
5. A binary size of 100MB
I'm pretty certain I needed an HTML parser at some point in the past, and it was built as a single standalone file that compiled to a single standalone ~50KB binary and took a single day to write.
Actually, here it is: https://gist.github.com/lelanthran/896a2d1e228d345ecea66a5b2...
Now, fair enough, that doesn't spit out class names (because I didn't need it at the time, although looking at the query language it seems that it might be able to do that), but it's very easy to add a single flag for "spit out class names". The build dependencies are:
1. A C compiler.
There's no makefile/build-script - simply doing `$CC program.c -o program` is sufficient.
So, sure, while I don't have a ton of practice in this area, I have enough to know that the binary/program in question is heavily over-engineered. Their dependencies are artificial requirements (we require this because this other technology requires it), not functional requirements (we require this because this feature requires it).
I agree; this was probably an oversight (done in a single day, on a weekend day, so I probably spent no more than about 6 hours on it).
If I want to make this production-ready, I'd move it into a repo, split it into a library (so I can drive it from Python or similar), add some tests, etc...
I literally needed a once-off tool to parse HTML, and had some time to spare.
And as much as I prefer things be done well instead of... like that... there are only so many hills to die on - and only so many people interested in my opinion :-)
I would've literally opened an editor, written a quick parser in JS, bundled it in, and been done with the problem without going any further.
This functionality, if I am understanding correctly, is to grab all the classnames from HTML files and then pass that to a tool that includes those, and only those, in a .css file.
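A minimal sketch of that first half - collecting class names from HTML - really is small. This is my simplification, not Tailwind's actual implementation: it scans `class="…"` attributes with a naive regex rather than a real parser, which is fine for trusted project files.

```javascript
// Collect the set of class names used across a chunk of HTML.
// Naive regex scan over class="..." attributes, not a spec-compliant parser.
function extractClassNames(html) {
  const names = new Set();
  for (const m of html.matchAll(/class\s*=\s*"([^"]*)"/g)) {
    for (const cls of m[1].split(/\s+/)) {
      if (cls) names.add(cls);
    }
  }
  return [...names];
}

const html =
  '<div class="items-center font-bold"><p class="font-bold text-blue-600">hi</p></div>';
console.log(extractClassNames(html));
// → ["items-center", "font-bold", "text-blue-600"]
```

The real tool obviously does much more (variants, arbitrary values, template languages beyond HTML), but the core "find the names that are actually used" step is this kind of scan.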
I'll burnout in no time if I spend even 20% of my time working on configuring or debugging the environment and the rest on the development activities[1]. I can't imagine roles where time is split 50/50 (or more) between "fiddling with YAML, npm, AWS, etc" and "development".
[1] I include requirements elicitation, debugging, talking to customers, etc as "development activities".
That is very kind of you to say, thank you!
This is kind of an example of the Dunning-Kruger curve. You're admitting naivety while also claiming confidence, which should give pause.
The main issue I have with your argument is the framing - you’re trying to make your build tool as simple as possible to a developer like yourself which is not who the tools were built for or by. Context matters. Not everything is designed to work on 15 year old hardware and most developers would simply scoff at an engineer who thinks the best way to build software is with ancient hardware.
If you view everything through the lens of using as few dependencies as possible (like a C programmer should), then you absolutely will think it's bananas that this used 100MB of dependencies. But if you have a different perspective, you may see that the dependencies don't matter that much as long as it works.
In fact, by using common tools that have good interoperability, and that are only used on a developer machine, it doesn't matter too much what resources they use. Of course, if you're developing on a 2010 laptop with 16GB of RAM then you may have issues, but that's not the open source developers' problem. If all the open source developers had to fit your performance constraints, they would just not get much work done at all.
My main point is that developer tools don’t have to be light speed, they just have to be fast enough on modern hardware, which they absolutely are for frontend. I have enjoyed 3-5 second iterative builds on all my projects for the better part of 10 years.
> The main issue I have with your argument is the framing - you’re trying to make your build tool as simple as possible to a developer like yourself which is not who the tools were built for or by.
My argument is "Not all of the specified dependencies are necessary", and not "None of the specified dependencies are necessary".
See my other post, in which I point out that the functionality required could have been done in plain JS, using Node. That's exactly one dependency.
The other dependencies are not required to have the same feature, especially as you point out:
> My main point is that developer tools don’t have to be light speed, they just have to be fast enough on modern hardware
If we're both agreeing with that main point of yours, there is no reasonable justification for depending on anything other than Node (which is already there in the project anyway).
My PoV is less Dunning-Kruger than you claimed; additional dependencies were added for no additional value, and they come with breakages. After all:
> But if you have a different perspective, you may see that the dependencies don’t matter that much as long as it works.
The whole point of the saga is that it doesn't work on an otherwise perfectly capable computer.
Unfortunately that's the world we live in now.
The owner of the company selling Next.js and adjacent services is firmly in that list of influencers I have muted going forward.
I'm working on it ;-)
Recompiling the (open source) code should have offered a solution but OP could not make this work.
On some platforms you can emulate the missing AVX2 instructions with Intel SDE, but not on Apple.