> See, I know that shipping end-user software - like the Tailwind compiler - if that software is written in an interpreted language - is fucking hard.
This I don't get. This is interpreted code, most web backends integrate Node - why not just ship a node module? Why in God's name ship Bun?
The whole web development scene has had some of the worst software engineering I've ever seen. The Ruby scene is an exception - there we have Tilt and Nokogiri for these types of things.
I try to avoid that strong of a language (although believe me - when this was going about I was very, very angry indeed). I think the reason was exactly removing the "npm hell" for users and not requiring any dependencies for using Tailwind.
The irony of it being, of course, that for some situations Bun turned out _more_ of a problem than shipping npm modules would be.
I was also advised to "use Tailwind as an NPM module for now". Only - I couldn't because the only "pure-JS" variant I could find was some months old, and the current builds all work through that bun-based 100MB binary.
The idea was good. The execution by the maintainers - and this is my subjective opinion, again - is not. An experimental JS runtime supported by one person is not a good fit for this.
A lot of CSS tooling feels designed to overcomplicate it so it feels more like "engineering" rather than "markup". I find Tailwind particularly bizarre, essentially writing your CSS as inline styles via single use non-semantic classes is absurd to me.
Wasn't CSS invented so we didn't have to tell every html element "it's blue, centered text and bold font"? Yet people willingly choose to use class="items-center font-bold text-blue-600" and run it through a compiler.
> Wasn't CSS invented so we didn't have to tell every html element "it's blue, centered text and bold font"? Yet people willingly choose to use class="items-center font-bold text-blue-600" and run it through a compiler.
Tailwind is far from the first framework to require it. Bootstrap has things like .center, and its entire basis is layout classes for the grid. It's responsive by default, but not semantic.
I think CSS has failed because people want far more control over appearance (for branding and aesthetics) than the people devising it anticipated.
The motivation for semantic markup has also reduced because people write much less HTML. You might have some markup in what you edit, but the layout and base styles are usually generated by a system (e.g. a CMS). Even most people serving static files use a site generator.
I don't think CSS has failed - it's just a big and complex system for automatic layout. Automatic layout is hard, and if the age and power of CSS show us anything, it is that it has succeeded. Comparing the CSS of today to the CSS of, say, 2005, is just night and day.
I don't think doing layout using other means (formatting specifiers? spacer elements?) would have been that much different, given that it's constraints all the way down. The difficult bit is that the layout is not fixed - and it's a pain either way you do it; doing things well in InterfaceBuilder.app was also a struggle.
Yes, "center things" is ridiculous because you have to do it "4 different ways" with different tradeoffs - and there are some areas where things are still very painful (text-box-trim just now becoming available). But "failed"? That is a bit harsh.
I have an LLM synthesised HTML file that links in a remote tailwind.css and contains no script tags or any JS at all. It contains a lot of class="bg-cover bg-center h-screen" and class="text-3xl font-bold text-gray-800 mb-4" that I don't really understand, and this file renders just fine in a web browser.
Where do you expect that compiler to come into this? Or is it me that doesn't understand Tailwind and this is actually not Tailwind, just some random CSS with the same name that a compressed database puked up for me?
I know the feeling. The way it works is that Tailwind parses your HTML, finds `mb-4` and thus knows that it needs to emit `.mb-4 { margin-bottom: some-var-calc-thing-in-4-increments-of-your-layout }`.
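For example, with Tailwind's default theme - where one spacing unit is 0.25rem, so `4` maps to 1rem - the emitted rule would look roughly like this (a sketch of the default output, not an exact copy):

```css
/* Generated because `mb-4` was seen somewhere in the scanned markup. */
.mb-4 {
  margin-bottom: 1rem; /* 4 spacing units x 0.25rem, per the default scale */
}
```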
So it is not so much a compiler as a preprocessor, with the idea being to output just the utility classes that you use. With "all" the utility classes being that multi-MB CDN version... if only it worked.
No, the pre-processor has to be running continuously in "watch" mode with patterns for all your HTML (and sorta-HTML) files. When it detects a change in one of them, it chews through the changed file and emits a .css file into your web root which is the particular subset of Tailwind you are actually using.
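Concretely - the paths here are made up, but the `-i`/`-o`/`--watch` flags are the standalone Tailwind CLI's own - the watch process is usually started with something like:

```shell
# Rebuild public/app.css (containing only the utilities actually used)
# every time a template changes.
npx tailwindcss -i ./src/input.css -o ./public/app.css --watch
```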
OK, but I don't recognise this from my scant experience with Tailwind, which consists of either having a framework do whatever it does with ":tailwind" (maybe it's a compiler, I don't know) - which it has done without me noticing anything at all - or putting a link in a <link> element to pull in a CSS file.
So I'm wondering whether this compiler/preprocessor stuff is actually something people run into and my experience is deceptive, or if it's something happening to few people, on the margins, that I'll likely never experience.
Because it matters to me, since I'm not going to spend time digesting the Tailwind docs and whatnot and will forever stay a casual, disinterested user that cribs some stuff I don't understand from search interfaces. If I can't expect this to continue working as it has I'll have to figure out a way to ditch the Tailwind stuff I'm already using.
Yes, they run it - but for most it is melded into a larger toolchain via PostCSS. The few of us who try to resist the "just use bundlers for everything and everywhere" approach do notice that there is now a bundler process firmly anchored in your development process.
And yes, being able to jettison a pre-processor for frontend things is a very necessary thing, and unless the designers have accounted for this (I only know of create-react-app having an "eject" feature, and even then - just barely) you are in for a heap of fun if you need to resurrect a 5-year-old app with its dev environment.
Someone told me about Tailwind last week on IRC, and I literally do not see the point of writing class="font-bold" versus style="font-weight:bold;" especially if the former means you have to add a whole bunch more complexity to your build process (and to have a build process at all).
I wrote my website in C, so I don't know about this modern web process stuff.
If all your frontend code lives in component files then it is quite nice that all the styling is also local to those components so you don't end up with horrors in unexpected places because you changed a line in a CSS file.
Since it's already in component files you can use the templating already present in that context to fill in values from colour schemes and so on, without the hazards of cascading.
Most likely they decided to go for a binary so that people with other runtimes could use Tailwind without subscribing to the whole Node+npm ordeal. And it's actually neat when it works - for us RoR folks it makes things like tailwindcss-rails possible - where there isn't any Node stuff installed at all.
But RoR does support having node integrated and it was the default for a long time. tailwindcss-rails could've just used Node.
Instead your application is now packing a 100M binary just for your CSS framework. Even when you are already using Node.
I see no benefit - imagine if every component of your application shipped its own runtime. Imagine if ERB in RoR packaged its own version of Ruby. That would be crazy, right?
The application is not packing the binary - it is the gem that you use to build CSS, and I _think_ it's only active during development, but don't quote me on that. In production there is a generated built asset which contains all the Tailwind-generated CSS classes you are using across your templates.
I use Tailwind on all my projects, and not once have I thought "oh boy, development is so slow because of Tailwind, I sure wish it was implemented in a compiled language!"
I think it's more like "having less moving parts". In which they kinda-sorta-succeeded, except that the choice of bun seems misguided for something where you want broad adoption. And Tailwind is, by now, a product that is clearly clamoring for broad adoption (and achieving it).
I was hesitant to give Tailwind a try; like most aging web developers, I couldn't stand that it breaks the "cascading" part of CSS.
But eventually I didn't have a choice, as I inherited a web app that has all of the newfangled build components that web apps come with. I love that we're coming full circle back to MVC with server components.
After getting used to it, I ended up liking Tailwind, mostly because it breaks the cascading part of CSS. There are so many unique parts to webpages these days that I think it makes sense to keep the styles close to their components, as opposed to global cascading styles.
Yeah, I've been a fairly vocal curmudgeon about Tailwind within my team. Some folks really like it. They're doing the work and I trust them, so I rolled with it.
I still have qualms with Tailwind. My classic CSS sensibilities are offended, but whatever. The part that I still don't like is really what this post boils down to: a massively complex build system that creates footguns in the weirdest places.
That being said, Tailwind that's set up well in coordination with a complex design system really does feel like it's a win. Seeing that in action was an aha moment where I was able to see value that made some of the tradeoffs worth it.
The other thing that people seem to just either forget or ignore is that you can do _both_ things! Tailwind can be used for the areas that it makes sense for, and you can write a sitewide layout using insane (in a good way!) grid declarations using plain old CSS and a few extra classes in your markup.
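For instance - the selector and area names below are purely illustrative - a sitewide grid in plain old CSS can sit alongside Tailwind utilities on the components inside it:

```css
/* Page-level layout in plain CSS: one named grid, no utilities needed. */
.site {
  display: grid;
  grid-template-columns: 16rem 1fr;
  grid-template-rows: auto 1fr;
  grid-template-areas:
    "nav header"
    "nav main";
  min-height: 100vh;
}
.site > nav    { grid-area: nav; }
.site > header { grid-area: header; }
.site > main   { grid-area: main; }
```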
If you're using Vue, you even get a <style> block that can be injected into any component so you're still working in the same context. It's all delightful and optional and you can still do whatever you want.
Yes, exactly. You can use TW for the utility classes and copy-paste-and-go components, and append your own over/under/side-lay layers to create whatever css you want.
What about a system that could take a really long TW className and let you give it a single name, while you could still append more TW after it for here-and-there adjustments?
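Tailwind ships something close to that: the `@apply` directive, which folds a utility list into a single class of your own - and you can still append more utilities in the markup afterwards. A sketch (the `.card-title` name is invented; the utilities are the ones quoted upthread):

```css
/* Give a long utility string a single, reusable name. */
.card-title {
  @apply text-3xl font-bold text-gray-800 mb-4;
}
```

Then `<h2 class="card-title mt-8">` uses the named bundle and tacks on a one-off adjustment.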
> That being said, Tailwind that's set up well in coordination with a complex design system really does feel like it's a win. Seeing that in action was an aha moment where I was able to see value that made some of the tradeoffs worth it.
Can you elaborate on this a little bit — was there a lot of Figma tooling, plugins to swap variables between systems, etc?
I see this is absolutely something I am not going to be using as long as I can help it (it is the same setup as importing CSS and MP4 in Webpack via the import declaration), but a good tidbit to know it exists.
I've got a number of vanilla JS, HTML, CSS libraries that work only from within the browser, using the standard tags to include them.
For me, not having a build-step means that, yes, I miss out on typescript, but the upsides are easier to read, easier debugging and lower cognitive burden during both writing and reading the application.
It means I will never, like the author in the article, spend X weeks, and then a further $Y, just to build something "modern".
My "modern looking" applications are developed, and then tested, on a machine from 2010 (1st-gen i7) that has 16GB of RAM and oodles of slow spinning-rust disks, and it isn't painful at all.[1]
[1] It is, in fact, quite nice when I see a client use their system from their own PC and it's just a little bit snappier than I am used to.
Absolutely with you on "no build". Sometimes it doesn't work (for me - where I need NPM modules with annoying math I don't want to do myself), but the basic premise of "nobuild" is absolutely and completely how it all should have always worked.
It's the JSX+webpack perversion that ruined it for us all.
The comparison to inline styles is understandable but misses Tailwind's real value. Inline styles failed because they were arbitrary, inconsistent, and impossible to scale. Tailwind, by contrast, offers a constrained, standardized set of utilities that enforce consistency, simplify maintenance, and reduce CSS complexity. For example, instead of arbitrary inline styles like `style="padding:7px; margin-left:3px; font-size:14px;"`, Tailwind provides predictable classes like `p-2 ml-1 text-sm`, ensuring uniformity across your app.
Okay, that's reasonable, I see value there and have worked productively with similar systems in pure CSS and relatively lightweight Sass. The next thing I'd like to see addressed is why something so conceptually straightforward still needs so heavyweight a build step that it has to care what ISA it runs on.
(Full disclosure, I haven't worked with Tailwind since 2020 or so. Though I was obviously not too favorably impressed by it, I don't recall it having problems like this then, which if anything I'd expect to have been exacerbated by the Apple CPU architecture transition then ongoing.)
> Tailwind provides predictable classes like `p-2 ml-1 text-sm`, ensuring uniformity across your app.
This is literally what bootstrap did/does. You could also trivially do this yourself with just a tiny bit of discipline -- it's part of why variables were valuable in SASS/SCSS.
Why must we re-invent CSS to get semantic class names? Like the parent, I have yet to hear an explanation for Tailwind that doesn't sound like "someone who didn't fully understand CSS re-wrote it, and a generation of bootcamp coders adopted it without knowing why, or what came before."
I didn't say word one about "bootcamp coders." That's all you, pal.
Every generation has to invent sex and politics for itself, or at least imagine for a while in its 20s and 30s that it did. Why not the same in another preparadigmatic field like computing?
> Every generation has to invent sex and politics for itself, or at least imagine for a while in its 20s and 30s that it did. Why not the same in another preparadigmatic field like computing?
Because it's not "preparadigmatic"? There was a perfectly good paradigm before it was tossed out and re-written in Javascript (and then again in some other language, apparently). There have certainly been some revolutionary paradigms in my career (e.g. the web itself), but this "reinvention" of basic front-end tech doesn't qualify.
This stuff holds back the industry. It's part of why software engineers over the age of 30 are considered "old".
Yes, and it is also what doesn't really appear to happen in mature engineering fields, isn't it? No one is reinventing, I don't know, bolts. Or ohms, or amperes, or how one determines a Young's modulus. In those fields you see incremental refinements; when you see claims of major revolutions, like the recent flap over supposed high-temperature superconductivity in the "LK-99" material, mostly those receive deep suspicion that typically turns out to be justified, because these are fields where there exist sizable, coherent bodies of well-tested and reliably predictive theory whose consequences can for most purposes be taken as known. If there is any similar body of knowledge in this engineering discipline then the discipline still qualifies as preparadigmatic, for having developed a paradigm its exponents failed to become competent to transmit. But I think there simply exists next to none of such knowledge.
(Even the damned alchemists have their ball-and-stick models! And sure, we have S-expressions, had them for something like seventy years, and do we use them? Do we, hell...)
It's how you get people thinking that the web was revolutionary, and not a product of decades and generations of work toward the concept of a global communications network. But the idea that this inchoate condition holds back the industry doesn't seem to me to hold much water. The first boilers blew up a lot too, before the underlying principles were understood, and mere prolonged survival quickly came to be seen as no mean qualification in a steam engineer. How much did that "hold back" the building of railroads, from where the trains were to where the money was? That, if you care to know, is the overarching metaphor with which I like to describe this industry - though I concede the machines we build are not nearly so hazardous to us ourselves.
If I had to boil down my entire analysis of this industry to something expressible in a single adjective, the only word to fit would certainly be "irresponsible." But I'll also mention at this time that I topped out at a high-school diploma on a sub-3.0 GPA, so if as the holder of a doctorate you find you begin to become bored or uncomfortable talking with me, experience strongly suggests the option of impugning any or all of my intellect, discipline, character, and decency of motivation in speaking is always available as a resort.
Yeah, but it's also important to me I don't get mistaken for wanting to stand next to a guy who's comfortable displaying that kind of attitude. I think my original approach inadvertently encouraged that, so I'm overcompensating now.
And given a somewhat thoroughly developed analysis of this extremely young industry's place in the span of human history to date, why not talk about that when I can spare the time? I feel like I'm probably not the only person here who finds such ideas of interest.
I try to stay away as much as I can from front-end, because modern front-end is a shit show.
Unless I do the front-end for my own app - and then the order of preference is server-rendered HTML, HTMX, Web Components, Vanilla JS. Stuff I am sure I can maintain with ease 100 years from now. For CSS I would use something simple such as Bootstrap.
I kind of agree with the author about using tools you know are reliable as opposed to chasing fads. Of course you must and should learn and use new things, but proceed with care and carefully consider both the upsides and downsides.
I generally have nothing to do with webdev, but my general expectation was always "okay, it's a mess now, and everything is terrible, but everything will settle down eventually". Instead, every time I hear about it, it has somehow gotten worse.
This would happen if browser technologies/standards would, you know, standardize. However, everyone and their uncle picks and chooses what they'll support, how they'll support it, and what extensions to the standard they want to champion for "standardization".
I'd say these days the support is pretty decent across the board, if you can limit your app to a relatively modern set of evergreen browsers (or at least - to the versions from "2 years old or newer").
I find the discussion here somewhat amusing, because I am on another level of disbelief when looking at the state of things in the npm world.
Most people ask "why ship bun (whatever that is), why not just be an npm module".
I am baffled as to why we have forgotten the lost art of spitting out something from a build and then using that thing. As in, "make" producing a CSS file. Or a JavaScript file. Or multiple files. Why does npmness have to force itself into every nook and cranny of our software and consume it all?
In my webapp I use several small CSS and JavaScript libraries, and for building those I use docker containers. The npm horrors live in the docker containers, I try not to look in there, but whatever happens there, a bunch of css and js files come out. Reproducibly. Reliably. And things don't break if there is a headwind or a tailwind (ahem) today on the internet.
So instead of managing your versions in one package.json and installing your dependencies with one npm i command, you manage several different docker containers that produce builds you then consume?
What kind of horrors did you encounter that led to this abstraction?
But in this case it is me who controls when anything gets updated. And I have all of my dependencies in the container. So I get reproducible builds. Also, dependencies of one package do not interfere with dependencies of another package.
I get this. I also do this (to some extent). Instead of installing whatever tooling I need for each project, I’ll store the tools in a Docker container. Then I don’t have to think about each project and how they interfere with each other. Even better, when something inevitably goes wrong, I can nuke the container from orbit and start over.
I don't even know what it is, to my shame. I do know that when you run a container it will be calling into the CPU instructions directly, unless it concerns something like Rosetta 2 - but even then I don't know whether, say, the AVX2 instructions are emulated.
I like this post and agree with a lot of what it says. Shipping Bun at this stage is probably not a great idea, particularly with overly-optimistic CPU requirements, and the complexity of the stack is very far from ideal. At the same time some of that criticism seems slightly mistargeted. Bun didn't drop support for your hardware arbitrarily: your OS vendor dropped support for your hardware. Your "computer with 128GB of RAM and 6 CPU cores" is obsolete according to the manufacturer.
This is bad, and it should not be an acceptable situation, but a relatively short support cycle is a choice you made by buying their product. I'm not sure it's right to then blame others for following along.
The vendor does not officially publish when support ends. But from a brief piece of research I did, Jarred dropped support for macOS 12 before Apple shipped the last update for it.
And while on a formalistic, nitpicky level it is a "what are you complaining about with your old box" - in actuality I do find the idea of requiring a CPU upgrade to run a CSS pre-processor (a CSS pre-processor! come on! not an H.265 encoder. Not some sophisticated animation system. Not an AI blob. A tokenizer for HTML...) absolutely, completely excessive.
And I know why that decision came about - it is because building portable binaries for the Mac is a pain in the butt. Well, guess what - if you made the call to ship a multi-platform runtime, that backwards compat is part and parcel: Apple's LLVM versions, the linker, the dylibs and whatnot.
So no, I understand that you are "right" formally, but the situation this brought me to - I still find bad, and the choices made by the chain of maintainers - I still find inconsiderate.
Front end is definitely pretty hard nowadays. It seems like the technology should be easy but keep in mind you’re getting everything for free and this isn’t the field you have a ton of practice in. It’s grown a lot while also being backwards compatible with browsers which are some of the more complicated pieces of software.
Trust me, I get very confused and frustrated whenever I have to figure out python deps or kubernetes, but I accept it’s going to be difficult since I’m not familiar with the field.
> It seems like the technology should be easy but keep in mind you’re getting everything for free and this isn’t the field you have a ton of practice in.
While keeping in mind that this isn't a field that I have a ton of practice in, I can confidently assert that a parser for HTML input that outputs CSS classnames does not need all of the following:
1. A recent Node (+ dependencies)
2. Pnpm
3. Rust (+ dependencies)
4. Bun build environment
5. A binary size of 100MB
I'm pretty certain I needed an HTML parser at some point in the past, and it was built as a single standalone file that compiled to a single standalone ~50KB binary, and took a single day to write.
Now, fair enough, that doesn't spit out class names (because I didn't need it at the time, although looking at the query language it seems that it might be able to do that), but it's very easy to add a single flag for "spit out class names". The build dependencies are:
1. A C compiler.
There's no makefile/build-script - simply doing `$CC program.c -o program` is sufficient.
So, sure, while I don't have a ton of practice in this area, I have enough to know that the binary/program in question is heavily over-engineered. Their dependencies are artificial requirements (we require this because this other technology requires it), not functional requirements (we require this because this feature requires it).
Nice! Micro-fix: you should remove the final `;` in the `FPRINTF()` macro definition, that's supposed to be added at the callsite to keep the syntax natural (for C). :)
> Nice! Micro-fix: you should remove the final `;` in the `FPRINTF()` macro definition, that's supposed to be added at the callsite to keep the syntax natural (for C).
I agree; this was probably an oversight (done in a single day, on a weekend day, so I probably spent no more than about 6 hours on it).
If I want to make this production-ready, I'd move it into a repo, split it into a library (so I can drive it from Python or similar), add some tests, etc...
I literally needed a once-off tool to parse HTML, and had some time to spare.
So do I, but we were told that "open source is a gift", and I can only have opinions about choices made by others, not command them "do it this way instead". I am not the inventor, owner or maintainer of Tailwind nor do I aspire to be.
And as much as I prefer things be done well instead of... like that... there are only so many hills to die on - and only so many people interested in my opinion :-)
I would've literally opened an editor, written a quick parser in JS, bundled it in, and been done with the problem without going any further.
This functionality, if I am understanding correctly, is to grab all the classnames from HTML files and then pass that to a tool that includes those, and only those, in a .css file.
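A minimal sketch of that first half - scraping class names out of HTML text - might look like this (regex-based, so it will miss classes added from JS and can be fooled by commented-out markup; the function and variable names here are made up):

```javascript
// Collect the unique class names used in class="..." / class='...'
// attributes - roughly the "scan the markup" half of what a utility-CSS
// preprocessor does before deciding which rules to emit.
function extractClassNames(html) {
  const names = new Set();
  const attrRe = /\bclass\s*=\s*("([^"]*)"|'([^']*)')/gi;
  let m;
  while ((m = attrRe.exec(html)) !== null) {
    // Group 2 is the double-quoted value, group 3 the single-quoted one.
    const value = m[2] !== undefined ? m[2] : m[3];
    for (const name of value.split(/\s+/)) {
      if (name) names.add(name);
    }
  }
  return [...names];
}

const html =
  '<div class="bg-cover bg-center h-screen"><p class="text-sm bg-cover"></p></div>';
console.log(extractClassNames(html));
// → [ 'bg-cover', 'bg-center', 'h-screen', 'text-sm' ]
```

The real tool then only emits the rules for those names, which is why the output CSS stays small.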
I'll burnout in no time if I spend even 20% of my time working on configuring or debugging the environment and the rest on the development activities[1]. I can't imagine roles where time is split 50/50 (or more) between "fiddling with YAML, npm, AWS, etc" and "development".
[1] I include requirements elicitation, debugging, talking to customers, etc as "development activities".
> While keeping in mind that this isn't a field that I have a ton of practice in, I can confidently assert that a parser for HTML input that outputs CSS classnames does not need all of the following:
This is kind of an example of the Dunning-Kruger curve. You're admitting naivety while also claiming confidence, which should give pause.
The main issue I have with your argument is the framing - you're trying to make your build tool as simple as possible for a developer like yourself, which is not who the tools were built for or by. Context matters. Not everything is designed to work on 15-year-old hardware, and most developers would simply scoff at an engineer who thinks the best way to build software is with ancient hardware.
If you view everything as trying to use as few dependencies as possible (like a C programmer should) then you absolutely will think it's bananas that this used 100 MB of dependencies. But if you have a different perspective, you may see that the dependencies don't matter that much as long as it works.
In fact, by using common tools that have good interoperability, and that are only used on a developer machine, it doesn't matter too much what resources they use. Of course if you're developing on a 2010 laptop with 16GB of RAM then you may have issues, but that's not the open source developer's problem. If all the open source developers had to fit your performance constraints they would just not get much work done at all.
My main point is that developer tools don’t have to be light speed, they just have to be fast enough on modern hardware, which they absolutely are for frontend. I have enjoyed 3-5 second iterative builds on all my projects for the better part of 10 years.
> The main issue I have with your argument is the framing - you're trying to make your build tool as simple as possible for a developer like yourself, which is not who the tools were built for or by.
My argument is "Not all of the specified dependencies are necessary", and not "None of the specified dependencies are necessary".
See my other post, in which I point out that the functionality required could have been done in plain JS, using Node. That's exactly one dependency.
The other dependencies are not required to have the same feature, especially as you point out:
> My main point is that developer tools don’t have to be light speed, they just have to be fast enough on modern hardware
If we're both agreeing with that main point of yours, there is no reasonable justification for depending on anything other than Node (which is already there in the project anyway).
My PoV is less Dunning-Kruger than you claimed; additional dependencies were added for no additional value, and came with breakages. After all:
> But if you have a different perspective, you may see that the dependencies don’t matter that much as long as it works.
The whole point of the saga is that it doesn't work on an otherwise perfectly capable computer.
I was a web developer from 1996 until 2023. I jumped ship because of exactly this sort of nonsense. I still do private web development, but I do it all using native vanilla HTML, JS, and CSS. It just works.
I feel for the author, and influencers have a lot to answer for. Through a lot of this article I was thinking about next.js, another tech you almost have to know to get hired these days, and how much I dislike it for a myriad of reasons, and despite my dislike, Vercel just absolutely dominates at marketing and devrel so here we are, with every startup and even bigger companies using it.
Reading that really makes me wish there were a compiled-to-static-binary language with a reasonably easy migration path from JavaScript (sort of like Ruby -> Crystal, where the latter tries hard to look like Ruby so that at least some fraction of your code translates mechanically). There really is no reason that a tool like Tailwind should be this hard to ship as a robust and compact binary.
TL;DR: lots of software now relies on AVX2 instructions, which aren't present in older x86 hardware. This prevented the author from running some web development stuff.
Recompiling the (open source) code should have offered a solution but OP could not make this work.
On some platforms you can emulate the missing AVX2 instructions with Intel SDE, but not on Apple hardware.
> See, I know that shipping end-user software - like the Tailwind compiler - if that software is written in an interpreted language - is fucking hard.
This I don't get. This is interpreted code, most web backends integrate node, why not just ship a node module? Why in gods name ship bun.
The whole web development scene has had some of the worst software engineering I've ever seen. With the exception of the Ruby scene, we have Tilt and Nokogiri for these types of things.
> Why in gods name ship bun.
I try to avoid that strong of a language (although believe me - when this was going about I was very, very angry indeed). I think the reason was exactly removing the "npm hell" for users and not requiring any dependencies for using Tailwind.
The irony of it being, of course, that for some situations Bun turned out _more_ of a problem than shipping npm modules would be.
I was also advised to "use Tailwind as an NPM module for now". Only - I couldn't because the only "pure-JS" variant I could find was some months old, and the current builds all work through that bun-based 100MB binary.
The idea was good. The execution by the maintainers - and this is my subjective opinion, again - is not. An experimental JS runtime supported by one person is not a good fit for this.
A lot of CSS tooling feels designed to overcomplicate it so it feels more like "engineering" rather than "markup". I find Tailwind particularly bizarre, essentially writing your CSS as inline styles via single use non-semantic classes is absurd to me.
Wasn't CSS invented so we didn't have to tell every HTML element "it's blue, centered text and bold font"? Yet people willingly choose to write class="items-center font-bold text-blue-600" and run it through a compiler.
> Wasn't CSS invented so we didn't have to tell every HTML element "it's blue, centered text and bold font"? Yet people willingly choose to write class="items-center font-bold text-blue-600" and run it through a compiler.
Tailwind is far from the first framework to require it. Bootstrap has things like .center, and its entire basis is layout classes for the grid. It's responsive by default, but not semantic.
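For what it's worth, the contrast the quote complains about fits in a few lines. The class names, color values, and copy below are illustrative, not taken from any particular framework:

```html
<!-- Semantic: the class carries the meaning, the stylesheet carries the look -->
<style>
  .warning { color: #b91c1c; font-weight: bold; text-align: center; }
</style>
<p class="warning">Disk almost full</p>

<!-- Utility-first (Tailwind/Bootstrap style): the look is spelled out in the markup -->
<p class="text-center font-bold text-red-700">Disk almost full</p>
```

Both render the same; the argument is entirely about where the styling decisions live.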
I think CSS has failed because people want far more control on appearance (for branding and aesthetics) than the people devising it anticipated.
The motivation for semantic markup has also reduced because people write much less HTML. You might have some markup in what you edit, but the layout and base styles are usually generated by a system (e.g. a CMS). Even most people serving static files use a site generator.
I don't think CSS has failed - it's just a big and complex system for automatic layout. Automatic layout is hard, and if the age and power of CSS show us anything, it is that it has succeeded. Comparing CSS of today to CSS of, say, 2005, is just night and day.
I don't think doing layout using other means (formatting specifiers? spacer elements?) would have been that much different, given that it's constraints all the way down. The difficult bit is that the layout is not fixed - and it's a pain either way you do it; doing things well in Interface Builder.app was also a struggle.
Yes, "center things" is ridiculous because you have to do it "4 different ways" with different tradeoffs - and there are some areas where things are still very painful (text-box-trim just now becoming available). But "failed"?.. that is a bit harsh.
Just like everything in the good old Web, CSS was designed for documents, not interactive application themes.
I have an LLM synthesised HTML file that links in a remote tailwind.css and contains no script tags or any JS at all. It contains a lot of class="bg-cover bg-center h-screen" and class="text-3xl font-bold text-gray-800 mb-4" that I don't really understand, and this file renders just fine in a web browser.
Where do you expect that compiler to come into this? Or is it me that doesn't understand Tailwind and this is actually not Tailwind, just some random CSS with the same name that a compressed database puked up for me?
I know the feeling. The way it works is that Tailwind parses your HTML, finds `mb-4` and thus knows that it needs to emit `.mb-4 { margin-bottom: some-var-calc-thing-in-4-increments-of-your-layout }`.
So it is not so much a compiler as a preprocessor, with the idea being to output just the utility classes that you use. With "all" the utility classes being this multi-MB CDN version... if only it worked.
Is there somehow a preprocessor invoked through the link-element? Or is this about some optional stuff to produce custom Tailwind style CSS files?
No, the pre-processor has to be running continuously in "watch" mode with patterns for all your HTML (and sorta-HTML) files. When it detects a change in one of them, it chews through the changed file and emits a .css file into your web root which is the particular subset of Tailwind you are actually using.
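A toy version of that scan-and-emit loop fits in a few lines of Node. The utility table below is invented for illustration - the real Tailwind handles variants, arbitrary values, config themes and a full CSS pipeline, none of which this sketch attempts:

```javascript
// Toy version of "scan the HTML, emit only the utilities actually used".
// UTILITIES is a stand-in table, not Tailwind's real ruleset.
const UTILITIES = {
  "font-bold": "font-weight: 700;",
  "text-sm": "font-size: 0.875rem;",
  "mb-4": "margin-bottom: 1rem;",
  "items-center": "align-items: center;",
};

function extractClasses(html) {
  // Grab every class="..." attribute and split it into tokens.
  const used = new Set();
  for (const match of html.matchAll(/class="([^"]*)"/g)) {
    for (const cls of match[1].split(/\s+/)) {
      if (cls) used.add(cls);
    }
  }
  return used;
}

function emitCss(html) {
  // Emit a rule only for utilities that appear in the markup.
  const used = extractClasses(html);
  return Object.entries(UTILITIES)
    .filter(([name]) => used.has(name))
    .map(([name, decl]) => `.${name} { ${decl} }`)
    .join("\n");
}

console.log(emitCss('<p class="font-bold mb-4">hi</p>'));
// .font-bold { font-weight: 700; }
// .mb-4 { margin-bottom: 1rem; }
```

Wrap `emitCss` in a file watcher and you have the essential shape of "watch" mode: on every change, re-scan and rewrite the output `.css` file.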
> Where do you expect that compiler to come into this?
The whole article is about getting the Tailwind compiler and its dependencies up and running on a 12-year-old CPU, so it must be important somehow...
OK, but I don't recognise this from my scant experience with Tailwind, which consists of either having a framework do whatever it does with ":tailwind" (maybe it's a compiler, I don't know) - which it has done without me noticing anything at all - or putting a link-element in the page to pull in a CSS file.
So I'm wondering whether this compiler/preprocessor stuff is actually something people run into and my experience is deceptive, or if it's something happening to few people, on the margins, that I'll likely never experience.
Because it matters to me, since I'm not going to spend time digesting the Tailwind docs and whatnot and will forever stay a casual, disinterested user that cribs some stuff I don't understand from search interfaces. If I can't expect this to continue working as it has I'll have to figure out a way to ditch the Tailwind stuff I'm already using.
Yes, they run it - but for most it is folded into another build tool like PostCSS. The few of us who try to resist the "just use bundlers for everything and everywhere" do notice that there is now a bundler process firmly anchored in your development process.
And yes, being able to jettison a pre-processor for frontend things is a very necessary thing, and unless the designers have accounted for this (I only know of create-react-app having an "eject" feature, and even then - only just) you are in for a heap of fun if you need to resurrect a 5-year-old app with its dev environment.
Someone told me about Tailwind last week on IRC, and I literally do not see the point of writing class="font-bold" versus style="font-weight:bold;" especially if the former means you have to add a whole bunch more complexity to your build process (and to have a build process at all).
I wrote my website in C, so I don't know about this modern web process stuff.
Huh. Yeah, I'd never heard of Tailwind before, but this looks suspiciously like how people formatted HTML _before_ CSS existed.
If all your frontend code lives in component files then it is quite nice that all the styling is also local to those components so you don't end up with horrors in unexpected places because you changed a line in a CSS file.
Since it's already in component files you can use the templating already present in that context to fill in values from colour schemes and so on, without the hazards of cascading.
Most likely they decided to go for a binary so that people with other runtimes could use Tailwind without subscribing to the whole Node+npm ordeal. And it's actually neat when it works - for us RoR folks it makes things like tailwindcss-rails possible - where there isn't any Node stuff installed at all.
But RoR does support having node integrated and it was the default for a long time. tailwindcss-rails could've just used Node.
Instead your application is now packing a 100M binary just for your CSS framework. Even when you are already using Node.
I see no benefit, imagine if every component of your application shipped its own runtime. Imagine if erb in RoR packaged its own version of ruby. That would be crazy right?
The application is not packing the binary - it is the gem that you use to build CSS, and I _think_ it's only active during development, but don't quote me on that. In production there is a generated built asset which contains all the Tailwind-generated CSS classes you are using across your templates.
Speed, I guess?
Although I use tailwind on all my projects and not once have I thought "oh boy, development is so slow because of tailwind, I sure wish it was implemented in a compiled language!"
I think it's more like "having less moving parts". In which they kinda-sorta-succeeded, except that the choice of bun seems misguided for something where you want broad adoption. And Tailwind is, by now, a product that is clearly clamoring for broad adoption (and achieving it).
I was hesitant to give tailwind a try, like most aging web developers I couldn't stand that it breaks the "cascading" part of CSS.
But eventually I didn't have a choice, as I inherited a web app that has all of the newfangled build components that web apps come with. I love that we're coming full circle back to MVC with server components.
After getting used to it, I ended up liking Tailwind, mostly because it breaks the cascading part of CSS. There are so many unique parts to webpages these days that I think it makes sense to keep the styles close to their components, as opposed to global cascading components.
Yeah, I've been a fairly vocal curmudgeon about Tailwind within my team. Some folks really like it. They're doing the work and I trust them, so I rolled with it.
I still have qualms with Tailwind. My classic CSS sensibilities are offended, but whatever. The part that I still don't like is really what this post boils down to: a massively complex build system that creates footguns in the weirdest places.
That being said, Tailwind that's set up well in coordination with a complex design system really does feel like it's a win. Seeing that in action was an aha moment where I was able to see value that made some of the tradeoffs worth it.
The other thing that people seem to just either forget or ignore is that you can do _both_ things! Tailwind can be used for the areas that it makes sense for, and you can write a sitewide layout using insane (in a good way!) grid declarations using plain old CSS and a few extra classes in your markup.
If you're using Vue, you even get a <style> block that can be injected into any component so you're still working in the same context. It's all delightful and optional and you can still do whatever you want.
Yes, exactly. You can use TW for the utility classes and copy-paste-and-go components, and append your own over/under/side-lay layers to create whatever css you want.
What about a system that could take a really long TW className and let you give it a single name, while still letting you append more TW for here-and-there adjustments?
> That being said, Tailwind that's set up well in coordination with a complex design system really does feel like it's a win. Seeing that in action was an aha moment where I was able to see value that made some of the tradeoffs worth it.
Can you elaborate on this a little bit — was there a lot of Figma tooling, plugins to swap variables between systems, etc?
The irony is that these days, with nested selectors supported natively, there is zero need to use this heavy of a tool for this isolation.
You can use CSS modules with your bundler to isolate things to the file they're imported in:
https://vite.dev/guide/features#css-modules
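The mechanics, roughly, look like this (file names and the hashed class name are illustrative; the exact hash format depends on the bundler):

```javascript
/* button.module.css — a CSS Module: class names are scoped per file
     .primary { background: #2563eb; color: white; }
*/

// button.js — the bundler rewrites `styles.primary` into a hashed,
// file-local class name, so a `.primary` in any other file cannot
// collide with this one.
import styles from "./button.module.css";

const el = document.createElement("button");
el.className = styles.primary; // e.g. "_primary_x7k2p" after bundling
```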
I see this is absolutely something I am not going to be using as long as I can help it (it is the same setup as importing CSS and MP4 in Webpack via the import declaration), but a good tidbit to know it exists.
I've got a number of vanilla JS, HTML, CSS libraries that work only from within the browser, using the standard tags to include them.
For me, not having a build-step means that, yes, I miss out on typescript, but the upsides are easier to read, easier debugging and lower cognitive burden during both writing and reading the application.
It means I will never, like the author in the article, spend $X weeks, and then spend a further $Y dollars just to build something "modern".
My "modern looking" applications are developed, and then tested, on a machine from 2010 (1st-gen i7) that has 16GB of RAM and oodles of slow spinning-rust disks, and it isn't painful at all.[1]
[1] It is, in fact, quite nice when I see a client use their system from their own PC and it's just a little bit snappier than I am used to.
Absolutely with you on "no build". Sometimes it doesn't work (for me - where I need NPM modules with annoying math I don't want to do myself), but the basic premise of "nobuild" is absolutely and completely how it all should have always worked.
It's the JSX+webpack perversion that ruined it for us all.
Same approach over here for side projects; I only put up with "modern" Web development when it isn't my choice.
Classic ASP.NET, or Spring with vanilla JS.
I have genuinely never understood Tailwind's value proposition, other than as padding for its developers' CVs, at which I assume it excels.
We stopped inlining style attributes for a reason - is this just how the next generation needs to learn?
The comparison to inline styles is understandable but misses Tailwind's real value. Inline styles failed because they were arbitrary, inconsistent, and impossible to scale. Tailwind, by contrast, offers a constrained, standardized set of utilities that enforce consistency, simplify maintenance, and reduce CSS complexity. For example, instead of arbitrary inline styles like `style="padding:7px; margin-left:3px; font-size:14px;"`, Tailwind provides predictable classes like `p-2 ml-1 text-sm`, ensuring uniformity across your app.
Okay, that's reasonable, I see value there and have worked productively with similar systems in pure CSS and relatively lightweight Sass. The next thing I'd like to see addressed is why something so conceptually straightforward still needs so heavyweight a build step that it has to care what ISA it runs on.
(Full disclosure, I haven't worked with Tailwind since 2020 or so. Though I was obviously not too favorably impressed by it, I don't recall it having problems like this then, which if anything I'd expect to have been exacerbated by the Apple CPU architecture transition then ongoing.)
> Tailwind provides predictable classes like `p-2 ml-1 text-sm`, ensuring uniformity across your app.
This is literally what bootstrap did/does. You could also trivially do this yourself with just a tiny bit of discipline -- it's part of why variables were valuable in SASS/SCSS.
Why must we re-invent CSS to get semantic class names? Like the parent, I have yet to hear an explanation for Tailwind that doesn't sound like "someone who didn't fully understand CSS re-wrote it, and a generation of bootcamp coders adopted it without knowing why, or what came before."
I didn't say word one about "bootcamp coders." That's all you, pal.
Every generation has to invent sex and politics for itself, or at least imagine for a while in its 20s and 30s that it did. Why not the same in another preparadigmatic field like computing?
Yeah, I know. That's all me.
> Every generation has to invent sex and politics for itself, or at least imagine for a while in its 20s and 30s that it did. Why not the same in another preparadigmatic field like computing?
Because it's not "preparadigmatic"? There was a perfectly good paradigm before it was tossed out and re-written in Javascript (and then again in some other language, apparently). There have certainly been some revolutionary paradigms in my career (e.g. the web itself), but this "reinvention" of basic front-end tech doesn't qualify.
This stuff holds back the industry. It's part of why software engineers over the age of 30 are considered "old".
Yes, and it is also what doesn't really appear to happen in mature engineering fields, isn't it? No one is reinventing, I don't know, bolts. Or ohms, or amperes, or how one determines a Young's modulus. In those fields you see incremental refinements; mostly, when you see claims of major revolutions, like the recent flap over supposed high-temperature superconductivity in the "LK-99" material, mostly those receive deep suspicion that typically turns out to be justified, because these are fields where exist sizable, coherent bodies of well-tested and reliably predictive theory whose consequences can for most purposes be taken as known. If there is any similar body of knowledge in this engineering discipline then the discipline still qualifies as preparadigmatic for having developed a paradigm its exponents failed to become competent to transmit. But I think there simply exists next to none of such knowledge.
(Even the damned alchemists have their ball-and-stick models! And sure, we have S-expressions, had them for something like seventy years, and do we use them? Do we, hell...)
It's how you get people thinking that the web was revolutionary, and not a product of decades and generations of work toward the concept of a global communications network. But the idea that this inchoate condition holds back the industry doesn't seem to me to hold much water. The first boilers blew up a lot too, before the underlying principles were understood, and mere prolonged survival quickly came to be seen as no mean qualification in a steam engineer. How much did that "hold back" the building of railroads, from where the trains were to where the money was? That, if you care to know, is the overarching metaphor with which I like to describe this industry - though I concede the machines we build are not nearly so hazardous to we ourselves.
If I had to boil down my entire analysis of this industry to something expressible in a single adjective, the only word to fit would certainly be "irresponsible." But I'll also mention at this time that I topped out at a high-school diploma on a sub-3.0 GPA, so if as the holder of a doctorate you find you begin to become bored or uncomfortable talking with me, experience strongly suggests the option of impugning any or all of my intellect, discipline, character, and decency of motivation in speaking is always available as a resort.
Buddy it's inline syntax for referencing constants in a design system (same on GPA stuff, 2.8 here)
Yeah, but it's also important to me I don't get mistaken for wanting to stand next to a guy who's comfortable displaying that kind of attitude. I think my original approach inadvertently encouraged that, so I'm overcompensating now.
And given a somewhat thoroughly developed analysis of this extremely young industry's place in the span of human history to date, why not talk about that when I can spare the time? I feel like I'm probably not the only person here who finds such ideas of interest.
I try to stay away as much as I can from front-end, because modern front-end is a shit show.
Unless I do the front-end for my own app and then, the order of preference is server rendered HTML, HTMX, Web Components, Vanilla JS. Stuff I am sure I can maintain with ease 100 years from now. For CSS I would use something simple such as bootstrap.
I kind of agree with the author of using tools you know are reliable as opposed to chasing fads. Of course you must and should learn and use new things, but proceed with care and carefully consider both upside and downsides.
I generally have nothing to do with webdev, but my general expectation was always "okay, it's a mess now, and everything is terrible, but everything will settle down eventually". Instead, every time I hear about it, it has somehow gotten worse.
This would happen if browser technologies/standards would, you know, standardize. However, everyone and their uncle picks and chooses what they'll support, how they'll support it, and what extensions to the standard they want to champion for "standardization".
I'd say these days the support is pretty decent across the board, if you can limit your app to a relatively modern set of evergreen browsers (or at least - to the versions from "2 years old or newer").
I find the discussion here somewhat amusing, because I am on another level of disbelief when looking at the state of things in the npm world.
Most people ask "why ship bun (whatever that is), why not just be an npm module".
I am baffled as to why we have forgotten the lost art of spitting out something from a build and then using that thing. As in, "make" producing a CSS file. Or a JavaScript file. Or multiple files. Why does npmness have to force itself into every nook and cranny of our software and consume it all?
In my webapp I use several small CSS and JavaScript libraries, and for building those I use docker containers. The npm horrors live in the docker containers, I try not to look in there, but whatever happens there, a bunch of css and js files come out. Reproducibly. Reliably. And things don't break if there is a headwind or a tailwind (ahem) today on the internet.
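As a sketch of that setup (file names and paths are made up; the `npx tailwindcss` invocation matches the v3-era CLI, which may differ in newer releases):

```dockerfile
# Dockerfile.assets — all the npm machinery stays inside this image
FROM node:20-alpine
WORKDIR /build
COPY package.json package-lock.json ./
RUN npm ci
COPY src ./src
# Writes the compiled stylesheet into /build/dist inside the image
RUN npx tailwindcss -i src/app.css -o dist/app.css --minify
```

Then something like `docker build -t assets -f Dockerfile.assets . && docker run --rm -v "$PWD/webroot:/out" assets cp dist/app.css /out/` drops the finished file into the web root, and the host never sees node_modules at all.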
So instead of managing your versions in one package.json and installing your dependencies with one npm i command, you manage several different Docker containers that produce builds you then consume?
What kind of horrors did you encounter that led to this abstraction?
But in this case it is me who controls when anything gets updated. And I have all of my dependencies in the container. So I get reproducible builds. Also, dependencies of one package do not interfere with dependencies of another package.
I get this. I also do this (to some extent). Instead of installing whatever tooling I need for each project, I’ll store the tools in a Docker container. Then I don’t have to think about each project and how they interfere with each other. Even better, when something inevitably goes wrong, I can nuke the container from orbit and start over.
The irony of my situation was that even using a Docker container would not help, because Docker does not emulate CPU instructions.
Aren't you referring to Docker multiarch?
I don't even know what it is, to my shame. I do know that when you run a container it will be calling into the CPU instructions directly, unless it concerns something like Rosetta 2 - but even then I don't know whether, say, the AVX2 instructions are emulated.
I like this post and agree with a lot of what it says. Shipping Bun at this stage is probably not a great idea, particularly with overly-optimistic CPU requirements, and the complexity of the stack is very far from ideal. At the same time some of that criticism seems slightly mistargeted. Bun didn't drop support for your hardware arbitrarily: your OS vendor dropped support for your hardware. Your "computer with 128GB of RAM and 6 CPU cores" is obsolete according to the manufacturer.
This is bad, and it should not be an acceptable situation, but a relatively short support cycle is a choice you made by buying their product. I'm not sure it's right to then blame others for following along.
The vendor does not officially publish when support is dropped. But from a brief piece of research I did, Jarred dropped support for macOS 12 before Apple shipped the last update for it.
And while on a formalistic, nitpicky level it is a "what are you complaining about with your old box" - in actuality I do find the idea of requiring a CPU upgrade to run a CSS pre-processor (a CSS pre-processor! Come on! Not an H265 encoder. Not some sophisticated animation system. Not an AI blob. A tokenizer for HTML...) absolutely, completely excessive.
And I know why that decision came - it is because building portable binaries for the Mac is a pain in the butt. Well, guess what - if you made the call of shipping a multi-platform runtime, that backwards compat is part and parcel: Apple's LLVM versions, the linker, the dylibs and whatnot.
So no, I understand that you are "right" formally, but the situation this brought me to - I still find bad, and the choices made by the chain of maintainers - I still find inconsiderate.
Front end is definitely pretty hard nowadays. It seems like the technology should be easy but keep in mind you’re getting everything for free and this isn’t the field you have a ton of practice in. It’s grown a lot while also being backwards compatible with browsers which are some of the more complicated pieces of software.
Trust me, I get very confused and frustrated whenever I have to figure out python deps or kubernetes, but I accept it’s going to be difficult since I’m not familiar with the field.
> It seems like the technology should be easy but keep in mind you’re getting everything for free and this isn’t the field you have a ton of practice in.
While keeping in mind that this isn't a field that I have a ton of practice in, I can confidently assert that a parser for HTML input that outputs CSS classnames does not need all of the following:
1. A recent Node (+ dependencies)
2. Pnpm
3. Rust (+ dependencies)
4. Bun build environment
5. A binary size of 100MB
I'm pretty certain I needed an HTML parser at some point in the past, and it was built as a single standalone file, that compiled to a single standalone ~50KB binary, that took a single day to write.
Actually, here it is: https://gist.github.com/lelanthran/896a2d1e228d345ecea66a5b2...
Now, fair enough, that doesn't spit out class names (because I didn't need it at the time, although looking at the query language it seems that it might be able to do that), but it's very easy to add a single flag for "spit out class names". The build dependencies are:
1. A C compiler.
There's no makefile/build-script - simply doing `$CC program.c -o program` is sufficient.
So, sure, while I don't have a ton of practice in this area, I have enough to know that the binary/program in question is heavily over-engineered. Their dependencies are artificial requirements (we require this because this other technology requires it), not functional requirements (we require this because this feature requires it).
Nice! Micro-fix: you should remove the final `;` in the `FPRINTF()` macro definition, that's supposed to be added at the callsite to keep the syntax natural (for C). :)
> Nice! Micro-fix: you should remove the final `;` in the `FPRINTF()` macro definition, that's supposed to be added at the callsite to keep the syntax natural (for C).
I agree; this was probably an oversight (done in a single day, on a weekend day, so I probably spent no more than about 6 hours on it).
If I want to make this production-ready, I'd move it into a repo, split it into a library (so I can drive it from Python or similar), add some tests, etc...
I literally needed a once-off tool to parse HTML, and had some time to spare.
So do I, but we were told that "open source is a gift", and I can only have opinions about choices made by others, not command them "do it this way instead". I am not the inventor, owner or maintainer of Tailwind nor do I aspire to be.
And as much as I prefer things be done well instead of.... like that... there are only that many hills to die on - and only that many people interested in my opinion :-)
You're a much more patient person than I am.
I would've literally opened an editor, wrote a quick parser in JS, bundled it in and be done with the problem without going any further.
This functionality, if I am understanding correctly, is to grab all the classnames from HTML files and then pass that to a tool that includes those, and only those, in a .css file.
I'll burnout in no time if I spend even 20% of my time working on configuring or debugging the environment and the rest on the development activities[1]. I can't imagine roles where time is split 50/50 (or more) between "fiddling with YAML, npm, AWS, etc" and "development".
[1] I include requirements elicitation, debugging, talking to customers, etc as "development activities".
> You're a much more patient person than I am.
That is very kind of you to say, thank you!
> While keeping in mind that this isn't a field that I have a ton of practice in, I can confidently assert that a parser for HTML input that outputs CSS classnames does not need all of the following:
This is kind of an example of the Dunning-Kruger curve. You're admitting naivety while also claiming confidence, which should give pause.
The main issue I have with your argument is the framing - you’re trying to make your build tool as simple as possible to a developer like yourself which is not who the tools were built for or by. Context matters. Not everything is designed to work on 15 year old hardware and most developers would simply scoff at an engineer who thinks the best way to build software is with ancient hardware.
If you view everything as trying to use as few dependencies as possible (like a C programmer should) then you absolutely will think it's bananas that this used 100 MB of dependencies. But if you have a different perspective, you may see that the dependencies don't matter that much as long as it works.
In fact, by using common tools that have good interoperability, that are only used on a developer machine, it doesn't matter too much what resources they use. Of course if you're developing on a 2010 laptop with 16GB of RAM then you may have issues, but that's not the open source developer's problem. If all the open source developers had to fit your performance constraints then they would just not get much work done at all.
My main point is that developer tools don’t have to be light speed, they just have to be fast enough on modern hardware, which they absolutely are for frontend. I have enjoyed 3-5 second iterative builds on all my projects for the better part of 10 years.
I feel you've misidentified my argument.
> The main issue I have with your argument is the framing - you’re trying to make your build tool as simple as possible to a developer like yourself which is not who the tools were built for or by.
My argument is "Not all of the specified dependencies are necessary", and not "None of the specified dependencies are necessary".
See my other post, in which I point out that the functionality required could have been done in plain JS, using Node. That's exactly one dependency.
The other dependencies are not required to have the same feature, especially as you point out:
> My main point is that developer tools don’t have to be light speed, they just have to be fast enough on modern hardware
If we're both agreeing with that main point of yours, there is no reasonable justification for depending on anything other than Node (which is already there in the project anyway).
My PoV is less Dunning-Kruger than you claimed; additional dependencies are added for no additional value, and come with breakages. After all:
> But if you have a different perspective, you may see that the dependencies don’t matter that much as long as it works.
The whole point of the saga is that it doesn't work on an otherwise perfectly capable computer.
I was a web developer from 1996 until 2023. I jumped ship because of exactly this sort of nonsense. I still do private web development, but I do it all using native vanilla HTML, JS, and CSS. It just works.
What did you move into?
Other engineering work. Currently I'm doing integration work.
I feel for the author, and influencers have a lot to answer for. Through a lot of this article I was thinking about next.js, another tech you almost have to know to get hired these days, and how much I dislike it for a myriad of reasons, and despite my dislike, Vercel just absolutely dominates at marketing and devrel so here we are, with every startup and even bigger companies using it.
Unfortunately that's the world we live in now.
That one I try to avoid at all costs, but I hear what you are saying.
The owner of the company selling Next.js and adjacent services is firmly in that list of influencers I have muted going forward.
reading that really makes me wish there were a compiled-to-static-binary language with a reasonably easy migration path from javascript (sort of like ruby->crystal, where the latter tries hard to look like ruby so that at least some fraction of your code translates mechanically). there really is no reason that a tool like tailwind should be this hard to ship in a robust and compact binary.
That would be nice. There was also duktape, which was very compact and very embeddable - it could have been a decent carrier for Tailwind too.
> reading that really makes me wish there were a compiled-to-static-binary language with a reasonably easy migration path from javascript
I'm working on it ;-)
do tell!
Still a work in progress, so not much to tell.
TL;DR: lots of software now relies on AVX2 instructions, which aren't present in older x86 hardware. This prevented the author from running some web development tooling.
Recompiling the (open source) code should have offered a solution, but OP could not make this work.
On some platforms you can emulate the missing AVX2 instructions with Intel SDE, but not on Apple.