Instead of learning and working with bloated toolchains, every tool in the repo has been built or added carefully for the task at hand. Simplicity provides a lot of benefits over time.
Personally, after having gone through a lot (GNU make, sh script + POSIX make, CMake + conan at work), I think a simple POSIX 2024 make + pkgconf(ig) really is enough for simple programs; and Windows devs can use WSL or whatever.
Dependencies are hell in JavaScript, Haskell, and Python too, but you don't notice because the computer works through the most common hells automatically... so developers don't hesitate to create hells. In C, they hesitate.
Look at most Linux distros.
Are you sure about that?!
As in SIGSEGV dangerous? C is a language so simple that, together with the lack of libraries, it'll drag you down into problems you weren't going to stumble into in most alternatives.
Sure, eventually you'll get your own scars and learn, the hard way, lessons that will stick and give you better intuition for how things work. But I don't feel there's a need to keep using C these days beyond learning and doing specific low-level stuff.
This reads remarkably tongue-in-cheek, especially when combined with the "dangerously productive" remark a bit later, but: navigating the maze that is picking a C runtime, a libc implementation, a compiler or potentially compiler frontend and backend, a linker, a build tool, a dependency manager, and then figuring out the linking, figuring out the dependency management, figuring out the building process, tackling version control crap (setting up submodules) if needed, then rinse and repeat for every single project ever. And if you're targeting not just *nix but also platforms people actually use (Windows), then this gets especially nasty.
Not your project? Well then you get the chance to take a deep dive into a random assortment of these, taking up however much of your time. Or you'll just try building it and squash errors as they come. Such a great and sane toolchain and workflow.
All of this is of course completely excluding the very real factor of even knowing about these things, since being even slightly confused about this stuff is an immediate ticket to hours of research on various internet forums to even get to know what it is that you don't know - provided you don't just get sick of all this crap and move on to some other toolchain instead, or start the rote reproduction of some random dood's shitty guide, with the usual outcomes.
You're misrepresenting this somewhat: for all but two of the items listed, you only need to "pick" a compiler and (as the parent said) use Make.
The dependency manager is a problem, yes, but let's not misrepresent everything.
Another thing I didn't touch on is the massive compatibility tables for language features one needs to look out for, in case they plan to make their code work on multiple mainstream compilers, which is arguably the prudent thing to do.
I really don't think that considering the C/C++ build story as complex and frustrating would be misrepresentative.
Who was talking about C++? This thread was about C, right?
(It's possible that I didn't read very closely upthread, but I'm almost certain that we were all talking about C).
I actually agree that C++ can be a pain in the rear, especially for multiplatform, and you have to pick all your options, etc.
But C? Not so much. Even on multi-platform.
As GP (or GGP, or GGGP, I forget how far upthread it is) said, with C all you need is a compiler, an editor and Make. That's pretty much still true, even if you're using third party libraries.
I am also a human. When I deal with another person I have never met before, I have a set of defaults. I override them as required.
gcc (at least) requires only an input file and will behave reasonably, generating a.out.
You might like to pass -O2 for something a bit more complicated. Beyond that then yes you will need to get into details because any engineering discipline is tricksy!
When you have multiple files in your project or are using external libraries, pretty much any other programming language will know what to do. Only C requires you to manually name them in the compilation command, even though they're already named in the file you're compiling.
$ gcc lolno.c && ./a.out
lol, no
$
$ ls
lolno.c
$ make lolno
cc lolno.c -o lolno
$ ls
lolno lolno.c
CC = gcc -std=c99 -Wall -Wextra -pedantic
CFLAGS = -g
LDFLAGS = -g
LDLIBS = -L/usr/X11R6/lib -lICE -lSM -lXpm -lX11 -lXext -lXmu -lXt -lm
VIOLA_PATH := $(shell pwd)/resources
override CFLAGS += -D_POSIX_C_SOURCE=199309L \
-D_POSIX_SOURCE -D_XOPEN_SOURCE \
-D_BSD_SOURCE -D_SVID_SOURCE \
-DDEFAULT_VIOLA_PATH='"$(VIOLA_PATH)"'\
-DVIOLA
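# Update an archive from its members: $? expands to just the prerequisites
# newer than the target, so only the changed objects get replaced.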
%.a:
$(AR) $(ARFLAGS) $@ $?
viola/viola: $(patsubst %.c,%.o,$(wildcard viola/*.c)) \
libIMG/libIMG.a \
libXPA/src/libxpa.a \
libStyle/libStyle.a \
libWWW/libWWW.a \
parmcheck.o
libIMG/libIMG.a : $(patsubst %.c,%.o,$(wildcard libIMG/*.c))
libXPA/src/libxpa.a : $(patsubst %.c,%.o,$(wildcard libXPA/src/*.c))
libStyle/libStyle.a : $(patsubst %.c,%.o,$(wildcard libStyle/*.c))
libWWW/libWWW.a : $(patsubst %.c,%.o,$(wildcard libWWW/*.c))
It's 155,000 lines of C code across 361 files. Not shown are the nearly 900 lines that make up the dependencies, but using `makedepend` (which came with my system) makes short work of that. I have a more complicated project that compiles an application written in Lua into a Linux executable. It wasn't hard to write, given that you can always add new rules to `make`, such as converting a `.lua` file to a `.o` file:

%.o : %.lua
$(BIN2C) $(B2CFLAGS) -o $*.l.c $<
$(CC) $(CFLAGS) -c -o $@ $*.l.c
$(RM) $*.l.c
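(A `bin2c`-style tool is itself just a few lines of C. Here's a hypothetical sketch of one, with the interface and output format invented; the real tool surely differs:)

    #include <stdio.h>

    /* Emit the input file as a C byte array named by argv[2]. */
    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: bin2c file name\n"); return 1; }
        FILE *in = fopen(argv[1], "rb");
        if (!in) { perror(argv[1]); return 1; }
        printf("const unsigned char %s[] = {", argv[2]);
        int c; long n = 0;
        while ((c = fgetc(in)) != EOF)
            printf("%s0x%02x,", (n++ % 12) ? " " : "\n  ", c);
        printf("\n};\nconst unsigned long %s_len = %ldUL;\n", argv[2], n);
        fclose(in);
        return 0;
    }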
Okay, that requires a custom tool (`bin2c`), but can other build systems do this? I honestly don't know.

A GUI could be done in SDL+Nuklear, GTK+, or others.
Database access from C is easy, since the databases are written in C and provide C APIs for access.
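For instance, querying SQLite from C is only a few lines. A minimal sketch, assuming a hypothetical posts.db with a posts table:

    #include <stdio.h>
    #include <sqlite3.h>

    /* sqlite3_exec() calls this once per result row. */
    static int print_row(void *ud, int ncols, char **vals, char **names)
    {
        (void)ud;
        for (int i = 0; i < ncols; i++)
            printf("%s=%s  ", names[i], vals[i] ? vals[i] : "NULL");
        putchar('\n');
        return 0;  /* nonzero would abort the query */
    }

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;
        if (sqlite3_open("posts.db", &db) != SQLITE_OK) return 1;
        if (sqlite3_exec(db, "SELECT title, date FROM posts",
                         print_row, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "query failed: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }

Build with something like `cc app.c -lsqlite3`.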
I can compile all sorts of things on my Mac LC III+ with 36 megabytes of memory. Sure, Perl takes nine days, but it works. What other language can actually be used on a machine with so little memory?
But, disclaimer: my experience in C is limited to a specific subset of scientific computing, so my perspective is definitely limited.
Other systems exist, and plenty of us are less belligerent about our choice of OS.
But none of them come preinstalled.
I've never had problems with make versions specifically. Usually the project requires a distro at most X years old because of the gcc/clang version or a shared library version. By the time you solve those, your make is new enough as well.
What are you comparing it to? C++? Java?
How do they compare to GCC in (a) size and (b) portability?
328.7M llvmbox-15.0.7+3-x86_64-linux
242.3M x86_64-linux-musl-native
169.4M x86_64-linux-musl-cross
Is compiling Python (plus ninja and PyYAML) also a requirement?
https://rustc-dev-guide.rust-lang.org/building/how-to-build-...
In general I've found rust pretty easy to build.
Does anyone have any comments on Bazel[1]? I'm kind of settling on using it whenever it's appropriate (C/C++).
Another option is PHP, which was practically made for the purpose of generating HTML. You can run it on your pages to generate static versions, and just use "include" directives with that.
Posts are all written as markdown, styling and layout are centralised with layout HTML pages and CSS.
I believe indexes can be auto-updated via post metadata.
And it's all static pages once generated, so there's no dynamic load on a database.
If all you need is include functionality, then that's the way to go for static file hosting.
No Markdown, no Perl/Python/Ruby, also no binary program, just a few simple shell scripts and plain HTML files.
[1] https://github.com/commonmark/cmark
[2] https://github.com/tiehuis/tiehuis.github.io/blob/38b0fd58a2...
Because you are a developer who enjoys coding, and you will find any and every excuse to spend more time writing code than you care to admit.
I think those who enjoy paychecks but don't enjoy coding are likely to be incompetent developers. Which is not a desirable end.
But also, enjoying coding recreationally and enjoying coding for work are very different things.
For the record, the .deb download from [1] gives you a 146MB statically linked pandoc executable that depends only on libc6 (>= 2.13), libgmp10, zlib1g (>= 1:1.1.4).
"The only work I needed to do was to write a C script (which turned out to be ~250 LOC) to call md4c functions and parse my md files, and then chuck those converted files into the GitHub Pages repo."
Am I crazy that this doesn't seem like too much wizardry to me? I mean, I have a source directory and a destination directory, which gives me a set of unambiguous file-to-file mappings. At which point I'm looking at comparing some kind of file timestamps. Add in checking whether the destination file exists at all, and it looks like 2 or 3 system calls and a couple of comparisons.
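Concretely, the whole check is something like this (a minimal sketch using stat(); the names are mine):

    #include <sys/stat.h>

    /* 1 if dst is missing or older than src, i.e. needs regenerating.
       Minimal sketch: real code would also handle stat() failing on src. */
    static int needs_rebuild(const char *src, const char *dst)
    {
        struct stat s, d;
        if (stat(src, &s) != 0) return 0;  /* no source: nothing to build */
        if (stat(dst, &d) != 0) return 1;  /* output doesn't exist yet */
        return s.st_mtime > d.st_mtime;    /* source newer than output */
    }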
However, I agree with almost everything else in the article, even just blowing things away and rebuilding every time if it is fast enough. I was musing today that with LLMs we might see a resurgence of personal libraries. Why would I take in a dependency (227 transitive dependencies for that one project dependency!) when I could implement the 20% of functionality from that bloated library that I need? In some circumstances it might be easier to describe the functionality I need to an LLM, review the code and then I have my own vendored implementation.
But if this was me, I would probably choose Go over C. The speed would be good enough, GC for memory safety and convenience, unlikely to be going away in the next 50 years, simple path from static to dynamic generation with standard lib webserver, etc.
Here is the source code (it should be easy to fork for your own site): https://github.com/miguelmartin75/miguelmartin75.github.io
- To generate a page: https://github.com/miguelmartin75/miguelmartin75.github.io/b...
- See usage: https://github.com/miguelmartin75/miguelmartin75.github.io/b...
- Install nim & run `nimble setup` in repo to install necessary packages
- Run `nim init` to enable deployment/publishing the site (using git work trees)
- Serving it locally before deploying is as easy as `nim dev`, see: https://github.com/miguelmartin75/miguelmartin75.github.io/blob/master/config.nims#L15-L16
- Serving it locally with my private notes (710 files): `nim devpriv`, see: https://github.com/miguelmartin75/miguelmartin75.github.io/blob/master/config.nims#L12-L13
- Generating the site: `nim gen`
- To publish the site: `nim publish`
I use Obsidian to write notes and can put top-level YAML metadata on each page and retrieve it in my generator, see: https://github.com/miguelmartin75/miguelmartin75.github.io/b...

For the local development HTTP server (using Mummy), you can refresh the page to regenerate the markdown. Re-generating the page on file changes and refreshing the page with websockets is on my backlog.
Previously I was using GatsbyJS, but I had a lot of trouble with dependencies constantly updating and with copying the setup to another computer, and generally it was pretty slow to generate the site. Now I can generate my site in <0.1s, even if I include all my private notes (710 markdown files).
Why? At the beginning we were frustrated trying to find one true solution--granted, the perfect solution--to what we wanted to do and how we wanted to work for 20 years. We found that C interfaced with everything, worked everywhere, every programmer knew it, and it could do anything we wanted it to do. If there wasn't a library for something, we could easily make our own.
I could go on and on but I won't. I closed shop just a few years ago cause my reasons for doing that work went away.
Many languages have markdown parsers in them, produce binaries, and are portable.
C satisfies all their priorities, and there are not many other languages that do as well, maybe none, and certainly none better.
I am immediately intrigued by doing code review in Vim (from their posts) as well as by using vale.py to lint my prose (from their GitHub).
Sure, if the memory safe language comes with a package manager that happens to have postinstall hooks, the picture might be different.
But scanning some go packages to see that they don't do I/O is rather feasible.
I didn't have problems with a C++ toolchain, I just go g++ *.cpp. No make, no cmake, no autotools (shudder). It's fine for small projects with a handful of source files and a few thousand LOC.
I have nothing against directly implementing this in C, or just writing markdown files and having them auto-translated into HTML.
I just don't like his argument that it must be fast to recompile everything. I am writing this comment, and it is going to take me a few minutes. After all, I am thinking about what I am writing, typing it out, thinking some more. And then the deploy is the thing that got the author? Really? Time to server is an important metric?
Let's be real, nothing would be lost if it took 5 minutes. He would send it off and 5 minutes later, his phone buzzes, notifying him that it is done.
Alright, he found a way to do it in under 10 seconds. Cool. Good for him. Now that it is built, there is nothing bad about it. I just don't see how this was ever an important KPI.
I think this was just a fun challenge rather than to get any kind of useful advantage.
C isn't necessarily fast to recompile, either. With too much preprocessor magic, compilation can slow down a lot.
And a lot of the reason for that, is that C's preprocessor is inherently weak – e.g. it doesn't explicitly have support for basic stuff like conditionals inside macros – but you can emulate it with deeply nested recursion – which blows up compilation times enormously. If C's preprocessor was a bit stronger and natively had support for conditionals/etc, one wouldn't need these expensive hacks and it would be a lot faster.
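For the curious, the emulation looks something like this (a minimal sketch of the token-pasting trick; real macro libraries build much deeper towers on top of it):

    /* Emulating an if/else in the preprocessor via token pasting:
       IIF(1)(a, b) expands to a; IIF(0)(a, b) expands to b. */
    #define CAT_(a, b) a##b
    #define CAT(a, b) CAT_(a, b)
    #define IIF(cond) CAT(IIF_, cond)
    #define IIF_1(t, f) t
    #define IIF_0(t, f) f

Every layer of this kind of machinery multiplies the expansion work the preprocessor has to do, which is where the compile time goes.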
Example of real world project where C preprocessor slowed things down a lot is the Linux kernel: https://lwn.net/Articles/983965/
Make determines which files changed and their dependencies and reexecutes your script to regenerate the necessary output.
If, for example, their posts were stored as Word documents or Google Docs, they would have far fewer options for building and deploying their blog.
But because they're using the (comparatively) simple markdown format, they have a lot more options.
I do something similar and I've migrated my blog from Jekyll + Github Pages to Zola + Netlify without too many issues. If Zola or Netlify go away, I'm confident I can migrate again easily.
If that's a use case you want to support because of convenience, I really don't see a reason to use one over the other, other than personal preference.
I'm using it for a couple of static sites, and you can ignore most of the complexity and just treat it pretty much like an md converter. There are minimalist templates available as well that get your site going.
It's lightning fast too, and whether there's a GC at the back of it seems irrelevant.
Longevity isn't just for programs, it's also for content...
Almost 30 years later, there are plenty of more modern tools that will do a much better job than custom C code.
The blog section is a memory-mapped set of C structures, but in the next rewrite I'll directly embed blog posts in the program, too.
I did it this way instead of using a static site generator because I realized that there's no such thing as a static site. My server has to run code to serve any website, including a "static site", so why arbitrarily limit it to the code for loading static files? https://www.immibis.com/blog/12
Not a single library is used, besides libc - at least in the backend code. I use nginx as a reverse proxy and SCGI as the interface from nginx to the C backend.
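The memory-mapped part is less exotic than it sounds. A minimal sketch (the struct layout here is invented for illustration, not the actual format):

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Hypothetical fixed-layout post record. */
    struct post { char title[64]; char path[128]; long timestamp; };

    /* Map a file of post records read-only; *count gets the record count. */
    static struct post *map_posts(const char *file, size_t *count)
    {
        int fd = open(file, O_RDONLY);
        if (fd < 0) return NULL;
        struct stat st;
        if (fstat(fd, &st) != 0) { close(fd); return NULL; }
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);  /* the mapping stays valid after close */
        if (p == MAP_FAILED) return NULL;
        *count = (size_t)st.st_size / sizeof(struct post);
        return p;
    }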
The short version: the author eventually decided that statically generating the site would require literally only a Markdown parser and a wrapper to iterate over .md files and fix internal links and add a common header and footer. The author then found a Markdown parser implemented in C and therefore interfaced with it in C (which of course involves a bunch of buffer manipulation and memory management and looks incredibly clunky compared to any obvious choice of language).
> I looked for a better alternative and found md4c, which is a parser written in C with no dependencies other than the standard C library. It also has only one header file and one source file, making it easy to embed it straight into any C project.
I see three of each looking at the repository's src/ folder.
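For reference, driving md4c's bundled HTML renderer looks roughly like this (written from memory of the API, so treat it as a sketch):

    #include <stdio.h>
    #include "md4c-html.h"  /* the optional HTML renderer shipped with md4c */

    /* md4c calls this with successive chunks of rendered HTML. */
    static void write_chunk(const MD_CHAR *out, MD_SIZE len, void *userdata)
    {
        fwrite(out, 1, len, (FILE *)userdata);
    }

    static int render_md(const char *md, size_t len, FILE *out)
    {
        /* 0, 0: default parser and renderer flags */
        return md_html(md, (MD_SIZE)len, write_chunk, out, 0, 0);
    }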
> My website converter script, which is all in this 250 LOC source file (less md4c) is feature-complete and runs on any compiler that supports the C standard from 1999 onwards. There's no platform-dependent code and it's portable to Windows, Linux, and MacOS.
Feature-complete if you're the author, I guess. Portable if you're willing to edit source code and recompile just to change file path configuration. Except for the part that shells out to `find` (with the command prepared in a statically allocated buffer using sprintf, without bounds checking). As far as I can tell, <ftw.h> is also Linux-, or at least POSIX-specific.
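The bounded version costs one extra check. A hypothetical sketch (buffer size and command are invented; the actual code differs):

    #include <stdio.h>
    #include <stdlib.h>

    /* Build the command with an explicit bound and a truncation check. */
    static int run_find(const char *src_dir)
    {
        char cmd[4096];
        int n = snprintf(cmd, sizeof cmd, "find '%s' -name '*.md'", src_dir);
        if (n < 0 || (size_t)n >= sizeof cmd)
            return -1;  /* truncated or failed: refuse to run a mangled command */
        return system(cmd);
    }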
250 LOC if you ignore the author's own array, file and string libraries as dependencies.
And, again: that 250 LOC is way more than it would be to do the equivalent in a higher-level language.
> It seems better than some alternatives like pelican which is written in Python and thus will be slower to parse md files.
You'd think experienced C programmers would know to benchmark things and not make assumptions. And if the goal is to find just a markdown parser, this simply doesn't reflect looking hard enough. For example, I can easily find https://pypi.org/project/Markdown/ ; in fact it's what gets used under the hood by Nikola (and I'd guess Pelican too). It's less than 1MB installed, pure Python with no further dependencies. I imagine the story is similar for Hugo. Sure, I wouldn't at all be surprised if it's slower, but it seems like there's considerable acceptable slack here. It's good enough to enable Nikola to implement an auto-refresh using a 1-second watchdog timer on the source directory, for example, despite that it's also feeding the Markdown result into a full templating system.
> I've been writing about things on a personal website since 2017.
> GitHub pages didn't exist at the time
GitHub Pages existed before 2017. Like, almost a decade before.
So much of this post feels like alternate history and gloating. It's cool that he wrote a wrapper around an existing library, but his main argument is that C will still be around in "the upcoming decades." I'm willing to bet money on Hugo/Go existing and still working too -- or any number of other static-site generators.
edit: This can all be done by just pushing markdown files to GitHub anyway. GitHub will automatically generate the HTML if you do some basic configuration in your repo.
Yea, if you change something in the header or footer, build speed matters, but when the HTML file is done, it's done.
Sending pre-compressed html and css (with a sprinkling of JS at best to enhance the experience after it's already in the browser) is kilobytes of work at most (irrespective of how much it unpacks to on the client).
The biggest lie the modern web sold us is that information needs to be packed up in complex datatypes and be interpreted on the client before it's information again.