I have found it an absolute joy to maintain this little piece of 'machinery' for my website. The best part is that I understand every line of code in it. Every line, including all the HTML and CSS, is handcrafted. This gives me two benefits: it helps me maintain my sense of aesthetics in every byte that makes up the website, and adding a new feature or section to the site is usually quite quick.
I built the generator as a set of layered, reusable functions, so most new features amount to writing a tiny higher-level function that calls the existing ones. For example, last month I wanted to add a 'backlinks' page listing other pages on the web that link to my posts, and it took only about 40 lines of new CL code and less than 15 minutes from wishing for it to publishing it.
Over the years this little hobby project has become quite stable and no longer needs much tinkering. It mostly stays out of the way and lets me focus on writing, which I think is what really matters.
I ended up migrating back to a hosted solution explicitly because it doesn't allow me such control, so the only thing I can do is write instead of endlessly tinkering with the site.
I don't see the problem with that ;)
I ended up subscribing to Bear Blog and calling it a day. In fact I need to delete those half-baked attempts so I am never tempted to get back to them.
You should write a blog about it, like geerlingguy did.
That said, building your own static site and faffing with all the tech is generally an enjoyable distraction for most techies.
If I were maintaining multiple large sites or working with many collaborators, I'd rely on something standard or extract and publish my SSG. For a personal site, I believe custom is often better.
The current generator is around 900 SLOC of Python and 700 of Pandoc Lua. The biggest threats to stability have been my own rewrites and experimentation, like porting from Clojure to Python. I have documented its history on my site: https://dbohdan.com/about#technical-history.
I am now slowly rebuilding it in TypeScript/Bun and still finding a lot of LISP-isms, so it’s been a fun exercise and a reminder that we still don’t have a nice, fast, batteries-included LISP able to do HTML/XML transforms neatly (I tried Fennel, Julia, etc., and even added Markdown support to Joker over the years, but none of them felt quite right, and Babashka carries too much baggage).
If anyone knows of a good lightweight LISP/Scheme dialect that has baked-in SQLite and HTML parsing support, can compile to native code, and isn't on https://taoofmac.com/space/dev/lisp, I'd love to hear about it.
I also have an RSS feed generator, and it can highlight code in most programming languages, which is important to me since I write posts about many languages.
I did try Hugo before I went on to implement my own, and I took a few things from Hugo into mine, but Hugo just looked far too overengineered for what I wanted: essentially, easy templating with Markdown as the main language, but able to include content from other files (either raw HTML or more Markdown), with each file able to define variables that can be used in the templating language, which supports the usual "expression language" constructs. I used the Go built-in parser for the expression language, so it was super easy to implement!
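(For anyone curious what "the Go built-in parser" buys you here: the sketch below parses and evaluates expressions with the standard go/parser and go/ast packages. It's a hypothetical minimal version, not the commenter's actual code; a real template engine would also handle strings, comparisons, and so on.)

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "strconv"
    )

    // eval walks the parsed expression tree; this sketch handles only
    // numbers, variables, parentheses and the four basic operators.
    func eval(e ast.Expr, vars map[string]float64) (float64, error) {
        switch n := e.(type) {
        case *ast.BasicLit:
            return strconv.ParseFloat(n.Value, 64)
        case *ast.Ident:
            v, ok := vars[n.Name]
            if !ok {
                return 0, fmt.Errorf("unknown variable %q", n.Name)
            }
            return v, nil
        case *ast.ParenExpr:
            return eval(n.X, vars)
        case *ast.BinaryExpr:
            x, err := eval(n.X, vars)
            if err != nil {
                return 0, err
            }
            y, err := eval(n.Y, vars)
            if err != nil {
                return 0, err
            }
            switch n.Op {
            case token.ADD:
                return x + y, nil
            case token.SUB:
                return x - y, nil
            case token.MUL:
                return x * y, nil
            case token.QUO:
                return x / y, nil
            }
        }
        return 0, fmt.Errorf("unsupported expression")
    }

    func main() {
        // Go's own parser does the hard part.
        expr, err := parser.ParseExpr("width/2 + margin")
        if err != nil {
            panic(err)
        }
        v, err := eval(expr, map[string]float64{"width": 800, "margin": 10})
        fmt.Println(v, err) // 410 <nil>
    }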
I used this for code syntax highlighting: https://github.com/alecthomas/chroma and this for markdown: https://github.com/russross/blackfriday
The rest I implemented myself in simple to read Go code.
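(A rough sketch of how little glue those two libraries need; this assumes blackfriday v2 and chroma's quick helper, and isn't the commenter's actual code:)

    package main

    import (
        "os"

        "github.com/alecthomas/chroma/quick"
        "github.com/russross/blackfriday/v2"
    )

    func main() {
        // Markdown -> HTML in one call.
        os.Stdout.Write(blackfriday.Run([]byte("# Hello\n\nSome *markdown*.")))

        // Syntax-highlight a snippet to HTML with a named lexer and style.
        _ = quick.Highlight(os.Stdout, `fmt.Println("hi")`, "go", "html", "monokai")
    }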
Have you responded to this post on your own site? Send a webmention[0]! Note: Webmentions are moderated for anti-spam purposes, so they will not appear immediately.
I find this method to be a sweet spot: you generate content at your own pace while allowing other people to "post" to your website, without relying on a third-party service like Disqus.

The comment form is implemented as a server-side program using Common Lisp and Hunchentoot, so this is the only part of the website that is not static. The server-side program accepts each comment and writes it to a text file for manual review. Then I review the comments and add them to my blog.
In the end, the comments live like normal content files in my source code directory just like the other blog posts and HTML pages do. My static site generator renders the comment pages along with the rest of the website. So in effect, my static site generator also has a static comment pages generator within it.
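(The pattern is small in any language; here is a hypothetical Go version of the same accept-then-moderate idea. The real implementation is the Common Lisp + Hunchentoot program described above, and the paths and form fields below are made up:)

    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        os.MkdirAll("comments/pending", 0o755)
        http.HandleFunc("/comment", func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodPost {
                http.Error(w, "POST only", http.StatusMethodNotAllowed)
                return
            }
            // One text file per comment, named by timestamp, awaiting review.
            name := fmt.Sprintf("comments/pending/%d.txt", time.Now().UnixNano())
            body := fmt.Sprintf("post: %s\nname: %s\n\n%s\n",
                r.FormValue("post"), r.FormValue("name"), r.FormValue("comment"))
            if err := os.WriteFile(name, []byte(body), 0o644); err != nil {
                http.Error(w, "could not save comment", http.StatusInternalServerError)
                return
            }
            fmt.Fprintln(w, "Thanks for your comment! It will appear after review.")
        })
        http.ListenAndServe(":8080", nil)
    }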
The next time I rebuilt the blog, the page "XXX" would render a loop of all the comments, ordered by timestamp, if any were present.
The CGI would send a "thanks for your comment" reply to the submitter and an email to myself. If the comment were spam I'd just delete the static file.
Many programmers' first impulse when they start[0] to blog is to write their own blog engine. Props to you for not falling into that particular rabbit hole and actually using - as opposed to just tinkering with - that engine.
[0] you said you migrated it, implying you already had the habit of blogging, but still,
A neat little in-between of "string replacements" and "full-blown templating" is doing something like what Hiccup introduced: using built-in data structures as the template. Hiccup looks something like this:
(h/html [:span {:class "foo"} "bar"])
And you get the power of templates, something simpler than a "templating engine", and the extra benefit of being able to use your normal programming language functions to build the "templates".

I also implemented something similar myself (called niccup) that does the whole "data to HTML" shebang but with Nix and only built-in Nix types. So for my own website/blog, I basically do things like this:
nav = [ "nav"
  [ "h2" "Posts" ]
  [ "ul" (map (p: [ "li" [ "a" { href = "${p.slug}.html"; } p.title ] ]) posts) ]
]
And it's all "native" Nix, but "compiles" to HTML at build time, great for static websites :)

Thank you. That was, in fact, the inspiration behind writing my own in CL.
You list all the links to posts on the landing page: what if you have 1000 or 2000 posts? Have you thought of paginating them?
I created a test page with 2000 randomly generated entries here: <https://susam.net/code/test/2k.html>. Its actual size is about 240 kB and the compressed transfer size is about 140 kB.
It doesn't seem too bad, so I'll likely not introduce pagination, even in the unlikely event that I manage to write over a thousand posts. One benefit of having everything listed on the same page is that I can easily do a string search to find my old posts and visit them.
How large does the canvas need to get before pagination makes sense?
Modern websites are enormous in terms of how much needs to be loaded into memory. Sure, not all of it is part of the rendered document, but is there a limit to the canvas size?
I'm thinking you could probably have 100,000+ entries and still be able to use Ctrl+F on the page in a responsive way, since even at 100,000+ entries you're still only at about 10% of Facebook's "wall" application page (without additional "infinite scroll" entries).
I had a vision of what I wanted the site to look like, but the org exporter had a default style it wanted. I spent more time ripping out all the cruft that the default org-html exporter insisted on adding than it would have taken to just write a new blog engine from scratch and I wish I had.
There's a way to set a custom export template, but I couldn't figure it out from the docs. I found and still do find the emacs/org docs to be poorly written for someone who doesn't already understand the emacs internals, and I wasn't willing to spend the time to become an emacs internals expert just to write a blog.
So I lived with a half-baked org->pandoc->html solution for a while but now I'm on Jekyll and much happier with my blogging experience.
I regret it.
I decided to use an off-the-shelf theme, but it didn't quite meet my needs, so I forked it; as it happens, Hugo breaks userland relatively often, and a complex theme like the one I have requires a lot of maintenance. Like... a lot.
Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
So, advice: commit the binary you used to generate the site to source control. I know git isn't the best at binary files, but I promise you'll thank me at some point.
Unless the new version of the software includes some feature I need, I can be totally fine just running an old version forever. I could just write down the version of the SSG my site builds with (or commit it to source control) and move on with my life. It'll work as long as operating systems and CPU architectures/whatever don't change too much (and in the worst-case scenario, I'm sure the tech exists to emulate whatever conditions it needs to run). Some software is already 'finished' and there's no need to update it, ever.
This is how most build systems work: for example, you set a "rust-version" in Cargo.toml and only bump it when you explicitly want to, so a fresh checkout still builds against the old version. (Strictly speaking, "rust-version" only declares a minimum; pinning the exact toolchain used on a fresh checkout is what a rust-toolchain.toml file does.)
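(For the exact-pinning case, a sketch of that file, with the version as a placeholder; rustup picks it up on any fresh checkout:)

    # rust-toolchain.toml -- rustup installs and uses exactly this version
    [toolchain]
    channel = "1.70.0"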
Once set up, all you need to do is:
$ nix develop --command hugo regenerate
$ # version is pinned by flake.lock
The beauty of this approach is that it extends to almost any CLI tool you can think of :)

My devshell can be found here and is dead simple: https://github.com/stusmall/stuartsmall.com/blob/main/defaul...
I used Zola for my SSG and can't think of the last breaking change I've hit. I just use the pattern of locked nix devshells for everything by default. The extra tools are used for processing images or cooklang files.
For Hugo, there is Hugo Version Manager (hvm)[0], a project maintained by Hugo contributor Joe Mooring. While the way it works isn't precisely what you described, it may come close enough.
I say this as someone who uses Hugo and is regularly burned (singed) by breaking changes.
Pinning your version is great until you trip across a bug (usually rendering, in my case) and need to upgrade to get rid of it. There goes a few hours. I won’t even mention the horror of needing a test suite to make sure the rendering of your old pages hasn’t changed significantly. (I ended up with large portions of text in a code block, never tracked the root cause down… probably something to do with too much indentation inside a bulleted list. It didn’t render that way several years before, though.)
0: not all, I use cargo to manage the rust toolchain
Also, you know that you can do a binary search for the version that works for you? 0.154.0, 0.77.0, 0.115.0 ... (had to do it once myself)
[0]: https://github.com/oslc-op/website/blob/9b63c72dbb28c2d3733c...
Alternatively, there are apparently some Nix flakes that have been developed.
So, there's options.
I just recommend pinning your version and being intentional about upgrades.
Oh definitely. How can you suggest adding a binary to a git repository? It's a bad idea on many levels: it bloats the repository by several orders of magnitude, and it locks you to the chosen architecture and OS. Nope, nope, nope.
So I have a fixed Hugo version that I know works.
And when I say "solved", I actually never had the issue because, since I have no reason to upgrade Hugo, I never had to change my Docker image and never had the opportunity to risk breaking my theme.
I just assumed static website generators would be stable but well, there's always something that breaks. Terrible user experience for someone who just wants to use the generator to generate a website vs. to tinker with it as a hobby.
I'm in the process of porting my website to PHP ... but that project hasn't gone anywhere because currently everything works ;)
I've been using 4.3 to 4.4 without many issues, granted the sites I generate are simple.
My needs for a site are pretty simple, so I might just go with the custom-built one to be honest.
If it breaks, I can just go look in the mirror for the culprit =)
Looking at the comments here, a common pain point (one that I share) is config and code drift, or just losing your config file and being unable to publish a new version without redoing everything.
I made a version where everything, including the HTML templates and CSS, is built into a single static Go executable: no configuration files, everything is hard-coded.
This way as long as I have the specific executable version and the source markdown files, I can deterministically replicate my blog output structure.
The source is a directory in my Obsidian vault, and the setup supports Obsidian-style front matter.
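(In Go that's mostly the embed package doing the work; a hypothetical sketch with made-up file names, not this commenter's actual code:)

    package main

    import (
        "embed"
        "html/template"
        "os"
    )

    // Templates and CSS are compiled into the binary, so the output is a
    // pure function of the executable version and the markdown sources.
    //
    //go:embed templates/*.html static/site.css
    var assets embed.FS

    func main() {
        t := template.Must(template.ParseFS(assets, "templates/*.html"))
        t.ExecuteTemplate(os.Stdout, "post.html", map[string]string{"Title": "Hello"})
    }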
Pretty sure the version of Hugo used to generate a site is included in metadata in the generated output.
If you have a copy of the site from when it last worked, then assuming my above memory is correct you should be able to get the exact version number from that. :)
Right now, you are pretty much locked into the theme (and its version) when you set up your website for the first time.
Had the same problem. Binary search is the latest trick people use.
For SSG there's not much point in upgrading if everything works, and planned migration beats the churn in this case.
You can just print the Hugo version in an HTML comment to track it in git.
No need for the entire binary.
Just put `go run github.com/gohugoio/hugo@vX.Y.Z "$@"` into a `hugo.sh` script or similar that's in source control, and then run that script instead of the Hugo binary.
You'll need Go installed, but it's incredibly backwards compatible, so updating to newer Go versions is very unlikely to break running the old Hugo version.
> Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
I've had the same issues as you, and yes, I agree that pinning a version is very important for Hugo.
It's more useful for once-and-done throwaway sites that need some form of structure that a static site generator can provide.
Hugo-papermod, the most famous Hugo theme, doesn't support the latest 10 releases of Hugo.
So, everyone using it is locked into using an old version (e.g. via Docker).
Or use HVM and commit the .hvm file (which is just a text file with the Hugo version that you use).
I maintained a personal fork of Zola for my site (and a couple of others), and am content to just identify the Git repository and revision that’s used.
Zola updates broke my site a few times, quite apart from my patches not cleanly rebasing. I kept on the treadmill for a while, initially because of a couple of new features I did want, but then decided it wasn’t necessary. You don’t need to run the latest version; old is fine.
—⁂—
One piece of advice I would give for people updating their SSG: build your site with the old and new versions of the SSG, and diff the directories, to avoid regressions.
If there are dynamic values, normalise both builds before diffing: for example, if you have timestamp-based cachebusting, zero all such timestamps with something like `sed -i -E 's/\?t=[0-9]+/?t=0/g' **/*`. Otherwise regressions may be masked.
I caught breakages a couple of times this way. Once was due to Zola changing how shortcodes or Markdown worked, which I otherwise might not have noticed. (Frankly, Markdown is horrible for things like this, and Zola’s shortcodes badly-designed; but really it’s mostly Markdown’s fault.)
I’ve had amazing success debugging compile errors with Claude Code.
Perhaps a coding agent could help you get it going again?
A) Low-stakes application with
B) nearly no attack surface that
C) you don’t use consistently enough to keep in your head, but
D) is simple enough for an experienced software developer to do a quick sanity check on and run it to see if it works.
Hell, do it in a sandbox if you feel better about it.
If it was a Django/Node/rails/Laravel/…Phoenix… (sorry, I’ve been out of my 12+ years web dev career a short 4 years and suddenly realized I can only remember like 4 server-side frameworks/environments now) application, something that would run on other people’s devices, or really anything else that produces an executable output, then yeah fuck that vibe coding bullshit. But unless you’ve got that thing spitting out an SPA for you, then I say go for it.
* I have forked some public repository that has kept up with upstream (i.e., lots of example code to draw from)
* Upstream is publishing documentation on what's changing
* The errors are somewhat google-able
* Can be done in a VM and thrown away
* Limited attack surface anyway.
I think you're downvoted because the comment comes across as glib and handwavy (or not moving the discussion forward.. maybe?), and if it was a year ago I would probably argue against it.. but I think Claude Code can definitely help with this.
It just didn't exist as it does now back in ~2023, or whenever it was that I originally started having issues.
---
That said: it shouldn't be necessary. As others in this thread have articulated (well, imo), sometimes software is "done", and Hugo could be "done" software, except it's not; so the onus is on the operator to pin the version that matches their definition of "done"... which is not what you'd expect.
Yep. I missed the mark.
OP seemed down and out about their blog being broken. So I was trying to put the idea across as not something to be afraid of.
I should’ve just said it - LLMs are perfect for this use case.
I am the parent, and I am indeed down about it. :P
It's a fair fix today, like I mentioned, but back when it happened it wasn't available; and anyway, it shouldn't have been necessary.
Granted, mine is not sophisticated at all and simple by design. But I'm curious what kinds of issues pop up.
No need for docker.
Nobody can point to a reason why it's a good idea for a site with any interactivity now.
The supporters here all sound the same: "I had to do a whole bunch of mental gymnastics and compromises to get <basic server side site feature>, but it's worth it!" But they don't say why it was worth it, beyond "it's easy now <after lots of sunk costs>".
When you try to get at why they did it in the first place, it's universally some variation on "I got fed up with <some large server side package>, so took the nuclear SSG route <and then had to eventually rewrite or get someone else's servers involved again>".
Part of this is a me problem: a personal website should be owned by the person, IMO. A lot of people are fine to let other people own parts of their personal websites, and SSGs encourage that. What even is a personal website if it's a theme that looks like someone else's, hosted and owned on someone else's server - why not just use Facebook at that point?!
1: https://www.vice.com/en/article/this-solar-powered-low-tech-...
This is the part I'm struggling with. That's the view I held from 2016 - 2024. Practically though, it's only true if you want a leaflet website with 0 interactivity.
If you want _any_ interactivity at all (like, _any_ written data of any kind, even server or visitor logs) then you need a server or a 3rd party.
This means for 99% of personal websites with an SSG, you need a real server or a 3rd party service.
When SSGs first came around (2010 - 2015) compute was getting expensive, server sides were getting big and complex, bot traffic solutions were lame, and all the big tech companies started offering free static hosting because it was an easy free thing to offer.
Compare this to now, in 2026: it's apparently nothing special to handle the Hacker News front page on free or cheap compute. Things like Deno, Bun, even Go and Python make writing quick, small, modern server sides so much easier and safer. Cloudflare and/or CrowdSec can cover 99% of bot and traffic issues. It's possible to get multiple free, multi-GB compute instances now.
I didn't mean to imply there's some sinister plot of people maliciously encouraging others to use SSGs to steal their stuff, but that's the reality that modern personal webdev has sleepwalked into. SSGs were first sold as making things better performing and easier than they were at the time. Pretty much any "server anywhere" you own now will be able to run a handwritten server doing SSR markdown -> HTML.
So why force yourself to entertain ideas like making your visitors download multi-megabyte client-side index files to implement search, or embedded iframes and massive external JS libraries for things like comment sections? Easy-looking SSG patterns like that typically break the stuff required to keep the web open and equal, like screen readers, low-bandwidth connections and privacy. (Obviously SSR doesn't implicitly solve these, but many of these things were originally conceived with SSR in mind and so are naturally more compatible.)
Ask anyone who's been in and out of web dev for more than 15 years to really critically think about SSGs in depth, and I think they'll conclude they offer a complete solution for maybe 1% of websites, yet seem to be recommended in 99% of places as the only worthy way to do websites now. But when you pick it apart and try it, you end up in Jeff's position: statically rendered pages (the easy bit) and a TODO list of compromising options for basic interactivity. In five years' time, he'll have complex SSG pipelines running almost 24/7, or a complex mesh of dependencies on external services that are constantly changing or trying to charge him more to deal with his own creations.
I really hope I'm wrong.
If I was using macOS, then Hugo was probably very old, since I often forget to update brew packages and end up running very old software.
But, that's what I thought to do first also.
In the end, it becomes not worth the hassle, and spending time fixing it means that whatever I was going to write gets pushed out of my head, and it's very difficult to even bother.
I'll probably go back to Svbtle.
If you encode the transformations that your desired SSG should perform by writing the processing rules as plain text source code that a browser is capable of executing (i.e., an "HTML tool" or something adjacent[1][2]), then you can just publish this "static site generator" itself as yet another page on your static site.
To spell it out: running the static site generator to create a new post* doesn't need to involve anything more than hitting /new.html (or whatever) on the live site, clicking the button for the type=file input on that page, using the browser file picker to open the directory where the source to your static site lives, and then saving the resulting ZIP somewhere so the contents can be copied to whatever host you're using.
1. <https://simonwillison.net/2025/Dec/10/html-tools/>
2. <https://crussell.ichi.city/pager.app.htm>
* in fact, there's nothing stopping you from, say, putting a textarea on that page and typing out your post right there, before the new build
About a year ago I converted my 500+ post Jekyll blog to Hugo. Overall it's been a net win, but boy did I find myself looking up syntax in the docs a lot. Thankfully not so much nowadays, but figuring out the templating syntax was rough at the time.
Jeff, you don't have to set draft to false. You can separate your drafts into a different directory and use Hugo's cascade feature to handle it. Also, you don't have to update the date in your frontmatter if you prefix the file name with YYYY-MM-DD and configure Hugo to use that, along the lines of the sketch below.
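(A hedged sketch of that date configuration; double-check the date-handling docs for your Hugo version:)

    # hugo.toml -- derive the date from a YYYY-MM-DD filename prefix first,
    # then fall back to the usual front matter fields
    [frontmatter]
    date = [":filename", ":default"]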
Just a heads up: you didn't mention this in your post, but Hugo adds a trailing slash for pretty URLs. I don't know if you had them before, but it's potentially new behavior and a canonical-URL difference unless you adjust for that.
When I did the switch from Jekyll to Hugo, I wrote a 10,000 word post with the gory details and how I used Python and shell scripts to automate converting the posts, plus covered all of the gotchas I encountered. There are sections focused on the above things I mentioned too: https://nickjanetakis.com/blog/converting-my-500-page-blog-f...
Why? I'm using Jekyll and been happy with it. What am I missing?
> That gives near instant live reload when writing posts which makes a huge difference from waiting 4 seconds.
Mhm. Why? I can write all of my post and look at it only afterwards? Perhaps if there's a table or something tricky I want to check before. But normally, I couldn't care less about the reload speed.
> I use that plugin because it digests your assets by adding a SHA-256 hash to their file names. This lets me cache them with nginx. I can’t not have that feature.
Why?
My site has a fixed max width which is what most tablets or desktops will view it as.
Sentence display width is something I pay attention to. For example sometimes I don't want 1 hanging word to have its own full line (a "hanger") because it looks messy. Other times I do want it because it helps break up a few paragraphs of similar length to make it easier to skim.
Seeing exactly what my site looks like while writing lets me see these things as I'm writing and having a fast preview enables that. Waiting 4 seconds stinks.
> Why? [asset digesting and cache busting with nginx]
It helps reduce page load times for visitors and saves bandwidth for both the visitor and your server. If their browser already has the exact CSS or JS file cached locally, this lets you skip even the request to the server to check whether the cached asset is still fresh.
The concept of digesting assets with infinitely long cache header times isn't new or something I came up with. It's been around for like 10+ years as a general purpose optimization.
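(Conceptually the digesting step is just this; a hypothetical Go sketch rather than the plugin's actual code. The pages that reference the asset then get rewritten to use the digested name:)

    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // digestName turns assets/app.css into e.g. assets/app-3b2a1c9d8e7f6a5b.css.
    // The name changes if and only if the content does, so the file can be
    // served with an immutable, far-future cache header.
    func digestName(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(data)
        ext := filepath.Ext(path)
        return fmt.Sprintf("%s-%x%s", strings.TrimSuffix(path, ext), sum[:8], ext), nil
    }

    func main() {
        name, err := digestName("assets/app.css")
        if err != nil {
            panic(err)
        }
        fmt.Println(name)
    }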
Isn't your website responsive? If it is, for how many different resolutions do you check this? I think I obsess about details, but thankfully not about this!
You should be able to use `text-wrap: pretty;` to avoid orphans. If you sometimes want them on purpose and sometimes want to avoid them, that's just weird. I'm sure this is a lost fight: it'll look different with different setups anyway. Different browser, different OS, different fonts... it's a lost battle.
> It helps reduce page load speeds for visitors and saves bandwidth for both the visitor and your server.
Mhm, I just use Apache's mod_cache_disk[0]. It works fine out of the box, for all my myriad websites. Yes, sometimes there's a new file and a browser is stuck on the old version. That requires a hard refresh, and someone who doesn't know will see the old version. I don't particularly mind that.
I guess you're more of a perfectionist than I am.
[0]: https://httpd.apache.org/docs/2.4/mod/mod_cache_disk.html
SSGs are good for static sites with no interactivity or feedback. If you want interactivity or feedback, someone (you or a 3rd party service provider) is going to have to run a server.
If you're running a server anyway, it seems trivial to serve content dynamically generated from markdown - all an SSG pipeline adds is more dependencies and stuff to break.
I know there's a fair few big nerd blogs powered by static sites, but when you really consider the full stack and frequency of work that's being done or the number of 3rd party external services they're having to depend on, they'd have been better by many metrics if the nerds had just written themselves a custom backend from the start.
Jeff: I think you'll regret this. I think you'll waste 5 - 10 years trying to shoehorn in basic interactivity like comments, and land on a compromised solution.
I also used and managed Drupal and Joomla before I went to SSGs, and then finally realised there's a sensible midpoint for the pain you're feeling: you write/run a simple server that dynamically compiles your markdown; good ol' SSR. It's significantly lighter, cheaper and easier than Drupal, and lets you keep all the flexibility and features you need a server for. Don't cave to the "self-hosted tech was too hard, so I took the easy route that forces me to also use 3rd-party services instead" option.
SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
Until you have enough visitors or evil AI bots scraping your site so that it crashes, or if you're using an auto-scaling provider, costs you real money.
The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
For blogs, number of reads vs number of comments or other actions that require a server is probably on the order of 100:1 or 1000:1, even more if many of the page loads are bots/scrapers.
> SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
Looks like Jeff plans to do exactly that: https://github.com/geerlingguy/jeffgeerling-com/issues/167
There have been multiple blog posts on HN from people who've received a hug of death and handled it fine with basically free or <$10/month VMs.
A couple of gigs of RAM and 2 cores can take viral posts and the associated bots. 99% of personal websites never go viral either.
> The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
This is my exact argument against SSGs, and Jeff's post proves it: it's easy to use an SSG to generate web pages, but the moment you want comments, or any other bells and whistles, you do what Jeff's going to have to do and say you'll do it later, because there's no obvious easy solution that doesn't work against an SSG.
> Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
EXACTLY! This is my point! Why not just SSR the markdown on the server you're already running?!
This is the opposite of what Jeff and 99% of other SSG users do, they switch to SSGs to get rid of dealing with servers, only to realise they need servers or third parties, but then they're sunk-cost-fallacied into their SSG by the time they realise.
The Markdown-to-templated-HTML pipeline code is the same whether it runs on each request or on content changes, so why not choose the one that's more efficient? Serving static HTML also means that the actually important part of my personal webpage (almost) never breaks when I'm not looking.
SSGs force people into particular ways of doing all the other parts of a website by depending on external stuff. This is often contrary to long term reliability, but nobody associates those challenges with the SSG that forced the external dependencies.
It becomes a sunk-cost fallacy because people do what Jeff has done: they switch to an SSG on the promise of an easier website and proudly proclaim they're doing things the new best way. But they do the easy SSG bit (the content rendering) and then they create a TODO with all the compromised options for interactivity.
By the time they get to a feature-complete comparison, they've got a lot more dependencies and a lot less control/ownership, which inevitably leads to future frustrations.
The end destination for most nerdy personal website is a hand crafted minimal server with minimal to no dependencies.
The code surface with SSG + 1 or 2 small self-hosted OSS tools is much, much smaller than it ever was running Drupal or another CMS.
But all you've done is buy into all the pain and compromise of having to think from an SSG perspective, and that has created problems which you've already identified you'll figure out in the future.
I'm suggesting 2 or 3 small self-hosted OSS tools, where one is a small hand crafted server that basically takes a markdown file, renders it, and serves it as plain HTML with a header/footer.
This is more homogeneous, with fewer unique parts/processes, and doesn't have the constraint of dealing with an SSG.
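(To make "small" concrete: a hypothetical Go sketch of that kind of server, borrowing blackfriday from elsewhere in the thread; the header/footer and content layout are made up:)

    package main

    import (
        "net/http"
        "os"
        "path/filepath"
        "strings"

        "github.com/russross/blackfriday/v2"
    )

    var header = []byte(`<!doctype html><body><nav><a href="/">home</a></nav>`)
    var footer = []byte(`</body>`)

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Map /foo/bar to content/foo/bar.md, with basic path hygiene.
            slug := strings.Trim(filepath.Clean(r.URL.Path), "/")
            if slug == "" {
                slug = "index"
            }
            md, err := os.ReadFile(filepath.Join("content", slug+".md"))
            if err != nil {
                http.NotFound(w, r)
                return
            }
            w.Header().Set("Content-Type", "text/html; charset=utf-8")
            w.Write(header)
            w.Write(blackfriday.Run(md)) // markdown -> HTML per request
            w.Write(footer)
        })
        http.ListenAndServe(":8080", nil)
    }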
I remember my own personal pain from managing Drupal and Joomla from 2010 to 2016-ish. I did exactly the same as you in 2016 and went all in on SSGs, and in 2024 I realised all of the above. I feel like I wasted years of potential development time reinventing basic personal-website features to try and work with an SSG, and you literally have a ticket to do just that: https://github.com/geerlingguy/jeffgeerling-com/issues/167. One of your 3 solutions involves letting someone else host your comments :(
A custom framework/server is the end destination for all nerdy personal websites - I can't wait to see what you make when you realise this:)
edit/p.s. I love you and all your work. Sorry for sounding disagreeable; I'm excited to see what you learn from your SSG journey, and I hope you prove me wrong!
For me, an unstated reason for SSG is being able to scale to millions of requests per hour without scaling up my server to match.
Serving static HTML is insanely easy, even on a cheap $5-10/month VPS. Serving anything dynamic at all is an order of magnitude harder.
Though... I've been using Cloudflare since 2022, after I started getting regular targeted DDoSes (was fun initially, seeing someone target different parts of Drupal, until I just locked down everything except for the final comment pages). The site will randomly get 1-2 million requests in a few minutes, and now Cloudflare eats those quickly, instead of my VPS getting locked up.
Ideally, I'll be able to host without Cloudflare in front at some point, but every month, because of one or two attacks, the site's using 25-35 TB of bandwidth (at least according to CF).
I totally see where you're coming from, but you just said it yourself: SSGs don't actually solve any problems for you right now that Cloudflare doesn't. A site of jeffgeerling.com's scale is the archetypal site that _should_ benefit from SSGs, but Cloudflare is the easier, and arguably better, solution to the traffic/bot/scale problem.
If the problem you hit with Drupal is that it was more and less than you needed and became a headache to maintain, you will hit the same problem with Hugo eventually.
The solution to that problem is to just write your own server side that does what you need. It's so much more fun and rewarding, and I'm confident if you did it, the output would be better. With modern servers and server side technologies, you would most likely not have a problem running your minimal MD->html server on your current VPS behind Cloudflare.
Worst-case scenario, you spend your time dealing with problems or misunderstandings in your own code; at least that'll be a refreshing change from dealing with problems or misunderstandings in Drupal's or Hugo's code or decisions.
There's a time and a place for SSGs, and geerlingengineering is the perfect use case, because it has no real interactivity. But, and again, please take this with candour rather than intended offence: from a user perspective, in the process of migrating jeffgeerling.com to Hugo, comments and search have been broken. Your migration to Hugo has just begun; you did the easy Hugo part and created a post suggesting it was done. But the extra phases and tickets for comments and search suggest there's no obvious and easy answer on how to finish migrating the interactive bits to an SSG.
Custom server side software is a complete solution, SSGs restrict what your complete solution can be without being one themselves. Nobody really seems to mention this until they move away from SSGs.
(Sorry for the bluntness again! Thanks again for your content, I stumble across your stuff all the time. I migrated my dad from a Windows XP machine to a Pi, and your resources are particularly useful and accessible for both of us!)
"Why go to all the burden of serving a few k of static html directly when you could just require a globe-spanning mega cdn?"
I feel you're missing my point, which was "SSGs aren't good for sites which require interactivity because they force compromises elsewhere"; a corollary is that for any problem an SSG promises to solve, if you have interactivity on your site, you probably already have a better solution available, e.g. Jeff/bots/traffic/Cloudflare.
I really want to know because there is a Drupal 7 site that I need to migrate to something but I need good search on it (I’m using solr now).
Edit: I should have specified that I need more functionality than just word searching. I need filtering (i.e., faceted search) too. I've used an SSG that uses a JavaScript index and appreciated it, but that's not going to cut it for this project.
Of course, for my site I just redirect the user to a search engine plus `site:stavros.io`.
When does this become 1 step forward with the SSG and 2 steps back with search solutions like this?
Nothing is perfect, but the above is really simple to host, is low maintenance, and easy to secure.
Sorry, I don't mean to come across as disagreeable. You're right, nothing is perfect, and this is obviously a workable and usable solution. My issue is if we analyse it beyond "it looks like it works", it starts to look like a slightly worse solution than what we already had.
Nothing wrong with moving backwards in some direction, as long as we can clearly point to some benefit we're getting elsewhere. My issue with SSGs is that most of the benefits they offer end up undermined if you want any interactivity. This is a good example of that, as you end up compromising on build time and page-load time compared to an SSR search solution.
You don't see how the server based solution is an order of magnitude more effort to maintain, monitor, optimize, and secure compared to hosting static files? Especially if search is a minor feature of your site, keeping the hosting simple can be a very reasonable tradeoff.
Lots of blogs/sites also do fine with only category and tag filtering (most SSGs have this built in) without text search, and users can use a search engine if they need more.
I agree with you, lots of blogs/sites do fine with just tag/category. Those sites don't have interactivity and so you could SSG them and use free static hosting. We're in agreement on this. But I have explicitly been talking about sites that want interaction. Still - how many personal sites that "do fine without interactivity" are actually "interactivity was too much of a technical/maintenance challenge"?
I completely see how a server based solution is an order of magnitude more effort. We agree on that too. Running any servers efficiently typically has fixed overhead for server infra and processes. Running 3 pieces of server software or even 3 servers is harder than 1, but significantly less than 3x harder.
If you want interactivity, a server is needed. This means you are already maintaining, monitoring, optimising and securing a server solution. Adding SSG processes alongside your existing server solution does not remove any problems relating to servers, because you still have servers. It does add complexity, though, because now you're running servers plus some SSG pipelines. These architectures clash and create more work. Running 2 conflicting architectures can be a lot more than 2x harder than running 1 architecture.
If you want interactivity and don't want to run a server, then you must use a 3rd party service. This brings its own issues, especially if you have a personal website because you want to own your own stuff.
I feel like you're repeating to me "servers are hard, SSGs are good" - you can't have an SSG without a server, and you definitely can't have interactivity. If you have a server anyways for comments or search, then by using SSGs, you're not removing your need to deal with servers, you're adding an extra picky new thing to your servers. And what complex feature of a personal website does the SSG solve specifically? Generate the text/HTML. Compared to comments and search, generating html from markdown is so painfully easy. SSGs tie our hands behind our backs for the hard interactivity problems to solve the easy text rendering problem.
This seems like a really, really crystal clear point to me. But you, Jeff and Susam have all tried to say that SSGs are generally better, easier, safer or faster. I felt like that from 2016 - 2024. But when you've been pressed, all of you have ended up saying variations on:
- SSGs are great for sites without interactivity (agree, you know and I know though, most personal websites do want some form of interactivity, even just search or comments, and the only reason not to is technical hurdles.)
- Comments and Search are doable with SSGs, but not trivially, or without compromising on things which were traditionally seen as crucial for an open and accessible web (agree, see jeffs comments and search tickets)
- You use SSGs not to solve any particular problem, but just for fun. (agree, that's fine, but if that's your only reason, that's probably a sign there's virtually no technical advantages.)
- The problems you used the SSG for originally have been solved with compute power, storage, connectivity and bandwidth becoming more available over the past decade
I really appreciate and respect your, Jeff's and Susam's time to respond. I want to be in agreement with you, as you all have the experience and reputation in personal websites; I just can't see it, though. I mean, this whole HN topic is about Jeff's post "Jeffgeerling.com has been migrated to Hugo", which on the surface looks like a success for the ease of SSGs, but when you probe deeper, it's the text that's been migrated, and comments and search have been disabled.
Why are we here if SSGs are so obviously the easiest, most reliable way to run a personal website? Jeff's site currently does less than it did before, and the fixes are future TODOs that essentially say "pick a compromise" (sorry, no offence Jeff, I still think you're great!).
Assuming 500 bytes of metadata + URL per blog post, a one megabyte index is enough for 2000 blog posts.
As already mentioned, you don't generate search result pages, because client side Javascript has been a thing for several decades already.
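(For concreteness: the index is just a small artifact the build step emits next to the pages; a hypothetical Go sketch, with the entry shape made up:)

    package main

    import (
        "encoding/json"
        "os"
    )

    // Entry is roughly the ~500 bytes of metadata-plus-URL per post
    // estimated above.
    type Entry struct {
        Title string   `json:"title"`
        URL   string   `json:"url"`
        Tags  []string `json:"tags"`
    }

    func main() {
        index := []Entry{
            {Title: "Migrating to Hugo", URL: "/blog/migrating-to-hugo/", Tags: []string{"hugo"}},
            // ...one entry per post, produced during the normal build...
        }
        f, err := os.Create("public/search-index.json")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        // A few lines of client-side JS fetch this once and filter it.
        json.NewEncoder(f).Encode(index)
    }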
Your suggestion of converting markdown on every request also provides near zero value.
Writing a minimal server backend is also way easier if you separate it from the presentation part of the stack.
Based on https://news.ycombinator.com/item?id=46489563, it also seems like you fundamentally misunderstand the point. Interactivity is not the point. SSGs are used for publishing writing the same way PDF is used. Nobody sane thinks that they need a comment section in their PDFs.
Your 1 megabyte index file has just added over 2 seconds to your page load time in 30 different countries, based on average internet speeds in 2024. Chuck in some pictures, an external comment library and your other SSG hacks, and you've made your website practically unresponsive for a quarter of the planet and a bunch of low-powered devices.
Value is relative. The benefit of rendering markdown on every request is that it's easier to make things dynamic, so you don't need SSG compromises like rebuilding and reuploading multiple pages when a single link changes.
You're replying in my thread here, to my original points. My original point was that SSGs don't make sense for sites with interaction, which is why we were discussing the limitations of SSG search approaches.
> SSGs are used for publishing writing the same way PDF is used. Nobody sane thinks that they need a comment section in their PDFs.
Thank you! We're in agreement, it doesn't make sense to use SSGs for sites that require interaction. When you do, it forces the rest of your site to do the compromising search stuff like we're discussing here.
You may be interested in Backdrop, which is a maintained fork of Drupal 7.
(No experience with it personally. Only know about it from a friend who uses it.)
This is the exact opposite of what static site generation does.
If you don't use an SSG, this step is done by virtue of the server running.
For my website, I do both. Static HTML pages are generated with a static site generator. Comments are accepted using a server-side program I have written using Common Lisp and Hunchentoot.
It's a single, self-contained server-side program that fits in a single file [1]. It runs as a service [2] on the web server [3], serves the comment and subscriber forms, accepts the form submissions and writes them to text files on the web server.
[1] https://github.com/susam/susam.net/blob/0.4.0/form.lisp
[2] https://github.com/susam/susam.net/blob/0.4.0/etc/form.servi...
[3] https://github.com/susam/susam.net/blob/0.4.0/etc/nginx/http...
It looks like you did exactly what Jeff did: got fed up with big, excessive server sides and went the opposite way, writing and deploying your own minimal server-side solutions instead.
There's nothing wrong with that, but what problem were you solving with the SSG part of that solution? Why choose to regenerate a bunch of pages that might never get viewed every time anyone comments or you update your website, when you have the compute and processes to generate HTML from markdown and comments on demand?
The common sales points for SSGs are often:
- SSGs are easier (doesn't apply to you because you had to rewrite all your comment stuff anyway)
- cheaper (doesn't apply to you since you're already running a server for comments, and markdown SSR on top would be minimal)
- fewer dependencies (doesn't apply to you, the SSG you use is an added dependency to your existing server)
This largely applies to Jeff's site too.
Don't get me wrong, from a curious nerd perspective, SSGs presented the fun challenge of trying to make them interactive. But now, in 2026, they seem architecturally inappropriate for all but the most static of leaflet sites.
I was not trying to solve a specific problem. This is a hobby project and my choices were driven mostly by personal preference and my sense of aesthetics.
Moving to a fully static website made the stack simpler and more enjoyable to work with. I did not like having to run a local web server just to preview posts. Recomputing identical HTML on every request also felt wasteful (no matter how trivially) when the output never changes between requests. Some people solve this with caching but I prefer fewer moving parts, not more. This is a hobby project, after all.
There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
An added bonus is portability. The entire site can be browsed locally without a server. In fact, I use relative internal links in all my HTML (e.g., '../../foo/bar.html' instead of '/foo/bar.html') so I can browse the whole site directly from the local filesystem, from any directory, without spinning up a web server. Because everything is static, the site can also be mirrored trivially to hosts that do not support server-side programming, such as https://susam.github.io/ and https://susam.codeberg.page/, in addition to https://susam.net/. I could have achieved this by crawling a dynamic site and snapshotting it, which would also be a perfectly acceptable solution. Static site generation is simply another acceptable solution; one that I enjoy working with.
This, definitely.
I think until you experience your first few DDoSes, you don't think about the kind of gains you get from going completely static (or heavily caching, sometimes at the expense of site functionality).
I feel like I'm going crazy here: you're both advocating for SSGs, but when pressed, it sounds like the only benefits you ever saw were to problems from many years ago, or problems which you already have alternate and more flexible solutions to.
Regardless, I'm going to hunt you down and badger you both with this thread in a few years to see where we all stand on this! Thanks again :)
Your post boils down to "I evolved into this from problems I had in the 2010 - 2020 period".
> There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
I really appreciate this explanation. It mirrors my experiences. But it's literally saying you did it for performance reasons at the time, and that doesn't matter now. You then say it allowed you to avoid caching, and that's a success because caching is extra moving parts which you want to avoid.
The SSG is an extra moving part, and it basically is a cache, just at the origin rather than the boundary.
Portability is a good point. My preferred/suggested alternative to SSG is dynamic rendering and serving of markdown content, and that gives me the same portability. Most markdown editors now respect certain formats of relative wiki links.
> Static site generation is simply another acceptable solution; one that I enjoy working with.
You are right. Fun is the primary reason why I'm being so vocal about this, because I spent 5 - 10 years saying and thinking and feeling all the things SSG advocates are saying and thinking and feeling about SSGs. I spent a few years with Jekyll, then Hugo, a brief stint with 11ty, and also Quartz. But when I wanted to start from scratch and did a modern, frank, practical analysis for greenfielding from a bunch of markdown files last year, I realised SSGs don't make sense for 99% of sites, but are recommended to 99% of people. If you already build, run and maintain a server side, SSG is just an extra step which complicates interactivity.
Having said all that, I don't really share my stuff or get any traffic though, so whilst I might be having fun, you and Jeff both have the benefit of modern battle testing of your solutions! My staging subdomain is currently running a handcrafted SSR markdown renderer. I've been having fun combining it with fedify to make my stuff accessible over ActivityPub using the same markdown files as the source of truth. It might not work well or at all (I don't even use Mastodon or similar) but it's so, so much fun to mess around with compared to SSG stuff. If fun coding is your motivator, you should definitely at least entertain the throw-out-the-SSG way of thinking, y'know, for fun:)
Thank you for the kind words. I don't think your position is controversial at all. Preferring a server-side solution to serve a website, especially when one is going to do server-side programming for interactivity anyway, sounds like a perfectly reasonable position to me. If anything, I think it is my preference for two different approaches for the content and the comment forms that requires some defence, and that's what I attempted in my previous comment.
> But it's literally saying you did it for performance reasons at the time, and that doesn't matter now.
Actually, it still matters today because it's hard to know when the next DDoS attack might come.
> The SSG is an extra moving part, and it basically is a cache, just at the origin rather than the boundary.
I agree and I think that is a very nice way to put it. I realise that the 'fewer moving parts, not more' argument I offered earlier is incorrect. You are right that I am willing to incur the cost of an SSG as an additional moving part while being less willing to add a caching layer as an additional moving part. So in the end it really does come down to personal preferences. The SSG happens to be a moving part I enjoy and like using for some of the benefits (serverless local browsing, easy mirroring, etc.) I mentioned in my earlier comment. While mentioning those benefits, I also acknowledged that there are perfectly good non-SSG ways of attaining the same benefits too.
> My preferred/suggested alternative to SSG is dynamic rendering and serving of markdown content, and that gives me the same portability. Most markdown editors now respect certain formats of relative wiki links.
Yes, sounds like a good solution to me.
> If you already build, run and maintain a server side, SSG is just an extra step which complicates interactivity.
Yes, I agree with this as well. For me personally, the server-side program that runs the comment form feels like a burden. But I keep it because I do find value in the exchanges that happen in the comments. I have occasionally received good feedback and corrections there. Sometimes commenters share their own insights and knowledge which has helped me learn new things. So I keep the comments around. While some people might prefer one consolidated moving part, such as a server-side program that both generates the pages on demand and handles interactivity, I lean the other way. I prefer an SSG and then reluctantly incur an additional moving part in the form of a server-side program to handle comment forms. Since I lean towards the SSG approach, I have restricted the scope of server-side programming to comment forms only.
> If fun coding is your motivator, you should definitely at least entertain the throw-out-the-SSG way of thinking, y'know, for fun :)
I certainly do entertain it. I hope I have not given the impression that I am recommending SSGs to others. In threads like this, I am simply sharing my experience of how I approach these problems, not suggesting that my solution is better than anyone else's. Running a personal website is a labour of love and passion, and my intention here is to share that love for this hobby. The solution I have chosen is just one solution. It works for me and suits my preferences but I do not mean to imply that it is superior to other approaches. There are certainly other equally good, and in many cases better, solutions.
Anyone who wants/needs interactivity is digging themselves into a hole.
It has supported RSS since practically the beginning, and RSS later served as the foundation for a backup-and-restore system. A few years ago, I implemented SSG functionality (it exports HTML, CSS, images, etc. in a zip).
However, some people like building websites and are fine with that. Plus, it allows you to write another blog post :)
Every time a comment was added, it just generated a full-ass static web page.
I manage multiple Astro sites and eventually they have all needed at least a few small server endpoints. Doing that with Astro is very simple and it can be done without making the static portions of the site a pain to maintain.
Is anyone willing to give feedback on it whatsoever?
https://tariqdude.github.io/Github-Pages-Project-v1/visual-s...
https://gohugo.io/content-management/comments/
This includes a giant list of open source commenting systems.
I really don’t understand why people commonly say static site generators are a good candidate for building your own when there are a good selection of popular, stable options.
The only thing I don’t like about Hugo is the experience of using other people’s themes.
Getting someone else's SSG to do exactly what you want (and nothing more) takes longer than just building it yourself. Juice isn't worth the squeeze.
> It took me a weekend to write the initial Perl script that made this site. It took me another weekend to do the Rust rewrite (although porting all the content took two weeks). These are not complicated programs.
My last Hugo site took 30 minutes to deploy, not a whole weekend. Picked a theme, pasted in content.
> You want free web hosting? Hugo might be the right option.
An extremely good reason to pick Hugo especially if you don’t have the know-how to build your own SSG. You don’t need to know a programming language at all to use it.
Again, I have to throw criticism toward this idea that everyone who wants a static site generator already has the skills required to make one.
And I’m not saying it covers every use case, like someone willing to pay $100+ per year for a full-blown solution such as Shopify or Squarespace. It fits a niche: someone who wants their content online, without coding, with no hosting cost, and without relying on third-party platforms like Substack.
If you're fine for 3rd parties to own all your comments and content, why even take on the extra effort of hosting or managing or building your own website? That's basically what social media is for.
It’s going to be easier to self-host a drop-in comment system than an entire dynamic site, comment system included.
It's easier for a server to render markdown than it is for an SSG site to do server stuff.
Your suggestion for comments is to run a server/use a third party, and do SSG. My suggestion is to just run a server. One is clearly easier as it has fewer steps.
The idea that you can run a decent personal website without compromising on interactivity, and without running a server or using 3rd parties is a myth. As soon as you accept that you have to run a server, SSG becomes an unnecessary extra step.
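To illustrate the "just run a server" position: rendering markdown per request really is only a handful of lines in Go, assuming a markdown library such as blackfriday. A sketch, not anyone's actual setup; the paths are hypothetical:

```go
// Sketch: a server that renders markdown posts on request.
// Uses github.com/russross/blackfriday/v2; directory layout is made up.
package main

import (
	"log"
	"net/http"
	"os"
	"path/filepath"
	"strings"

	blackfriday "github.com/russross/blackfriday/v2"
)

func post(w http.ResponseWriter, r *http.Request) {
	// Map /posts/foo to posts/foo.md; filepath.Base rejects path traversal.
	slug := filepath.Base(strings.TrimPrefix(r.URL.Path, "/posts/"))
	md, err := os.ReadFile(filepath.Join("posts", slug+".md"))
	if err != nil {
		http.NotFound(w, r)
		return
	}
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	w.Write(blackfriday.Run(md))
}

func main() {
	http.HandleFunc("/posts/", post)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```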
With the SSG you’re just managing your static markdown for your site’s content, then you’re dropping in a tiny extra piece for comments.
The comment self-hosting is a simple Docker container with minimal configuration; an out-of-the-box, just-works type of service.
Hosting a personal website with the interactivity built in is more like managing an entire application, which is exactly the pain Jeff described with Drupal. He didn’t actually need all the interactivity that a full-blown hosted site offers.
For example, if I run a PHP, Django, or NodeJS based website, now I’ve got to periodically update my whole site’s codebase to keep up with supported/safe versions of PHP, Python, or Node/npm packages.
With the SSG plus comment system, you’re pretty much just pulling the latest Docker image for the comment system, and everything else is just static.
I think you’d also have to agree that outsourcing comments to a third-party service is potentially a simpler/cheaper exercise than outsourcing the entire site. Some of the Hugo-supported commenting systems seem to have a free tier with no ads that should handle Jeff’s traffic.
Another interactive example is Netlify’s forms system, which is included in their free product.
Hugo is very well established, but at the same time it's known for not caring much about introducing breaking changes. I think any project of that age should respect its great userbase and provide a strong guarantee of backwards compatibility for the inputs and outputs it has drawn for itself, rather than living in an eternal 0.x syndrome, calling itself too young to have found its footing in terms of stability. But I digress... and in fact, Hugo hasn't been great in that regard: themes and well-functioning inputs do break with updates, which, in this house of mine, is a big drawback.
As of today, the [docs](https://gohugo.io/templates/lookup-order/) still haven't been fully adjusted to reflect the new system:
> We did a complete overhaul of Hugo’s template system in v0.146.0. We’re working on getting all of the relevant documentation up to date, but until then, see this page.
I don't mind breaking changes, but it'd sure be nice if the documentation reflected the changes.
Best SSG is mostly down to Hugo and Pelican as far as I can tell.
I've always loved SSGs, but ActivityPub integration is also looking very attractive absent wider adoption of RSS.
I used to use Nikola, but gave up on it for two reasons. One was that it kept adding every possible feature, which meant an ever-growing number of dependencies, which gets scary. It was also hard to use: e.g. "how do I embed a YouTube video?" becomes a trawl through documentation, plugins, arbitrary new syntax, etc.
But the biggest problem, and one that can affect all SSGs, is that they try to do incremental builds by default. Nikola at the time was especially bad at this: it didn't realise that some changes mattered, such as changes to config files, themes, and templates, and it was woolly about timestamps of source files versus output files.
This meant it committed the cardinal sin: clean builds produced different output from incremental builds.
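The cure is to treat config, themes, and templates as inputs to every output file. A hedged Go sketch of the staleness check an incremental builder needs, with hypothetical paths:

```go
// Sketch: an output file is stale if it is older than its source *or*
// any global input (config, templates, theme). Paths are hypothetical.
package main

import (
	"fmt"
	"os"
	"time"
)

func mtime(path string) time.Time {
	info, err := os.Stat(path)
	if err != nil {
		return time.Time{} // treat missing files as infinitely old
	}
	return info.ModTime()
}

func stale(output, source string, globalInputs []string) bool {
	out := mtime(output)
	if out.Before(mtime(source)) {
		return true
	}
	// The step incremental builders often skip: global inputs.
	for _, in := range globalInputs {
		if out.Before(mtime(in)) {
			return true
		}
	}
	return false
}

func main() {
	globals := []string{"config.toml", "templates/base.html", "theme/style.css"}
	fmt.Println(stale("public/post.html", "posts/post.md", globals))
}
```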
Pelican has kept it simple.
I think the only downside is that the project site's documentation feels really well done... up to a point. It's like they were on a great roll documenting things thoroughly, then stopped at ~90% completion. By this I mean that the high-level stuff is well documented, but little details are missing for the last 10%. Then again, since I only use Python a little here and there, maybe that's why some things "seem" to be missing a few details. By the way, if any project maintainers are out there, please don't take offense at my opinion here; I value very much what you do (I mean, I still use Pelican)!
Other than my feelings about the documentation, if you don't need to customize too much stuff with Pelican, it's a really great SSG.
To be frank, in using it for well over a decade I think something broke only once or twice. It's pretty stable and they give plenty of deprecation warnings.
Using Zola's GitHub Actions to test/build and deploy to GitHub Pages too.
Some guides say to add submodules. I favor direct inclusion and just overriding layouts as you see fit.
Then a couple of months ago I started comparing the big SSG tools after wanting something a bit less held together with duct tape... after a lot of experimenting I settled on 11ty at the time, but I really don't enjoy writing Liquid templates, and writing reusable components using Liquid felt very clumsy. I just wish it was much easier to use the JSX based templates with 11ty, but every step of the way feels like I'm working against the "proper" way to do things.
So over the Christmas holiday I've been playing around with Next.js SSG, and while it does basically everything I want (with some complicated caveats), I also can't help feeling like I'm trying to use an oil rig to make a pilot hole when a drill would do just fine...
Anyone got any recommendations on something somewhere in between 11ty and NextJS? I'd love something that's structured similar to 11ty, but using JSX with SSG that then gets hydrated into full blown client side components.
The other thing I've been meaning to try is going back to something custom again, but built on top of something like Tempest [1] to do most of the heavy lifting of generating static pages, though that obviously wouldn't help at all with client-side components.
Doesn't Eleventy support most of the common old-school templating languages? I once converted a site using Mustache from Punch [1] to Eleventy.
Eleventy is great, and in some ways I prefer it to Hugo when build time isn't an issue. At least templates don't break, the way most of the comments here say Hugo's do.
I eventually redid the site from scratch (with a bit of vibecoding magic back when v0 got me into it) with Astro.
I found it hard to get Next to reliably treat static content as actually static (and thus cacheable), and it felt like a huge bundle of complexity for such a simple use case.
As far as comments, I’ve seen it done, but as far as I know it’s a bunch of custom code. One example I found: https://andreas.scherbaum.la/post/2024-05-23_client-side-com...
This Christmas, I redesigned my website [1] into modular "middlewares" with the idea that each middleware has its own assets and embed.FS included, so that I can e.g. use the editor to write markdown files with a dynamic backend for publishing and rendering, and then I just generate a static version of my website for the CDN of choice. All parts of my website (website, weblog, wiki, editor, etc) are modular this way and just dispatch routes on a shared servemux.
The markdown editor turned out to be a nice standalone project [2] and I customized the commonmark format a bit with a header for meta data like title, description, tags and a teaser image that is integrated with the HTML templates.
Considering that most of my content was just markdown files already, the migration was pretty quick, and it's database free so I can just copy the files somewhere to have a backup of everything, which is also very nice.
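Not the commenter's actual code, but the shape described, a module owning its assets via embed.FS and registering routes on a shared servemux, might look roughly like this in Go:

```go
// Sketch of the layout described above: each module embeds its own assets
// and registers its routes on a shared servemux. Names are hypothetical,
// and an assets/ directory is assumed to exist at build time.
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

//go:embed assets
var weblogAssets embed.FS

// Weblog is one self-contained "middleware" module.
type Weblog struct{}

func (wb *Weblog) Register(mux *http.ServeMux) {
	sub, err := fs.Sub(weblogAssets, "assets")
	if err != nil {
		log.Fatal(err)
	}
	mux.Handle("/weblog/assets/",
		http.StripPrefix("/weblog/assets/", http.FileServer(http.FS(sub))))
	mux.HandleFunc("/weblog/", wb.servePost)
}

func (wb *Weblog) servePost(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("rendered post goes here"))
}

func main() {
	mux := http.NewServeMux()
	(&Weblog{}).Register(mux) // wiki, editor, etc. would register the same way
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```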
Previously: <https://news.ycombinator.com/item?id=29384788>
I'm amazed there still isn't a decent, free, simple-to-host CMS solution with live page previews, a basic page builder, and simple hosting. Is there one?
There's https://demo.decapcms.org/ (previously Netlify CMS), which you install and run by adding a short JavaScript snippet to a page. It connects to GitHub directly to edit content. You can run it locally or online, but you need some hosting glue to connect to GitHub. Netlify provides this, but more options would be nice, and I think they limit how many total users can connect on free plans. You can get something like a page builder set up via custom content blocks, but I don't think there's a simple way to render live previews via Hugo (written in Go) in a browser. A JavaScript-based SSG would help here, but then you have to deal with the JavaScript ecosystem.
@geerlingguy Not a huge deal but noticed (scanning with https://www.checkbot.io/) if you click a tag in a post, it has an unnecessary redirect causing a speed bump that's easy to fix e.g. the post has a link to https://www.jeffgeerling.com/tags/drupal which then redirects to https://www.jeffgeerling.com/tags/drupal/.
Internal redirects are really easy to miss without checking with a tool because browsers aren't noisy about it. Lots of sites have unnecessary redirects from URLs that use http:// instead of https://, www vs no-www, and missing/extra trailing slashes, where with some redirect configs you can get a chain of 2 or 3 redirects before you get to the destination page.
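One low-tech way to spot such chains is to follow redirects hop by hop. A small Go sketch (the URL is just an example):

```go
// Sketch: print each hop in a redirect chain for a given URL.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{
		// Stop the client from following redirects so we can log each hop.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	url := "http://example.com/tags/drupal" // example URL
	for hops := 0; hops < 10; hops++ {
		resp, err := client.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
		fmt.Println(resp.StatusCode, url)
		next, err := resp.Location() // resolves relative Location headers
		if err != nil {
			break // not a redirect; we've reached the destination
		}
		url = next.String()
	}
}
```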
I was frustrated that (because my posts are less frequent) changes in Hugo and my local machine could lead to changes in what is generated.
So I attached a webhook from my website's GitHub repo to trigger an AWS Lambda which, on merge to main, automatically pulled in the repo plus version-locked Hugo and themes. It then did the static site build in-Lambda and uploaded the result to the S3 bucket that backs my website.
This created a setup where I can now publish to my website from any machine that can edit my git repo. I found it a wonderful mix of WordPress-like ability to edit my site anywhere, along with assurance that there's nothing that can technically fail* (well, a failure would likely just block the deploy, but I made copies of my dependencies where I could, so it's very unlikely).
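Heavily simplified, the Lambda in a pipeline like that reduces to "clone, run the pinned Hugo, sync to S3". A hedged Go sketch using aws-lambda-go, with the repo URL, binary paths, and bucket name all placeholders:

```go
// Very rough sketch of a webhook-triggered Lambda that builds a Hugo site.
// Assumes a pinned hugo binary and the git/aws tools are bundled with the
// function (e.g. via a layer). All names and paths are placeholders.
package main

import (
	"context"
	"os/exec"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, _ events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	steps := [][]string{
		{"git", "clone", "--depth=1", "https://example.com/me/site.git", "/tmp/site"},
		{"/opt/hugo", "--source", "/tmp/site", "--destination", "/tmp/public"},
		{"aws", "s3", "sync", "/tmp/public", "s3://my-site-bucket"},
	}
	for _, s := range steps {
		if err := exec.CommandContext(ctx, s[0], s[1:]...).Run(); err != nil {
			return events.APIGatewayProxyResponse{StatusCode: 500}, err
		}
	}
	return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}

func main() {
	lambda.Start(handler)
}
```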
But really the main thing I love is not having to maintain much of anything here... I go months without any concern about whether the website is functioning... unlike every WordPress or similar site I help my friends run.
prose is fully open source as well: https://github.com/picosh/pico
It even has a Hugo migration repo for when users want to make the jump:
https://github.com/picosh/prose-hugo
Alternatively you can use https://pgs.sh to deploy your Hugo blog using just rsync. The entire workflow starts and finishes in the terminal.
(I see what you're getting at but Astro has to be _the worst_ example. I have migrated off hugo to my own SSG but I don't hate it)
At the time it was "make my Hugo setup run, and remove the 90% of features I don't need", but I couldn't tell you how much I have diverged; it's really not a lot of code (or dependencies, though I did not reimplement the hard parts, e.g. markdown and templating).
I was thinking of making a GitHub action that uploaded the image from a given branch, deleted it, set the URL, and finally merged only the md files to main.
Or do you just check in images to GitHub and call it a day?
Usually you just store the images in the same git repo as the markdown. How you initially host the static site once generated is up to you.
The problem with storing binaries in Git is when they change frequently, since that will quickly bloat the repo. But, images that are part of the website will ~never change over time, so they don't really cause problems.
Working Copy (git for iPad) handles submodules reasonably well. I have a few repos I'm working on cloned on it and others not, so I don't use as much space.
Instead, I eventually just created a Nix environment with dependency versions compatible with what GitHub uses, and I have been pleased since.
This resonates with me! Both in terms of things I use and things I make - I want them to "just work"
Ironically, my company's blog and websites are built with Hugo.
Was reformatting old articles difficult when moving to a new way of publishing?
Today, if I were setting up a blog to host just some text and images, a vibe-coded SvelteKit project using the static adapter[1] would easily solve every single problem that I have. And I would still be able to use the full power of the web platform if I need anything further customized.
Jeff's approach of writing a separate comments application is interesting. I've seen people reuse GitHub issues to accomplish that, but that limits your audience participation to GitHub. The other obvious choice, I think, is a headless CMS. I'll be curious to see where he goes with it.
https://tariqdude.github.io/Github-Pages-Project-v1/visual-s...
I ran into all these problems just last summer trying to launch something new, so I said fk it and went all in on an overkill demo just to see what's possible.
A blog is mostly a static site that changes occasionally, so a static site generator is a much better fit. Caveat: comments; but personally, I don't want to moderate, and the WordPress site I administered for work didn't want comments either (yet even with them disabled, new comments and trackbacks somehow got into the database). When I finally got approval to turn our blog into a mostly static site, it was joyous, and the server(s) stopped burning CPU like nobody's business to serve the same handful of pages. We used PHP to manage serving translated blog entries, but that's not much slower than a fully static file when your PHP logic is dead simple and short.
This is because Google Scholar treats PDFs as first-class citizens, so your important blog posts can make their way into academia.
maybe a plugin can solve this particular gripe...
Edit: thx for answers below!
Don't know what it is either, but I'd like to go off-topic and remember with fondness the time when you could subscribe to RSS feeds directly in Safari. Google Reader was replaceable; a direct integration into the browser was not.
And for a short time, RSS was the bee's knees across the entire Internet. Apple had the best support for it, and almost put NetNewsWire out to pasture, until they just removed all baked-in RSS functionality entirely :(
But I use Reeder across Mac, iPad, and iPhone to keep up with feeds.
Though I did run it on Drupal off a Pi cluster for a few weeks as an experiment.
I built some automation that helps me test and deploy changes to S3 as well: https://github.com/carlosonunez/https-hugo-bloggen. It's clunky but works for me! Feel free to fork/PR if you're interested, of course.
It was a great move; I couldn't be happier. Running my blog is basically free (because nobody reads it, lol, but also because it's served by S3 and CloudFront and the # of monthly requests is still within Free Tier).
At the time, some folks were questioning why I built this instead of moving to Netlify. I wanted control over how my sites were deployed and didn't want to pay some provider for convenience I knew I could build myself. Netlify got AI-pilled some time ago, which makes me feel vindicated in my position.
Or I might stick it somewhere else, as an easter egg, we'll see!