A few years ago, I decided to migrate my personal website to a Common Lisp (CL) based static site generator that I wrote myself. In hindsight, it is one of the best decisions I have made for my website. It started out at around 850 lines of code and has gradually grown to roughly 1500 lines. It handles statically rendering blog posts, arbitrary pages, a guestbook, comment pages, tag listings, per-tag RSS feeds, a consolidated RSS feed, directory listing pages and so on.
I have found it an absolute joy to maintain this little piece of 'machinery' for my website. The best part is that I understand every line of code in it. Every line of code, including all the HTML and CSS, is handcrafted. This gives me two benefits. It helps me maintain my sense of aesthetics in every byte that makes up the website. Further, adding a new feature or section to the site is usually quite quick.
I built the generator as a set of layered, reusable functions, so most new features amount to writing a tiny higher level function that calls the existing ones. For example, last month I wanted to add a 'backlinks' page listing other pages on the web that link to my posts and it took me only about 40 lines of new CL code and less than 15 minutes from wishing for it to publishing it.
Over the years this little hobby project has become quite stable and no longer needs much tinkering. It mostly stays out of the way and lets me focus on writing, which I think is what really matters.
The only problem I find with self-hosted blogs, at least for personalities like mine, is that I spend more time tinkering with the blog engine than actually blogging.
I ended up migrating back to a hosted solution explicitly because it doesn't allow me such control, so the only thing I can do is write instead of endlessly tinkering with the site.
Honestly this is so true. I have a few blogs for various reasons, and the hosted ones are where I post most because it’s so effortless to do. There’s so much less inertia. You can go even further and post by email (I use Pagecord) which removes virtually all barriers to posting.
That said, building your own static site and faffing with all the tech is generally an enjoyable distraction for most techies
I ended up separating out a "plumbing" blog, from the "real" blogs, with no discussion of the tinkering allowed on the real ones - so the plumbing blog grew in details but didn't "count" for the non-meta blogging I was trying to accomplish. A little bit of sleight-of-hand but it worked for me...
In my case it was less about the discussion of the tinkering and more the tinkering itself. I'd spend all my blogging time tinkering with the site, to the point where it's never ready and never actually deployed. As of right now in my projects folder I have an (actually finished and usable) Ghost theme and a handful of Wagtail blog projects in various states of functionality. None of them have actually been deployed. (At least I learnt enough Wagtail to be dangerous, so I guess that's a win.)
I ended up subscribing to Bear Blog and calling it a day. In fact I need to delete those half-baked attempts so I am never tempted to get back to them.
Ha. Well, https://taoofmac.com was ported to Hy (https://github.com/rcarmo/sushy) in a week, then I eventually rewrote that in plain Python to do the current static site generator — so I completely get it.
I am now slowly rebuilding it in TypeScript/Bun and still finding a lot of LISP-isms, so it’s been a fun exercise and a reminder that we still don’t have a nice, fast, batteries-included LISP able to do HTML/XML transforms neatly (I tried Fennel, Julia, etc., and even added Markdown support to Joker over the years, but none of them felt quite right, and Babashka carries too much baggage).
If anyone knows about a good lightweight LISP/Scheme dialect that has baked in SQLite and HTML parsing support, can compile to native code and isn’t on https://taoofmac.com/space/dev/lisp, I’d love to know.
I'm also happy with the freedom and stability of a single-purpose static site generator. My previous project, Tclssg, was public and reusable from the start. This had big upsides: I learned to work with users and was compelled to implement features I wouldn't have. I actually wrote documentation. Seeing others use it was one of the best parts of the work. However, it also put constraints on what I could do. I couldn't easily throw away or radically change features, like how templates are rendered by default. With an SSG that's only for my site, I can.
If I were maintaining multiple large sites or working with many collaborators, I'd rely on something standard or extract and publish my SSG. For a personal site, I believe custom is often better.
The current generator is around 900 SLOC of Python and 700 of Pandoc Lua. The biggest threats to stability have been my own rewrites and experimentation, like porting from Clojure to Python. I have documented its history on my site: https://dbohdan.com/about#technical-history.
I did the same thing, but implemented my site generator in Go.
My site has grown by a lot over the years, but I can still build it from scratch (from MD files, HTML snippets and static files) in less than one second!
I also have an RSS feed generator, and it can highlight code in most programming languages, which is important to me as I write posts on many languages.
I did try Hugo before I went on to implement my own, and I got a few things from Hugo into mine, but Hugo just looked far too overengineered for what I wanted (essentially, easy templating with markdown as the main language, but able to include content from other files in either raw HTML or markdown, with each file able to define variables that can be used in the templating language, which supports the usual "expression language" constructs). I used the Go built-in parser for the expression language, so it was super easy to implement!
The comment form is implemented as a server-side program using Common Lisp and Hunchentoot. So this is the only part of the website that is not static. The server-side program accepts each comment and writes to a text file for manual review. Then I review the comments and add them to my blog.
In the end, the comments live like normal content files in my source code directory just like the other blog posts and HTML pages do. My static site generator renders the comment pages along with the rest of the website. So in effect, my static site generator also has a static comment pages generator within it.
Not the poster, but what I did was to have a CGI script which would receive incoming comments and write them to "/srv/data/blog/comments/XXX/TIMESTAMP.txt" or similar.
The next time I rebuilt the blogs the page "XXX" would render a loop of all the comments, ordered by timestamp, if anything were present.
The CGI would send a "thanks for your comment" reply to the submitter and an email to myself. If the comment were spam I'd just delete the static file.
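For anyone who wants to try the same pattern, here is a minimal sketch of what such a CGI receiver could look like in Python (the field names, paths, and sanitising are my assumptions; the email notification step is left out):

```python
#!/usr/bin/env python3
# Hypothetical sketch of the CGI flow described above: read a form-encoded
# POST body, write the comment to a per-post directory keyed by timestamp,
# and reply with a plain-text "thanks" page.
import os
import pathlib
import sys
import time
from urllib.parse import parse_qs

length = int(os.environ.get("CONTENT_LENGTH") or 0)
fields = parse_qs(sys.stdin.read(length))
post_id = fields.get("post", ["unknown"])[0]
comment = fields.get("comment", [""])[0]

# Keep the post id filesystem-safe so it can't escape the comments directory.
post_id = "".join(c for c in post_id if c.isalnum() or c in "-_") or "unknown"

comment_dir = pathlib.Path("/srv/data/blog/comments") / post_id
comment_dir.mkdir(parents=True, exist_ok=True)
(comment_dir / f"{int(time.time())}.txt").write_text(comment)

print("Content-Type: text/plain")
print()
print("Thanks for your comment! It will appear after the next rebuild.")
```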
On mine, I don't. Any interactivity is too much hassle for me to worry about wrt moderation etc. I also don't particularly care what random people have to say. If my friends like what I wrote, they can tell me on Signal or comment on the Bluesky post when I share the link.
> I decided to migrate my website to a Common Lisp based static site generator that I wrote myself.
Many programmers' first impulse when they start[0] to blog is to write their own blog engine. Props to you for not falling into that particular rabbit hole and actually using - as opposed to just tinkering on - that engine.
[0] you said you migrated it, implying you already had the habit of blogging, but still.
Writing a blog generator is not only fun but also grants ultimate control, such as static syntax highlighting, equation rendering and custom build steps. Highly recommend!
Similar to my "Go 101" books website: about 1000 lines of Go code (it started at 500 lines nine years ago). The whole website can be built into a single Go binary.
Your wife’s Python version is quite impressive. It wouldn’t have occurred to me to do the simple thing and just do some string-replacement targeted at a narrow use-case instead of using a complicated templating engine.
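I imagine something in this spirit (a toy sketch; the placeholder syntax and file names are made up):

```python
import pathlib

# Toy sketch of templating by plain string replacement for a narrow use case.
template = "<html><head><title>{{title}}</title></head><body>{{body}}</body></html>"
page = (template
        .replace("{{title}}", "Hello, world")
        .replace("{{body}}", "<p>Hi!</p>"))
pathlib.Path("hello.html").write_text(page)
```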
> just do some string-replacement targeted at a narrow use-case instead of using a complicated templating engine.
A neat little in-between of "string replacements" and "full-blown templating" is doing something like what hiccup introduced: basically using built-in data structures as the template. Hiccup looks something like this:
(h/html [:span {:class "foo"} "bar"])
And you get both the power of templates, something easier than "templating engine" and with the extra benefit of being able to use your normal programming language functions to build the "templates".
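For anyone unfamiliar with hiccup, here is a rough Python analogue of the same idea (a toy sketch, not hiccup itself or any real library):

```python
from html import escape

def render(node):
    # Strings are text nodes; everything else is [tag, optional-attrs, *children].
    if isinstance(node, str):
        return escape(node)
    tag, rest = node[0], node[1:]
    attrs = {}
    if rest and isinstance(rest[0], dict):
        attrs, rest = rest[0], rest[1:]
    attr_str = "".join(f' {k}="{escape(str(v))}"' for k, v in attrs.items())
    children = "".join(render(child) for child in rest)
    # Void elements (<br>, <img>, ...) are left out for brevity.
    return f"<{tag}{attr_str}>{children}</{tag}>"

print(render(["span", {"class": "foo"}, "bar"]))  # <span class="foo">bar</span>
```

Because the templates are just lists, ordinary functions can build and transform them, which is the point the hiccup example above is making.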
I also implemented something similar myself (called niccup) that also does the whole "data to html" shebang, but with Nix and only built-in Nix types. So for my own website/blog, I build pages out of the same kind of nested data structures.
Thank you. My current home page has about 70 entries. The HTML size is about 7 kB and the compressed transfer size is about 3 kB.
I created a test page with 2000 randomly generated entries here: <https://susam.net/code/test/2k.html>. Its actual size is about 240 kB and the compressed transfer size is about 140 kB.
It doesn't seem too bad, so I'll likely not introduce pagination, even in the unlikely event that I manage to write over a thousand posts. One benefit of having everything listed on the same page is that I can easily do a string search to find my old posts and visit them.
How large does the canvas need to get before pagination makes sense?
Modern websites are enormous in terms of how much needs to be loaded into memory. Sure, not all of it is part of the rendered document, but is there a limit to the canvas size?
I'm thinking you could probably have 100,000+ entries and still be able to use Ctrl+F on the site in a responsive way, since even at 100,000+ entries you're still only at about 10% of Facebook's "wall" application page (without additional "infinite scroll" entries).
I started blogging with emacs and an org-based solution, and it was horrid.
I had a vision of what I wanted the site to look like, but the org exporter had a default style it wanted. I spent more time ripping out all the cruft that the default org-html exporter insisted on adding than it would have taken to just write a new blog engine from scratch and I wish I had.
There's a way to set a custom export template, but I couldn't figure it out from the docs. I found and still do find the emacs/org docs to be poorly written for someone who doesn't already understand the emacs internals, and I wasn't willing to spend the time to become an emacs internals expert just to write a blog.
So I lived with a half-baked org->pandoc->html solution for a while but now I'm on Jekyll and much happier with my blogging experience.
I made the jump to Hugo too (from a managed service: svbtle) a long time ago, but I'll be really honest...
I regret it.
I decided to use an off-the-shelf theme, but it didn't quite meet my needs and I forked it; as it so happens, Hugo breaks userland relatively often, and a complex theme like the one I have requires a lot of maintenance. Like.. a lot.
Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
So, advice: commit the binary you used to generate the site to source control. I know git isn't the best at binary files, but I promise you'll thank me at some point.
I’ve slowly grown to realize there’s some software you just don’t need to update. A static site generator (almost certainly) won’t have security issues as long as you control the input and the output is just a bunch of static files.
Unless the new version of the software includes some feature I need, I can be totally fine just running an old version forever. I could just write down the version of the SSG my site builds with (or commit it to source control) and move on with my life. It'll work as long as operating systems and CPU architectures/whatever don't change too much (and in the worst case scenario, I'm sure the tech exists to emulate whatever conditions it needs to run). Some software is already 'finished' and there's no need to update it, ever.
Is there any static site generator where you specify the version you use, and the launcher will simply run the old binary that you want?
This is how most build systems work: for example, you set a "rust-version" in Cargo.toml and only bump it when you explicitly want to, so a fresh checkout will still use the older version.
I used Zola for my SSG and can't think of the last breaking change I've hit. I just use the pattern of locked nix devshells for everything by default. The extra tools are used for processing images or cooklang files.
> Is there any static site generator where you specify the version you use, and the launcher will simply run the old binary that you want?
For Hugo, there is Hugo Version Manager (hvm)[0], a project maintained by Hugo contributor Joe Mooring. While the way it works isn't precisely what you described, it may come close enough.
I hate to say it, but even the existence of this tool is a danger sign.
I say this as someone who uses Hugo and is regularly burned (singed) by breaking changes.
Pinning your version is great until you trip across a bug (usually rendering, in my case) and need to upgrade to get rid of it. There goes a few hours. I won’t even mention the horror of needing a test suite to make sure the rendering of your old pages hasn’t changed significantly. (I ended up with large portions of text in a code block, never tracked the root cause down… probably something to do with too much indentation inside a bulleted list. It didn’t render that way several years before, though.)
I guess my very own "niccup" (basically hiccup-in-nix) fits that (https://embedding-shapes.github.io/niccup/), as you'll typically always include the library together with a strictly set version, so even when new versions are available, you'd need to explicitly upgrade if you want it.
Oh definitely. How can you suggest adding a binary to a git repository? It's a bad idea on many levels: it bloats the repository by several orders of magnitude, and it locks you to the chosen architecture and OS. Nope, nope, nope.
Second this. Once I set up GitHub Actions with Hugo (there's an action readily available), I rarely build the blog locally anymore. New article drafts become GH pull requests, and once ready they get merged and published. This also works well enough on mobile.
> In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
Pretty sure the version of Hugo used to generate a site is included in metadata in the generated output.
If you have a copy of the site from when it last worked, then assuming my above memory is correct you should be able to get the exact version number from that. :)
If you use an off-the-shelf binary for any tool, you can put the binary in `${project}/bin/`, add it to `.gitignore`, document the download URL in `README.md` or an install script, and commit the checksum in a project-wide `SHA256SUMS` file (or `B3SUMS`, etc.). It's like a lo-fi version of Git LFS.
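The verification step can be as small as `sha256sum -c SHA256SUMS`, or a short script if you want it portable. A sketch, assuming one `<digest>  <path>` pair per line:

```python
#!/usr/bin/env python3
# Sketch of the "lo-fi Git LFS" check: verify the tools in bin/ against
# a committed SHA256SUMS file ("<hex digest>  <relative path>" per line).
import hashlib
import pathlib
import sys

ok = True
for line in pathlib.Path("SHA256SUMS").read_text().splitlines():
    digest, path = line.split(maxsplit=1)
    actual = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    if actual != digest:
        print(f"MISMATCH: {path}")
        ok = False
sys.exit(0 if ok else 1)
```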
I had the same issue and I'm currently thinking whether it's easier to just Vibe Engineer my own static site generator with the exact features I need vs fighting with the hugo theme system.
My needs for a site are pretty simple, so I might just go with the custom-built one to be honest.
If it breaks, I can just go look in the mirror for the culprit =)
I had a similar issue, but with Jekyll. I had a customized theme and some update along the way broke everything. So, I very much agree with a sibling comment about not needing to update static site generators and it’s not just a Hugo thing. Sadly, my site was also being hosted/generated by GitHub, so I had no real choice in the update matter. (I’m not sure if pinning would have helped.)
> So, advice: submit the binary you used to generate the site to source control. I know git isn't the best at binary files, but I promise you'll thank me at some point.
No need for the entire binary.
Just put `go run github.com/gohugoio/hugo@<version> "$@"` (with whatever release you want pinned) into a `hugo.sh` script or similar that's in source control, and then run that script instead of the Hugo binary.
You'll need Go installed, but it's incredibly backwards compatible, so updating to newer Go versions is very unlikely to break running the old Hugo version.
That's somewhat untrue. Personal software only has to move with your constraints; shared software moves with others' as well. I use MediaWiki for my site (I would like others to be able to edit it), and version changes introduce changes in more than the sections I care about.
They tend to change, and when I want to do something that the generator does not do, I either need to hack it in (which might break) or I need to fork the generator.
Binary search is a very old trick, going back to 1946 on computers, and probably thousands of years before that, since searching sorted lists goes back to at least ancient Babylon. https://en.wikipedia.org/wiki/Binary_search
I've been burned by this a few times and now I have the Hugo binary in source control. I had to dig through the releases a little bit to find the version that didn't break everything.
> I decided to use an off-the-shelf theme, but it didn't quite meet the needs and I forked it; as it so happens Hugo breaks userland relatively often and a complex theme like the one I have requires a lot of maintenance. Like.. a lot.
> Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
I've had the same issues as you, and yes, I agree that pinning a version is very important for Hugo.
It's more useful for once-and-done throwaway sites that need some form of structure that a static site generator can provide.
At least it's practical to identify a specific version to use, and you can be reasonably confident it will work indefinitely. I remember that with the Hyde iteration of my site, somewhere along the way Hyde became impossible to install, and I was stuck with an existing installation, or a lot of effort to put it back together manually. Python packaging has improved a lot since then, so I doubt that problem would apply to any new project, but it's still far more plausible than in a language like Go or Rust.
I maintained a personal fork of Zola for my site (and a couple of others), and am content to just identify the Git repository and revision that’s used.
Zola updates broke my site a few times, quite apart from my patches not cleanly rebasing. I kept on the treadmill for a while, initially because of a couple of new features I did want, but then decided it wasn’t necessary. You don’t need to run the latest version; old is fine.
—⁂—
One piece of advice I would give for people updating their SSG: build your site with the old and new versions of the SSG, and diff the directories, to avoid regressions.
If there are dynamic values, normalise both builds before diffing: for example, if you have timestamp-based cachebusting, zero all such timestamps with something like `sed -i -E 's/\?t=[0-9]+/?t=0/' **/*`. Otherwise regressions may be masked.
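As a concrete sketch of the whole check (assuming a Hugo-style `--destination` flag, with `hugo-old`/`hugo-new` standing in for the two pinned versions):

```python
# Build the site with the old and new SSG versions, normalise dynamic
# values, then diff the two output directories.
import pathlib
import re
import subprocess
from filecmp import dircmp

subprocess.run(["hugo-old", "--destination", "out-old"], check=True)
subprocess.run(["hugo-new", "--destination", "out-new"], check=True)

for out in ("out-old", "out-new"):
    for page in pathlib.Path(out).rglob("*.html"):
        # Zero timestamp-based cachebusters so they don't mask regressions.
        page.write_text(re.sub(r"\?t=\d+", "?t=0", page.read_text()))

dircmp("out-old", "out-new").report_full_closure()  # lists differing files
```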
I caught breakages a couple of times this way. Once was due to Zola changing how shortcodes or Markdown worked, which I otherwise might not have noticed. (Frankly, Markdown is horrible for things like this, and Zola’s shortcodes badly-designed; but really it’s mostly Markdown’s fault.)
A pretty light-grey comment, as I came across it. Maybe I'm missing something odious about it? People downvoted this, but as a VERY skeptical AI skeptic, it's exactly the sort of use case that makes sense to me:
A) Low-stakes application with
B) nearly no attack surface that
C) you don’t use consistently enough to keep in your head, but
D) is simple enough for an experienced software developer to do a quick sanity check on and run it to see if it works.
Hell, do it in a sandbox if you feel better about it.
If it was a Django/Node/rails/Laravel/…Phoenix… (sorry, I’ve been out of my 12+ years web dev career a short 4 years and suddenly realized I can only remember like 4 server-side frameworks/environments now) application, something that would run on other people’s devices, or really anything else that produces an executable output, then yeah fuck that vibe coding bullshit. But unless you’ve got that thing spitting out an SPA for you, then I say go for it.
Yeah I feel like Claude Code is basically tailor made for a use-case like this. Where:
* I have forked some public repository that has kept up with upstream (i.e., lots of example code to draw from)
* Upstream is publishing documentation on what's changing
* The errors are somewhat google-able
* Can be done in a VM and thrown away
* Limited attack surface anyway.
I think you're downvoted because the comment comes across as glib and handwavy (or not moving the discussion forward.. maybe?), and if it was a year ago I would probably argue against it.. but I think Claude Code can definitely help with this.
It just didn't exist as it does now back in ~2023, or whenever it was that I originally started having issues.
---
That said: it shouldn't be necessary. As others in this thread have articulated (well, imo), sometimes software is "done", and Hugo could be "done" software, except it's not; so the onus is on the operator to pin a version as their definition of "done".. which is not what you'd expect.
What kind of issues? I use my own private theme called Brahma, which I wrote from scratch. I keep it simple, and it has been stable since 2019. I have barely had any issues.
Granted, mine is not sophisticated at all and simple by design. But I'm curious what kinds of issues pop up.
Nobody can point to a reason why it's a good idea for a site with any interactivity now.
The supporters here are all the same: "I had to do a whole bunch of mental gymnastics and compromises to get <basic server side site feature>, but it's worth it!" But they don't say why it was worth it, beyond "it's easy now <after lots of costs already sunk>".
When you try get to why they did it in the first place, it's universally some variation on "I got fed up with <some large server side package> so took the nuclear SSG route <and then had to eventually rewrite or get someone else's servers involved again>"
Part of this is a me problem: a personal website should be owned by the person, IMO. A lot of people are fine to let other people own parts of their personal websites, and SSGs encourage that. What even is a personal website if it's a theme that looks like someone else's, hosted and owned on someone else's server - why not just use Facebook at that point?!
I was nodding along until your last paragraph - SSGs encourage letting other people own parts of your personal site, really? Sure, people bolt on Disqus or something, but otherwise I am not sure I follow the argument. Isn't part of the appeal of SSGs that all you have is a bunch of html/css/js that you can drop on any server anywhere (even a solar-powered RPi can serve a lot of requests[1])?
> Isn't part of the appeal of SSGs that all you have is a bunch of html/css/js that you can drop on any server anywhere (even a solar-powered RPi can serve a lot of requests[1])?
This is the part I'm struggling with. That's the view I held from 2016 - 2024. Practically though, it's only true if you want a leaflet website with 0 interactivity.
If you want _any_ interactivity at all (like, _any_ written data of any kind, even server or visitor logs) then you need a server or a 3rd party.
This means for 99% of personal websites with an SSG, you need a real server or a 3rd party service.
When SSGs first came around (2010 - 2015) compute was getting expensive, server sides were getting big and complex, bot traffic solutions were lame, and all the big tech companies started offering free static hosting because it was an easy free thing to offer.
Compare this to now, 2026, it's apparently nothing special to handle hackernews front page on free or cheap compute. Things like Deno, Bun, even Go and Python make writing quick, small, modern server sides so much quicker, easier and safer. Cloudflare and or crowdsec can cover 99% of bot and traffic issues. It's possible to get multiple free multiple GB compute instances now.
I didn't mean to imply there's some sinister plot of people maliciously encouraging people to use SSGs to steal their stuff, but that's the reality that modern personal webdev has sleepwalked into. SSGs were first sold to make things better performing and easier than things were at the time. Pretty much any "server anywhere" you own now will be able to run a handwritten server doing SSR markdown -> HTML now.
So why force yourself to have to start entertaining ideas like making your visitors download multiple-megabyte client side index files to implement search, or embedded iframes and massive external JS libraries for things like comment sections? Easier-looking SSG patterns like that typically break all the stuff required to keep the web open and equal, like screen readers, low bandwidth connections and privacy. (Obviously SSR doesn't implicitly solve these, but many of these things were originally conceived with SSR in mind and so are naturally more compatible.)
Ask anyone who's been in and out of web dev for more than 15 years to really critically think about SSGs in depth, and I think they'll conclude they offer a complete solution for maybe 1% of websites, yet they seem to be recommended in 99% of places as the only worthy way to do websites now. But when you pick it apart and try it, you end up in Jeff's position: statically rendered pages (the easy bit) and a TODO with a list of compromising options for basic interactivity. In 5 years' time, he'll have complex SSG pipelines running almost 24/7, or a complex mesh of dependencies on external services that are constantly changing or trying to charge him more to deal with his own creations.
I can try that version, but it's entirely possible (and even likely) that I was already using an old version of Hugo then; whatever was installed by my package manager, assuming I updated my machine somewhat recently.
If I was on macOS, then Hugo was probably very old, since I often forget to update brew packages and end up running very old software.
But, that's what I thought to do first also.
In the end, it becomes not worth the hassle, and spending time fixing it means that whatever I was going to write gets pushed out of my head, and it's very difficult to even bother.
It may be worth considering whether you need a native binary (and the ability to run it) for the job at all. A static site generator doesn't need to do anything that browsers from the last 10 years can't do; a static site generator is fundamentally a classic batch processing job that takes a collection of (mostly plain text) files as input, processes it, and then outputs something else (in this case, a collection of post-processed content for the site).
If you encode the transformations that your desired SSG should perform by writing the processing rules as plain text source code that a browser is capable of executing (i.e., an "HTML tool" or something adjacent[1][2]), then you can just publish this "static site generator" itself as yet another page on your static site.
To spell it out: running the static site generator to create a new post doesn't need to involve anything more than hitting /new.html (or whatever) on the live site, clicking the button for the type=file input on that page, using the browser file picker to open the directory where the source to your static site lives, and then saving the resulting ZIP somewhere so the contents can be copied to whatever host you're using.
If you already write your posts in Markdown, it makes sense for sure.
About a year ago I converted my 500+ post Jekyll blog to Hugo, overall it's been a net win but boy do I find myself looking up syntax in the docs a lot. Thankfully not so much nowadays but figuring out the templating syntax was rough at the time.
Jeff, you don't have to set draft to false. You can separate your drafts into a different directory and use Hugo's cascade feature to handle it. Also you don't have to update the date in your frontmatter if you prefix the file with YYYY-MM-DD and configure Hugo to use that.
Just a heads up, you didn't mention this in your post but Hugo adds a trailing slash for pretty URLs. I don't know if you had them before but it's potentially new behavior and canonical URL differences unless you adjust for that.
When I did the switch from Jekyll to Hugo, I wrote a 10,000 word post with the gory details and how I used Python and shell scripts to automate converting the posts, plus covered all of the gotchas I encountered. There are sections focused on the above things I mentioned too: https://nickjanetakis.com/blog/converting-my-500-page-blog-f...
> That gives near instant live reload when writing posts which makes a huge difference from waiting 4 seconds.
Mhm. Why? I can write all of my post and look at it only afterwards? Perhaps if there's a table or something tricky I want to check before. But normally, I couldn't care less about the reload speed.
> I use that plugin because it digests your assets by adding a SHA-256 hash to their file names. This lets me cache them with nginx. I can’t not have that feature.
> Mhm. Why? I can write all of my post and look at it only afterwards?
My site has a fixed max width, which is how most tablets or desktops will view it.
Sentence display width is something I pay attention to. For example sometimes I don't want 1 hanging word to have its own full line (a "hanger") because it looks messy. Other times I do want it because it helps break up a few paragraphs of similar length to make it easier to skim.
Seeing exactly what my site looks like while writing lets me see these things as I'm writing and having a fast preview enables that. Waiting 4 seconds stinks.
> Why? [asset digesting and cache busting with nginx]
It helps reduce page load speeds for visitors and saves bandwidth for both the visitor and your server. If their browser already has the exact CSS or JS file cached locally, this allows you to skip a server side call to even determine if the asset can be served locally or needs an update from the server.
The concept of digesting assets with infinitely long cache header times isn't new or something I came up with. It's been around for like 10+ years as a general purpose optimization.
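The digesting itself is simple enough to sketch (the paths and the 8-character truncation here are choices of this sketch, not of the plugin mentioned above):

```python
import hashlib
import pathlib

# Rename each asset to include a hash of its content, so the file name
# changes whenever the content does and can be cached "forever".
assets = pathlib.Path("public/assets")
manifest = {}
for asset in sorted(assets.glob("*.*")):
    digest = hashlib.sha256(asset.read_bytes()).hexdigest()[:8]
    hashed = asset.with_name(f"{asset.stem}.{digest}{asset.suffix}")
    asset.rename(hashed)
    manifest[asset.name] = hashed.name  # used to rewrite links in the HTML

# Pages then reference e.g. style.3fa9c1d2.css, which nginx can serve with
# Cache-Control: max-age=31536000, immutable.
print(manifest)
```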
I think this is silly, and it's a hill I'm willing to die on. I wrote this in a comment yesterday, and Jeff has fully confirmed/vindicated this in his post:
SSGs are good for static sites with no interactivity or feedback. If you want interactivity or feedback, someone (you or a 3rd party service provider) is going to have to run a server.
If you're running a server anyway, it seems trivial to serve content dynamically generated from markdown - all an SSG pipeline adds is more dependencies and stuff to break.
I know there's a fair few big nerd blogs powered by static sites, but when you really consider the full stack and frequency of work that's being done or the number of 3rd party external services they're having to depend on, they'd have been better by many metrics if the nerds had just written themselves a custom backend from the start.
Jeff: I think you'll regret this. I think you'll waste 5 - 10 years trying to shoehorn in basic interactivity like comments, and land on a compromised solution.
I also used and managed Drupal and Joomla before I went to SSGs, and then finally realised there's a sensible midpoint for the pain you're feeling: you write/run a simple server that dynamically compiles your markdown - good ol' SSR. It's significantly lighter, cheaper and easier than Drupal, and lets you keep all the flexibility and features you need a server for. Don't cave to the "self-hosted tech was too hard so I took the easy route that forces me to also use 3rd party services instead" option.
SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
> If you're running a server anyway, it seems trivial to serve content dynamically generated from markdown.
Until you have enough visitors or evil AI bots scraping your site so that it crashes, or if you're using an auto-scaling provider, costs you real money.
The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
For blogs, number of reads vs number of comments or other actions that require a server is probably on the order of 100:1 or 1000:1, even more if many of the page loads are bots/scrapers.
> SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
> Until you have enough visitors or evil AI bots scraping your site so that it crashes, or if you're using an auto-scaling provider, costs you real money.
There's been multiple blog posts on HN from people who've received a hug of death and handled it fine with basically free or <$10 /month VMs
A couple of gigs of RAM and 2 cores can take viral posts and the associated bots. 99% of personal websites never go viral either.
> The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
This is my exact argument against SSGs, and Jeff's post proves it: it's easy to use an SSG to generate web pages, but the moment you want comments, or any other bells and whistles, you do what Jeff's going to have to do and say you'll do it later, because there's no obvious easy solution that doesn't work against an SSG.
> Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
EXACTLY! This is my point! Why not just SSR the markdown on the server you're already running?!
This is the opposite of what Jeff and 99% of other SSG users do, they switch to SSGs to get rid of dealing with servers, only to realise they need servers or third parties, but then they're sunk-cost-fallacied into their SSG by the time they realise.
You seem to think an SSG is some burden that people put up with due to sunk cost fallacy, but I don't see why.
The Markdown-to-templated-HTML pipeline code is the same whether it runs on each request or on content changes, so why not choose the one that's more efficient? Serving static HTML also means that the actually important part of my personal webpage (almost) never breaks when I'm not looking.
"Markdown-to-templated-HTML" is only 1 part of a website.
SSGs force people into particular ways of doing all the other parts of a website by depending on external stuff. This is often contrary to long term reliability, but nobody associates those challenges with the SSG that forced the external dependencies.
It becomes a sunk cost fallacy because people do what Jeff has done: they switch to an SSG on the promise of an easier website and proudly proclaim they're doing things the new best way. But they do the easy SSG bit (the content rendering) and then they create a TODO with all the compromised options for interactivity.
When they've got to a feature complete comparison, they've got a lot more dependencies and a lot less control/ownership, which inevitably leads to future frustrations.
The end destination for most nerdy personal website is a hand crafted minimal server with minimal to no dependencies.
I'm already self hosting my own cookieless analytics, and before, I hosted Drupal (LEMP stack) and Apache Solr on separate servers. I'm used to self-hosting, and any comment solution I use will be self-hosted as well (see: https://github.com/geerlingguy/jeffgeerling-com/issues/167)
The code surface with SSG + 1 or 2 small self-hosted OSS tools is much, much smaller than it ever was running Drupal or another CMS.
Yes, SSG pipeline + 1 or 2 small self-hosted OSS tools is way simpler than Drupal.
But all you've done is buy into all the pain and compromise of having to think from an SSG perspective, and that has created problems which you've already said you'll figure out in the future.
I'm suggesting 2 or 3 small self-hosted OSS tools, where one is a small hand crafted server that basically takes a markdown file, renders it, and serves it as plain HTML with a header/footer.
This is more homogeneous, with fewer unique parts/processes, and doesn't have the constraint of dealing with an SSG.
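As a minimal sketch of that kind of server, using Python's standard http.server plus the third-party markdown package (the file layout and page shell are assumptions):

```python
import pathlib
from http.server import BaseHTTPRequestHandler, HTTPServer

import markdown  # third-party: pip install markdown

POSTS = pathlib.Path("posts")
PAGE = "<!doctype html><title>{title}</title><main>{body}</main>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        slug = self.path.strip("/") or "index"
        source = POSTS / f"{slug}.md"
        # Only serve files directly inside posts/ (also blocks "../" tricks).
        if source.parent != POSTS or not source.is_file():
            self.send_error(404)
            return
        body = markdown.markdown(source.read_text())
        page = PAGE.format(title=slug, body=body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(page)

HTTPServer(("", 8000), Handler).serve_forever()
```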
I remember my own personal pain from 2010 - 2016ish of managing Drupal and Joomla. I did exactly the same as you in 2016 and went all in on SSGs, and in 2024, I realised all of the above. I feel like I wasted years of potential development time reinventing basic personal website features to try and work with an SSG, and you literally have a ticket to do just that: https://github.com/geerlingguy/jeffgeerling-com/issues/167. 1 of your 3 solutions involves letting someone else host your comments :(
A custom framework/server is the end destination for all nerdy personal websites - I can't wait to see what you make when you realise this:)
edit/p.s. I love you and all your work. Sorry for sounding disagreeable; I'm excited to see what you learn from your SSG journey. I hope you prove me wrong!
Definitely not disagreeable, more just "there are two right answers" ;)
For me, an unstated reason for SSG is being able to scale to millions of requests per hour without scaling up my server to match.
Serving static HTML is insanely easy, even on a cheap $5-10/month VPS. Serving anything dynamic at all is an order of magnitude harder.
Though... I've been using Cloudflare since 2022, after I started getting regular targeted DDoSes (was fun initially, seeing someone target different parts of Drupal, until I just locked down everything except for the final comment pages). The site will randomly get 1-2 million requests in a few minutes, and now Cloudflare eats those quickly, instead of my VPS getting locked up.
Ideally, I'll be able to host without Cloudflare in front at some point, but every month, because of one or two attacks, the site's using 25-35 TB of bandwidth (at least according to CF).
But how in the world will you shove a decent search into a static site?
I really want to know because there is a Drupal 7 site that I need to migrate to something but I need good search on it (I’m using solr now).
Edit: I should have specified that I need more functionality than just word searching. I need filtering (ie faceted search) too. I’ve used a SSG that uses a JavaScript index and appreciate it, but that’s not going to cut it for this project.
The usual way is to create an index on generation time and serve it statically. JS just uses the index to do the search. It's a big file, so I'm not saying it's a great solution for everyone, but it works reasonably well.
Of course, for my site I just redirect the user to a search engine plus `site:stavros.io`.
See https://gohugo.io/tools/search/. Not sure how well they scale to thousands of posts, but they work by generating multiple static search index files at build time that are queried via client-side JavaScript when hosted. The search UX is actually really good, because they tend to respond instantly as you type your query and allow complex queries.
I've used https://lunrjs.com/guides/getting_started.html briefly and it has lots of options for things like different fields, complex queries, fuzzy searching and wildcards. Didn't notice anything specific about dates but you could always add date as a field then filter out a date range manually at the end. I'm sure there's better libraries now as well.
We've gone from SSGs for ease, speed and reduced resources, to talking about implementing search with multi-megabyte client side indexes and hundreds of thousands of prerendered search result pages.
When does this become 1 step forward with the SSG and 2 steps back with search solutions like this?
You don't pre-render the search pages. You generate some search index files in the build step (something like a map of keywords to matching post URLs), and then client-side JavaScript requests the search index files it needs on demand and generates the search results on the page. For a modest blog, I think the compressed index can be a few hundred kilobytes. A single large image can be bigger than that.
Nothing is perfect, but the above is really simple to host, is low maintenance, and easy to secure.
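A sketch of that build step (crude tokenisation; the file layout is an assumption), with the querying itself left to a few lines of client-side JavaScript:

```python
import json
import pathlib
import re
from collections import defaultdict

index = defaultdict(set)
for post in pathlib.Path("content/posts").glob("*.md"):
    url = f"/posts/{post.stem}/"
    # Crude tokenisation: lowercase alphanumeric runs, deduplicated per post.
    for word in set(re.findall(r"[a-z0-9]+", post.read_text().lower())):
        index[word].add(url)

with open("public/search-index.json", "w") as f:
    json.dump({word: sorted(urls) for word, urls in index.items()}, f)
```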
Assuming 500 bytes of metadata + URL per blog post, a one megabyte index is enough for 2000 blog posts.
As already mentioned, you don't generate search result pages, because client side Javascript has been a thing for several decades already.
Your suggestion of converting markdown on every request also provides near zero value.
Writing a minimal server backend is also way easier if you separate it from the presentation part of the stack.
Based on https://news.ycombinator.com/item?id=46489563, it also seems like you fundamentally misunderstand the point. Interactivity is not the point. SSGs are used for publishing writing the same way PDF is used. Nobody sane thinks that they need a comment section in their PDFs.
Not if you're already running servers and server applications. If you already have patterns for running and deploying server software, an SSG requires an extra preprocessing step to generate the HTML for the server.
If you don't use an SSG, this step is done by virtue of the server running.
> SSGs are good for static sites with no interactivity or feedback. If you want interactivity or feedback, someone (you or a 3rd party service provider) is going to have to run a server.
For my website, I do both. Static HTML pages are generated with a static site generator. Comments are accepted using a server-side program I have written using Common Lisp and Hunchentoot.
I have always had a comments section on my website since its early days. Originally, my website was written as a set of PHP pages. Back then, I had a PHP page that served as the comment form. So later when I switched to Common Lisp, I rewrote the comment form in it.
It's a single, self-contained server-side program that fits in a single file [1]. It runs as a service [2] on the web server, serves the comment and subscriber forms, accepts the form submissions and writes them to text files on the web server.
Nice! So you weren't forced to rewrite a comments solution when you shifted to an SSG, you just coincidentally had to do them at the same time?
It looks like you did exactly what Jeff did: got fed up with big excessive server sides and went the opposite way and deployed and wrote your own minimal server side solutions instead.
There's nothing wrong with that, but what problem were you solving with the SSG part of that solution? Why would you choose to pregenerate a bunch of stuff, which might never get used, every time anyone comments on or updates your website, when you have the compute and processes to generate HTML from markdown and comments on demand?
The common sales points for SSGs are often:
- SSGs are easier (doesn't apply to you because you had to rewrite all your comment stuff anyway)
- cheaper (doesn't apply to you since you're already running a server for comments, and markdown SSR on top would be minimal)
- fewer dependencies (doesn't apply to you, the SSG you use is an added dependency to your existing server)
This largely applies to Jeff's site too.
Don't get me wrong, from a curious nerd perspective, SSGs presented the fun challenge of trying to make them interactive. But now, in 2026, they seem architecturally inappropriate for all but the most static of leaflet sites.
> [...] what problem were you solving with the SSG part of that solution? Why would you choose to pregenerate a bunch of stuff, which might never get used, every time anyone comments on or updates your website, when you have the compute and processes to generate HTML from markdown and comments on demand?
I was not trying to solve a specific problem. This is a hobby project and my choices were driven mostly by personal preference and my sense of aesthetics.
Moving to a fully static website made the stack simpler and more enjoyable to work with. I did not like having to run a local web server just to preview posts. Recomputing identical HTML on every request also felt wasteful (no matter how trivially) when the output never changes between requests. Some people solve this with caching but I prefer fewer moving parts, not more. This is a hobby project, after all.
There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
An added bonus is portability. The entire site can be browsed locally without a server. In fact, I use relative internal links in all my HTML (e.g., '../../foo/bar.html' instead of '/foo/bar.html') so I can browse the whole site directly from the local filesystem, from any directory, without spinning up a web server. Because everything is static, the site can also be mirrored trivially to hosts that do not support server-side programming, such as https://susam.github.io/ and https://susam.codeberg.page/, in addition to https://susam.net/. I could have achieved this by crawling a dynamic site and snapshotting it, which would also be a perfectly acceptable solution. Static site generation is simply another acceptable solution; one that I enjoy working with.
> That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times.
This, definitely.
I think until you experience your first few DDoSes, you don't think about the kind of gains you get from going completely static (or heavily caching, sometimes at the expense of site functionality).
I started writing a blog engine back when I was in college. I've been working on it ever since, and it's let me implement and play with cool web features. I implemented Markdown over 10 years ago, and I don't regret it. The Markdown is converted to HTML once on save.
It's supported RSS since practically the beginning, and RSS later served as a foundation for a backup and restore system. A few years ago, I implemented SSG functionality (exports html, css, images, etc in a zip).
This is where I've found Astro to really shine. Most people reached for it because of the whole "islands of interactivity" concept, but IMO the ability to easily build a mostly static site with server APIs when needed is the killer feature.
I manage multiple Astro sites and eventually they have all needed at least a few small server endpoints. Doing that with Astro is very simple and it can be done without making the static portions of the site a pain to maintain.
Just a small note - I see a lot of warnings about "Long Tasks" in the dev console log (Chromium) - likely related to the animation which is jittering on my Mac M1.
This includes a giant list of open source commenting systems.
I really don’t understand why people commonly say static site generators are a good candidate for building your own when there are a good selection of popular, stable options.
The only thing I don’t like about Hugo is the experience of using other people’s themes.
> It took me a weekend to write the initial Perl script that made this site. It took me another weekend to do the Rust rewrite (although porting all the content took two weeks). These are not complicated programs.
My last Hugo site took 30 minutes to deploy, not a whole weekend. Picked a theme, pasted in content.
> You want free web hosting? Hugo might be the right option.
An extremely good reason to pick Hugo especially if you don’t have the know-how to build your own SSG. You don’t need to know a programming language at all to use it.
Again, I have to throw criticism toward this idea that everyone who wants a static site generator already has the skills required to make one.
And I'm not saying it covers every use case, like the kind of person who is willing to pay $100+ per year for a full-blown solution like Shopify or Squarespace. It fits a niche: someone who wants their content online without coding, with no hosting cost, and without relying on third-party platforms like Substack.
Pretty much every single option there involved letting a 3rd party collect and own the comments.
If you're fine for 3rd parties to own all your comments and content, why even take on the extra effort of hosting or managing or building your own website? That's basically what social media is for.
If you want interactivity, we agree, you have to either run a server, or you have to use a 3rd party.
It's easier for a server to render markdown than it is for an SSG site to do server stuff.
Your suggestion for comments is to run a server/use a third party, and do SSG. My suggestion is to just run a server. One is clearly easier as it has fewer steps.
The idea that you can run a decent personal website without compromising on interactivity, and without running a server or using 3rd parties is a myth. As soon as you accept that you have to run a server, SSG becomes an unnecessary extra step.
With the SSG you’re just managing your static markdown for your site’s content, then you’re dropping in a tiny extra piece for comments.
The comment self-hosting is a simple Docker container with minimal configuration, an out-of-the-box, just-works type of service.
Building a personal website that is hosted along with the interactivity integrated is more like managing an entire application, which is exactly what Jeff described with his pain using Drupal. He didn’t actually need all the interactivity that a full blown hosted site offers.
For example, if I run a PHP, Django, or NodeJS based website, now I've got to periodically update my whole site's codebase to keep up with supported/safe versions of PHP, Python, or Node/npm packages.
With the SSG plus comment system you’re pretty much just pulling latest docker image for the comment system and everything else is just static.
I think you’d also have to agree that outsourcing comments to a 3rd party service is potentially a simpler/cheaper exercise than outsourcing the entire site. I see that some of the Hugo supported commenting systems seem to have a free tier with no ads that should support Jeff’s traffic.
Another interactive example is Netlify’s forms system, which is included in their free product.
I'd love to see more reasoning about the decision process of selecting one static site generator in particular. There are a ton of them, and a bunch of them we could call "the big ones", so anyone deciding to migrate will probably go through the same process of evaluating and choosing: Hugo, Eleventy (11ty), Jekyll, and a couple more are the best known. Seeing Jeff's decision process could be interesting.
Hugo is very well established, but at the same time it's known for not caring too much about introducing breaking changes. I think any project of that age should respect its great userbase and provide a strong guarantee of backwards compatibility for the inputs/outputs it decides to support, not revel in an eternal 0.x syndrome, calling itself young enough to still be seeking its footing in terms of stability. But I digress... and in fact, Hugo hasn't been great in that regard. Themes and well-functioning inputs do break with updates, which, here in this house of mine, is a big drawback.
> We did a complete overhaul of Hugo’s template system in v0.146.0. We’re working on getting all of the relevant documentation up to date, but until then, see this page.
I don't mind breaking changes, but it'd sure be nice if the documentation reflected the changes.
I used to use Nikola, but gave up on that for two reasons. One was that it was adding every possible feature, which meant an ever-growing number of dependencies, which gets scary. And it was hard to use: e.g. "how do you embed a YouTube video?" becomes a trawl through documentation, plugins, arbitrary new syntax, etc.
But the biggest problem and one that can affect all the SSG is that they try to do incremental builds by default. Nikola of the time was especially bad at that - it didn't realise some changes mattered such as changes to config files, themes, templates etc, and was woolly on timestamps of source files versus output files.
This meant it committed the cardinal sin: clean builds produce different output from incremental builds
I don't know if Pelican is as popular nowadays or not...but i've used it for a few years now...and it does the trick quite nicely! I'd highly recommend it!
I think the only downside is that the project site's documentation feels like it's really well done...up to a point...Like they were on a great roll documenting stuff really well, and then stopped at ~90% completion. By this I mean that the high-level stuff is well documented...but little details are missing for the last 10%...Then again, it could be that because I only use Python a little here or there, some things just "seem" like they're missing a few details. By the way, if any project maintainers are out there, please do not take offense at my opinion here, as I value very much what the project maintainers do (I mean, I still use Pelican)!
Other than my feelings towards the documentation, if you don't need to customize too much stuff w/Pelican, then it's a really great SSG.
To be frank, in using it for well over a decade I think something broke only once or twice. It's pretty stable and they give plenty of deprecation warnings.
Why Hugo over Astro for something lightweight? Or why not even bashblog? Seems like a strange choice to go with Hugo if he's aiming for lightweight and speed.
Just by being written in JS/TS and using Node, I suppose keeping it running is a task of its own (or you keep a node_modules folder of 500 MB) - compared to a Hugo binary that will most probably work on any Linux or Mac for the next 10 years.
(I see what you're getting at but Astro has to be _the worst_ example. I have migrated off hugo to my own SSG but I don't hate it)
Interesting. I'm currently looking to migrate from Hugo to Zola. Personally I feel like I grok the configuration and templating options for Zola better than I do for Hugo.
I've been using Hugo for the past 3 years. The biggest lesson I learned is to just fork the theme you're using and not use submodules. There's no rush in keeping your theme up to date, and you have complete control over the theme when it's forked. I've only had stuff break on occasion when updating to a newer version of Hugo; I had to change a couple of things in the theme, which did not take too long to fix. Curious to see how comments will be implemented, though. That does not sound straightforward to add to an SSG.
This. A theme is individual to the site regardless... I almost migrated my Drupal theme over directly but gave up realizing I had so many views-view-view-field-data-field-body-formatted style classes lol.
Some guides say to add submodules. I favor direct inclusion and just overriding layouts as you see fit.
I ran Hugo when I launched my blog last year. I made 18 total posts. Probably 3/4 of those had problems when I tried to publish, due to issues with Hugo. Found it so frustrating.
I recently moved off Hugo as well to a DIY Python static site generator for my own blog. The trouble I had was I found it frustrating to have to learn how to do something the Hugo way when I knew I could quickly code it in a language I was already familiar with.
Slightly off topic, but is there any sensible way yet to federate a statically-generated website to Mastodon? Ideally being able to show the comments on the website. I get that the staticness doesn't really help here, but I imagine this is a problem that will find a clever solution eventually?
A long while ago I wrote a very simple static site generator for my personal site, mainly just to play around with using GitHub/Cloudflare Pages to host it.
Then a couple of months ago I started comparing the big SSG tools after wanting something a bit less held together with duct tape... after a lot of experimenting I settled on 11ty at the time, but I really don't enjoy writing Liquid templates, and writing reusable components using Liquid felt very clumsy. I just wish it was much easier to use the JSX based templates with 11ty, but every step of the way feels like I'm working against the "proper" way to do things.
So over the Christmas holiday I've been playing around with NextJS SSG, and while it does basically everything I want (with some complicated caveats), I also can't help but feel like I'm trying to use an oil rig to make a pilot hole when a drill would do just fine...
Anyone got any recommendations on something somewhere in between 11ty and NextJS? I'd love something that's structured similar to 11ty, but using JSX with SSG that then gets hydrated into full blown client side components.
The other thing I've been meaning to try is going back to something custom again, but built on top of something like Tempest [1] to do most of the heavy lifting of generating static pages, but obviously that wouldn't help at all with client side components.
> after a lot of experimenting I settled on 11ty at the time, but I really don't enjoy writing Liquid templates, and writing reusable components using Liquid felt very clumsy. I just wish it was much easier to use the JSX based templates with 11ty, but every step of the way feels like I'm working against the "proper" way to do things.
Doesn't Eleventy support most of the common old-school templating languages? I once converted a site using Mustache from Punch [1] to Eleventy.
Eleventy is great, and in some ways I prefer it to Hugo if build time isn't an issue. At least templates don't break, like most of the comments here say.
I eventually redid the site from scratch (with a bit of vibecoding magic back when v0 got me into it) with Astro.
I moved my startup’s marketing site and blog from NextJS to Astro, and I’m happy with it. It’s in that middle ground—focused on primarily static sites but with the ability to still write bits of backend logic as needed.
I found it hard to get Next to reliably treat static content as actually static (and thus cacheable), and it felt like a huge bundle of complexity for such a simple use case.
I did a migration to markdown, too, but I decided to backport my previous nodejs code to Go, as I wanted the editor part to still be available as a standalone binary.
This Christmas, I redesigned my website [1] into modular "middlewares" with the idea that each middleware has its own assets and embed.FS included, so that I can e.g. use the editor to write markdown files with a dynamic backend for publishing and rendering, and then I just generate a static version of my website for the CDN of choice. All parts of my website (website, weblog, wiki, editor, etc) are modular this way and just dispatch routes on a shared servemux.
The markdown editor turned out to be a nice standalone project [2] and I customized the commonmark format a bit with a header for metadata like title, description, tags and a teaser image that is integrated with the HTML templates.
Considering that most of my content was just markdown files already, the migration was pretty quick, and it's database free so I can just copy the files somewhere to have a backup of everything, which is also very nice.
I really like Hugo as well. I've found it significantly faster than Jekyll which makes iterating much more pleasant and it's a single binary to download/run vs having to deal with Ruby and its package manager.
I'm amazed there still isn't a decent, free, simple-to-host CMS solution with live page previews, a basic page builder, and simple hosting. Is there one?
There's https://demo.decapcms.org/ (previously Netlify CMS) that you install and run by adding a short JavaScript snippet to a page. It connects to GitHub directly to edit content. You can run it locally or online, but you need some hosting glue to connect to GitHub. Netlify provides this, but more options would be nice, and I think they limit how many total users can connect on free plans. You can get something like a page builder set up via custom content blocks, but I don't think there's going to be a simple way to render live previews via Hugo (written in Go) in a browser. A JavaScript based SSG would help here, but now you have to deal with the JavaScript ecosystem.
> That's especially odd as I think those links were the defaults from the Archie theme
Internal redirects are really easy to miss without checking with a tool because browsers aren't noisy about it. Lots of sites have unnecessary redirects from URLs that use http:// instead of https://, www vs no-www, and missing/extra trailing slashes, where with some redirect configs you can get a chain of 2 or 3 redirects before you get to the destination page.
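A quick way to make them visible (standard curl flags; URL illustrative):

    # print every hop in the redirect chain
    curl -sIL http://example.com/some-post | grep -iE '^(HTTP|location)'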
I had a similar push years ago, but I did take this approach one step further, for a similar reason Jeff mentions -- lower maintenance over time.
I was frustrated that (because my posts are less frequent) changes in Hugo and my local machine could lead to changes in what is generated.
So I attached a webhook from my website's GitHub repo to trigger an AWS Lambda which, on merge to main, automatically pulled in the repo plus a version-locked Hugo and themes. It then did the static site build in-Lambda and uploaded the result to the S3 bucket that backs my website.
This created a setup where I can now publish to my website from any machine that can edit my git repo. I found it a wonderful mix of the WordPress-like ability to edit my site anywhere, along with assurance that there's nothing that can technically fail* (well, a failure would likely just block the deploy, but I made copies of my dependencies where I could, so that's very unlikely).
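The in-Lambda build step itself is only a few commands, roughly (repo URL, binary location, and bucket name all illustrative):

    git clone --recurse-submodules https://github.com/you/site.git && cd site
    ./bin/hugo --minify                       # version-locked Hugo stored alongside the repo
    aws s3 sync public/ s3://your-site-bucket --delete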
But really the main thing I love is not maintaining really anything here... I go months without any concern that the website functions... Unlike every WordPress or similar site I help my friends run.
Exactly; and I'm currently tinkering with different deployment options. One thing I may do to speed up the deploy is run the Hugo compilation on the server itself, so the only push that needs to happen for a new post is a few KB via git. A post-receive hook would then run Hugo and deploy into my public www dir.
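A minimal post-receive hook for that could look something like this (paths illustrative; assumes a bare repo on the server):

    #!/bin/sh
    # check out the pushed branch into a work tree, build, publish
    GIT_WORK_TREE=/srv/site-src git checkout -f main
    cd /srv/site-src && hugo --destination /var/www/public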
The best of both worlds is hosting the binary independently of git in some cloud storage and just having a script that fetches it (and adding it to .gitignore). git itself doesn't like binaries very much, and it will bloat your clone speed/size if you update the binary, as it will effectively store all versions.
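As a sketch (bucket URL and version illustrative):

    #!/bin/sh
    # fetch the pinned binary once; bin/ is in .gitignore
    if [ ! -x bin/hugo ]; then
        mkdir -p bin
        curl -fsSL https://your-bucket.example.com/hugo-0.154.0-linux-amd64 -o bin/hugo
        chmod +x bin/hugo
    fi
    exec bin/hugo "$@"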
I also wanted to spend less time maintaining my personal blog and more time writing for it. After trying custom software, WordPress and Jekyll, I'm happily using Ghost for the blog, which hits a sweet spot of features and simplicity without the plugin and security update headaches of WordPress.
I know this isn’t quite the spirit of self hosting, but people who aren’t ready to self-host can use https://prose.sh, which interops with Hugo. It’s a safe stepping stone into self hosting a blog for anyone who wants to get started slowly.
I like Hugo, but I’ve not found a nice workflow to automatically put the images on a CDN.
I was thinking of making a GitHub action that uploaded the image from a given branch, deleted it, set the URL, and finally merged only the md files to main.
If you're okay with the images being on a CDN, why wouldn't you also be okay with the HTML and CSS also being on the CDN? Just fronting the entire static site with a pull-through CDN is an easy solution that doesn't require any complicated workflow.
I’m talking about integrating with GitHub. Publishing to Cloudflare for instance is fine, but where do you put the images between drafting and publishing?
Or do you just check in images to GitHub and call it a day?
I wasn't suggesting publishing to Cloudflare, just that if you're concerned about the complexity of the workflow of getting images into the CDN, simply fronting whatever host you're using with a CDN of some kind (which could be Cloudflare) will solve that.
Usually you just store the images in the same git repo as the markdown. How you initially host the static site once generated is up to you.
The problem with storing binaries in Git is when they change frequently, since that will quickly bloat the repo. But, images that are part of the website will ~never change over time, so they don't really cause problems.
> You’re talking to me like a total idiot, having assumed I know nothing about this.
Sorry I tried to help? If that's the response I get for helping, good luck...
> All I meant was a way to avoid storing images in git, the rest is quite simple.
There is no good way to do that, and no way that I would recommend. Git is the correct solution, if that is where you are storing the markdown. No fancy git tools are required.
I commit the images alongside the markdown files in GitHub. My site has numerous images, and there are logical groups of posts. I make those logical groups of posts git submodules, so I don't have all posts on my machine (or iPad) at one time.
Working Copy (git for iPad) handles submodules reasonably well; I have a few that I'm working on cloned onto it and others that are not, so I don't use as much space.
Tried to migrate to Hugo from Jekyll multiple times and have bounced off every time. Don't really know Go very well, but better than Ruby, and used this as justification -- since dealing with Jekyll updates was sometimes a headache (I use GitHub Pages for free hosting and let them build things for me when I push updates).
Instead I eventually just created an environment in nix that had compatible dependency versions to what GitHub uses and have been pleased since.
This has inspired me to move my personal blog to Hugo as well. I have been using Hashnode[0] for the past few years, and while it's okay, they recently automatically deleted one of my blog posts, which was written in my local language, Chichewa, and was one of my most popular, even among non-developers.
Ironically, my company's blog and websites are built with Hugo.
Wordpress is fine for what it is, but it's also an upgrade treadmill. You can kind of bypass that by letting the php process running wordpress have access to write to the code directory so wordpress can upgrade itself, but then your php process has access to write to the code directory.
A blog is mostly a static site that changes occasionally, so a static site generator is a much better fit. Caveat: comments; but personally, I don't want to moderate, and the WordPress site I administered for work didn't want comments either (yet even with them disabled, somehow new comments and trackbacks got into the database). When I finally got approval to turn our blog into a mostly static site, it was joyous, and the server(s) stopped burning CPU like nobody's business to serve the same handful of pages. We used PHP to manage serving translated blog entries, but it's not that much slower than a fully static file when your PHP logic is dead simple and short.
My impression is that tools that grew complex only because they want to serve every use case under the sun got obsoleted by AI, and static site generators like Hugo are a good example.
Today, if I were setting up a blog to host just some text and images, a vibe-coded SvelteKit project using the static adapter[1] would easily solve every single problem that I have. And I would still be able to use the full power of the web platform if I need anything further customized.
I've used a lot of static site generators including Hugo and Jekyll. Frankly, I find Go templating and other Jinja style templating an exhausting mental exercise. I don't like it for the same reason that I don't like using Go templating for server side rendering; I would prefer to have an entirely different code base that runs my frontend that only does frontend logic. Components just make that much sense and template partials will never compete with the flexibility of components. That was how I landed on Next.js and MDX for my blog. I get Markdown and I get components where Markdown just won't do and it's all statically compiled.
Jeff's approach of writing a separate comments application is interesting. I've seen people reuse GitHub issues to accomplish that, but that limits your audience participation to GitHub. The other obvious choice, I think, is a headless CMS. I'll be curious to see where he goes with it.
I ran into all these problems just last summer trying to launch something new, so I said fk it and went all in on an overkill demo just to see what's possible.
Don't know what it is either, but I'd like to go off-topic and remember with fondness the time when you could subscribe to RSS feeds directly in Safari. Google Reader was replaceable; a direct integration into the browser was not.
And for a short time, RSS was the bee's knees across the entire Internet. Apple had the best support for it, and almost put NetNewsWire out to pasture, until they just removed all baked in RSS functionality, entirely :(
But I use Reeder across Mac, iPad, and iPhone to keep up with feeds.
I built some automation that helps me test and deploy changes to S3 as well: https://github.com/carlosonunez/https-hugo-bloggen. It's clunky but works for me! Feel free to fork/PR if you're interested, of course.
It was a great move; I couldn't be happier. Running my blog is basically free (because nobody reads it, lol, but also because it's served by S3 and CloudFront and the # of monthly requests is still within Free Tier).
At the time, some folks were questioning why I built this instead of moving to Netlify. I wanted control over how my sites were deployed and didn't want to pay some provider for convenience I knew I could build myself. Netlify got AI-pilled some time ago, which makes me feel vindicated in my position.
Haven't decided if it'll pop back on the bottom or not yet; I just forgot about it until yesterday, and when I was stuffing it down there it looked out of place, so I gave up for now.
Or I might stick it somewhere else, as an easter egg, we'll see!
For long term stuff like a blog, nothing seems to beat static sites generated before deployment, instead of automated tools like Hugo. I tried Hugo years ago, and some tiny config change or update would suddenly expose all site variables to visitors, which was an incredible security risk. WordPress and Drupal are a war zone of attacks - judging by the server logs of any web site. These days you can custom write or design any page, click to build all menus, site maps and RSS with something as simple as gulp, and move the files out via SFTP. Fast, performant, and secure.
I'm fairly sure you're confusing Hugo with something else; Hugo is strictly a tool for building static websites. AFAIK, there are no features of Hugo that aren't for static website building.
Um. I don’t understand how Hugo is not a tool to create “static sites generated before deployment”. I run Hugo to build all static content, check it locally, then push it via rsync.
You should write a blog about it, like geerlingguy did.
If anyone knows about a good lightweight LISP/Scheme dialect that has baked in SQLite and HTML parsing support, can compile to native code and isn’t on https://taoofmac.com/space/dev/lisp, I’d love to know.
If I were maintaining multiple large sites or working with many collaborators, I'd rely on something standard or extract and publish my SSG. For a personal site, I believe custom is often better.
The current generator is around 900 SLOC of Python and 700 of Pandoc Lua. The biggest threats to stability have been my own rewrites and experimentation, like porting from Clojure to Python. I have documented its history on my site: https://dbohdan.com/about#technical-history.
It also has an RSS feed generator, and it can highlight code in most programming languages, which is important to me as I write posts about many languages.
I did try Hugo before I went on to implement my own, and I got a few things from Hugo into mine, but Hugo just looked far too overengineered for what I wanted (essentially, easy templating with markdown as the main language, but able to include content from other files, either in raw HTML or markdown, with each file able to define variables usable in the templating language, which supports the usual "expression language" constructs). I used the Go built-in parser for the expression language, so it was super easy to implement!
Used this for code syntax highlighting: https://github.com/alecthomas/chroma And this for markdown: https://github.com/russross/blackfriday
The rest I implemented myself in simple to read Go code.
The comment form is implemented as a server-side program using Common Lisp and Hunchentoot. So this is the only part of the website that is not static. The server-side program accepts each comment and writes to a text file for manual review. Then I review the comments and add them to my blog.
In the end, the comments live like normal content files in my source code directory just like the other blog posts and HTML pages do. My static site generator renders the comment pages along with the rest of the website. So in effect, my static site generator also has a static comment pages generator within it.
The next time I rebuilt the blog, the page "XXX" would render a loop of all the comments, ordered by timestamp, if any were present.
The CGI would send a "thanks for your comment" reply to the submitter and an email to myself. If the comment were spam I'd just delete the static file.
Many programmers' first impulse when they start[0] to blog is to write their own blog engine. Props to you for not falling into that particular rabbit hole and actually using - as opposed to just tinkering on - that engine.
[0] you said you migrated it, implying you already had the habit of blogging, but still,
A neat little in-between "string replacements" and "full blown templating" is doing something like what hiccup introduced, basically using built-in data structures as the template. Hiccup looks something like this:
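    ;; a representative hiccup-style snippet (illustrative, not from any
    ;; particular project): tag keyword, optional attribute map, children
    [:div {:class "post"}
      [:h1 "Hello"]
      [:p "Pages built from plain data structures"]]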
And you get both the power of templates, something easier than a "templating engine", and the extra benefit of being able to use your normal programming language functions to build the "templates". I also implemented something similar myself (called niccup) that does the whole "data to HTML" shebang, but with Nix and only built-in Nix types, so my own website/blog is built in much the same style. It's all "native" Nix, but "compiles" to HTML at build time - great for static websites :)

Thank you. That was, in fact, the inspiration behind writing my own in CL.
You list all the links to posts on the landing page: what if you have 1000 or 2000 posts? Have you thought of paginating them?
I created a test page with 2000 randomly generated entries here: <https://susam.net/code/test/2k.html>. Its actual size is about 240 kB and the compressed transfer size is about 140 kB.
It doesn't seem too bad, so I'll likely not introduce pagination, even in the unlikely event that I manage to write over a thousand posts. One benefit of having everything listed on the same page is that I can easily do a string search to find my old posts and visit them.
How large does the canvas need to get before pagination makes sense?
Modern websites are enormous in terms of how much needs to be loaded into memory - sure, not all of it is part of the rendered document, but is there a limit to the canvas size?
I'm thinking you could probably get 100,000+ entries and still be able to use CTRL+F on the site in a responsive way, since even at 100,000+ entries you're still only about 10% of Facebook's "wall" application page. (Without additional "infinite scroll" entries.)
I had a vision of what I wanted the site to look like, but the org exporter had a default style it wanted. I spent more time ripping out all the cruft that the default org-html exporter insisted on adding than it would have taken to just write a new blog engine from scratch and I wish I had.
There's a way to set a custom export template, but I couldn't figure it out from the docs. I found and still do find the emacs/org docs to be poorly written for someone who doesn't already understand the emacs internals, and I wasn't willing to spend the time to become an emacs internals expert just to write a blog.
So I lived with a half-baked org->pandoc->html solution for a while but now I'm on Jekyll and much happier with my blogging experience.
I regret it.
I decided to use an off-the-shelf theme, but it didn't quite meet the needs and I forked it; as it so happens Hugo breaks userland relatively often and a complex theme like the one I have requires a lot of maintenance. Like.. a lot.
Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
So, advice: submit the binary you used to generate the site to source control. I know git isn't the best at binary files, but I promise you'll thank me at some point.
Unless the new version of the software includes some feature I need, I can be totally fine just running an old version forever. I could just write down the version of the SSG my site builds with (or commit it to source control) and move on with my life. It'll work as long as operating systems and CPU architectures don't change too much (and in the worst case scenario, I'm sure the tech exists to emulate whatever conditions it needs to run). Some software is already 'finished' and there's no need to update it, ever.
Like most build systems work; for example, when you pin a toolchain version in rust-toolchain.toml and only bump it when you explicitly want to. This way a fresh checkout will still use the older version.
Once set up, all you need to do is:
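    nix develop    # enter the pinned devshell (commands assumed from the linked repo)
    zola build     # whatever tool the shell pins; Zola in my case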
The beauty of this approach is that it extends to almost any CLI tool you can think of :) My devshell can be found here and is dead simple: https://github.com/stusmall/stuartsmall.com/blob/main/defaul...
I used Zola for my SSG and can't think of the last breaking change I've hit. I just use the pattern of locked nix devshells for everything by default. The extra tools are used for processing images or cooklang files.
For Hugo, there is Hugo Version Manager (hvm)[0], a project maintained by Hugo contributor Joe Mooring. While the way it works isn't precisely what you described, it may come close enough.
[0]: https://github.com/jmooring/hvm
I say this as someone who uses Hugo and is regularly burned (singed) by breaking changes.
Pinning your version is great until you trip across a bug (usually rendering, in my case) and need to upgrade to get rid of it. There goes a few hours. I won’t even mention the horror of needing a test suite to make sure the rendering of your old pages hasn’t changed significantly. (I ended up with large portions of text in a code block, never tracked the root cause down… probably something to do with too much indentation inside a bulleted list. It didn’t render that way several years before, though.)
0: not all, I use cargo to manage the rust toolchain
Also, you know that you can do a binary search for the version that works for you? 0.154.0, 0.77.0, 0.115.0 ... (had to do it once myself)
Alternatively, there are apparently some Nix flakes that have been developed.
So, there are options.
I just recommend pinning your version and being intentional about upgrades.
Oh definitely. How can you suggest adding a binary to a git repository? It's a bad idea on many levels: it bloats the repository by several orders of magnitude, and it locks you to the chosen architecture and OS. Nope, nope, nope.
Pretty sure the version of Hugo used to generate a site is included in metadata in the generated output.
If you have a copy of the site from when it last worked, then assuming my above memory is correct you should be able to get the exact version number from that. :)
My needs for a site are pretty simple, so I might just go with the custom-built one to be honest.
If it breaks, I can just go look in the mirror for the culprit =)
Or use HVM and submit the .hvm file (which is just a text file with the Hugo version that you use)
Right now, you are pretty much locked into the theme (and its version) you chose when you set up your website for the first time.
No need for the entire binary.
Just put `go run github.com/gohugoio/hugo@<version> "$@"` into a `hugo.sh` script or similar that's in source control, and then run that script instead of the Hugo binary.
You'll need Go installed, but it's incredibly backwards compatible, so updating to newer Go versions is very unlikely to break running the old Hugo version.
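For example (the version is illustrative - pin whichever one your site builds with):

    #!/bin/sh
    # hugo.sh: run Hugo at a pinned version via the Go toolchain
    exec go run github.com/gohugoio/hugo@v0.154.0 "$@"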
Had the same problem. Binary search is the latest trick people use.
For SSG there's not much point in upgrading if everything works, and planned migration beats the churn in this case.
You can just print the Hugo version in an HTML comment to track it in git.
Hugo-papermod, the most famous Hugo theme, doesn't support the latest 10 releases of Hugo.
So, everyone using it is locked into using an old version (e.g. via Docker).
> Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
I've had the same issues as you, and yes, I agree that pinning a version is very important for Hugo.
It's more useful for once-and-done throwaway sites that need some form of structure that a static site generator can provide.
I maintained a personal fork of Zola for my site (and a couple of others), and am content to just identify the Git repository and revision that’s used.
Zola updates broke my site a few times, quite apart from my patches not cleanly rebasing. I kept on the treadmill for a while, initially because of a couple of new features I did want, but then decided it wasn’t necessary. You don’t need to run the latest version; old is fine.
—⁂—
One piece of advice I would give for people updating their SSG: build your site with the old and new versions of the SSG, and diff the directories, to avoid regressions.
If there are dynamic values, normalise both builds before diffing: for example, if you have timestamp-based cachebusting, zero all such timestamps with something like `sed -i -E 's/\?t=[0-9]+/?t=0/' **/*`. Otherwise regressions may be masked.
I caught breakages a couple of times this way. Once was due to Zola changing how shortcodes or Markdown worked, which I otherwise might not have noticed. (Frankly, Markdown is horrible for things like this, and Zola’s shortcodes badly-designed; but really it’s mostly Markdown’s fault.)
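As a concrete sketch of the workflow (Zola in my case; binary names and paths illustrative):

    # build with the old and new binaries, normalise, then compare trees
    ./zola-old build --output-dir /tmp/site-old
    ./zola-new build --output-dir /tmp/site-new
    diff -r /tmp/site-old /tmp/site-new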
I’ve had amazing success debugging compile errors with Claude Code.
Perhaps a coding agent could help you get it going again?
A) Low-stakes application with
B) nearly no attack surface that
C) you don’t use consistently enough to keep in your head, but
D) is simple enough for an experienced software developer to do a quick sanity check on and run it to see if it works.
Hell, do it in a sandbox if you feel better about it.
If it was a Django/Node/rails/Laravel/…Phoenix… (sorry, I’ve been out of my 12+ years web dev career a short 4 years and suddenly realized I can only remember like 4 server-side frameworks/environments now) application, something that would run on other people’s devices, or really anything else that produces an executable output, then yeah fuck that vibe coding bullshit. But unless you’ve got that thing spitting out an SPA for you, then I say go for it.
* I have forked some public repository that has kept up with upstream (i.e., lots of example code to draw from)
* Upstream is publishing documentation on what's changing
* The errors are somewhat google-able
* Can be done in a VM and thrown away
* Limited attack surface anyway.
I think you're downvoted because the comment comes across as glib and handwavy (or not moving the discussion forward.. maybe?), and if it was a year ago I would probably argue against it.. but I think Claude Code can definitely help with this.
It just didn't exist as it does now back in ~2023, or whenever it was that I originally started having issues.
---
That said: it shouldn't be necessary. As others in this thread have articulated (well, imo) sometimes software is "done" and Hugo could be "done" software, except it's not; so the onus is on the operator to pin their definition of "done" version.. which is not what you'd expect.
Yep. I missed the mark.
OP seemed down and out about their blog being broken. So I was trying to put the idea across as not something to be afraid of.
I should’ve just said it - LLMs are perfect for this use case.
I am the parent, and I am indeed down about it. :P
It's a fair fix today like I mentioned, but back when it happened it wasn't available, and anyway, as I mentioned it shouldn't have been necessary.
Granted, mine is not sophisticated at all, and simple by design. But I'm curious what kinds of issues pop up.
No need for docker.
Nobody can point to a reason why it's a good idea for a site with any interactivity now.
All the supporters here are all the same: "I had to do a whole bunch of mental gymnastics and compromises to get <basic server side site feature> but it's worth it!" But they don't say why it was worth it, beyond "it's easy now <after lots of work costs sunk>".
When you try get to why they did it in the first place, it's universally some variation on "I got fed up with <some large server side package> so took the nuclear SSG route <and then had to eventually rewrite or get someone else's servers involved again>"
Part of this is a me problem: a personal website should be owned by the person, IMO. A lot of people are fine to let other people own parts of their personal websites, and SSGs encourage that. What even is a personal website if it's a theme that looks like someone else's, hosted and owned on someone else's server - why not just use Facebook at that point?!
This is the part I'm struggling with. That's the view I held from 2016 - 2024. Practically though, it's only true if you want a leaflet website with 0 interactivity.
If you want _any_ interactivity at all (like, _any_ written data of any kind, even server or visitor logs) then you need a server or a 3rd party.
This means for 99% of personal websites with an SSG, you need a real server or a 3rd party service.
When SSGs first came around (2010 - 2015) compute was getting expensive, server sides were getting big and complex, bot traffic solutions were lame, and all the big tech companies started offering free static hosting because it was an easy free thing to offer.
Compare this to now, in 2026: it's apparently nothing special to handle the Hacker News front page on free or cheap compute. Things like Deno, Bun, even Go and Python make writing quick, small, modern server sides so much quicker, easier and safer. Cloudflare and/or CrowdSec can cover 99% of bot and traffic issues. It's possible to get multiple free compute instances with multiple GB of memory now.
I didn't mean to imply there's some sinister plot of people maliciously encouraging people to use SSGs to steal their stuff, but that's the reality that modern personal webdev has sleepwalked into. SSGs were first sold to make things better performing and easier than things were at the time. Pretty much any "server anywhere" you own now will be able to run a handwritten server doing SSR markdown -> HTML now.
So why force yourself to have to start entertaining ideas like making your visitors download multi-megabyte client side index files to implement search, or embedded iframes and massive external JS libraries for things like comment sections? Easier-looking SSG patterns like that typically break all the stuff required to keep the web open and equal, like screen readers, low bandwidth connections and privacy. (Obviously SSR doesn't implicitly solve these, but many of these things were originally conceived with SSR in mind and so are naturally more compatible.)
Ask anyone who's been in and out of web dev for more than 15 years to really critically think about SSGs in depth, and I think they'll conclude they offer a complete solution for maybe 1% of websites, yet they seem to be recommended in 99% of places as the only worthy way to do websites now. But when you pick it apart and try it, you end up in Jeff's position: statically rendered pages (the easy bit) and a TODO with a list of compromising options for basic interactivity. In 5 years' time, he'll have complex SSG pipelines running almost 24/7, or a complex mesh of dependencies on external services that are constantly changing or trying to start charging him more to deal with his own creations.
I really hope I'm wrong.
If I was using macOS, then my Hugo was probably very old, since I often forget to update brew packages and end up running very old software.
But, that's what I thought to do first also.
In the end, it becomes not worth the hassle, and spending time fixing it means that whatever I was going to write gets pushed out of my head, and it's very difficult to even bother.
I'll probably go back to Svbtle.
If you encode the transformations that your desired SSG should perform by writing the processing rules as plain text source code that a browser is capable of executing (i.e., an "HTML tool" or something adjacent[1][2]), then you can just publish this "static site generator" itself as yet another page on your static site.
To spell it out: running the static site generator to create a new post* doesn't need to involve anything more than hitting /new.html (or whatever) on the live site, clicking the button for the type=file input on that page, using the browser file picker to open the directory where the source to your static site lives, and then saving the resulting ZIP somewhere so the contents can be copied to whatever host you're using.
1. <https://simonwillison.net/2025/Dec/10/html-tools/>
2. <https://crussell.ichi.city/pager.app.htm>
* in fact, there's nothing stopping you from, say, putting a textarea on that page and typing out your post right there, before the new build
About a year ago I converted my 500+ post Jekyll blog to Hugo, overall it's been a net win but boy do I find myself looking up syntax in the docs a lot. Thankfully not so much nowadays but figuring out the templating syntax was rough at the time.
Jeff, you don't have to set draft to false. You can separate your drafts into a different directory and use Hugo's cascade feature to handle it. Also you don't have to update the date in your frontmatter if you prefix the file with YYYY-MM-DD and configure Hugo to use that.
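For the date part, the config looks roughly like this (a sketch of Hugo's front matter configuration):

    # hugo.toml: read the date from a YYYY-MM-DD filename prefix first
    [frontmatter]
      date = [":filename", ":default"]

And for drafts, a content/drafts/_index.md with a cascade along these lines (sketch):

    ---
    cascade:
      draft: true
    ---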
Just a heads up, you didn't mention this in your post but Hugo adds a trailing slash for pretty URLs. I don't know if you had them before but it's potentially new behavior and canonical URL differences unless you adjust for that.
When I did the switch from Jekyll to Hugo, I wrote a 10,000 word post with the gory details and how I used Python and shell scripts to automate converting the posts, plus covered all of the gotchas I encountered. There are sections focused on the above things I mentioned too: https://nickjanetakis.com/blog/converting-my-500-page-blog-f...
Why? I'm using Jekyll and been happy with it. What am I missing?
> That gives near instant live reload when writing posts which makes a huge difference from waiting 4 seconds.
Mhm. Why? I can write all of my post and look at it only afterwards? Perhaps if there's a table or something tricky I want to check before. But normally, I couldn't care less about the reload speed.
> I use that plugin because it digests your assets by adding a SHA-256 hash to their file names. This lets me cache them with nginx. I can’t not have that feature.
Why?
My site has a fixed max width which is what most tablets or desktops will view it as.
Sentence display width is something I pay attention to. For example sometimes I don't want 1 hanging word to have its own full line (a "hanger") because it looks messy. Other times I do want it because it helps break up a few paragraphs of similar length to make it easier to skim.
Seeing exactly what my site looks like while writing lets me see these things as I'm writing and having a fast preview enables that. Waiting 4 seconds stinks.
> Why? [asset digesting and cache busting with nginx]
It helps reduce page load speeds for visitors and saves bandwidth for both the visitor and your server. If their browser already has the exact CSS or JS file cached locally, this allows you to skip a server side call to even determine if the asset can be served locally or needs an update from the server.
The concept of digesting assets with infinitely long cache header times isn't new or something I came up with. It's been around for like 10+ years as a general purpose optimization.
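The mechanism itself is a couple of shell lines (file names illustrative); the web server then serves the hashed name with a far-future cache header:

    # content-hash the asset so its URL changes only when its bytes do
    hash=$(sha256sum app.css | cut -c1-8)
    cp app.css "app-${hash}.css"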
SSGs are good for static sites with no interactivity or feedback. If you want interactivity or feedback, someone (you or a 3rd party service provider) is going to have to run a server.
If you're running a server anyway, it seems trivial to serve content dynamically generated from markdown - all an SSG pipeline adds is more dependencies and stuff to break.
I know there's a fair few big nerd blogs powered by static sites, but when you really consider the full stack and frequency of work that's being done or the number of 3rd party external services they're having to depend on, they'd have been better by many metrics if the nerds had just written themselves a custom backend from the start.
Jeff: I think you'll regret this. I think you'll waste 5 - 10 years trying to shoehorn in basic interactivity like comments, and land on a compromised solution.
I also used and managed Drupal and Joomla before I went to SSGs, and then finally realised there's a sensible midpoint for the pain you're feeling: you write/run a simple server that dynamically compiles your markdown - good ol' SSR. It's significantly lighter, cheaper and easier than Drupal, and lets you keep all the flexibility and features you need a server for. Don't cave to the "self hosted tech was too hard so I took the easy route that forces me to also use 3rd party services instead" option.
SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
Until you have enough visitors or evil AI bots scraping your site so that it crashes, or if you're using an auto-scaling provider, costs you real money.
The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
For blogs, number of reads vs number of comments or other actions that require a server is probably on the order of 100:1 or 1000:1, even more if many of the page loads are bots/scrapers.
> SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
Looks like Jeff plans to do exactly that: https://github.com/geerlingguy/jeffgeerling-com/issues/167
There have been multiple blog posts on HN from people who've received a hug of death and handled it fine with basically free or <$10/month VMs.
A couple of gigs of RAM and 2 cores can take viral posts and the associated bots. 99% of personal websites never go viral either.
> The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
This is my exact argument against SSGs, and Jeff's post proves it: it's easy to use an SSG to generate web pages, but the moment you want comments, or any other bells and whistles, you do what Jeff's going to have to do and say you'll do it later, because there's no obvious easy solution that doesn't work against an SSG.
> Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
EXACTLY! This is my point! Why not just SSR the markdown on the server you're already running?!
This is the opposite of what Jeff and 99% of other SSG users do, they switch to SSGs to get rid of dealing with servers, only to realise they need servers or third parties, but then they're sunk-cost-fallacied into their SSG by the time they realise.
The Markdown-to-templated-HTML pipeline code is the same whether it runs on each request or on content changes, so why not choose the one that's more efficient? Serving static HTML also means that the actually important part of my personal webpage (almost) never breaks when I'm not looking.
SSGs force people into particular ways of doing all the other parts of a website by depending on external stuff. This is often contrary to long term reliability, but nobody associates those challenges with the SSG that forced the external dependencies.
It becomes a sunk cost fallacy because people do what Jeff has done, they switch to an SSG in the promise of an easier website and proudly proclaim they're doing things the new best way. But they do the easy SSG bit (the content rendering) and then they create a TODO with all the compromised options for interactivity.
When they've got to a feature complete comparison, they've got a lot more dependencies and a lot less control/ownership, which inevitably leads to future frustrations.
The end destination for most nerdy personal website is a hand crafted minimal server with minimal to no dependencies.
The code surface with SSG + 1 or 2 small self-hosted OSS tools is much, much smaller than it ever was running Drupal or another CMS.
But all you've done is bring on all the pain and compromise of having to think from an SSG perspective, and that has created problems which you've already identified you'll figure out in the future.
I'm suggesting 2 or 3 small self-hosted OSS tools, where one is a small hand crafted server that basically takes a markdown file, renders it, and serves it as plain HTML with a header/footer.
This is more homogeneous, with fewer unique parts/processes, and doesn't have the constraint of dealing with an SSG.
I remember my own personal pain from 2010 - 2016ish of managing Drupal and Joomla. I did exactly the same as you in 2016 and went all in on SSGs, and in 2024 I realised all of the above. I feel like I wasted years of potential development time reinventing basic personal website features to try and work with an SSG, and you literally have a ticket to do just that: https://github.com/geerlingguy/jeffgeerling-com/issues/167. 1 of your 3 solutions involves letting someone else host your comments :(
A custom framework/server is the end destination for all nerdy personal websites - I can't wait to see what you make when you realise this:)
edit/p.s. I love you and all your work. Sorry for sounding disagreeable; I'm excited to see what you learn from your SSG journey. I hope you prove me wrong!
For me, an unstated reason for SSG is being able to scale to millions of requests per hour without scaling up my server to match.
Serving static HTML is insanely easy, even on a cheap $5-10/month VPS. Serving anything dynamic at all is an order of magnitude harder.
Though... I've been using Cloudflare since 2022, after I started getting regular targeted DDoSes (was fun initially, seeing someone target different parts of Drupal, until I just locked down everything except for the final comment pages). The site will randomly get 1-2 million requests in a few minutes, and now Cloudflare eats those quickly, instead of my VPS getting locked up.
Ideally, I'll be able to host without Cloudflare in front at some point, but every month, because of one or two attacks, the site's using 25-35 TB of bandwidth (at least according to CF).
I really want to know because there is a Drupal 7 site that I need to migrate to something but I need good search on it (I’m using solr now).
Edit: I should have specified that I need more functionality than just word searching. I need filtering (ie faceted search) too. I’ve used a SSG that uses a JavaScript index and appreciate it, but that’s not going to cut it for this project.
Of course, for my site I just redirect the user to a search engine plus `site:stavros.io`.
When does this become 1 step forward with the SSG and 2 steps back with search solutions like this?
Nothing is perfect, but the above is really simple to host, is low maintenance, and easy to secure.
Assuming 500 bytes of metadata + URL per blog post, a one megabyte index is enough for 2000 blog posts.
As already mentioned, you don't generate search result pages, because client side JavaScript has been a thing for decades already.
Your suggestion of converting markdown on every request also provides near zero value.
Writing a minimal server backend is also way easier if you separate it from the presentation part of the stack.
Based on https://news.ycombinator.com/item?id=46489563, it also seems like you fundamentally misunderstand the point. Interactivity is not the point. SSGs are used for publishing writing the same way PDF is used. Nobody sane thinks that they need a comment section in their PDFs.
You may be interested in Backdrop, which is a maintained fork of Drupal 7.
https://backdropcms.org/
(No experience with it personally. Only know about it from a friend who uses it.)
This is the exact opposite of what static site generation does.
If you don't use an SSG, this step is done by virtue of the server running.
Every time a comment was added, it just generated a full-ass static web page.
For my website, I do both. Static HTML pages are generated with a static site generator. Comments are accepted using a server-side program I have written using Common Lisp and Hunchentoot.
It's a single, self-contained server side program that fits in a single file [1]. It runs as a service [2] on the web server [3], serves the comment and subscriber forms, accepts the form submissions and writes them to text files on the web server.
[1] https://github.com/susam/susam.net/blob/0.4.0/form.lisp
[2] https://github.com/susam/susam.net/blob/0.4.0/etc/form.servi...
[3] https://github.com/susam/susam.net/blob/0.4.0/etc/nginx/http...
It looks like you did exactly what Jeff did: got fed up with big excessive server sides and went the opposite way and deployed and wrote your own minimal server side solutions instead.
There's nothing wrong with that, but what problem were you solving with the SSG part of that solution? Why would you choose to pregenerate a bunch of stuff which might never get used any time anyone comments or updates your website, when you have the compute and processes to generate HTML from markdown and comments on demand?
The common sales points for SSGs are often:
- SSGs are easier (doesn't apply to you because you had to rewrite all your comment stuff anyway)
- cheaper (doesn't apply to you since you're already running a server for comments, and markdown SSR on top would be minimal)
- fewer dependencies (doesn't apply to you, the SSG you use is an added dependency to your existing server)
This largely applies to Jeff's site too.
Don't get me wrong, from a curious nerd perspective, SSGs presented the fun challenge of trying to make them interactive. But now, in 2026, they seem architecturally inappropriate for all but the most static of leaflet sites.
I was not trying to solve a specific problem. This is a hobby project and my choices were driven mostly by personal preference and my sense of aesthetics.
Moving to a fully static website made the stack simpler and more enjoyable to work with. I did not like having to run a local web server just to preview posts. Recomputing identical HTML on every request also felt wasteful (however trivially) when the output never changes between requests. Some people solve this with caching, but I prefer fewer moving parts, not more. This is a hobby project, after all.
There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
An added bonus is portability. The entire site can be browsed locally without a server. In fact, I use relative internal links in all my HTML (e.g., '../../foo/bar.html' instead of '/foo/bar.html') so I can browse the whole site directly from the local filesystem, from any directory, without spinning up a web server. Because everything is static, the site can also be mirrored trivially to hosts that do not support server-side programming, such as https://susam.github.io/ and https://susam.codeberg.page/, in addition to https://susam.net/. I could have achieved this by crawling a dynamic site and snapshotting it, which would also be a perfectly acceptable solution. Static site generation is simply another acceptable solution; one that I enjoy working with.
This, definitely.
I think until you experience your first few DDoSes, you don't think about the kind of gains you get from going completely static (or heavily caching, sometimes at the expense of site functionality).
Anyone who wants/needs interactivity is digging themselves into a hole.
It has supported RSS since practically the beginning, and RSS later served as a foundation for a backup and restore system. A few years ago, I implemented SSG functionality (it exports HTML, CSS, images, etc. in a zip).
https://github.com/theandrewbailey/gram
However, some people like building websites and are fine with that. Plus, it allows you to write another blog post :)
I manage multiple Astro sites and eventually they have all needed at least a few small server endpoints. Doing that with Astro is very simple and it can be done without making the static portions of the site a pain to maintain.
Is anyone willing to give feedback on it whatsoever?
https://tariqdude.github.io/Github-Pages-Project-v1/visual-s...
https://gohugo.io/content-management/comments/
This includes a giant list of open source commenting systems.
I really don’t understand why people commonly say static site generators are a good candidate for building your own when there are a good selection of popular, stable options.
The only thing I don’t like about Hugo is the experience of using other people’s themes.
Getting someone else's SSG to do exactly what you want (and nothing more) takes longer than just building it yourself. Juice isn't worth the squeeze.
> It took me a weekend to write the initial Perl script that made this site. It took me another weekend to do the Rust rewrite (although porting all the content took two weeks). These are not complicated programs.
My last Hugo site took 30 minutes to deploy, not a whole weekend. Picked a theme, pasted in content.
> You want free web hosting? Hugo might be the right option.
An extremely good reason to pick Hugo especially if you don’t have the know-how to build your own SSG. You don’t need to know a programming language at all to use it.
Again, I have to throw criticism toward this idea that everyone who wants a static site generator already has the skills required to make one.
And I’m not saying it covers every use case, like the kind of person who is willing to pay $100+ per year for a full-blown solution like Shopify or Squarespace. It fits a niche: someone who wants their content online without coding, with no hosting cost, and who doesn’t want to rely on third-party platforms like Substack.
If you're fine for 3rd parties to own all your comments and content, why even take on the extra effort of hosting or managing or building your own website? That's basically what social media is for.
It’s going to be easier to self-host a drop-in comment system than an entire dynamic site plus/including comment system.
It's easier for a server to render markdown than it is for an SSG site to do server stuff.
Your suggestion for comments is to run a server/use a third party, and do SSG. My suggestion is to just run a server. One is clearly easier as it has fewer steps.
The idea that you can run a decent personal website without compromising on interactivity, and without running a server or using 3rd parties is a myth. As soon as you accept that you have to run a server, SSG becomes an unnecessary extra step.
With the SSG you’re just managing your static markdown for your site’s content, then you’re dropping in a tiny extra piece for comments.
The comment self-hosting is a simple docker container with minimal configuration, an out of the box just works type of service.
Building a personal website that is hosted along with the interactivity integrated is more like managing an entire application, which is exactly what Jeff described with his pain using Drupal. He didn’t actually need all the interactivity that a full blown hosted site offers.
For example, if I run a PHP, Django, or NodeJS based website, now I’ve got to periodically update my whole site’s codebase to keep up with supported/safe versions of PHP, Python, or Node/npm packages.
With the SSG plus comment system, you're pretty much just pulling the latest Docker image for the comment system, and everything else is just static.
I think you’d also have to agree that outsourcing comments to a 3rd party service is potentially a simpler/cheaper exercise than outsourcing the entire site. I see that some of the Hugo supported commenting systems seem to have a free tier with no ads that should support Jeff’s traffic.
Another interactive example is Netlify’s forms system, which is included in their free product.
Hugo is very well established, but at the same time it's known for not caring too much about introducing breaking changes. I think any project of that age should respect its large userbase and provide a strong guarantee of backwards compatibility for the inputs and outputs it decides to define for itself, rather than revolving in an eternal 0.x syndrome, calling itself young enough to still be seeking its footing on stability. But I digress... and in fact, Hugo hasn't been great in that regard. Themes and previously well-functioning inputs do break with updates, which, here in this house of mine, is a big drawback.
As of today, the [docs](https://gohugo.io/templates/lookup-order/) still haven't been fully adjusted to reflect the new system:
> We did a complete overhaul of Hugo’s template system in v0.146.0. We’re working on getting all of the relevant documentation up to date, but until then, see this page.
I don't mind breaking changes, but it'd sure be nice if the documentation reflected the changes.
Best SSG is mostly down to Hugo and Pelican as far as I can tell.
I've always loved SSGs, but ActivityPub integration is also looking very attractive absent wider adoption of RSS.
I used to use Nikola, but gave up on it for two reasons. One was that it kept adding every possible feature, which meant an ever-growing number of dependencies, which gets scary. That also made it hard to use: e.g. "how do I embed a YouTube video" becomes a trawl through documentation, plugins, arbitrary new syntax, etc.
But the biggest problem, and one that can affect all SSGs, is that they try to do incremental builds by default. Nikola at the time was especially bad at this: it didn't realise that some changes mattered, such as changes to config files, themes, and templates, and it was woolly about timestamps of source files versus output files.
This meant it committed the cardinal sin: clean builds produced different output from incremental builds.
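One defensive pattern against that sin, sketched here in Go with the directory names as assumptions: fingerprint everything that can affect output (config, themes, templates), and throw the incremental cache away whenever the fingerprint changes.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// hashTree fingerprints every file under root, so that any edit to
// config, themes, or templates changes the fingerprint.
func hashTree(root string) (string, error) {
	h := sha256.New()
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		f, err := os.Open(p)
		if err != nil {
			return err
		}
		defer f.Close()
		io.WriteString(h, p) // include file names, so renames count too
		_, err = io.Copy(h, f)
		return err
	})
	return fmt.Sprintf("%x", h.Sum(nil)), err
}

func main() {
	// Compare these against the fingerprints stored from the last run;
	// on any mismatch, do a clean build instead of an incremental one.
	for _, dir := range []string{"config", "themes", "templates"} {
		sum, err := hashTree(dir)
		if err != nil {
			fmt.Println(dir, "unreadable, forcing a clean build:", err)
			continue
		}
		fmt.Println(dir, sum)
	}
}
```

Timestamp comparisons alone can't catch this class of change, which is exactly where Nikola fell down.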
Pelican has kept it simple.
I think the only downside is that the project site's documentation feels really well done... up to a point. It's like they were on a great roll documenting everything, and then stopped at ~90% completion. By this I mean that the high-level stuff is well documented, but little details are missing for the last 10%. Then again, since I only use Python a little here and there, maybe that's why some things merely "seem" to be missing a few details. By the way, if any project maintainers are out there, please do not take offense at my opinion here; I very much value what you do (I mean, I still use Pelican)!
Other than my feelings about the documentation, if you don't need to customize too much with Pelican, it's a really great SSG.
To be frank, in using it for well over a decade I think something broke only once or twice. It's pretty stable and they give plenty of deprecation warnings.
(I see what you're getting at, but Astro has to be _the worst_ example. I have migrated off Hugo to my own SSG, but I don't hate it.)
Using Zola's GitHub Actions to test/build and deploy to GitHub Pages too.
Some guides say to add submodules. I favor direct inclusion and just overriding layouts as you see fit.
As far as comments, I’ve seen it done, but as far as I know it’s a bunch of custom code. One example I found: https://andreas.scherbaum.la/post/2024-05-23_client-side-com...
Then a couple of months ago I started comparing the big SSG tools, after wanting something a bit less held together with duct tape... After a lot of experimenting I settled on 11ty at the time, but I really don't enjoy writing Liquid templates, and writing reusable components in Liquid felt very clumsy. I just wish it were much easier to use JSX-based templates with 11ty, but every step of the way feels like I'm working against the "proper" way to do things.
So over the Christmas holiday I've been playing around with NextJS SSG, and while it does basically everything I want (with some complicated caveats), I also can't help feeling like I'm using an oil rig to make a pilot hole when a drill would do just fine...
Anyone got any recommendations for something in between 11ty and NextJS? I'd love something structured similarly to 11ty, but using JSX with SSG that then gets hydrated into full-blown client-side components.
The other thing I've been meaning to try is going back to something custom again, but built on top of something like Tempest [1] to do most of the heavy lifting of generating static pages, though that obviously wouldn't help at all with client-side components.
[1]: https://tempestphp.com
Doesn't Eleventy support most of the common old-school templating languages? I once converted a site using Mustache from Punch [1] to Eleventy.
Eleventy is great, and in some ways I prefer it to Hugo if build time isn't an issue. At least templates don't break, as many of the comments here say they do with Hugo.
I eventually redid the site from scratch (with a bit of vibecoding magic back when v0 got me into it) with Astro.
[1] https://github.com/laktek/punch
I found it hard to get Next to reliably treat static content as actually static (and thus cacheable), and it felt like a huge bundle of complexity for such a simple use case.
This Christmas, I redesigned my website [1] into modular "middlewares" with the idea that each middleware has its own assets and embed.FS included, so that I can e.g. use the editor to write markdown files with a dynamic backend for publishing and rendering, and then I just generate a static version of my website for the CDN of choice. All parts of my website (website, weblog, wiki, editor, etc) are modular this way and just dispatch routes on a shared servemux.
The markdown editor turned out to be a nice standalone project [2], and I customized the CommonMark format a bit with a header for metadata like title, description, tags, and a teaser image that is integrated with the HTML templates.
Considering that most of my content was just markdown files already, the migration was pretty quick, and it's database free so I can just copy the files somewhere to have a backup of everything, which is also very nice.
[1] https://cookie.engineer
[2] https://github.com/cookiengineer/golocron
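For anyone curious what that modular shape can look like, here's a minimal Go sketch. The Module interface and all names are invented for illustration; they're not the actual API of the site above or of golocron:

```go
package main

import (
	"embed"
	"io/fs"
	"net/http"
)

// Hypothetical module interface: each part of the site registers its
// own routes (and embedded assets) on a shared mux.
type Module interface{ Mount(mux *http.ServeMux) }

// Assumes a weblog/assets directory exists at build time.
//go:embed weblog/assets
var weblogAssets embed.FS

type Weblog struct{}

func (Weblog) Mount(mux *http.ServeMux) {
	// Serve this module's own embedded assets under its own prefix.
	assets, _ := fs.Sub(weblogAssets, "weblog/assets")
	mux.Handle("/weblog/assets/", http.StripPrefix("/weblog/assets/",
		http.FileServer(http.FS(assets))))
	mux.HandleFunc("/weblog/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("rendered weblog page")) // markdown rendering goes here
	})
}

func main() {
	mux := http.NewServeMux()
	for _, m := range []Module{Weblog{} /* , Wiki{}, Editor{}, ... */} {
		m.Mount(mux)
	}
	http.ListenAndServe(":8080", mux)
}
```

One nice property of this layout: a static export can be produced by requesting each registered route and writing the responses to disk, though the real project may do it differently.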
Previously: <https://news.ycombinator.com/item?id=29384788>
I'm amazed there still isn't a decent, free, simple-to-host CMS solution with live page previews and a basic page builder, though. Is there one?
There's https://demo.decapcms.org/ (previously Netlify CMS), which you install and run by adding a short JavaScript snippet to a page. It connects to GitHub directly to edit content. You can run it locally or online, but you need some hosting glue to connect to GitHub; Netlify provides this, but more options would be nice, and I think they limit how many total users can connect on free plans. You can get something like a page builder set up via custom content blocks, but I don't think there's a simple way to render live previews via Hugo (written in Go) in a browser. A JavaScript-based SSG would help here, but then you have to deal with the JavaScript ecosystem.
@geerlingguy Not a huge deal, but I noticed (scanning with https://www.checkbot.io/) that if you click a tag in a post, it hits an unnecessary redirect, a speed bump that's easy to fix: e.g. the post links to https://www.jeffgeerling.com/tags/drupal, which then redirects to https://www.jeffgeerling.com/tags/drupal/.
Internal redirects are really easy to miss without checking with a tool because browsers aren't noisy about it. Lots of sites have unnecessary redirects from URLs that use http:// instead of https://, www vs no-www, and missing/extra trailing slashes, where with some redirect configs you can get a chain of 2 or 3 redirects before you get to the destination page.
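If you want to audit your own site for chains like that, here's a small Go sketch that follows redirects one hop at a time and prints each one (the URL is just the example from this thread; substitute your own):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Don't auto-follow redirects, so every hop in the chain is visible.
	client := &http.Client{
		CheckRedirect: func(*http.Request, []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	url := "https://www.jeffgeerling.com/tags/drupal"
	for hop := 0; hop < 10; hop++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println(err)
			return
		}
		resp.Body.Close()
		fmt.Println(resp.StatusCode, url)
		next, err := resp.Location() // errors with ErrNoLocation when done
		if err != nil {
			return
		}
		url = next.String()
	}
}
```

Any run that prints more than one 30x line before the final 200 is a chain worth collapsing in your redirect config.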
I was frustrated that (because my posts are less frequent) changes in Hugo and my local machine could lead to changes in what is generated.
So I attached a webhook from my website's GitHub repo to trigger an AWS Lambda which, on merge to main, automatically pulled in the repo plus a version-locked Hugo and themes. It then did the static site build in-Lambda and uploaded the result to the S3 bucket that backs my website.
This created a setup where I can now publish to my website from any machine that can edit my git repo. I found it a wonderful mix of the WordPress-like ability to edit my site from anywhere, along with the assurance that there's nothing that can really fail* (well, a failure would likely just block the deploy, but I made copies of my dependencies where I could, so it's very unlikely).
But really, the main thing I love is not having to maintain much of anything here... I go months without any concern about whether the website functions... unlike every WordPress or similar site I help my friends run.
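The handler for a setup like that might look roughly like this hypothetical Go sketch; the repo URL and binary paths are assumptions, and the webhook wiring plus the S3 upload are elided:

```go
package main

import (
	"context"
	"os/exec"

	"github.com/aws/aws-lambda-go/lambda"
)

// Hypothetical handler: assumes git and a version-locked hugo binary
// are bundled with the function (e.g. in a Lambda layer under /opt).
func handler(ctx context.Context) error {
	steps := [][]string{
		{"git", "clone", "--depth=1", "https://github.com/you/site.git", "/tmp/site"},
		{"/opt/hugo", "--source", "/tmp/site", "--destination", "/tmp/public"},
	}
	for _, s := range steps {
		if err := exec.CommandContext(ctx, s[0], s[1:]...).Run(); err != nil {
			return err // a failed step simply blocks the deploy
		}
	}
	// Upload /tmp/public to the site bucket here, e.g. with
	// aws-sdk-go-v2's S3 upload manager.
	return nil
}

func main() { lambda.Start(handler) }
```

Pinning the Hugo binary inside the function is what removes the "my local machine changed, so the output changed" problem.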
prose is fully open source as well: https://github.com/picosh/pico
It even has a Hugo migration repo for when users want to make the jump
https://github.com/picosh/prose-hugo
Alternatively you can use https://pgs.sh to deploy your Hugo blog using just rsync. The entire workflow starts and finishes in the terminal.
I was thinking of making a GitHub Action that uploaded the image from a given branch, deleted it, set the URL, and finally merged only the .md files to main.
Or do you just check in images to GitHub and call it a day?
Usually you just store the images in the same git repo as the markdown. How you initially host the static site once generated is up to you.
The problem with storing binaries in Git is when they change frequently, since that will quickly bloat the repo. But, images that are part of the website will ~never change over time, so they don't really cause problems.
Sorry I tried to help? If that's the response I get for helping, good luck...
> All I meant was a way to avoid storing images in git, the rest is quite simple.
There is no good way to do that, and no way that I would recommend. Git is the correct solution, if that is where you are storing the markdown. No fancy git tools are required.
Working Copy (Git for iPad) handles submodules reasonably well. I have a few repos I'm working on cloned there, and others not, so I don't use as much space.
Instead, I eventually just created a Nix environment with dependency versions compatible with what GitHub uses, and I've been pleased since.
Ironically, my company's blog and websites are built with Hugo.
[0]: https://code.zikani.me
This resonates with me! Both in terms of things I use and things I make - I want them to "just work"
A blog is mostly a static site that changes occasionally, so a static site generator is a much better fit. Caveat: comments. But personally, I don't want to moderate, and the WordPress site I administered for work didn't want comments either (yet even with them disabled, new comments and trackbacks somehow got into the database). When I finally got approval to turn our blog into a mostly static site, it was joyous, and the server(s) stopped burning CPU like nobody's business to serve the same handful of pages. We used PHP to manage serving translated blog entries, but that's not much slower than a fully static file when your PHP logic is dead simple and short.
This is because Google Scholar treats PDFs as first-class citizens, so your important blog posts can make their way into academia.
Maybe a plugin can solve this particular gripe...
Today, if I were setting up a blog to host just some text and images, a vibe-coded SvelteKit project using the static adapter[1] would easily solve every single problem that I have. And I would still be able to use the full power of the web platform if I need anything further customized.
[1]: https://svelte.dev/docs/kit/adapter-static
Jeff's approach of writing a separate comments application is interesting. I've seen people reuse GitHub issues to accomplish that, but that limits your audience participation to GitHub. The other obvious choice, I think, is a headless CMS. I'll be curious to see where he goes with it.
https://tariqdude.github.io/Github-Pages-Project-v1/visual-s...
I ran into all these problems just last summer trying to launch something new, so I said "fk it" and went all in on an overkill demo just to see what's possible.
Was formatting old articles at all difficult when moving to a new way of publishing?
Edit: thx for answers below!
Don't know what it is either, but I'd like to go off-topic and remember with fondness the time when you could subscribe to RSS feeds directly in Safari. Google Reader was replaceable; a direct integration into the browser was not.
And for a short time, RSS was the bee's knees across the entire Internet. Apple had the best support for it, and almost put NetNewsWire out to pasture, until they just removed all the baked-in RSS functionality entirely :(
But I use Reeder across Mac, iPad, and iPhone to keep up with feeds.
[1] https://github.com/quoid/userscripts
Though I did run it on Drupal off a Pi cluster for a few weeks as an experiment.
I built some automation that helps me test and deploy changes to S3 as well: https://github.com/carlosonunez/https-hugo-bloggen. It's clunky but works for me! Feel free to fork/PR if you're interested, of course.
It was a great move; I couldn't be happier. Running my blog is basically free (because nobody reads it, lol, but also because it's served by S3 and CloudFront and the number of monthly requests is still within the Free Tier).
At the time, some folks were questioning why I built this instead of moving to Netlify. I wanted control over how my sites were deployed and didn't want to pay some provider for convenience I knew I could build myself. Netlify got AI-pilled some time ago, which makes me feel vindicated in my position.
Or I might stick it somewhere else, as an easter egg, we'll see!