Writing is rewarding in many forms, but blogging is somewhat unique in the means of its delivery. It’s not the only form of writing that requires technology; technology has been involved in writing for almost as long as writing has existed. What makes blogging unique is the accessibility of that technology to the author. Sure, you could sign up for a managed platform like WordPress or Blogger, but you could also configure and run all of this yourself, as I do. The power, flexibility, and freedom that come with that might not be for everybody, but they could be for anybody, and that is powerful.
How I run this blog
This blog is generated with Hugo, a static site generator, and hosted on a Raspberry Pi-based Kubernetes cluster.
Static Sites
I really like the idea of a static site generator. If the purpose of your site is to deliver content that does not change rapidly (read: static), then you can cut a great deal of runtime complexity out of your deployment by simply serving files. Additionally, because there’s essentially no runtime logic behind this site, if it ever gets the fire hose of internet attention, pivoting the deployment model to your CDN of choice is very simple. Until that point, however, that extra machinery isn’t necessary.
I’m a firm believer that in order to manage scale, it’s important to pay attention to both scaling directions. So many technology platforms focus only on what you might need to do to scale up or out; whether vertical or horizontal, the scaling only ever goes in the growth direction. Little to no consideration is paid to how many resources are used in the base (unscaled) case, or to how to improve performance while keeping resources constant or even reducing them. I heard the TigerBeetle team give an InfoQ talk where they said that scaling was not just "do more with more" but "do more with less", and that resonated strongly with my experience in both manufacturing automation and system design.
Kubernetes
Version 1 of my deployment strategy takes advantage of a Kubernetes cluster I
already have running in my homelab. There are improvements I’d like to make
before I call this done, but for now, I’m running a stock nginx container with
a hand-written deployment spec. I’m injecting the static site content into a
persistent volume backed by Longhorn manually with
kubectl cp, which is gross, but it works. I self-host a private Git server and
intend, at some point, to build a more automated deployment pipeline for this.
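For reference, here is a minimal sketch of the shape of that hand-written spec. The names, namespace, image tag, and volume size are illustrative assumptions, not my exact manifest:

```yaml
# Hypothetical nginx Deployment serving static files from a Longhorn-backed
# PersistentVolumeClaim; every name and size here is a placeholder.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-content
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: blog-content
```

After a hugo build, the rendered site lands in the volume with something along the lines of `kubectl cp public/. <pod-name>:/usr/share/nginx/html`.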
My original plan was to use a server-side commit hook to trigger an rsync of
the public content into the persistent volume of the web server pod, but in
order to do that, I would need to customize the nginx image to be able to
securely receive SSH traffic. I’ve done all of this kind of plumbing before,
but I didn’t want to have all of that block getting the site up and running.
Additionally, having Longhorn share writable persistent volumes among pods on
different nodes involves some built-in NFS trickery which just adds to the
complexity of the system. When I run into those kinds of situations, I like to
take some time to see if I can design it out of the system. For now, manually
deploying the website when I post is good enough while I figure out a more
elegant deployment mechanism without importing a world’s worth of someone
else’s dependencies. A GitOps-style continuous deployment mechanism deserves
its own post anyway.
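For the curious, the shelved commit-hook idea would have looked roughly like this. This is a sketch only: the branch, namespace, pod name, and paths are all assumptions, and it swaps the rsync-over-SSH step for kubectl cp, since the stock nginx image has no SSH server to rsync into:

```shell
#!/bin/sh
# Hypothetical server-side post-receive hook (every name is a placeholder).
# It checks out the pushed tree, builds it with Hugo, and copies the rendered
# public/ directory into the web server pod's persistent volume.
deploy_site() {
  workdir=$(mktemp -d)
  # Materialize the latest commit on the main branch into a scratch directory.
  git --work-tree="$workdir" checkout -f main
  # Render the site; Hugo writes the static files into "$workdir/public".
  (cd "$workdir" && hugo --minify)
  # Copy the rendered files into the pod's html directory (assumed pod name).
  kubectl -n blog cp "$workdir/public/." blog-nginx-0:/usr/share/nginx/html
  rm -rf "$workdir"
}

# Only attempt the deploy where kubectl is actually available.
if command -v kubectl >/dev/null 2>&1; then
  deploy_site
fi
```

Even in this form, the Git server needs credentials to talk to the cluster, which is exactly the kind of plumbing I decided not to block the launch on.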