I’m a big advocate of using services like Heroku or AWS Elastic Beanstalk rather than running your own servers where you can, but sometimes running your own is the right thing to do. For my home automation setup, I want to keep as much of it on the local network as possible, so this was definitely one of those situations.
So I bought an Intel NUC - it’s small, fairly quiet so I can run it indoors, and yet fairly powerful (Core i5, 4GB RAM, 1TB HDD in the one I bought).
I want Infrastructure as Code. I want a documented, reproducible, version-controlled setup. One perspective is that this is overkill for “just a home server” - my position is that it’s even more necessary, because I’m going to fiddle with it sporadically, the setup is a one-off, and I’m the only one maintaining it. However, I still want a fairly minimal setup.
In the past I’ve done things like writing a bunch of Puppet or Ansible and/or running a suite of VMs, but this can get tedious fast - hoping that a module exists, handling dependencies, encoding installation instructions as config management, etc.
What I really want is akin to a self-hosted PaaS that I can easily deploy on a single box, and that allows me to define apps declaratively.
Thanks to Docker, Docker Compose, Traefik, and a bit of systemd config, that’s fairly straightforward!
TL;DR: Build/use services with Docker, define how they should run with Docker Compose, hook them into the OS lifecycle with systemd config, point a wildcard domain at the host server, and use Traefik as a reverse proxy for individual domains.
Docker provides a nice 12factor-y interface between apps and the underlying system.
Dockerfiles provide an accessible way of creating an image, and the contained app is (relatively) isolated from the host.
Plenty of services ship official Docker images, and it’s often fairly straightforward to DIY where they don’t.
So, for example, with Prometheus, I can just
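```shell
# assumed invocation of the official prom/prometheus image;
# publishes the Prometheus web UI on port 9090
docker run -p 9090:9090 prom/prometheus
```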
and off I go!
Compose is “a tool for defining and running multi-container Docker applications”, as the Docker docs put it.
This lets me take a Docker image and say “use this config, bind to these ports, run on this network, mount these volumes” etc.
For example, a basic
docker-compose.yml for running Home Assistant might look like:
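(a sketch - the image tag, host paths and port are illustrative, not my exact config)

```yaml
version: "3"
services:
  home-assistant:
    # official Home Assistant image from Docker Hub
    image: homeassistant/home-assistant:latest
    volumes:
      # persist configuration on the host
      - /srv/home-assistant/config:/config
    ports:
      - "8123:8123"
    restart: unless-stopped
```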
This is my basic unit of “service declaration” - I have a git repo, it contains a directory per service, each directory contains a
docker-compose.yml describing how to run the service (e.g. my Home Assistant config)
I want my services to start when my host starts, and stop when my host stops.
My server runs Debian as a host OS, and that uses the systemd System and Service Manager.
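A templated systemd unit is enough to hook each Compose directory into the host lifecycle - something like this sketch (the unit name and paths are illustrative):

```ini
# /etc/systemd/system/docker-compose@.service (illustrative)
[Unit]
Description=%i via docker-compose
Requires=docker.service
After=docker.service

[Service]
# %i expands to the instance name, i.e. the service directory
WorkingDirectory=/srv/services/%i
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now docker-compose@home-assistant` starts the service and ties it to boot/shutdown.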
DNS and port conventions are great, hardcoded host names / magic port numbers much less so.
I don’t want to have to point my phone at
http://$host-name.local:12345/, I want
http://service.example.com/ - even better if I can sort HTTPS!
- Declare a shared Docker network for your services
- Deploy a Traefik instance using this network, giving it access to the host’s Docker socket (note that you are trusting Traefik significantly with this)
- Configure your other services to use the same network and set appropriate labels for Traefik to handle accordingly
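In Compose terms that looks roughly like this (a sketch assuming Traefik v2; the network name and domain are illustrative):

```yaml
# Traefik itself, attached to a pre-created shared network called "web"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # this is the significant trust grant mentioned above
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - web

networks:
  web:
    external: true
```

Each proxied service then joins the `web` network and sets labels like `traefik.enable=true` and `traefik.http.routers.myservice.rule=Host(`myservice.srv.example.com`)`.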
I have a wildcard domain (
*.srv.example.com) pointed at the host via my local DNS resolver, so new services automagically appear with FQDNs on port 80 - nice, memorable and clear.
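If your local resolver is dnsmasq (mine here is an assumption - any resolver with wildcard records works), that’s a one-liner:

```
# /etc/dnsmasq.conf - point *.srv.example.com at the server (IP illustrative)
address=/srv.example.com/192.168.1.10
```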
Adding a new service is pretty straightforward - build/find a Docker image, add
docker-compose.yml config to run it, add a new entry to the
Makefile for systemd config,
make install and run it - Traefik will automagically proxy to it based on the labels.
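The Makefile itself is little more than enable-and-reload bookkeeping - a sketch, assuming a templated `docker-compose@.service` unit and a `services/` directory layout (both illustrative):

```makefile
SERVICES := $(notdir $(wildcard services/*))

install:
	sudo cp docker-compose@.service /etc/systemd/system/
	sudo systemctl daemon-reload
	for s in $(SERVICES); do \
		sudo systemctl enable --now docker-compose@$$s; \
	done
```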
Updating a service is relatively straightforward - update the version in the
docker-compose.yml and restart the service. I hope that eventually Dependabot will get Docker Compose support, at which point it can open PRs for new dependency versions on GitHub and my host config can handle fetching updates and restarting services, but this works adequately for now.
Some of the backing service config is still somewhat manual - e.g. data volumes and networks are created by hand, but that’s fairly straightforward -
mkfs and add it to
/etc/fstab - I could Ansible it, and I’ll probably regret not doing so at some point, but the returns don’t feel hugely worthwhile - their existence is at least documented in my Compose files if I ever need to recreate them.
So, this all feels like a pretty sweet spot - I can easily deploy new services and configure them as I need, while keeping the bulk of the config in git with all the benefits of Infrastructure as Code, and in a fairly self-contained fashion away from whatever else I have on the host - all without reaching for a heavier approach like deploying Kubernetes or Cloud Foundry.
If you’re deploying services to the internet, life is short, use someone else’s PaaS if you can - for a simple home server setup though, this is quite nice!