A couple of weeks ago, news reached me from my previous provider that they were retiring all their vservers, sending me on the quest of finally moving my server and my website.
As my website is not quite up to date anymore (to say the least), I decided that this was the perfect opportunity to finally get moving and augment the old site with a blog where I could dump the stuff I occasionally find worthy of sharing. So, I got myself a shiny new KVM-based server and set out to combine those three lovely pieces of software.
Gentoo has been my Linux distro of choice for nearly ten years now. I don't want to dwell on the reasons why, tastes differ; suffice it to say that I like the rolling release system, the flexibility and the vast size of the Portage tree. In addition, Gentoo supports hardening via grsecurity / PaX and has all recent versions of Docker in Portage (Gentoo's package system).
Docker is a system for isolating server processes in separate containers that can be plugged together. Docker differentiates between images (file system images) and containers which provide the environment for a specific process each. Containers are created as copies of images and can be converted back to images via "committing". The process of building new images can be scripted using so-called Dockerfiles.
Containers can communicate with each other via shared folders and via the network, and docker provides each container with details on the network configuration of its dependencies via environment variables. Docker implements copy-on-write, so the process of creating containers and images is pretty cheap when it comes to resources.
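For example, when a container is started with a link to another container, docker injects the peer's address and port into the environment under the link alias (a sketch; the exact variable names depend on the alias and exposed ports):

```shell
$ docker run --rm --link ghost:ghost busybox env | grep GHOST
```

This would list variables such as GHOST_PORT_2368_TCP_ADDR, which a dependent process can read at startup instead of hard-coding addresses.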
I find Docker pretty neat for a number of reasons:
- All dependencies of each process are provided by the container instead of the host system
- Containerized processes are isolated from each other and from the host, limiting the damage a compromised or misbehaving process can do
- An application can be split among separate containers (administration / storage / application) which can be managed, backed up and replaced individually.
Gentoo Hardened and Docker
Kernel Configuration and PaX
The first decision to make was how much hardening to include in the kernel config. While mandatory access control via SELinux or grsecurity greatly enhances security, it requires access control to be configured properly for the whole system. If you run a container, the MAC system will also be enforced inside the container, which is a potential source of issues.
Therefore, I decided to leave out MAC and just go with the other PaX and grsecurity memory / file system hardening options. While this is straightforward, there are some booby traps related to chroot handling hidden here if you intend to use containers. I followed this great blog post on LXC and grsecurity, which saved me a lot of trial and error.
The remaining required kernel config options for LXC / Docker are covered in the Gentoo wiki. In addition, the LXC ebuild will warn you if any required kernel config options are missing. Therefore, even if you are not going to actually use LXC, it is a good idea to enable the corresponding useflag in order to get this validation.
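For reference, a non-exhaustive sketch of the container-related kernel options (option names may vary slightly between kernel versions; the wiki and the ebuild check are authoritative):

```text
# Namespaces and cgroups, required by LXC / Docker
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
# Networking for the docker0 bridge, veth pairs and NAT
CONFIG_VETH=y
CONFIG_BRIDGE=y
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y
CONFIG_NF_NAT=y
# Devicemapper thin provisioning for the storage backend
CONFIG_BLK_DEV_DM=m
CONFIG_DM_THIN_PROVISIONING=m
```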
AuFS vs. Devicemapper vs. Btrfs - which storage backend to choose?
I've had my kernel panicked by AuFS while playing with Docker before, so I decided to go with the devicemapper backend instead, which works with the vanilla kernel. To this end, I just enabled all devicemapper targets as modules (although the thin provisioning target should be sufficient). Btrfs would be another great option, but I didn't want to try that one on a server just yet :)
Docker is in portage, so installing should be simple. Still, I encountered a linker bug which prevented Docker 0.9.1 from building; 0.10.0 works fine, however. The issue is related to the PaX toolchain and can be worked around by passing additional linker flags.
I decided to split Ghost into three containers.
1. Storage Container
Ghost needs storage for images, themes and its SQlite database (I could also have gone for MySQL, but I figured SQlite would be sufficient and simpler to set up). As I want to be able to update Ghost without touching the data, a separate container for a storage volume makes sense.
2. Admin container
Ghost's storage folder also contains all installed themes. Installation and modification of themes works by manually adding and modifying files there, so I want to be able to access and manipulate the storage folder. To this end, I decided to create an admin container which runs a shell and can access the storage container.
3. Ghost container
A separate container contains ghost itself. As this container does not contain any mutable data, updating ghost is as simple as creating a new ghost image and recreating the ghost container from the updated image.
Image and Container setup
You can find the corresponding Dockerfiles and scripts on Github. Each Dockerfile comes with a Makefile which will build and tag the image.
1. Storage container
The storage container is the simplest. As it will just be a data storage and doesn't actually run any processes, I based it on the busybox container as suggested by the Docker documentation.
FROM busybox
VOLUME ["/var/storage"]
The Makefile will tag the image as cs-storage:latest, and from this, the storage container is created via
$ docker run --name=ghost-storage cs-storage
2. Base image for the admin and ghost containers
I chose CentOS as base image for both admin and ghost containers. The corresponding Dockerfile is
FROM centos:latest
RUN rpm -Uvh http://download-i2.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
RUN yum -y update
RUN yum install -y vim screen nodejs git sudo npm
RUN mkdir /docker
WORKDIR /docker
ADD install_paxctl.sh /docker/install_paxctl.sh
RUN sh /docker/install_paxctl.sh
RUN paxctl -c -m /usr/bin/node
RUN HOME=/docker npm install -g grunt-cli bower
RUN groupadd -g 1000 user && useradd -g user -d /user -m -u 1000 user
Apart from installing the necessary yum and npm packages, this file does the following:
- Install the EPEL repo: necessary for NodeJS on CentOS
- PaX mark Node: this is a nasty bit of trickery required to run Node on a PaX-enabled system. The Gentoo ebuild takes care of this automatically, but we have to do it manually in the container. As paxctl is not available as a package from CentOS or EPEL, the install_paxctl.sh script pulls and builds it manually
- Add a user for running stuff: even though Docker uses capabilities to take away most root privileges inside the container, I feel bad running the server as root. Moreover, bower will refuse to run as root.
The Makefile will tag the image as cs-base.
3. Admin container
The admin container is created directly from the base image via
$ docker run -it --volumes-from=ghost-storage --name=ghost-adm cs-base /bin/bash
which will also attach to the shell running inside the container. When the shell has finished, you can always rerun the container via
$ docker start -ia ghost-adm
Before running ghost for the first time, we have to set up the permissions and directory structure within the storage directory. Inside the container, we run
$ chmod 777 /var/storage
$ su user
$ for i in apps images data themes; do mkdir -p /var/storage/ghost/content/$i; done
After that, we can change to /var/storage/ghost/content/themes and place our themes there. As a minimum, we will want the default casper theme from GitHub.
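Inside the admin container, the casper theme can be pulled straight from GitHub (git is available there, since the base image installs it):

```shell
$ cd /var/storage/ghost/content/themes
$ git clone https://github.com/TryGhost/Casper.git casper
```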
Whenever we want to add new themes, change existing ones or modify the content directory in any way later, we can fire up the admin container and do our work there.
4. Ghost container
The ghost container itself also derives from the base image. The Dockerfile I used is
FROM cs-base
USER user
WORKDIR /user
ENV HOME /user
ADD install_ghost.sh /user/install_ghost.sh
ADD config.sh /user/config.sh
RUN /bin/sh /user/install_ghost.sh
ADD config.js /user/Ghost/config.js
ENV NODE_ENV production
WORKDIR /user/Ghost
CMD ["node", "index.js"]
config.sh and config.js are the config files for the installation script and for ghost itself. Example files exist in the repo (nearly the exact versions I used). The installation script pulls ghost from GitHub and runs bower. The version used is configured in the installation script config.
The ghost config I used
- Locates the sqlite database in /var/storage/ghost
- Sets the content directory to /var/storage/ghost/content
- Configures ghost to listen on all interfaces in the container
An important pitfall in our configuration is this ghost bug. If we change the content path the way we do, ghost returns broken URLs for uploaded images. At the time of writing, this bug has been fixed on master, but not in the latest release, so for our configuration to work, we must use ghost from master ;)
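For illustration, the core of such an install script might look roughly like this (a sketch, not the exact install_ghost.sh; see the repo for the real thing):

```shell
# clone ghost from master rather than unpacking a release tarball,
# since the content-path fix is not in the latest release yet
git clone https://github.com/TryGhost/Ghost.git /user/Ghost
cd /user/Ghost
# install server and client dependencies, then build the production assets
npm install
grunt init
grunt prod
```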
The Makefile tags the ghost image as cs-ghost. The container is created via
docker run -d --volumes-from=ghost-storage -p 127.0.0.1:2368:2368 --name=ghost cs-ghost
The additional network config tells docker to expose ghost on its default port 2368 on 127.0.0.1. Voilà - we now have the ghost server up and running. Just like in the default ghost config, it listens only on the lo interface, so it is not accessible from the outside by default.
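To verify that the container is actually serving, a quick sanity check from the host (assuming curl is installed) should show ghost answering on the published port:

```shell
$ curl -I http://127.0.0.1:2368
```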
Exposing the blog on the web
Similar to ghost's default configuration, this container setup listens only on the loopback interface. If we want the blog to be accessible, we can set up a reverse proxy like nginx to forward connections to ghost. This is described at length in the ghost documentation. Nginx can also be used to encrypt admin logins via HTTPS (very advisable, I'd say) and to add gzip compression.
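A minimal nginx server block for this setup could look like the following (a sketch; blog.example.com is a placeholder for your domain):

```nginx
server {
    listen 80;
    server_name blog.example.com;

    gzip on;

    location / {
        # forward to the ghost container published on the loopback interface
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```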
Automatically starting Ghost on boot
That one is easy now: if we add the docker service to start on boot, docker will automatically restart the ghost container on boot.
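On Gentoo with OpenRC, adding the docker service (as installed by the ebuild) to the default runlevel amounts to:

```shell
# rc-update add docker default
```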
Adding and modifying themes
Adding new themes or modifying existing ones is a simple matter of firing up the admin container. From there, we can modify the installed themes at our leisure. Once we are finished, we must restart the ghost container via
$ docker restart ghost
If we want to back up the database, themes and uploaded content, we can fire up a busybox container which exports the storage directory as a tar archive
$ docker run -a stdout --rm --volumes-from=ghost-storage busybox tar -c /var/storage | gzip > backup.tar.gz
The --rm option will discard the container as soon as the command has completed. Note that the storage volume is not part of the storage container itself (a slightly confusing notion), so just committing or exporting the container will not suffice.
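Restoring works the other way round: pipe the archive into a throwaway busybox container that has the storage volume mounted (a sketch; untested against this exact setup, and note that tar strips the leading slash, so we extract relative to /):

```shell
$ gunzip -c backup.tar.gz | docker run -i --rm --volumes-from=ghost-storage busybox tar -x -C /
```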
Updating ghost in this setup is a simple matter of rebuilding the ghost image (preferably from a rebuilt base image in order to pull in any new CentOS updates), stopping the old container and creating a new one.