Automated Docker Deployment to Kubernetes with Buildbot
In this blogpost we’ll show you how to use Buildbot to automatically build and deploy a containerized microservice application to a Kubernetes cluster.
We’ll walk through the various relevant Buildbot concepts as we go, but we do assume that you have a basic working knowledge of git and Kubernetes.
Why?
In short: build automation is good.
In long: automating the build and deploy process for your application saves time and avoids deployment errors (one manual docker build and push plus one manual Kubernetes update per microservice, multiplied by the number of microservices in your application, adds up to a lot of wasted dev time and opportunities for error).
Automated deployment means easy access to a dev instance guaranteed to be running the most recent code. This is useful for everyone on the project, from engineers troubleshooting bugs and running tests to managers demoing the application to potential customers.
Build automation of this sort also leads neatly into automated integration testing, with the build system running the test suite against the dev instance every time it deploys.
For all these reasons and more, I recently built out a Buildbot configuration to perform automated deployment of one of our applications to a Kubernetes cluster.
Prerequisites
To follow along with this tutorial you’ll need the following:
- A Kubernetes cluster
- A git server containing your code, accessible at git.local (we use GitLab here at Two Six Labs, but pretty much any git hosting solution will work)
- A docker registry at registry.local (we use Harbor, but pretty much anything will work)
- A server running Buildbot with a worker called worker0
- A spacecat
A spacecat?
We’re assuming that your application is called spacecat and that each of its constituent microservices lives in its own git repo, with its own Dockerfile, under git.local/spacecat.
The overall repo arrangement should look something like the following:
spacecat/
|-service0
| |-app.py
| |-requirements.txt
| |-Dockerfile
| ...
|-service1
| ...
...
Step 1: Scripts
To begin with, we need to create some scripts for Buildbot to run to build and deploy our docker images.
Build
Put the following in a file called build-project.sh on your Buildbot server:
#!/bin/bash
PROJECT=$1
rm -rf $PROJECT
git clone --recursive git@git.local:spacecat/$PROJECT
cd $PROJECT
git checkout dev
if [ -f "docker-build.sh" ]; then
    ./docker-build.sh;
else
    docker build -t spacecat/${PROJECT,,} .;
fi;
This script clones a repository from your git server, checks out the dev branch, and builds a docker image called spacecat/$PROJECT.
By convention, if a project needs something more than a plain docker build to produce its docker image, we encapsulate its build process in a script called docker-build.sh located at the root of the repository.
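To make that concrete, here’s a purely hypothetical docker-build.sh for a service0 that has to compile static assets before its image can be built (the npm commands and file layout are illustrative, not part of spacecat):

#!/bin/bash
# docker-build.sh -- hypothetical per-project build script
# build-project.sh invokes this from the root of the freshly cloned repository
set -e

# extra work a plain `docker build` can't do on its own, e.g. compiling
# static assets that the Dockerfile expects to COPY into the image
npm install && npm run build

# whatever else happens, finish by producing an image named spacecat/<project>
# so that push-all-dockers.sh (below) can find and push it
docker build -t spacecat/service0 .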
Push
Once we’ve used this script to build all our images, we need to push them up to our docker registry at registry.local so Kubernetes and our fellow devs can pull them. To do this, put the following in a script called push-all-dockers.sh on your Buildbot server, filling in the list of PROJECTS with all the services you wish to push for deployment:
#!/bin/bash
PROJECTS=(service0 service1 service2 ...)
TIMESTAMP=`date "+%Y-%m-%d"`
for PROJECT in ${PROJECTS[@]}; do
    docker tag spacecat/$PROJECT registry.local/buildbot/$PROJECT:latest
    docker tag spacecat/$PROJECT registry.local/buildbot/$PROJECT:buildbot-$TIMESTAMP
    docker push registry.local/buildbot/$PROJECT:latest
    docker push registry.local/buildbot/$PROJECT:buildbot-$TIMESTAMP
done
This script tags each of the built images with :latest and a timestamped tag and pushes them to your registry. We push to the buildbot folder on the registry to segregate automated builds from any hand-built images we may have pushed, which go in a separate folder. Feel free to alter this if you organize your docker registry differently.
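For example, running the script on 2017-10-02 (the example date used in the deploy script below) would push service0 to the registry as:

registry.local/buildbot/service0:latest
registry.local/buildbot/service0:buildbot-2017-10-02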
Deploy
Finally, once we’ve got our images built and pushed to registry.local, we need to tell Kubernetes to pull the new images and deploy them. To do this, put the following in a script called deploy-project.py on your Buildbot server, again filling in the list of IMAGES as appropriate:
import datetime
from kubernetes import client, config

IMAGE_PREFIX = 'registry.local/buildbot'
IMAGE_TAG = 'buildbot-%s' % datetime.date.today().isoformat() # eg 2017-10-02
NAMESPACE = 'spacecat'
IMAGES = ['service0', 'service1', ...]

# NB: requires ~/.kube/config to be present and correct
config.load_kube_config()
api = client.AppsV1beta1Api()

deployments = api.list_namespaced_deployment(NAMESPACE)
for deployment in deployments.items:
    found_container = False
    for container in deployment.spec.template.spec.containers:
        current_image = container.image.split('/')[-1].split(':')[0] # eg registry.local/spacecat/core:latest -> core
        if current_image in IMAGES:
            # we've built a newer version of this container, so patch it
            container.image = '%s/%s:%s' % (IMAGE_PREFIX, current_image, IMAGE_TAG)
            found_container = True
    if not found_container:
        continue # no need to patch the deployment if we didn't modify any of its containers
    api.patch_namespaced_deployment(deployment.metadata.name, NAMESPACE, deployment)
This script uses the Kubernetes API to iterate through the deployments on your cluster, updating any containers that use any of the images that we just built to use the new versions we just pushed to the registry.
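After a run you can sanity-check the result from any machine with access to the cluster; these commands assume the spacecat namespace and the hypothetical service names used throughout this post:

# list the deployments in the spacecat namespace along with the images they now run
kubectl -n spacecat get deployments -o wide

# or inspect a single deployment's images directly
kubectl -n spacecat get deployment service0 -o jsonpath='{.spec.template.spec.containers[*].image}'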
Step 2: Buildbot Configuration
Now that we’ve got scripts to build and deploy our app (already a great improvement on the dread clumsiness of a fully manual deploy!), it’s time to bring in Buildbot to do it all automatically.
Builders and Schedulers
Buildbot thinks about the world in terms of builders and schedulers. If you want the full details you can read their docs, but we won’t need much beyond the basics, so here’s a quick explanation:
Builders
Builders tell Buildbot how to do something. We’ll create individual builders for each step of our build and deploy process (i.e. for each script we created above) and then weave them together.
While using multiple builders like this requires a bit more configuration work up front than using one monolithic builder for the entire process, it pays off in more granular control over the process and better error reporting, since Buildbot doesn’t work in steps smaller than a builder.
Schedulers
Schedulers tell Buildbot when to run builders. We’ll use them to combine our builders, run the build and deploy process automatically, and provide a way to kick it all off on-demand as well.
The Setup
All of the following code goes in the main Buildbot configuration file, master.cfg. We assume that master.cfg has not been substantially modified from the template provided by Buildbot and, specifically, that there is a block like
# This is the dictionary that the buildmaster pays attention to. We also use
# a shorter alias to save typing.
c = BuildmasterConfig = {}
near the top of the file.
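The snippets below also lean on two other pieces of the stock template: the plugin import that provides util, steps, and schedulers, and the empty builders/schedulers lists. If your master.cfg has been trimmed down, make sure something equivalent to the following is still present (a sketch of the relevant template lines, not new configuration):

from buildbot.plugins import *

# the append() calls below require these lists to exist
c['builders'] = []
c['schedulers'] = []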
We’ll create builders for our basic tasks, attach schedulers to them, then combine them to set up the fully automated pipeline.
Basic Tasks
We begin by wrapping the first of the scripts we created above and telling Buildbot how to build our docker images and clean up afterwards:
# build-dockers
factory = util.BuildFactory()
# Create a directory to build everything
factory.addStep(steps.MakeDirectory(dir="build"))
# For every project...
projects = ['service0', 'service1', ...]
for project in projects:
    # ...call build-project.sh to check out and build the project
    factory.addStep(steps.ShellCommand(
        command='/home/user/build-project.sh %s' % (project),
        workdir='build',
        usePTY=True))
# Clean up by removing all build directories
factory.addStep(steps.ShellCommand(
    command='/bin/bash -c "rm -rf build"',
    alwaysRun=True,
    workdir=''))
c['builders'].append(
    util.BuilderConfig(name="build-dockers",
                       workernames=["worker0"],
                       factory=factory))
This builder, as its name build-dockers implies, uses build-project.sh to build our docker images in a build folder.
# push-dockers
factory = util.BuildFactory()
factory.addStep(steps.ShellCommand(
    command='/home/user/push-all-dockers.sh'))
c['builders'].append(
    util.BuilderConfig(name='push-dockers',
                       workernames=['worker0'],
                       factory=factory))

# deploy-dockers
factory = util.BuildFactory()
factory.addStep(steps.ShellCommand(
    command='/usr/bin/python /home/user/deploy-project.py'))
c['builders'].append(
    util.BuilderConfig(name='deploy-dockers',
                       workernames=['worker0'],
                       factory=factory))
These two builders wrap the push and deploy scripts we created earlier to push our images to the registry and deploy them to our Kubernetes cluster.
Pulling Them Together
Builders can call other builders, so we’ll create a spacecat-build-push-deploy builder that encapsulates our entire process.
However, they can only do so via schedulers, so first let’s add a scheduler for each of the three basic task builders we just created:
c['schedulers'].append(schedulers.Triggerable(
    name='trigger-build-dockers',
    builderNames=['build-dockers']))
c['schedulers'].append(schedulers.Triggerable(
    name='trigger-push-dockers',
    builderNames=['push-dockers']))
c['schedulers'].append(schedulers.Triggerable(
    name='trigger-deploy-dockers',
    builderNames=['deploy-dockers']))
With that out of the way, the following creates the spacecat-build-push-deploy builder:
# spacecat-build-push-deploy
factory = util.BuildFactory()
factory.addStep(steps.Trigger(schedulerNames=['trigger-build-dockers'],
    waitForFinish=True,
    haltOnFailure=True))
factory.addStep(steps.Trigger(schedulerNames=['trigger-push-dockers'],
    waitForFinish=True,
    haltOnFailure=True))
factory.addStep(steps.Trigger(schedulerNames=['trigger-deploy-dockers'],
    waitForFinish=True,
    haltOnFailure=True))
c['builders'].append(
    util.BuilderConfig(name="spacecat-build-push-deploy",
                       workernames=["worker0"],
                       factory=factory))
Note that we mark each step as haltOnFailure, which tells Buildbot not to move on to the next step if the current one fails. For example, if pushing the images fails, there’s no reason to go on to the next step and attempt to deploy.
Making It Happen
Finally, we add two more schedulers:
# Schedule a build, push, and deploy of Spacecat at 2:00 AM every night
c['schedulers'].append(schedulers.Nightly(
    name='nightly-spacecat-build-push-deploy',
    builderNames=['spacecat-build-push-deploy'],
    hour=2, minute=0))

# Allow us to build, push, and deploy Spacecat when we want
c['schedulers'].append(schedulers.ForceScheduler(
    name="force-spacecat-build-push-deploy",
    builderNames=["spacecat-build-push-deploy"]))
The first tells Buildbot to update our Kubernetes cluster every night at 2 AM (a nice dead time when no one should be in the office doing work!) and the second tells Buildbot to put a button in its web interface that we can push to run the update process on-demand.
Step 3: Authentication and Final Configuration
Now that we’ve got a master.cfg that tells Buildbot how to use our scripts to build and deploy our project, we need to wire all our services together and get Buildbot talking to our git server, our docker registry, and our Kubernetes cluster.
The Git Server
Buildbot needs to be able to pull the code it’s going to build, so we need to give it access to our git server.
We’re using GitLab at Two Six Labs, which provides Deploy Keys: SSH keys managed via the GitLab UI that can be granted read-only access to repositories. (Buildbot’s interactions with GitLab are limited to cloning repositories, so while we could give the key write access, we shouldn’t.)
- Follow the GitLab documentation to create a deploy key
- Add it to each of the projects to be built by Buildbot
- Download the private key file (id_rsa) for the deploy key
- Upload it to the Buildbot server as ~/.ssh/id_rsa
- Run eval `ssh-agent`; ssh-add ~/.ssh/id_rsa to install the key (a quick sanity check follows below)
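If you want to confirm the deploy key works before handing things over to Buildbot, you can test it from the Buildbot server. (This assumes your GitLab instance answers SSH at git.local and that service0 is one of your repos; adjust as needed.)

# GitLab should reply with a welcome message rather than a permission error
ssh -T git@git.local

# and a read-only clone of one of the spacecat repos should succeed
git clone git@git.local:spacecat/service0 /tmp/deploy-key-test && rm -rf /tmp/deploy-key-test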
If you’re not using GitLab you’ll have to do something else to give Buildbot access to your code; consult the documentation for your system.
The Docker Registry
Buildbot needs to be able to push the docker images it builds to our docker registry, so we need to authenticate its local docker daemon against our registry.
Install docker on the Buildbot server and then log in to the registry with
docker login registry.local
If the registry is served insecurely, docker login will throw an error; in that case you’ll need to edit /etc/docker/daemon.json to contain the following, and restart the docker daemon, before logging in:
{
    "insecure-registries": [
        "registry.local"
    ]
}
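On a typical systemd-based host (an assumption; use whatever your init system provides), the restart-and-login sequence looks like:

sudo systemctl restart docker
docker login registry.local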
The Kubernetes Cluster
Buildbot needs to be able to update our Kubernetes cluster to use the new images it builds, so we also need to give it access to our cluster.
Install the kubectl package and use pip to install the python package kubernetes.
Grab the kube-config file for your cluster and upload it to the Buildbot server as ~/.kube/config.
(If you installed Kubernetes manually, you will have created a kube-config as part of the setup process; if you used a tool like Rancher to create your Kubernetes cluster, it should provide a way to export a pre-built kube-config file for your cluster.)
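A quick sanity check that both kubectl and the python client can reach the cluster from the Buildbot server (not part of the pipeline, just a one-off verification):

# kubectl should list the cluster's nodes using ~/.kube/config
kubectl get nodes

# the python client used by deploy-project.py should see the same cluster
python -c "from kubernetes import client, config; config.load_kube_config(); print(len(client.CoreV1Api().list_node().items), 'nodes visible')"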
Buildbot
Now that the Buildbot server’s been authorized to talk to the rest of our infrastructure, the last step is to get Buildbot set up!
Upload the master.cfg we created in the last section to the Buildbot server and place it where Buildbot expects it. We have Buildbot installed in ~/buildbot, so master.cfg should be placed at ~/buildbot/master/master.cfg.
Log into the Buildbot server and run buildbot reconfig master to restart Buildbot with the new config, then restart the worker with buildbot-worker restart worker0.
(If you installed Buildbot inside a virtualenv, you’ll need to activate the virtualenv before running the commands above.)
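Put together, and assuming Buildbot lives in a virtualenv at ~/buildbot/sandbox with the worker’s basedir at ~/buildbot/worker0 (both paths are assumptions; substitute your own), the whole sequence looks something like:

# activate the virtualenv Buildbot was installed into (path is an assumption)
source ~/buildbot/sandbox/bin/activate

# reload the master with the new master.cfg, then restart the worker
buildbot reconfig ~/buildbot/master
buildbot-worker restart ~/buildbot/worker0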
Step 4: Wrapping Up
And with that, we’re done. Time to sit back and enjoy the fruits of your labor!
If you’re too impatient to wait till the next morning to see the nice shiny new images in your Kubernetes cluster, you can go into the Buildbot web UI, navigate to the page for the spacecat-build-push-deploy builder, and click the force-spacecat-build-push-deploy button to force Buildbot to run the update-and-deploy process immediately.
Coda: A Note on Docker Images
As you may have noticed while reading through the build scripts above, we never set up Buildbot to remove old docker images. Keeping layers around between builds makes the nightly upgrade process faster, since Buildbot can reuse the cached layers for any service that didn’t change, but it does consume more hard drive space and means that Buildbot isn’t building images from scratch each time.
If this is a concern for you, you could create a clean-dockers builder that removes all docker images from the Buildbot server and reclaims the hard drive space:
# clean-dockers
factory = util.BuildFactory()
factory.addStep(steps.ShellCommand(
    command='docker rmi -f $(docker images -q)'))
c['builders'].append(
    util.BuilderConfig(name='clean-dockers',
                       workernames=['worker0'],
                       factory=factory))
This builder can then be run either as the last step of the spacecat-build-push-deploy builder (to ensure all builds are from scratch) or as a standalone, manually triggerable builder used to reclaim disk space when the Buildbot server is running low.
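Wiring it up follows the same pattern as the builders above; here’s a sketch of both options to adapt into your master.cfg (note that the Trigger step belongs in the spacecat-build-push-deploy factory defined earlier, before that factory is handed to its BuilderConfig):

# give clean-dockers a Triggerable scheduler, like the other basic-task builders
c['schedulers'].append(schedulers.Triggerable(
    name='trigger-clean-dockers',
    builderNames=['clean-dockers']))

# option 1: run it automatically as the final step of spacecat-build-push-deploy
# (add this to the spacecat-build-push-deploy factory, not the clean-dockers one)
factory.addStep(steps.Trigger(schedulerNames=['trigger-clean-dockers'],
    waitForFinish=True))

# option 2: expose it as an on-demand button in the web UI instead
c['schedulers'].append(schedulers.ForceScheduler(
    name='force-clean-dockers',
    builderNames=['clean-dockers']))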