Kubernetes Continuous Delivery Made Easy

April 24, 2018

A declarative approach to continuous delivery on Kubernetes

Creating a Kubernetes cluster is easy! Easy enough that it can be done by issuing a simple voice command: “give me an 8 node Kubernetes, please!” (see Kelsey Hightower’s keynote at KubeCon North America, December 2017). Pretty cool, right?

What about setting up Continuous Delivery (CD) for services running on Kubernetes? It is not so easy!

Kubernetes states that it

does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements (ref).

In other words, there is no one-size-fits-all solution for Continuous Delivery.

The easiest way to get deployments working on Kubernetes is to use kubectl directly, patching the PodSpec to update the container version. For example:

  kubectl patch deployment myapp --patch \
  $'spec:\n template:\n  spec:\n   containers:\n   - name: mycontainer\n     image: nearmap/cvm-example:myversion'
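
Alternatively, kubectl set image achieves the same thing with less quoting gymnastics (a sketch using the same deployment, container and image names as the patch above):

  # Update the image of the "mycontainer" container in the "myapp" deployment
  kubectl set image deployment/myapp mycontainer=nearmap/cvm-example:myversion

  # Watch the rollout until it completes (or fails)
  kubectl rollout status deployment/myapp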

This is manual and fiddly, and decidedly not continuous delivery. A typical CD pipeline is triggered by code commits and passes through multiple stages involving code reviews, builds, tests (unit/system/UAT) and finally a deployment to a production environment if all stages pass successfully. This could look something like:

standard pipeline

A Kubernetes deployment involves interacting with the API server, which is the control plane of the whole cluster. In a CD pipeline, the CD system needs access to this server (both in terms of network and authorization). This means the following actions are required:

  1. Configure RBAC access with limited permissions to perform Kubernetes workload updates (a sketch of such a role follows this list)
  2. Expose the API server’s network interface to the CD system (this could even mean opening it up to the world if cloud services are used)
  3. Install kubectl (or use the REST API) on the CD agents
  4. Trigger kubectl commands to update the container versions
  5. Watch the rollout status and check whether a rollback is required
  6. If the rollout does not finish within the desired time (i.e. progressDeadlineSeconds), perform a rollback using the version history, if available.
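
Step 1, for example, might involve a namespaced Role along the following lines (a minimal sketch; the role name and namespace are illustrative, and it would still need to be bound to the CD system’s user or service account with a RoleBinding):

  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cd-deployer
    namespace: default
  rules:
    # Allow reading and updating Deployments only; no cluster-wide access
    - apiGroups: ["apps", "extensions"]
      resources: ["deployments"]
      verbs: ["get", "list", "watch", "patch", "update"]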

All of this is very manual and error-prone. Such integrations become incredibly hard when organisations, for obvious security reasons, want to keep their admin interfaces locked down to trusted networks. Using cloud service providers for CI then comes at the expense of security.

Kubectl is not a CI/CD tool.

Other solutions?

There are existing solutions for setting up CD pipelines for Kubernetes. Skaffold, Spinnaker, Helm and Weave Flux are some of the most popular ones we have looked at.

While each of these focuses on a particular approach to Continuous Delivery, at Nearmap we felt that none of them was exactly what we were looking for.

Like many organisations, we have hundreds of services running in multiple clusters and need a lightweight, automated system for pushing out the many software updates we make every day. All of our build and integration systems are managed services running externally in the cloud, and providing them with access to all our clusters is not something we want to set up and maintain.

Some of these solutions embrace storing all configuration and version changes in source control. Sometimes termed “GitOps”, this means all cluster changes are triggered by a commit to a repository containing configuration specific to that cluster. This certainly has some merit, since all changes are versioned and it is easy to revert to a previous commit. However, our experience of storing version numbers in Git has shown that it creates a lot of noise for services that change frequently (especially compared to the rest of the service configuration), and it is very rare that teams roll back, since the impetus is to roll forward to fix problems discovered in production. (That’s not to say we shouldn’t be able to roll back, just that it is a rare event and we don’t need to optimize for it.)

We also prefer to keep a single set of configuration for the application alongside the code and not have to deal with separate repositories or config files for different clusters. This is especially achievable with Kubernetes due to its DNS and external service support.
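
For instance, a cluster-specific dependency can sit behind a Kubernetes ExternalName service, so the application resolves the same DNS name in every cluster while each cluster maps it to a different endpoint (a minimal sketch; the service name and external hostname are illustrative):

  kind: Service
  apiVersion: v1
  metadata:
    name: billing-api                            # the name the application resolves via cluster DNS
  spec:
    type: ExternalName
    externalName: billing.staging.example.com    # differs per cluster; application config stays the same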

Ultimately we want to keep our existing managed CI infrastructure and have a simple and fully automated pipeline that runs our verification tests and pushes updates out to our clusters without the need to set up special access rules and lots of infrastructure.

Enter the Container Version Manager

Our solution uses intrinsic Kubernetes principles such as custom resource definitions (CRDs), a custom Kubernetes controller, update strategies and label/field selectors. We call it the Container Version Manager (aka CVManager) and its aim is to make Continuous Delivery dead simple.

CVManager defines a custom Kubernetes resource, ContainerVersion, which specifies rules for managing container versions. It looks like this:

  kind: ContainerVersion
  apiVersion: custom.k8s.io/v1
  metadata:
    name: myappcv
  spec:
    imageRepo: <AWS_ACC_ID>.dkr.ecr.us-east-1.amazonaws.com/nearmap/cvm-example
    tag: demo
    pollIntervalSeconds: 300
    selector:
      cvapp: myapp
    container:
      name: myapp

CVManager then uses native Kubernetes concepts to ensure that container versions of specified workload (and batch job) resources are always up to date.

Setting up your CD pipeline becomes a one-time process with three simple steps:

  1. Install CVManager in your Kubernetes cluster (a simple Helm install away)
  2. Define CV resources for your workloads
  3. Set up a rule in your CI system to tag container images with the current version (we provide tools to help with that)

An example application, cvm-example, demonstrates this easy setup process. We’ll use CircleCI (a free CI tool) for the workflow and AWS Elastic Container Registry (ECR) as our container registry.

Clone the cvm-example repository from GitHub:

 git clone git@github.com:nearmap/cvm-example

or, if you are a Go user:

 go get github.com/nearmap/cvm-example

1. Install CVM in your Kubernetes cluster

This defines the CV custom resource definitions and creates a CV controller service.

To install CVM using kubectl:

  kubectl apply -f https://raw.githubusercontent.com/nearmap/cvmanager/master/k8s/kubectl/cvmanager.yaml

or using Helm:

  helm install --name cvmanagerapp ./cvmanager

2. Define CV resources for your workloads

In this step you define CV resources for your workloads, i.e. Deployments, ReplicaSets, DaemonSets and StatefulSets (batch workloads, i.e. CronJobs and Jobs, are also supported). In cvm-example we have a Kubernetes Deployment workload that we intend to set up for continuous delivery.

The CV resource for the myapp deployment is:

  kind: ContainerVersion
  apiVersion: custom.k8s.io/v1
  metadata:
    name: myappcv
  spec:
    imageRepo: <AWS_ACC_ID>.dkr.ecr.us-east-1.amazonaws.com/nearmap/cvm-example
    tag: demo
    pollIntervalSeconds: 300
    selector:
      cvapp: myapp
    container:
      name: myapp

Here you can see a reference to the container repository, the tag that indicates the current version and selectors to identify the target workloads.
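
For the selector to match, the target deployment carries the cvapp: myapp label and contains a container named myapp. A minimal sketch of such a deployment follows (the replica count, the app label and the initial image tag are illustrative; once CVManager is running it keeps the image tag up to date):

  kind: Deployment
  apiVersion: apps/v1
  metadata:
    name: myapp
    labels:
      cvapp: myapp                    # matches the CV resource's selector
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: myapp
    template:
      metadata:
        labels:
          app: myapp
          cvapp: myapp                # label carried on the pod template as well
      spec:
        containers:
          - name: myapp               # matches spec.container.name in the CV resource
            image: <AWS_ACC_ID>.dkr.ecr.us-east-1.amazonaws.com/nearmap/cvm-example:<initial-version>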

3. Set up a deploy step in your CI tool

In this final step we set up a rule to update a container image with a tag (demo) to indicate that this image is the current version that should be running on the cluster.

CVM provides a simple Docker-based CLI command that can be used in your pipeline’s deploy step. An example of this command follows (also see the CircleCI config file in cvm-example):

  deploy: &deploy-env
    name: Deploy to env
    command: |
      docker run nearmap/cvmanager cr tags add \
        --repo ${ECR_ENDPOINT}/nearmap/cvm-example \
        --tags demo \
        --version ${CIRCLE_SHA1}

The demo tag here is a reference environment tag (in other words, a meta tag) in the container registry, which CVManager uses to identify the current version a container should be running. See below: Tags on container registry
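
One way to see the effect of the deploy step is to list the image tags in ECR afterwards; the demo tag and the commit SHA tag should point at the same image digest (a sketch using the AWS CLI with the repository name from the example above):

  # List image digests and their tags for the example repository
  aws ecr describe-images \
    --repository-name nearmap/cvm-example \
    --query 'imageDetails[].{digest:imageDigest,tags:imageTags}'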

Your CI/CD pipeline is now ready.

Watch the full demo of CVManager that shows the installation process in detail:

Benefits

Apart from simplifying your CD pipeline, CVManager also provides full visibility into container version rollout and history.

1. Current container versions

REST interfaces allow you to query the current versions of your containers. We provide both JSON and HTML formats for ease of integration into APIs and for monitoring on screenboards, etc. For example:

Current container version

2. CV rollout dashboard

Stats and events emitted by the controller give full visibility into ongoing deployments. For example:

CV rollout monitoring

3. CV update history

CVManager can also capture rollout history. This is stored in a ConfigMap and is also served over our REST interface. For example:

CV update history ConfigMap

Nearmap’s Kubernetes clusters run on Amazon Web Services (AWS). For this reason, CVManager is heavily focused on Elastic Container Registry (ECR), but Docker Hub support is also provided. We are working on adding support for additional container registries in the future.

This tool is open source and available under an MIT license here. The Docker images of this tool are available on Docker Hub. The cvm-example app used in this post is also available.

We welcome feedback and contributions.

Happy deploying.

By Suneeta Mall & Simon Cochrane