Real World Deployments On Kubernetes

Note: I assume you have a basic knowledge of how Kubernetes works and what its main components are, specifically Replication Controllers (RCs), Pods and the kubectl CLI.

Roll it up

The recommended way of deploying applications in Kubernetes is via its rolling-update CLI command. The synopsis for the command is as follows:

kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags]

The way rolling-update works is by increasing the number of running pods of the new controller one by one while decreasing the number of pods of the old one. This ensures the service has live pods at all times, even during deployment.

You supply the name of the old controller you want to replace (the one that’s currently running) and one of two options for the new controller, both shown in the example below:
1. A new controller name and a new docker image to deploy
2. A new controller spec – a YAML file describing the RC
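
With made-up controller and image names, just to show the shape of each form, the two options look like this:

# Option 1: name the new controller and pass the new Docker image directly
kubectl rolling-update api-v1 api-v2 --image=my-repo/api-app:v2

# Option 2: hand rolling-update a full Replication Controller spec file
kubectl rolling-update api-v1 -f api-v2-rc.yml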

The first method of supplying a new RC name and a docker image can’t really work in the long run since it has several major disadvantages:

  1. You can’t use it to deploy a multi-container pod
  2. You are limited to changing the Docker image only. You can’t deploy changes to the Replication Controller itself:
    1. Adding environment variables to the container
    2. Changing resource limits/requests
    3. Mounting volumes
    4. The list goes on…

Sooner or later you will need those capabilities, so you’d better build your deployment flow around the second method: providing a full Replication Controller YAML file to the rolling-update command.

So What’s Wrong Here?

While this method of deployment seems reasonable, it is manual in several ways:

  1. You have to manually look up the OLD_CONTROLLER_NAME before you can deploy (see the example right after this list).
  2. You have to build a new Docker image yourself, then take its name/tag and replace the old image name/tag in the RC YAML file.
  3. You have to pick a new controller name and replace the name in the old RC YAML file.
  4. You have to make sure the new RC YAML has at least one selector label that differs from the currently running RC.
  5. If you have a multi-container pod, the situation gets even worse, since you have to coordinate all of the above for more than one Docker image.
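
For instance, just the first step above means running something like the following by hand before every deploy (the app=api label is only an example of however you label your service):

# List the Replication Controllers of the service and copy the name of the
# one that is about to be replaced
kubectl get rc -l app=api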

If you deploy often, you can’t really go through this procedure manually every time. What you really want is to deploy with a single click and have an automated way to:

  1. Build the relevant docker images for the project.
  2. Build a new RC YAML file, with all the changes required by Kubernetes:
    1. New images for the containers to run
    2. A unique (and thus by definition different) RC name for the new RC
    3. A unique selector tag, as required by Kubernetes
  3. Find the OLD_CONTROLLER_NAME for you.
  4. Deploy by replacing the old controller with the new one via the rolling-update command.

At nanit.com, we have a set of conventions for each service that does all of this for us. Let’s see what a simple service composed of an API application server proxied by NginX looks like.

Directory Structure

  • nanit/
    • api/
      • server/
        • Dockerfile
        • server-code
      • nginx/
        • nginx.conf
        • Dockerfile
      • kube/
        • rc.yml
      • Makefile

This is pretty reasonable. Inside the API project, we have a directory for the server code, a directory for the NginX Dockerfile and conf, and a directory called kube for the Kubernetes resource files. At the root of the API project we have a Makefile to orchestrate everything for us.

The interesting parts are the contents of the Makefile and the rc.yml.

The RC YAML File


apiVersion: v1
kind: ReplicationController
metadata:
  name: api-{{RC_TAG}}
  labels:
    app: api
    tag: "{{RC_TAG}}"
spec:
  replicas: 3
  selector:
    app: api
    tag: "{{RC_TAG}}"
  template:
    metadata:
      labels:
        app: api
        tag: "{{RC_TAG}}"
    spec:
      containers:
      - name: api-app
        image: {{APP_IMAGE}}
        resources:
          requests:
            cpu: 500m
        ports:
        - name: application
          containerPort: 3000
      - name: api-nginx
        image: {{NGINX_IMAGE}}
        resources:
          requests:
            cpu: 100m
        ports:
        - name: http
          containerPort: 80


The YAML file looks like a regular RC definition, but with a few interesting placeholders: RC_TAG, APP_IMAGE and NGINX_IMAGE. These are replaced by the Makefile when we issue the deploy command.

The RC_TAG is a tag unique to each deployment. Any change to the API application code, to the NginX configuration or to rc.yml itself should generate a new unique RC_TAG, so we can deploy changes to any of them. The RC_TAG goes into both the RC name and the tag selector, which satisfies Kubernetes’s requirement of a different RC name and at least one different selector label. APP_IMAGE and NGINX_IMAGE go into their respective containers.
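
To make the substitution concrete, here is a rough sketch of the rendering step with invented tag values (the real ones are generated by the Makefile below):

# Hypothetical values, for illustration only
RC_TAG=1a2b3c45d6e7f89a0b1c2
APP_IMAGE=my-repo/api-app:1a2b3c4
NGINX_IMAGE=my-repo/api-nginx:5d6e7f8

# Replace the placeholders; the rendered RC is named api-1a2b3c45d6e7f89a0b1c2
# and carries tag: "1a2b3c45d6e7f89a0b1c2" as a selector label
sed -e "s|{{RC_TAG}}|$RC_TAG|g;s|{{APP_IMAGE}}|$APP_IMAGE|g;s|{{NGINX_IMAGE}}|$NGINX_IMAGE|g" kube/rc.yml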

Now we have to replace these placeholders with real deployment values, and run the kubectl rolling-update command. This is where the Makefile joins the party.

The Makefile


APP_IMAGE_TAG=$(shell git log -n 1 --pretty=format:%h server)
NGINX_IMAGE_TAG=$(shell git log -n 1 --pretty=format:%h nginx)
KUBE_TAG=$(shell git log -n 1 --pretty=format:%h kube)
RC_TAG=$(APP_IMAGE_TAG)$(NGINX_IMAGE_TAG)$(KUBE_TAG)

APP_IMAGE=my-repo/api-app:$(APP_IMAGE_TAG)
NGINX_IMAGE=my-repo/api-nginx:$(NGINX_IMAGE_TAG)

OLD_RC=$(shell kubectl get rc -l app=api -o template '--template={{(index .items 0).metadata.name}}')

define generate-rc
sed -e 's|{{RC_TAG}}|$(RC_TAG)|g;s|{{APP_IMAGE}}|$(APP_IMAGE)|g;s|{{NGINX_IMAGE}}|$(NGINX_IMAGE)|g' kube/rc.yml
endef

define get-current-rc
kubectl get rc api-$(RC_TAG)
endef

deploy: docker
	$(call get-current-rc) || $(call generate-rc) | kubectl rolling-update $(OLD_RC) --update-period="5s" -f -

docker: docker-api docker-nginx

docker-nginx:
	docker pull $(NGINX_IMAGE) || (docker build -t $(NGINX_IMAGE) nginx && docker push $(NGINX_IMAGE))

docker-api:
	docker pull $(APP_IMAGE) || (docker build -t $(APP_IMAGE) server && docker push $(APP_IMAGE))


Let’s go over the interesting parts of this Makefile:

  1. On lines 1-3 we get the latest git commit SHA for each of the folders in the project (server / nginx / kube) and use it as the respective image tag. This guarantees that every change to any of the components generates a new tag.
  2. Line 4: The RC_TAG is the concatenation of the three generated tags, meaning a change to any single component generates a new RC_TAG, just like we wanted (see the sketch right after this list).
  3. Lines 6-7 are just the names of the Docker images we want to use, with the respective tags attached.
  4. Line 9 uses kubectl to get the name of the currently running RC so we can use it as OLD_CONTROLLER_NAME in the rolling-update command.
  5. Lines 11-13 define a simple sed command that replaces all the template variables in our YAML file with the newly generated values.
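
With made-up commit SHAs, the tag composition from lines 1-4 works out roughly like this:

# Latest commit that touched each directory (SHAs are invented for the example)
git log -n 1 --pretty=format:%h server   # 1a2b3c4
git log -n 1 --pretty=format:%h nginx    # 5d6e7f8
git log -n 1 --pretty=format:%h kube     # 9a0b1c2

# RC_TAG is the concatenation, so a change to server/, nginx/ or kube/ always
# yields a new tag, and therefore a new RC name and selector label:
# RC_TAG=1a2b3c45d6e7f89a0b1c2  ->  RC name api-1a2b3c45d6e7f89a0b1c2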

That’s it. We can now type make deploy and everything happens like magic:

  1. It calls the docker targets for both the app and NginX, building and pushing the images to our repository. Note that since we try to pull the images before building, we don’t waste time rebuilding an image that already exists.
  2. Once all Docker images are ready, it first checks that we don’t already have an RC matching the current RC_TAG by running $(call get-current-rc). If we tried to roll out the very same RC, kubectl rolling-update would exit with a failure, which is exactly what this check avoids.
  3. If the new RC isn’t running yet, it means we have to deploy it. We generate the new RC YAML file and feed it into the rolling-update command with a 5-second delay between pod replacements. The default is one minute, which means deployment time equals the number of replicas times one minute: an awfully long time if you have many replicas of a service. A rough expansion of what this step runs is sketched below.
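
With the same invented tag values as before, the deploy rule boils down to roughly this:

# Skip the deploy if an RC with the new tag already exists; otherwise render
# the template and stream it into rolling-update (all values are illustrative)
kubectl get rc api-1a2b3c45d6e7f89a0b1c2 \
  || sed -e "s|{{RC_TAG}}|1a2b3c45d6e7f89a0b1c2|g;s|{{APP_IMAGE}}|my-repo/api-app:1a2b3c4|g;s|{{NGINX_IMAGE}}|my-repo/api-nginx:5d6e7f8|g" kube/rc.yml \
     | kubectl rolling-update api-9f8e7d65c4b3a2109fedc --update-period="5s" -f -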

At the end of the process above, we have a new version of our API deployed and running. The change may be to the API source code, but also to the NginX proxy configuration or even a new environment variable added to rc.yml. The beautiful thing is that we don’t even have to think about it.

We know that whenever we want to deploy, we’re just one click away from doing so.
