7 Reasons to Choose Kubernetes 1.2 as Your Docker Orchestration Framework

Kubernetes is one of the strongest contestants for the Docker orchestration framework throne. That was already true in version 1.1 (you can read here why), and it becomes even clearer in the newest release, 1.2.

If you’re looking for a way to deploy your Docker containers into any of your environments, Kubernetes just gave you at least 7 new reasons to choose it for the job.


Deployments

Deployments were alpha and disabled by default in K8s 1.1. In 1.2, they are beta (considered stable) and enabled by default when you start a new cluster.

I won’t go into the full details of why deploying an application was a bit tedious in K8s 1.1 (read here for more info). The main points are:

  1. You had to calculate a unique value for each deployment yourself and put it into the Replication-Controller definition file.
  2. You had to have different procedures for creating a Replication-Controller the first time and updating an existing one.
  3. You had to find the existing Replication-Controller in the system before you could deploy the new version of it via Rolling Update.

Deployments come to replace the Replication-Controller/Rolling-Update procedure. They are declarative, which is great: you don’t have to tell the cluster what to do, you just declare what you want to have, and the cluster takes care of whatever is needed to bring itself to the desired state. No need to calculate a unique value yourself, or even to find the existing deployment you want to update, anymore.

The official walkthrough suggests using kubectl create for creating a deployment and kubectl apply for updating one, but from my experience you can use apply in both cases, which means you no longer need a different procedure for creation and update.

The last great thing about the Deployments feature is support for rollbacks. Rolling back in K8s 1.1 meant re-deploying the old Replication-Controller. In K8s 1.2 you can pass the record flag when you create a deployment, which lets you roll a deployment back to a previous version whenever you need to.
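To make this concrete, here is a minimal sketch of what such a deployment definition might look like. The name, labels and image are made up for the example, and in 1.2 the Deployment API was served under the extensions/v1beta1 group:

```yaml
# web-deployment.yaml -- hypothetical example manifest
apiVersion: extensions/v1beta1   # Deployment API group in K8s 1.2
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myorg/web:1.0     # hypothetical image
        ports:
        - containerPort: 80
```

Running kubectl apply -f web-deployment.yaml --record both creates the deployment the first time and updates it on every subsequent run, and kubectl rollout undo deployment/web takes you back to the previous revision if something goes wrong.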

Multi Availability Zone Support

One of the major disadvantages of K8s until 1.2 was its lack of support for spreading applications across different AZs. It meant that your cluster lived in a single AZ, and if that AZ suffered an outage, you could lose your whole cluster. The only way to handle this kind of disaster was to manage multiple clusters, but the overhead of doing so was prohibitive.

K8s 1.2 brings full support for Multi-AZ clusters. You can easily spawn nodes in any AZ, and the scheduler is fully zone-aware when it schedules your pods onto nodes.
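If you do want to pin a pod to a specific zone rather than let the scheduler spread it, you can use the zone label that the cloud provider integration puts on nodes as a node selector. A small sketch, where the zone value and names are just placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod          # hypothetical pod name
spec:
  nodeSelector:
    # well-known zone label applied to nodes in a multi-AZ cluster
    failure-domain.beta.kubernetes.io/zone: us-east-1a
  containers:
  - name: app
    image: myorg/app:1.0         # hypothetical image
```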

While this is a significant improvement in this area, Multi-AZ support does not apply to the K8s master and its components. Your master still lives in a single AZ, and if that AZ has an outage you’ll end up in a strange state: the cluster will keep running, but the master won’t, which means operations like deployments can’t be handled.

ConfigMaps & Secrets As Environment Variables

K8s 1.1 had a single built-in option for storing configuration: Secrets. While Secrets are still the recommended way to store sensitive data, ConfigMaps let us store non-sensitive configuration in a more direct and convenient way.

The nice tweak in K8s 1.2 is that Secrets and ConfigMaps can be consumed not only as volumes (the only option in K8s 1.1) but also as environment variables in your container definition. That’s a lot more convenient than mounting a volume and reading a file at application startup just to get a simple configuration item.
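A rough sketch of what that looks like: a ConfigMap holding a plain value, and a pod that consumes it, plus a key from an existing Secret, as environment variables. All the names and values here are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # hypothetical ConfigMap
data:
  log-level: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myorg/app:1.0         # hypothetical image
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:         # pull the value straight from the ConfigMap
          name: app-config
          key: log-level
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:            # same idea for a Secret
          name: app-secrets      # assumes a Secret with this name exists
          key: db-password
```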


Daemon-Sets

Having a K8s cluster sometimes makes us forget that there are nodes underneath it. We create containers, and most of the time we don’t even know which node they’re running on.

There are times, though, when we need to handle node-related tasks. An example would be an application that gathers stats from a node and ships them to a metrics server. Another would be collecting the logs of all containers running on a node and sending them to our logging system. In each of these cases, we need exactly one container to run per node.

K8s 1.1 only offered Static Pods to achieve this. To define a static pod, we had to put a file with the pod definition in a specific folder on each node. This is obviously inconvenient, since:

  1. If we wanted to add a static pod, we had to alter every running node in the cluster.
  2. Static pods were managed locally by the kubelet, so we couldn’t query the API about them or perform any other operation on them.

K8s 1.2 introduces Daemon-Sets, which provide a more convenient way to run one pod per node. Pods belonging to a Daemon-Set are visible like every other pod in the system, and you can create and delete Daemon-Sets as you wish via the API. No need to alter files on nodes anymore.
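As a sketch, a Daemon-Set definition looks much like a Replication-Controller without a replica count. The log-collector image and paths below are made up, and in 1.2 Daemon-Sets were served under the extensions/v1beta1 group:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-collector
        image: myorg/log-collector:1.0   # hypothetical image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                 # read the node's own logs
```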

Cluster-Size & Performance

Cluster size is an important consideration for any company making decisions about its core infrastructure components. We never know how big we’re going to be a year from now, and we want to be confident that the tools we choose today won’t limit us in the future.

The new 1.2 release officially supports 1000 nodes per cluster with 30,000 pods running simultaneously.

Whether these numbers are enough depends on your own needs, but it is encouraging to see the progress the team has made since the 1.1 release: a 10x improvement in scale.

Expect to see even higher numbers in 1.3.


Jobs

Jobs allow you to run pods and verify that a certain number of them complete successfully. In K8s 1.1 we could create bare pods (without a Replication-Controller), but these pods were not guaranteed to finish at all. If, for example, the node a pod was running on got rebooted in the middle of execution, the pod would not be restarted on another node. Jobs prevent situations like this by making sure the work we submitted actually runs to successful completion.
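A minimal sketch of a Job that must see three pods complete successfully before it is considered done. The name, image and command are placeholders, and depending on your exact 1.2 setup the API group may be batch/v1 or extensions/v1beta1:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job              # hypothetical job name
spec:
  completions: 3                 # require three successful pod completions
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo doing some work && sleep 5"]
      restartPolicy: Never       # failed pods are replaced by the Job controller
```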

Not a world-changing feature, but definitely a useful one.

Project Progress

Beyond all the features and enhancements described here, you can easily get a feel for the huge progress made since 1.1. Issues are responded to within hours and prioritized by the owners. Long-awaited features are always around the corner. More and more contributors are joining the party and helping to improve the project by committing code, opening and discussing issues, documenting things, etc. This is probably one of the OSS projects I’ve most enjoyed using.
