Our Journey to EKS

<TLDR> Check out eks_cli — a one-stop shop for bootstrapping and managing your EKS cluster </TLDR>

Preface

We’ve been running Kubernetes on AWS since the very early kube-up.sh days. Configuration options were minimal and were passed to kube-up.sh through a confusing mix of environment variables. Concepts like high availability, Multi-AZ awareness and cluster management barely existed back then.

When kops came to life, things got much better. Working with a command-line utility made cluster creation a lot easier. Environment variables were replaced by well-documented flags. Cluster state was saved, and changes could easily be made to existing clusters thanks to the dry-run mechanism, which let you review upcoming infrastructure changes before actually applying them. kops became the de facto standard for managing Kubernetes clusters on AWS.

A few months ago AWS released native support for Kubernetes clusters — EKS. As a company that relies heavily on Kubernetes, evaluating EKS was almost an inevitable step. That evaluation process led us to create eks_cli — a one-stop shop for bootstrapping and managing your EKS cluster.

EKS In Action

Creating an EKS cluster is not a pleasant experience, to say the least. You have to go through several manual steps, record each step’s outputs (IAM roles, VPC IDs, etc.) and feed them into the next steps. You also have to keep all these outputs around for future changes you’d like to make to the cluster. Cluster creation time can also be frustrating when creating ad-hoc clusters: it takes no less than 12 minutes from the cluster creation request until the Kubernetes control plane actually responds to requests.
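For illustration, the manual flow looks roughly like this — a sketch using the AWS CLI, not meant to be run verbatim; the role ARN, subnet and security group IDs are placeholders for the outputs you have to create and collect yourself in the preceding steps:

```sh
# 1. Create the cluster, feeding in the IAM role and VPC outputs
#    collected from the previous manual steps (placeholder values).
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc

# 2. Wait for the control plane to become ACTIVE -- in our experience
#    this alone takes 12+ minutes.
aws eks wait cluster-active --name my-cluster
aws eks describe-cluster --name my-cluster --query cluster.status --output text
```

And that only gets you a control plane — nodes, auth and networking are all still ahead of you.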

Once the Kubernetes control plane is up, you need to start adding worker nodes (or node groups) to the cluster to run your workloads. This process is tedious as well — you have to manually create a CloudFormation stack, feed in all the previously mentioned outputs, wait for the stack creation to finish and edit the aws-auth ConfigMap on the cluster with the stack’s Role ARN so these nodes can register themselves with the cluster. This means that adding several node groups requires you to keep a record of all the cluster’s node groups so you can edit the ConfigMap correctly.
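To make that concrete, here is roughly what the aws-auth ConfigMap looks like after registering one node group (the account ID and role name below are placeholders; every additional node group means appending another mapRoles entry by hand):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # One entry per node group stack -- the Role ARN comes from the
    # CloudFormation stack outputs (placeholder account/role below).
    - rolearn: arn:aws:iam::111122223333:role/my-nodegroup-NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Lose track of one of these entries and the corresponding nodes silently stop registering.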

So, you have several node groups up and they have all successfully registered with the cluster (you filled in their Role ARNs one by one) — everything should be working properly now, right? Not exactly. It turns out node groups cannot communicate with each other by default. I discovered this when our Jenkins instance could not resolve a DNS query because the kube-dns pod had been scheduled on a different node group. We had to manually create a Security Group with the proper ingress/egress rules and attach it to all our node groups’ instances for them to be able to communicate with each other.
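The workaround can be sketched with the AWS CLI like this (again, an illustrative sketch with placeholder IDs; the key point is a security group that allows all traffic from itself, attached to the instances of every node group):

```sh
# Create a shared security group in the cluster's VPC (placeholder IDs).
aws ec2 create-security-group \
  --group-name eks-inter-nodegroup \
  --description "Allow EKS node groups to talk to each other" \
  --vpc-id vpc-aaaa

# Allow all traffic between members of the group itself.
aws ec2 authorize-security-group-ingress \
  --group-id sg-bbbb \
  --protocol -1 \
  --source-group sg-bbbb
```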

Sharing cluster access with co-workers is also a manual process: you can either create an IAM Role or use an existing AWS IAM user. Either way, you have to manually edit the same aws-auth ConfigMap on the cluster. Just be careful not to mess up the node groups’ Role ARNs, because then the nodes won’t be able to register themselves with the cluster anymore.
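Granting a co-worker access means adding a mapUsers entry to that same ConfigMap, something like the sketch below (the ARN and username are placeholders), while being careful to leave the existing mapRoles entries untouched:

```yaml
data:
  mapUsers: |
    # Placeholder IAM user; system:masters grants full cluster admin.
    - userarn: arn:aws:iam::111122223333:user/alice
      username: alice
      groups:
        - system:masters
```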

So… the cluster is up, the node groups happily communicate and even your co-workers have access. What you’ll find next is that no sane defaults are set: there is no default storage class, and dns-autoscaler is not installed, so you’re left with a single DNS pod running. Both are a must on any production-grade Kubernetes cluster, so be sure to add them to your cluster bootstrap list.
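For example, a minimal default StorageClass for EBS-backed volumes looks like the fragment below; DNS autoscaling is typically handled by deploying the cluster-proportional-autoscaler alongside kube-dns:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    # Marks this class as the cluster default -- EKS does not ship one.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```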

Other things we found inconsistent with previous Kubernetes installations we worked with:

  • Kubernetes API proxy doesn’t work in basic auth mode. We used it to expose different services under the Kubernetes API server domain. To solve this we had to change all proxied services to LoadBalancer type and assign specific DNS records to point at them.
  • Nodes have no ExternalIP in their list of IP addresses. I’m pretty sure this is not AWS’s fault (see the kubelet issue here — https://github.com/kubernetes/kubernetes/issues/63158), but it’s still worth mentioning if you had any dependency on public IPs.
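As an illustration of the first workaround, each previously proxied service got its own LoadBalancer-type Service along these lines (the name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dashboard  # hypothetical service name
spec:
  type: LoadBalancer  # provisions an AWS ELB instead of relying on the API proxy
  selector:
    app: my-dashboard
  ports:
    - port: 443
      targetPort: 8443
```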

The last thing I want to mention is the lack of a roadmap and visible progress on EKS development. Kubernetes is a fast-paced project. Decisions taken today might be affected by EKS development, and it doesn’t seem AWS treats it that way. For some companies (nanit, for example), cluster upgrades require a lot of attention and work. The absence of an in-place cluster upgrade feature might make us look for other alternatives, and the AWS team doesn’t state anywhere whether in-place upgrades will be available or not.
Another example is postponing the transition to EKS until version 1.11 is supported. We tried getting some information regarding 1.11 support (https://forums.aws.amazon.com/thread.jspa?threadID=285220) but no real answer was given.
In general, getting info regarding upcoming features and time frames is nearly impossible.

Final Words

Creating a production-grade EKS cluster was a long journey. I remember the word that echoed in the back of my head during most of this process — “Really?!”.

To sum up, I think EKS in its current state is a half-baked service. It involves far more manual intervention than I’d expect from an AWS service. Luckily, projects like eksctl and eks_cli have been created to mitigate EKS’s lack of automation.
I am more than confident that the AWS team will turn EKS into a mature and complete service in the future, but until then we’re left with external projects to automate the process for us.
