AWS Elastic Kubernetes Service (EKS) Review

Spoilers

A bit of history

Over the last 5 years, containers have become the de facto standard for running code. You may agree with this paradigm or think it's a giant waste of resources (I waver between these two opinions on a weekly basis), but this is where we are. AWS, perhaps the most popular cloud hosting platform, bucked the trend and launched its own container orchestration system called ECS. This was in direct competition with Kubernetes, the open-source project being embraced by organizations of every size.

ECS isn't a bad service, but unlike running EC2 instances inside of a VPC, switching to ECS truly removes any illusion that you will be able to migrate off of AWS. I think the service got a bad reputation for its deep tie-in with AWS, along with the high Fargate pricing for the actual CPU units. As the years went on, it became clear to the industry at large that ECS was not going to become the standard way we run containerized applications. This, in conjunction with the death of technologies like Docker Swarm, meant a clear winner started to emerge in Kubernetes.

AWS EKS was introduced in 2017 and became generally available in 2018. When it launched, the response from the community was... tepid at best. It was missing a lot of the features found in competing products and seemed to be far behind what ECS offered at the time. The impression among folks I was talking to then was that AWS had been forced to launch a Kubernetes service to stay competitive, had zero interest in diving into the k8s world, and was shipping this to meet a checkbox requirement. The rumor being passed around various infrastructure teams was that AWS itself didn't really want the service to explode in popularity.

This is how EKS stacked up to its competition at launch

Not a super strong offering at launch

I became involved with EKS at a job that had bet hard on Docker Swarm; it didn't work out and they were seeking a replacement. Over the years I've used it quite a bit, both as someone whose background is primarily in AWS and as someone who generally enjoys working with Kubernetes. So I've used the service extensively for production loads over the years and have seen it change and improve in that time.

EKS Today

So it's been years and you might think to yourself, "certainly AWS has fixed all these problems and the service is stronger than ever." You would be... mostly incorrect. EKS remains both a popular service and a surprisingly difficult one to set up correctly. Unlike services like RDS or Lambda, where the quality improves quite a bit on a yearly basis, EKS has mostly remained a product that feels internally disliked.

This is how I like to imagine how AWS discusses EKS

New customer experience

There exists out-of-the-box tooling for making a fresh EKS cluster that works well, but the tool isn't made or maintained by Amazon, which seems strange. eksctl, which is maintained by Weaveworks (here), mostly works as promised. You will get a fully functional EKS cluster through a CLI interface that you can deploy to. However, you are still pretty far from an actual functional k8s setup, which is a little bizarre.
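
As a rough sketch of what that looks like (the cluster name, region, and node count below are placeholder values, not anything from this post):

# create a small test cluster with eksctl; names and sizes are placeholders
eksctl create cluster \
  --name test-cluster \
  --region eu-west-1 \
  --nodes 3

That gets you a working control plane and a node group, but none of the add-ons discussed below.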

Typically the sales pitch for AWS services is that they are less work than doing it yourself. The pitch for EKS is that it's roughly the same amount of work to set up, but less maintenance in the long term. Once running, it keeps running, and AWS managing the control plane means it's difficult to get into a situation in which you cannot fix the cluster, which is very nice. Typically, k8s control plane issues are the most serious; everything else you can mostly resolve with tweaks.

However if you are considering going with EKS, understand you are going to need to spend a lot of time reading before you touch anything. You need to make hard-to-undo architectural decisions early in the setup process and probably want to experiment with test clusters before going with a full-on production option. This is not like RDS, where you can mostly go with the defaults, hit "new database" and continue on with your life.

Where do I go to get the actual setup information?

The best resource I've found is EKS Workshop, which walks you through tutorials of all the actual steps you are going to need to follow. Here is the list of things you would assume come out of the box, but strangely do not:

  • You are going to need to set up autoscaling. There are two options: the classic Cluster Autoscaler and the new AWS-made hotness, Karpenter. You want to use Karpenter; it's better and doesn't have the Cluster Autoscaler's blind spots around AZs and EBS storage (basically, autoscaling doesn't work if you have state on your nodes unless you manually configure it to, which is a serious problem). AWS has a blog talking about it.
  • You will probably want to install some sort of DNS service, if for nothing else so services not running inside of EKS can find your dynamic endpoints. You are gonna wanna start here.
  • You almost certainly want Prometheus and Grafana to see if stuff is working as outlined here.
  • You will want to do a lot of reading about IAM and RBAC to understand specifically how those work together; a sketch of the aws-auth mapping follows this list. Here's a link. Also just read this entire thing from end to end: link.
  • Networking is obviously a choice you will need to make. AWS has a network plugin that I recommend, but you'll still need to install it: link.
  • Also you will need a storage driver. EBS is supported by Kubernetes in the default configuration, but you'll want the official EBS CSI driver for all the latest and greatest features. You can get that here.
  • Maybe most importantly you are going to need some sort of ingress and load balancer controller. Enjoy setting that up with the instructions here.
  • OIDC auth and general cluster access are pretty much up to you. You get IAM auth out of the box, which is good enough for most use cases, but you need to understand RBAC and how it interacts with IAM. It's not as complicated as that sentence implies, but it's also not super simple.
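
To give a flavor of the IAM/RBAC interaction mentioned above, here is a minimal sketch of the aws-auth ConfigMap that maps IAM roles onto Kubernetes RBAC groups. The account ID, role names, and group name are placeholders, not values from this post:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # worker nodes need a mapping like this to join the cluster
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # example of mapping a human/SSO role onto a group you then bind with RBAC
    - rolearn: arn:aws:iam::111122223333:role/team-devops
      username: devops-user
      groups:
        - devops-group

Anyone whose IAM identity can assume one of those roles gets the permissions of whatever RBAC bindings reference the corresponding group, which is exactly the IAM-to-RBAC glue the links above explain in detail.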

That's a lot of setup

I understand that part of the appeal of k8s is that you get to make all sorts of decisions on your own. There is some power in that, but RDS became famous not because you could do anything you wanted, but because AWS stopped you from doing dumb stuff that would make your life hard. In the same way, I'm surprised EKS is not more "out of the box". My assumption would have been that AWS launched clusters with all their software installed by default and let you switch off those defaults in a config file or CLI option.

There is an OK Terraform module for this, but even it is not plug and play. You can find that here. This is what I use for most things, and the rough edges aren't the fault of the module maintainers: EKS touches a lot of systems, and because it is so customizable, it's hard to turn into a simple module. The maintainers have already applied a lot of creativity to getting around limitations in what they can do through the Go APIs.

Overall Impression - worth it?

I would still say yes after using it for a few years. The AWS SSO integration is great, allowing folks to access the EKS cluster through their SSO access even from the CLI. aws sso login --profile name_of_profile and aws eks --profile name_of_profile --region eu-west-1 update-kubeconfig --name name_of_cluster is all that is required for me to interact with the cluster through normal kubectl commands, which is great. I also appreciate the regular node AMI upgrades, which remove the responsibility from me and the team for keeping on top of those.

Storage

There are some issues here. The EBS experience is serviceable but slow; resizing and other operations are heavily rate-limited and can be error-prone. So if your service relies a lot on resizing or changing volumes, EBS is not the right choice inside of EKS; you are probably going to want the EFS approach. I also feel like AWS could do more to warn you about the dangers of relying on EBS: volumes are tied to a single AZ, which requires some planning on your part to keep all the pieces working, otherwise the pods will simply be undeployable. By default the old autoscaling doesn't resolve this problem, requiring you to set node affinities.
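
One mitigation worth knowing about, sketched below assuming the EBS CSI driver is installed (the class name and volume type are placeholders): a StorageClass with volumeBindingMode: WaitForFirstConsumer delays volume creation until the pod is scheduled, so the volume is created in the same AZ as the node that will mount it.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
# wait until a pod is scheduled so the volume lands in the right AZ
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: gp3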

Storage and Kubernetes have a bad history, like a terrible marriage. I understand the container purists who say "containers should never have state, it's against the holy container rules," but I live in the real world where things must sometimes save to disk. I understand we don't like that, but like so many things in life, we must accept the things we cannot change. That being said, it is still a difficult design pattern in EKS that requires planning. You might want to consider some node affinity for applications that do require persistent storage.
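
If you do pin stateful workloads to a zone, a node affinity along these lines (the zone value is a placeholder) keeps a pod in the AZ where its volume already lives:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # only schedule onto nodes in the zone that holds the EBS volume
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - eu-west-1a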

Networking

The AWS CNI (link) is quite good and I love that it assigns pod IP addresses from the subnets in my VPCs. Not only does that simplify some of the security policy logic and the overall monitoring and deployment story, but mentally it is easier not to add another layer of abstraction to what is already an abstraction. The downside is that these are actual network interfaces, so you need to monitor free IP addresses and ENIs.

AWS actually provides a nice tool to keep an eye on this stuff, which you can find here. But be aware that if you intend to run a large cluster, or your cluster might scale up to be quite large, you are going to get throttled on API calls. Remember the maximum number of ENIs you can attach to a node is determined by instance type, so it's not an endlessly flexible approach. AWS maintains a list of ENI and IP limits per instance type here.
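
The rough math, as I understand it: max pods per node = ENIs × (IPs per ENI − 1) + 2. For example, an m5.large supports 3 ENIs with 10 IPv4 addresses each, so 3 × (10 − 1) + 2 = 29 pods, which is not a huge number for a reasonably sized instance.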

The attachment process also isn't instant, so you might be waiting a while for all this to come together. The point being: you will need to understand the networking pattern of your pods inside the cluster in a way you don't have to with other CNIs.

Load Balancing and Ingress

This part of the experience is so bad it is kind of incredible. If you install the tool AWS recommends, you are going to get a new ALB for every single Ingress you define unless you explicitly group them together. Ditto with every internal LoadBalancer service: you'll end up with an NLB for each one. You can obviously group them, but this process is incredibly frustrating in practice.

What do you mean?

You have a new application and you add this to your YAML:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: new-app
  name: new-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: new-app
                port:
                  number: 80

Every time you do this you are going to end up with another ALB, which isn't free to use. You can group them together as shown here, but my strong preference would have been to have AWS do this. I get they are not a charity, but it seems kind of crazy that this becomes the default unless you and the rest of your organization know how to group stuff together (and what kinds of apps can even be grouped together).
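
For reference, the grouping mechanism (assuming the AWS Load Balancer Controller v2 or later) is just another annotation; every Ingress that uses the same group.name is merged onto one shared ALB. The group name here is a placeholder:

metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Ingresses sharing this value are merged onto a single ALB
    alb.ingress.kubernetes.io/group.name: shared-alb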

I understand it is in AWS's best interest to set it up like this, but in practice it means you'll see ALB and NLB costs exceed EC2 costs, because by default you'll end up with so many of the damn things barely being used. My advice for someone setting up today would be to go with HAProxy as shown here. It's functionally a better experience and you'll save so much money you can buy the HAProxy enterprise license from the savings.
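
If you take that advice, installing the HAProxy ingress controller is a couple of Helm commands; the repo URL and chart name below are from memory, so double-check them against the official docs:

# add the HAProxy Technologies chart repo and install the ingress controller
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install haproxy-ingress haproxytech/kubernetes-ingress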

Conclusion in 2022

If you are going to run Kubernetes, don't manage it yourself unless you can dedicate staff to it. EKS is a very low-friction day-to-day experience, but it frontloads the bad parts. Setup is a pain in the ass and requires a lot of understanding of how these pieces fit together, which is very different from more "hands off" AWS products.

I think things are looking up, though: AWS has realized it makes more sense to support k8s than to fight it. Karpenter as the new autoscaler is much better, I'm optimistic they'll do something about the load balancing situation, and the k8s add-on stack is relatively stable. If folks are interested in buying in today, I think you'll be in OK shape in 5 years. Clearly whatever internal forces at AWS wanted this service to fail have been pushed back by its success.

However, if I were a very small company new to AWS, I wouldn't touch this with a ten-foot pole. It's far too complicated for the benefits, and any concept of "portability" is bullshit. You'll be so tied into the IAM elements of RBAC that you won't be able to lift and shift anyway, so if that's your reason for buying in, don't do it. EKS is for companies already invested in k8s, or those looking for a deeply tweakable platform inside of AWS after they have outgrown services like Lightsail and Elastic Beanstalk. If that isn't you, don't waste your time.