DevOps Crash Course - Section 2: Servers


Section 2 - Server Wrangling

For those of you just joining us, you can see part 1 here.

If you went through part 1, you're a newcomer to DevOps who was dropped into the role with maybe not a lot of prep. By now we have a good idea of what is running in our infrastructure, how it got there, and how secure our AWS setup is from an account perspective.

Next we're going to build on top of that knowledge and start the process of automating more of our existing tasks, allowing AWS (or really any cloud provider) to start doing more of the work. This provides a more reliable infrastructure and means we can focus more on improving quality of life.

What matters here?

Before we get into the various options for how to run your actual code in production, let's take a step back and talk about what matters. Whatever choice you and your organization end up making, these are the important questions we are trying to answer.

  • Can we, demonstrably and without human intervention, stand up a new server and deploy code to it?
  • Do we have a way for a developer to precisely replicate the production environment their code runs in, somewhere that isn't hosting or serving customer traffic?
  • If one of our servers becomes unhealthy, do we have a way to stop sending customer traffic to that box and, ideally, to replace that box without human intervention?

Can you use "server" in a sentence?

Alright, you caught me. It's becoming increasingly difficult to define what we mean when we say "server". To me, a server is still the piece of physical hardware running in a data center, and the software for a particular host is a virtual machine. You can think of EC2 instances as virtual machines, different from Docker containers in ways we'll discuss later. For our purposes, EC2 instances are usually what we mean when we say servers. These instances are defined through the web UI, CLI or Terraform and launched in every possible configuration and memory size.

Really a server is just something that allows customers to interact with our application code. An API Gateway connected to Lambdas still meets my internal definition of a server, except in that case we are completely hands off.

What are we dealing with here?

In conversations with DevOps engineers who have had the role thrust on them, or who have been dropped into a position where maintenance hasn't happened in a while, a common theme has emerged. They are often placed in charge of a stack where a typical user flow looks something like this:

  1. A DNS request is made to the domain and it points (often) towards a classic load balancer. This load balancer handles SSL termination and then forwards traffic on to servers running inside a VPC on port 80.
  2. These servers are often running whatever Linux distribution was installed on them when they were made. Sometimes they are in autoscaling groups, but often they are not. These servers normally have Nginx installed along with something called uWSGI. Requests come in, Nginx hands them to the uWSGI workers, and those workers interact with your application code.
  3. The application will make calls to the database server, often running MySQL because that is what the original developer knew to use. Sometimes these are running on a managed database service and sometimes they are not.
  4. Often with these sorts of stacks, the deployment is something like "zip up a directory in the CICD stack, copy it to the servers, then unzip it and move it to the right location".
  5. There is often some sort of script to remove the box from the load balancer at deploy time and then re-add it afterwards.

Often there are additional AWS services in use, things like S3, CloudFront, etc., but we're going to focus on the servers right now.

This stack seems to work fine. Why should I bother changing it?

Configuration drift: the inevitable result of launching different boxes at different times and hand-running commands on them, which makes them impossible to test or reliably replicate.

Large organizations put in a lot of work to ensure every piece of their infrastructure is uniformly configured. There are tons of tools to do this, from Ansible Tower, Puppet and Chef, which check each server on a regular cycle and ensure the correct software is installed everywhere, to building whole new server images (AMIs) for each variation with tools like Packer and deploying them to auto-scaling groups. All of this work is designed to eliminate differences between Box A and Box B. We want all of our customers and services running on the same platform we have running in our testing environment. That way errors are caught in the earlier testing environments and our app in production isn't impacted.

The problem we want to avoid is the nightmare scenario for DevOps people, the one where you can't roll forward or backwards. Something in your application has broken on some or all of your servers but you aren't sure why. The logs aren't useful and you didn't catch it before production because you don't test with the same stack you run in prod. Now your app is dying and everyone is trying to figure out why, eventually discovering some hidden configuration option or file that was set a long time ago and now causes problems.

These sorts of issues plagued traditional servers for a long time, resulting in bespoke hand-crafted servers that could never be replaced without tremendous amounts of work. You either destroyed and remade your servers on a regular basis to ensure you still could, or you accepted the drift and tried to ensure you had some baseline configuration that would launch your application. Both of these scenarios suck, and for a small team it's just not reasonable to expect you to run that kind of maintenance operation. It's too complicated.

What if there was a tool that let you test exactly like you run your code in production? It would be easy to use, work on your local machine as well as on your servers, and would even let you quickly audit what's installed to see if there are known security issues.

This all just sounds like Docker

It is Docker! You caught me, it's just containers all the way down. You've probably used Docker containers a few times in your life, and running your applications inside of containers is now the vastly preferred model for how to run code. It simplifies testing, deployments and dependency management, allowing you to move all of it inside of git repos.

What is a container?

Let's start with what a container isn't. We talked about virtual machines before. Docker is not a virtual machine; it's a different thing. The easiest way to understand the difference is to think of a virtual machine like a physical server. It isn't one, but for the most part the distinction is meaningless for your normal day-to-day work. You have a full file system with a kernel, users, etc.

Containers are just what your application needs to run. They are a way of moving collections of code around along with their dependencies. Folks new to DevOps will often think of containers in the wrong paradigm, asking questions like "how do you back up a container" or using them as permanent stores of state. Everything in a container is designed to be temporary. A container is just a different way of running a process on the host machine; that's why if you run ps faux | grep name_of_your_service on the host machine you still see it.
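A quick way to see this for yourself (the image and container name here are just examples):

# Start a container running a long-lived process
docker run -d --name my_service nginx:1.25
# On the host, the container's nginx processes appear in the regular process list
ps faux | grep [n]ginx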

Are containers my only option?

Absolutely not. I have worked with organizations that manage their code in different ways outside of containers. Some of the tools I've worked with have been NPM packages for Node applications and RPMs for various applications tied together with Linux dependencies. Here are the key questions when evaluating something other than Docker containers:

  • Can you reliably stand up a new server using a bash script plus this package? Typically bash scripts should be under 300 lines, so if we can make a new server with a script like that and some "other package", I would consider us to be in OK shape.
  • How do I roll out normal security upgrades? All Linux distros have constant security upgrades; how do I apply them on a regular basis while still confirming that the boxes still work?
  • How much does an AWS EC2 maintenance notice scare me? This is when AWS or another cloud provider emails you and says "we need to stop one of your instances randomly due to hardware failures". Is that a crisis for my business or a mostly boring event?
  • If you aren't going to use containers but something else, just ensure there is more than one source for those packages, so you aren't relying on a single external registry being available.
  • For Node I have had a lot of success with Verdaccio as an NPM cache: https://verdaccio.org/
  • However in general I recommend paying for Packagecloud and pushing whatever package there: https://packagecloud.io/

How do I get my application into a container?

I find the best way to do this is to sit down with the person who has worked on the application the longest. I will spin up a brand new, fresh VM and say "can you walk me through what is required to get this app running?". Remember, this is something they have likely done on their own machines a few hundred times, so they can pretty quickly recite the series of commands needed to "run the app". We need to capture those commands, because they are how we write the Dockerfile, the template for how we build our application's container.

Once you have the list of commands, you can string them together in a Dockerfile.
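As a hypothetical sketch, if the developer's answer was "install Python, install the requirements, run the app", each of those commands maps roughly onto a Dockerfile instruction (the package names and start command here are made up):

# Hypothetical translation of a developer's "how I run it" list into a Dockerfile
FROM debian:bullseye
COPY . /app
WORKDIR /app
# "apt-get install python3 python3-pip" becomes a RUN instruction
RUN apt-get update && apt-get -y install python3 python3-pip && rm -rf /var/lib/apt/lists/*
# "pip3 install -r requirements.txt" becomes another
RUN pip3 install -r requirements.txt
# "python3 app.py" becomes the CMD
CMD ["python3", "app.py"]

We'll walk through what each of these instructions does in the anatomy section below.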

How Do Containers Work?

It's a really fascinating story. Let's teleport back in time. It's the year 2000; we have survived Y2K, the most dangerous threat to human existence at the time. FreeBSD rolls out a new technology called "jails". FreeBSD jails were introduced in FreeBSD 4.x and are still being developed today.

Jails are layered on top of chroot, which allows you to change the root directory of processes. For those of you who use Python, think of chroot like virtualenv. It's a safe distinct location that allows you to simulate having a new "root" directory. These processes cannot access files or resources outside of that environment.

Jails took that concept and expanded it, virtualizing access to the file system, users, networking and every other part of the system. Jails introduce four things that you will quickly recognize as you start to work with Docker:

  • A new directory structure of dependencies that a process cannot escape.
  • A hostname for the specific jail
  • A new IP address which is often just an alias for an existing interface
  • A command that you want to run inside of the jail.
www {
    host.hostname = www.example.org;           # Hostname
    ip4.addr = 192.168.0.10;                   # IP address of the jail
    path = "/usr/jail/www";                    # Path to the jail
    devfs_ruleset = "www_ruleset";             # devfs ruleset
    mount.devfs;                               # Mount devfs inside the jail
    exec.start = "/bin/sh /etc/rc";            # Start command
    exec.stop = "/bin/sh /etc/rc.shutdown";    # Stop command
}
What a Jail looks like.

From FreeBSD the technology made its way to Linux via the VServer project. As time went on more people built on the technology, taking advantage of cgroups. Control groups, shortened to cgroups, is a technology added to Linux in 2008 by engineers at Google. It is a way of defining a collection of processes that are bound by the same restrictions. Progress has continued since the initial launch of cgroups, and the interface is now at v2.

There are two parts to a cgroup: a core and controllers. The core is responsible for organizing processes. The controllers are responsible for distributing a type of resource (CPU, memory, I/O and so on) along the hierarchy. With this continued work we have gotten incredible flexibility in how we organize, isolate and allocate resources to processes.
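As a rough illustration of the interface (this assumes a cgroup v2 host with the unified hierarchy mounted at /sys/fs/cgroup and the memory controller enabled; the "demo" group name is made up):

# Create a new cgroup; the core tracks which processes belong to it
sudo mkdir /sys/fs/cgroup/demo
# Use the memory controller to cap everything in the group at 256 MB
echo 256M | sudo tee /sys/fs/cgroup/demo/memory.max
# Move the current shell into the group; it and its children now share that limit
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs

Container runtimes are doing essentially this, plus namespace isolation, on your behalf for every container they start.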

Finally, in 2013, we got Docker, adding a simple CLI, the concept of a Docker server, a way to host and share images, and more. Containers are now too big for one company and are instead overseen by the Open Container Initiative. Instead of exclusively Docker clients pushing images to Dockerhub and running on a Docker server, we have a vibrant and strong open-source community around containers.

I could easily fill a dozen pages with interesting facts about containers, but the important thing is that containers are a mature technology built on a proven pattern of isolating processes from the host. This means we have complete flexibility in how we create containers and can easily reuse a simple "base" host regardless of what is running on it.


Anatomy of a Dockerfile

FROM debian:latest
# Copy application files
COPY . /app
# Install required system packages
RUN apt-get update
RUN apt-get -y install imagemagick curl software-properties-common gnupg vim ssh
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get -y install nodejs
# Install NPM dependencies
RUN npm install --prefix /app
EXPOSE 80
CMD ["npm", "start", "--prefix", "app"]
This is an example of a not great Dockerfile. Source

When writing Dockerfiles, keep a tab open to the official Docker docs. You will need to refer to them all the time at first, because very little of it will be intuitive right away. Typically Dockerfiles are stored at the top level of an existing repository, and their file operations, such as the COPY shown above, assume that layout. You don't have to do that, but it is a common pattern to see the Dockerfile at the root level of a repo. Whatever you do, keep it consistent.

Formatting

Dockerfile instructions are not case-sensitive, but they are usually written in uppercase so that they can be differentiated from arguments more easily. Comments start with a hash symbol (#) at the beginning of the line.

FROM

First is FROM, which just says "what is the base container we are starting from?". As you gain Docker experience, FROM containers become a great way of speeding up the build process. If all of your containers have the same package requirements, you can make a "base container" and then use it as the FROM for every application. But when building your first containers I recommend just sticking with Debian.
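A sketch of that pattern, with made-up image names: build and push a base image once, then have each application start FROM it.

# base/Dockerfile - the shared "base container" with the packages every service needs
FROM debian:bullseye
RUN apt-get update && apt-get -y install curl ca-certificates && rm -rf /var/lib/apt/lists/*

# app/Dockerfile - each application then builds on the base image you pushed
FROM mycompany/base:2023-06-01
COPY . /app
CMD ["/app/start.sh"]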

Don't Use latest

Docker images rely on tags, which you can see in the example above as debian:latest. This is Docker for "give me the most recently pushed image". You don't want to do that for production systems. Upgrading the base container should be an affirmative action, not something you do accidentally.

The correct way to reference a FROM image in a Dockerfile is through the use of a hash. So we want something like this:

FROM debian@sha256:c6e865b5373b09942bc49e4b02a7b361fcfa405479ece627f5d4306554120673

I got that digest from the Debian Dockerhub page here. Pinning it protects us in a few different ways.

  • We won't accidentally upgrade our containers without meaning to
  • If the team in charge of pushing Debian containers to Dockerhub makes a mistake, we aren't suddenly going to be in trouble
  • It eliminates another source of drift
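You can also look the digest up locally rather than copying it from the Dockerhub page (the tag here is just an example):

# Pull the tag once, then ask Docker which digest it resolved to
docker pull debian:bullseye-slim
docker images --digests debian
# Or print just the digest for that specific tag
docker inspect --format '{{index .RepoDigests 0}}' debian:bullseye-slim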

But I see a lot of people using Alpine

That's fine, use Alpine if you want. I have more confidence in Debian compared to Alpine and always base my stuff off Debian. I think it's a more mature community and more likely to catch problems. But again, whatever you end up doing, make it consistent.

If you do want a smaller container, I recommend minideb. It still lets you get the benefits of Debian with a smaller footprint. It is a good middle ground.

COPY

COPY is very basic. The . just means "current working directory", which in this case is the top level of the git repository, assuming the Dockerfile lives there. COPY takes whatever you specify and copies it into the image at the path you give.

COPY vs ADD

A common question I get is "what is the difference between COPY and ADD?". Super basic: ADD can go out and fetch something from a URL, and it automatically extracts local compressed archives into the container. So if all you need to do is copy some files into the container from the repo, just use COPY. If you have to grab something from a URL or unpack a tarball, use ADD.
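A small sketch of the difference (the URL and file names are made up):

# COPY: files from the build context go into the image as-is
COPY requirements.txt /app/requirements.txt
# ADD: can fetch from a URL, and automatically extracts a local tar archive
ADD https://example.com/some-release.tar.gz /tmp/some-release.tar.gz
ADD vendored-libs.tar.gz /opt/vendored-libs/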

RUN

RUN is the meat and potatoes of the Dockerfile. These are the shell commands we run in order to put together all the requirements. The file we have above doesn't follow best practices: we want to compress the RUN commands down so that related steps are part of one layer.

RUN wget https://github.com/samtools/samtools/releases/download/1.2/samtools-1.2.tar.bz2 \
&& tar jxf samtools-1.2.tar.bz2 \
&& cd samtools-1.2 \
&& make \
&& make install
A good RUN example: all of these commands end up in one layer.
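Applied to the example Dockerfile above, the separate package installation steps would collapse into one RUN, with the apt cache cleaned up at the end so it doesn't bloat the layer:

RUN apt-get update \
&& apt-get -y install imagemagick curl software-properties-common gnupg vim ssh \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt-get -y install nodejs \
&& rm -rf /var/lib/apt/lists/*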

WORKDIR

WORKDIR lets you set the directory inside the container from which all the subsequent commands will run. It saves you from having to write out the absolute path every time.
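For example (the paths here are hypothetical):

WORKDIR /app
# These now run relative to /app rather than needing absolute paths everywhere
COPY requirements.txt .
RUN pip install -r requirements.txt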

CMD

The command we execute when we run the container. Usually for most web applications this is where we run the framework's start command. This is an example from a Django app I run:

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

If you need more detail, Docker has a decent tutorial: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

One more thing

This obviously depends on your application, but many applications will also need a reverse proxy. This allows Nginx to listen on port 80 inside the container and forward requests on to your application. Docker has a good tutorial on how to add Nginx to your container: https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/
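As a rough sketch, the Nginx side of that usually looks something like this; the upstream port is an assumption and should match whatever your application actually listens on (8000 in the Django example above):

server {
    listen 80;

    location / {
        # Hand every request to the application process inside the container
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}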

I cannot stress this enough: writing the Dockerfile that runs the actual application is not something a DevOps engineer should try to do on their own. You likely could do it by reverse engineering how your current servers work, but you need to pull in the other programmers in your organization.

Docker also has a good tutorial from beginning to end for Docker novices here: https://docs.docker.com/get-started/

Docker build, compose etc

Once you have a Dockerfile in your application repository, you are ready to move on to the next steps.

  1. Have your CICD system build the images. Type "name of your CICD + build docker images" into Google to see how.
  2. You'll need to make an IAM user for your CICD system in order to push the docker images from your CI workers to the ECR private registry. You can find the required permissions here.
  3. Get ready to push those images to a registry. For AWS users I strongly recommend AWS ECR.
  4. Here is how you make a private registry.
  5. Then you need to push your image to the registry. I want to make sure you see AWS ECR helper, a great tool that makes the act of pushing from your laptop much easier. https://github.com/awslabs/amazon-ecr-credential-helper. This also can help developers pull these containers down for local testing.
  6. Pay close attention to tags. You'll notice that the ECR registry is part of the tag, along with the : and then the version information, as shown in the sketch after this list. You can use different registries for different applications or use the same registry for all of them. Remember that secrets and customer data shouldn't be in your container regardless.
  7. Go get a beer, you earned it.
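Here is a sketch of what the build-and-push step looks like; the account ID, region and repository name are all made up:

# Authenticate Docker to your private ECR registry
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Build, tag with the registry plus a version, then push
docker build -t myapp .
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1.0.3
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1.0.3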

Some hard decisions

Up to this point, we've been following a pretty conventional workflow: get the application into containers, push the containers up to a registry, automate the process of making new containers. Now hopefully our developers are able to test their applications locally, and everyone is very impressed with you and all the work you have gotten done.

The reason we did all this work is because now that our applications are in Docker containers, we have a wide range of options for ways to quickly and easily run this application. I can't tell you what the right option is for your organization without being there, but I can lay out the options so you can walk into the conversation armed with the relevant data.

Deploying Docker containers directly to EC2 Instances

This is a workflow you'll see quite a bit among organizations just building confidence in Docker. It works something like this:

  • Your CI system builds the Docker container using a worker and the Dockerfile you defined before. It pushes it to your registry with the correct tag.
  • You make a basic AMI with a tool like Packer.
  • New Docker containers are pulled down to the EC2 instances running the AMIs we made with Packer.

Packer

Packer is just a tool that spins up an EC2 instance, installs the software you want installed and then saves it as an AMI. These AMIs can be deployed when new machines launch, ensuring you have identical software on each host. Since we're going to keep all the frequently updated software inside the Docker container, this AMI becomes something we touch much less often.

First, go through the Packer tutorial, it's very good.

Here is another more comprehensive tutorial.

Here are the steps we're going to follow

  1. Install Packer: https://www.packer.io/downloads.html
  2. Pick a base AMI for Packer. This is what we're going to install all the other software on top of.

Here is a list of Debian AMI IDs based on regions: https://wiki.debian.org/Cloud/AmazonEC2Image/Bullseye which we will use for our base image. Our Packer JSON file is going to look something like this:

{
    "variables": {
        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
    },
    "builders": [
        {
            "access_key": "{{user `aws_access_key`}}",
            "ami_name": "docker01",
            "instance_type": "t3.micro",
            "region": "eu-west-1",
            "source_ami": "ami-05b99bc50bd882a41",
            "ssh_username": "admin",
            "type": "amazon-ebs"
        }
    ]
}

The next step is to add a provisioner step, as outlined in the Packer documentation you can find here. Basically you write a bash script that installs the software required to run Docker. Docker actually provides a script that should install what you need, which you can find here.
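Slotted into the same JSON template next to "builders", a provisioner block using Docker's convenience install script might look something like this (treat it as a sketch, not a hardened install):

    "provisioners": [
        {
            "type": "shell",
            "inline": [
                "curl -fsSL https://get.docker.com -o get-docker.sh",
                "sudo sh get-docker.sh",
                "sudo usermod -aG docker admin"
            ]
        }
    ]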

The end process you will be running looks like this:

  • CI process builds a Docker image and pushes it to ECR.
  • Your deployment process is either to configure your servers to pull the latest image from ECR with a cron job, so your servers are eventually consistent, or (more likely) to write a deployment job which connects to each server, runs docker pull and then restarts the containers as needed (sketched below).
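A rough sketch of that deployment job; the hostnames, image tag and ports are all hypothetical, and a real version would also pull each box out of the load balancer before restarting it:

#!/bin/bash
# Naive deploy: pull the new image on each host, then replace the running container
IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1.0.3"

for host in app01 app02 app03; do
  ssh "$host" "
    docker pull $IMAGE
    docker rm -f myapp || true    # remove the old container if one exists
    docker run -d --name myapp -p 80:8000 $IMAGE
  "
done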

Why this is a bad long-term strategy

A lot of organizations start here, but it's important not to end here. This is not a good, sustainable long-term workflow.

  • This is all super hand-made, which doesn't fit with our end goal
  • The entire process is going to be held together with hand-made scripts. You need something to remove the instance you are deploying to from the load balancer, pull the latest image, restart it, and so on.
  • You'll need to configure a health check on the Docker container to know whether it has started correctly (see the sketch below).
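Docker supports that last point directly with a HEALTHCHECK instruction in the Dockerfile; the port and endpoint here are assumptions about your application, and curl has to exist inside the image:

# Mark the container unhealthy if the app stops answering its health endpoint
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8000/healthz || exit 1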

Correct ways to run containers in AWS

If you are trying to very quickly make a hands-off infrastructure with Docker, choose Elastic Beanstalk

Elastic Beanstalk is the AWS attempt to provide infrastructure in a box. I don't approve of everything EB does, but it is one of the fastest ways to stand up a robust and easy to manage infrastructure. You can check out how to do that with the AWS docs here.

With EB, AWS stands up everything you need, from the load balancer to the servers and even the database if you want. It is pretty easy to get going, but Elastic Beanstalk is not a magic solution.

Elastic Beanstalk is a good solution if:

  1. You are attempting to run a very simple application. You don't have anything too complicated you are trying to do. In terms of complexity, we are talking about something like a Wordpress site.
  2. You aren't going to need anything in the .ebextensions world. You can see what that involves here.
  3. There is a good logging and metrics story that developers are already using.
  4. You want rolling deployments, load balancing, auto scaling, health checks, etc out of the box

Don't use Elastic Beanstalk if:

  1. You need to do a lot of complicated networking stuff on the load balancer for your application.
  2. You have a complicated web application. Elastic Beanstalk has bad documentation and it's hard to figure out why stuff is or isn't working.
  3. Service-to-service communication is something you are going to need now or in the future.

If you need something more robust, try ECS

AWS ECS is a service designed to quickly and easily run Docker containers. You can find the tutorial here: https://aws.amazon.com/getting-started/hands-on/deploy-docker-containers/

Use Elastic Container Service if:

  1. You are already heavily invested in AWS resources. The integration with ECS and other AWS resources is deep and works well.
  2. You want the option of not managing servers at all by using Fargate
  3. You have looked at the cost of running a stack on Fargate and are OK with it.

Don't use Elastic Container Service if:

  1. You may need to deploy this application to a different cloud provider

What about Kubernetes?

I love Kubernetes, but it's too complicated to get into in this article. Kubernetes is a full-stack solution that I adore, but it is probably too complicated for one person to run. I am working on a Kubernetes writeup, but if you are a small team I wouldn't seriously consider it. ECS is just easier to get running and keep running.

Coming up!

  • Logging, metrics and traces
  • Paging and alerts. What is a good page vs a bad page
  • Databases. How do we move them, what do we do with them
  • Status pages. Where do we tell customers about problems or upcoming maintenance.
  • CI/CD systems. Do we stick with Jenkins or is there something better?
  • Serverless. How does it work, should we be using it?
  • IAM. How do I give users and applications access to AWS without running the risk of bringing it all down.

Questions / Concerns?

Let me know on twitter @duggan_mathew