
Can We Make Idiot-Proof Infrastructure pt1?

One complaint I hear all the time, online and in real life, is how complicated infrastructure is. You either commit to a vendor platform like ECS, Lightsail, Elastic Beanstalk, or Cloud Run, or you go all in with something like Kubernetes. The former are easy to run but lock you in and also sometimes get abandoned by the vendor (looking at you, Beanstalk). Kubernetes runs everywhere, but it is hard, complicated, and has a lot of moving parts.

The assumption seems to be that with containers there should be an easier way to do this. I thought it was an interesting thought experiment: could I, a random idiot, design a simpler infrastructure? Something you could adapt to any cloud provider without doing a ton of work, that is relatively future-proof, and that would scale to the point where something more complicated made sense? I have no idea, but I thought it could be fun to try.

Fundamentals of Basic Infrastructure

Here are the parameters we're attempting to work within:

  • It should require minimal maintenance. You are a small crew trying to get a product out the door and you don't want to waste a ton of time.
  • You cannot assume you will detect problems. You lack the security and monitoring infrastructure to truly "audit" the state of the world and need to assume that you won't be able to detect a breach. Anything you put out there has to start as secure as possible and pretty much fix itself.
  • Controlling costs is key. You don't have the budget for surprises, and massive spikes in CPU usage are likely a problem and not organic growth (or if it is organic growth, you'll likely want to be involved in deciding what to do about it).
  • The infrastructure should be relatively portable. We're going to try and keep everything movable without too many expensive parts.
  • Perfect uptime isn't the goal. Restarting containers isn't a hitless operation, and while there are ways to queue up requests and replay them, we're gonna try not to bite off that level of complexity in the first draft. We're gonna drop some requests on the floor, but I think we can minimize that number.

Basic Setup

You've got your good idea, you've written some code and you have a private repo in GitHub. Great, now you need to get the thing out onto the internet. Let's start with some good tips before we get anywhere near the internet itself.

  • Semantic Versioning is your friend. If you get into the habit now of structuring commits and cutting releases, you'll reap the benefits down the line. It seems silly right this second, when the entirety of the application code fits inside your head, but that won't be the case for long if you keep working on it. I really like Release-Please as a tool to cut releases automatically based on commits and let the version number be a meaningful piece of data to work off of.
  • Containers are mandatory. Just don't overthink this and commit early. Don't focus on container disk space usage; disk space is not our largest concern. We want an easy-to-work-with platform with a minimal attack surface. While Distroless images aren't actually without a Linux distro (I'm not entirely clear why that name was chosen), they're a great place to start. If you can get away with using them, that's what you want to do.
  • Be careful about what dependencies you rely on in the early phase. At so many jobs I've had, there are a few unmaintained packages that are mission-critical, impossible-to-remove, load-bearing weights around our necks. If you can do it with the standard library, great. When you find a dependency on the internet, look at what you need it to do and ask "can I just copy-paste the 40 lines of code I need from this?" versus adding a new dependency forever. Dependency minimization isn't very cool right now, but especially when starting out it pays off big.
  • Healthcheck. You need some route on your app that you can hit which provides a good probability that the application is up and functional. /health or whatever, but this is gonna be pretty key to how the rest of this works.
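
A minimal sketch of such a route, using only Python's standard library purely for illustration (your app and framework will have their own way to register routes; the port and response body here are assumptions). The body "alive" lines up with the expected_body we'll hand to the Cloudflare monitor later:

```python
# Minimal /health endpoint sketch -- a stand-in, not a prescription.
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_status():
    # Check whatever signals "up and functional" for your app here
    # (DB ping, queue depth, etc.). Hardcoded for the sketch.
    return 200, b"alive"

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            code, body = health_status()
            self.send_response(code)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the sketch quiet
        pass

# To serve for real: HTTPServer(("0.0.0.0", 8000), HealthHandler).serve_forever()
```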

Deployment and Orchestration

Alright, so you've made the app and you have some way of tracking major/minor versions. Everything works great on your laptop. How do we put it on the internet?

  • You want a way to take a container and deploy it out to a Linux host
  • You don't want to patch or maintain the host
  • You need to know if the deployment has gone wrong
  • Either the deployment should roll back automatically or fail safe waiting for intervention
  • The whole thing needs to be as safe as possible.

Is there a lightweight way to do this? Maybe!

Basic Design

Cloudflare -> Autoscaling Group -> 4 instances set up with cloud-init -> Docker Compose with Watchtower -> DBaaS

When we deploy, we'll hit the IP addresses of the instances on the Watchtower HTTP route with curl, telling it to connect to our private container registry and pull down new versions of our application. We shouldn't ever need to SSH into the boxes, and when a box dies or needs to be replaced, we can just delete it and run Terraform again to make a new one. SSL will be static long-lived certificates, and we should be able to distribute traffic across different cloud providers however we'd like.

Cloudflare as the Glue

I know, a lot of you are rolling your eyes. "This isn't portable at all!" Let me defend my work a bit. We need a WAF, we need SSL, we need DNS, we need a load balancer and we need metrics. I can do all of that with open-source projects, but it's not easy. As I was writing it out, it started to get (actually) quite difficult to do.

Cloudflare is very cheap for what they offer. We aren't using anything here that we couldn't move somewhere else if needed. It scales pretty well, up to 20 origins (which isn't amazing, but if you have 20 servers serving customer traffic you are ready to move up in complexity). You are free to change the backend compute as needed (or even experiment with local machines, mix and match datacenter and cloud, etc). You also get a nice dashboard of what is going on without any work. It's a hard value proposition to fight against, especially when almost all of it is free. I also have no ideological dog in the fight of OSS vs SaaS.

Pricing

  • Up to 2 origin servers: $5 per month
  • Additional origins, up to 20: $5 per month per origin
  • First 500k DNS requests are free
  • $0.50 per every 500k DNS requests after

Compared to ALB pricing, we can see why this is more idiot-proof. ALB has four cost dimensions: new connections (per second), active connections (per minute), processed bytes (GBs per hour), and rule evaluations (per second). The hourly bill is based on the maximum LCUs consumed across those four dimensions. Now, ALBs can be much cheaper than Cloudflare, but the cost is harder to control: if one dimension starts to explode in price, there isn't a lot you can do to bring it back down.

With Cloudflare we're looking at $20 a month plus traffic. So if we get 60,000,000 DNS requests a month, we're paying about $60 a month in DNS and $20 for the load balancer. For ALB it would largely depend on the type of traffic we're getting and how it is distributed.
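
As a back-of-the-envelope check on that DNS number (assuming billing rounds up to the next 500k block, which is my assumption, not something from Cloudflare's docs):

```python
def cloudflare_dns_cost(requests):
    """First 500k DNS requests free, then $0.50 per additional 500k.
    Assumes partial blocks are rounded up -- check your actual bill."""
    billable = max(0, requests - 500_000)
    blocks = -(-billable // 500_000)  # ceiling division
    return blocks * 0.50

print(cloudflare_dns_cost(60_000_000))  # 59.5 -- roughly the $60 above
```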

BUT there are also much cheaper options. For €7 a month on Hetzner, you can get 25 targets and 20 TB of network traffic, with €1/TB above that. So for the same cost we could handle a pretty incredible amount of traffic through Hetzner, but it commits us to them and violates the spirit of this thing. I just wanted to mention it in case someone was getting ready to "actually" me.

Also keep in mind we're just in the "trying ideas out" part of the exercise. Let's define a load balancer.

provider "cloudflare" {
  email   = "[email protected]"
  api_key = "your_api_key"
}

resource "cloudflare_load_balancer" "example_lb" {
  name             = "example-load-balancer.example.com"
  zone_id          = "0da42c8d2132a9ddaf714f9e7c920711"
  default_pool_ids = [cloudflare_load_balancer_pool.pool1.id, cloudflare_load_balancer_pool.pool2.id]
  fallback_pool_id = cloudflare_load_balancer_pool.pool1.id
  steering_policy  = "random"
  session_affinity = "none"
  proxied          = true

  # Add other load balancer settings here from https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/load_balancer
}

Then we need a monitor.

resource "cloudflare_load_balancer_monitor" "example" {
  account_id     = "f037e56e89293a057740de681ac9abbe"
  type           = "http"
  expected_body  = "alive"
  expected_codes = "2xx"
  method         = "GET"
  timeout        = 7
  path           = "/health"
  interval       = 60
  retries        = 2
  description    = "example http load balancer"
  header {
    header = "Host"
    values = ["example.com"]
  }
  allow_insecure   = false
  follow_redirects = true
  probe_zone       = "example.com"
}

Finally, we need some pools.

resource "cloudflare_load_balancer_pool" "pool1" {
  account_id = "f037e56e89293a057740de681ac9abbe"
  name       = "pool1"
  monitor = cloudflare_load_balancer_monitor.example.id
  origins {
    name    = "server01"
    address = "d9bb:3880:71b0:5fab:e426:8883:5a75:e82e"
    enabled = false
    header {
      header = "Host"
      values = ["server01"]
    }
  }
  origins {
    name    = "server02"
    address = "9726:61db:23a9:41d5:7eb0:649a:87b0:4291"
    header {
      header = "Host"
      values = ["server02"]
    }
  }
  description        = "example load balancer pool 1"
  enabled            = false
  minimum_origins    = 1
  notification_email = "[email protected]"
  load_shedding {
    default_percent = 55
    default_policy  = "random"
  }
  origin_steering {
    policy = "random"
  }
}

resource "cloudflare_load_balancer_pool" "pool2" {
  account_id = "f037e56e89293a057740de681ac9abbe"
  name       = "pool2"
  monitor = cloudflare_load_balancer_monitor.example.id
  origins {
    name    = "server03"
    address = "3601:03b9:88b7:fa50:8163:818c:eceb:bc14"
    enabled = false
    header {
      header = "Host"
      values = ["server03"]
    }
  }
  origins {
    name    = "server04"
    address = "8118:87ef:6b50:099d:fc4a:e66d:a991:5d20"
    header {
      header = "Host"
      values = ["server04"]
    }
  }
  description        = "example load balancer pool 2"
  enabled            = false
  minimum_origins    = 1
  notification_email = "[email protected]"
  load_shedding {
    default_percent = 55
    default_policy  = "random"
  }
  origin_steering {
    policy = "random"
  }
}

The addresses are just placeholders; you'll need to swap in your own values (and flip the enabled flags to true when you're ready to serve traffic). This gives us a nice basic load balancer. Note that we don't have session affinity turned on, so we'll need to add Redis or something to handle state server-side. The IP addresses we point to will need to be reserved on the cloud provider side, but we can use IPv6, which should hopefully save us a few dollars a month there.

How much uptime is enough uptime

So there are two paths here we have to discuss before we get much further.

Path 1

When we deploy to a server, we make an API call to Cloudflare to mark the origin as not enabled. Then we wait for the connections to drain, deploy the container, bring it back up, wait for it to be healthy, and then mark it enabled again. This is traditionally how we would do things if we were targeting zero downtime.

Now, we can do this. There are places later where we could stick such a script. But it's gonna be brittle. We'd basically need to do something like the following.

  • Run a GET against https://api.cloudflare.com/client/v4/user/load_balancers/pools
  • Take the result, look at the IP addresses, figure out which one is the machine in question and then mark it as not enabled IF all other origins were healthy. We wouldn't want to remove multiple machines at the same time. So we'd then need to hit: https://api.cloudflare.com/client/v4/user/load_balancers/pools/{identifier}/health and confirm the health of the pools.
  • But "health" isn't an instant concept. There is a delay between when the origin becomes unhealthy and when I'll know about it, depending on how often I check and how many retries are allowed. So this isn't a perfect system, but it should work pretty well as long as I add a bit of jitter to it.
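
The "only pull a machine if everything else is healthy" decision from the steps above can be sketched as a pure function. The dict shape here is a simplified stand-in for what the pools and health endpoints return, not the actual Cloudflare API response format:

```python
def safe_to_drain(origins, target_address):
    """Return True if `target_address` can be disabled: every OTHER
    enabled origin must currently be healthy, and at least one must
    remain to take traffic, so we never pull two machines at once.

    `origins` is a list of dicts like
    {"address": "...", "enabled": True, "healthy": True}
    (a simplified stand-in, not the real API shape)."""
    others = [o for o in origins
              if o["address"] != target_address and o["enabled"]]
    return bool(others) and all(o["healthy"] for o in others)
```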

I think this exceeds what I want to do for the first pass. We can do it, but it's not consistent with the uptime discussion we had before. This is brittle and is going to require a lot of babysitting to get right.

Path 2

We rely on the healthchecks to steer traffic and assume that our deployments are going to be pretty fast, so while we might drop some traffic on the floor, a user (with our random distribution and server-side sessions) should be able to reload the page and hopefully get past the problem. It might not scale forever but it does remove a lot of our complexity.

Let's go with Path 2 for now.

Server setup + WAF

Alright, so we've got the load balancer; it sits on the internet and takes traffic. Fabulous stuff. How do we set up a server? To do it cross-platform, we have to use cloud-init.

The basics are pretty straightforward. We're gonna use the latest Debian, update it, and restart. Then we're gonna install Docker Compose and finally stick a few files in there to run this. This is all pretty easy, but we do have a problem to tackle first: we need some level of secrets management so we can write our Terraform and cloud-init files and keep them in version control without the secrets just kinda living there.

SOPS

So typically for secret management we want to use whatever our cloud provider gives us, but since we don't have something like that, we'll need to do something more basic.

We'll use age for encryption, which is a great, simple encryption tool. You can install it here. We run age-keygen -o key.txt, which gives us our key file. Then we set an environment variable with the path to the key, like this: SOPS_AGE_KEY_FILE=/Users/mathew.duggan/key.txt

For those unfamiliar with how SOPS (installed here) works: you generate the age key as shown above, and then you can encrypt files through the CLI or with Terraform locally. So:

secrets.json
{
   "username": "admin",
   "password": "password"
}

Turns into:

{
	"username": "ENC[AES256_GCM,data:+bGf/sI=,iv:J47szLfZ5wMWr6Ghc94VAABXs2Ec4Hi+e3ohc2HuF/Q=,tag:XIY1jOgDe9SBDMGxFhLwtw==,type:str]",
	"password": "ENC[AES256_GCM,data:RIHz14crqEk=,iv:H3g7/4Bd5vB/6U+Kf+rIR/xBRIGHGoZeN7U1zi5lgsM=,tag:+vD9BXb18rLhpf/sTsvYEA==,type:str]",
	"sops": {
		"kms": null,
		"gcp_kms": null,
		"azure_kv": null,
		"hc_vault": null,
		"age": [
			{
				"recipient": "age1j6dmaunhspfvh78lgnrtr6zkd7whcypcz6jdwypaydc6gaa79vtq5ryvzf",
				"enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA1YlcvdkpGc3pBbVFiUnhP\nYVJnalp0WlREVjlQZkFROGtvcWN2VWxsUUJnCmYvZ1ZPd3NzTjZxNHd6MEVNcmI1\nTTBZdnFaSEFSaXZRK28rc01VZGRxWHMKLS0tIGpZUjZCNDFDUnIvYXRJTDhtcGlu\nT3JJWlN1YlJYeU1ueEQ1cytDbDFXQ00K70mBEowf/AGgiFFNj3ocv0NfbI1IMJX/\nMJHMKtXPYJsoSKJla6Y+cXMXPe7LNNorSnmqvkNF7rgEMvONMNoEiA==\n-----END AGE ENCRYPTED FILE-----\n"
			}
		],
		"lastmodified": "2023-10-19T13:06:42Z",
		"mac": "ENC[AES256_GCM,data:q8R8Zb+PtpBs6TBPu6VJsQXEKLwi2+WtpE3culIy1obUNdfjWaXyBtC/zbWI5eeh2Z4u//2p49G2bMv0jSzMJZnH4TLIzpHxnd6XFjzu4TqObM6FnI3ZW/SSoPwTRxgHqvooMffm3NO5pxoz3FhnJDHwYk+jTK+JoGxyZF5nBe4=,iv:Ey+so87o/kYbvOaSUXs+vyIrEQXEC39vmswdl0L3Gvw=,tag:5mWJTfBgCFjXVuoYBUiDCA==,type:str]",
		"pgp": null,
		"unencrypted_suffix": "_unencrypted",
		"version": "3.8.1"
	}
}

That encrypted output is produced by running: sops --encrypt --age age1j6dmaunhspfvh78lgnrtr6zkd7whcypcz6jdwypaydc6gaa79vtq5ryvzf secrets.json > secrets.enc.json

So we can use this with Terraform pretty easily. We run export SOPS_AGE_KEY_FILE=/Users/mathew.duggan/key.txt just to ensure everything is set and then the Terraform looks like the following:

terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
      version = "~> 0.5"
    }
  }
}

data "sops_file" "secret" {
  source_file = "secrets.enc.json"
}

output "root-value-password" {
  # Access the password variable from the map
  value = data.sops_file.secret.data["password"]
  sensitive = true
}

Now, you can also use SOPS with AWS, GCP, Azure, or their respective secrets systems. I present this only as a "we're small and looking for a way to easily encrypt configuration files" solution.

Cloud init

So now we're at the last part of the server setup. We'll need to define a cloud-init YAML to set up the host, and a Docker Compose file to set up the application that is going to handle all the pulling of images from here on. Thankfully, we should be able to reuse this stuff for the foreseeable future.

#cloud-config

package_update: true
package_upgrade: true
package_reboot_if_required: true

groups:
    - docker

users:
    - name: admin
      lock_passwd: true
      shell: /bin/bash
      ssh_authorized_keys:
      - ${init_ssh_public_key}
      groups: docker
      sudo: ALL=(ALL) NOPASSWD:ALL

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - unattended-upgrades
  - nginx
  
write_files:
  - owner: root:root
    encoding: b64
    path: /etc/ssl/cloudflare.crt
    content: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlHQ2pDQ0EvS2dBd0lCQWdJSVY1RzZsVmJDTG1Fd0RRWUpLb1pJaHZjTkFRRU5CUUF3Z1pBeEN6QUpCZ05WDQpCQVlUQWxWVE1Sa3dGd1lEVlFRS0V4QkRiRzkxWkVac1lYSmxMQ0JKYm1NdU1SUXdFZ1lEVlFRTEV3dFBjbWxuDQphVzRnVUhWc2JERVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeVlXNWphWE5qYnpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2DQpjbTVwWVRFak1DRUdBMVVFQXhNYWIzSnBaMmx1TFhCMWJHd3VZMnh2ZFdSbWJHRnlaUzV1WlhRd0hoY05NVGt4DQpNREV3TVRnME5UQXdXaGNOTWpreE1UQXhNVGN3TURBd1dqQ0JrREVMTUFrR0ExVUVCaE1DVlZNeEdUQVhCZ05WDQpCQW9URUVOc2IzVmtSbXhoY21Vc0lFbHVZeTR4RkRBU0JnTlZCQXNUQzA5eWFXZHBiaUJRZFd4c01SWXdGQVlEDQpWUVFIRXcxVFlXNGdSbkpoYm1OcGMyTnZNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVNNd0lRWURWUVFEDQpFeHB2Y21sbmFXNHRjSFZzYkM1amJHOTFaR1pzWVhKbExtNWxkRENDQWlJd0RRWUpLb1pJaHZjTkFRRUJCUUFEDQpnZ0lQQURDQ0Fnb0NnZ0lCQU4yeTJ6b2pZZmwwYktmaHAwQUpCRmVWK2pRcWJDdzNzSG12RVB3TG1xRExxeW5JDQo0MnRaWFI1eTkxNFpCOVpyd2JML0s1TzQ2ZXhkL0x1akpuVjJiM2R6Y3g1cnRpUXpzbzB4emxqcWJuYlFUMjBlDQppaHgvV3JGNE9rWkt5ZFp6c2RhSnNXQVB1cGxESDVQN0o4MnEzcmU4OGpRZGdFNWhxanFGWjNjbENHN2x4b0J3DQpoTGFhem0zTkpKbFVmemRrOTdvdVJ2bkZHQXVYZDVjUVZ4OGpZT09lVTYwc1dxbU1lNFFIZE92cHFCOTFiSm9ZDQpRU0tWRmpVZ0hlVHBOOHROcEtKZmI5TEluM3B1bjNiQzlOS05IdFJLTU5YM0tsL3NBUHE3cS9BbG5kdkEyS3czDQpEa3VtMm1IUVVHZHpWSHFjT2dlYTlCR2pMSzJoN1N1WDkzelRXTDAydTc5OWRyNlhrcmFkL1dTaEhjaGZqalJuDQphTDM1bmlKVURyMDJZSnRQZ3hXT2JzcmZPVTYzQjhqdUxVcGhXLzRCT2pqSnlBRzVsOWoxLy9hVUdFaS9zRWU1DQpscVZ2MFA3OFFyeG94UitNTVhpSndRYWI1RkI4VEcvYWM2bVJIZ0Y5Q21rWDkwdWFSaCtPQzA3WGpUZGZTS0dSDQpQcE05aEIyWmhMb2wvbmY4cW1vTGRvRDVIdk9EWnVLdTIrbXVLZVZIWGd3Mi9BNndNN093cmlueFppeUJrNUhoDQpDdmFBREg3UFpwVTZ6L3p2NU5VNUhTdlhpS3RDekZ1RHU0L1pmaTM0UmZIWGVDVWZIQWI0S2ZOUlhKd01zeFVhDQorNFpwU0FYMkc2Um5HVTVtZXVYcFU1L1YrRFFKcC9lNjlYeXlZNlJYRG9NeXdhRUZsSWxYQnFqUlJBMnBBZ01CDQpBQUdqWmpCa01BNEdBMVVkRHdFQi93UUVBd0lCQmpBU0JnTlZIUk1CQWY4RUNEQUdBUUgvQWdFQ01CMEdBMVVkDQpEZ1FXQkJSRFdVc3JhWXVBNFJFemFsZk5Wemphbm4zRjZ6QWZCZ05WSFNNRUdEQVdnQlJEV1VzcmFZdUE0UkV6DQphbGZOVnpqYW5uM0Y2ekFOQmdrcWhraUc5dzBCQVEwRkFBT0NBZ0VBa1ErVDlucWNTbEF1Vy85MERlWW1RT1cxDQpRaHFPb3I1cHNCRUd2eGJOR1Yy
aGRMSlk4aDZRVXE0OEJDZXZjTUNoZy9MMUNrem5CTkk0MGkzLzZoZURuM0lTDQp6VkV3WEtmMzRwUEZDQUNXVk1aeGJRamtOUlRpSDhpUnVyOUVzYU5RNW9YQ1BKa2h3ZzIrSUZ5b1BBQVlVUm9YDQpWY0k5U0NEVWE0NWNsbVlISi9YWXdWMWljR1ZJOC85YjJKVXFrbG5PVGE1dHVnd0lVaTVzVGZpcE5jSlhIaGd6DQo2QktZRGwwL1VQMGxMS2JzVUVUWGVUR0RpRHB4WllJZ2JjRnJSRERrSEM2QlN2ZFdWRWlINWI5bUgyQk9ONjB6DQowTzBqOEVFS1R3aTlqbmFmVnRaUVhQL0Q4eW9Wb3dkRkRqWGNLa09QRi8xZ0loOXFyRlI2R2RvUFZnQjNTa0xjDQo1dWxCcVphQ0htNTYzanN2V2Iva1hKbmxGeFcrMWJzTzlCREQ2RHdlQmNHZE51cmdtSDYyNXdCWGtzU2REN3kvDQpmYWtrOERhZ2piaktTaFlsUEVGT0FxRWNsaXdqRjQ1ZWFiTDB0MjdNSlY2MU8vakh6SEwzZGtuWGVFNEJEYTJqDQpiQStKYnlKZVVNdFU3S01zeHZ4ODJSbWhxQkVKSkRCQ0ozc2NWcHR2aERNUnJ0cURCVzVKU2h4b0FPY3BGUUdtDQppWVdpY240Nm5QRGpnVFUwYlgxWlBwVHByeVhidmNpVkw1UmtWQnV5WDJudGNPTERQbFpXZ3haQ0JwOTZ4MDdGDQpBbk96S2daazRSelpQTkF4Q1hFUlZ4YWpuL0ZMY09oZ2xWQUtvNUgwYWMrQWl0bFEwaXA1NUQyL21mOG83MnRNDQpmVlE2VnB5akVYZGlJWFdVcS9vPQ0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
  - owner: root:root
    encoding: b64
    path: /etc/ssl/cert.pem
    content: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlFcGpDQ0E0NmdBd0lCQWdJVUgzZXMwaHVaQy8rTUNxQWRyWXEwTE05UFY4QXdEUVlKS29aSWh2Y05BUUVMDQpCUUF3Z1lzeEN6QUpCZ05WQkFZVEFsVlRNUmt3RndZRFZRUUtFeEJEYkc5MVpFWnNZWEpsTENCSmJtTXVNVFF3DQpNZ1lEVlFRTEV5dERiRzkxWkVac1lYSmxJRTl5YVdkcGJpQlRVMHdnUTJWeWRHbG1hV05oZEdVZ1FYVjBhRzl5DQphWFI1TVJZd0ZBWURWUVFIRXcxVFlXNGdSbkpoYm1OcGMyTnZNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoDQpNQjRYRFRJek1EY3pNVEUzTXprd01Gb1hEVE00TURjeU56RTNNemt3TUZvd1lqRVpNQmNHQTFVRUNoTVFRMnh2DQpkV1JHYkdGeVpTd2dTVzVqTGpFZE1Cc0dBMVVFQ3hNVVEyeHZkV1JHYkdGeVpTQlBjbWxuYVc0Z1EwRXhKakFrDQpCZ05WQkFNVEhVTnNiM1ZrUm14aGNtVWdUM0pwWjJsdUlFTmxjblJwWm1sallYUmxNSUlCSWpBTkJna3Foa2lHDQo5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdmtmbjB1eVZ3LzlSYlBDbDQ2dzhIeVZnTXZKREtVUWgvQUk0DQpIODRXRGRzM1hTRmxrbmFIK0FQdmJoM0Rsc3M5NEZnRDVGVVRMdENzQzRtSFpZVlNiRzJqeCtJbjJGcTdTSjdUDQp1QlJUbHBXWmNyVEViRjRBa00wRm53NGwwbEdQeFlZRjRaOG5uZm13YUtvNnlwb0Ftd3draXJWWXU3dWE4Mm01DQp3eWoyZHZKcWNkUExxTXdHRFVkYnlYemdwZE9IaXRBVFFoTE56VmtaOEI1L2RyODcweDR3TE8rRkVOOG92QUprDQpaNVZCRndSOEI5WEs4dUtEcmdBZkxYUVM5UVZ3WHpjcmQxQVp6S1RDVnBlMmlwemFiSGN5TUt1WDdpZjRTRGQ1DQpiZ2Ird1hycGY2dkNRWklDa3REdWJFcDdCVzlCNVhIUnlmMnJ2Yms2VEtjZ2xTbGNRUUlEQVFBQm80SUJLRENDDQpBU1F3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01DQmdnckJnRUZCUWNEDQpBVEFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU3pwcWpFOEJUK0FKYUg2c3VnRmwxajdqend4REFmDQpCZ05WSFNNRUdEQVdnQlFrNkZOWFhYdzBRSWVwNjVUYnV1RVdlUHdwcERCQUJnZ3JCZ0VGQlFjQkFRUTBNREl3DQpNQVlJS3dZQkJRVUhNQUdHSkdoMGRIQTZMeTl2WTNOd0xtTnNiM1ZrWm14aGNtVXVZMjl0TDI5eWFXZHBibDlqDQpZVEFwQmdOVkhSRUVJakFnZ2c4cUxtMWhkR1IxWjJkaGJpNWpiMjJDRFcxaGRHUjFaMmRoYmk1amIyMHdPQVlEDQpWUjBmQkRFd0x6QXRvQ3VnS1lZbmFIUjBjRG92TDJOeWJDNWpiRzkxWkdac1lYSmxMbU52YlM5dmNtbG5hVzVmDQpZMkV1WTNKc01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3VvUG9KV05VZ0xPRXVmendLRlprMHBvL2tNR29qDQoxYTdCSGEzcWtNWGUrN2J4aW1pQTBvYzcyVEhYSm8zVm82bTIwaGRpbDRiSzVPYzZoTGpiUTFOR2ZXNm84MXk2DQpyUXZEaXBXN3JuL3R3V3hPTkpHTFNDZDZFalpqWXpUUW5EdFBSQWQrVnBwV1BuNUtLZHRSNkM2ZjhaMFlqeldjDQp3b3JLdkRuV2E5b0gycEUzZUNS
RUZsc1lRUUtVNWxOYUpibm9nRXNaY2ZDa0MvU0JCaTRaN0lIRnJzWnd1YTU5DQorVDIxUWNOd3BKbExLZ2VRZlpLazMzTFc5MFlyYjRhNStMaTljQzZsVC9MRHdTc20ySkVVVm1nbDJOaC8wV2dpDQpBcHFxUjV5dmUwdUI2M0tTdW90Z2hyWlp0cnNhVW1OYytjRjhneHU4Si8rdXFhaWZQWk83NVZtVw0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
  - owner: root:root
    encoding: b64
    path: /etc/ssl/key.pem
    permissions: '0600'
    content: ${private_ssl_key}
  - owner: admin:docker
    path: /home/admin/docker-compose.yaml
    content: |
      version: "3"
      services:
        app:
          image: ghcr.io/<org>/<image>:<tag>
          restart: unless-stopped
          ports:
            - "8000:2368"
          labels:
            - "com.centurylinklabs.watchtower.enable=true"
        watchtower:
          image: containrrr/watchtower
          command: --debug --http-api-update
          restart: unless-stopped
          environment:
            - WATCHTOWER_HTTP_API_TOKEN=${watchtower_token}
          labels:
            - "com.centurylinklabs.watchtower.enable=false"
          ports:
            - "8080:8080"
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
            - /home/admin/.docker/config.json:/config.json
  - owner: www-data:www-data
    path: /etc/nginx/sites-available/default
    content: |
      server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        charset UTF-8;
        ssl_session_timeout 5m;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_protocols TLSv1.2;
        ssl_buffer_size 4k;
        ssl_certificate         /etc/ssl/cert.pem;
        ssl_certificate_key     /etc/ssl/key.pem;
        ssl_client_certificate  /etc/ssl/cloudflare.crt;
        ssl_verify_client on;

        server_name hostname.com www.hostname.com;

        location / {
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header Host $host;
          proxy_http_version 1.1;
          proxy_buffering on;
          proxy_pass http://127.0.0.1:8000;
          proxy_redirect off;
        }

        location /v1/update {
          proxy_http_version 1.1;
          proxy_buffering on;
          proxy_pass http://127.0.0.1:8080;
          proxy_redirect off;
        }
      }
  
runcmd:
  - curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
  - add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
  - curl -L "https://github.com/docker/compose/releases/download/v2.23.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
  - su admin -c 'docker login -u ${docker_username} -p ${docker_password} ${docker_repository}'
  - su admin -c '/usr/local/bin/docker-compose -f /home/admin/docker-compose.yaml up -d'

Now, obviously you'll need to modify and test this; it took some tweaks to get working on my end, and I'm confident there are improvements we could make. However, I think we can use it as a sample reference with the understanding that it is NOT ready to copy and paste.

So here's the basic flow. We're going to use the SSL certificates Cloudflare gives us, as well as inserting their certificate for Authenticated Origin Pulls. This ensures all the traffic coming to our server is from Cloudflare. Now, we could still receive traffic from another Cloudflare customer, even a malicious one, but at least this gives us a good starting point for limiting traffic. Plus, presumably if there is a malicious customer hitting you, at least you can reach out to Cloudflare and they'll do....something.

Now we put it together with Terraform and we have something we can deploy. We'll use DigitalOcean as our example, but the cloud provider part doesn't really matter.

secrets.json

{
   "private_ssl_key": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlFcGpDQ0E0NmdBd0lCQWdJVUgzZXMwaHVaQy8rTUNxQWRyWXEwTE05UFY4QXdEUVlKS29aSWh2Y05BUUVMDQpCUUF3Z1lzeEN6QUpCZ05WQkFZVEFsVlRNUmt3RndZRFZRUUtFeEJEYkc5MVpFWnNZWEpsTENCSmJtTXVNVFF3DQpNZ1lEVlFRTEV5dERiRzkxWkVac1lYSmxJRTl5YVdkcGJpQlRVMHdnUTJWeWRHbG1hV05oZEdVZ1FYVjBhRzl5DQphWFI1TVJZd0ZBWURWUVFIRXcxVFlXNGdSbkpoYm1OcGMyTnZNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoDQpNQjRYRFRJek1EY3pNVEUzTXprd01Gb1hEVE00TURjeU56RTNNemt3TUZvd1lqRVpNQmNHQTFVRUNoTVFRMnh2DQpkV1JHYkdGeVpTd2dTVzVqTGpFZE1Cc0dBMVVFQ3hNVVEyeHZkV1JHYkdGeVpTQlBjbWxuYVc0Z1EwRXhKakFrDQpCZ05WQkFNVEhVTnNiM1ZrUm14aGNtVWdUM0pwWjJsdUlFTmxjblJwWm1sallYUmxNSUlCSWpBTkJna3Foa2lHDQo5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdmtmbjB1eVZ3LzlSYlBDbDQ2dzhIeVZnTXZKREtVUWgvQUk0DQpIODRXRGRzM1hTRmxrbmFIK0FQdmJoM0Rsc3M5NEZnRDVGVVRMdENzQzRtSFpZVlNiRzJqeCtJbjJGcTdTSjdUDQp1QlJUbHBXWmNyVEViRjRBa00wRm53NGwwbEdQeFlZRjRaOG5uZm13YUtvNnlwb0Ftd3draXJWWXU3dWE4Mm01DQp3eWoyZHZKcWNkUExxTXdHRFVkYnlYemdwZE9IaXRBVFFoTE56VmtaOEI1L2RyODcweDR3TE8rRkVOOG92QUprDQpaNVZCRndSOEI5WEs4dUtEcmdBZkxYUVM5UVZ3WHpjcmQxQVp6S1RDVnBlMmlwemFiSGN5TUt1WDdpZjRTRGQ1DQpiZ2Ird1hycGY2dkNRWklDa3REdWJFcDdCVzlCNVhIUnlmMnJ2Yms2VEtjZ2xTbGNRUUlEQVFBQm80SUJLRENDDQpBU1F3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01DQmdnckJnRUZCUWNEDQpBVEFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU3pwcWpFOEJUK0FKYUg2c3VnRmwxajdqend4REFmDQpCZ05WSFNNRUdEQVdnQlFrNkZOWFhYdzBRSWVwNjVUYnV1RVdlUHdwcERCQUJnZ3JCZ0VGQlFjQkFRUTBNREl3DQpNQVlJS3dZQkJRVUhNQUdHSkdoMGRIQTZMeTl2WTNOd0xtTnNiM1ZrWm14aGNtVXVZMjl0TDI5eWFXZHBibDlqDQpZVEFwQmdOVkhSRUVJakFnZ2c4cUxtMWhkR1IxWjJkaGJpNWpiMjJDRFcxaGRHUjFaMmRoYmk1amIyMHdPQVlEDQpWUjBmQkRFd0x6QXRvQ3VnS1lZbmFIUjBjRG92TDJOeWJDNWpiRzkxWkdac1lYSmxMbU52YlM5dmNtbG5hVzVmDQpZMkV1WTNKc01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3VvUG9KV05VZ0xPRXVmendLRlprMHBvL2tNR29qDQoxYTdCSGEzcWtNWGUrN2J4aW1pQTBvYzcyVEhYSm8zVm82bTIwaGRpbDRiSzVPYzZoTGpiUTFOR2ZXNm84MXk2DQpyUXZEaXBXN3JuL3R3V3hPTkpHTFNDZDZFalpqWXpUUW5EdFBSQWQrVnBwV1BuNUtLZHRSNkM2ZjhaMFlqeldjDQp3b
3JLdkRuV2E5b0gycEUzZUNSRUZsc1lRUUtVNWxOYUpibm9nRXNaY2ZDa0MvU0JCaTRaN0lIRnJzWnd1YTU5DQorVDIxUWNOd3BKbExLZ2VRZlpLazMzTFc5MFlyYjRhNStMaTljQzZsVC9MRHdTc20ySkVVVm1nbDJOaC8wV2dpDQpBcHFxUjV5dmUwdUI2M0tTdW90Z2hyWlp0cnNhVW1OYytjRjhneHU4Si8rdXFhaWZQWk83NVZtVw0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ",
   "watchtower_token": "tx#okr#n+8_wpf%#n9cxr@30vi7wy_@*@69bw+smfic&k^zb8h",
   "docker_username": "username",
   "docker_password": "password",
   "docker_repository": "repository"
}
Base64-encoded private key for SSL, along with the Watchtower token to access the API, and everything else.

Terraform file

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.30.0"
    }
    sops = {
      source  = "carlpett/sops"
      version = "~> 0.5"
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "4.17.0"
    }
  }
}

variable "ssh_public_key" {
  type        = string
  description = "SSH public key"
  default     = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeogciUcb1roDZWVXaTFrMSqU66qlb4YT2GhDMZQm+cM6kxAgl5GY72Yiuir/Sml8pHMvTRPV5ezg+17gSntnBtIbf3wNwuB0F/21l7vGS2XteY6p557cRHZjSFuc2uPiysnI21FfZCrsEJ7uM3Ebyd/zJ394URcWQm54NtVh/QxuHzfuK9QCbxhlsXXFAfTnrWvLVGQkq/R+fjtKy12o42Y59JIsZT4aORSGujDiagBysGOCXonYqRhs9gmdZPkcKUe3r8j6fZRY2l8/QX3D6zhDZ8x74Gi70ojuvR8oCsWs9tB2sF/XQi806G/s/mbhh6hcj7ALyo5Th+jw7I8rj matdevdug@matdevdug-ThinkPad-X1-Carbon-5th"
}

provider "digitalocean" {
  token = "secret_api_key"
}

data "sops_file" "secret" {
  source_file = "secrets.enc.json"
}

locals {
  virtual_machines = {
    "server01" = { vm_size = "s-4vcpu-8gb", zone = "nyc1" },
    "server02" = { vm_size = "s-4vcpu-8gb", zone = "nyc1" },
    "server03" = { vm_size = "s-4vcpu-8gb", zone = "nyc1" },
    "server04" = { vm_size = "s-4vcpu-8gb", zone = "nyc1" }
  }
}

resource "digitalocean_droplet" "web" {
  for_each = local.virtual_machines
  name     = each.key
  image    = "debian-12-x64"
  size     = each.value.vm_size
  region   = each.value.zone
  user_data = templatefile("${path.module}/cloud-init.yaml", {
    init_ssh_public_key = var.ssh_public_key
    private_ssl_key     = data.sops_file.secret.data["private_ssl_key"]
    watchtower_token    = data.sops_file.secret.data["watchtower_token"]
    docker_username     = data.sops_file.secret.data["docker_username"]
    docker_password     = data.sops_file.secret.data["docker_password"]
    docker_repository   = data.sops_file.secret.data["docker_repository"]
  })
}

resource "digitalocean_reserved_ip" "example" {
  for_each   = digitalocean_droplet.web
  droplet_id = each.value.id
  region     = each.value.region
}

Hooking it all together

So we'll need to go back to the Cloudflare Terraform and set the reserved IPs we get from the cloud provider as the addresses for the origins. Then we can set up Authenticated Origin Pulls and switch SSL to "Strict" in the Cloudflare control panel. Finally, since we have Watchtower set up, all we need to deploy a new version of the application is a simple deploy script that curls each of our servers' IP addresses with the Watchtower HTTP API token set, telling it to pull a new version of our container from the registry and deploy it. Read more about that here.
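
A hypothetical sketch of that deploy script. The IPs and token are placeholders, and note that the nginx /v1/update route above requires Cloudflare's client certificate on 443, so this version hits Watchtower's published port 8080 directly:

```shell
# Hypothetical deploy script: tell Watchtower on every server to pull
# the new image and restart the app. SERVERS and the token are placeholders.
SERVERS="203.0.113.10 203.0.113.11 203.0.113.12 203.0.113.13"
WATCHTOWER_TOKEN="${WATCHTOWER_TOKEN:-changeme}"

deploy() {
  for host in $SERVERS; do
    # -fsS: fail on HTTP errors, stay quiet otherwise, still print real errors
    curl -fsS -H "Authorization: Bearer $WATCHTOWER_TOKEN" \
      "http://$host:8080/v1/update" || echo "update failed on $host" >&2
  done
}

# Run with: deploy
```

If you'd rather not expose 8080 to the internet, you could instead relax client-certificate verification on just the /v1/update location in nginx, at the cost of leaning entirely on the token for auth.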

In my testing (which was somewhat limited), even though the scripts needed tweaks and modifications, the underlying concept actually worked pretty well. I was able to see all my traffic coming through Cloudflare easily, the SSL components all worked and whenever I wanted to upgrade a host it was pretty simple to stop traffic to the host in the web UI, reboot or destroy and run Terraform again and then send traffic to it again.

As for encryption, while my age solution isn't perfect, I think it'll hold together reasonably well. The encrypted values can be committed safely to source control, and you can rotate the key pretty easily whenever you want.

Next Steps

  • Put the whole thing together in a structured Terraform module so it's more reliable and less prone to random breakage
  • Write out a bunch of different cloud provider options to make it easier to switch between them
  • Write a simple CLI to remove an origin from the load balancer before running the deploy and then confirming the origin is healthy before sticking it back in (for the requirement of zero-downtime deployments)
  • Take a second pass at the encryption story.

Going through this is a useful exercise in explaining why these infrastructure products are so complicated. They're complicated because it's hard to do and there are a lot of moving parts. Even with heavy use of existing tooling, this thing turned out to be more complicated than I expected.

Hopefully this has been an interesting thought experiment. I'm excited to take another pass at this idea and potentially turn it into a more usable product. If this was helpful (or if I missed something), I'm always open to feedback. Especially if you thought of an optimization!  https://c.im/@matdevdug


Terraform Cloud Review

Source

If I were told to go off and make a hosted Terraform product, I would probably end up with a list of features that looked something like the following:

  • Extremely reliable state tracking
  • Assistance with upgrading between versions of Terraform and providers and letting users know when it looked safe to upgrade and when there might be problems between versions
  • Consistent running of Terraform with a fresh container image each time, providers and versions cached on the host VM so the experience is as fast as possible
  • As many linting, formatting and HCL optimizations as I can offer, configurable on and off
  • Investing as much engineering work as I can afford in providing users an experience where, unlike with the free Terraform, if a plan succeeds on Terraform Cloud, the Apply will succeed
  • Assisting with Workspace creation. Since we want to keep the number of resources low, seeing if we can leverage machine learning to say "we think you should group these resources together as their own workspace" and showing you how to do that
  • Figure out some way for organizations to interact with the Terraform resources other than just running the Terraform CLI, so users can create richer experiences for their teams through easy automation that feeds back into the global source of truth that is my incredibly reliable state tracking
  • Try to do whatever I can to encourage more resources in my cloud. Unlimited storage, lots of workspaces, helping people set up workspaces. The more stuff in there the more valuable it is for the org to use (and also more logistically challenging for them to cancel)

This, to me, would be a product I would feel confident charging a lot of money for. Terraform Cloud is not that product. It has some of these features locked behind the most expensive tiers, but not enough of them to justify the price.

I've written about my feelings around the Terraform license change before. I won't bore you with that again. However, since the safest way to use Terraform now is to pay HashiCorp, what does that look like? As someone who has used Terraform for years and Terraform Cloud almost daily for a year, it's a profoundly underwhelming experience.

Currently it is a little-loved product with lots of errors and sharp edges. This is as close to a v0.1 of this idea as I could imagine, except the pace of development has been glacial. Terraform Cloud is a "good enough" platform that seems to understand that if you could do better, you would. Like a diner at 2 AM on the side of the highway, its primary selling point is the fact that it is there. That, and the license terms you will need to accept soon.

Terraform Cloud - Basic Walkthrough

At a high level Terraform Cloud allows organizations to centralize their Projects and Workspaces and store that state with HashiCorp. It also gives you access to a Registry for you to host your own private Terraform modules and use them in your workspaces. The top level options look as follows:

That's it!

You may be wondering "What does Usage do?" I have no idea, as the web UI has never worked for me even though I appear to have all the permissions one could have. I have seen the following since getting my account:

I'm not sure what wasn't found.

I'm not sure what access I lack or if the page was intended to work. It's very mysterious in that way.

There is Explorer, which lets you basically see "what versions of things do I use across the different repos". You can't do anything with that information; I can't say "alright, well, upgrade these two to the version that everyone else uses". It's also a beta feature and not one that existed when I first started using the platform.

Finally there are the Workspaces, where you spend 99% of your time.

You get some ok stats here. Up in the top left you see "Needs Attention", "Errors", "Running", "Hold" and then "Applied." Even though you may have many Workspaces, you cannot change how many you see here. 20 is the correct number I guess.

Creating a Workspace

Workspaces are either based on a repo, CLI driven, or you call the API. You tell it what VCS, what repo, and whether you want to use the root of the repo or a sub-directory (which is good, because soon you'll have too many resources to use one workspace for everything). You tell it Auto Apply (which is checked by default) or Manual and when to trigger a run (whenever anything changes, whenever specific files in a path change or whenever you push a tag). That's it.

You can see all the runs, what their status is and basically what resources have changed or will change. Any plan that you run from your laptop also shows up here. You don't have to manage your runs here, you can still run them locally, but then there is absolutely no reason to use this product. Almost all of the features rely on your runs being handled by HashiCorp inside of a Workspace.

Workspace flow

Workspaces show you when the run was, how long the plan took and what resources are associated with it (10 resources at a time, even though you might have thousands). Details links you to the last run, and there are tags and run triggers. Run triggers allow you to link workspaces together, so this workspace would be dependent on the output of another workspace.

The settings are as follows:

Runs is pretty straightforward. States allows you to inspect the state changes directly, so you can see the full JSON of a resource and roll back to a specific state version. This can be nice for reviewing what specifically changed on each resource, but in my experience you don't get much over looking at the actual code. But if you are in a situation where something has suddenly broken and you need a fast way of saying "what was added and what was removed", this is where you would go.

NOTE: BE SUPER CAREFUL WITH THIS

The state inspector has the potential to show TONS of sensitive data. It's all the data in Terraform in the raw form. Just be aware it exists when you start using the service and take a look to ensure there isn't anything you didn't want there.

Variables are variables and the settings allow you to lock the workspace, apply Sentinel settings, set an SSH key for downloading private modules and finally if you want changes to the VCS to trigger an action here. So for instance, when you merge in a PR you can trigger Terraform Cloud to automatically apply this workspace. Nothing super new here compared to any CI/CD system, but still it is all baked in.

That's it!

No-Code Modules

This is one selling point I heard a lot about, but haven't actually seen anyone use. The idea is good though: you write premade modules and push them to your private registry, then members of your organization can just run them to do things like "stand up a template web application stack". HashiCorp has a tutorial here that I ran through and found to work pretty much as expected. It isn't anywhere near the level of power that I would want compared to something like Pulumi, but it is a nice step forward for automating truly constant tasks (like adding domain names to an internal domain or provisioning an SSL certificate for testing).

Dynamic Credentials

You can link Terraform Cloud and Vault, if you use it, so you no longer need to stick long-living credentials inside of the Workspace to access cloud providers. Instead you can leverage Vault to get short-lived credentials that improve the security of the Workspaces. I ran through this and did have problems getting it working for GCP, but AWS seemed to work well. It requires some setup inside of the actual repository, but it's a nice security improvement vs leaving production credentials in this random web application and hoping you don't mess up the user scoping.

User scoping is controlled primarily through "projects", which basically trickle down to the user level. You make a project, which has workspaces, that have their own variables and then assign that to a team or business unit. That same logic is reflected inside of credentials.

Private Registry

This is one thing Hashicorp nailed. It's very easy to hook up Terraform Cloud to allow your workspaces to access internal modules backed by your private repositories. It supports the same documentation options as public modules, tracks downloads and allows for easy versioning control through git tags. I have nothing but good things to say about this entire thing.

Sharing between organizations is something they lock at the top tier, but this seems like a very niche use case so I don't consider it too big of a problem. However, if you are someone looking to produce a private provider or module for your customers to use, I would reach out to HashiCorp and see how they want you to do that.

The primary value for this is just to easily store all of your IaC logic in modules and then rely on the versioning inside of different environments to roll out changes. For instance, we do this for things like upgrading a system. Make the change, publish the new version to the private registry and then slowly roll it out. Then you can monitor the rollout through git grep pretty easily.

Pricing

$0.00014 per hour per resource. So a lot of money when you think "every IAM custom role, every DNS record, every SSL certificate, every single thing in your entire organization". You do get a lot of the nice features at this "standard" tier, but I'm kinda shocked they don't unlock all the enterprise features at this price point. No-code provisioning is only available at the higher levels, as are drift detection, continuous validation (checks between runs to see if anything has changed) and ephemeral workspaces. The last one is a shame, because it looks like a great feature: set up your workspace to self-destruct at regular intervals so you can nuke development environments. I'd love to use that, but alas.
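To make that rate concrete, here's a rough sketch of the monthly math (the resource counts are made up for illustration):

```python
RATE = 0.00014          # USD per resource per hour, "standard" tier
HOURS_PER_MONTH = 730   # average month


def monthly_cost(resource_count: int) -> float:
    """Terraform Cloud bill for a month of managing `resource_count` resources."""
    return resource_count * HOURS_PER_MONTH * RATE


# A mid-sized org can easily accumulate thousands of managed resources
# once every DNS record and IAM role lives in state.
print(round(monthly_cost(1000), 2))  # -> 102.2
print(round(monthly_cost(5000), 2))  # -> 511.0
```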

Problems

Oh the problems. So the runners sometimes get "stuck", which seems to usually happen after someone cancels a job in the web UI. You'll run into an issue, try to cancel a job, fix the problem and rerun the runner only to have it get stuck forever. I've sat there and watched it try to load the modules for 45 minutes. There isn't any way I have seen to tell Terraform Cloud "this runner is broken, go get me another one". Sometimes they get stuck for an unknown reason.

Since you need to make all the plans and applies remotely to get any value out of the service, it can also sometimes cause traffic jams in your org. If you work with Terraform a lot, you know you need to run plans pretty regularly. Since you need to wait for a runner every single time, you can end up wasting a lot of time sitting there waiting for another job to finish. Again I'm not sure what triggers you getting another runner. You can self host, but then I'm truly baffled at what value this tool brings.

Even if that was an option for you and you wanted to do it, it's locked behind the highest subscription tier. So I can't even say "add a self-hosted runner just for plans" so I could unstick my team. This seems like an obvious add, along with a lot more runner controls so I could see what was happening and how to avoid getting it jammed up.

Conclusion

I feel bad this is so short, but there just isn't anything else to write. This is a super bare-bones tool that does what it says on the box for a lot of money. It doesn't give you a ton of value over Spacelift or any of the others. I can't recommend it, it doesn't work particularly well and I haven't enjoyed my time with it. Managing it vs using an S3 bucket is an experience I would describe as "marginally better". It's nice that it handles contention across teammates for me, but so do all the others at a lower price.

I cannot think of a single reason to recommend this over Spacelift, which has better pricing, better tooling and seems to have a better runner system, except for the license change. Which was clearly the point of the license change. However, for those evaluating options, head elsewhere. This thing isn't worth the money.


We need a different name for non-technical tech conferences

I recently returned from Google Cloud Next. Typically I wouldn't go to a vendor conference like this, since they're usually thinly veiled sales meetings wearing the trench-coat of a conference. However, I've been to a few GCP events and found them to be technical and well-run, so I rolled the dice and hopped on the 11-hour flight from London to San Francisco.

We all piled into Moscone Center and I was pretty hopeful. There were a lot of engineers from Google and other reputable orgs, the list of talks we had signed up for before showing up sounded good, or at least useful. I figured this could be a good opportunity to get some idea of where GCP was going and perhaps hear about some large customers technical workarounds to known limitations and issues with the platform. Then we got to the keynote.

AI. The only topic discussed and the only thing anybody at the executive level cared about was AI. This would become a theme, a constant refrain among every executive-type I spoke to. AI was going to replace customer service, programmers, marketing, copy writers, seemingly every single person in the company except for the executives. It seemed only the VPs and the janitors were safe. None of the leaders I spoke to afterwards seemed to appreciate my observation that if they spent most of their day in meetings being shown slide decks, wouldn't they be the easiest to replace with a robot? Or maybe their replacement could be a mop with sunglasses leaned against an office chair if no robot was available.

I understand keynotes aren't for engineers, but the sense I got from this was "nothing has happened in GCP anywhere else except for AI". This isn't true, like objectively I know new things have been launched, but it sends a pretty clear message that it's not a priority if nobody at the executive level seems to care about them. This is also a concern because Google famously has institutional ADHD with an inability to maintain long-term focus on slowly incrementing and improving a product. Instead it launches amazing products, years ahead of the competition then, like a child bored with a toy, drops them into the backyard and wanders away. But whatever, let's move on from the keynote.

Over the next few days, what I experienced was an event with some fun moments but mostly devoid of any technical discussion whatsoever. Talks were rarely geared towards technical staff, and when technical questions came up during the recorded events they were almost never answered. Most importantly, no presentation I heard even remotely touched on long-known missing features of GCP compared to peers, or on roadmaps. When I asked technical questions, Google employees would often come up to me after the talk with the answer, which I appreciate. But everyone at home and in the future won't get that experience and will miss out on the benefit.

Most talks were the GCP product's marketing page turned into slides, with a seemingly mandatory reference to AI in each one. Several presenters joked "that was my required AI callout", which started funny, but as time went on I began to worry...maybe they were actually required to mention AI? There were almost no live demos (pre-recorded is ok, but live is more compelling), zero code shown, and mostly a tour of existing things the GCP web console could do along with a few new features. I ended up getting more value from finding the PMs of various products on the floor and subjecting these poor souls to my many questions.

This isn't just a Google problem. Every engineer I spoke to about this had a similar story of getting burned by a "not a conference conference". From AWS to Salesforce and Facebook, these organizations pitch people on getting facetime with engineers and concrete answers to questions. Instead they're an opportunity to pitch you on more products, letting executives feel loved by ensuring they get one-on-one time with senior folks from the parent company. They sound great but mostly they're an opportunity to collect stickers.

We need to stop pretending these types of conferences are technical conferences. They're not. It's an opportunity for non-technical people inside of your organization who interact with your technical SaaS providers to get facetime with employees of that company and ask basic questions in a shame-free environment. That has value and should be something that exists, but you should also make sure engineers don't wander into these things.

Here are the 7 things I think you shouldn't do if you call yourself a tech conference.

7 Deadly Sins of "Tech" Conferences

  • Discussing internal tools that aren't open source and that I can't see or use. It's great if X corp has worked together with Google to make the perfect solution to a common problem. It doesn't mean shit to me if I can't use it or at least see it and ask questions about it. Don't let it into the slide deck if it has zero value to the community outside of showing that "solving this problem is possible".
  • Not letting people who work with customers talk about common problems. I know, from talking to Google folks and from lots of talks with other customers, common issues people experience with GCP products. Some are misconfigurations or not understanding what the product is good at and designed to do. If you talk about a service, you need to discuss something about "common pitfalls" or "working around frequently seen issues".
  • Pretending a sales pitch is a talk. Nothing makes me see red like halfway through a talk, inviting the head of sales onto the stage to pitch me on their product. Jesus Christ, there's a whole section of sales stuff, you gotta leave me alone in the middle of talks.
  • Not allowing a way for people to get questions into the livestream. Now this isn't true for every conference, but if this is the one time a year people can ask questions of the PM for a major product and see if they intend to fix a problem, let me ask that question. I'll gladly submit it beforehand and let people vote on it, or whatever you want. It can't be a free-for-all but there has to be something.
  • Skipping all specifics. If you are telling me that X service is going to solve all my problems and you have 45 minutes, don't spend 30 explaining how great it is in the abstract. Show me how it solves those problems in detail. Some of the Google presenters did this and I'm extremely grateful to them, but it should have been standard. I saw the "Google is committed to privacy and safety" generic slides so many times across different presentations that I remembered the stock photo of two women looking at code and started trying to read what she had written. I think it was Javascript.
  • Blurring the line between presenter and sponsor. Most well-run tech conferences I've been to make it super clear when you are hearing from a sponsor vs when someone is giving an unbiased opinion. A lot of these not-tech tech conferences don't, where it sounds like a Google employee is endorsing a third-party solution who has also sponsored the event. For folks new to this environment, it's misleading. Is Google saying this is the only way they endorse doing x?
  • Keeping all the real content behind NDAs. Now during Next there were a lot of super useful meetings that happened, but I wasn't in them. I had to learn about them from people at the bar who had signed NDAs and were invited to learn actual information. If you aren't going to talk about roadmap or any technical details or improvements publicly, don't bother having the conference. Release a PDF with whatever new sales content you want me to read. The folks who are invited to the real meetings can still go to those. No judgement, you don't want to have those chats publicly, but don't pretend you might this year.

One last thing: if you are going to have a big conference with people meeting with your team, figure out some way you want them to communicate with that team. Maybe temporary email addresses or something? Most people won't use them, but it means a lot to people to think they have a way of having some line of communication with the company. If they get weird then just deactivate the temp email. It's weird to tell people "just come find me afterwards". Where?

What are big companies supposed to do?

I understand large companies are loath to share details unless forced to. I also understand that companies hate letting engineers speak directly to the end users, for fear that the people who make the sausage and the people who consume the sausage might learn something terrible about how it's made. But that is the cost of holding a tech conference about your products. You have to let these two groups of people interact with each other and ask questions.

Now obviously there are plenty of great conferences based on open-source technology or about more general themes. These tend to be really high quality and I've gone to a ton I love. However, as we all become more and more dependent on cloud providers, there is value in letting me know where these platforms are moving. I need to know what platforms like GCP are working on so I know which technologies in the stack are on the rise and which are on the decline.

Instead these conferences are for investors and the business community rather than anyone interested in the products. The point of Next was to show the community that Google is serious about AI. Just like the point of the last Google conference was to show investors that Google is serious about AI. I'm confident the next Google conference, on any topic, will also be asked to demonstrate a serious commitment to AI technology.

You can still have these. Call them something else. Call them "leadership conferences" or "vision conferences". Talk to Marketing and see what words you can slap in there that conveys "you are an important person we want to talk about our products with" that also tells me, a technical peon, that you don't want me there. I'll be overjoyed not to fly 11 hours and you'll be thrilled not to have me asking questions of your engineers. Everybody wins.


Terraform is dead; Long live Pulumi?

The best tools in tech scale. They're not always easy to learn, and they might take some time to get good with, but once you start to use them they stick with you forever. On the command line, things like gawk and sed jump to mind, tools that have saved me more than once. I've spent a decade now using Vim, and I work with people who started using Emacs in university and still use it for 5+ hours a day. You use them for basic problems all the time, but when you need that complexity and depth of options, they scale with your problem. In the cloud, when I think of tools like this, things like S3 and SQS come to mind: set-and-forget tooling that you can use from day 1 to day 1000.

Not every tool is like this. I've been using Terraform at least once a week for the last 5 years. I have led migrating two companies to Infrastructure as Code with Terraform from using the web UI of their cloud provider, writing easily tens of thousands of lines of HCL along the way. At first I loved Terraform, HCL felt easy to write, the providers from places like AWS and GCP are well maintained and there are tons of resources on the internet to get you out of any problem.

As the years went on, our relationship soured. Terraform has warts that, at this point, either aren't solvable or can't be solved without throwing away a lot of previous work. In no particular order, here are my big issues with Terraform:

  • It scales poorly. Terraform often starts with dev, stage and prod as different workspaces. However, since both terraform plan and terraform apply make API calls to your cloud provider for each resource, it doesn't take long for these commands to start taking a long time. You run plan a lot when working with Terraform, so this isn't a trivial thing.
  • Then you don't want to repeat yourself, so you start moving more complicated logic into Modules. At this point the environments are completely isolated state files with no mixing, if you try to cross accounts things get more complicated. The basic structure you quickly adopt looks like this.
  • At some point you need to have better DRY coverage, better environment handling, different backends for different environments and you need to work with multiple modules concurrently. Then you explore Terragrunt which is a great tool, but is now another tool on top of the first Infrastructure as code tool and it works with Terraform Cloud but it requires some tweaks to do so.
  • Now you and your team realize that Terraform can destroy the entire company if you make a mistake, so you start to subdivide different resources out into different states. Typically you'll have the "stateless resources" in one area and the "stateful" resources in another, but actually dividing stuff up into one or another isn't completely straightforward. Destroying an SQS queue is really bad, but is it stateful? Kubernetes nodes don't have state but they're not instantaneous to fix either.
  • HCL isn't a programming language. It's a fine alternative to YAML or JSON, but it lacks a lot of the tooling you want when dealing with more complex scenarios. You can do many of the normal things like conditionals, joins, trys, loops, for_each, but they're clunky and limited when compared to something like Golang or Python.
  • The tooling around HCL is pretty barebones. You get some syntax checking, but otherwise it's a lot of switching tmux panes to figure out why it worked one place and didn't work another place.
  • terraform validate and terraform plan don't mean the thing is going to work. You can write something, it'll pass both check stages and fail on apply. This can be really bad, as your team needs to basically wait for you to fix whatever you did so the infrastructure isn't left in an inconsistent or half-working state. This shouldn't happen in theory, but it's a common problem.
  • If an apply fails, it's not always possible to back out. This is especially scary when there are timeouts, when something is still happening inside of the providers stack but now Terraform has given up on knowing what state it was left in.
  • Versioning is bad. Typically whatever version of Terraform you started with is what you have until someone decides to try to upgrade and hope nothing breaks. tfenv becomes a mission critical tool. Provider version drift is common, again typically "whatever the latest version was when someone wrote this module".

License Change

All of this is annoying, but I've learned to grumble and live with it. Then HashiCorp decided to pull the panic lever of "open-source" companies: a big license change. Even though Terraform Cloud, their money-making product, was never open-source, they decided that the Terraform CLI needed to fall under the BSL. You can read it here. The specific clause people are getting upset about is below:

You may make production use of the Licensed Work,
provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp's products.

Now this clause, combined with the 4-year expiration date, effectively kills the Terraform ecosystem. Nobody is going to authorize internal teams to open-source any complementary tooling with the BSL in place, and there certainly isn't going to be any competitive pressure to improve Terraform. While it doesn't (at least as I, not a lawyer, read it) really impact most usage of Terraform as just a tool you run on your laptop, it does make the future of Terraform development directly tied to Terraform Cloud. This wouldn't be a problem except Terraform Cloud is bad.

Terraform Cloud

I've used it for a year; it's extremely bare-bones software. It picks the latest version of Terraform when you make the workspace and then that's it. It doesn't help you upgrade Terraform, it doesn't really do any checking or optimizations, structure suggestions or anything else you need as Terraform scales. It sorta integrates with Terragrunt, but not really. Basically it is identical to the CLI output of Terraform with some slight visual dressing. Then there's the kicker: the price.

$0.00014 per resource per hour. This is predatory pricing. First, because Terraform drops in value to zero if you can't put everything into Infrastructure as Code. HashiCorp knows this, hence the per-resource price. Second because they know it's impossible for me, the maintainer of the account, to police. What am I supposed to do, tell people "no you cannot have a custom IAM policy because we can't have people writing safe scoped roles"? Maybe I should start forcing subdomain sharing, make sure we don't get too spoiled with all these free hostnames. Finally it's especially grating because we're talking about sticking small collections of JSON onto object storage. There's no engineering per resource, no scaling concerns on HashiCorp's side and disk space is cheap to boot.

This combined with the license change is enough for me. I'm out. I'll deal with some grief to use your product, but at this point HashiCorp has overplayed the value of Terraform. It's a clunky tool that scales poorly and I need to do all the scaling and upgrade work myself with third-party tools, even if I pay you for your cloud product. The per-hour pricing is just the final nail in the coffin from HashiCorp.

I asked around for an alternative and someone recommended Pulumi. I'd never heard of them before, so I thought this could be a super fun opportunity to try them out.

Pulumi

Pulumi and Terraform are similar, except unlike Terraform with HCL, Pulumi has lots of scale built in. Why? Because you can use a real programming language to write your Infrastructure as Code. It's a clever concept, letting you scale up the complexity of your project from writing just YAML to writing Golang or Python.

Here is the basic outline of how Pulumi structures infrastructure.

You write programs inside of projects with Node.js, Python, Golang, .NET, Java or YAML. Programs define resources. You then run the programs inside of stacks, which are different environments. It's nice that Pulumi comes with the project structure defined, vs Terraform where you define it yourself. Every stack has its own state out of the box, which again is a built-in optimization.
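As an illustration, a minimal project using the YAML runtime might look like this (the names are mine, not from any real project); each stack you create on top of it gets its own isolated state:

```yaml
# Pulumi.yaml -- one project containing one program; stacks layer config/state on top
name: example-project
runtime: yaml
resources:
  siteBucket:
    type: aws:s3:Bucket   # a single resource declaration
outputs:
  bucketName: ${siteBucket.id}
```

`pulumi stack init dev` followed by `pulumi up` applies this against the dev stack's state, and `pulumi stack init prod` gives you a second, isolated environment from the same program.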

Installation was easy and they had all the expected install options. Going through the source code, I was impressed with the quality, but was concerned about the 1,718 open issues as of writing this. Clicking around, it does seem like they're actively working on them, and it has the normal percentage of the "not real issues, just people opening them as issues" problem. A lot of open issues with comments also suggests an engaged user base. The setup on my side was very easy and I opted not to use their cloud product, mostly because it has the same problem that Terraform Cloud has.

A Pulumi Credit is the price for managing one resource for one hour. If using the Team Edition, each credit costs $0.0005. For billing purposes, we count any resource that's declared in a Pulumi program. This includes provider resources (e.g., an Amazon S3 bucket), component resources, which are groupings of resources (e.g., an Amazon EKS cluster), and stacks which contain resources (e.g., dev, test, prod stacks).
You consume one Pulumi Credit to manage each resource for an hour. For example, one stack containing one S3 bucket and one EC2 instance is three resources that are counted in your bill. Example: If you manage 625 resources with Pulumi every month, you will use 450,000 Pulumi Credits each month. Your monthly bill would be $150 USD = (450,000 total credits - 150,000 free credits) * $0.0005.

My mouth was actually agape when I got to that monthly bill. I get 150k credits for "free" with Teams, which is around 200 resources a month. That is absolutely nothing. That's "my DNS records live in Infrastructure as Code". But paying per hour doesn't even unlock all the features! I'm limited on team size, I don't get SSO, I don't get support. Also, you are the smaller player; how do you charge more than HashiCorp? Disk space is real cheap and these files are very small. Charge me $99 a month per runner or per user or whatever you need to, but I don't want to ask the question "are we putting too much of our infrastructure into code?" It's either all in there or there's zero point, and this pricing works directly against that goal.
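The quoted example is easy to verify: Pulumi bills every resource for every hour it exists, so a 30-day month is 720 billable hours per resource.

```python
# Reproduce Pulumi's own Team Edition billing example (figures from the
# pricing quote above).
RESOURCES = 625
HOURS_PER_MONTH = 24 * 30          # Pulumi bills per resource-hour
CREDIT_PRICE = 0.0005              # USD per credit on the Team Edition
FREE_CREDITS = 150_000

credits = RESOURCES * HOURS_PER_MONTH            # 450,000 credits/month
bill = (credits - FREE_CREDITS) * CREDIT_PRICE   # $150/month
free_resources = FREE_CREDITS / HOURS_PER_MONTH  # ~208 resources fit in the free tier

print(credits, bill, round(free_resources))
```

That last number is where the "around 200 resources a month" figure comes from.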

Alright, so Pulumi Cloud is out. Maybe the Enterprise pricing is better, but that's not on the website so I can't make a decision based on it, and I can't mentally handle getting on another sales email list. Thankfully Pulumi now has state locking with S3 according to this, so this isn't a deal-breaker. Let's see what running it just locally looks like.

Pulumi Open-Source only

Thankfully they make that pretty easy. pulumi login --local means your state is stored locally, encrypted with a passphrase. To use S3, just switch that to pulumi login s3://. Managing state locally or using S3 isn't a new thing, but it's nice that switching between them is pretty easy. You can start local, grow to S3 and then migrate to their Cloud product as you need. Run pulumi new python for a new blank Python setup.

❯ pulumi new python
This command will walk you through creating a new Pulumi project.

Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.

project name: (test) test
project description: (A minimal Python Pulumi program)
Created project 'test'

stack name: (dev)
Created stack 'dev'
Enter your passphrase to protect config/secrets:
Re-enter your passphrase to confirm:

Installing dependencies...

Creating virtual environment...
Finished creating virtual environment
Updating pip, setuptools, and wheel in virtual environment...

I love that it does all the correct Python things. We have a venv, we've got a requirements.txt and we've got a simple configuration file. Working with it was delightful. Setting my Hetzner API key as a secret was easy and straightforward with: pulumi config set hcloud:token XXXXXXXXXXXXXX --secret. So what does working with it look like? Let's look at an error.

❯ pulumi preview
Enter your passphrase to unlock config/secrets
    (set PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember):
Previewing update (dev):
     Type                 Name               Plan     Info
     pulumi:pulumi:Stack  matduggan.com-dev           1 error


Diagnostics:
  pulumi:pulumi:Stack (matduggan.com-dev):
    error: Program failed with an unhandled exception:
    Traceback (most recent call last):
      File "/opt/homebrew/bin/pulumi-language-python-exec", line 197, in <module>
        loop.run_until_complete(coro)
      File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
        return future.result()
               ^^^^^^^^^^^^^^^
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 137, in run_in_stack
        await run_pulumi_func(lambda: Stack(func))
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 49, in run_pulumi_func
        func()
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 137, in <lambda>
        await run_pulumi_func(lambda: Stack(func))
                                      ^^^^^^^^^^^
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 160, in __init__
        func()
      File "/opt/homebrew/bin/pulumi-language-python-exec", line 165, in run
        return runpy.run_path(args.PROGRAM, run_name='__main__')
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "<frozen runpy>", line 304, in run_path
      File "<frozen runpy>", line 240, in _get_main_module_details
      File "<frozen runpy>", line 159, in _get_module_details
      File "<frozen importlib._bootstrap_external>", line 1074, in get_code
      File "<frozen importlib._bootstrap_external>", line 1004, in source_to_code
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/Users/mathew.duggan/Documents/work/pulumi/__main__.py", line 14
        )], user_data="""
                      ^
    SyntaxError: unterminated triple-quoted string literal (detected at line 17)

We get all the super clear output of a Python error message, we still get the secrets encryption and we get all the options of Python when writing the file. However things get a little unusual when I go to inspect the state files.

Local State Files

For some reason when I select local, Pulumi doesn't store the state files in the directory where I'm working. Instead it stores them as a user preference at ~/.pulumi, which is odd. I understand I selected local, but it's weird to assume I don't want to store the state in git or something. It is also storing a lot of things in my user directory: 358 directories, 848 files. Every template is its own directory.

How can you set it up to work correctly?

rm -rf ~/.pulumi
mkdir test && cd test
mkdir pulumi
pulumi login file://pulumi/
pulumi new --force python
cd ~/.pulumi
336 directories, 815 files

If you go back into the directory and go to /test/pulumi/.pulumi, you do see the state files. The --force flag is required to let it create the new project inside a directory with stuff already in it. It all ends up working, but it's clunky.

Maybe I'm alone on this, but I feel like this is unnecessarily complicated. If I'm going to work locally, the assumption should be that I'm going to sit this inside of a repo, or at the very least that the directory is a self-contained thing. Also, don't put stuff at $HOME/.pulumi. I understand nobody follows the ~/.config rule, but the right places to put it are the directory where I make the project or ~/.config.

S3-compatible State

Since this is the more common workflow, let me talk a bit about the S3 remote backend. I tried to do a lot of testing to cover as many use cases as possible. The lockfile works and is per stack, so you do have that basic level of functionality. Stacks cannot reference each other's outputs unless they are in the same bucket as far as I can tell, so you would need to plan for one bucket. Sharing stack names across multiple projects works, so you don't need to worry that every project has a dev, stage and prod. State encryption is your problem, but that's pretty easy to deal with in modern object storage.

The login process is basically pulumi login 's3://?region=us-east-1&awssdk=v2&profile=' and for GCP pulumi login gs://. You can see all the custom backend setup docs here. I also moved between custom backends, going from local to s3 and from s3 to GCP. It all functioned like I would expect, which was nice.
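For reference, the S3 backend target is just a bucket plus query parameters. A tiny hypothetical helper makes the format explicit (the parameter names are the ones shown above; the bucket name is made up):

```python
from urllib.parse import urlencode

def s3_backend_url(bucket: str, region: str, profile: str = "") -> str:
    """Build a `pulumi login` target for an S3-compatible backend.

    The query parameters mirror the ones Pulumi documents: `region`,
    `awssdk` (which SDK version to use) and `profile` (AWS credentials
    profile). An empty profile falls back to the default credential chain.
    """
    return f"s3://{bucket}?" + urlencode(
        {"region": region, "awssdk": "v2", "profile": profile}
    )

print(s3_backend_url("my-state-bucket", "us-east-1"))
# s3://my-state-bucket?region=us-east-1&awssdk=v2&profile=
```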

Otherwise nothing exciting to report. In my testing it worked as well as local, and trying to break it with a few folks working on the same repo didn't reveal any obvious problems. It seems as reliable as Terraform in S3, which is to say not perfect but pretty good.

Daily use

Once Pulumi was set up to use object storage, I tried to use it to manage a non-production project in Google Cloud along with someone else who agreed to work with me on it. I figured with at least two people doing the work, the experience would be more realistic.

Compared to working with Terraform, I felt like Pulumi was easier to use. Having all of the options and autocomplete of an IDE available when I wanted them really sped things up, and edge cases that previously would have required a lot of very sensitive HCL were simple to handle with Python. I also liked being able to write tests for infrastructure code, which made things like database operations feel less dangerous. In Terraform the only safety check is whoever is looking at the output, so having another level of checking before potentially destroying resources was nice.

While Pulumi does provide more opinions on how to structure things, even with two of us there were quickly some disagreements on the right way to do it. I prefer a more monolithic design and my peer prefers smaller stacks, which you can do, but I find chaining together the stack outputs to be more work than it's worth. I found the micro-service style in Pulumi to be a bit grating and easy to break, while the monolithic style was much easier for me to work in.

Setting up a CI/CD pipeline wasn't too challenging, basing everything off of this image. All the CI/CD docs on their website presuppose you are using the Cloud product, which again makes sense and I would be glad to do if they changed the pricing. Rolling your own isn't hard and it works as expected, but I want to point out one sticking point I ran into that isn't really Pulumi's fault so much as it is "the complexity of adding in secrets support".

Pulumi Secrets

So Pulumi integrates with a lot of secret managers, which is great. It also has its own secret manager which works fine. The key things to keep in mind are: if you are adding a secret, make sure you flag it as a secret to keep it from getting printed on the output. If you are going to use an external secrets manager, set aside some time to get that working. It took a bit of work to get the permissions such that CI/CD and everything else worked as expected, especially with the micro-service design where one program relied on the output of another program. You can read the docs here.

Unexpected Benefits

Here are some delightful (maybe obvious) things I ran into while working with Pulumi.

  • We already have experts in these languages. It was great to be able to ask someone with years of Python development experience "what is the best way to structure large Python projects". There is so much expertise and documentation out there vs the wasteland that is Terraform project architecture.
  • Being able to use a database. Holy crap, this was a real game-changer to me. I pulled down the GCP IAM stock roles, stuck them in SQLite and then was able to query them depending on the set of permissions the service account or user group required. Very small thing, but a massive time-saver vs me going to the website and searching around. It also lets me automate the entire process of Ticket -> PR for IAM role.
This is what I'm talking about.
  • You can set up easy APIs. Making a website that generates HCL to stick into a repo and then make a PR? Nightmare. Writing a simple Flask app that runs Pulumi against your infrastructure with scoped permissions? Not bad at all. If your org does something like "add a lot of DNS records" or "add a lot of SSH keys", this really has the potential to change your workday. It's also easy to set up an abstraction for your entire infrastructure. Pulumi has docs on how to get started with all of this here. Slack bots, simple command-line tools, all of it was easy to do.
  • Tests. It's nice to be able to treat infrastructure like it's important.
  • Getting better at a real job skill. Every hour I get more skilled in writing Golang, I'm more valuable to my organization. I'm also just getting more hours writing code in an actual programming language, which is always good. Every hour I invest in HCL is an hour I invested in something that no other tool will ever use.
  • Speed seemed better than Terraform. I don't know why that would be, but it did feel like the results came back much faster, especially on successive previews. This was true in our CI/CD jobs as well; timing them against Terraform, it seemed like Pulumi was faster most of the time. Take this with a pile of salt: I didn't do a real benchmark, and ultimately we're hitting the same APIs, so I doubt there's a giant performance difference.
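The database bullet above is easy to sketch. This is a toy version, assuming a simple table of (role, permission) pairs; the role and permission names here are illustrative, not a dump of the real GCP catalog:

```python
import sqlite3

# A few (role, permission) pairs standing in for the downloaded IAM data.
roles = [
    ("roles/storage.objectViewer", "storage.objects.get"),
    ("roles/storage.objectViewer", "storage.objects.list"),
    ("roles/storage.objectAdmin", "storage.objects.get"),
    ("roles/storage.objectAdmin", "storage.objects.delete"),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE role_permissions (role TEXT, permission TEXT)")
con.executemany("INSERT INTO role_permissions VALUES (?, ?)", roles)

# Which stock roles grant the permission a service account needs?
rows = con.execute(
    "SELECT DISTINCT role FROM role_permissions WHERE permission = ?",
    ("storage.objects.get",),
).fetchall()
print([r[0] for r in rows])
```

Swap the in-memory list for the real role data and you can answer "what is the narrowest role that grants exactly these permissions" with a query instead of clicking around documentation.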

Conclusion

Do I think Pulumi can take over the Terraform throne? There's a lot to like here. The product is one of those great ideas, a natural evolution from where we started in DevOps to where we want to go. Moving towards treating infrastructure like everything else is the next logical leap, and they have already done a lot of the groundwork. I want Pulumi to succeed; I like it as a product.

However, it needs to get out of its own way. The pricing needs a rethink: make it a no-brainer for me to use your cloud product and get fully integrated into it. If you give me a reliable, consistent bill I can present to leadership, I don't have to worry about Pulumi as a service I need to police. The entire organization can be let loose to write whatever infra they need, which benefits both us and Pulumi, as we'll be more dependent on their internal tooling.

If cost management is a big issue, have me bring my own object storage and VMs for runners. Pulumi can still thrive and be very successful without being a zero-setup business. This is a tool for people who maintain large infrastructures. We can handle some infrastructure requirements if that is the sticking point.

Hopefully the folks running Pulumi see this moment as the opportunity it is, both for the field at large to move past markup languages and for them to make a grab for a large share of the market.

If there is interest I can do more write-ups on sample Flask apps or Slack bots or whatever. Also if I made a mistake or you think something needs clarification, feel free to reach out to me here: https://c.im/@matdevdug.


Adventures in IPv6 Part 2

As I discussed in Part 1, I've converted this site over to pure IPv6. Well, at least as pure as I could get away with. I still have some problems though, chief among them that I cannot send emails from the Ghost CMS. I've switched from Mailgun to Scaleway, which does have IPv6 for their SMTP service.

smtp.tem.scw.cloud has IPv6 address 2001:bc8:1201:21:d6ae:52ff:fed0:418e
smtp.tem.scw.cloud has IPv6 address 2001:bc8:1201:21:d6ae:52ff:fed0:6aac

I've also confirmed that my docker-compose stack running Ghost can successfully reach IPv6 external addresses with no issues.

matdevdug-busy-1      | PING google.com (2a00:1450:4002:411::200e): 56 data bytes
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=0 ttl=113 time=15.079 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=1 ttl=113 time=14.607 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=2 ttl=113 time=14.540 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=3 ttl=113 time=14.593 ms
matdevdug-busy-1      |
matdevdug-busy-1      |
matdevdug-busy-1      | --- google.com ping statistics ---
matdevdug-busy-1      | 4 packets transmitted, 4 packets received, 0% packet loss
matdevdug-busy-1      | round-trip min/avg/max = 14.540/14.704/15.079 ms

I've also confirmed that Scaleway is reachable by the container via the domain name with no problem, so it isn't a DNS problem.

PING smtp.tem.scw.cloud(ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac)) 56 data bytes
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=1 ttl=53 time=23.1 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=2 ttl=53 time=22.2 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=3 ttl=53 time=22.2 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=4 ttl=53 time=22.1 ms

--- smtp.tem.scw.cloud ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 22.086/22.397/23.063/0.388 ms

At this point I have three theories.

  1. It's an SMTP problem. Possible, but unlikely given how long SMTP has supported IPv6. A quick check, running it over bash by following the instructions here, shows that it works fine.
  2. Something is blocking the port.
telnet smtp.tem.scw.cloud 587
Trying 2001:bc8:1201:21:d6ae:52ff:fed0:6aac...
Connected to smtp.tem.scw.cloud.
Escape character is '^]'.
220 smtp.scw-tem.cloud ESMTP Service Ready

Alright it's not that.

3. Nodemailer is being stupid. It looks like Ghost relies on Nodemailer, so let's check it out. Let's install Node and NPM on my Debian junk machine.

sudo apt install npm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  eslint gyp handlebars libjs-async libjs-events libjs-inherits libjs-is-typedarray libjs-prettify libjs-regenerate libjs-source-map
  libjs-sprintf-js libjs-typedarray-to-buffer libjs-util libnode-dev libssl-dev libuv1-dev node-abbrev node-agent-base node-ajv node-ajv-keywords
  node-ampproject-remapping node-ansi-escapes node-ansi-regex node-ansi-styles node-anymatch node-aproba node-archy node-are-we-there-yet
  node-argparse node-arrify node-assert node-async node-async-each node-babel-helper-define-polyfill-provider node-babel-plugin-add-module-exports
  node-babel-plugin-lodash node-babel-plugin-polyfill-corejs2 node-babel-plugin-polyfill-corejs3 node-babel-plugin-polyfill-regenerator node-babel7
  node-babel7-runtime node-balanced-match node-base64-js node-binary-extensions node-brace-expansion node-braces node-browserslist node-builtins
  node-cacache node-camelcase node-caniuse-lite node-chalk node-chokidar node-chownr node-chrome-trace-event node-ci-info node-cli-table node-cliui
  node-clone node-clone-deep node-color-convert node-color-name node-colors node-columnify node-commander node-commondir node-concat-stream
  node-console-control-strings node-convert-source-map node-copy-concurrently node-core-js node-core-js-compat node-core-js-pure node-core-util-is
  node-css-loader node-css-selector-tokenizer node-data-uri-to-buffer node-debbundle-es-to-primitive node-debug node-decamelize
  node-decompress-response node-deep-equal node-deep-is node-defaults node-define-properties node-defined node-del node-delegates node-depd
  node-diff node-doctrine node-electron-to-chromium node-encoding node-end-of-stream node-enhanced-resolve node-err-code node-errno node-error-ex
  node-es-abstract node-es-module-lexer node-es6-error node-escape-string-regexp node-escodegen node-eslint-scope node-eslint-utils
  node-eslint-visitor-keys node-espree node-esprima node-esquery node-esrecurse node-estraverse node-esutils node-events node-fancy-log
  node-fast-deep-equal node-fast-levenshtein node-fetch node-file-entry-cache node-fill-range node-find-cache-dir node-find-up node-flat-cache
  node-flatted node-for-in node-for-own node-foreground-child node-fs-readdir-recursive node-fs-write-stream-atomic node-fs.realpath
  node-function-bind node-functional-red-black-tree node-gauge node-get-caller-file node-get-stream node-glob node-glob-parent node-globals
  node-globby node-got node-graceful-fs node-gyp node-has-flag node-has-unicode node-hosted-git-info node-https-proxy-agent node-iconv-lite
  node-icss-utils node-ieee754 node-iferr node-ignore node-imurmurhash node-indent-string node-inflight node-inherits node-ini node-interpret
  node-ip node-ip-regex node-is-arrayish node-is-binary-path node-is-buffer node-is-extendable node-is-extglob node-is-glob node-is-number
  node-is-path-cwd node-is-path-inside node-is-plain-obj node-is-plain-object node-is-stream node-is-typedarray node-is-windows node-isarray
  node-isexe node-isobject node-istanbul node-jest-debbundle node-jest-worker node-js-tokens node-js-yaml node-jsesc node-json-buffer
  node-json-parse-better-errors node-json-schema node-json-schema-traverse node-json-stable-stringify node-json5 node-jsonify node-jsonparse
  node-kind-of node-levn node-loader-runner node-locate-path node-lodash node-lodash-packages node-lowercase-keys node-lru-cache node-make-dir
  node-memfs node-memory-fs node-merge-stream node-micromatch node-mime node-mime-types node-mimic-response node-minimatch node-minimist
  node-minipass node-mkdirp node-move-concurrently node-ms node-mute-stream node-n3 node-negotiator node-neo-async node-nopt
  node-normalize-package-data node-normalize-path node-npm-bundled node-npm-package-arg node-npm-run-path node-npmlog node-object-assign
  node-object-inspect node-once node-optimist node-optionator node-osenv node-p-cancelable node-p-limit node-p-locate node-p-map node-parse-json
  node-path-dirname node-path-exists node-path-is-absolute node-path-is-inside node-path-type node-picocolors node-pify node-pkg-dir node-postcss
  node-postcss-modules-extract-imports node-postcss-modules-values node-postcss-value-parser node-prelude-ls node-process-nextick-args node-progress
  node-promise-inflight node-promise-retry node-promzard node-prr node-pump node-punycode node-quick-lru node-randombytes node-read
  node-read-package-json node-read-pkg node-readable-stream node-readdirp node-rechoir node-regenerate node-regenerate-unicode-properties
  node-regenerator-runtime node-regenerator-transform node-regexpp node-regexpu-core node-regjsgen node-regjsparser node-repeat-string
  node-require-directory node-resolve node-resolve-cwd node-resolve-from node-resumer node-retry node-rimraf node-run-queue node-safe-buffer
  node-schema-utils node-semver node-serialize-javascript node-set-blocking node-set-immediate-shim node-shebang-command node-shebang-regex
  node-signal-exit node-slash node-slice-ansi node-source-list-map node-source-map node-source-map-support node-spdx-correct node-spdx-exceptions
  node-spdx-expression-parse node-spdx-license-ids node-sprintf-js node-ssri node-string-decoder node-string-width node-strip-ansi node-strip-bom
  node-strip-json-comments node-supports-color node-tapable node-tape node-tar node-terser node-text-table node-through node-time-stamp
  node-to-fast-properties node-to-regex-range node-tslib node-type-check node-typedarray node-typedarray-to-buffer
  node-unicode-canonical-property-names-ecmascript node-unicode-match-property-ecmascript node-unicode-match-property-value-ecmascript
  node-unicode-property-aliases-ecmascript node-unique-filename node-uri-js node-util node-util-deprecate node-uuid node-v8-compile-cache
  node-v8flags node-validate-npm-package-license node-validate-npm-package-name node-watchpack node-wcwidth.js node-webassemblyjs
  node-webpack-sources node-which node-wide-align node-wordwrap node-wrap-ansi node-wrappy node-write node-write-file-atomic node-y18n node-yallist
  node-yargs node-yargs-parser terser webpack
Suggested packages:
  node-babel-eslint node-esprima-fb node-inquirer libjs-angularjs libssl-doc node-babel-plugin-polyfill-es-shims node-babel7-debug javascript-common
  livescript chai node-jest-diff node-opener
Recommended packages:
  javascript-common build-essential node-tap
The following NEW packages will be installed:
  eslint gyp handlebars libjs-async libjs-events libjs-inherits libjs-is-typedarray libjs-prettify libjs-regenerate libjs-source-map
  libjs-sprintf-js libjs-typedarray-to-buffer libjs-util libnode-dev libssl-dev libuv1-dev node-abbrev node-agent-base node-ajv node-ajv-keywords
  node-ampproject-remapping node-ansi-escapes node-ansi-regex node-ansi-styles node-anymatch node-aproba node-archy node-are-we-there-yet
  node-argparse node-arrify node-assert node-async node-async-each node-babel-helper-define-polyfill-provider node-babel-plugin-add-module-exports
  node-babel-plugin-lodash node-babel-plugin-polyfill-corejs2 node-babel-plugin-polyfill-corejs3 node-babel-plugin-polyfill-regenerator node-babel7
  node-babel7-runtime node-balanced-match node-base64-js node-binary-extensions node-brace-expansion node-braces node-browserslist node-builtins
  node-cacache node-camelcase node-caniuse-lite node-chalk node-chokidar node-chownr node-chrome-trace-event node-ci-info node-cli-table node-cliui
  node-clone node-clone-deep node-color-convert node-color-name node-colors node-columnify node-commander node-commondir node-concat-stream
  node-console-control-strings node-convert-source-map node-copy-concurrently node-core-js node-core-js-compat node-core-js-pure node-core-util-is
  node-css-loader node-css-selector-tokenizer node-data-uri-to-buffer node-debbundle-es-to-primitive node-debug node-decamelize
  node-decompress-response node-deep-equal node-deep-is node-defaults node-define-properties node-defined node-del node-delegates node-depd
  node-diff node-doctrine node-electron-to-chromium node-encoding node-end-of-stream node-enhanced-resolve node-err-code node-errno node-error-ex
  node-es-abstract node-es-module-lexer node-es6-error node-escape-string-regexp node-escodegen node-eslint-scope node-eslint-utils
  node-eslint-visitor-keys node-espree node-esprima node-esquery node-esrecurse node-estraverse node-esutils node-events node-fancy-log
  node-fast-deep-equal node-fast-levenshtein node-fetch node-file-entry-cache node-fill-range node-find-cache-dir node-find-up node-flat-cache
  node-flatted node-for-in node-for-own node-foreground-child node-fs-readdir-recursive node-fs-write-stream-atomic node-fs.realpath
  node-function-bind node-functional-red-black-tree node-gauge node-get-caller-file node-get-stream node-glob node-glob-parent node-globals
  node-globby node-got node-graceful-fs node-gyp node-has-flag node-has-unicode node-hosted-git-info node-https-proxy-agent node-iconv-lite
  node-icss-utils node-ieee754 node-iferr node-ignore node-imurmurhash node-indent-string node-inflight node-inherits node-ini node-interpret
  node-ip node-ip-regex node-is-arrayish node-is-binary-path node-is-buffer node-is-extendable node-is-extglob node-is-glob node-is-number
  node-is-path-cwd node-is-path-inside node-is-plain-obj node-is-plain-object node-is-stream node-is-typedarray node-is-windows node-isarray
  node-isexe node-isobject node-istanbul node-jest-debbundle node-jest-worker node-js-tokens node-js-yaml node-jsesc node-json-buffer
  node-json-parse-better-errors node-json-schema node-json-schema-traverse node-json-stable-stringify node-json5 node-jsonify node-jsonparse
  node-kind-of node-levn node-loader-runner node-locate-path node-lodash node-lodash-packages node-lowercase-keys node-lru-cache node-make-dir
  node-memfs node-memory-fs node-merge-stream node-micromatch node-mime node-mime-types node-mimic-response node-minimatch node-minimist
  node-minipass node-mkdirp node-move-concurrently node-ms node-mute-stream node-n3 node-negotiator node-neo-async node-nopt
  node-normalize-package-data node-normalize-path node-npm-bundled node-npm-package-arg node-npm-run-path node-npmlog node-object-assign
  node-object-inspect node-once node-optimist node-optionator node-osenv node-p-cancelable node-p-limit node-p-locate node-p-map node-parse-json
  node-path-dirname node-path-exists node-path-is-absolute node-path-is-inside node-path-type node-picocolors node-pify node-pkg-dir node-postcss
  node-postcss-modules-extract-imports node-postcss-modules-values node-postcss-value-parser node-prelude-ls node-process-nextick-args node-progress
  node-promise-inflight node-promise-retry node-promzard node-prr node-pump node-punycode node-quick-lru node-randombytes node-read
  node-read-package-json node-read-pkg node-readable-stream node-readdirp node-rechoir node-regenerate node-regenerate-unicode-properties
  node-regenerator-runtime node-regenerator-transform node-regexpp node-regexpu-core node-regjsgen node-regjsparser node-repeat-string
  node-require-directory node-resolve node-resolve-cwd node-resolve-from node-resumer node-retry node-rimraf node-run-queue node-safe-buffer
  node-schema-utils node-semver node-serialize-javascript node-set-blocking node-set-immediate-shim node-shebang-command node-shebang-regex
  node-signal-exit node-slash node-slice-ansi node-source-list-map node-source-map node-source-map-support node-spdx-correct node-spdx-exceptions
  node-spdx-expression-parse node-spdx-license-ids node-sprintf-js node-ssri node-string-decoder node-string-width node-strip-ansi node-strip-bom
  node-strip-json-comments node-supports-color node-tapable node-tape node-tar node-terser node-text-table node-through node-time-stamp
  node-to-fast-properties node-to-regex-range node-tslib node-type-check node-typedarray node-typedarray-to-buffer
  node-unicode-canonical-property-names-ecmascript node-unicode-match-property-ecmascript node-unicode-match-property-value-ecmascript
  node-unicode-property-aliases-ecmascript node-unique-filename node-uri-js node-util node-util-deprecate node-uuid node-v8-compile-cache
  node-v8flags node-validate-npm-package-license node-validate-npm-package-name node-watchpack node-wcwidth.js node-webassemblyjs
  node-webpack-sources node-which node-wide-align node-wordwrap node-wrap-ansi node-wrappy node-write node-write-file-atomic node-y18n node-yallist
  node-yargs node-yargs-parser npm terser webpack
0 upgraded, 349 newly installed, 0 to remove and 1 not upgraded.
Need to get 13.8 MB of archives.
After this operation, 106 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Jesus Christ NPM, what is happening

Now that I have that nightmare factory installed.

"use strict";
const nodemailer = require("nodemailer");

const transporter = nodemailer.createTransport({
  host: "smtp.tem.scw.cloud",
  port: 587,
  // Just so I don't need to worry about it
  secure: false,
  auth: {
    // TODO: replace `user` and `pass` values from <https://forwardemail.net>
    user: 'scaleway-user-name',
    pass: 'scaleway-password'
  }
});

// async..await is not allowed in global scope, must use a wrapper
async function main() {
  // send mail with defined transport object
  const info = await transporter.sendMail({
    from: '"Dead People 👻" <[email protected]>', // sender address
    to: "[email protected]", // list of receivers
    subject: "Hello", // Subject line
    text: "Hello world", // plain text body
    html: "<b>Hello world?</b>", // html body
  });

  console.log("Message sent: %s", info.messageId);
}

main().catch(console.error);

It looks like Nodemailer doesn't understand this is an IPv6 box.

node example.js
Error: connect ENETUNREACH 51.159.99.81:587 - Local (0.0.0.0:0)
    at internalConnect (node:net:1060:16)
    at defaultTriggerAsyncIdScope (node:internal/async_hooks:464:18)
    at node:net:1244:9
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
  errno: -101,
  code: 'ESOCKET',
  syscall: 'connect',
  address: '51.159.99.81',
  port: 587,
  command: 'CONN'
}

It looks like this should have been fixed here: https://github.com/nodemailer/nodemailer/pull/1311 but clearly isn't. What happens if I just manually set the IPv6 address?

Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: IP: 2001:bc8:1201:21:d6ae:52ff:fed0:6aac is not in the cert's list:

However, if you set the host to the IP address and pass the DNS name as the TLS servername, everything seems to work great.

"use strict";
const nodemailer = require("nodemailer");

const transporter = nodemailer.createTransport({
  host: "2001:bc8:1201:21:d6ae:52ff:fed0:6aac",
  port: 587,
  secure: false,
  tls: {
    rejectUnauthorized: true,
    servername: "smtp.tem.scw.cloud"},
  auth: {
    user: 'scaleway-username',
    pass: 'scaleway-password'
  }
});

// async..await is not allowed in global scope, must use a wrapper
async function main() {
  // send mail with defined transport object
  const info = await transporter.sendMail({
    from: '"Test" <[email protected]>', // sender address
    to: "[email protected]", // list of receivers
    subject: "Hello ✔", // Subject line
    text: "Hello world?", // plain text body
    html: "<b>Hello world?</b>", // html body
  });

  console.log("Message sent: %s", info.messageId);
}

main().catch(console.error);

Alright well issue submitted here: https://github.com/TryGhost/Ghost/issues/17627

It is a little alarming that the biggest Node email package doesn't work with IPv6 and seemingly only one person noticed and tried to fix it. Well whatever, we have a workaround.

Python

Alright let's try to fix the pip problems I was seeing before in various scripts.

pip3 install requests
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Right I forgot Python was doing this now. Fine, I'll use venv, not a problem. I guess first I compile a version of Python if I want the latest? I don't see any newer ARM packages out there. Alright, compiling Python.

sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev wget libbz2-dev

wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz

tar -xzf Python-3.11.4.tgz

cd Python-3.11.4/

./configure --enable-optimizations

sudo make -j 2

sudo make altinstall

Alright, now pip works great on the latest version inside of a venv. My scripts all seem to work fine and there appear to be no issues. Whatever problem there was before is resolved. A specific shoutout to requests, where I'm doing some strange things with network traffic and it has no problems.

Conclusion

So the amount of work to get a pretty simple blog up was nontrivial, but we're here now. I have a patch for Ghost that I can apply to the container, Python seems to be working great now and Docker seems to work as long as I use a user-created network with IPv6 strictly defined. The Docker default bridge also works if you specify the links inside of the docker-compose file, but that seems to be deprecated so let's not waste too much time on that. For those looking for instructions on the Docker part I just followed the guide outlined here.

Now that everything is up and running it seems fine, but again if you are thinking of running an IPv6 only server infrastructure, set aside a lot of time for problem solving. Even simple applications like this require a lot of research to get up and running successfully with outbound network functioning and everything linked up in the correct way.


IPv6 Is A Disaster (but we can fix it)

IP addresses have been in the news a lot lately and not for good reasons. AWS has announced they are charging $.005 per IPv4 address per hour, joining other cloud providers in charging for the luxury of a public IPv4 address. GCP charges $.004, same with Azure and Hetzner charges €0.001/h. Clearly the era of cloud providers going out and purchasing more IPv4 space is coming to an end. As time goes on, the addresses are just more valuable and it makes less sense to give them out for free.

So the writing is on the wall. We need to switch to IPv6. Now I was first told that we were going to need to switch to IPv6 when I was in high school in my first Cisco class and I'm 36 now, to give you some perspective on how long this has been "coming down the pipe". Up to this point I haven't done much at all with IPv6, there has been almost no market demand for those skills and I've never had a job where anybody seemed all that interested in doing it. So I skipped learning about it, which is a shame because it's actually a great advancement in networking.

Now is the second best time to learn though, so I decided to migrate this blog to IPv6 only. We'll stick it behind a CDN to handle the IPv4 traffic, but let's join the cool kids club. What I found was horrifying: almost nothing works out of the box. Major dependencies cease functioning right away and the workarounds cannot be described as production ready. The migration process for teams to IPv6 is going to be very rocky, mostly because almost nobody has done the work. We all skipped it for years and now we'll need to pay the price.

Why is IPv6 worth the work?

I'm not gonna do a whole explainer on IPv4 vs IPv6. There are plenty of great articles on the internet about that. Let's just quickly recap though: why would anyone want to make the jump to IPv6?

An IPv6 packet header
  • Address space (obviously)
  • Smaller number of header fields (8 vs 13 on v4)
  • Faster processing: No more checksum, so routers don't have to do a recalculation for every packet.
  • Faster routing: More summary routes and hierarchical routes. (Don't know what that is? No stress. Summary route = combining multiple IPs so you don't need all the addresses, just the general direction based on the first part of the address. Ditto with routes, since IPv6 is globally unique you can have small and efficient backbone routing.)
  • QoS: Traffic Class and Flow Label fields make QoS easier.
  • Auto-addressing. This allows IPv6 hosts on a LAN to connect without a router or DHCP server.
  • You can add IPsec to IPv6 with the Authentication Header and Encapsulating Security Payload.

Finally the biggest one: because IPv6 addresses are free and IPv4 ones are not.

Setting up an IPv6-Only Server

The actual setup process was simple. I provisioned a Debian box and selected "IPv6". Then I got my first surprise: my box didn't get an IPv6 address. I was given a /64, which is 18,446,744,073,709,551,616 addresses. It is good to know that my small ARM server could scale to run all the network infrastructure for every company I've ever worked for on all public addresses.

Now this sounds wasteful but when you look at how IPv6 works, it really isn't. Since IPv6 is much less "chatty" than IPv4, even if I had 10,000 hosts on this network it doesn't matter. As discussed here it actually makes sense to keep all the IPv6 space, even if at first it comes across as insanely wasteful. So just don't think about how many addresses are getting sent to each device.

Important: resist the urge to optimize address utilization. Talking to more experienced networking folks, this seems to be a common trap people fall into. We've all spent so much time worrying about how much space we have remaining in an IPv4 block and designing around that problem. That issue doesn't exist anymore. A /64 prefix is the smallest you should configure on an interface.

Attempting to use a longer prefix than /64, which is something I've heard people try, like a /68 or a /96, can break stateless address auto-configuration. Your mentality should be a /48 per site. That's what the Regional Internet Registries hand out when allocating IPv6. When thinking about network organization, you need to think about the nibble boundary. (I know, it sounds like I'm making shit up now.) It's basically a way to make IPv6 easier to read: split prefixes every 4 bits, so each subnet boundary lines up with a whole hex digit in the address.

Let's say you have 2402:9400:10::/48. You would divide it up as follows if you wanted a /64 for each subnet as a flat network.

Subnet #   Subnet Address
0          2402:9400:10::/64
1          2402:9400:10:1::/64
2          2402:9400:10:2::/64
3          2402:9400:10:3::/64
4          2402:9400:10:4::/64
5          2402:9400:10:5::/64

A /52 works in a similar way.

Subnet #   Subnet Address
0          2402:9400:10::/52
1          2402:9400:10:1000::/52
2          2402:9400:10:2000::/52
3          2402:9400:10:3000::/52
4          2402:9400:10:4000::/52
5          2402:9400:10:5000::/52

You can still at a glance know which subnet you are looking at.
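If you want to sanity-check a carve-up like this, Python's standard ipaddress module can generate the same tables. A quick sketch using the example /48 from above:

```python
import ipaddress

# The example /48 site prefix from above
site = ipaddress.ip_network("2402:9400:10::/48")

# First few /64 subnets, matching the first table
for i, subnet in enumerate(site.subnets(new_prefix=64)):
    if i > 5:
        break
    print(i, subnet)

# Carving on a /52 boundary instead, matching the second table
for i, subnet in enumerate(site.subnets(new_prefix=52)):
    if i > 5:
        break
    print(i, subnet)
```

Because /52 and /64 both land on nibble boundaries, every subnet number shows up as a readable hex digit in the printed addresses.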

Alright I've got my box ready to go. Let's try to set it up like a normal server.

Problem 1 - I can't SSH in

This was a predictable problem. Neither my work nor my home ISP supports IPv6. So it's great that I have this box set up, but now I can't really do anything with it. Fine, I attach an IPv4 address for now, SSH in and I'll set up cloudflared to run a tunnel. Presumably they'll handle the conversion on their side.

Except that isn't how Cloudflare rolls. Imagine my surprise when the tunnel collapses when I remove the IPv4 address. By default the cloudflared utility assumes IPv4 and you need to go in and edit the systemd service file to add: --edge-ip-version 6. After this, the tunnel is up and I'm able to SSH in.
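For anyone hitting the same wall, the change can live in a systemd drop-in rather than editing the unit file in place. A sketch, assuming cloudflared was installed as a service (the exact ExecStart line, binary path and tunnel name will vary per install):

```ini
# /etc/systemd/system/cloudflared.service.d/ipv6.conf (path and tunnel name illustrative)
[Service]
ExecStart=
ExecStart=/usr/bin/cloudflared --edge-ip-version 6 --no-autoupdate tunnel run my-tunnel
```

Then `systemctl daemon-reload` and `systemctl restart cloudflared` to pick it up.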

Problem 2 - I can't use GitHub

Alright so I'm on the box. Now it's time to start setting up stuff. I run my server setup script and it immediately fails. It's trying to access the installation script for hishtory, a great shell history utility I use on all my personal stuff. It's trying to pull the install file from GitHub and failing. "Certainly that can't be right. GitHub must support IPv6?"

Nope. Alright fine, seems REALLY bad that the service the entire internet uses to release software doesn't work with IPv6, but you know Microsoft is broke and also only cares about fake AI now, so whatever. I ended up using the TransIP Github Proxy which worked fine. Now I have access to Github. But then Python fails with urllib.error.URLError: <urlopen error [Errno 101] Network is unreachable>. Alright I give up on this. My guess is the version of Python 3 in Debian doesn't like IPv6, but I'm not in the mood to troubleshoot it right now.

Problem 3 - Can't set up Datadog

Let's do something more basic. Certainly I can set up Datadog to keep an eye on this box. I don't need a lot of metrics, just a few historical load numbers. Go to Datadog, log in and start to walk through the process. Immediately collapses. The simple setup has you run curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh. Now S3 supports IPv6, so what the fuck?

curl -v https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh
*   Trying [64:ff9b::34d9:8430]:443...
*   Trying 52.216.133.245:443...
* Immediate connect fail for 52.216.133.245: Network is unreachable
*   Trying 54.231.138.48:443...
* Immediate connect fail for 54.231.138.48: Network is unreachable
*   Trying 52.217.96.222:443...
* Immediate connect fail for 52.217.96.222: Network is unreachable
*   Trying 52.216.152.62:443...
* Immediate connect fail for 52.216.152.62: Network is unreachable
*   Trying 54.231.229.16:443...
* Immediate connect fail for 54.231.229.16: Network is unreachable
*   Trying 52.216.210.200:443...
* Immediate connect fail for 52.216.210.200: Network is unreachable
*   Trying 52.217.89.94:443...
* Immediate connect fail for 52.217.89.94: Network is unreachable
*   Trying 52.216.205.173:443...
* Immediate connect fail for 52.216.205.173: Network is unreachable

It's not S3 or the box, because I can connect to the test S3 bucket AWS provides just fine.

curl -v  http://s3.dualstack.us-west-2.amazonaws.com/
*   Trying [2600:1fa0:40bf:a809:345c:d3f8::]:80...
* Connected to s3.dualstack.us-west-2.amazonaws.com (2600:1fa0:40bf:a809:345c:d3f8::) port 80 (#0)
> GET / HTTP/1.1
> Host: s3.dualstack.us-west-2.amazonaws.com
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< x-amz-id-2: r1WAG/NYpaggrPl3Oja4SG1CrcBZ+1RIpYKivAiIhiICtfwiItTgLfm6McPXXJpKWeM848YWvOQ=
< x-amz-request-id: BPCVA8T6SZMTB3EF
< Date: Tue, 01 Aug 2023 10:31:27 GMT
< Location: https://aws.amazon.com/s3/
< Server: AmazonS3
< Content-Length: 0
<
* Connection #0 to host s3.dualstack.us-west-2.amazonaws.com left intact

Fine I'll do it the manual way through apt.

0% [Connecting to apt.datadoghq.com (18.66.192.22)]

Goddamnit. Alright Datadog is out. It's at this point I realize the experiment of trying to go IPv6 only isn't going to work. Almost nothing seems to work right without proxies and hacks. I'll try to stick as much as I can on IPv6 but going exclusive isn't an option at this point.

NAT64

So in order to access IPv4 resources from IPv6 you need to go through a NAT64 service. I ended up using this one: https://nat64.net/. Immediately all my problems stopped and I was able to access resources normally. I am a little nervous about relying exclusively on what appears to be a hobby project for accessing critical internet resources, but since nobody seems to care upstream of me about IPv6 I don't think I have a lot of choice.
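The mechanics are straightforward: DNS64 synthesizes fake AAAA records by embedding the 32-bit IPv4 address into the low bits of a /96 prefix (the well-known one from RFC 6052 is 64:ff9b::/96; public services like nat64.net advertise their own prefixes, but the arithmetic is the same), and the NAT64 gateway translates traffic sent to those addresses. A quick sketch with Python's ipaddress module, using the Scaleway SMTP address from earlier as the example:

```python
import ipaddress

def nat64_map(ipv4, prefix="64:ff9b::"):
    # Embed the 32-bit IPv4 address in the low bits of the /96 prefix
    return str(ipaddress.IPv6Address(prefix) + int(ipaddress.IPv4Address(ipv4)))

# The Scaleway SMTP host's IPv4 address from earlier
print(nat64_map("51.159.99.81"))  # 64:ff9b::339f:6351
```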

I am surprised there aren't more of these. This is the best list I was able to find:

Most of them seem to be gone now. Dresel's link doesn't work, Trex in my testing had problems, August Internet is gone, most of the Go6lab test devices are down, Tuxis worked but they launched the service in 2019 and seem to have no further interaction with it. Basically Kasper Dupont seems to be the only person on the internet with any sort of widespread interest in allowing IPv6 to actually work. Props to you Kasper.

Basically one person props up this entire part of the internet.

Kasper Dupont

So I was curious about Kasper and emailed him to ask a few questions. You can see that back and forth below.

Me: I found the Public NAT64 service super useful in the transition but would love to know a little bit more about why you do it.

Kasper: I do it primarily because I want to push IPv6 forward. For a few years
I had the opportunity to have a native IPv6-only network at home with
DNS64+NAT64, and I found that to be a pleasant experience which I
wanted to give more people a chance to experience.

When I brought up the first NAT64 gateway it was just a proof of
concept of a NAT64 extension I wanted to push. The NAT64 service took
off, the extension - not so much.

A few months ago I finally got native IPv6 at my current home, so now
I can use my own service in a fashion which much more resembles how my
target users would use it.

Me: You seem to be one of the few remaining free public services like this on the internet and would love to know a bit more about what motivated you to do it, how much it costs to run, anything you would feel comfortable sharing.

Kasper: For my personal products I have a total of 7 VMs across different
hosting providers. Some of them I purchase from Hetzner at 4.51 Euro
per month: https://hetzner.cloud/?ref=fFum6YUDlpJz

The other VMs are a bit more expensive, but not a lot.

Out of those VMs the 4 are used for the NAT64 service and the others
are used for other IPv6 transition related services. For example I
also run this service on a single VM: http://v4-frontend.netiter.com/

I hope to eventually make arrangements with transit providers which
will allow me to grow the capacity of the service and make it
profitable such that I can work on IPv6 full time rather than as a
side gig. The ideal outcome of that would be that IPv4-only content
providers pay the cost through their transit bandwidth payments.

Me: Any technical details you would like to mention would also be great

Kasper: That's my kind of audience :-)

I can get really really technical.

I think what primarily sets my service aside from other services is
that each of my DNS64 servers is automatically updated with NAT64
prefixes based on health checks of all the gateways. That means the
outage of any single NAT64 gateway will be mostly invisible to users.
This also helps with maintenance. I think that makes my NAT64 service
the one with the highest availability among the public NAT64 services.

The NAT64 code is developed entirely by myself and currently runs as a
user mode daemon on Linux. I am considering porting the most
performance critical part to a kernel module.

This site

Alright so I got the basics up and running. In order to pull docker containers over IPv6 you need to add: registry.ipv6.docker.com/library/ to the front of the image name. So for instance:
image: mysql:8.0 becomes image: registry.ipv6.docker.com/library/mysql:8.0

Docker warns you this setup isn't production ready. I'm not really sure what that means here. Presumably if the IPv6 registry were to go away you could just pull normally again?
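For docker-compose users the change is just the image prefix. A sketch (service layout and versions illustrative, not my actual stack):

```yaml
# docker-compose.yml fragment
services:
  db:
    image: registry.ipv6.docker.com/library/mysql:8.0
  ghost:
    image: registry.ipv6.docker.com/library/ghost:5
```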

Once that was done, we set up the site as an AAAA DNS record and allowed Cloudflare to proxy, meaning they handle the advertisement of IPv6 and bring the traffic to me. One thing I did change: previously I was using the Caddy webserver, but since I now have a hard reliance on Cloudflare for most of my traffic, I switched to Nginx. One nice thing you can do now that you know all traffic is coming from Cloudflare is switch how SSL works.

Now I have an Origin Certificate from Cloudflare hard-loaded into Nginx with Authenticated Origin Pulls set up so that I know for sure all traffic is running through Cloudflare. The certificate is signed for 15 years, so I can feel pretty confident sticking it in my secrets management system and not thinking about it ever again. For those that are interested there is a tutorial here on how to do it: https://www.digitalocean.com/community/tutorials/how-to-host-a-website-using-cloudflare-and-nginx-on-ubuntu-22-04
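The Nginx side boils down to a few directives. A sketch of the relevant server block, with certificate paths as assumptions (the tutorial above walks through where each file comes from):

```nginx
server {
    listen [::]:443 ssl;
    server_name example.com;

    # Cloudflare Origin Certificate, valid for up to 15 years
    ssl_certificate     /etc/ssl/cloudflare/origin.pem;
    ssl_certificate_key /etc/ssl/cloudflare/origin.key;

    # Authenticated Origin Pulls: require Cloudflare's client certificate,
    # so only traffic that actually came through Cloudflare is accepted
    ssl_client_certificate /etc/ssl/cloudflare/authenticated_origin_pull_ca.pem;
    ssl_verify_client on;
}
```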

Alright the site is back up and working fine. It's what you are reading right now, so if it's up then the system works.

Unsolved Problems

  • My containers still can't communicate with IPv4 resources even though they're on an IPv6 network with an IPv6 bridge. The DNS64 resolution is working, and I've added fixed-cidr-v6 into Docker. I can talk to IPv6 resources just fine, but the NAT64 conversion process doesn't work. I'm going to keep plugging away at it.
  • Before you ping me I did add NAT with ip6tables.
  • SMTP server problems. I haven't been able to find a commercial SMTP service that has an AAAA record. Mailgun and SES were both duds as were a few of the smaller ones I tried. Even Fastmail didn't have anything that could help me. If you know of one please let me know: https://c.im/@matdevdug

Why not stick with IPv4?

Putting aside "because we're running out of addresses" for a minute. If we had adopted IPv6 earlier, the way we do infrastructure could be radically different. So often companies use technology like load balancers and tunnels not because they actually need anything that these things do, but because they need some sort of logical division between private IP ranges and a public IP address they can stick in a DNS A record.

If you break a load balancer into its basic parts, it is doing two things: distributing incoming packets onto the back-end servers, and checking the health of those servers and taking unhealthy ones out of the rotation. Nowadays they often handle things like SSL termination and metrics, but it's not a requirement to be called a load balancer.

There are many ways to load balance, but the most common are as follows:

  1. Round-robin of connection requests.
  2. Weighted Round-Robin with different servers getting more or less traffic.
  3. Least-Connection with servers that have the fewest connections getting more requests.
  4. Weighted Least-Connection, same thing but you can tilt it towards certain boxes.

What you notice is there isn't anything there that requires, or really even benefits from a private IP address vs a public IP address. Configuring the hosts to accept traffic from only one source (the load balancer) is pretty simple and relatively cheap to do, computationally speaking. A lot of the infrastructure designs we've been forced into, things like VPCs, NAT gateways, public vs private subnets, all of these things could have been skipped or relied on less.
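To make the "pretty simple" claim concrete, here is a sketch of that host-level restriction as an nftables config fragment, assuming a single load balancer address (addresses and table layout are illustrative):

```
# /etc/nftables.conf fragment; addresses illustrative
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif lo accept
    # only the load balancer may reach the web ports
    ip6 saddr 2402:9400:10:1::5 tcp dport { 80, 443 } accept
    tcp dport 22 accept
  }
}
```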

The other irony is that IP whitelisting, currently a mostly broken security practice that wastes everyone's time since we all use IP addresses owned by cloud providers, would actually be something that mattered. The process for companies to purchase a /44 for themselves would have gotten easier with demand and it would have been more common for people to go and buy a block of IPs from the American Registry for Internet Numbers (ARIN), Réseaux IP Européens Network Coordination Centre (RIPE), or Asia-Pacific Network Information Centre (APNIC).

You would never need to think "well is Google going to buy more IP addresses" or "I need to monitor GitHub support page to make sure they don't add more later". You'd have one block they'd use for their entire business until the end of time. Container systems wouldn't need to assign internal IP addresses on each host, it would be trivial to allocate chunks of public IPs for them to use and also advertise over standard public DNS as needed.

Obviously I'm not saying private networks serve no function. My point is a lot of the network design we've adopted isn't based on necessity but on forced design. I suspect we would have ended up designing applications with the knowledge that they sit on the open internet vs relying entirely on the security of a private VPC. Given how security exploits work this probably would have been a benefit to overall security and design.

So even if cost and availability isn't a concern for you, allowing your organization more ownership and control over how your network functions has real measurable value.

Is this gonna get better?

So this sucks. You either pay cloud providers more money or you get a broken internet. My hope is that the folks who don't want to pay push more IPv6 adoption, but it's also a shame that it has taken so long for us to get here. All these problems and issues could have been addressed gradually and instead it's going to be something where people freak out until the teams that own these resources make the required changes.

I'm hopeful the end result might be better. I think at the very least it might open up more opportunities for smaller companies looking to establish themselves permanently with an IP range that they'll own forever, plus as IPv6 gets more mainstream it will (hopefully) get easier for customers to live with. But I have to say right now this is so broken it's kind of amazing.

If you are a small company looking to not pay the extra IP tax, set aside a lot of time to solve a myriad of problems you are going to encounter.

Thoughts/corrections/objections: [email protected]


Serverless Functions Post-Mortem

Around 2016, the term "serverless functions" started to take off in the tech industry. In short order, it was presented as the undeniable future of infrastructure. It's the ultimate solution to redundancy, geographic resilience, load balancing and autoscaling. Never again would we need to patch, tweak or monitor an application. The cloud providers would do it; all we had to do was hit a button and deploy to the internet.

I was introduced to it like most infrastructure technology is presented to me, which is as a veiled threat. "Looks like we won't need as many Operations folks in the future with X" is typically how executives discuss it. Early in my career this talk filled me with fear, but now that I've heard it 10+ times, I adopt a "wait and see" mentality. I was told the same thing about VMs, going from IBM and Oracle to Linux, going from owning the datacenter to renting a cage to going to the cloud. Every time it seems I survive.

Even as far as tech hype goes, serverless functions picked up steam fast. Technologies like AWS Lambda and GCP Cloud Functions were adopted by orgs I worked at very quickly compared to other technology. Conference after conference and expert after expert proclaimed that serverless was inevitable. It felt like AWS Lambda and others were being adopted for production workloads at a breakneck pace.

Then, without much fanfare, it stopped. Other serverless technologies like GKE Autopilot and ECS are still going strong, but the idea of a serverless function replacing the traditional web framework or API has almost disappeared. Even cloud providers pivoted, positioning the tools as more "glue between services" than the services themselves. The addition of being able to run Docker containers as functions seemed to help a bit, but it remains a niche component of the API world.

What happened? Why were so many smart people wrong? What can we learn as a community about hype and marketing around new tools?

Promise of serverless

Above we see a serverless application as initially pitched. Users would ingress through the API Gateway technology, which handles everything from traffic management, CORS, authorization and API version management. It basically serves as the web server and framework all in one. Easy to test new versions with multiple versions of the same API at the same time, easy to monitor and easy to set up.

After that comes the actual serverless function. These could be written in whatever language you wanted and could run for up to 15 minutes as of 2023. So instead of having, say, a Rails application where you are combining the Model-View-Controller into a monolith, you can break it into each route and use different tools to solve for each situation.

This suggests how one might structure a new PHP application, for instance.

Since these were only invoked in response to a request coming from a user, it was declared a cost savings. You weren't paying for server resources you weren't using, unlike traditional servers where you would provision the expected capacity beforehand based on a guess. The backend would also endlessly scale, meaning it would be impossible to overwhelm the service with traffic. No more needing to worry about DDoS or floods of traffic.

Finally at the end would be a database managed by your cloud provider. All in all you aren't managing any element of this process, so no servers or software updates. You could deploy a thousand times a day and precisely control the rollout and rollback of code. Each function could be written in the language that best suited it. So maybe your team writes most things in Python or Ruby but then goes back through for high volume routes and does those in Golang.

Combined with technologies like S3 and DynamoDB along with SNS you have a compelling package. You could still send messages between functions with SNS topics. Storage was effectively unlimited with S3 and you had a reliable and flexible key-value store with DynamoDB. Plus you ditched the infrastructure folks, the monolith, any dependency on the host OS and you were billed by your cloud provider for your actual usage based on the millisecond.

Initial Problems

The initial adoption of serverless was challenging for teams, especially teams used to monolith development.

  • Local development. Typically a developer pulls down the entire application they're working on and runs it on their device to be able to test quickly. With serverless, that doesn't really work since the application is potentially thousands of different services written in different languages. You can do this with serverless functions but it's way more complicated.
  • Hard to set resources correctly. How much memory a function needs under testing can be very different from how much it needs in production. Developers tended to set their limits high to avoid problems, wiping out much of the cost savings. There is no easy way to adjust functions based on real-world data outside of doing it by hand, one by one.
  • AWS did make this process easier with AWS Lambda Power Tuning, but you'll still need to roll out the changes yourself function by function. Since even a medium sized application can be made up of 100+ functions, this is a non-trivial thing to do. Plus these aren't static things; changes can get rolled out that dramatically change the memory usage with no warning.
  • Is it working? Observability is harder with a distributed system vs a monolith and serverless just added to that. Metrics are less useful as are old systems like uptime checks. You need, certainly in the beginning, to rely on logs and traces a lot more. For smaller teams especially, the monitoring shift from "uptime checks + grafana" to a more complex log-based profile of health was a rough adjustment.

All these problems were challenges but it seems many were able to get through it with momentum intact. We started to see a lot of small applications launch that were serverless function based, from APIs to hobby developer projects. All of this is reflected by the Datadog State of Serverless report for 2020 which you can see here.

At this point everything seems great. 80% of AWS container users have adopted Lambda in some capacity, paired with SQS and DynamoDB. NodeJS and Python are the dominant languages, which is a little eyebrow raising. This suggests that picking the right language for the job didn't end up happening, instead picking the language easiest for the developer. But that's fine, that is also an optimization.

What happened? What went wrong?

Production Problems

Across the industry we started to hear feedback from teams that had gone hard into serverless functions backing out. I started to see problems in my own teams that had adopted serverless. The following trends came up, in no particular order.

  • Latency. Traditional web frameworks and containers are fast at processing requests, with latency typically concentrated in database calls. Serverless functions were slow depending on the last time you invoked them. This led to teams needing to keep "functions warm." What does this mean?

When the function gets a request it downloads the code and gets ready to run it. After that for a period of time, the function is just ready to rerun until it is recycled and the process needs to be run again. The way around this at first was typically an EventBridge rule to keep the function running every minute. This kind of works but not really.
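A minimal sketch of the handler side of that trick, assuming the scheduled ping arrives as a standard EventBridge event (whose source field is "aws.events"); the real request handling is illustrative:

```python
def handler(event, context):
    # Scheduled EventBridge pings arrive with "source": "aws.events";
    # short-circuit them so the warmup call never runs the real work
    if event.get("source") == "aws.events":
        return {"warmed": True}
    # ...the actual request handling would go here (illustrative)...
    return {"statusCode": 200, "body": "hello"}
```

The catch is that one ping only keeps one execution environment warm; concurrent requests beyond that still hit cold starts.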

Later Provisioned Concurrency was added, which is effectively....a server. It's a VM where your code is already loaded. You are limited per account to how many functions you can have set to Provisioned Concurrency, so it's hardly a silver bullet. Again none of this happens automatically, so it's up to someone to go through and carefully tune each function to ensure it is in the right category.

  • Scaling. Serverless functions don't scale to infinity. You can scale concurrency levels up every minute by an additional 500 microVMs. But it is very possible for one function to eat all of the capacity for every other function. Again it requires someone to go through and understand what Reserved Concurrency each function needs and divide that up as a component of the whole.

In addition, serverless functions don't magically get rid of database concurrency limits. So you'll hit situations where a spike of traffic somewhere else kills your ability to access the database. This is also true of monoliths, but it is typically easier to see when this is happening when the logs and metrics are all flowing from the same spot.

In practice it is far harder to scale serverless functions than an autoscaling group. With autoscaling groups I can just add more servers and be done with it. With serverless functions I need an in-depth understanding of each route of my app and where those resources are being spent. Traditional VMs give you a lot of flexibility in dealing with spikes, but serverless functions don't.

There are also tiers of scaling. You need to think about KMS throttling, function concurrency limits, database connection limits and slow queries. Some of these don't go away with traditional web apps, but many do. Solutions started to pop up, but they often weren't great.

Teams switched from always returning a detailed response from the API to just returning a 200 showing that the request had been received. That let them stick the work into an SQS queue and process it later. This works unless there is a problem during processing, which breaks most clients' expectation that a 200 means the request succeeded, not merely that it was received.
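The accept-and-defer pattern looks roughly like this; a sketch with the queue write injected as a callable so the shape is visible without an AWS account (in production `enqueue` would wrap an SQS `send_message` call):

```python
import json

def make_handler(enqueue):
    """Build a handler that accepts work and defers processing.

    enqueue: any callable that persists the payload; in production
    it would wrap an SQS send_message call.
    """
    def handler(event, context):
        # Persist the request for later processing...
        enqueue(event.get("body", ""))
        # ...and tell the client "received", not "succeeded" --
        # exactly the broken expectation described above.
        return {"statusCode": 200, "body": json.dumps({"status": "accepted"})}
    return handler
```

If processing later fails, the client already got its 200 and will never know.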

Functions often needed to be rewritten as you went, moving everything you could to the initialization phase and keeping all the connection logic out of the handler code. The initial momentum of serverless was crashing into the rewrites as teams learned painful lesson after painful lesson.
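The rewrite in question: hoist expensive setup into the initialization phase (module scope, reused across warm invocations) and keep the handler thin. A sketch with a hypothetical `connect_to_db` stand-in:

```python
# Init phase: module scope runs once per execution environment and
# is then reused by every warm invocation. connect_to_db is a
# placeholder for a real database client.

def connect_to_db():
    return {"connected": True}  # stand-in for e.g. a pooled client

DB = connect_to_db()  # module scope == Lambda init phase

def handler(event, context):
    # Handler stays thin: no connection setup on the hot path.
    return {"statusCode": 200, "db_connected": DB["connected"]}
```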

  • Price. Instead of being fire and forget, serverless functions proved to be very expensive at scale. Developers don't think of the routes of an API in terms of how many seconds they need to run and how much memory they use. It was a change in thinking, and compared to flat per-month EC2 pricing, the spikes in traffic and usage were an unpleasant surprise for a lot of teams.
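Back-of-the-envelope math shows why the first bill was a shocker. The rates below are the published us-east-1 x86 Lambda prices at the time of writing; treat them and the traffic numbers as assumptions and plug in your own:

```python
# Rough monthly Lambda bill: requests x duration x memory -- the math
# developers weren't used to doing. Rates and traffic are assumptions.

PER_GB_SECOND = 0.0000166667   # compute rate, us-east-1 x86
PER_MILLION_REQUESTS = 0.20    # per-request rate

requests = 50_000_000   # monthly invocations (hypothetical)
billed_ms = 300         # average billed duration per invocation
memory_gb = 1.0         # configured memory

gb_seconds = requests * billed_ms * memory_gb / 1000
compute = gb_seconds * PER_GB_SECOND
invocations = requests / 1_000_000 * PER_MILLION_REQUESTS

print(round(compute + invocations, 2))  # 260.0
```

Roughly $260 a month for one modest function, and it moves with every traffic spike, which is the contrast with a flat EC2 line item the text is making.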

Combine that with the cost of RDS and API Gateway and you are looking at a lot of cash going out every month.

The other cost was the requirement that you have a full suite of cloud services identical to production for testing. How do you test your application end to end with serverless functions? You need to stand up the exact same thing as production. With traditional applications you could run the app on your laptop and test against it in the CI/CD pipeline before deployment. With serverless stacks you have to rely a lot more on Blue/Green deployments and monitoring failure rates.

  • Slow deployments. Pushing out a ton of new Lambdas is a time-consuming process. I've waited 30+ minutes for a medium-sized application. God knows how long people running massive stacks were waiting.
  • Security. Not running the server is great, but you still need to run all the dependencies. It's possible for teams to spawn tons of functions with different versions of the same dependencies, or even entirely different libraries. This makes auditing your dependency security very hard, even with automation checking your repos. It is more difficult to guarantee that every compromised version of X dependency is removed from production than it would be for a smaller number of traditional servers.

Why didn't this work?

I think three primary mistakes were made.

  1. The complexity of running a server in a modern cloud platform was massively overstated. Especially with containers, running a Linux box of some variety and pushing containers to it isn't that hard. All the cloud platforms offer load balancers, letting you offload SSL termination, so really any Linux box with Podman or Docker can sit listening on that port until the box has some sort of error.

    Setting up Jenkins to be able to monitor Docker Hub for an image change and trigger a deployment is not that hard. If the servers are just doing that, setting up a new box doesn't require the deep infrastructure skills that serverless function advocates were talking about. The "skill gap" just didn't exist in the way that people were talking about.
  2. People didn't think critically about price. Serverless functions look cheap, but we never think about how many seconds or minutes a server is busy. That isn't how we've been conditioned to think about applications, and it showed. Often the first bill was a shocker, meaning the savings from maintenance had to be massive, and they just weren't.
  3. Really hard to debug problems. Relying on logs and X-Ray to figure out what went wrong is just much harder than pulling the entire stack down to your laptop and triggering the same requests. It is a new skill, and one that people had not developed up to that point. The first time you have a production issue that drags on in the serverless world but would have been trivial to fix in the old monolith design, the enthusiasm from leadership evaporates very quickly.

Conclusion

Serverless functions fizzled out, and it's important for us as an industry to understand why the hype wasn't real. Important questions were skipped over in an attempt to increase buy-in to cloud platforms and simplify the deployment and development story for teams. Hopefully this gives us a chance to be more skeptical of promises like this in the future. We should have taken a much more wait-and-see approach to this technology instead of rushing straight in and hitting all the sharp edges right away.

Currently serverless functions live as what they're best at, which is either glue between different services, triggering longer-running jobs or as very simple platforms that allow for tight cost control by single developers who are putting together something for public use. If you want to use something serverless for more, you would be better off looking at something like ECS with Fargate or Cloud Run in GCP.


CodePerfect 95 Review

I have a long history of loving text editors. Their simplicity and purity of design is appealing to me, as are their long lifespans. Writing a text editor that becomes popular really becomes a lifelong responsibility and opportunity, which is just very cool to me. They become subcultures unto themselves. IDEs I have less love for.

There's nothing wrong with using one, in fact I use them for troubleshooting on a pretty regular basis. I just haven't found one I love yet. They either have a million plugins (so I'm constantly getting notifications for updates) or they just have thousands upon thousands of features, so even to get started I need to watch a few YouTube tutorials and read a dozen pages of docs. I love JetBrains products but the first time I tried to use PyCharm for a serious project I felt like I was launching a shuttle into space.

Busy is a bit of an understatement

However I find myself writing a lot of Golang lately, as it has become the common microservice language across a couple of jobs now. I actually like it, but I'm always looking for an IDE to help me write it faster and better. My workflow is typically to write it in Helix or Vim and then use the IDE for inspecting the code before putting it in a commit, or for faster debugging than having two tabs open in Tmux and switching between them. It works, but it's not exactly an elegant solution.

I stumbled across CodePerfect 95 and fell in love with the visual styling. So I had to give it a try. Their site is here: https://codeperfect95.com/

Visuals

It's hard to overstate how much I love this design. It is very Mac OS 9 in a way I was just instantly drawn to. Everything from the atypical color choices to the fonts is just classic Apple design.

Mac OS 9

Whoever picked this logo, I was instantly delighted with it.

There were a few quibbles. It should respect the system dark/light mode, even if that goes against the design of the application. That's a user's preference and should get reflected in some way.

Also, as far as I could tell, nothing about the font used or any of the design elements is customizable. This is fine for me, I actually prefer when tools have strong opinions and present them to me, but I know for some people the ability to switch the monospace font is a big deal. In general there are just not a lot of options, which is great for me but something you should be aware of.

Usage

Alright, so I got a free 7-day trial when I downloaded it and I really tried to kick the tires as much as possible. So I converted over to it for all my work during that period. This app promises speed and delivers. It is as fast as a terminal application and comes with most of the window and tab customization you would typically turn to a tool like Tmux for.

It apparently indexes the project when you open it, but honestly it happened so fast I didn't even notice what it was doing. As fast as I could open the project and remember what the project was, I could search or do whatever. I'm sure if you work on giant projects that might not be the case, but nothing I threw at the index process seemed to choke it at all.

It supports panes and tabs, with Cmd+number to switch panes. It's super fast and I found it very comfortable. The only thing that is slightly strange is that when you open a new pane, it shows absolutely nothing. No file path, no "click here to open". You need to understand that when you switch to an empty pane you have to open a file. This is what the pane view looks like:

Cmd+P is fuzzy find and works as expected. So if you are used to using Vim to search and open files, this is going to feel very familiar to you. Cmd+T is the Symbol GoTo which works like all of these you have ever used:

You can jump to the definition of an identifier, completion, etc. All of this worked exactly like you would think it does. It was very fast and easy to do. I really liked some of the completion stuff. For instance, Generate Function actually saved me a fair amount of time.

Given:

dog := Dog{}
bark(dog, 1, false)

You can mouse over and generate this:

func bark(v0 Dog, v1 int, v2 bool) {
  panic("not implemented")
}

This is their docs example but when I tested it, it seemed to work well.

The font is pretty easy to read but I would have loved to tweak the colors a bit. They went with kind of a muted color scheme, whereas I prefer a strong visual difference between comments and actual code. All the UI elements are black and white, very strong contrast, so making the actual workspace muted and a little hard to read is strange.

VSCode defaults to a more aggressive and easier to read design, especially in sunlight.

Builds

So one of the primary reasons IDEs are so nice to use is the integrated build system. However with Golang builds are pretty straightforward typically, so there isn't a lot to report here. It's basically "what arguments do you pass to go build saved as a profile".

It works well though. No complaints and stepping through the build errors was easy and fast to do. Not fancy but works like it says on the box.

Work Impressions

I was able to do everything I would need to do with a typical Golang application inside the IDE, which is not a small task. I liked features like the Postfix completion which did actually save me a fair amount of time once I started using them.

However I ended up missing a few of the GoLand features like Code Coverage checking for tests and built-in support for Kubernetes and Terraform, just because it's common to touch all subsystems when I'm working on something and not just exclusively Go code. You definitely see some value with having a tool customized for one environment over having a general purpose tool with plugins, but it was a little hard to give up all the customization options with GoLand. Then again it reduces complexity and onboarding time, so it's a trade-off.

Pricing and License

First with a product like this I like to check the Terms and Conditions. I was surprised that they....basically don't have any.

Clearly no lawyers were involved in this process, which seems odd. This reads like a Ron Swanson ToS.

The way you buy licenses is also a little unusual. It's an attempt to bridge JetBrains' old perpetual license model and their current perpetual fallback license.

A key has two parts: a one-time perpetual license, and subscription-based automatic updates. You can choose either one, or both:

    License only
        A perpetual license locked to a particular version.
        After 3 included months of updates, locked to the final version.
    License and subscription
        A perpetual license with access to ongoing updates.
        When your subscription ends, your perpetual license is locked to the final version.
    Subscription only
        Access to the software during your subscription.
        You lose access when your subscription ends.

I'm also not clear what they mean by "cannot be expensed".

Why can't I expense it? According to what? You writing on a webpage "you cannot expense it"? This seems like a way to extract more money from people depending on whether they're using it at work or home.

Jetbrains does something similar but they have an actual license you agree to. There's no documentation of a license here, so I don't know if this matters at all. If CodePerfect wants to run their business like this, I guess they can, but they're going to need to have a document that says something like this:

3.4. This subscription is only for natural persons who are purchasing a subscription to Products using only their own funds. Notwithstanding anything to the contrary in this Agreement, you may not use any of the Products, and this grant of rights shall not be in effect, in the event that you do not pay Subscription fees using your own funds. If any third party pays the Subscription fees or if you expect or receive reimbursement for those fees from any third party, this grant of rights shall be invalid and void.

I feel like $40 for software where I only get 3 months of updates is not an amazing deal. Sublime Text is $99 for 3 years. Nova is $99 for one year. Examining the changelog it appears they're still closing relatively big bugs even now, so I would be a tiny bit nervous about getting locked into whatever version I'm at in three months forever. Changelog

The subscription was also not a great deal.

So I mean the easiest comparison would be GoLand.

$10 a month = $120 for the year and I get the perpetual fallback license. $100 for the year and I get CodePerfect (I understand the annual price break). The pricing isn't crazy but JetBrains is an established company with a known track record of shipping IDEs. I would be a bit hesitant to shell out for this based on a 7 day trial for a product that has existed for 302 days as of July 5th. I'd rather they charge me $99 for a license with 12 months of updates that just ends instead of a subscription. It's also strange that they don't seem to change the currency based on the location of the user.

My issue with all this: getting a one-time payment reimbursed is not a big deal, but subscriptions are typically frowned upon as expenses at most places I've worked unless they're training for the entire department. For my own personal usage, I would be hesitant to sign up for a new subscription from an unknown entity, especially when the ToS is a paragraph and the "license" I am agreeing to doesn't seem to exist. A lot of this is just new-software growing pains, but I hope they're aware.

Conclusion

CodePerfect 95 is my favorite kind of software. It's functional yet fun, with some whimsy and joy mixed in with practical features. It works well and is as fast as promised. I enjoyed my week of using it, finding it to be mostly usable as JetBrains GoLand but in a lighter piece of software. So would I buy it?

I'm hesitant. I want to buy it, but there's zero chance I could get a legal department to approve this for an enterprise purchase. So my option would be to buy the more expensive version and expense it or just pay for it myself. Subscription fatigue is a real thing and I will typically pay a 20% premium to not have to deal with it. To not have to deal with a subscription I would need to buy a license every 3 months for $160 a year in total.

I can't get there yet. I've joined their newsletter and I'll keep an eye on it. If it continues to be a product in six months I'll pull the trigger. Switching workflows is a lot of work for me and it requires enough time to mentally adjust that I don't want to fall in love with a tool and then have it disappear. If they did $99 for a year license that just expired I'd buy it today.


Today the EU decided to give me a giant present

For those of you who have spent years dealing with the nightmarish process of carefully putting EU user data in its own silo, often in its own infrastructure in a different EU region, it looks like the nightmare might be coming to an end. See the new press release here: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_3721

Some specific details I found really interesting in the full report (which is a doozy of a read): https://commission.europa.eu/system/files/2023-07/Adequacy%20decision%20EU-US%20Data%20Privacy%20Framework.pdf

The EU-U.S. Data Privacy Framework introduces new binding safeguards to address all the concerns raised by the European Court of Justice, including limiting access to EU data by US intelligence services to what is necessary and proportionate, and establishing a Data Protection Review Court (DPRC), to which EU individuals will have access.

US companies can certify their participation in the EU-U.S. Data Privacy Framework by committing to comply with a detailed set of privacy obligations. This could include, for example, privacy principles such as purpose limitation, data minimisation and data retention, as well as specific obligations concerning data security and the sharing of data with third parties.

To certify under the EU-U.S. DPF (or re-certify on an annual basis), organisations are required to publicly declare their commitment to comply with the Principles, make their privacy policies available and fully implement them. As part of their (re-)certification application, organisations have to submit information to the DoC on, inter alia, the name of the relevant organisation, a description of the purposes for which the organisation will process personal data, the personal data that will be covered by the certification, as well as the chosen verification method, the relevant independent recourse mechanism and the statutory body that has jurisdiction to enforce compliance with the Principles.

Organisations can receive personal data on the basis of the EU-U.S. DPF from the date they are placed on the DPF list by the DoC. To ensure legal certainty and avoid 'false claims', organisations certifying for the first time are not allowed to publicly refer to their adherence to the Principles before the DoC has determined that the organisation's certification submission is complete and added the organisation to the DPF List. To be allowed to continue to rely on the EU-U.S. DPF to receive personal data from the Union, such organisations must annually re-certify their participation in the framework. When an organisation leaves the EU-U.S. DPF for any reason, it must remove all statements implying that the organisation continues to participate in the Framework.

So it looks similar to Privacy Shield but with more work being done on the US side to meet the EU requirements. This is all super new and we'll need to see how it shakes out in practical implementation, but I'm extremely hopeful for less friction-filled interactions between EU and US tech companies.


GKE (Google Kubernetes Engine) Review

What if Kubernetes was idiot-proof?

Love/Hate Relationship

AWS and I have spent a frightening amount of time together. In that time I have come to love that weird web UI with bizarre application naming. It's like asking an alien not familiar with humans to name things. Why is Athena named Athena? Nothing else gets a deity name. CloudSearch, CloudFormation, CloudFront, Cloud9, CloudTrail, CloudWatch, CloudHSM, CloudShell are just lazy, we understand you are the cloud. Also Amazon if you are going to overuse a word that I'm going to search, use the second word so the right result comes up faster. All that said, I've come to find comfort in its primary color icons and "mobile phones don't exist" web UI.

Outside of AWS I've also done a fair amount of work with Azure, mostly in Kubernetes or k8s-adjacent spaces. All said, I've now worked with Kubernetes on bare metal in a datacenter, in a datacenter with VMs, on Raspberry Pis in a cluster with k3s, in AWS with EKS, in Azure with AKS, on DigitalOcean Kubernetes and finally with GKE in GCP. Me and the Kubernetes help documentation site are old friends at this point, a sea of purple links. I say all this to suggest that I have made virtually every mistake one can with this particular platform.

When being told I was going to be working in GCP (Google Cloud Platform) I was not enthused. I try to stay away from Google products in my personal life. I switched off Gmail for Fastmail, Search for DuckDuckGo, Android for iOS and Chrome for Firefox. It has nothing to do with privacy, I actually feel like I understand how Google uses my personal data pretty well and don't object to it on an ideological level. I'm fine with making an informed decision about using my personal data if the return to me in functionality is high enough.

I mostly move off Google services in my personal life because I don't understand how Google makes decisions. I'm not talking about killing Reader or any of the Google graveyard things. Companies try things and often they don't work out, that's life. It's that I don't even know how fundamental technology is perceived. Is Golang, which relies extensively on Google employees, doing well? Are they happy with it, or is it in danger? Is Flutter close to death or thriving? Do they like Gmail or has it lost favor with whatever executives are in charge of it this month? My inability to get a sense of whether something is doing well or poorly inside of Google makes me nervous about adopting their stack into my life.

I say all this to explain that I was not excited to use GCP and learn a new platform. But even though there are parts of GCP that I find deeply frustrating compared to its peers...there is a gem here. If you are serious about using Kubernetes, GKE is the best product I've seen on the market. It isn't even close. GKE is so good that if you are all-in on Kubernetes, it's worth considering moving from AWS or Azure.

I know, bold statement.

TL;DR

  • GKE is the best managed k8s product I've ever tried. It aggressively helps you do things correctly and is easy to set up and run.
  • GKE Autopilot is all of that but they handle all the node/upgrade/security etc. It's like Heroku-levels of easy to get something deployed. If you are a small company who doesn't want to hire or assign someone to manage infrastructure, you could grow forever on GKE Autopilot and still be able to easily migrate to another provider or the datacenter later on.
  • The rest of GCP is a bit of a mixed bag. Do your homework.

Disclaimer

I am not and have never been a Google employee/contractor/someone they know exists. I once bombed an interview at Google when I was 23. This interview stands out to me because, despite working with it every day for a year, my brain just forgot how RAID parity worked at a data transmission level. I got off the call and instantly all memory of how it worked returned to me. Needless to say, nobody at Google cares that I have written this and it is just my opinions.

Corrections are always appreciated. Let me know at: [email protected]

Traditional K8s Setup

One common complaint about k8s is you have to set up everything. Even "hosted" platforms often just provide the control plane, meaning almost everything else is some variation of your problem. Here's the typical collection of what you need to make decisions about, in no particular order:

  • Secrets encryption: yes/no how
  • Version of Kubernetes to start on
  • What autoscaling technology are you going to use
  • Managed/unmanaged nodes
  • CSI drivers, do you need them, which ones
  • Which CNI, what does it mean to select a CNI, how do they work behind the scenes. This one in particular throws new cluster users because it seems like a nothing decision but it actually has profound impact in how the cluster operates
  • Can you provision load balancers from inside of the cluster?
  • CoreDNS, do you want it to cache DNS requests?
  • Vertical pod autoscaling vs horizontal pod autoscaling
  • Monitoring, what collects the stats, what default data do you get, where does it get stored (node-exporter setup to prometheus?)
  • Are you gonna use an OIDC? You probably want it, how do you set it up?
  • Helm, yes or no?
  • How do service accounts work?
  • How do you link IAM with the cluster?
  • How do you audit the cluster for compliance purposes?
  • Is the cluster deployed in the correct resilient way to guard against AZ outages?
  • Service mesh, do you have one, how do you install it, how do you manage it?
  • What OS is going to run on your nodes?
  • How do you test upgrades? What checks to make sure you aren't relying on a removed API? When is the right time to upgrade?
  • What is monitoring overall security posture? Do you have known issues with the cluster? What is telling you that?
  • Backups! Do you want them? What controls them? Can you test them?
  • Cost control. What tells you if you have a massively overprovisioned node group?

This isn't anywhere near all the questions you need to answer, but this is typically where you need to start. One frustration with a lot of k8s services I've tried in the past is they have multiple solutions to every problem and it's unclear which is the recommended path. I don't want to commit to the wrong CNI and then find out later that nobody has used that one in six months and I'm an idiot. (I'm often an idiot but I prefer to be caught for less dumb reasons).

Are these failings of kubernetes?

I don't think so. K8s is everything to every org. You can't make a universal tool that attempts to cover every edge case that doesn't allow for a lot of customization. With customization comes some degree of risk that you'll make the wrong choice. It's the Mac vs Linux laptop debate in an infrastructure sphere. You can get exactly what you need with the Linux box but you need to understand if all the hardware is supported and what tradeoffs each decision involves. With a Mac I'm getting whatever Apple thinks is the correct combination of all of those pieces, for better or worse.

If you can get away with Cloud Run or ECS, don't let me stop you. Pick the level of customization you need for the job, not whatever is hot right now.

Enter GKE

Alright, so when I was hired I was tasked with replacing an aging GKE cluster running Istio that was coming to end of life. After running some checks, we confirmed we weren't using any of the features of Istio, so we decided to go with Linkerd since it's a much easier service mesh to maintain. I sat down and started my process for upgrading an old cluster.

  • Check the node OS for upgrades, check the node k8s version
  • Confirm API usage to see if we are using outdated APIs
  • How do I install and manage the ancillary services and what are they? What installs CoreDNS, service mesh, redis, etc.
  • Can I stand up a clean cluster from what I have or was critical stuff added by hand? It never should be but it often is.
  • Map out the application dependencies and ensure they're put into place in the right order.
  • What controls DNS/load balancing and how can I cut between cluster 1 and cluster 2

It's not a ton of work, but it's also not zero work. It's also a good introduction to how applications work and what dependencies they have. Now my experience with recreating old clusters in k8s has been, to be blunt, a fucking disaster in the past. It typically involves 1% trickle traffic, everything returning 500s, looking at logs, figuring out what is missing, adding it, turning 1% back on, errors everywhere, look at APM, oh that app's healthcheck is wrong, etc.

The process with GKE was so easy I was actually sweating a little bit when I cut over traffic, because I was sure this wasn't going to work. It took longer to map out the application dependencies and figure out the Istio -> Linkerd part than it did to actually recreate the cluster. That's a first and a lot of it has to do with how GKE holds your hand through every step.

How does GKE make your life easier?

Let's walk through my checklist and how GKE solves pretty much all of them.

1. Node OS and k8s version on the node.

GCP offers a wide variety of OSes that you can run but recommends one I have never heard of before.

Container-Optimized OS from Google is an operating system image for your Compute Engine VMs that is optimized for running containers. Container-Optimized OS is maintained by Google and based on the open source Chromium OS project. With Container-Optimized OS, you can bring up your containers on Google Cloud Platform quickly, efficiently, and securely.

I'll be honest, my first thought when I saw "server OS based on Chromium" was "someone at Google really needed to get an OKR win". However, after using it for a year, I've really come to like it. Now, it's not a solution for everyone, but if you can operate within the limits it's a really nice option. Here are the limits:

  • No package manager. They have something called the CoreOS Toolbox which I've used a few times to debug problems so you can still troubleshoot. Link
  • No non-containerized applications
  • No installing third-party kernel modules or drivers
  • It is not supported outside of the GCP environment

I know, it's a bad list. But when I read some of the nice features I decided to make the switch. Here's what you get:

  • The root filesystem is always mounted as read-only. Additionally, its checksum is computed at build time and verified by the kernel on each boot.
  • Stateless kinda. /etc/ is writable but stateless. So you can write configuration settings but those settings do not persist across reboots. (Certain data, such as users' home directories, logs, and Docker images, persist across reboots, as they are not part of the root filesystem.)
  • Ton of other security stuff you get for free. Link

I love all this. Google tests the OS internally, they're scanning for CVEs, they're slowly rolling out updates and it's designed to just run containers correctly, which is all I need. This OS has been idiot-proof. In a year of running it I haven't had a single OS issue. Updates go out, nodes get patched, I never notice. Troubleshooting works fine. This means I never need to talk about a Linux upgrade ever again AND the limitations of the OS mean my applications can't rely on stuff they shouldn't use. Truly set and forget.

I don't run software I can't build from source.

Go nuts: https://cloud.google.com/container-optimized-os/docs/how-to/building-from-open-source

2. Outdated APIs.

There's a lot of third-party tools that do this for you and they're all pretty good. However GKE does it automatically in a really smart way.

Not my cluster but this is what it looks like

Basically the web UI warns you if you are relying on outdated APIs and will not upgrade if you are. Super easy to check "do I have bad API calls hiding somewhere".

3. How do I install and manage the ancillary services and what are they?

GKE comes batteries included. DNS is there, just a flag in Terraform to configure. Service accounts, same thing. Ingress and Gateway to GCP load balancers are also just in there working. Hooking up to your VPC is a toggle in Terraform, so pods are natively routable. They even reserve the Pod IPs before the pods are created, which is nice and eliminates a source of problems.
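A hedged sketch of what those toggles look like, using the google provider's `google_container_cluster` resource in Terraform; the names, secondary range names, and referenced VPC resources are hypothetical and assumed to be defined elsewhere:

```hcl
# Sketch only: the "just a flag in Terraform" knobs described above.
resource "google_container_cluster" "main" {
  name     = "example"
  location = "us-central1"

  network    = google_compute_network.vpc.id
  subnetwork = google_compute_subnetwork.nodes.id

  # VPC-native networking: pod IPs come from alias ranges reserved in
  # the VPC up front, so pods are natively routable.
  networking_mode = "VPC_NATIVE"
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  # NodeLocal DNSCache, the DNS cache you can just turn on.
  addons_config {
    dns_cache_config {
      enabled = true
    }
  }
}
```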

They have their own CNI which also just works. One end of the Virtual Ethernet Device pair is attached to the Pod and the other is connected to the Linux bridge device cbr0. I've never encountered any problems with any of the GKE defaults, from the subnets it offers to generate for pods to the CNI it is using for networking. The DNS cache is nice to be able to turn on easily.

4. Can I stand up a clean cluster from what I have or was critical stuff added by hand?

Because everything you need to do happens in Terraform for GKE, it's very simple to see if you can stand up another cluster. Load balancing is happening inside of YAMLs, ditto for deployments, so standing up a test cluster and seeing if apps deploy correctly to it is very fast. You don't have to install a million helm charts to get everything configured just right.

However they ALSO have backup and restore built in!

Here is your backup running happily and restoring it is just as easy to do through the UI.

So if you have a cluster with a bunch of custom stuff in there and don't have time to sort it out, you don't have to.
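The backup plans themselves can be driven from Terraform rather than the UI. A minimal sketch, where the plan name, cluster path, and schedule are placeholders I made up:

```hcl
# Sketch of a nightly Backup for GKE plan. The cluster path
# and names are hypothetical placeholders.
resource "google_gke_backup_backup_plan" "nightly" {
  name     = "nightly-backup"
  location = "us-central1"
  cluster  = "projects/my-project/locations/us-central1/clusters/example-cluster"

  backup_schedule {
    cron_schedule = "0 3 * * *" # every day at 03:00
  }

  retention_policy {
    backup_retain_days = 14
  }

  backup_config {
    include_volume_data = true
    include_secrets     = true
    all_namespaces      = true
  }
}
```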

5. Map out the application dependencies and ensure they're put into place in the right order.

This obviously varies from place to place, but the web UI for GKE does make it very easy to inspect deployments and see what is going on with them. This helps a lot, but of course if you have a service mesh that's going to be the one-stop shop for figuring out what talks to what when. The Anthos service mesh provides this and is easy to add onto a cluster.

6. What controls DNS/load balancing and how can I cut between cluster 1 and cluster 2?

Alright, so this is the only bad part. GCP load balancers surface zero useful information in the web UI. I don't know why, or who made the web UIs look like this. That said, creating an internal or external load balancer as an Ingress or Gateway with GKE is stupid easy with annotations.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-address
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  # Placeholder backend; point this at your own Service
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80


Why would this data be the most useful data?

I don't know who this is for or why I would care what region of the world my traffic is coming from. It also doesn't render correctly in Firefox; the screen is cut off on the right. For context, this is the information I actually want from a load balancer every single time:

The entire GCP load balancer thing is a tire fire. The web UI for making load balancers breaks all the time. Adding an SSL certificate through the web UI almost never works. They give you a ton of great information about the backend of the load balancer, but adding things like a new TLS policy requires kind of a lot of custom stuff. I could go on and on.

Autopilot

Alright, so let's say all of that was still a bit much for you. You want a basic infrastructure where you don't need to think about nodes, load balancers, or operating systems. You write your YAML, you deploy it to The Cloud, and then things happen automagically. That is GKE Autopilot.

Here are all the docs on it. Let me give you the elevator pitch: it's a stupid easy way to run Kubernetes that will probably save you money. Why? Because selecting and adjusting the type and size of the nodes you provision is something most starting companies mess up with Kubernetes, and here you don't need to do that. You aren't billed for unused capacity on your nodes, because GKE manages the nodes. You also aren't charged for system Pods, operating system costs, or unscheduled workloads.
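For a sense of how little there is to configure, an Autopilot cluster in Terraform is roughly this sketch (the cluster name and region are placeholders):

```hcl
# Sketch of an Autopilot cluster - no node pools, machine
# types, or OS choices to manage. Name and region are
# placeholders.
resource "google_container_cluster" "autopilot" {
  name             = "example-autopilot"
  location         = "us-central1" # Autopilot clusters are regional
  enable_autopilot = true
}
```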

Hardening Autopilot is also very easy. You can see all the options that exist and are already turned on here. If you are a person who is looking to deploy an application where maintaining it cannot be a big part of your week, this is a very flexible platform to do it on. You can move to standard GKE later if you'd like. Want off GCP? It is not that much work to convert your YAML to work with a different hosted provider or a datacenter.

I went in with low expectations and was very impressed.

Why shouldn't I use GKE?

I hinted at it above. As good as GKE is, the rest of GCP is wildly inconsistent. First, the project structure is maddening. You have an organization and below that are projects (which are basically AWS accounts). Each has its own permission structure, which can be inherited from the folders you put projects in. However, since GCP doesn't allow you to combine premade IAM roles into custom roles, you end up needing to write hundreds of lines of Terraform for custom roles OR just find a premade role that is Pretty Close.
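To give a flavor of those hand-rolled role definitions, here is a sketch of a custom role in Terraform. The role ID and the permission list are hypothetical; a real deploy role would need many more entries:

```hcl
# Sketch of a custom IAM role. Since premade roles can't be
# combined, every permission has to be listed by hand.
resource "google_project_iam_custom_role" "deployer" {
  role_id     = "appDeployer" # hypothetical role
  title       = "App Deployer"
  description = "Just enough access to deploy to GKE"
  permissions = [
    "container.clusters.get",
    "container.clusters.list",
    "container.deployments.create",
    "container.deployments.update",
    # ...and on and on, one line per permission
  ]
}
```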

GCP excels at networking, data visualization (outside of load balancing), Kubernetes, serverless with Cloud Run and Cloud Functions, and big data work. A lot of the smaller services on the edge don't get much love. If you are a heavy user of the following, proceed with caution.

GCP Secret Manager

For a long time GCP didn't have any secret manager, instead having customers encrypt objects in buckets. Their secret manager product is about as bare-bones as it gets. Secret rotation is basically a cron job that pushes to a Pub/Sub topic and then you do the rest of it. No metrics, no compliance check integrations, no help with rotation.

It'll work for most use cases, but there's just zero bells and whistles.
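The Terraform for rotation shows how bare-bones it is: you get a timer and a Pub/Sub notification, and everything else is on you. A sketch, where the secret ID, project, topic, and dates are all placeholders:

```hcl
# Sketch of a Secret Manager secret with "rotation": GCP just
# publishes to the topic on schedule; your own code has to do
# the actual rotating. IDs and project are placeholders.
resource "google_secret_manager_secret" "db_password" {
  secret_id = "db-password"

  replication {
    auto {}
  }

  # Pub/Sub topic that receives the rotation reminders.
  topics {
    name = "projects/my-project/topics/secret-rotation"
  }

  rotation {
    next_rotation_time = "2026-01-01T00:00:00Z"
    rotation_period    = "2592000s" # 30 days
  }
}
```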

GCP SSL Certificates

I don't know how Let's Encrypt, a free service, outperforms GCP's SSL certificate generation process. I've never seen a service that mangles SSL certificates as badly as this one. Let's start with just trying to find them.

The first two aren't what I'm looking for. The third doesn't take me to anything that looks like an SSL certificate. SSL certificates actually live at Security -> Certificate Manager. If you try to go there even if you have SSL certificates you get this screen.

I'm baffled. I have Google SSL certificates with their load balancers. How is the API not enabled?

To issue the certs it does the same sort of DNS and backend checking as a lot of other services. To be honest, I've had more problems with this service issuing SSL certificates than with any other in my entire life. It was easier to buy certificates from Verisign. If you rely on generating a ton of these quickly, be warned.
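One workaround for the flaky web UI is to define the certificate in Terraform instead and let the load balancer pick it up from there. A minimal sketch (the resource name and domain are placeholders):

```hcl
# Sketch of a Google-managed SSL certificate. Issuance still
# depends on DNS pointing at the load balancer before it will
# provision. Name and domain are placeholders.
resource "google_compute_managed_ssl_certificate" "site" {
  name = "site-cert"
  managed {
    domains = ["example.com"]
  }
}
```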

IAM recommender

GCP has this great feature where it audits what permissions a role actually uses and then tells you, basically, "you gave them too many permissions". It looks like this:

Great, right? Sometimes this service will recommend you modify the permissions to either a new premade role or a custom role. It's unclear when or how that happens, but when it does there is a little lightbulb next to it. You can click it to apply the new permissions, but since my permissions (and most people's) are managed in code somewhere, this obviously doesn't do anything long-term.

Now you can push these recommendations to BigQuery, but what I want is some sort of JSON or CSV that just says "switch these to use x premade IAM roles". My point is there is a lot of GCP stuff that is like 90% there. Engineers did the hard work of tracking IAM usage, generating the report, showing me the report, and making a recommendation. I just need an easier way to act on it outside of the API or GCP web console.

These are just a few examples that immediately spring to mind. My point being when evaluating GCP please kick the tires on all the services, don't just see that one named what you are expecting exists. The user experience and quality varies wildly.

I'm interested, how do I get started?

GCP Terraform support used to be bad, but now it is quite good. You can see the whole getting started guide here. I recommend trying Autopilot and seeing if it works for you, just because it's cheap.

Even if you've spent a lot of time running k8s, give GKE a try. It's really impressive, even if you don't intend to move over to it. The security posture auditing, workload metrics, backup, hosted Prometheus, etc. are all really nice. I don't love all the GCP products, but this one has super impressed me.