
Terraform Cloud Review


If I were told to go off and make a hosted Terraform product, I would probably end up with a list of features that looked something like the following:

  • Extremely reliable state tracking
  • Assistance with upgrading between versions of Terraform and providers and letting users know when it looked safe to upgrade and when there might be problems between versions
  • Consistent running of Terraform with a fresh container image each time, providers and versions cached on the host VM so the experience is as fast as possible
  • As many linting, formatting and HCL optimizations as I can offer, configurable on and off
  • Investing as much engineering work as I can afford in providing users an experience where, unlike with the free Terraform, if a plan succeeds on Terraform Cloud, the Apply will succeed
  • Assisting with Workspace creation. Since we want to keep the number of resources per workspace low, seeing if we can leverage machine learning to say "we think you should group these resources together as their own workspace" and showing you how to do that
  • Figure out some way for organizations to interact with the Terraform resources other than just running the Terraform CLI, so users can create richer experiences for their teams through easy automation that feeds back into the global source of truth that is my incredibly reliable state tracking
  • Try to do whatever I can to encourage more resources in my cloud. Unlimited storage, lots of workspaces, helping people set up workspaces. The more stuff in there the more valuable it is for the org to use (and also more logistically challenging for them to cancel)

This, to me, would be a product I would feel confident charging a lot of money for. Terraform Cloud is not that product. It has some of these features locked behind the most expensive tiers, but not enough of them to justify the price.

I've written about my feelings around the Terraform license change before. I won't bore you with that again. However, now that the safest way to use Terraform is to pay HashiCorp, what does that look like? As someone who has used Terraform for years and Terraform Cloud almost daily for a year, it's a profoundly underwhelming experience.

Currently it is a little-loved product with lots of errors and sharp edges. This is as close to a v0.1 of this product as I could imagine, except the pace of development has been glacial. Terraform Cloud is a "good enough" platform that seems to understand that if you could do better, you would. Like a diner at 2 AM on the side of the highway, its primary selling point is the fact that it is there. That and the license terms you will need to accept soon.

Terraform Cloud - Basic Walkthrough

At a high level Terraform Cloud allows organizations to centralize their Projects and Workspaces and store that state with HashiCorp. It also gives you access to a Registry for you to host your own private Terraform modules and use them in your workspaces. The top level options look as follows:

That's it!

You may be wondering "What does Usage do?" I have no idea, as the web UI has never worked for me even though I appear to have all the permissions one could have. I have seen the following since getting my account:

I'm not sure what wasn't found.

I'm not sure what access I lack or if the page was intended to work. It's very mysterious in that way.

There is Explorer, which lets you basically see "what versions of things do I use across the different repos". You can't do anything with that information, like I can't say "alright well upgrade these two to the version that everyone else uses". It's also a beta feature and not one that existed when I first started using the platform.

Finally there are the Workspaces, where you spend 99% of your time.

You get some ok stats here. Up in the top left you see "Needs Attention", "Errors", "Running", "Hold" and then "Applied." Even though you may have many Workspaces, you cannot change how many you see here. 20 is the correct number I guess.

Creating a Workspace

Workspaces are either VCS-backed (based on a repo), CLI-driven or API-driven. You tell it what VCS, what repo, and whether you want to use the root of the repo or a sub-directory (which is good, because soon you'll have too many resources to keep everything in one workspace). You tell it Auto Apply (which is checked by default) or Manual, and when to trigger a run (whenever anything changes, whenever specific files in a path change, or whenever you push a tag). That's it.
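
Since Workspaces can also be driven entirely through the API, here is a rough sketch of creating one in Python. Treat it as illustrative only: the organization name, token and OAuth token ID are placeholders, and the exact payload schema is documented in the Terraform Cloud API docs.

# Hypothetical sketch: create a VCS-backed workspace via the Terraform Cloud API.
import requests

TFC_TOKEN = "REPLACE_ME"   # a user or team API token
ORG = "my-org"             # your Terraform Cloud organization name

payload = {
    "data": {
        "type": "workspaces",
        "attributes": {
            "name": "networking-prod",
            "auto-apply": False,                # the "Manual apply" setting
            "working-directory": "networking",  # sub-directory of the repo
            "vcs-repo": {
                "identifier": "my-org/infrastructure",
                "oauth-token-id": "ot-REPLACE_ME",
            },
        },
    }
}

resp = requests.post(
    f"https://app.terraform.io/api/v2/organizations/{ORG}/workspaces",
    headers={
        "Authorization": f"Bearer {TFC_TOKEN}",
        "Content-Type": "application/vnd.api+json",
    },
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["id"])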

You can see all the runs, what their status is and basically what resources have changed or will change. Any plan that you run from your laptop also shows up here. You don't have to manage your runs here, you can still run everything locally, but then there is absolutely no reason to use this product. Almost all of the features rely on your runs being handled by HashiCorp here inside of a Workspace.

Workspace flow

Workspaces show you when the run was, how long the plan took, and what resources are associated with this workspace (10 resources at a time, even though you might have thousands). Details links you to the last run, and there are tags and run triggers. Run triggers allow you to link workspaces together, so this workspace depends on the output of another workspace.

The settings are as follows:

Runs is pretty straightforward. States lets you inspect the state changes directly. So you can see the full JSON of a resource and roll back to a specific state version. This can be nice for reviewing what specifically changed on each resource, but in my experience you don't get much over looking at the actual code. But if you are in a situation where something has suddenly broken and you need a fast way of saying "what was added and what was removed", this is where you would go.

NOTE: BE SUPER CAREFUL WITH THIS

The state inspector has the potential to show TONS of sensitive data. It's all the data in Terraform in the raw form. Just be aware it exists when you start using the service and take a look to ensure there isn't anything you didn't want there.

Variables are variables, and the settings allow you to lock the workspace, apply Sentinel settings, set an SSH key for downloading private modules and, finally, choose whether changes in the VCS should trigger an action here. So for instance, when you merge in a PR you can trigger Terraform Cloud to automatically apply this workspace. Nothing super new here compared to any CI/CD system, but still, it is all baked in.

That's it!

No-Code Modules

One selling point I heard a lot about, but haven't actually seen anyone use. The idea is good though: you write premade modules and push them to your private registry, then members of your organization can just run them to do things like "stand up a template web application stack". HashiCorp has a tutorial here that I ran through and found to work pretty much as expected. It isn't anywhere near the level of power that I would want, compared to something like Pulumi, but it is a nice step forward for automating truly routine tasks (like adding domain names to an internal domain or provisioning some SSL certificate for testing).

Dynamic Credentials

You can link Terraform Cloud and Vault, if you use it, so you no longer need to stick long-living credentials inside of the Workspace to access cloud providers. Instead you can leverage Vault to get short-lived credentials that improve the security of the Workspaces. I ran through this and did have problems getting it working for GCP, but AWS seemed to work well. It requires some setup inside of the actual repository, but it's a nice security improvement versus leaving production credentials in this random web application and hoping you don't mess up the user scoping.

User scoping is controlled primarily through "projects", which basically trickle down to the user level. You make a project, which has workspaces, that have their own variables and then assign that to a team or business unit. That same logic is reflected inside of credentials.

Private Registry

This is one thing Hashicorp nailed. It's very easy to hook up Terraform Cloud to allow your workspaces to access internal modules backed by your private repositories. It supports the same documentation options as public modules, tracks downloads and allows for easy versioning control through git tags. I have nothing but good things to say about this entire thing.

Sharing between organizations is something they lock at the top tier, but this seems like a very niche use case so I don't consider it to be too big of a problem. However if you are someone looking to produce a private provider or module for your customers to use, I would reach out to HashiCorp and see how they want you to do that.

The primary value for this is just to easily store all of your IaC logic in modules and then rely on the versioning inside of different environments to roll out changes. For instance, we do this for things like upgrading a system. Make the change, publish the new version to the private registry and then slowly roll it out. Then you can monitor the rollout through git grep pretty easily.

Pricing

$0.00014 per hour per resource. So a lot of money when you think "every IAM custom role, every DNS record, every SSL certificate, every single thing in your entire organization". You do get a lot of the nice features at this "standard" tier, but I'm kinda shocked they don't unlock all the enterprise features at this price point. No-code provisioning is only available at the higher levels, as is Drift detection, Continuous validation (checks between runs to see if anything has changed) and Ephemeral workspaces. The last one is a shame, because it looks like a great feature: set up your workspace to self-destruct at regular intervals so you can nuke development environments. I'd love to use that, but alas.
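
To put the per-resource-hour number in context, here is my own back-of-the-envelope math, assuming a 730-hour month:

# Rough Terraform Cloud cost estimate (my own arithmetic, not HashiCorp's calculator).
RATE_PER_RESOURCE_HOUR = 0.00014  # USD, the published "standard" tier price
HOURS_PER_MONTH = 730             # roughly 24 * 365 / 12

for resources in (500, 2_000, 10_000):
    monthly = resources * HOURS_PER_MONTH * RATE_PER_RESOURCE_HOUR
    print(f"{resources:>6} resources -> ~${monthly:,.0f}/month")

# Prints roughly $51, $204 and $1,022 a month, before any of the enterprise features.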

Problems

Oh the problems. So the runners sometimes get "stuck", which seems to usually happen after someone cancels a job in the web UI. You'll run into an issue, try to cancel a job, fix the problem and rerun the runner only to have it get stuck forever. I've sat there and watched it try to load the modules for 45 minutes. There isn't any way I have seen to tell Terraform Cloud "this runner is broken, go get me another one". Sometimes they get stuck for an unknown reason.

Since you need to make all the plans and applies remotely to get any value out of the service, it can also sometimes cause traffic jams in your org. If you work with Terraform a lot, you know you need to run plans pretty regularly. Since you need to wait for a runner every single time, you can end up wasting a lot of time sitting there waiting for another job to finish. Again I'm not sure what triggers you getting another runner. You can self host, but then I'm truly baffled at what value this tool brings.

Even if that was an option for you and you wanted to do it, it's locked behind the highest subscription tier. So I can't even say "add a self-hosted runner just for plans" so I could unstick my team. This seems like an obvious add, along with a lot more runner controls so I could see what was happening and how to avoid getting it jammed up.

Conclusion

I feel bad this is so short, but there just isn't anything else to write. This is a super bare-bones tool that does what it says on the box for a lot of money. It doesn't give you a ton of value over Spacelift or any of the others. I can't recommend it: it doesn't work particularly well and I haven't enjoyed my time with it. Managing it vs using an S3 bucket is an experience I would describe as "marginally better". It's nice that it handles contention across teammates for me, but so do all the others at a lower price.

I cannot think of a single reason to recommend this over Spacelift, which has better pricing, better tooling and seems to have a better runner system, except for the license change. Which was clearly the point of the license change. However for those evaluating options, head elsewhere. This thing isn't worth the money.


We need a different name for non-technical tech conferences

I recently returned from Google Cloud Next. Typically I wouldn't go to a vendor conference like this, since they're usually thinly veiled sales meetings wearing the trench-coat of a conference. However I've been to a few GCP events and found them to be technical and well-run, so I rolled the dice and hopped on the 11 hour flight from London to San Francisco.  

We all piled into Moscone Center and I was pretty hopeful. There were a lot of engineers from Google and other reputable orgs, and the list of talks we had signed up for before showing up sounded good, or at least useful. I figured this could be a good opportunity to get some idea of where GCP was going and perhaps hear about some large customers' technical workarounds to known limitations and issues with the platform. Then we got to the keynote.

AI. The only topic discussed and the only thing anybody at the executive level cared about was AI. This would become a theme, a constant refrain among every executive-type I spoke to. AI was going to replace customer service, programmers, marketing, copy writers, seemingly every single person in the company except for the executives. It seemed only the VPs and the janitors were safe. None of the leaders I spoke to afterwards seemed to appreciate my observation that if they spent most of their day in meetings being shown slide decks, wouldn't they be the easiest to replace with a robot? Or maybe their replacement could be a mop with sunglasses leaned against an office chair if no robot was available.

I understand keynotes aren't for engineers, but the sense I got from this was "nothing has happened in GCP anywhere else except for AI". This isn't true, like objectively I know new things have been launched, but it sends a pretty clear message that it's not a priority if nobody at the executive level seems to care about them. This is also a concern because Google famously has institutional ADHD with an inability to maintain long-term focus on slowly incrementing and improving a product. Instead it launches amazing products, years ahead of the competition then, like a child bored with a toy, drops them into the backyard and wanders away. But whatever, let's move on from the keynote.

Over the next few days what I experienced was an event with some fun moments, mostly devoid of any technical discussion whatsoever. Talks were rarely geared towards technical staff, and when technical questions came up during the recorded events they were almost never answered. Most importantly, no presentation I heard even remotely touched on long-known missing features of GCP compared to peers, or on roadmaps. When I asked technical questions, often Google employees would come up to me after the talk with the answer, which I appreciate. But everyone at home and in the future won't get that experience and will miss out on the benefit.

Most talks were the GCP product's marketing page turned into slides, with a seemingly mandatory reference to AI in each one. Several presenters joked about "that was my required AI callout", which started funny, but as time went on I began to worry...maybe they were actually required to mention AI? There were almost no live demos (some were pre-recorded, which is OK, but live is more compelling), zero code shown, mostly a tour of existing things the GCP web console could do along with a few new features. I ended up getting more value from finding the PMs of various products on the floor and subjecting these poor souls to my many questions.

This isn't just a Google problem. Every engineer I spoke to about this talked about a similar time they got burned going to a "not a conference conference". From AWS to Salesforce and Facebook, these organizations pitch people on getting facetime with engineers and concrete answers to questions. Instead they're an opportunity to pitch you on more products, letting executives feel loved by ensuring they get one-on-one time with senior folks in the parent company. They sound great but mostly it's an opportunity to collect stickers.

We need to stop pretending these types of conferences are technical conferences. They're not. It's an opportunity for non-technical people inside of your organization who interact with your technical SaaS providers to get facetime with employees of that company and ask basic questions in a shame-free environment. That has value and should be something that exists, but you should also make sure engineers don't wander into these things.

Here are the 7 things I think you shouldn't do if you call yourself a tech conference.

7 Deadly Sins of "Tech" Conferences

  • Discussing internal tools that aren't open source and that I can't see or use. It's great if X corp has worked together with Google to make the perfect solution to a common problem. It doesn't mean shit to me if I can't use it or at least see it and ask questions about it. Don't let it into the slide deck if it has zero value to the community outside of showing that "solving this problem is possible".
  • Not letting people who work with customers talk about common problems. I know, from talking to Google folks and from lots of talks with other customers, common issues people experience with GCP products. Some are misconfigurations or not understanding what the product is good at and designed to do. If you talk about a service, you need to discuss something about "common pitfalls" or "working around frequently seen issues".
  • Pretending a sales pitch is a talk. Nothing makes me see red like halfway through a talk, inviting the head of sales onto the stage to pitch me on their product. Jesus Christ, there's a whole section of sales stuff, you gotta leave me alone in the middle of talks.
  • Not allowing a way for people to get questions into the livestream. Now this isn't true for every conference, but if this is the one time a year people can ask questions of the PM for a major product and see if they intend to fix a problem, let me ask that question. I'll gladly submit it beforehand and let people vote on it, or whatever you want. It can't be a free-for-all but there has to be something.
  • Skipping all specifics. If you are telling me that X service is going to solve all my problems and you have 45 minutes, don't spend 30 explaining how great it is in the abstract. Show me how it solves those problems in detail. Some of the Google presenters did this and I'm extremely grateful to them, but it should have been standard. I saw the "Google is committed to privacy and safety" generic slides so many times across different presentations that I remembered the stock photo of two women looking at code and started trying to read what she had written. I think it was Javascript.
  • Blurring the line between presenter and sponsor. Most well-run tech conferences I've been to make it super clear when you are hearing from a sponsor vs when someone is giving an unbiased opinion. A lot of these not-tech tech conferences don't, where it sounds like a Google employee is endorsing a third-party solution who has also sponsored the event. For folks new to this environment, it's misleading. Is Google saying this is the only way they endorse doing x?
  • Keeping all the real content behind NDAs. Now during Next there were a lot of super useful meetings that happened, but I wasn't in them. I had to learn about them from people at the bar who had signed NDAs and were invited to learn actual information. If you aren't going to talk about roadmap or any technical details or improvements publicly, don't bother having the conference. Release a PDF with whatever new sales content you want me to read. The folks who are invited to the real meetings can still go to those. No judgement, you don't want to have those chats publicly, but don't pretend you might this year.

One last thing: if you are going to have a big conference with people meeting with your team, figure out some way you want them to communicate with that team. Maybe temporary email addresses or something? Most people won't use them, but it means a lot to people to think they have a way of having some line of communication with the company. If they get weird then just deactivate the temp email. It's weird to tell people "just come find me afterwards". Where?

What are big companies supposed to do?

I understand large companies are loath to share details unless forced to. I also understand that companies hate letting engineers speak directly to the end users, for fear that the people who make the sausage and the people who consume the sausage might learn something terrible about how it's made. That is the cost of holding a tech conference about your products. You have to let these two groups of people interact with each other and ask questions.

Now obviously there are plenty of great conferences based on open-source technology or about more general themes. These tend to be really high quality and I've gone to a ton I love. However there is value, as we all become more and more dependent on cloud providers, in letting me know more about what this platform is moving towards. I need to know what platforms like GCP are working on so I know which technologies inside the stack are on the rise and which are on the decline.

Instead these conferences are for investors and the business community rather than anyone interested in the products. The point of Next was to show the community that Google is serious about AI. Just like the point of the last Google conference was to show investors that Google is serious about AI. I'm confident the next conference Google has, on any topic, will also be asked to demonstrate their serious commitment to AI technology.

You can still have these. Call them something else. Call them "leadership conferences" or "vision conferences". Talk to Marketing and see what words you can slap in there that convey "you are an important person we want to talk about our products with" while also telling me, a technical peon, that you don't want me there. I'll be overjoyed not to fly 11 hours and you'll be thrilled not to have me asking questions of your engineers. Everybody wins.


Terraform is dead; Long live Pulumi?

The best tools in tech scale. They're not always easy to learn, they might take some time to get good with but once you start to use them they just stick with you forever. On the command line, things like gawk and sed jump to mind, tools that have saved me more than once. I've spent a decade now using Vim and I work with people who started using Emacs in university and still use it for 5 hours+ a day. You use them for basic problems all the time but when you need that complexity and depth of options, they scale with your problem. In the cloud when I think of tools like this, things like s3 and SQS come to mind, set and forget tooling that you can use from day 1 to day 1000.

Not every tool is like this. I've been using Terraform at least once a week for the last 5 years. I have led migrating two companies to Infrastructure as Code with Terraform from using the web UI of their cloud provider, writing easily tens of thousands of lines of HCL along the way. At first I loved Terraform, HCL felt easy to write, the providers from places like AWS and GCP are well maintained and there are tons of resources on the internet to get you out of any problem.

As the years went on, our relationship soured. Terraform has warts that, at this point, either aren't solvable or aren't something that can be solved without throwing away a lot of previous work. In no particular order, here are my big issues with Terraform:

  • It scales poorly. Terraform often starts with dev, stage and prod as different workspaces. However, since both terraform plan and terraform apply make API calls to your cloud provider for each resource, it doesn't take long before those commands get slow. You run plan a lot when working with Terraform, so this isn't a trivial thing.
  • Then you don't want to repeat yourself, so you start moving more complicated logic into Modules. At this point the environments are completely isolated state files with no mixing; if you try to cross accounts, things get more complicated. The basic structure you quickly adopt looks like this.
  • At some point you need better DRY coverage, better environment handling, different backends for different environments, and you need to work with multiple modules concurrently. Then you explore Terragrunt, which is a great tool, but it's now another tool on top of the first Infrastructure as Code tool; it works with Terraform Cloud, but requires some tweaks to do so.
  • Now you and your team realize that Terraform can destroy the entire company if you make a mistake, so you start to subdivide different resources out into different states. Typically you'll have the "stateless resources" in one area and the "stateful" resources in another, but actually dividing stuff up into one or another isn't completely straightforward. Destroying an SQS queue is really bad, but is it stateful? Kubernetes nodes don't have state but they're not instantaneous to fix either.
  • HCL isn't a programming language. It's a fine alternative to YAML or JSON, but it lacks a lot of the tooling you want when dealing with more complex scenarios. You can do many of the normal things like conditionals, joins, trys, loops, for_each, but they're clunky and limited when compared to something like Golang or Python.
  • The tooling around HCL is pretty barebones. You get some syntax checking, but otherwise it's a lot of switching tmux panes to figure out why it worked one place and didn't work another place.
  • terraform validate and terraform plan don't mean the thing is going to work. You can write something, it'll pass both check stages and fail on apply. This can be really bad, as your team needs to basically wait for you to fix whatever you did so the infrastructure isn't in an inconsistent place or half working. This shouldn't happen in theory, but it's a common problem.
  • If an apply fails, it's not always possible to back out. This is especially scary when there are timeouts, when something is still happening inside of the providers stack but now Terraform has given up on knowing what state it was left in.
  • Versioning is bad. Typically whatever version of Terraform you started with is what you have until someone decides to try to upgrade and hope nothing breaks. tfenv becomes a mission critical tool. Provider version drift is common, again typically "whatever the latest version was when someone wrote this module".

License Change

All of this is annoying, but I've learned to grumble and live with it. Then HashiCorp decided to pull the panic lever of "open-source" companies, which is a big license change. Even though Terraform Cloud, their money-making product, was never open-source, they decided that the Terraform CLI needed to fall under the BSL. You can read it here. The specific clause people are getting upset about is below:

You may make production use of the Licensed Work,
provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp's products.

Now this clause, combined with the 4 year expiration date, effectively kills the Terraform ecosystem. Nobody is going to authorize internal teams to open-source any complementary tooling with the BSL in place and there certainly isn't going to be any competitive pressure to improve Terraform. While it doesn't, at least how I read it as not a lawyer, really impact most usage of Terraform as just a tool that you run on your laptop, it does make the future of Terraform development directly tied to Terraform Cloud. This wouldn't be a problem except Terraform Cloud is bad.

Terraform Cloud

I've used it for a year, and it's extremely bare-bones software. It picks the latest version of Terraform when you make the workspace and then that's it. It doesn't help you upgrade Terraform, it doesn't really do any checking or optimizations, structure suggestions or anything else you need as Terraform scales. It sorta integrates with Terragrunt but not really. Basically it is identical to the CLI output of Terraform with some slight visual dressing. Then there's the kicker: the price.

$0.00014 per resource per hour. This is predatory pricing. First, because Terraform drops in value to zero if you can't put everything into Infrastructure as Code. HashiCorp knows this, hence the per-resource price. Second because they know it's impossible for me, the maintainer of the account, to police. What am I supposed to do, tell people "no you cannot have a custom IAM policy because we can't have people writing safe scoped roles"? Maybe I should start forcing subdomain sharing, make sure we don't get too spoiled with all these free hostnames. Finally it's especially grating because we're talking about sticking small collections of JSON onto object storage. There's no engineering per resource, no scaling concerns on HashiCorp's side and disk space is cheap to boot.

This combined with the license change is enough for me. I'm out. I'll deal with some grief to use your product, but at this point HashiCorp has overplayed the value of Terraform. It's a clunky tool that scales poorly and I need to do all the scaling and upgrade work myself with third-party tools, even if I pay you for your cloud product. The per-hour pricing is just the final nail in the coffin from HashiCorp.

I asked around for an alternative and someone recommended Pulumi. I'd never heard of them before, so I thought this could be a super fun opportunity to try them out.

Pulumi

Pulumi and Terraform are similar, except unlike Terraform with HCL, Pulumi has lots of scale built in. Why? Because you can use a real programming language to write your Infrastructure as Code. It's a clever concept, letting you scale up the complexity of your project from writing just YAML to writing Golang or Python.

Here is the basic outline of how Pulumi structures infrastructure.

You write programs inside of projects with Node.js, Python, Golang, .NET, Java or YAML. Programs define resources. You then run the programs inside of stacks, which are different environments. It's nice that Pulumi comes with the project structure defined, versus Terraform where you define it yourself. Every stack has its own state out of the box, which again is a built-in optimization.
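
To make that concrete, here is a minimal sketch of a program (using the pulumi_aws provider purely as an example; the bucket names and tags are made up). Because it's ordinary Python, a plain loop replaces HCL's count/for_each gymnastics:

# __main__.py - an illustrative minimal Pulumi program.
import pulumi
import pulumi_aws as aws

env = pulumi.get_stack()  # e.g. "dev", "stage" or "prod"

# A plain Python loop instead of HCL's count/for_each.
buckets = []
for name in ("logs", "artifacts", "backups"):
    buckets.append(
        aws.s3.Bucket(
            f"{name}-{env}",
            tags={"environment": env, "managed-by": "pulumi"},
        )
    )

# Export the generated bucket names so other stacks (or humans) can find them.
pulumi.export("bucket_names", [b.id for b in buckets])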

Installation was easy and they had all the expected install options. Going through the source code I was impressed with the quality, but was concerned about the 1,718 open issues as of writing this. Clicking around it does seem like they're actively working on them and it has your normal percentage of "not real issues but just people opening them as issues" problem. Also a lot of open issues with comments suggests an engaged user base. The setup on my side was very easy and I opted not to use their cloud product, mostly because it has the same problem that Terraform Cloud has.

A Pulumi Credit is the price for managing one resource for one hour. If using the Team Edition, each credit costs $0.0005. For billing purposes, we count any resource that's declared in a Pulumi program. This includes provider resources (e.g., an Amazon S3 bucket), component resources, which are groupings of resources (e.g., an Amazon EKS cluster), and stacks which contain resources (e.g., dev, test, prod stacks).
You consume one Pulumi Credit to manage each resource for an hour. For example, one stack containing one S3 bucket and one EC2 instance is three resources that are counted in your bill. Example: If you manage 625 resources with Pulumi every month, you will use 450,000 Pulumi Credits each month. Your monthly bill would be $150 USD = (450,000 total credits - 150,000 free credits) * $0.0005.

My mouth was actually agape when I got to that monthly bill. I get 150k credits for "free" with Teams which is 200 resources a month. That is absolutely nothing. That's "my DNS records live in Infrastructure as Code". But paying per hour doesn't even unlock all the features! I'm limited on team size, I don't get SSO, I don't get support. Also you are the smaller player, how do you charge more than HashiCorp? Disk space is real cheap and these files are very small. Charge me $99 a month per runner or per user or whatever you need to, but I don't want to ask the question "are we putting too much of our infrastructure into code". It's either all in there or there's zero point and this pricing works directly against that goal.

Alright so Pulumi Cloud is out. Maybe the Enterprise pricing is better but that's not on the website so I can't make a decision based on that. I can't mentally handle getting on another sales email list. Thankfully Pulumi has state locking with S3 now according to this so this isn't a deal-breaker.  Let's see what running it just locally looks like.

Pulumi Open-Source only

Thankfully they make that pretty easy. pulumi login --local means your state is stored locally, encrypted with a passphrase. To use s3 just switch that to pulumi login s3:// Now managing state locally or using S3 isn't a new thing, but it's nice that switching between them is pretty easy. You can start local, grow to S3 and then migrate to their Cloud product as you need. Run pulumi new python for a new blank Python setup.

❯ pulumi new python
This command will walk you through creating a new Pulumi project.

Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.

project name: (test) test
project description: (A minimal Python Pulumi program)
Created project 'test'

stack name: (dev)
Created stack 'dev'
Enter your passphrase to protect config/secrets:
Re-enter your passphrase to confirm:

Installing dependencies...

Creating virtual environment...
Finished creating virtual environment
Updating pip, setuptools, and wheel in virtual environment...

I love that it does all the correct Python things. We have a venv, we've got a requirements.txt and we've got a simple configuration file. Working with it was delightful. Setting my Hetzner API key as a secret was easy and straight-forward with: pulumi config set hcloud:token XXXXXXXXXXXXXX --secret. So what does working with it look like? Let's look at an error.

❯ pulumi preview
Enter your passphrase to unlock config/secrets
    (set PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember):
Previewing update (dev):
     Type                 Name               Plan     Info
     pulumi:pulumi:Stack  matduggan.com-dev           1 error


Diagnostics:
  pulumi:pulumi:Stack (matduggan.com-dev):
    error: Program failed with an unhandled exception:
    Traceback (most recent call last):
      File "/opt/homebrew/bin/pulumi-language-python-exec", line 197, in <module>
        loop.run_until_complete(coro)
      File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
        return future.result()
               ^^^^^^^^^^^^^^^
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 137, in run_in_stack
        await run_pulumi_func(lambda: Stack(func))
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 49, in run_pulumi_func
        func()
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 137, in <lambda>
        await run_pulumi_func(lambda: Stack(func))
                                      ^^^^^^^^^^^
      File "/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py", line 160, in __init__
        func()
      File "/opt/homebrew/bin/pulumi-language-python-exec", line 165, in run
        return runpy.run_path(args.PROGRAM, run_name='__main__')
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "<frozen runpy>", line 304, in run_path
      File "<frozen runpy>", line 240, in _get_main_module_details
      File "<frozen runpy>", line 159, in _get_module_details
      File "<frozen importlib._bootstrap_external>", line 1074, in get_code
      File "<frozen importlib._bootstrap_external>", line 1004, in source_to_code
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/Users/mathew.duggan/Documents/work/pulumi/__main__.py", line 14
        )], user_data="""
                      ^
    SyntaxError: unterminated triple-quoted string literal (detected at line 17)

We get all the super clear output of a Python error message, we still get the secrets encryption and we get all the options of Python when writing the file. However things get a little unusual when I go to inspect the state files.
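
For reference, the sort of program behind that error is just a small Hetzner server definition. A minimal sketch, with the server type, image and user_data chosen purely for illustration (check the pulumi_hcloud docs for the exact options):

# Illustrative sketch of a small Hetzner Cloud server in Pulumi.
import pulumi
import pulumi_hcloud as hcloud

server = hcloud.Server(
    "web-1",
    image="debian-12",
    server_type="cx22",
    location="fsn1",
    # The unterminated version of this triple-quoted string was the error above.
    user_data="""#cloud-config
packages:
  - docker.io
""",
)

pulumi.export("ipv4", server.ipv4_address)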

Local State Files

For some reason when I select local, Pulumi doesn't store the state files in the same directory as where I'm working. Instead it stores them as a user preference at ~/.pulumi which is odd. I understand I selected local, but it's weird to assume I don't want to store the state in git or something. It is also storing a lot of things in my user directory: 358 directories, 848 files. Every template is its own directory.

How can you set it up to work correctly?

rm -rf ~/.pulumi
mkdir test && cd test
mkdir pulumi
pulumi login file://pulumi/
pulumi new --force python
cd ~/.pulumi
336 directories, 815 files

If you go back into the directory  and go to /test/pulumi/.pulumi you do see the state files. The force flag is required to let it create the new project inside a directory with stuff already in it. It all ends up working but it's clunky.

Maybe I'm alone on this, but I feel like this is unnecessarily complicated. If I'm going to work locally, the assumption should be I'm going to sit this inside of a repo. Or at the very least I'm going to expect the directory to be a self-contained thing. Also don't put stuff at $HOME/.pulumi. The correct location is ~/.config. I understand nobody follows that rule but the right places to put it are: in the directory where I make the project or in ~/.config.

S3-compatible State

Since this is the more common workflow, let me talk a bit about S3 remote backend. I tried to do a lot of testing to cover as many use-cases as possible. The lockfile works and is per stack, so you do have that basic level of functionality. Stacks cannot reference each other's outputs unless they are in the same bucket as far as I can tell, so you would need to plan for one bucket. Sharing stack names across multiple projects works, so you don't need to worry that every project has a dev, stage and prod. State encryption is your problem, but that's pretty easy to deal with in modern object storage.
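
For completeness, cross-stack outputs go through stack references. A minimal sketch, with made-up project and stack names (the exact qualified name depends on your backend and Pulumi version):

# Illustrative sketch: consuming another stack's outputs with a StackReference.
import pulumi

# With a self-managed backend the reference is typically "<project>/<stack>" or
# just the stack name; adjust to however your backend qualifies stacks.
network = pulumi.StackReference("networking/prod")

vpc_id = network.get_output("vpc_id")  # an Output you can feed into other resources

pulumi.export("consumed_vpc_id", vpc_id)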

The login process is basically pulumi login 's3://?region=us-east-1&awssdk=v2&profile=' and for GCP pulumi login gs://. You can see all the custom backend setup docs here. I also moved between custom backends, going from local to s3 and from s3 to GCP. It all functioned like I would expect, which was nice.

Otherwise nothing exciting to report. In my testing it worked as well as local, and trying to break it with a few folks working on the same repo didn't reveal any obvious problems. It seems as reliable as Terraform in S3, which is to say not perfect but pretty good.

Daily use

Once Pulumi was set up to use object storage, I tried to use it to manage a non-production project in Google Cloud along with someone else who agreed to work with me on it. I figured with at least two people doing the work, the experience would be more realistic.

Compared to working with Terraform, I felt like Pulumi was easier to use. Having all of the options and autocomplete of an IDE available to me when I wanted it really sped things up, plus handling edge cases that previously would have required a lot of very sensitive HCL was very simple with Python. I also liked being able to write tests for infrastructure code, which made things like database operations feel less dangerous. In Terraform the only safety check is whoever is looking at the output, so having another level of checking before potentially destroying resources was nice.
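
The testing setup is slightly unusual because resources have to be mocked before the program module is imported. A rough sketch based on Pulumi's unit-testing interface follows; exact signatures vary a little between SDK versions, and my_infra is a hypothetical module name:

# test_infra.py - illustrative sketch of a Pulumi unit test with mocked resources.
import pulumi

class Mocks(pulumi.runtime.Mocks):
    def new_resource(self, args: pulumi.runtime.MockResourceArgs):
        # Hand back a fake ID and echo the declared inputs as the resource outputs.
        return [args.name + "_id", args.inputs]

    def call(self, args: pulumi.runtime.MockCallArgs):
        return {}

pulumi.runtime.set_mocks(Mocks())

import my_infra  # hypothetical module containing the resources under test

@pulumi.runtime.test
def test_buckets_are_tagged():
    def check_tags(tags):
        assert tags and tags.get("managed-by") == "pulumi"
    return my_infra.bucket.tags.apply(check_tags)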

While Pulumi does provide more opinions on how to structure it, even with two of us there were quickly some disagreements on the right way to do things. I prefer more of a monolithic design and my peer prefers smaller stacks, which you can do, but I find chaining together the stack outputs to be more work than it's worth. I found the micro-service style in Pulumi to be a bit grating and easy to break, while the monolithic style was much easier for me to work in.

Setting up a CI/CD pipeline wasn't too challenging, basing everything off of this image. All the CI/CD docs on their website presuppose you are using the Cloud product, which again makes sense and I would be glad to do if they changed the pricing. However rolling your own isn't hard, it works as expected, but I want to point out one sticking point I ran into that isn't really Pulumi's fault so much as it is "the complexity of adding in secrets support".

Pulumi Secrets

So Pulumi integrates with a lot of secret managers, which is great. It also has its own secret manager which works fine. The key things to keep in mind are: if you are adding a secret, make sure you flag it as a secret to keep it from getting printed on the output. If you are going to use an external secrets manager, set aside some time to get that working. It took a bit of work to get the permissions such that CI/CD and everything else worked as expected, especially with the micro-service design where one program relied on the output of another program. You can read the docs here.
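
For the built-in flow, the pattern is: set the value with pulumi config set --secret, read it in the program, and everything derived from it stays encrypted in state and masked in output. A minimal sketch (the key name and connection string are made up):

# Illustrative sketch: reading a secret config value in a Pulumi program.
# Set it first with: pulumi config set --secret dbPassword 's3cr3t'
import pulumi

config = pulumi.Config()
db_password = config.require_secret("dbPassword")  # an Output flagged as secret

# Values built from a secret stay secret: encrypted in state, masked in preview/up output.
conn_string = db_password.apply(lambda p: f"postgres://app:{p}@db.internal:5432/app")

pulumi.export("connection_string", pulumi.Output.secret(conn_string))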

Unexpected Benefits

Here are some delightful (maybe obvious) things I ran into while working with Pulumi.

  • We already have experts in these languages. It was great to be able to ask someone with years of Python development experience "what is the best way to structure large Python projects". There is so much expertise and documentation out there vs the wasteland that is Terraform project architecture.
  • Being able to use a database. Holy crap, this was a real game-changer to me. I pulled down the GCP IAM stock roles, stuck them in SQLite and then was able to query them depending on the set of permissions the service account or user group required. Very small thing, but a massive time-saver vs me going to the website and searching around. It also lets me automate the entire process of Ticket -> PR for IAM role.
This is what I'm talking about.
  • You can set up easy APIs. Making a website that generates HCL to stick into a repo and then make a PR? Nightmare. Writing a simple Flask app that runs Pulumi against your infrastructure with scoped permissions? Not bad at all (see the sketch after this list). If your org does something like "add a lot of DNS records" or "add a lot of SSH keys", this really has the potential to change your workday. Also it's easy to set up an abstraction for your entire Infrastructure. Pulumi has docs on how to get started with all of this here. Slack bots, simple command-line tools, all of it was easy to do.
  • Tests. It's nice to be able to treat infrastructure like it's important.
  • Getting better at a real job skill. Every hour I get more skilled in writing Golang, I'm more valuable to my organization. I'm also just getting more hours writing code in an actual programming language, which is always good. Every hour I invest in HCL is an hour I invested in something that no other tool will ever use.
  • Speed seemed faster than Terraform. I don't know why that would be, but it did feel like especially on successive previews the results just came much faster. This was true on our CI/CD jobs as well, timing them against Terraform it seemed like Pulumi was faster most of the time. Take this with a pile of salt, I didn't do a real benchmark and ultimately we're hitting the same APIs, so I doubt there's a giant performance difference.
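
To give a flavor of the Flask idea mentioned above, Pulumi's Automation API lets you drive the engine from an ordinary web handler. This is a rough sketch: the endpoint, DNS zone and record details are hypothetical, and it assumes backend credentials and the config passphrase are already set in the environment.

# Illustrative sketch: a tiny Flask endpoint that runs Pulumi via the Automation API.
from flask import Flask, request, jsonify
import pulumi_gcp as gcp
from pulumi import automation as auto

app = Flask(__name__)

def make_record_program(record_name: str, ip: str):
    # Returns an inline Pulumi program declaring one DNS record (hypothetical zone).
    def program():
        gcp.dns.RecordSet(
            record_name,
            managed_zone="internal-zone",
            name=f"{record_name}.internal.example.com.",
            type="A",
            ttl=300,
            rrdatas=[ip],
        )
    return program

@app.post("/dns")
def add_dns_record():
    body = request.get_json()
    stack = auto.create_or_select_stack(
        stack_name="dns-prod",
        project_name="internal-dns",
        program=make_record_program(body["name"], body["ip"]),
    )
    result = stack.up(on_output=print)  # plan and apply in one step
    return jsonify({"changes": result.summary.resource_changes})

if __name__ == "__main__":
    app.run(port=8080)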

Conclusion

Do I think Pulumi can take over the Terraform throne? There's a lot to like here. The product is one of those great ideas, a natural evolution from where we started in DevOps to where we want to go. Moving towards treating infrastructure like everything else is the next logical leap and they have already done a lot of the ground work. I want Pulumi to succeed, I like it as a product.

However it needs to get out of its own way. The pricing needs a rethink; make it a no-brainer for me to use your cloud product and get fully integrated into it. If you give me a reliable, consistent bill I can present to leadership, I don't have to worry about Pulumi as a service I need to police. The entire organization can be let loose to write whatever infra they need, which benefits us and Pulumi, as we'll be more dependent on their internal tooling.

If cost management is a big issue, have me bring my own object storage and VMs for runners. Pulumi can still thrive and be very successful without being a zero-setup business. This is a tool for people who maintain large infrastructures. We can handle some infrastructure requirements if that is the sticking point.

Hopefully the folks running Pulumi see this moment as the opportunity it is, both for the field at large to move past markup languages and for them to make a grab for a large share of the market.

If there is interest I can do more write-ups on sample Flask apps or Slack bots or whatever. Also if I made a mistake or you think something needs clarification, feel free to reach out to me here: https://c.im/@matdevdug.


Adventures in IPv6 Part 2

As I discussed in Part 1 I've converted this site over to pure IPv6. Well at least as pure as I could get away with. I still have some problems though, chief among them that I cannot send emails with the Ghost CMS. I've switched from Mailgun to Scaleway which does have IPv6 for their SMTP service.

smtp.tem.scw.cloud has IPv6 address 2001:bc8:1201:21:d6ae:52ff:fed0:418e
smtp.tem.scw.cloud has IPv6 address 2001:bc8:1201:21:d6ae:52ff:fed0:6aac

I've also confirmed that my docker-compose stack running Ghost can successfully reach IPv6 external addresses with no issues.

matdevdug-busy-1      | PING google.com (2a00:1450:4002:411::200e): 56 data bytes
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=0 ttl=113 time=15.079 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=1 ttl=113 time=14.607 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=2 ttl=113 time=14.540 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=3 ttl=113 time=14.593 ms
matdevdug-busy-1      |
matdevdug-busy-1      |
matdevdug-busy-1      | --- google.com ping statistics ---
matdevdug-busy-1      | 4 packets transmitted, 4 packets received, 0% packet loss
matdevdug-busy-1      | round-trip min/avg/max = 14.540/14.704/15.079 ms

I've also confirmed that Scaleway is reachable by the container no problem with the domain name, so it isn't a DNS problem.

PING smtp.tem.scw.cloud(ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac)) 56 data bytes
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=1 ttl=53 time=23.1 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=2 ttl=53 time=22.2 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=3 ttl=53 time=22.2 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=4 ttl=53 time=22.1 ms

--- smtp.tem.scw.cloud ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 22.086/22.397/23.063/0.388 ms

At this point I have three theories.

  1. It's an SMTP problem. Possible, but unlikely given how long SMTP has supported IPv6. A quick check, running it over bash by following the instructions here, shows that it works fine.
  2. Something is blocking the port.
telnet smtp.tem.scw.cloud 587
Trying 2001:bc8:1201:21:d6ae:52ff:fed0:6aac...
Connected to smtp.tem.scw.cloud.
Escape character is '^]'.
220 smtp.scw-tem.cloud ESMTP Service Ready

Alright it's not that.

3. Nodemailer is being stupid. It looks like Ghost relies on Nodemailer, so let's check it out. Let's install Node and NPM on my Debian junk machine.

sudo apt install npm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  eslint gyp handlebars libjs-async libjs-events libjs-inherits libjs-is-typedarray libjs-prettify libjs-regenerate libjs-source-map
  libjs-sprintf-js libjs-typedarray-to-buffer libjs-util libnode-dev libssl-dev libuv1-dev node-abbrev node-agent-base node-ajv node-ajv-keywords
  node-ampproject-remapping node-ansi-escapes node-ansi-regex node-ansi-styles node-anymatch node-aproba node-archy node-are-we-there-yet
  node-argparse node-arrify node-assert node-async node-async-each node-babel-helper-define-polyfill-provider node-babel-plugin-add-module-exports
  node-babel-plugin-lodash node-babel-plugin-polyfill-corejs2 node-babel-plugin-polyfill-corejs3 node-babel-plugin-polyfill-regenerator node-babel7
  node-babel7-runtime node-balanced-match node-base64-js node-binary-extensions node-brace-expansion node-braces node-browserslist node-builtins
  node-cacache node-camelcase node-caniuse-lite node-chalk node-chokidar node-chownr node-chrome-trace-event node-ci-info node-cli-table node-cliui
  node-clone node-clone-deep node-color-convert node-color-name node-colors node-columnify node-commander node-commondir node-concat-stream
  node-console-control-strings node-convert-source-map node-copy-concurrently node-core-js node-core-js-compat node-core-js-pure node-core-util-is
  node-css-loader node-css-selector-tokenizer node-data-uri-to-buffer node-debbundle-es-to-primitive node-debug node-decamelize
  node-decompress-response node-deep-equal node-deep-is node-defaults node-define-properties node-defined node-del node-delegates node-depd
  node-diff node-doctrine node-electron-to-chromium node-encoding node-end-of-stream node-enhanced-resolve node-err-code node-errno node-error-ex
  node-es-abstract node-es-module-lexer node-es6-error node-escape-string-regexp node-escodegen node-eslint-scope node-eslint-utils
  node-eslint-visitor-keys node-espree node-esprima node-esquery node-esrecurse node-estraverse node-esutils node-events node-fancy-log
  node-fast-deep-equal node-fast-levenshtein node-fetch node-file-entry-cache node-fill-range node-find-cache-dir node-find-up node-flat-cache
  node-flatted node-for-in node-for-own node-foreground-child node-fs-readdir-recursive node-fs-write-stream-atomic node-fs.realpath
  node-function-bind node-functional-red-black-tree node-gauge node-get-caller-file node-get-stream node-glob node-glob-parent node-globals
  node-globby node-got node-graceful-fs node-gyp node-has-flag node-has-unicode node-hosted-git-info node-https-proxy-agent node-iconv-lite
  node-icss-utils node-ieee754 node-iferr node-ignore node-imurmurhash node-indent-string node-inflight node-inherits node-ini node-interpret
  node-ip node-ip-regex node-is-arrayish node-is-binary-path node-is-buffer node-is-extendable node-is-extglob node-is-glob node-is-number
  node-is-path-cwd node-is-path-inside node-is-plain-obj node-is-plain-object node-is-stream node-is-typedarray node-is-windows node-isarray
  node-isexe node-isobject node-istanbul node-jest-debbundle node-jest-worker node-js-tokens node-js-yaml node-jsesc node-json-buffer
  node-json-parse-better-errors node-json-schema node-json-schema-traverse node-json-stable-stringify node-json5 node-jsonify node-jsonparse
  node-kind-of node-levn node-loader-runner node-locate-path node-lodash node-lodash-packages node-lowercase-keys node-lru-cache node-make-dir
  node-memfs node-memory-fs node-merge-stream node-micromatch node-mime node-mime-types node-mimic-response node-minimatch node-minimist
  node-minipass node-mkdirp node-move-concurrently node-ms node-mute-stream node-n3 node-negotiator node-neo-async node-nopt
  node-normalize-package-data node-normalize-path node-npm-bundled node-npm-package-arg node-npm-run-path node-npmlog node-object-assign
  node-object-inspect node-once node-optimist node-optionator node-osenv node-p-cancelable node-p-limit node-p-locate node-p-map node-parse-json
  node-path-dirname node-path-exists node-path-is-absolute node-path-is-inside node-path-type node-picocolors node-pify node-pkg-dir node-postcss
  node-postcss-modules-extract-imports node-postcss-modules-values node-postcss-value-parser node-prelude-ls node-process-nextick-args node-progress
  node-promise-inflight node-promise-retry node-promzard node-prr node-pump node-punycode node-quick-lru node-randombytes node-read
  node-read-package-json node-read-pkg node-readable-stream node-readdirp node-rechoir node-regenerate node-regenerate-unicode-properties
  node-regenerator-runtime node-regenerator-transform node-regexpp node-regexpu-core node-regjsgen node-regjsparser node-repeat-string
  node-require-directory node-resolve node-resolve-cwd node-resolve-from node-resumer node-retry node-rimraf node-run-queue node-safe-buffer
  node-schema-utils node-semver node-serialize-javascript node-set-blocking node-set-immediate-shim node-shebang-command node-shebang-regex
  node-signal-exit node-slash node-slice-ansi node-source-list-map node-source-map node-source-map-support node-spdx-correct node-spdx-exceptions
  node-spdx-expression-parse node-spdx-license-ids node-sprintf-js node-ssri node-string-decoder node-string-width node-strip-ansi node-strip-bom
  node-strip-json-comments node-supports-color node-tapable node-tape node-tar node-terser node-text-table node-through node-time-stamp
  node-to-fast-properties node-to-regex-range node-tslib node-type-check node-typedarray node-typedarray-to-buffer
  node-unicode-canonical-property-names-ecmascript node-unicode-match-property-ecmascript node-unicode-match-property-value-ecmascript
  node-unicode-property-aliases-ecmascript node-unique-filename node-uri-js node-util node-util-deprecate node-uuid node-v8-compile-cache
  node-v8flags node-validate-npm-package-license node-validate-npm-package-name node-watchpack node-wcwidth.js node-webassemblyjs
  node-webpack-sources node-which node-wide-align node-wordwrap node-wrap-ansi node-wrappy node-write node-write-file-atomic node-y18n node-yallist
  node-yargs node-yargs-parser terser webpack
Suggested packages:
  node-babel-eslint node-esprima-fb node-inquirer libjs-angularjs libssl-doc node-babel-plugin-polyfill-es-shims node-babel7-debug javascript-common
  livescript chai node-jest-diff node-opener
Recommended packages:
  javascript-common build-essential node-tap
The following NEW packages will be installed:
  eslint gyp handlebars libjs-async libjs-events libjs-inherits libjs-is-typedarray libjs-prettify libjs-regenerate libjs-source-map
  libjs-sprintf-js libjs-typedarray-to-buffer libjs-util libnode-dev libssl-dev libuv1-dev node-abbrev node-agent-base node-ajv node-ajv-keywords
  node-ampproject-remapping node-ansi-escapes node-ansi-regex node-ansi-styles node-anymatch node-aproba node-archy node-are-we-there-yet
  node-argparse node-arrify node-assert node-async node-async-each node-babel-helper-define-polyfill-provider node-babel-plugin-add-module-exports
  node-babel-plugin-lodash node-babel-plugin-polyfill-corejs2 node-babel-plugin-polyfill-corejs3 node-babel-plugin-polyfill-regenerator node-babel7
  node-babel7-runtime node-balanced-match node-base64-js node-binary-extensions node-brace-expansion node-braces node-browserslist node-builtins
  node-cacache node-camelcase node-caniuse-lite node-chalk node-chokidar node-chownr node-chrome-trace-event node-ci-info node-cli-table node-cliui
  node-clone node-clone-deep node-color-convert node-color-name node-colors node-columnify node-commander node-commondir node-concat-stream
  node-console-control-strings node-convert-source-map node-copy-concurrently node-core-js node-core-js-compat node-core-js-pure node-core-util-is
  node-css-loader node-css-selector-tokenizer node-data-uri-to-buffer node-debbundle-es-to-primitive node-debug node-decamelize
  node-decompress-response node-deep-equal node-deep-is node-defaults node-define-properties node-defined node-del node-delegates node-depd
  node-diff node-doctrine node-electron-to-chromium node-encoding node-end-of-stream node-enhanced-resolve node-err-code node-errno node-error-ex
  node-es-abstract node-es-module-lexer node-es6-error node-escape-string-regexp node-escodegen node-eslint-scope node-eslint-utils
  node-eslint-visitor-keys node-espree node-esprima node-esquery node-esrecurse node-estraverse node-esutils node-events node-fancy-log
  node-fast-deep-equal node-fast-levenshtein node-fetch node-file-entry-cache node-fill-range node-find-cache-dir node-find-up node-flat-cache
  node-flatted node-for-in node-for-own node-foreground-child node-fs-readdir-recursive node-fs-write-stream-atomic node-fs.realpath
  node-function-bind node-functional-red-black-tree node-gauge node-get-caller-file node-get-stream node-glob node-glob-parent node-globals
  node-globby node-got node-graceful-fs node-gyp node-has-flag node-has-unicode node-hosted-git-info node-https-proxy-agent node-iconv-lite
  node-icss-utils node-ieee754 node-iferr node-ignore node-imurmurhash node-indent-string node-inflight node-inherits node-ini node-interpret
  node-ip node-ip-regex node-is-arrayish node-is-binary-path node-is-buffer node-is-extendable node-is-extglob node-is-glob node-is-number
  node-is-path-cwd node-is-path-inside node-is-plain-obj node-is-plain-object node-is-stream node-is-typedarray node-is-windows node-isarray
  node-isexe node-isobject node-istanbul node-jest-debbundle node-jest-worker node-js-tokens node-js-yaml node-jsesc node-json-buffer
  node-json-parse-better-errors node-json-schema node-json-schema-traverse node-json-stable-stringify node-json5 node-jsonify node-jsonparse
  node-kind-of node-levn node-loader-runner node-locate-path node-lodash node-lodash-packages node-lowercase-keys node-lru-cache node-make-dir
  node-memfs node-memory-fs node-merge-stream node-micromatch node-mime node-mime-types node-mimic-response node-minimatch node-minimist
  node-minipass node-mkdirp node-move-concurrently node-ms node-mute-stream node-n3 node-negotiator node-neo-async node-nopt
  node-normalize-package-data node-normalize-path node-npm-bundled node-npm-package-arg node-npm-run-path node-npmlog node-object-assign
  node-object-inspect node-once node-optimist node-optionator node-osenv node-p-cancelable node-p-limit node-p-locate node-p-map node-parse-json
  node-path-dirname node-path-exists node-path-is-absolute node-path-is-inside node-path-type node-picocolors node-pify node-pkg-dir node-postcss
  node-postcss-modules-extract-imports node-postcss-modules-values node-postcss-value-parser node-prelude-ls node-process-nextick-args node-progress
  node-promise-inflight node-promise-retry node-promzard node-prr node-pump node-punycode node-quick-lru node-randombytes node-read
  node-read-package-json node-read-pkg node-readable-stream node-readdirp node-rechoir node-regenerate node-regenerate-unicode-properties
  node-regenerator-runtime node-regenerator-transform node-regexpp node-regexpu-core node-regjsgen node-regjsparser node-repeat-string
  node-require-directory node-resolve node-resolve-cwd node-resolve-from node-resumer node-retry node-rimraf node-run-queue node-safe-buffer
  node-schema-utils node-semver node-serialize-javascript node-set-blocking node-set-immediate-shim node-shebang-command node-shebang-regex
  node-signal-exit node-slash node-slice-ansi node-source-list-map node-source-map node-source-map-support node-spdx-correct node-spdx-exceptions
  node-spdx-expression-parse node-spdx-license-ids node-sprintf-js node-ssri node-string-decoder node-string-width node-strip-ansi node-strip-bom
  node-strip-json-comments node-supports-color node-tapable node-tape node-tar node-terser node-text-table node-through node-time-stamp
  node-to-fast-properties node-to-regex-range node-tslib node-type-check node-typedarray node-typedarray-to-buffer
  node-unicode-canonical-property-names-ecmascript node-unicode-match-property-ecmascript node-unicode-match-property-value-ecmascript
  node-unicode-property-aliases-ecmascript node-unique-filename node-uri-js node-util node-util-deprecate node-uuid node-v8-compile-cache
  node-v8flags node-validate-npm-package-license node-validate-npm-package-name node-watchpack node-wcwidth.js node-webassemblyjs
  node-webpack-sources node-which node-wide-align node-wordwrap node-wrap-ansi node-wrappy node-write node-write-file-atomic node-y18n node-yallist
  node-yargs node-yargs-parser npm terser webpack
0 upgraded, 349 newly installed, 0 to remove and 1 not upgraded.
Need to get 13.8 MB of archives.
After this operation, 106 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Jesus Christ NPM, what is happening

Now that I have that nightmare factory installed, let's try sending mail through Scaleway with Nodemailer.

"use strict";
const nodemailer = require("nodemailer");

const transporter = nodemailer.createTransport({
  host: "smtp.tem.scw.cloud",
  port: 587,
  // Just so I don't need to worry about it
  secure: false,
  auth: {
    // Scaleway Transactional Email credentials
    user: 'scaleway-user-name',
    pass: 'scaleway-password'
  }
});

// async..await is not allowed in global scope, must use a wrapper
async function main() {
  // send mail with defined transport object
  const info = await transporter.sendMail({
    from: '"Dead People 👻" <[email protected]>', // sender address
    to: "[email protected]", // list of receivers
    subject: "Hello", // Subject line
    text: "Hello world", // plain text body
    html: "<b>Hello world?</b>", // html body
  });

  console.log("Message sent: %s", info.messageId);
}

main().catch(console.error);

Looks like Nodemailer doesn't understand this is an IPv6-only box.

node example.js
Error: connect ENETUNREACH 51.159.99.81:587 - Local (0.0.0.0:0)
    at internalConnect (node:net:1060:16)
    at defaultTriggerAsyncIdScope (node:internal/async_hooks:464:18)
    at node:net:1244:9
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
  errno: -101,
  code: 'ESOCKET',
  syscall: 'connect',
  address: '51.159.99.81',
  port: 587,
  command: 'CONN'
}

It looks like this should have been fixed by https://github.com/nodemailer/nodemailer/pull/1311, but clearly it isn't. What happens if I just manually set the IPv6 address?

Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: IP: 2001:bc8:1201:21:d6ae:52ff:fed0:6aac is not in the cert's list:

However, if you set the host to the IPv6 address and pass the DNS name as the TLS servername, everything seems to work great.

"use strict";
const nodemailer = require("nodemailer");

const transporter = nodemailer.createTransport({
  host: "2001:bc8:1201:21:d6ae:52ff:fed0:6aac",
  port: 587,
  secure: false,
  tls: {
    rejectUnauthorized: true,
    servername: "smtp.tem.scw.cloud"
  },
  auth: {
    user: 'scaleway-username',
    pass: 'scaleway-password'
  }
});

// async..await is not allowed in global scope, must use a wrapper
async function main() {
  // send mail with defined transport object
  const info = await transporter.sendMail({
    from: '"Test" <[email protected]>', // sender address
    to: "[email protected]", // list of receivers
    subject: "Hello ✔", // Subject line
    text: "Hello world?", // plain text body
    html: "<b>Hello world?</b>", // html body
  });

  console.log("Message sent: %s", info.messageId);
}

main().catch(console.error);

Alright well issue submitted here: https://github.com/TryGhost/Ghost/issues/17627

It is a little alarming that the biggest Node email package doesn't work with IPv6 and seemingly only one person noticed and tried to fix it. Well whatever, we have a workaround.

Python

Alright let's try to fix the pip problems I was seeing before in various scripts.

pip3 install requests
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Right, I forgot Python was doing this now. Fine, I'll use a venv, not a problem. I guess first I need to compile a version of Python if I want the latest? I don't see any newer ARM packages out there. Alright, compiling Python.

sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev wget libbz2-dev

wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz

tar -xzf Python-3.11.4.tgz

cd Python-3.11.4/

./configure --enable-optimizations

sudo make -j 2

sudo make altinstall

Alright, now pip works great on the latest version inside of a venv. My scripts all seem to work fine and there appear to be no issues. Whatever problem there was before is resolved. Specific shoutout to requests, where I'm doing some strange things with network traffic and it seems to have no problems.
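For reference, a minimal sanity check along those lines looks something like this (the endpoint is just an example, anything with an AAAA record will do):

# Quick smoke test from inside the venv. If this prints a 200, requests and the
# underlying socket stack are happy on an IPv6-only box.
import requests

resp = requests.get("https://ifconfig.co/ip", timeout=10)
print(resp.status_code, resp.text.strip())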

Conclusion

So the amount of work to get a pretty simple blog up was nontrivial, but we're here now. I have a patch for Ghost that I can apply to the container, Python seems to be working fine now and Docker seems to work as long as I use a user-created network with IPv6 strictly defined. The Docker default bridge also works if you specify the links inside of the docker-compose file, but that seems to be deprecated so let's not waste too much time on it. For those looking for instructions on the Docker part I just followed the guide outlined here.

Now that everything is up and running it seems fine, but again if you are thinking of running an IPv6 only server infrastructure, set aside a lot of time for problem solving. Even simple applications like this require a lot of research to get up and running successfully with outbound network functioning and everything linked up in the correct way.


IPv6 Is A Disaster (but we can fix it)

IP addresses have been in the news a lot lately and not for good reasons. AWS has announced they are charging $0.005 per IPv4 address per hour, joining other cloud providers in charging for the luxury of a public IPv4 address. GCP charges $0.004, as does Azure, and Hetzner charges €0.001/hour. Clearly the era of cloud providers going out and purchasing more IPv4 space is coming to an end. As time goes on, the addresses only get more valuable and it makes less sense to give them out for free.

So the writing is on the wall. We need to switch to IPv6. I was first told that we were going to need to switch to IPv6 when I was in high school in my first Cisco class, and I'm 36 now, to give you some perspective on how long this has been "coming down the pipe". Up to this point I haven't done much at all with IPv6; there has been almost no market demand for those skills and I've never had a job where anybody seemed all that interested in doing it. So I skipped learning about it, which is a shame because it's actually a great advancement in networking.

Now is the second best time to learn though, so I decided to migrate this blog to IPv6 only. We'll stick it behind a CDN to handle the IPv4 traffic, but let's join the cool kids club. What I found was horrifying: almost nothing works out of the box. Major dependencies cease functioning right away and the workarounds cannot be described as production-ready. The migration process for teams to IPv6 is going to be very rocky, mostly because almost nobody has done the work. We all skipped it for years and now we'll need to pay the price.

Why is IPv6 worth the work?

I'm not gonna do a whole "what is IPv4 vs IPv6" explainer. There are plenty of great articles on the internet about that. Let's just quickly recap, though, why anyone would want to make the jump to IPv6.

An IPv6 packet header
  • Address space (obviously)
  • Smaller number of header fields (8 vs 13 on v4)
  • Faster processing: No more checksum, so routers don't have to do a recalculation for every packet.
  • Faster routing: More summary routes and hierarchical routes. (Don't know what that is? No stress. Summary route = combining multiple IPs so you don't need all the addresses, just the general direction based on the first part of the address. Ditto with routes, since IPv6 is globally unique you can have small and efficient backbone routing.)
  • QoS: Traffic Class and Flow Label fields make QoS easier.
  • Auto-addressing: with SLAAC and link-local addresses, IPv6 hosts on a LAN can talk to each other without a DHCP server.
  • You can add IPsec to IPv6 with the Authentication Header and Encapsulating Security Payload.

Finally the biggest one: because IPv6 addresses are free and IPv4 ones are not.

Setting up an IPv6-Only Server

The actual setup process was simple. I provisioned a Debian box and selected "IPv6". Then I got my first surprise: my box didn't get a single IPv6 address, it got a /64, which is 18,446,744,073,709,551,616 addresses. It is good to know that my small ARM server could scale to run all the network infrastructure for every company I've ever worked for, all on public addresses.

Now this sounds wasteful but when you look at how IPv6 works, it really isn't. Since IPv6 is much less "chatty" than IPv4, even if I had 10,000 hosts on this network it doesn't matter. As discussed here it actually makes sense to keep all the IPv6 space, even if at first it comes across as insanely wasteful. So just don't think about how many addresses are getting sent to each device.

Important: resist the urge to optimize address utilization. Talking to more experienced networking folks, this seems to be a common trap people fall into. We've all spent so much time worrying about how much space we have remaining in an IPv4 block and designing around that problem. That issue doesn't exist anymore. A /64 is the smallest subnet you should configure on an interface.

Attempting to use a longer prefix (a smaller subnet), like a /68 or a /96, which is something I've heard people try, can break stateless address auto-configuration. Your mentality should be a /48 per site. That's what the Regional Internet Registries hand out when allocating IPv6. When thinking about network organization, you need to think about the nibble boundary. (I know, it sounds like I'm making shit up now.) It's basically a way to make IPv6 easier to read.

Let's say you have 2402:9400:10::/48. If you wanted a flat network with a /64 for each subnet, you would divide it up as follows.

Subnet #    Subnet Address
0           2402:9400:10::/64
1           2402:9400:10:1::/64
2           2402:9400:10:2::/64
3           2402:9400:10:3::/64
4           2402:9400:10:4::/64
5           2402:9400:10:5::/64

Dividing the same /48 into /52s works in a similar way.

Subnet #    Subnet Address
0           2402:9400:10::/52
1           2402:9400:10:1000::/52
2           2402:9400:10:2000::/52
3           2402:9400:10:3000::/52
4           2402:9400:10:4000::/52
5           2402:9400:10:5000::/52

You can still at a glance know which subnet you are looking at.
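If you want to sanity-check the nibble math yourself, Python's ipaddress module will generate these subnets for you (standard library only, using the same /48 from above):

import ipaddress

site = ipaddress.ip_network("2402:9400:10::/48")

# Each /52 moves the fourth hextet by a full hex digit, which is why it stays readable.
for subnet in list(site.subnets(new_prefix=52))[:6]:
    print(subnet)
# 2402:9400:10::/52
# 2402:9400:10:1000::/52
# 2402:9400:10:2000::/52
# ...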

Alright I've got my box ready to go. Let's try to set it up like a normal server.

Problem 1 - I can't SSH in

This was a predictable problem. Neither my work nor my home ISP supports IPv6. So it's great that I have this box set up, but now I can't really do anything with it. Fine, I attach an IPv4 address for now, SSH in and set up cloudflared to run a tunnel. Presumably they'll handle the conversion on their side.

Except that isn't how Cloudflare rolls. Imagine my surprise when the tunnel collapses when I remove the IPv4 address. By default the cloudflared utility assumes IPv4 and you need to go in and edit the systemd service file to add: --edge-ip-version 6. After this, the tunnel is up and I'm able to SSH in.

Problem 2 - I can't use GitHub

Alright so I'm on the box. Now it's time to start setting stuff up. I run my server setup script and it immediately fails. It's trying to pull the install script for hishtory, a great shell history utility I use on all my personal stuff, from GitHub and failing. "Certainly that can't be right. GitHub must support IPv6?"

Nope. Alright fine, it seems REALLY bad that the service the entire internet uses to release software doesn't work with IPv6, but you know Microsoft is broke and also only cares about fake AI now, so whatever. I ended up using the TransIP GitHub Proxy which worked fine. Now I have access to GitHub. But then Python fails with urllib.error.URLError: <urlopen error [Errno 101] Network is unreachable>. Alright, I give up on this. My guess is the version of Python 3 in Debian doesn't like IPv6, but I'm not in the mood to troubleshoot it right now.
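For the curious, a quick way to see what urllib is up against is to ask the resolver directly. Without DNS64 in place, github.com only comes back with A records, which an IPv6-only host has no route to (minimal sketch, nothing project-specific):

import socket

# What does the resolver hand back for github.com? Only A (IPv4) records here,
# which is why urllib reports "Network is unreachable" on an IPv6-only box.
for family, _, _, _, sockaddr in socket.getaddrinfo("github.com", 443):
    print(socket.AddressFamily(family).name, sockaddr[0])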

Problem 3 - Can't set up Datadog

Let's do something more basic. Certainly I can set up Datadog to keep an eye on this box. I don't need a lot of metrics, just a few historical load numbers. Go to Datadog, log in and start to walk through the process. Immediately collapses. The simple setup has you run curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh. Now S3 supports IPv6, so what the fuck?

curl -v https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh
*   Trying [64:ff9b::34d9:8430]:443...
*   Trying 52.216.133.245:443...
* Immediate connect fail for 52.216.133.245: Network is unreachable
*   Trying 54.231.138.48:443...
* Immediate connect fail for 54.231.138.48: Network is unreachable
*   Trying 52.217.96.222:443...
* Immediate connect fail for 52.217.96.222: Network is unreachable
*   Trying 52.216.152.62:443...
* Immediate connect fail for 52.216.152.62: Network is unreachable
*   Trying 54.231.229.16:443...
* Immediate connect fail for 54.231.229.16: Network is unreachable
*   Trying 52.216.210.200:443...
* Immediate connect fail for 52.216.210.200: Network is unreachable
*   Trying 52.217.89.94:443...
* Immediate connect fail for 52.217.89.94: Network is unreachable
*   Trying 52.216.205.173:443...
* Immediate connect fail for 52.216.205.173: Network is unreachable

It's not S3 or the box, because I can connect to the test S3 bucket AWS provides just fine.

curl -v  http://s3.dualstack.us-west-2.amazonaws.com/
*   Trying [2600:1fa0:40bf:a809:345c:d3f8::]:80...
* Connected to s3.dualstack.us-west-2.amazonaws.com (2600:1fa0:40bf:a809:345c:d3f8::) port 80 (#0)
> GET / HTTP/1.1
> Host: s3.dualstack.us-west-2.amazonaws.com
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< x-amz-id-2: r1WAG/NYpaggrPl3Oja4SG1CrcBZ+1RIpYKivAiIhiICtfwiItTgLfm6McPXXJpKWeM848YWvOQ=
< x-amz-request-id: BPCVA8T6SZMTB3EF
< Date: Tue, 01 Aug 2023 10:31:27 GMT
< Location: https://aws.amazon.com/s3/
< Server: AmazonS3
< Content-Length: 0
<
* Connection #0 to host s3.dualstack.us-west-2.amazonaws.com left intact

Fine I'll do it the manual way through apt.

0% [Connecting to apt.datadoghq.com (18.66.192.22)]

Goddamnit. Alright Datadog is out. It's at this point I realize the experiment of trying to go IPv6 only isn't going to work. Almost nothing seems to work right without proxies and hacks. I'll try to stick as much as I can on IPv6 but going exclusive isn't an option at this point.

NAT64

So in order to access IPv4 resources from IPv6 you need to go through a NAT64 service. I ended up using this one: https://nat64.net/. Immediately all my problems stopped and I was able to access resources normally. I am a little nervous about relying exclusively on what appears to be a hobby project for accessing critical internet resources, but since nobody seems to care upstream of me about IPv6 I don't think I have a lot of choice.
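The trick DNS64/NAT64 plays is simple enough to sketch in a few lines: the IPv4 address gets embedded in the low 32 bits of an IPv6 prefix, and the gateway translates in both directions. This example uses the well-known 64:ff9b::/96 prefix; public services like nat64.net advertise their own prefixes, but the idea is the same:

import ipaddress

# Embed an IPv4 address into the well-known NAT64 prefix 64:ff9b::/96.
# A DNS64 resolver does exactly this when it synthesizes an AAAA record
# for a name that only has A records.
prefix = ipaddress.IPv6Address("64:ff9b::")
v4 = ipaddress.IPv4Address("52.216.133.245")  # one of the S3 addresses from the curl output above

print(ipaddress.IPv6Address(int(prefix) | int(v4)))  # 64:ff9b::34d8:85f5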

I am surprised there aren't more of these. This is the best list I was able to find.

Most of them seem to be gone now. Dresel's link doesn't work, Trex had problems in my testing, August Internet is gone, most of the Go6lab test devices are down, and Tuxis worked but they launched the service in 2019 and seem to have had no further interaction with it since. Basically Kasper Dupont seems to be the only person on the internet with any sort of widespread interest in allowing IPv6 to actually work. Props to you Kasper.

Basically one person props up this entire part of the internet.

Kasper Dupont

So I was curious about Kasper and emailed him to ask a few questions. You can see that back and forth below.

Me: I found the Public NAT64 service super useful in the transition but would love to know a little bit more about why you do it.

Kasper: I do it primarily because I want to push IPv6 forward. For a few years
I had the opportunity to have a native IPv6-only network at home with
DNS64+NAT64, and I found that to be a pleasant experience which I
wanted to give more people a chance to experience.

When I brought up the first NAT64 gateway it was just a proof of
concept of a NAT64 extension I wanted to push. The NAT64 service took
off, the extension - not so much.

A few months ago I finally got native IPv6 at my current home, so now
I can use my own service in a fashion which much more resembles how my
target users would use it.

Me: You seem to be one of the few remaining free public services like this on the internet and would love to know a bit more about what motivated you to do it, how much it costs to run, anything you would feel comfortable sharing.

Kasper: For my personal products I have a total of 7 VMs across different
hosting providers. Some of them I purchase from Hetzner at 4.51 Euro
per month: https://hetzner.cloud/?ref=fFum6YUDlpJz

The other VMs are a bit more expensive, but not a lot.

Out of those VMs the 4 are used for the NAT64 service and the others
are used for other IPv6 transition related services. For example I
also run this service on a single VM: http://v4-frontend.netiter.com/

I hope to eventually make arrangements with transit providers which
will allow me to grow the capacity of the service and make it
profitable such that I can work on IPv6 full time rather than as a
side gig. The ideal outcome of that would be that IPv4-only content
providers pay the cost through their transit bandwidth payments.

Me: Any technical details you would like to mention would also be great

Kasper: That's my kind of audience :-)

I can get really really technical.

I think what primarily sets my service aside from other services is
that each of my DNS64 servers is automatically updated with NAT64
prefixes based on health checks of all the gateways. That means the
outage of any single NAT64 gateway will be mostly invisible to users.
This also helps with maintenance. I think that makes my NAT64 service
the one with the highest availability among the public NAT64 services.

The NAT64 code is developed entirely by myself and currently runs as a
user mode daemon on Linux. I am considering porting the most
performance critical part to a kernel module.

This site

Alright so I got the basics up and running. In order to pull docker containers over IPv6 you need to add: registry.ipv6.docker.com/library/ to the front of the image name. So for instance:
image: mysql:8.0 becomes image: registry.ipv6.docker.com/library/mysql:8.0

Docker warns you this setup isn't production ready. I'm not really sure what that means in practice. Presumably if the IPv6 registry were to go away, you could just pull normally again?

Once that was done, I set the site up with an AAAA DNS record and allowed Cloudflare to proxy, meaning they handle the advertisement of IPv4 and bring the traffic to me. One thing I did change from before: previously I was using the Caddy webserver, but since I now have a hard reliance on Cloudflare for most of my traffic, I switched to Nginx. One nice thing you can do now that you know all traffic is coming from Cloudflare is switch how SSL works.

Now I have an Origin Certificate from Cloudflare hard-loaded into Nginx with Authenticated Origin Pulls set up so that I know for sure all traffic is running through Cloudflare. The certificate is signed for 15 years, so I can feel pretty confident sticking it in my secrets management system and not thinking about it ever again. For those that are interested there is a tutorial here on how to do it: https://www.digitalocean.com/community/tutorials/how-to-host-a-website-using-cloudflare-and-nginx-on-ubuntu-22-04

Alright the site is back up and working fine. It's what you are reading right now, so if it's up then the system works.

Unsolved Problems

  • My containers still can't communicate with IPv4 resources even though they're on an IPv6 network with an IPv6 bridge. The DNS64 resolution is working, and I've added fixed-cidr-v6 into Docker. I can talk to IPv6 resources just fine, but the NAT64 conversion process doesn't work. I'm going to keep plugging away at it.
  • Before you ping me I did add NAT with ip6tables.
  • SMTP server problems. I haven't been able to find a commercial SMTP service that has an AAAA record. Mailgun and SES were both duds as were a few of the smaller ones I tried. Even Fastmail didn't have anything that could help me. If you know of one please let me know: https://c.im/@matdevdug

Why not stick with IPv4?

Putting aside "because we're running out of addresses" for a minute: if we had adopted IPv6 earlier, the way we do infrastructure could be radically different. So often companies use technology like load balancers and tunnels not because they actually need anything these things do, but because they need some sort of logical division between private IP ranges and a public IP address they can stick in a DNS A record.

If you break a load balancer into its basic parts, it is doing two things: distributing incoming packets onto the back-end servers, and checking the health of those servers and taking unhealthy ones out of the rotation. Nowadays they often handle things like SSL termination and metrics, but those aren't requirements to be called a load balancer.

There are many ways to load balance, but the most common are as follows:

  1. Round-robin of connection requests.
  2. Weighted Round-Robin with different servers getting more or less.
  3. Least-Connection with servers that have the fewest connections getting more requests.
  4. Weighted Least-Connection, same thing but you can tilt it towards certain boxes.

What you notice is that there isn't anything there that requires, or really even benefits from, a private IP address over a public one. Configuring the hosts to accept traffic from only one source (the load balancer) is pretty simple and relatively cheap to do, computationally speaking. A lot of the infrastructure designs we've been forced into, things like VPCs, NAT gateways, public vs private subnets, could have been skipped or relied on less.
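To make that concrete, here is a toy sketch of those two jobs, distributing and health checking, in Python (addresses are illustrative). Note that nothing in it cares whether the backends have public or private addresses:

import itertools
import socket

BACKENDS = ["2402:9400:10:1::10", "2402:9400:10:2::10", "2402:9400:10:3::10"]

def healthy(addr, port=443, timeout=1.0):
    # Health checking reduced to its essence: can we open a TCP connection?
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

_pool = itertools.cycle(BACKENDS)

def next_backend():
    # Round-robin over whichever backends currently pass the health check.
    for _ in range(len(BACKENDS)):
        candidate = next(_pool)
        if healthy(candidate):
            return candidate
    raise RuntimeError("no healthy backends")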

The other irony is that IP whitelisting, which currently is a broken security practice that is mostly a waste of time as we all use IP addresses owned by cloud providers, would actually be something that mattered. The process for companies to purchase a /44 for themselves would have gotten easier with demand and it would have been more common for people to go and buy a block of IPs from American Registry for Internet Numbers (ARIN), Réseaux IP Européens Network Coordination Centre (RIPE), or Asia-Pacific Network Information Centre (APNIC).

You would never need to think "well is Google going to buy more IP addresses" or "I need to monitor GitHub support page to make sure they don't add more later". You'd have one block they'd use for their entire business until the end of time. Container systems wouldn't need to assign internal IP addresses on each host, it would be trivial to allocate chunks of public IPs for them to use and also advertise over standard public DNS as needed.

Obviously I'm not saying private networks serve no function. My point is a lot of the network design we've adopted isn't based on necessity but on forced design. I suspect we would have ended up designing applications with the knowledge that they sit on the open internet vs relying entirely on the security of a private VPC. Given how security exploits work this probably would have been a benefit to overall security and design.

So even if cost and availability isn't a concern for you, allowing your organization more ownership and control over how your network functions has real measurable value.

Is this gonna get better?

So this sucks. You either pay cloud providers more money or you get a broken internet. My hope is that the folks who don't want to pay push more IPv6 adoption, but it's also a shame that it has taken so long for us to get here. All these problems and issues could have been addressed gradually and instead it's going to be something where people freak out until the teams that own these resources make the required changes.

I'm hopeful the end result might be better. I think at the very least it might open up more opportunities for smaller companies looking to establish themselves permanently with an IP range that they'll own forever, plus as IPv6 gets more mainstream it will (hopefully) get easier for customers to live with. But I have to say right now this is so broken it's kind of amazing.

If you are a small company looking to not pay the extra IP tax, set aside a lot of time to solve a myriad of problems you are going to encounter.

Thoughts/corrections/objections: [email protected]


Serverless Functions Post-Mortem

Around 2016, the term "serverless functions" started to take off in the tech industry. In short order, it was presented as the undeniable future of infrastructure: the ultimate solution to redundancy, geographic resilience, load balancing and autoscaling. Never again would we need to patch, tweak or monitor an application. The cloud providers would do it; all we had to do was hit a button and deploy to the internet.

I was introduced to it like most infrastructure technology is presented to me, which is as a veiled threat. "Looks like we won't need as many Operations folks in the future with X" is typically how executives discuss it. Early in my career this talk filled me with fear, but now that I've heard it 10+ times, I adopt a "wait and see" mentality. I was told the same thing about VMs, going from IBM and Oracle to Linux, going from owning the datacenter to renting a cage to going to the cloud. Every time it seems I survive.

Even as far as tech hype goes, serverless functions picked up steam fast. Technologies like AWS Lambda and GCP Cloud Functions were adopted by orgs I worked at very fast compared to other technology. Conference after conference and expert after expert proclaimed that serverless was inevitable.  It felt like AWS Lambda and others were being adopted for production workloads at a breakneck pace.

Then, without much fanfare, it stopped. Other serverless technologies like GKE Autopilot and ECS are still going strong, but the idea of a serverless function replacing the traditional web framework or API has almost disappeared. Even cloud providers pivoted, positioning the tools as more "glue between services" than the services themselves. The addition of being able to run Docker containers as functions seemed to help a bit, but it remains a niche component of the API world.

What happened? Why were so many smart people wrong? What can we learn as a community about hype and marketing around new tools?

Promise of serverless

Above we see a serverless application as initially pitched. Users would ingress through the API Gateway, which handles traffic management, CORS, authorization and API version management. It basically serves as the web server and framework all in one. Easy to test with multiple versions of the same API running at the same time, easy to monitor and easy to set up.

After that comes the actual serverless function. These could be written in whatever language you wanted and could run for up to 15 minutes as of 2023. So instead of having, say, a Rails application where you are combining the Model-View-Controller into a monolith, you can break it into each route and use different tools to solve for each situation.

This suggests how one might structure a new PHP application, for instance.

Since these were only invoked in response to a request coming from a user, it was declared a cost savings. You weren't paying for server resources you weren't using, unlike traditional servers where you would provision the expected capacity beforehand based on a guess. The backend would also endlessly scale, meaning it would be impossible to overwhelm the service with traffic. No more needing to worry about DDoS or floods of traffic.

Finally at the end would be a database managed by your cloud provider. All in all you aren't managing any element of this process, so no servers or software updates. You could deploy a thousand times a day and precisely control the rollout and rollback of code. Each function could be written in the language that best suited it. So maybe your team writes most things in Python or Ruby but then goes back through for high volume routes and does those in Golang.

Combined with technologies like S3 and DynamoDB along with SNS you have a compelling package. You could still send messages between functions with SNS topics. Storage was effectively unlimited with S3 and you had a reliable and flexible key-value store with DynamoDB. Plus you ditched the infrastructure folks, the monolith, any dependency on the host OS and you were billed by your cloud provider for your actual usage based on the millisecond.

Initial Problems

The initial adoption of serverless was challenging for teams, especially teams used to monolith development.

  • Local development. Typically a developer pulls down the entire application they're working on and runs it on their device to be able to test quickly. With serverless, that doesn't really work since the application is potentially thousands of different services written in different languages. You can do this with serverless functions but it's way more complicated.
  • Hard to set resources correctly. How much memory a function needs under testing can be very different from how much it needs in production. Developers tended to set their limits high to avoid problems, wiping out much of the cost savings. There is no easy way to adjust functions based on real-world data outside of doing it by hand, one by one.
  • AWS did make this process easier with AWS Lambda Power Tuning, but you'll still need to roll out the changes yourself, function by function. Since even a medium-sized application can be made up of 100+ functions, this is a non-trivial thing to do. Plus these aren't static things; changes can get rolled out that dramatically change the memory usage with no warning.
  • Is it working? Observability is harder with a distributed system vs a monolith and serverless just added to that. Metrics are less useful as are old systems like uptime checks. You need, certainly in the beginning, to rely on logs and traces a lot more. For smaller teams especially, the monitoring shift from "uptime checks + grafana" to a more complex log-based profile of health was a rough adjustment.

All these problems were challenges, but it seems many teams were able to get through them with momentum intact. We started to see a lot of small applications launch that were serverless-function based, from APIs to hobby developer projects. All of this is reflected in the Datadog State of Serverless report for 2020, which you can see here.

At this point everything seems great. 80% of AWS container users have adopted Lambda in some capacity, paired with SQS and DynamoDB. NodeJS and Python are the dominant languages, which is a little eyebrow raising. This suggests that picking the right language for the job didn't end up happening, instead picking the language easiest for the developer. But that's fine, that is also an optimization.

What happened? What went wrong?

Production Problems

Across the industry we started to hear feedback from teams that had gone hard into serverless functions and were now backing out. I started to see problems in my own teams that had adopted serverless. The following trends came up, in no particular order.

  • Latency. Traditional web frameworks and containers are fast at processing requests, typically only hitting latency in database calls. Serverless functions could be slow depending on the last time they were invoked. This led to teams needing to keep "functions warm." What does this mean?

When the function gets a request it downloads the code and gets ready to run it. After that, for a period of time, the function is ready to rerun until it is recycled and the process has to happen again. The way around this at first was typically an EventBridge rule invoking the function every minute to keep it warm. This kind of works, but not really.
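The handler side of that arrangement usually ended up looking something like this (the "warmer" key is just whatever constant input you configure on the rule, nothing standard):

def handler(event, context):
    # The EventBridge rule sends {"warmer": true} as its static input.
    # Bail out early so the scheduled pings never run real business logic.
    if isinstance(event, dict) and event.get("warmer"):
        return {"warmed": True}
    # ... normal request handling continues here ...
    return {"statusCode": 200, "body": "ok"}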

Later Provisioned Concurrency was added, which is effectively... a server. It's a VM where your code is already loaded. You are limited per account in how many functions you can set to Provisioned Concurrency, so it's hardly a silver bullet. Again none of this happens automatically, so it's up to someone to go through and carefully tune each function to ensure it is in the right category.

  • Scaling. Serverless functions don't scale to infinity. You can scale concurrency levels up every minute by an additional 500 microVMs. But it is very possible for one function to eat all of the capacity for every other function. Again it requires someone to go through and understand what Reserved Concurrency each function needs and divide that up as a component of the whole.

In addition, serverless functions don't magically get rid of database concurrency limits. So you'll hit situations where a spike of traffic somewhere else kills your ability to access the database. This is also true of monoliths, but it is typically easier to see when this is happening when the logs and metrics are all flowing from the same spot.

In practice it is far harder to scale serverless functions than an autoscaling group. With autoscaling groups I can just add more servers and be done with it. With serverless functions I need an in-depth understanding of each route of my app and where those resources are being spent. Traditional VMs give you a lot of flexibility in dealing with spikes, but serverless functions don't.

There are also tiers of scaling. You need to think about KMS throttling, serverless function concurrency limits, database connection limits, slow queries. Some of these don't go away with traditional web apps, but many do. Solutions started to pop up but they often weren't great.

Teams switched from always returning a detailed response from the API to just returning a 200 showing that the request had been received. That allowed teams to stick stuff into an SQS queue and process it later. This works unless there is a problem in processing, breaking the expectation of most clients that a 200 means the request was processed successfully, not merely received.
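The accept-and-queue pattern looks roughly like this (queue URL and field names are made up for the example):

import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["INGEST_QUEUE_URL"]  # hypothetical queue

def handler(event, context):
    # Accept the request and defer the real work. The caller only learns
    # "received", not "processed", which is exactly the expectation this breaks.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event.get("body")))
    return {"statusCode": 200, "body": "accepted"}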

Functions often needed to be rewritten as you went, moving everything you could to the initialization phase and keeping all the connection logic out of the handler code. The initial momentum of serverless was crashing into these rewrites as teams learned painful lesson after painful lesson.
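The rewrite usually amounted to something like this (illustrative names, not any particular codebase): everything expensive moves to module scope so it only runs on a cold start, and the handler itself stays thin.

import os
import boto3

# Initialization phase: runs once per cold start and is reused by warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["TABLE_NAME"])

def handler(event, context):
    # Handler phase: no client construction, no connection setup, just the work.
    item = table.get_item(Key={"pk": event["pathParameters"]["id"]}).get("Item")
    return {"statusCode": 200 if item else 404, "body": str(item)}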

  • Price. Instead of being fire and forget, serverless functions proved to be very expensive at scale. Developers don't think of API routes in terms of how many seconds they need to run and how much memory they use. It was a change in thinking, and compared to flat per-month EC2 pricing, the spikes in traffic and usage were an unpleasant surprise for a lot of teams.

Combine that with the cost of RDS and API Gateway and you are looking at a lot of cash going out every month.

The other cost was the requirement that you have a full suite of cloud services identical to production for testing. How do you test your application end to end with serverless functions? You need to stand up the exact same thing as production. A traditional application you could run on your laptop and test against in the CI/CD pipeline before deployment; with a serverless stack you need to rely a lot more on Blue/Green deployments and monitoring failure rates.

  • Slow deployments. Pushing out a ton of new Lambdas is a time-consuming process. I've waited 30+ minutes for a medium-sized application. God knows how long people running massive stacks were waiting.
  • Security. Not running the server is great, but you still need to run all the dependencies. It's possible for teams to spawn tons of functions with different versions of the same dependencies, or even choosing to use different libraries. This makes auditing your dependency security very hard, even with automation checking your repos. It is more difficult to guarantee that every compromised version of X dependency is removed from production than it would be for a smaller number of traditional servers.

Why didn't this work?

I think three primary mistakes were made.

  1. The complexity of running a server in a modern cloud platform was massively overstated. Especially with containers, running a Linux box of some variety and pushing containers to it isn't that hard. All the cloud platforms offer load balancers, letting you offload SSL termination, so really any Linux box with Podman or Docker can sit listening on that port until the box has some sort of error.

    Setting up Jenkins to be able to monitor Docker Hub for an image change and trigger a deployment is not that hard. If the servers are just doing that, setting up a new box doesn't require the deep infrastructure skills that serverless function advocates were talking about. The "skill gap" just didn't exist in the way that people were talking about.
  2. People didn't think critically about price. Serverless functions look cheap, but we never think about how many seconds or minutes a server is busy. That isn't how we've been conditioned to think about applications and it showed. Often the first bill was a shocker, meaning the savings from maintenance had to be massive and they just weren't.
  3. Really hard to debug problems. Relying on logs and X-Ray to figure out what went wrong is just much harder than pulling the entire stack down to your laptop and triggering the same requests. It is a new skill and one that people had not developed up to that point. The first time you have a production issue that would have been trivial to fix in the old monolith design but drags on for a long time in the serverless function world, the enthusiasm from leadership evaporates very quickly.

Conclusion

Serverless functions fizzled out and it's important for us as an industry to understand why the hype wasn't real. Important questions were skipped over in an attempt to increase buy-in to cloud platforms and simplify the deployment and development story for teams. Hopefully this gives us a chance to be more skeptical of promises like this in the future. We should have taken a much more wait-and-see approach to this technology instead of rushing straight in and hitting all the sharp edges right away.

Currently serverless functions have settled into what they're best at: glue between different services, triggers for longer-running jobs, or very simple platforms that allow tight cost control for single developers putting something together for public use. If you want to use something serverless for more, you would be better off looking at something like ECS with Fargate or Cloud Run in GCP.


CodePerfect 95 Review


I have a long history of loving text editors. Their simplicity and purity of design is appealing to me, as is their long lifespan. Writing a text editor that becomes popular is really a lifelong responsibility and opportunity, which is just very cool to me. They become subcultures unto themselves. IDEs I have less love for.

There's nothing wrong with using one, in fact I use them for troubleshooting on a pretty regular basis. I just haven't found one I love yet. They either have a million plugins (so I'm constantly getting notifications for updates) or they just have thousands upon thousands of features, so even to get started I need to watch a few YouTube tutorials and read a dozen pages of docs. I love JetBrains products but the first time I tried to use PyCharm for a serious project I felt like I was launching a shuttle into space.

Busy is a bit of an understatement

However I find myself writing a lot of Golang lately, as it has become the common microservice language across a couple of jobs now. I actually like it, but I'm always looking for an IDE to help me write it faster and better. My workflow is typically to write it in Helix or Vim and then use the IDE for inspecting the code before putting it in a commit, or for faster debugging than having two tabs open in Tmux and switching between them. It works, but it's not exactly an elegant solution.

I stumbled across CodePerfect 95 and fell in love with the visual styling. So I had to give it a try. Their site is here: https://codeperfect95.com/

Visuals

It's hard to overstate how much I love this design. It is very Mac OS 9 in a way that I was just instantly drawn to. Everything from the atypical color choices to the fonts is just classic Apple design.

Mac OS 9

Whoever picked this logo, I was instantly delighted with it.

There were a few quibbles. It should respect the system dark/light mode, even if that goes against the design of the application. That's a user's preference and should get reflected in some way.

Also, as far as I could tell, nothing about the font or any of the design elements is customizable. This is fine for me, I actually prefer when tools have strong opinions and present them to me, but I know for some people the ability to switch the monospace font is a big deal. In general there are just not a lot of options, which is great for me but something you should be aware of.

Usage

Alright so I got a free 7 day trial when I downloaded it and I really tried to kick the tires as much as possible. So I converted over to it for all my work during that period. This app promises speed and delivers. It is as fast as a terminal application and comes with most of the window and tab customization you would typically turn to a tool like Tmux for.

It apparently indexes the project when you open it, but honestly it happened so fast I didn't even notice what it was doing. As fast as I could open the project and remember what the project was, I could search or do whatever. I'm sure if you work on giant projects that might not be the case, but nothing I threw at the index process seemed to choke it at all.

It supports panes and tabs, so you basically use Cmd+number to switch panes. It's super fast and I found it very comfortable. The only thing that is slightly strange is that when you open a new pane, it shows absolutely nothing. No file path, no "click here to open". You need to understand that when you switch to an empty pane you have to open a file. This is what the pane view looks like:

Cmd+P is fuzzy find and works as expected. So if you are used to using Vim to search and open files, this is going to feel very familiar to you. Cmd+T is the Symbol GoTo which works like all of these you have ever used:

You can jump to the definition of an identifier, completion, etc. All of this worked exactly like you would think it does. It was very fast and easy to do. I really liked some of the completion stuff. For instance, Generate Function actually saved me a fair amount of time.

Given:

dog := Dog{}
bark(dog, 1, false)

You can mouse over and generate this:

func bark(v0 Dog, v1 int, v2 bool) {
  panic("not implemented")
}

This is their docs example but when I tested it, it seemed to work well.

The font is pretty easy to read but I would have loved to tweak the colors a bit. They went with kind of a muted color scheme, whereas I prefer a strong visual difference between comments and actual code. All the UI elements are black and white, very strong contrast, so making the actual workspace muted and a little hard to read is strange.

VSCode defaults to a more aggressive and easier to read design, especially in sunlight.

Builds

So one of the primary reasons IDEs are so nice to use is the integrated build system. With Golang, though, builds are typically pretty straightforward, so there isn't a lot to report here. It's basically "what arguments do you pass to go build", saved as a profile.

It works well though. No complaints and stepping through the build errors was easy and fast to do. Not fancy but works like it says on the box.

Work Impressions

I was able to do everything I would need to do with a typical Golang application inside the IDE, which is not a small task. I liked features like the Postfix completion which did actually save me a fair amount of time once I started using them.

However I ended up missing a few of the GoLand features like Code Coverage checking for tests and built-in support for Kubernetes and Terraform, just because it's common to touch all subsystems when I'm working on something and not just exclusively Go code. You definitely see some value with having a tool customized for one environment over having a general purpose tool with plugins, but it was a little hard to give up all the customization options with GoLand. Then again it reduces complexity and onboarding time, so it's a trade-off.

Pricing and License

First with a product like this I like to check the Terms and Conditions. I was surprised that they....basically don't have any.

Clearly no lawyers were involved in this process, which seems odd. This reads like a Ron Swanson ToS.

The way you buy licenses is also a little unusual. It's an attempt to bridge JetBrains' old perpetual license model and their current perpetual fallback license.

A key has two parts: a one-time perpetual license, and subscription-based automatic updates. You can choose either one, or both:

    License only
        A perpetual license locked to a particular version.
        After 3 included months of updates, locked to the final version.
    License and subscription
        A perpetual license with access to ongoing updates.
        When your subscription ends, your perpetual license is locked to the final version.
    Subscription only
        Access to the software during your subscription.
        You lose access when your subscription ends.

I'm also not clear what they mean by "cannot be expensed".

Why can't I expense it? According to what? You writing on a webpage "you cannot expense it"? This seems like a way to extract more money from people depending on whether they're using it at work or home.

Jetbrains does something similar but they have an actual license you agree to. There's no documentation of a license here, so I don't know if this matters at all. If CodePerfect wants to run their business like this, I guess they can, but they're going to need to have a document that says something like this:

3.4. This subscription is only for natural persons who are purchasing a subscription to Products using only their own funds. Notwithstanding anything to the contrary in this Agreement, you may not use any of the Products, and this grant of rights shall not be in effect, in the event that you do not pay Subscription fees using your own funds. If any third party pays the Subscription fees or if you expect or receive reimbursement for those fees from any third party, this grant of rights shall be invalid and void.

I feel like $40 for software where I only get 3 months of updates is not an amazing deal. Sublime Text is $99 for 3 years. Nova is $99 for one year. Examining the changelog it appears they're still closing relatively big bugs even now, so I would be a tiny bit nervous about getting locked into whatever version I'm at in three months forever. Changelog

The subscription was also not a great deal.

So I mean the easiest comparison would be GoLand.

$10 a month = $120 for the year and I get the perpetual fallback license. $100 for the year and I get CodePerfect (I understand the annual price break). The pricing isn't crazy but JetBrains is an established company with a known track record of shipping IDEs. I would be a bit hesitant to shell out for this based on a 7 day trial for a product that has existed for 302 days as of July 5th. I'd rather they charge me $99 for a license with 12 months of updates that just ends instead of a subscription. It's also strange that they don't seem to change the currency based on the location of the user.

My issue with all this is that getting a one-time payment reimbursed is not bad, but subscriptions are typically frowned upon as expenses at most places I've worked unless they're training for the entire department. For my own personal usage, I would be hesitant to sign up for a new subscription from an unknown entity, especially when the ToS is a paragraph and the "license" I am agreeing to doesn't seem to exist. A lot of this is just new-software growing pains, but I hope they're aware of it.

Conclusion

CodePerfect 95 is my favorite kind of software. It's functional yet fun, with some whimsy and joy mixed in with practical features. It works well and is as fast as promised. I enjoyed my week of using it, finding it mostly as usable as JetBrains GoLand but in a much lighter piece of software. So would I buy it?

I'm hesitant. I want to buy it, but there's zero chance I could get a legal department to approve this for an enterprise purchase. So my option would be to buy the more expensive version and expense it or just pay for it myself. Subscription fatigue is a real thing and I will typically pay a 20% premium to not have to deal with it. To not have to deal with a subscription I would need to buy a license every 3 months for $160 a year in total.

I can't get there yet. I've joined their newsletter and I'll keep an eye on it. If it continues to be a product in six months I'll pull the trigger. Switching workflows is a lot of work for me and it requires enough time to mentally adjust that I don't want to fall in love with a tool and then have it disappear. If they did $99 for a year license that just expired I'd buy it today.


Today the EU decided to give me a giant present


For those of you who have spent years dealing with the nightmarish process of carefully putting EU user data in its own silo, often in its own infrastructure in a different EU region, it looks like the nightmare might be coming to an end. See the new press release here: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_3721

Some specific details I found really interesting in the full report (which is a doozy of a read): https://commission.europa.eu/system/files/2023-07/Adequacy%20decision%20EU-US%20Data%20Privacy%20Framework.pdf

The EU-U.S. Data Privacy Framework introduces new binding safeguards to address all the concerns raised by the European Court of Justice, including limiting access to EU data by US intelligence services to what is necessary and proportionate, and establishing a Data Protection Review Court (DPRC), to which EU individuals will have access.

US companies can certify their participation in the EU-U.S. Data Privacy Framework by committing to comply with a detailed set of privacy obligations. This could include, for example, privacy principles such as purpose limitation, data minimisation and data retention, as well as specific obligations concerning data security and the sharing of data with third parties.

To certify under the EU-U.S. DPF (or re-certify on an annual basis), organisations are required to publicly declare their commitment to comply with the Principles, make their privacy policies available and fully implement them. As part of their (re-)certification application, organisations have to submit information to the DoC on, inter alia, the name of the relevant organisation, a description of the purposes for which the organisation will process personal data, the personal data that will be covered by the certification, as well as the chosen verification method, the relevant independent recourse mechanism and the statutory body that has jurisdiction to enforce compliance with the Principles.

Organisations can receive personal data on the basis of the EU-U.S. DPF from the date they are placed on the DPF list by the DoC. To ensure legal certainty and avoid 'false claims', organisations certifying for the first time are not allowed to publicly refer to their adherence to the Principles before the DoC has determined that the organisation's certification submission is complete and added the organisation to the DPF List. To be allowed to continue to rely on the EU-U.S. DPF to receive personal data from the Union, such organisations must annually re-certify their participation in the framework. When an organisation leaves the EU-U.S. DPF for any reason, it must remove all statements implying that the organisation continues to participate in the Framework

So it looks similar to Privacy Shield but with more work being done on the US side to meet the EU requirements. This is all super new and we'll need to see how this shakes out in the practical implementation, but I'm extremely hopeful for less friction-filled interactions between EU and US tech companies.


GKE (Google Kubernetes Engine) Review


What if Kubernetes was idiot-proof?

Love/Hate Relationship

AWS and I have spent a frightening amount of time together. In that time I have come to love that weird web UI with bizarre application naming. It's like asking an alien not familiar with humans to name things. Why is Athena named Athena? Nothing else gets a deity name. CloudSearch, CloudFormation, CloudFront, Cloud9, CloudTrail, CloudWatch, CloudHSM, CloudShell are just lazy, we understand you are the cloud. Also Amazon if you are going to overuse a word that I'm going to search, use the second word so the right result comes up faster. All that said, I've come to find comfort in its primary color icons and "mobile phones don't exist" web UI.

Outside of AWS I've also done a fair amount of work with Azure, mostly in Kubernetes or k8s-adjacent spaces. All said I've now worked with Kubernetes on bare metal in a datacenter, in a datacenter with VMs, on raspberry pis in a cluster with k3s, in AWS with EKS, in Azure with AKS, DigitalOcean Kubernetes and finally with GKE in GCP. Me and the Kubernetes help documentation site are old friends at this point, a sea of purple links. I say all this to suggest that I have made virtually every mistake one can with this particular platform.

When being told I was going to be working in GCP (Google Cloud Platform) I was not enthused. I try to stay away from Google products in my personal life. I switched off Gmail for Fastmail, Search for DuckDuckGo, Android for iOS and Chrome for Firefox. It has nothing to do with privacy, I actually feel like I understand how Google uses my personal data pretty well and don't object to it on an ideological level. I'm fine with making an informed decision about using my personal data if the return to me in functionality is high enough.

I mostly move off Google services in my personal life because I don't understand how Google makes decisions. I'm not talking about killing Reader or any of the Google graveyard things. Companies try things and often they don't work out, that's life. It's that I don't even know how fundamental technology is perceived. Is Golang, which relies extensively on Google employees, doing well? Are they happy with it, or is it in danger? Is Flutter close to death or thriving? Do they like Gmail or has it lost favor with whatever executives are in charge of it this month? My inability to get a sense of whether something is doing well or poorly inside of Google makes me nervous about adopting their stack into my life.

I say all this to explain that, even though I was not excited to use GCP and learn a new platform, and even though there are parts of GCP that I find deeply frustrating compared to its peers...there is a gem here. If you are serious about using Kubernetes, GKE is the best product I've seen on the market. It isn't even close. GKE is so good that if you are all-in on Kubernetes, it's worth considering moving from AWS or Azure.

I know, bold statement.

TL;DR

  • GKE is the best managed k8s product I've ever tried. It aggressively helps you do things correctly and is easy to set up and run.
  • GKE Autopilot is all of that but they handle all the node/upgrade/security etc. It's like Heroku-levels of easy to get something deployed. If you are a small company who doesn't want to hire or assign someone to manage infrastructure, you could grow forever on GKE Autopilot and still be able to easily migrate to another provider or the datacenter later on.
  • The rest of GCP is a bit of a mixed bag. Do your homework.

Disclaimer

I am not and have never been a Google employee/contractor/someone they know exists. I once bombed an interview when I was 23 for a job at Google. This interview stands out to me because despite working with it every day for a year, my brain just forgot how RAID parity worked at a data transmission level. Got off the call and instantly all memory of how it worked returned to me. Needless to say nobody at Google cares that I have written this and it is just my opinions.

Corrections are always appreciated. Let me know at: [email protected]

Traditional K8s Setup

One common complaint about k8s is you have to set up everything. Even "hosted" platforms often just provide the control plane, meaning almost everything else is some variation of your problem. Here's the typical collection of what you need to make decisions about, in no particular order:

  • Secrets encryption: yes or no, and how?
  • Version of Kubernetes to start on
  • What autoscaling technology are you going to use
  • Managed/unmanaged nodes
  • CSI drivers, do you need them, which ones
  • Which CNI, what does it mean to select a CNI, how do they work behind the scenes. This one in particular throws new cluster users because it seems like a nothing decision but it actually has profound impact in how the cluster operates
  • Can you provision load balancers from inside of the cluster?
  • CoreDNS, do you want it to cache DNS requests?
  • Vertical pod autoscaling vs horizontal pod autoscaling
  • Monitoring, what collects the stats, what default data do you get, where does it get stored (node-exporter setup to prometheus?)
  • Are you gonna use an OIDC? You probably want it, how do you set it up?
  • Helm, yes or no?
  • How do service accounts work?
  • How do you link IAM with the cluster?
  • How do you audit the cluster for compliance purposes?
  • Is the cluster deployed in the correct resilient way to guard against AZ outages?
  • Service mesh, do you have one, how do you install it, how do you manage it?
  • What OS is going to run on your nodes?
  • How do you test upgrades? What checks to make sure you aren't relying on a removed API? When is the right time to upgrade?
  • What is monitoring overall security posture? Do you have known issues with the cluster? What is telling you that?
  • Backups! Do you want them? What controls them? Can you test them?
  • Cost control. What tells you if you have a massively overprovisioned node group?

This isn't anywhere near all the questions you need to answer, but this is typically where you need to start. One frustration with a lot of k8s services I've tried in the past is they have multiple solutions to every problem and it's unclear which is the recommended path. I don't want to commit to the wrong CNI and then find out later that nobody has used that one in six months and I'm an idiot. (I'm often an idiot but I prefer to be caught for less dumb reasons).

Are these failings of kubernetes?

I don't think so. K8s is everything to every org. You can't make a universal tool that attempts to cover every edge case that doesn't allow for a lot of customization. With customization comes some degree of risk that you'll make the wrong choice. It's the Mac vs Linux laptop debate in an infrastructure sphere. You can get exactly what you need with the Linux box but you need to understand if all the hardware is supported and what tradeoffs each decision involves. With a Mac I'm getting whatever Apple thinks is the correct combination of all of those pieces, for better or worse.

If you can get away with Cloud Run or ECS, don't let me stop you. Pick the level of customization you need for the job, not whatever is hot right now.

Enter GKE

Alright so when I was hired I was tasked with replacing an aging GKE cluster that was coming to end of life running Istio. After running some checks, we weren't using any of the features of Istio, so we decided to go with Linkerd since it's a much easier to maintain service mesh. I sat down and started my process for upgrading an old cluster.

  • Check the node OS for upgrades, check the node k8s version
  • Confirm API usage to see if we are using outdated APIs
  • How do I install and manage the ancillary services and what are they? What installs CoreDNS, service mesh, redis, etc.
  • Can I stand up a clean cluster from what I have or was critical stuff added by hand? It never should be but it often is.
  • Map out the application dependencies and ensure they're put into place in the right order.
  • What controls DNS/load balancing and how can I cut between cluster 1 and cluster 2

It's not a ton of work, but it's also not zero work. It's also a good introduction to how applications work and what dependencies they have. Now my experience with recreating old clusters in k8s has been, to be blunt, a fucking disaster in the past. It typically involves 1% trickle traffic, everything returning 500s, looking at logs, figuring out what is missing, adding it, turning 1% back on, errors everywhere, look at APM, oh that app's healthcheck is wrong, etc.

The process with GKE was so easy I was actually sweating a little bit when I cut over traffic, because I was sure this wasn't going to work. It took longer to map out the application dependencies and figure out the Istio -> Linkerd part than it did to actually recreate the cluster. That's a first and a lot of it has to do with how GKE holds your hand through every step.

How does GKE make your life easier?

Let's walk through my checklist and how GKE solves pretty much all of them.

1. Node OS and k8s version on the node.

GCP offers a wide variety of OSes that you can run but recommends one I have never heard of before.

Container-Optimized OS from Google is an operating system image for your Compute Engine VMs that is optimized for running containers. Container-Optimized OS is maintained by Google and based on the open source Chromium OS project. With Container-Optimized OS, you can bring up your containers on Google Cloud Platform quickly, efficiently, and securely.

I'll be honest, my first thought when I saw "server OS based on Chromium" was "someone at Google really needed to get an OKR win". However, after using it for a year, I've really come to like it. Now, it's not a solution for everyone, but if you can operate within the limits it's a really nice solution. Here are the limits.

  • No package manager. They have something called the CoreOS Toolbox which I've used a few times to debug problems so you can still troubleshoot. Link
  • No non-containerized applications
  • No installing third-party kernel modules or drivers
  • It is not supported outside of the GCP environment

I know, it's a bad list. But when I read some of the nice features I decided to make the switch. Here's what you get:

  • The root filesystem is always mounted as read-only. Additionally, its checksum is computed at build time and verified by the kernel on each boot.
  • Stateless kinda. /etc/ is writable but stateless. So you can write configuration settings but those settings do not persist across reboots. (Certain data, such as users' home directories, logs, and Docker images, persist across reboots, as they are not part of the root filesystem.)
  • Ton of other security stuff you get for free. Link

I love all this. Google tests the OS internally, they're scanning for CVEs, they're slowly rolling out updates and it's designed to just run containers correctly, which is all I need. This OS has been idiot-proof. In a year of running it I haven't had a single OS issue. Updates go out, they get patched, I never notice. Troubleshooting works fine. This means I never need to talk about a Linux upgrade ever again AND the limitations of the OS mean my applications can't rely on stuff they shouldn't use. Truly set and forget.

I don't run software I can't build from source.

Go nuts: https://cloud.google.com/container-optimized-os/docs/how-to/building-from-open-source

2. Outdated APIs.

There's a lot of third-party tools that do this for you and they're all pretty good. However GKE does it automatically in a really smart way.

Not my cluster but this is what it looks like

Basically the web UI warns you if you are relying on outdated APIs and will not upgrade if you are. Super easy to check "do I have bad API calls hiding somewhere".

3. How do I install and manage the ancillary services and what are they?

GKE comes batteries included. DNS is there, but it's just a flag in Terraform to configure. Service accounts are the same thing, and Ingress and Gateway to GCP are also just in there working. Hooking up to your VPC is a toggle in Terraform so pods are natively routable. They even reserve the Pod IPs before the pods are created, which is nice and eliminates a source of problems.

They have their own CNI which also just works. One end of the Virtual Ethernet Device pair is attached to the Pod and the other is connected to the Linux bridge device cbr0. I've never encountered any problems with any of the GKE defaults, from the subnets it offers to generate for pods to the CNI it is using for networking. The DNS cache is nice to be able to turn on easily.
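To give a sense of how little you have to decide, here's roughly what that looks like in Terraform. This is a minimal sketch rather than a real config: the project, network and secondary range names are placeholders, and exact argument names can shift a bit between provider versions.

resource "google_container_cluster" "primary" {
  name     = "example-cluster"      # placeholder name
  location = "europe-west1"         # placeholder region
  project  = "my-project-id"        # placeholder project

  remove_default_node_pool = true
  initial_node_count       = 1

  # VPC-native networking: GKE reserves the Pod IP ranges up front
  networking_mode = "VPC_NATIVE"
  network         = "my-vpc"
  subnetwork      = "my-subnet"
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  # The DNS cache mentioned above is a single addon toggle
  addons_config {
    dns_cache_config {
      enabled = true
    }
  }

  # Workload Identity ties Kubernetes service accounts to GCP IAM
  workload_identity_config {
    workload_pool = "my-project-id.svc.id.goog"
  }
}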

4. Can I stand up a clean cluster from what I have or was critical stuff added by hand?

Because everything you need to do happens in Terraform for GKE, it's very simple to see if you can stand up another cluster. Load balancing is happening inside of YAMLs, ditto for deployments, so standing up a test cluster and seeing if apps deploy correctly to it is very fast. You don't have to install a million helm charts to get everything configured just right.

However they ALSO have backup and restore built in!

Here is your backup running happily and restoring it is just as easy to do through the UI.

So if you have a cluster with a bunch of custom stuff in there and don't have time to sort it out, you don't have to.
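The backup plan itself can live in Terraform too. A rough sketch, assuming the Backup for GKE API is enabled on the project; the names, schedule and retention below are placeholders:

resource "google_gke_backup_backup_plan" "nightly" {
  name     = "nightly-backup"
  location = "europe-west1"                      # placeholder region
  cluster  = google_container_cluster.primary.id # the cluster from the sketch above

  backup_schedule {
    cron_schedule = "0 2 * * *"                  # every night at 02:00
  }

  retention_policy {
    backup_retain_days = 14
  }

  backup_config {
    include_volume_data = true
    include_secrets     = true
    all_namespaces      = true
  }
}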

5. Map out the application dependencies and ensure they're put into place in the right order.

This obviously varies from place to place, but the web UI for GKE does make it very easy to inspect deployments and see what is going on with them. This helps a lot, but of course if you have a service mesh that's going to be the one-stop shop for figuring out what talks to what when. The Anthos service mesh provides this and is easy to add onto a cluster.

6. What controls DNS/load balancing and how can I cut between cluster 1 and cluster 2

Alright so this is the only bad part. GCP load balancers provide zero useful information. I don't know why, or who made the web UIs look like this. Again, making an internal or external load balancer as an Ingress or Gateway with GKE is stupid easy with annotations.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-address  # attach a reserved global static IP
    kubernetes.io/ingress.allow-http: "false"                       # HTTPS only
    networking.gke.io/managed-certificates: managed-cert            # Google-managed TLS certificate resource
    kubernetes.io/ingress.class: "gce"                              # external HTTP(S) load balancer ("gce-internal" for internal)


Why would this data be the most useful data?

I don't know who this is for or why I would care what region of the world my traffic is coming from. It's also not showing correctly on Firefox, with the screen cut off on the right. For context, this is the information I want from a load balancer every single time:

The entire GCP load balancer thing is a tire-fire. The web UI to make load balancers breaks all the time. Adding an SSL certificate through the web UI almost never works. They give you a ton of great information about the backend of the load balancer, but adding things like a new TLS policy requires kind of a lot of custom stuff. I could go on and on.

Autopilot

Alright so let's say all of that was still a bit much for you. You want a basic infrastructure where you don't need to think about nodes, or load balancers, or operating systems. You write your YAML, you deploy it to The Cloud and then things happen automagically. That is GKE Autopilot.

Here are all the docs on it. Let me give you the elevator pitch. It's a stupid easy way to run Kubernetes that is probably going to save you money. Why? Because selecting and adjusting the type and size of node you provision is something most starting companies mess up with Kubernetes and here you don't need to do that. You aren't billed for unused capacity on your nodes, because GKE manages the nodes. You also aren't charged for system Pods, operating system costs, or unscheduled workloads.

Hardening Autopilot is also very easy. You can see all the options that exist and are already turned on here. If you are a person who is looking to deploy an application where maintaining it cannot be a big part of your week, this is a very flexible platform to do it on. You can move to standard GKE later if you'd like. Want off GCP? It is not that much work to convert your YAML to work with a different hosted provider or a datacenter.
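For comparison, here's roughly what an entire Autopilot cluster looks like in Terraform. Again just a sketch with placeholder names; Autopilot takes care of the node pools, sizing and most of the hardening defaults for you:

resource "google_container_cluster" "autopilot" {
  name             = "autopilot-example"   # placeholder name
  location         = "europe-west1"        # Autopilot clusters are regional
  project          = "my-project-id"       # placeholder project
  enable_autopilot = true                  # no node pools to define or size
}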

I went in with low expectations and was very impressed.

Why shouldn't I use GKE?

I hinted at it above. As good as GKE is, the rest of GCP is crazy inconsistent. First the project structure for how things work is maddening. You have an organization and below that are projects (which are basically AWS accounts). They all have their own permission structure which can be inherited from folders that you put the projects in. However since GCP doesn't allow for the combination of IAM premade roles into custom roles, you end up needing to write hundreds of lines of Terraform for custom roles OR just find a premade role that is Pretty Close.
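To make that concrete, here's the shape of a hand-rolled custom role. This is a sketch with placeholder names and a deliberately short permission list; real ones run to hundreds of lines precisely because you can't just bundle premade roles together:

resource "google_project_iam_custom_role" "deployer" {
  project = "my-project-id"          # placeholder project
  role_id = "appDeployer"
  title   = "Application Deployer"

  # Every permission is listed individually; there is no way to say
  # "roles/container.developer plus roles/logging.viewer" here.
  permissions = [
    "container.clusters.get",
    "container.deployments.create",
    "container.deployments.get",
    "container.deployments.update"
  ]
}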

GCP excels at networking, data visualization (outside of load balancing), Kubernetes, serverless with Cloud Run and Cloud Functions, and big data work. A lot of the smaller services on the edge don't get a lot of love. If you are a heavy user of the following, proceed with caution.

GCP Secret Manager

For a long time GCP didn't have any secret manager, instead having customers encrypt objects in buckets. Their secret manager product is about as bare-bones as it gets. Secret rotation is basically a cron job that pushes to a Pub/Sub topic and then you do the rest of it. No metrics, no compliance check integrations, no help with rotation.

It'll work for most use cases, but there's just zero bells and whistles.
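Here's roughly what that "rotation" amounts to in Terraform, as a sketch with placeholder names: Secret Manager publishes a message to a Pub/Sub topic on a schedule, and whatever actually rotates the secret (a Cloud Function, a cron job, you) is entirely your problem. Granting the Secret Manager service agent publish rights on the topic is also on you and omitted here.

resource "google_pubsub_topic" "rotation" {
  name    = "secret-rotation"
  project = "my-project-id"                     # placeholder project
}

resource "google_secret_manager_secret" "db_password" {
  project   = "my-project-id"
  secret_id = "db-password"

  replication {
    auto {}
  }

  # "Rotation" is just a scheduled message on this topic; the logic that
  # generates and stores a new secret version is left to you.
  topics {
    name = google_pubsub_topic.rotation.id
  }

  rotation {
    rotation_period    = "2592000s"              # 30 days
    next_rotation_time = "2030-01-01T00:00:00Z"  # placeholder first rotation
  }
}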

GCP SSL Certificates

I don't know how Let's Encrypt, a free service, outperforms GCP's SSL certificate generation process. I've never seen a service that mangles SSL certificates as badly as this. Let's start with just trying to find them.

The first two aren't what I'm looking for. The third doesn't take me to anything that looks like an SSL certificate. SSL certificates actually live at Security -> Certificate Manager. If you try to go there even if you have SSL certificates you get this screen.

I'm baffled. I have Google SSL certificates with their load balancers. How is the API not enabled?

To issue the certs it does the same sort of DNS and backend checking as a lot of other services. To be honest I've had more problems with this service issuing SSL certificates than any in my entire life. It was easier to buy certificates from Verisign. If you rely a lot on generating a ton of these quickly, be warned.

IAM recommender

GCP has this great feature which is it audits what permissions a role has and then tells you basically "you gave them too many permissions". It looks like this:

Great, right? Now sometimes this service will recommend you modify the permissions to either a new premade role or a custom role. It's unclear when or how that happens, but when it does there is a little lightbulb next to it. You can click it to apply the new permissions, but since mine (and most people's) permissions are managed in code somewhere, this obviously doesn't do anything long-term.

Now you can push these recommendations to Big Query, but what I want is some sort of JSON or CSV that just says "switch these to use x premade IAM roles". My point is there is a lot of GCP stuff that is like 90% there. Engineers did the hard work of tracking IAM usage, generating the report, showing me the report, making a recommendation. I just need an easier way to act on that outside of the API or GCP web console.

These are just a few examples that immediately spring to mind. My point being when evaluating GCP please kick the tires on all the services, don't just see that one named what you are expecting exists. The user experience and quality varies wildly.

I'm interested, how do I get started?

GCP Terraform support used to be bad, but now it is quite good. You can see the whole getting started guide here. I recommend trying Autopilot and seeing if it works for you, just because it's cheap.

Even if you've spent a lot of time running k8s, give GKE a try. It's really impressive, even if you don't intend to move over to it. The security posture auditing, workload metrics, backup, hosted Prometheus, etc. are all really nice. I don't love all the GCP products, but this one has super impressed me.


Developers Guide to Moving to Denmark


I've wanted to write a guide for tech workers looking to leave the US and move to Denmark for a while. I made the move over 4 years ago and finally feel like I can write on the topic with enough detail to answer most questions.

Denmark gets a lot of press in the US for being a socialist paradise and is often held up as the standard by which things are judged. The truth is more complicated; Europe has its own issues that may impact you more or less depending on your background.

Here's the short version: moving to Europe from the US is a significant improvement in quality of life for most people. There are pitfalls, but especially if you have children, every aspect of being a parent, from the amount of time you get to spend with them to the safety and quality of their schools, is better. If you have never considered it, you should, even if not Denmark (although I can't help you with how). It takes a year to do, so even if things seem ok right this second, try to think longer-term.

TL;DR
Reasons to move to Denmark

  • 5 or 6 weeks of vacation a year (with some asterisks)
  • Very good public healthcare
  • Amazing work/life balance, with lots of time for hobbies and activities
  • Great public childcare at really affordable prices
  • Be in a union
  • Amazing summer weather
  • Safety. Denmark is really safe compared to even a safe US area.
  • Very low stress. You don't worry about retirement or health insurance or childcare or any of the things that you might constantly obsess over.
  • Freedom from religious influence. Denmark has a very good firewall against religious influence in politics.
  • Danes are actually quite friendly. I know, they pretend they're not in media but they are. They won't start a conversation with you but they'd love to chat if you start one.

Reasons not to move to Denmark

  • You are gonna miss important stuff back home. People are gonna die and you won't be there. Weddings, birthdays, promotions and divorces are all still happening and you aren't there. I won't sugarcoat it, you are leaving most of your old life behind.
  • Eating out is a big part of your life; there are restaurants, but they're expensive and for the most part unimpressive. If someone who worked at Noma owns it, then it's probably great, but otherwise often meh.
  • You refuse to bike or take public transit. Owning a car here is possible but very expensive and difficult to park.
  • Lower salary. Tech workers make a lot less here before taxes.
  • Taxes. They're high. All Danes pay 15% but if you earn "top tax" you pay another 15%. Basically with a high salary you get bumped into this tax bracket.
  • Food. High quality ingredients at stores but overall very bland. You'll eat a lot of meals at your work cafeteria and it's healthy but uninspired.
  • Buying lots of items is your primary joy in life. Electronics are more expensive in the EU, Amazon doesn't exist in Denmark so your selection is much more limited.

Leaving

There is certainly no shortage of reasons today why one would consider leaving the US. From mass shootings to a broken political system where the majority is held hostage by the minority rural voter, it's not the best time to live in the US. When Trump was elected, I decided it was time to get out. Trump wasn't the reason I left (although man wouldn't you be a little bit impressed if it were?).

I was tired of everything being so hopeless. Everyone I knew in Chicago was working professional jobs, often working late hours, but nobody was getting ahead. I was lucky that I could afford a house and a moderately priced car, but for so many I knew it just seemed pointless. Grind forever and maybe you would get to buy a condo. All you could do was run the numbers with people and you realized they were never gonna get out.

Everything felt like it was building up to this explosion. I would go back to rural Ohio and people were desperately poor even though the economy had, on paper, been good for years. Flex scheduling at chain retail locations meant you couldn't take another job because your first job might call you at any time. Nobody had health insurance outside of the government-provided "Buckeye card". Friends I knew, people who had been moderates growing up, were carrying AR-15s to Wal-Mart and talking about the upcoming war with liberals. There were confederate flags everywhere in a state that fought on the Union side.

I'm not really qualified to talk about politics, so I won't. The only skill I have that lends itself to this situation is having watched a lot of complex systems fail. This felt like watching a giant company's infrastructure collapse. Every piece had problems, different problems, that nobody seemed to understand holistically. I couldn't get it out of my head, this fear that I would need to flee quickly and wouldn't be able to. "If you think it's gonna explode you wanna go now before people realize how bad it is" was the thought that ran over and over in my head.

So I'd sit out the "burning to the ground" part. Denmark seemed the perfect place to do it. Often held up as the perfect society with its well-functioning welfare state, low levels of corruption and high political stability. They have a shortage of tech workers and I was one of those. I'd sell my condo, car and all my possessions and wait out the collapse. It wasn't a perfect plan (the US economy is too important to think its collapse wouldn't be felt in Denmark) but it was the best plan I could come up with.

Doing something is better than sitting on Twitter and getting sad all the time was my thought.

Is it a paradise?

No. I think the US media builds Denmark, Norway and Sweden up to unrealistic levels. It's a nice country with decent people, and while DSB doesn't run on time, the trains do run and exist, which is more than I can say for US trains. There are problems, and often the problems don't get discussed until you get here. I'll try to give you some top level stuff you should be aware of.

There is a lot of anger towards communities that have immigrated from the Muslim world. These neighborhoods are often officially classified as "ghettos" under a 2018 law:

And the Danish state decides whether areas are deemed ghettoes not just by their crime, unemployment or education rates, but on the proportion of residents who are deemed “non-western” – meaning recent, first-, or second-generation migrants.

You'll sometimes hear this discussed as the "parallel societies" problem. That Denmark is not Danish enough anymore unless steps are taken to break up these neighborhoods and disperse their residents. The solution proposed was to change the terms: The Interior Ministry last week revealed proposed reforms that would remove the word "ghetto" in current legislation and reduce the share of people of "non-Western" origin in social housing to 30% within 10 years. Families removed from these areas would be relocated to other parts of the country.

It's not a problem that stops at one generation either. I've seen Danes whose family immigrated over a generation ago who speak fluent Danish (as they are Danish) be asked "where they come from" at social events multiple times. So even if you are a citizen, speak the language, go through the educational system, you aren't fully integrated by a lot of folks standards. This can be demoralizing for some people.

I also love their health system, but it's not fair to all the workers who maintain it. The medical staff don't get paid enough in Denmark for all the work they do, especially when you compare staff like nurses to nurses in the US. Similarly, a lot of critical workers like daycare workers, teachers, etc. are in the same boat. It's not as bad as the US for teachers but there's still definitely a gap there.

Denmark also doesn't apply all these great benefits uniformly. Rural Denmark is pretty poor and has limited access to a lot of these services. It's not West Virginia, but some of the magic of a completely flat fair society disappears when you spend a week in rural Jutland. These towns are peeling paint and junk on the front lawn, just like you would expect in any poor small town. There's still a safety net and they still have a much better time of it than an American in the same situation, but still.

I hope Danish people reading this don't get upset. I'm not saying this to speak ill of your country, but sometimes I see people emotionally crash and burn when they come expecting liberal paradise and encounter many problems which look similar to ones back home. It's important to be realistic about what living here looks like. Denmark has not solved all racial, gender or economic issues in their society. They are still trying to, which is more than I can say for some.

Steps

The next part covers all the practical steps I could think of. I'm glad to elaborate on any of them if there is useful information that is missing. If I missed something or you disagree, the easiest way to reach me is on the Fediverse at: [email protected].

Why do I say this is a guide for developers? Because I haven't run this by a large group to fact-check, just developers moving from the US or Canada. So this is true of what we've experienced, but I have no idea how true it is for other professions. Most of it should be applicable to anyone, but maybe not, you have been warned etc etc.

Getting a Visa

The first part of the process is the most time consuming but not very difficult. You need to get a job from an employer looking to sponsor someone for a work visa. Since developers tend towards the higher end of the pay scale, you'll likely qualify for a specific (and easier) visa process.

In terms of job posting sites I like Work in Denmark and Job Index. I used Work in Denmark and it was great. If a job listing is in Danish, don't bother translating it and applying, it means they want a local. Danish CVs are similar to US resumes but often folks include a photo in theirs. It's not a requirement, but I've seen it a fair amount when people are applying to jobs.

It can be a long time before you hear anything, which is just how it works. Even if you seem amazing for a job, expect to wait: my experience with US tech companies was that I'd often hear back within a week for an interview, while with Denmark it's often 2-3 weeks just to get a rejection. Just gotta wait them out.

Where do you want to live

Short answer? As close to Copenhagen as possible. It's the capital city, it has the most resources by a lot and it is the easiest to adjust to IMO. I originally moved to Odense, the third largest city and found it far too small for me. I ended up passing time by hanging out in the Ikea food court because I ran out of things to do, which is as depressing as it sounds.

The biggest cities in Denmark are Copenhagen, Aarhus on the Jutland peninsula and Odense on the island of Fyn sitting between the two. Here's a map that shows you what I'm talking about.

A lot of jobs will contact you that are based in Jutland. I would think long and hard before committing to living in Jutland if you haven't spent a lot of time in Denmark. The further you get from Copenhagen, the more expectation there is that you are fluent in Danish. These cities are also painfully small by US standards.

  • Copenhagen: 1,153,615
  • Aarhus: 237,551
  • Odense: 145,931

Typically jobs in Jutland are more desperate for applicants and are easier to get for foreign workers. If you are looking for a smaller city or maybe even living out in the countryside (which is incredibly beautiful), it's a good option. Just be sure that's what you want to do. You'll want to enroll in Danish classes immediately, and move through them quickly, just to get around and do things like "read menus" and "answer the phone".

There are perks to not living in Copenhagen. My wife got to ride horses once a week, which is something she did as a little kid and could do again for a very reasonable $50 a month. I enjoyed the long walks through empty nature around Fyn and the leisurely pace of life for awhile. Just be sure, because these towns are very sleepy and can make you go a bit insane.

Interviews

Danish interviews are a bit different from US ones. Take home assignments and test projects are less common, with most companies comfortable assuming you aren't lying on your resume. They may ask for a GitHub handle just to see if you have anything up there. The pace is pretty relaxed compared to the US, don't expect a lot of live code challenges or random quizzes. You walk through the work you've done and they'll ask follow ups.

Even though the interviews are relaxed, they're surprisingly easy to fail. Danes really don't like bragging or "dominating" the conversation. Make sure you attribute victories to your team, you were part of a group that did all this great work. It's not cheap to move someone to Denmark, so try and express why you want to do it. A lot of foreign workers bounce off of Denmark when they move here, so you are trying to convince them you are worth the work.

After the interview you'll have....another interview. Then another interview. You'll be shocked how often people want to talk to you. This is part of the group consensus thing that is pretty important here. Jobs really want the whole team to be happy with a decision and get a chance to weigh in on who they work with. Managers and bosses have a lot less power than in the US and you see it from the very beginning of the interview.

Remember, keep it light, lots of self-deprecating humor. Danes love that stuff, poking fun at yourself or just injecting some laughter into the interview. They also love to hear how great Denmark is, so drop some of that in too. You'll feel a little weird celebrating their country in a job interview, but I've found it really creates positive feelings among the people you are talking to.

Don't answer the baby question. They can't directly ask you if you are gonna have kids, but often places bringing foreign workers over will dance around the question. "Oh it's just you and your partner? Denmark is a great place for kids." The right answer is no. I gave a sad no and stared off screen for a moment. I don't have any fertility issues, it just seemed an effective way to sell it.

Alright you got the job. Now we start the visa process for real. That was actually the easy part.

Sitting in VFS

This wasn't going to work. That was my thought as I sat in the waiting room of VFS Chicago, a visa application processing company. Think airport waiting area meets DMV. Basically for smaller countries it doesn't make sense for them to pay to staff places with employees to intake immigrants, so they outsource it to this depressing place. I was surrounded by families all carrying heavy binders and all I had was my tiny thin binder.

I watched in horror as a French immigration official told a woman "she was never getting to France" as a binder was closed with authority. Apparently the French staff their counter with actual French people who seem to take some joy in crushing dreams. This woman immediately started to cry and plead that she needed the visa, she had to get back. She had easily 200 pages of some sort of documentation. I looked on in horror as she collapsed sobbing into a seat.

On the flip side I had just watched a doctor get approved in three minutes. He walked in still wearing scrubs, said "I'm here to move to Sweden", they checked his medical credentials and stamped a big "APPROVED" on the document. If you or your spouse is a doctor or nurse, there's apparently nowhere in the EU who won't instantly give you a visa.

My process ended up fine, with some confusion over whether I was trying to move to the Netherlands or Denmark. "You don't want a Dutch visa, correct?" I was asked more than once. They took my photo and fingerprints and we moved on. Then I waited for a long time for a PDF saying "approved". I was a little bit sad they didn't mail me anything.

Work Visa Process

Just because it seems like nobody in either sphere understands how the other works

The specific visa we are trying to get is outlined here. This website is where you do everything. Danish immigration doesn't have phone support and nothing happens with paper. It's all through this website. Basically your employer fills out one part and you fill out the rest. It's pretty straight forward and the form is hard to mess up. But also your workplace has probably done it before and can answer most questions.

This can be weird for Americans where we are still a paper-based society. Important things come with a piece of paper generally. When my daughter was born in a Danish hospital I freaked out because when it was time to discharge her they were like "ok time to go!". "Certainly there's a birth certificate or something that I get about her?" The nurse looked confused and then told me "the church handles all that sort of stuff." She was correct, the church (for some reason) is where we got the document that we provided to the US to get her citizenship.

Almost nothing you'll get in this entire process is on paper. It's all through websites and email. Once you get used to it, it's fine, but I have the natural aversion to important documents existing only in government websites where (in the US) they can disappear with no warning. I recommend backups of everything even though it rarely comes up. The Danish systems mostly just work, or if they break they break for everyone.

IMPORTANT

There is a part of the process that they don't draw particular attention to. You need to get your biometrics taken, which means photo and fingerprints. This process is a giant pain in the ass in the US. You have a very limited time window from when you submit the application to get your biometrics recorded, so check the appointment availability BEFORE you hit submit. The place that offers biometric intake is VFS. You have to get it done within 14 days of submitting and there are often no appointments.

Here are the documents you will need over and over:

  • full color copies of your passport including covers
  • the receipt from the application website showing you paid the fee. THIS IS INCREDIBLY IMPORTANT and the website does not tell you how important it is when you pay the fee. That ID number it generates is needed by everything.
  • Employment contract
  • Documentation of education. For me this included basically my resume and jobs I had done as a proxy for not having a computer science degree.

Make a binder and put all this stuff in, with multiple copies. It will save you a ton of work in the long-term. This binder is your life for this entire process. All hail the binder.

Alright, you've applied after checking for a biometrics appointment. You paid your fee, sat through the interviews, put in the application. Now you wait for an email. It can take a mysterious amount of time, but you just need to be patient. Hopefully you get the good news email with your new CPR number. Congrats, you are in the Danish system.

Moving

Moving stuff to Denmark is a giant pain in the ass. There are a lot of international moving companies and I hear pretty universally bad things about all of them. You need to think of your possessions in terms of cargo containers. How many cargo containers do you currently have in your house worth of stuff and how much can you get rid of. Our moving company advised us to try and get within a 20 foot cargo container for the best pricing.

It's not a ton of space. We're talking 1,094 cubic feet.

You gotta get everything inside there and ideally you go way smaller. Moving prices can vary wildly between $1000 and $10,000 depending on how much junk you have. You cannot be sentimental here, you want to get rid of everything possible. Don't bring furniture, buy new stuff at Ikea. Forget bringing a car, the cost to register it in Denmark will be roughly what you paid for the car to begin with. Sell the car, sell the furniture, get rid of everything you can.

Check to see if anything with a plug will work. If your device shows an inscription for a range 110V-220V then all you need is a plug adapter. If you only see an inscription for 110V, then you need a transformer that will transform the electricity from 220V to 110V. Otherwise, if you attempt to plug in your device without a transformer, bad things happen. I wouldn't bother bringing anything that won't work with 220V. The plug adapters are cheap, but the transformers aren't.

Stuff you will want to stockpile

This is a pretty good idea of what American stuff you can get.

Over the counter medicine (doesn't really exist here outside of Panodil):
  • Pepto, aspirin, melatonin, cold and flu pills, buy a lot of it cause you can't get more

Spices and Sauces:
  • Cream of tartar
  • Pumpkin pie spice
  • Meatloaf mix
  • Good chili spice mixes or chili spices in general
  • Hot peppers, like a variety of dried peppers, especially ones from Mexico, are almost impossible to find here
  • Everything bagel seasoning, I just love it
  • Ranch dressing
  • Hot sauces, they're terrible here
  • BBQ sauces, also terrible here
  • Liquid butter for popcorn if that's your thing
  • Taco mix, it's way worse here

Foods:
  • Cheez-its and Goldfish crackers don't exist
  • Gatorade powder (you can buy it per bottle but it's expensive)
  • Tex-mex anything, really Mexican food in general
  • Cereal, American sugar cereal doesn't exist
  • Cooler ranch Doritos
  • Mac and Cheese
  • Good dill pickles (Danish pickles are sweet and gross)
  • Peanut butter - it's here but it's expensive

You are going to get used to Danish food, I promise, but it's painfully bland at first. There's a transition period and spices can help get you over the hurdle.

Note: If you eat a lot of peppers like jalapeños, it is too expensive to buy them every time. You will want to grow them in your house. This is common among American expats, but be aware if you are used to them being everywhere and cheap.

Medical Records
When you get your yellow card (your health insurance card), you are also assigned a doctor. In order to get your medical records into the Danish system, you need to bring them with you. If you don't have a complicated medical history I think it's fine to skip this step (they'll ask you all the standard questions) but if you have a more complicated health issue you'll want those documents with you. The lead time to get a doctors appointment here in Denmark for a GP isn't long, typically same week for kids and two weeks for adults.

Different people have different experiences with the health system in Denmark, but I want to give you a few high level notes. Typically Danes get a lot less medication than Americans, so don't expect to walk out of the doctors office with a prescription. There is a small fee for medicine, but it's a small fraction of what it costs with insurance in the US. Birth control pills, IUDs and other resources are easy to get and quite affordable (or free).

If you need a specific medication for a disease, try to get as much as you can from the US doctor. The process for getting specific medicine can sometimes be complicated in Denmark, possibly requiring a referral to a specialist and additional testing. You'll want to allocate some time between when you arrive and when you can get a new script. Generally it works but it might take awhile.

Landing

The pets and I waiting for the bus with my stolen luggage cart

My first week was one of the harder weeks I've had in my life. I landed and then immediately had to take off to go grab the dog and cat. The plan was simple: the pets had been flown on a better airline than me. I would grab them and then take the train from the airport to Odense. It's like an hour and a half train ride. Should be simple. I am all jitters when I land but I find the warehouse where the pets were unloaded.

Outside are hundreds of truck drivers and I realize I have made a critical error. People had told me over and over I didn't need to rent a car, which might have been true if I didn't have pets. However the distance between the warehouse and where I needed to be was too long to walk again with animals in crates. The truck drivers are sitting around laughing and drinking energy drinks while I wander around waiting for the warehouse to let me in.

I decide to steal an abandoned luggage cart outside of the UPS building. "I'm bringing it closer to where it should be anyway" is my logic. The drivers find this quite funny, with many jokes being made at my expense. Typically I'd chalk this up to paranoia but they are pointing and laughing at me.  I get the dog and cat, they're not in great shape but they're alive. I give them some water and take off for the bus to the airport.

Loading two crated animals onto a city bus isn't easy in the best of times. Doing it while the cat pee smell coming out of one crate is enough to make your eyes water is another. I have taken over the middle of this bus and people are waving their hands in front of their faces due to the smell. After loading everyone on, I check Google Maps again and feel great. This bus is going to turn around but will take me back to the front of the airport where I want to go.

It does not do that. Instead it takes off to the countryside. After ten minutes of watching the airport disappear far into the background, I get off at the next stop. In front of a long line of older people (tourists?) I get the dog out of the box, throw the giant kennel into a dumpster, zip tie the cat kennel to the top of my suitcase and start off again.

We make it to the train where a conductor is visibly disgusted by the smell. I sit next to the bathroom hoping the smell of public train bathroom would cover it. I attempt to grab a taxi to take me to where I am staying to get set up. No go, there are no taxis. I had not planned for there to be no taxis. On the train I had swapped out the cat pad so the smell was not nearly so intense, but it still wasn't great.

I then walked the kilometers from the train station to where I was staying, sweating the entire time. The dog was desperately trying to escape after the trauma of flying and staying in the SAS animal holding area with race horses and other exotic animals. There were giant slugs on the ground everywhere, something I have since learned is just a Thing in Denmark. We eventually get there and I collapse onto the unmade bed.

What I have with me is what I'm going to need to get set up. There is a multi-month delay between when you land and when your stuff gets there, so for a long time you are starting completely fresh. The next day I start the millions of appointments you need to get set up.

Week 1

Alright you've landed, your stuff is on a boat on its way to you. Typically jobs will either put you up in corporate housing to let you find an apartment or they'll stick you in a hotel. You are gonna be overwhelmed at first, so try to take care of the basics. There is a great outline of all the steps here.

It is a pretty extreme culture shock at first. My first night in Denmark was a disaster. I didn't realize you had to buy the shopping bags and just stole a few by accident. So basically within 24 hours of landing I was already committing crimes. My first meal included a non-alcoholic beer because I assumed Carlsberg Nordic meant "lots of booze" not "no booze".

When you wake up, have a plan for what you need to get done that day. It's really tiring, you are gonna be jet-lagged, you aren't used to biking, so don't beat yourself up if you only get that one thing done. But you are time limited here so it's important to hit these milestones quickly. You are also going to burn through kind of a lot of cash to get set up. You'll make it up over time, but be aware.

Get a phone plan

You can bring a cellphone from the US and have it work here. Cellphone plans are quite cheap, with a pay as you go sim available for 99 dkk a month with 100 GB of data and 100 hours of talk time. You can get that deal here. If you require an esim, I recommend 3 although it is a bit more. They are here.

Find an apartment
The gold standard for apartment hunting is BoligPortal here. Findboliger was also ok but has much less inventory. You can get a list of all the good websites here.

These services cost money to you. I'm not exactly sure why (presumably because they can so why not). Just remember to cancel once you find the apartment.

Some tips for apartment hunting

  • Moving into an apartment in Denmark can be jaw droppingly expensive. Landlords are allowed to ask for up to 3 months of rent as a deposit AND 3 months of rent before you move in. You may have to pay 6 months of rent before you get a single paycheck from your new job.
  • You aren't going to get back all that deposit. Danish landlord companies are incredibly predatory in how this works. They will act quite casual when you move in, but come back when you move out and will inspect everything for an hour plus. You need to document all damage when you move in, same as in the US. But mentally you should write off half that deposit.
  • After you have moved in, you have 14 days to fill out a list of defects and send it to your landlord.
  • Don't pay rent in cash. If the landlord says pay in cash it's a scam. Move on.
  • See if you have separate meters in your apartment for water/electric. You want this ideally.
  • Fiber internet is surprisingly common in Denmark. In general they have awesome internet. If this is a priority ask the apartment folks about it. Even if the building you are looking at doesn't have it, chances are there is a building they manage that does.
This doesn't have anything to do with this, I just love this picture

Appliances
Danish washers and dryers are great. Their refrigerators suck so goddamn hard. They're small, for some reason a pool of water often forms at the bottom, the seal needs to be reglued from time to time, and stuff freezes if it's anywhere near the back wall. I've never seen a good fridge after three tries so just expect it to be crap.

All the normal kitchen appliances are here, but there are distinct tiers of fancy. Grocery stores like Netto often have cheap appliances like toasters, Ikea sells some, but stay away from the electronics stores like Power unless you know you want a fancy one of them. Amazon Germany will ship to Denmark and that's where I got my vacuum and a few other small items.

Due to the cost of eating out in Denmark you are going to be cooking a lot. So get whatever you need to make that process less painful. Here's what I found to be great:

  • Instant Pot: slow cooker and a rice cooker
  • Salad washer: their lettuce is very dirty
  • Hand blender: if you wanna do soups
  • Microwave: I got the cheapest I could find, weirdly no digital controls just a knob you turn. Not sure why
  • Coffee bean grinder: Pre-ground coffee is always bad, Danish stuff is nightmarish bad
  • Hot water kettle: just get one you'll use it all the time
  • Drip coffee maker: again surprisingly hard to find. Amazon Germany for me.
  • Vacuum

Kitchen Tools

  • Almost all stove-tops are induction so expect to have to buy new pots and pans, don't bring non-induction ones from the US
  • Counter space is limited and there is not a ton of kitchen storage in your average Danish apartment so think carefully about anything you might not need or use on a regular basis
  • Magasin will sell you any exotic tools you might want or need and there are plenty of specialist cooking stores around town

Go visit ICS
You can make an appointment here.

They will get you set up with MitID, the digital ID service. This is what you use to log into your bank account, government websites, the works. They'll also get you your yellow card as well as sign you up for your doctor. The process is pretty painless.

Bank

  • pick whichever you want, bring your US passport, Danish yellow card and employment contract
  • it takes forever, so also maybe a book
  • they'll walk you through what you need there but it's pretty straight forward
  • credit card rewards don't exist in Denmark and you don't really need a credit card for anything

If the bank person tells you they need to contact the US, ask to speak to someone else. I'm not sure why some Danish bank employees think this, but there is nobody at the US Department of Treasury they can speak to. It was a bizarre roadblock that left me trying to hunt down who they would be talking to at a giant federal organization. In the end another clerk explained she was wrong and just set me up, but I've heard this issue from other Americans so be aware.

I did enjoy how the woman was like "I'll just call the US" and I thought I am truly baffled at who she might be calling.

First night

Moving In

  • Danish apartments don't come with light fixtures installed. This means your first night is gonna be pretty dark if you aren't prepared. Trust me, I know from having spent my first night sleeping on the floor in the dark because I assumed I would have lights to unpack stuff. You are gonna see these on the wall:

Here's the process to install a light fixture:

  1. Turn off the power
  2. Pop the inner plastic part out with a screwdriver
  3. Put the wire from the light fixture through the hole
  4. Strip the cables from the light fixture like 4 cm
  5. Insert the two leads of your lamp into the N and M1 terminals
  6. If colored, the blue wire goes into N and the brown wire into M1
  7. If not colored it shouldn't matter

Here is a video that walks you through it.

You are gonna wanna do this while the sun is out for obvious reasons so plan ahead.

Buying a Bike

See me wearing jeans? Like a fucking idiot?

Your bike in Denmark is going to be your primary form of transportation. You ride it, rain or shine, everywhere. You'll haul groceries on it, carry Ikea stuff home on it, this thing is going to be a giant part of your life. Buying one is....tricky. You want something like this:

Here's the stuff you want:

  • Mudguards, Denmark rains a lot
  • Kevlar tires. Your bike tires will get popped at the worst possible moments, typically during a massive downpour.
  • Basket. You want a basket on the front and you want them to put it on. Sometimes men get weird about this but this isn't the time for that. Just get the basket.
  • Cargo rack on the back.
  • Wheel lock, the weird circular lock on the back wheel. It's what keeps people from stealing it (kinda). You also need a chain if the bike is new.
  • Lights, ideally permanently mounted lights. They're a legal requirement here for bikes and police do give tickets.
  • If you haven't changed a tube on a bike in awhile, practice it. You'll have to do it on the road sometime.
  • Get a road tool kit.
  • Get a flashlight in this tool kit because the sun sets early in the winter in Denmark and hell is trying to swap a tube in the dark by the light of a cellphone while it's raining.
  • If you can get disc brakes, they're less work and last longer
  • Minimum three gears, five if you can.
  • Denmark always has a bike lane. Never ride with traffic.
You need all that.

It doesn't have to be that one, but it should have everything that one does, plus a flashlight.

Bike Ownership

  • home insurance mostly covers your bike, but make sure you have that option (and get home insurance)
  • write down the frame number off the bike; it's also on the receipt. You need it for insurance claims
  • You should lubricate the chain every week with daily use and clean the chain at least once a month. A lot of people don't and end up with very broken bikes
  • Danes use hand signals to indicate turns and stops.

You are expected to use these every time.

  • Danes are very serious about biking. You need to treat it like driving a car. Stay to the right unless you are passing, don't ride together blocking people from passing, move out of the way of people who ring their bells.
  • Never ever walk in a bike lane
  • Wear a helmet
  • Buy rain gear. It rained every morning on my way to work for a month when I first moved here. I got hit in the eye with hail and fell off the bike. You need gear.

Rain Gear

Rain jackets: regnjakker
The best stuff is https://www.hellyhansen.com/en_dk/, or McKinley on a budget.

Rain pants: regnbukser
I love the Patagonia rain pants because they're not just hot rubber pants. Get some with air slots if you can.

You can grab a full set here if you don't want to mix and match: https://www.spejdersport.dk/asivik-rain-regnsaet-dame

Rain boots:
Tretorn is the brand to beat. You can grab that here: https://www.tretorn.dk/ They also sell all the gear you need.

Backpack:
Get a waterproof backpack, and a rain cover for it as well. I'm not kidding when I say it rains a lot. Rain covers are sold everywhere; I used a shopping bag for two months when I kept forgetting mine.

Alright you got your apartment, yellow card, bank account, bike and rain gear. You are ready to start going to work. Get ready for Danish work culture, which is pretty different from US work culture.

Work

Danish work can be a rough adjustment for someone who grew up in the American style of work. I'll try to guide you through it. Danes have to work 37 hours a week, but in practice this can be a bit flexible. You'll want to be there at 9 your first day, but don't be shocked if you are pretty alone when you get there. Danes often get to work a little later.

You'll want to join your union. You aren't eligible for the unemployment payouts since you are here on a work visa, but the union is still the best place to turn to in Denmark to find out whether something is allowed or not. They're easy to talk to; with my union, I submit an email and get a call the next day. They are also the ones who track what salaries are across the industry and whether you are underpaid. This is critical to salary negotiation and can be an immense amount of leverage when sitting down with your boss or employer.

Just another day biking to work

Seriously, join a union

If you get fired in Denmark, you have the right to get your union in there to negotiate the best possible exit package for you. I have heard a lot of horror stories from foreigners moving to Denmark about not getting paid, about being lied to about what to do if they get hurt on the job, the list goes on and on. This is the group that can help you figure out what is and isn't allowed. They're a bargain at twice the price.

Schedules tend to be pretty relaxed in Denmark as long as you are hitting around that 37. It's socially acceptable to take an hour to run an appointment or take care of something. Lunches are typically short, like 30 minutes, with most workplaces providing food you pay for in a canteen. It's cheaper than bringing lunch and usually pretty good. A lot of Danes are vegetarian or vegan so that shouldn't be a problem.

Titles don't mean anything

This can be tricky for Americans who see "CTO" or "principal engineer" and act really deferential. Danes will give (sometimes harsh) feedback to management pretty often. This is culturally acceptable where management isn't really "above" anyone, it's just another role. You really want to avoid making decisions that impact other people without their approval, or at least the opportunity to give that approval, even in high management positions.

Danish work isn't the same level of competitive as US/China/India

As an American, if you want a high-paying job you need a combination of luck, family background and basically winning a series of increasingly tight competitions. You need to do well in high school and on standardized tests to get into an ok university, where you need to major in the right thing to make enough money to pay back what you borrowed to attend. You need a job that offers good enough health insurance that you don't declare bankruptcy with every medical issue you encounter.

US tech interviews are grueling, multi-day affairs involving a phone screen, a take-home, and an on-site personality and practical exam, AND the job can fire you at any second with zero warning. You have to be consistently providing value on a project the executive level cares about. So it's not even enough to be doing a good job; you have to do a good job on whatever hobby project is hot that quarter.

Danes don't live in that universe. They are competitive people in terms of sports or certain schools, but they don't have the "if I fail I'm going to be in serious physical distress" pressure. So things like job titles, which to Americans are "how I tell you how important I am", mean nothing here. Don't try to impress with a long list of your previous titles; just be like "I worked a bunch of places and here's what I did". Always shoot for casual, not panicked and intense.

Cultural Norms

Dress is pretty casual. I've never seen people working in suits and ties outside of a bank or government office. There isn't AC in most places, so dress in the summer for comfort. Typically once a week someone brings in cake and there are beers or sodas provided by the workplace. Friday beer is actually kind of important and you don't want to always skip it. It's one of the big bonding opportunities in Denmark among coworkers.

Many things considered taboo in American workplaces are fine here. You are free to discuss salary and people often will. You are encouraged to join a union, which I did and found to be worthwhile. They'll help with any dispute or provide you with advice if you aren't sure whether something is allowed. Saying you need to leave early is totally fine. Coffee and tea are always free, but soda isn't, and drinking soda every day isn't really encouraged at any workplace I've been at in Denmark.

There are requirements around desk ergonomics which means you can ask for things like a standing desk, ergonomic mouse and keyboard, standing pad, etc. Often workplaces will bring in someone to assess desks and provide recommendations, which can be useful. If you need something ask for it. Typically places will provide it without too much hassle.

Working Late/On-Call

It happens, but a lot less. Typically if you work after-hours or late you would be expected to get that time back later by leaving early or coming in late. The 37 hours covers all hours worked. The rules for on-call are a bit mixed and, as far as I know, aren't formally defined anywhere. Just be aware that your boss shouldn't be asking you to work late, and unlike the US, being on salary doesn't mean you can be asked to work unlimited hours in a week.

Vacation

Danish summer isn't bad

Danish vacation is mostly awesome. Here's the part that kinda stinks. Some jobs will ask that you use a big chunk of your vacation over a summer holiday, which is two or three weeks when the office is closed sometime between May 1 and September 30. Your boss can require that you use your vacation during this period, which is a disaster for foreigners. The reason is you don't have anywhere to go, everything in Denmark is already booked during the summer vacation, and everything travel-related is more expensive.

Plus you'll probably want to spend more of that vacation back home with family. So try to find a job that doesn't mandate when you use your vacation. Otherwise you'll be stuck either flying out at higher prices or doing a lame staycation in your apartment while everyone else flees to their summer houses in Jutland.

Conclusion

Is it worth it? I think so. You'll feel the reduction in stress within six months. For the first time maybe in your entire adult life, you'll have time to explore new hobbies. Wanna try basketweaving or kayaking or horseback riding? There's a club for that. You'll also have the time to try those things. It sounds silly but the ability to just relax during your off-time and not have to do something related to tech at all has had a profound impact on my stress levels.

Some weeks are easier than others. You'll miss home. It'll be sad. But you can push through and adapt if you want to. If I missed something or you need more information, please reach out at [email protected] on the Fediverse. Good luck!