
Grief and Programming

Once again I asked, “Father, I want to know what a dying person feels when no one will speak with him, nor be open enough to permit him to speak, about his dying.”
The old man was quiet, and we sat without speaking for nearly an hour. Since he did not bid me leave, I remained. Although I was content, I feared he would not share his wisdom, but he finally spoke. The words came slowly.
“My son, it is the horse on the dining-room table. It is a horse that visits every house and sits on every dining-room table—the tables of the rich and of the poor, of the simple and of the wise. This horse just sits there, but its presence makes you wish to leave without speaking of it. If you leave, you will always fear the presence of the horse. When it sits on your table, you will wish to speak of it, but you may not be able to.
The Horse on the Dining-Room Table by Richard Kalish

The Dog

I knew when we walked into this room that they didn't have any good news for me. There's a look people have when they are about to give you bad news, and this line of vets staring at me had it. We were at Copenhagen University Animal Hospital, the best veterinary clinic in Denmark. I had brought Pixel, my corgi, here after a long series of unlikely events had each ended in the worst possible scenario. It started with a black lump on his skin.

We had moved Pixel from Chicago to Denmark, a process that involved a lot of trips to the vet, so when the lump appeared we noticed quickly. A trip to our local vet reassured us, though: she took a look, prescribed antibiotics and told us to take him home and keep the wound clean. It didn't take long for us to realize something worse was happening. He kept scratching the thing open, dripping blood all over the house. My wife and I would wrestle with him in the bathroom every night after we put our newborn baby to bed, trying to clean the wound with the angled squirt bottle we had purchased to help my wife heal after childbirth. He ended up bandaged up and limping around, blood slowly seeping through the bandages.

After a few days the wound started to smell and we insisted the vet take another look. She glanced at it, proclaimed it extremely serious and had us go to the next tier of care. They took care of the lump but told me "we are sure it is cancer" after taking a look at a sample of what they removed. I did some research and was convinced that, yes, while it might be cancer, the chances of it being dangerous were slim. It wasn't uncommon for dogs to develop these lumps, and since it had been cleanly removed with good margins, we had a decent chance.

Except we didn't. This nice woman, the professor of animal oncology, looked at me while Pixel wandered around the room glancing suspiciously at the vet students. I marveled at how calm they all seemed as she delivered the news. He wasn't going to get better. She didn't know how much time I had. It wasn't going to be a long time. "Think months, not years," she said, trying to reassure me that I at least had some time.

For a month afterward I was in a fog, just trying to cope with the impending loss of my friend. The vision I had, that my daughter would know this dog and get to experience him, was fading, replaced with the sad reality of a stainless steel table and watching a dog breathe his last breath, transforming into a fur rug before your eyes. It was something I had no illusions about, having just done it with our cat who was ill.

Certainly, this would be the worst of it. My best friend, the animal that had come with me from Chicago to Denmark and kept me company alone for my first month in the country, the dog I had purchased in a McDonald's parking lot from a woman who assured me he "loved cocktail wieners and Law and Order". Surely this loss would be enough. I could survive this and recover.

I was wrong though. This was just the beginning of the suffering.

Over the next three months I would be hit with a series of blows unlike anything I've ever experienced. In fast succession I would lose all but one of my grandparents, my brother would admit he suffered from a serious addiction to a dangerous drug and my mother would get diagnosed with a hard-to-treat cancer. You can feel your world shrink as you endlessly play out the worst case scenario time after time.

I found myself afraid to answer the phone, worried that this call would be the one that gave me the final piece of bad news. At the same time, I became desperate for distractions. Anything I could throw myself into, be it reading or writing or playing old PS1 videogames and losing over and over. Finally I hit on something that worked and I wanted to share with the rest of you.

Programming is something that distracted me in a way that nothing else did. It is difficult enough to require your entire focus yet rewarding enough that you still get the small bursts of "well at least I got something done". I'll talk about what I ended up making and what I'm working on, some trial and error I went through and some ideas for folks desperate for something, anything, to distract them from the horrors.

Since this is me, the first thing I did was do a ton of research. I don't know why this is always my go-to but here we are. You read the blog of an obsessive, this is what you get.

What is Grief?

At a basic level, grief is simply the term for one's reactions to loss. When you suffer a large loss, you experience grief. More specifically, grief refers to the heavy toll that weighs on the bereaved. We often discuss it in the context of emotions, the outpouring of feelings that is the most pronounced sign of grief.

In popular culture grief is often presented as 5 stages: denial, anger, bargaining, depression and acceptance. These are often portrayed as linear, with each person passing through each one to get to the next. More modern literature presents a different model. Worden suggested in 2002 that we think of grief as a collection of tasks, rather than stages. The tasks are as follows:

  1. Accept the reality of the loss
  2. Work through the pain of grief
  3. Adjust to an environment in which the deceased is missing
  4. Emotionally relocate the deceased and move on with life

One thing we don't discuss a lot with grief is the oscillation between two different sets of coping mechanisms. This is something Stroebe and Schut (1999) discussed at length, that there is a loss oriented coping mechanism and a restoration oriented coping mechanism for loss.

What this suggests is that, contrary to some discourse on the topic, you should anticipate needing to switch between two different mental modes. There needs to be part of you that is anticipating the issues you are going to face, getting ready to be useful for the people you care about who are suffering.

Aren't these people still alive?

Yes, so why am I so mentally consumed with thinking about this? Well it turns out there is a term for this called "anticipatory grief". First introduced by Lindemann (1944), it's been a topic in the academic writings around grief ever since. At a basic level it is just grief that happens before but in relation to impending death. It is also sometimes referred to as "anticipatory mourning".

One thing that became clear from talking to people who have been through loss and death, and who experienced similar feelings of obsession before the loss actually happened, is that it doesn't take the place of actual grieving and mourning. This shook me to my core. I was going to obsess over this for years, it was going to happen anyway, and then I was still going to go through the same process.

The other thing that made me nervous was how many people reported chronic grief reactions. These are the normal stages of grief, only they are prolonged in duration and do not lead to a reasonable outcome. You get stuck in the grieving process. This is common enough that it is being proposed as a new DSM category of Attachment Disorders. For those that are curious, here are the criteria:

  • Criterion A. Chronic and disruptive yearning, pining, and longing for the deceased
  • Criterion B. The person must have four of the following eight remaining symptoms at least several times a day or to a degree intense enough to be distressing and disruptive:
  1. Trouble accepting the death
  2. Inability to trust others
  3. Excessive bitterness or anger related to the death
  4. Uneasiness about moving on
  5. Numbness/detachment
  6. Feeling life is empty or meaningless without the deceased
  7. Envisioning a bleak future
  8. Feeling agitated
  • Criterion C. The above symptom disturbance must cause marked and persistent dysfunction in social, occupational, or other important domains
  • Criterion D. The above symptom disturbance must last at least six months

Clearly I needed to get ready to settle in for the long haul if I was going to be of any use to anyone.

After talking to a support group it was clear the process really has no beginning or end. They emphasized getting a lot of sleep and trying to keep it as regular as possible. Don't end up trapped in the house for days not moving. Set small goals and basically be cautious, don't make any giant changes or decisions.

We talked about what a lot of them did and much of the discussion focused around a spiritual element. I'm not spiritual in any meaningful way. While I can appreciate the architecture of a church or a place of worship, seeking a higher power has never been a large part of my life. The less religious among them talked about creative outlets. These seemed more reasonable to me. "You should consider poetry" one man said. I didn't want to tell him I don't think I've ever really read poetry, much less written it.

What even are "focus" and "distractions"?

I'm not going to lie. I figured I would add this section as a fun "this is how distractions and focus work in the human mind" piece with some graphs and charts. After spending a few days reading through JSTOR, I found no such simple charts or graphs. The exact science behind how our minds work, what focus is and what a distraction is, is a complicated topic so far beyond my ability to do it justice that it is laughable.

So here is my DISCLAIMER. As far as I can tell, we do not know how information is processed, stored or recalled in the brain. We also aren't sure how feelings or thinking even works. To even begin to answer these questions, you would need to be an expert in a ton of disciplines which I'm not. I'm not qualified to teach you anything about any topic related to the brain.

Why don't we know more about the human brain? Basically, neurons are extremely small and the brain is very complicated. Scientists have spent 40 years mapping the brains of smaller organisms, with some success. The way scientists seem to study this is: define the behavior, identify the neurons involved in that behavior, and finally agitate those neurons to see their role in changing the behavior. This is called "circuit dynamics". You can see a really good write-up here.

As you add complexity to the brain being studied, drawing these connections gets more and more complicated. Our tools have improved as well, allowing us to tag neurons with different genes so they show up as different colors when studied, letting us better understand their relationships in more complex creatures.

The retinal ganglion cells of a genetically modified mouse, demonstrating the complexity of these systems through the Brainbow technique.

Anyway brains are really complicated and a lot of the stuff we repeat back to each other about how they work is, at best, educated guesses and at worst deliberately misleading. The actual scientific research happening in the field is still very much in the "getting a map of the terrain" step.

Disclaimer understood, get on with it

Our brains are constantly being flooded with information from every direction. Somehow our mind has a tool to enable us to focus on one piece of information and filter out the distractions. When we focus on something, the electrical activity of the neocortex changes. The neurons there stop signalling in-sync and start firing out of order.

Why? Seemingly to let the neurons respond to different information in different ways. This means I can focus on your voice and filter out the background noise of the coffee shop. It seems the cholinergic system (don't stress, I also had never heard of it) plays a large role in allowing this sync/out-of-sync behavior to work.

It turns out the cholinergic system regulates a lot of stuff. It handles the sensory processing, attention, sleep, and a lot more. You can read a lot more about it here. This system seems to be what is broadcasting to the brain "this is the thing you need to be focused on".

So we know there is a system, based in science, for how focus and distraction work. However, that isn't the only criterion our minds use to judge whether a task was good or bad. We also rely on the dopamine floating around in our skulls to tell us "was that thing worth the effort we put into it". In order for the distraction to do what I wanted, it had to be engrossing enough to trigger my brain into focusing on it and also give me the dopamine to keep me invested.

The Nature of the Problem

There is a reality when you start to lose your family that is unlike anything I've ever experienced. You are being uprooted, the world you knew before is disappearing. The jokes change, the conversations change, you and the rest of your family have a different relationship now. Grief is a cruel teacher.

There is nothing gentle about feeling these entities in your life start to fade away. Quickly you discover that you lack the language for these conversations, long pauses on the phone while you struggle to offer anything up that isn't a glib stupid greeting card saying.

Other people lack the language too and it is hard not to hate them for it. I feel a sense of obligation to give the people expressing their sadness to me something, but I already know as I start talking that I don't have it. People are quick to offer up information or steps, which I think is natural, but the nature of illness is you can't really "work the problem". The Americans obsess over what hospital network she got into, the Danes obsess over how she is feeling.

You find yourself regretting all the certainty you had before. There was an idea of your life and that idea, as it turns out, had very little propping it up. For all the years invested in one view of the universe, this is far bigger. I'm not in it yet, but I can see its edges and interacting with people who are churning in its core scares me. My mind used to wander endlessly, but no more. Left to its own devices it will succumb to intense darkness.

Denial helps us to pace our feelings of grief. There is a grace in denial. It is nature's way of letting in only as much as we can handle.
Elisabeth Kubler-Ross

Outside of the crisis you try and put together the memories and pieces into some sort of story. Was I good to Pixel? Did I pay enough attention to the stories my grandparents told me, the lessons and the funny bits? What kind of child was I? You begin to try and put them together into some sort of narrative, some way of explaining it to other people, but it falls apart. I can't edit without feeling dishonest. Without editing it becomes this jumble of words and ideas without a beginning or end.

I became afraid of my daughter dying in a way I hadn't before. Sure I had a lot of fear going into the birth process and I was as guilty as anyone of checking on her breathing 10 times a night at first. The first night with her in the hospital, me lying there on a bed staring at the ceiling while our roommate sobbed through the night, her boyfriend and baby snoozing away peacefully, I felt this fear for the first time. But I dealt with it, ran the numbers, looked at the stats, did the research.

Now I feel cursed with the knowledge that someone has to be the other percent. Never did I really think I was blessed, but I had managed to go so long without a disaster that it felt maybe like I would continue to skate forever. Then they hit all at once, stacked on each other. Children can die from small falls, choking on reasonable pieces of food or any of a million other things. Dropping her off at daycare for the first time, I felt the lack of control and almost turned around.

You can't curse your children to carry your emotional weight though. It is yours, the price of being an adult. My parents never did and I'm not going to be the first. A lot of stuff talks about solving grief but that's not even how this really works. You persist through these times, waiting for the phone call you know is coming. That's it, there isn't anything else to do or say about it. You sit and you wait and you listen to people awkwardly try to reassure you in the same way you once did to others.

Why Programming?

Obviously I need something to distract me. Like a lot of people in the tech field, I tend to hobby-bounce. I had many drawers full of brief interests where I would obsess over something for a month or so, join the online community for it, read everything I could find about it and then drop it hard. When I moved from Chicago to Denmark though, all that stuff had to go. Condense your life down to a cargo container and very soon you need to make hard choices about what comes and what gets donated. This all means that I had to throw out any idea which involved the purchase of a lot of stuff.

Some other obvious options were out, unfortunately. Denmark doesn't have a strong volunteering movement, which makes sense in the context of "if the job is worth doing you should probably pay someone to do it". My schedule with a small child wasn't amenable to a lot of hours spent outside of the house, with the exception of work.

My list of requirements looked like this:

  • Not a big initial financial investment
  • Flexible schedule
  • Gave me the feeling of "making something"
  • Couldn't take up a lot of physical space

How to make it different from work

From the beginning I knew I had to make this different from what I do at work. I write code at work, and I didn't want this to turn into something where I end up working on stuff for work at all hours. So I laid out a few basic rules to help keep the two things separate from the get-go.

  • Use the absolute latest bleeding beta release of the programming language. Forget stability, get all the new features.
  • Don't write tests. I love TDD but I didn't want this to be "a work product".
  • Do not think about if you are going to release it into the public. If you want to later, go for it, but often when I'm working on a small fun project I get derailed thinking "well I need tests and CI and to use the correct framework and ensure I have a good and simple process for getting it running" etc etc.
  • Try to find projects where there would be a good feedback loop of seeing progress.
  • If something isn't interesting, just stop it immediately.

I started with a tvOS app. The whole OS basically exists to play video content, it is pretty easy to write and test, and it is fun. Very quickly I felt it working, this distraction where I felt subconsciously I was allowed to be distracted because "this is productive and educational". It removed the sense of guilt I had when I was playing videogames or reading a book, tortured by a sense of "you should be focusing more on the members of your family that are going through this nightmare".

The tutorial I followed was this one. Nothing but good things to say about the experience, but I found a lot of the layout stuff to not really do much for me. I've never been that interested in CSS or web frontend work, not that it isn't valuable work I just don't have an eye for it. On to the next thing then.

From there I switched to GUI apps, based in large part on this post. I use CLI tools all the time, so I thought it might be nice to make simple Python GUIs of commonly used tools for myself. I started small and easy, since I really wanted a fast feeling of progress.

The only GUI framework I've ever used before is tkinter, which is convenient because it happens to be an idiot-proof GUI toolkit. Here's the basic Hello World example:

from tkinter import *
from tkinter import ttk

root = Tk()                        # the main window
frm = ttk.Frame(root, padding=10)  # a frame to hold our widgets
frm.grid()                         # place the frame with the grid geometry manager
ttk.Label(frm, text="Hello World!").grid(column=0, row=0)
ttk.Button(frm, text="Quit", command=root.destroy).grid(column=1, row=0)
root.mainloop()                    # hand control over to the event loop

Immediately you should see some helpful properties. First, it's a grid, so adding widgets or changing the relative position of the label is easy. Second, we have a lot of flexibility with things that can sometimes be a problem with GUIs, like auto-resizing.

Adding window.columnconfigure(i, weight=1, minsize=75) and window.rowconfigure(i, weight=1, minsize=50) to the containing window allows us to quickly add resizing, and by changing the weights we can change which column or row grows at what rate, as shown in the sketch below.
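Here's a minimal sketch of that in practice (my own variation on the Hello World above, gridding the widgets straight into the root window so the weights apply to it):

import tkinter as tk
from tkinter import ttk

root = tk.Tk()

# Give both columns and the single row a weight so they grow with the window;
# minsize keeps them from collapsing entirely
for i in range(2):
    root.columnconfigure(i, weight=1, minsize=75)
root.rowconfigure(0, weight=1, minsize=50)

ttk.Label(root, text="Hello World!").grid(column=0, row=0, sticky="nsew")
ttk.Button(root, text="Quit", command=root.destroy).grid(column=1, row=0, sticky="nsew")

root.mainloop()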

For those unaware of how GUI apps work, it's mostly based on Events and Event Handlers. Something happens (a click, or a key is pressed) and you execute something as a result. See the Quit button above for a basic example. Very quickly you can start making fun little GUI apps that do whatever you want. The steps look like this:

  1. Import tkinter and make a new window

import tkinter as tk

window = tk.Tk()
window.title("Your Sweet App")
window.resizable(width=False, height=False)

  2. Make some widgets, like an Entry and a Label, and assign them to a Frame

start = tk.Frame(master=window)
output_value = tk.Entry(master=start, width=10)
input_label = tk.Label(master=start, text="whatever")

  3. Set up your layout, add a button to trigger an action and then define a function which you call on the Button with the command argument, as in the sketch below.
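Putting those three steps together, a toy version might look like this (the Shout button and its handler are my own illustration, not something from the tutorial):

import tkinter as tk

window = tk.Tk()
window.title("Your Sweet App")
window.resizable(width=False, height=False)

start = tk.Frame(master=window)
start.pack(padx=10, pady=10)

output_value = tk.Entry(master=start, width=10)
result_label = tk.Label(master=start, text="")

def shout():
    # Event handler: read the Entry, transform the text, show it in the Label
    result_label["text"] = output_value.get().upper()

button = tk.Button(master=start, text="Shout", command=shout)

# Lay everything out on the frame's grid
output_value.grid(row=0, column=0)
button.grid(row=0, column=1, padx=5)
result_label.grid(row=1, column=0, columnspan=2)

window.mainloop()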

Have no idea what I'm talking about? No worries, I just did this tutorial and an hour later I was back up and running. You can make all sorts of cool stuff with Ttk and Tkinter.

Defining functions to run basic shell commands and output their results was surprisingly fun. I started with really basic ones like host.

  1. Check to make sure the command exists with shutil.which
  2. Run the command and format the output
  3. Display it in the GUI (see the sketch after this list)
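Here's a rough sketch of those three steps wrapped around host (a toy reconstruction for illustration, not the exact code from my app):

import shutil
import subprocess
import tkinter as tk

def run_host(domain: str) -> str:
    # 1. Make sure the command exists before trying to run it
    if shutil.which("host") is None:
        return "host is not installed on this machine"
    # 2. Run the command and capture its output
    proc = subprocess.run(["host", domain], capture_output=True, text=True)
    return proc.stdout or proc.stderr

window = tk.Tk()
window.title("host lookup")

entry = tk.Entry(window, width=30)
output = tk.Label(window, justify="left", anchor="w")

def lookup():
    # 3. Display the formatted output in the GUI
    output["text"] = run_host(entry.get())

entry.grid(row=0, column=0, padx=5, pady=5)
tk.Button(window, text="Lookup", command=lookup).grid(row=0, column=1, padx=5)
output.grid(row=1, column=0, columnspan=2, sticky="w", padx=5, pady=5)

window.mainloop()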

After that I made a GUI to check DKIM, SPF and DMARC. Then a GUI for seeing all your AWS stuff and deleting it if you wanted to. A simple website uptime checker that made a lambda for each new website you added in AWS and then had the app check an SQS queue when it opened to confirm the status. These things weren't really useful but they were fun and different.
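The SPF and DMARC checks mostly boil down to reading TXT records. The standard library can't do TXT lookups, so a quick-and-dirty version can just shell out to nslookup (a sketch under that assumption, not the code from my app):

import subprocess

def get_txt_records(name: str) -> list[str]:
    # Shell out to nslookup since the stdlib socket module has no TXT record support
    proc = subprocess.run(
        ["nslookup", "-type=TXT", name],
        capture_output=True, text=True,
    )
    # On most platforms nslookup prints TXT answers on lines containing 'text ='
    return [line.strip() for line in proc.stdout.splitlines() if "text =" in line]

# SPF lives on the bare domain, DMARC on the _dmarc subdomain
print(get_txt_records("example.com"))         # look for v=spf1
print(get_txt_records("_dmarc.example.com"))  # look for v=DMARC1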

The trick is to think of them like toys. They're not projects like you have at work, it's like Legos in Python. It's not serious and you can do whatever you want with them. I made a little one to check if any of the local shops had a bunch of hard to find electronics for sale. I didn't really want the electronics but it burned a lot of time.

Checking in

After this I just started including anything random I felt like including. Config files in TOML, storing state in SQLite, whatever seemed interesting. All of it is super easy in Python and with the GUI you get that really nice instant feedback. I'm not pulling containers or doing anything that requires a ton of setup. It is all super fast. This has become a nice nightly ritual. I get to sit there, open up the tools that I've come to know well, and very slowly change these various little GUIs. It's difficult enough to fully distract, to suck my whole mind into it.
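To give a flavor of how little ceremony this takes, here's a small sketch (the config.toml file and the table layout are made up for illustration; tomllib needs Python 3.11+):

import sqlite3
import tomllib  # stdlib in 3.11+; the tomli package backports it to older versions

# Read settings from a TOML config file
with open("config.toml", "rb") as f:
    config = tomllib.load(f)

# Keep state in a local SQLite file, no server needed
db = sqlite3.connect(config.get("db_path", "state.db"))
db.execute("CREATE TABLE IF NOT EXISTS checks (site TEXT, status TEXT, checked_at TEXT)")
db.execute(
    "INSERT INTO checks VALUES (?, ?, datetime('now'))",
    ("matduggan.com", "up"),
)
db.commit()

for row in db.execute("SELECT * FROM checks ORDER BY checked_at DESC LIMIT 5"):
    print(row)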

I can't really tell you what will work for you, but hopefully this at least gave you an idea of something that might help. I wish I had something more useful to say to those of you reading this while suffering. I'm sorry. It seems to be the only honest sentiment I have heard.


Consider using Kubernetes ephemeral debug container

I'll admit I am late to learning these exist. About once a month, maybe more, I need to spin up some sort of debug container inside of Kubernetes. It's usually for something trivial like checking networking or making sure DNS isn't being weird, and it normally happens inside of our test stack. Up until this point I've been using a conventional deployment, but it turns out there is a better option: kubectl debug.

It allows you to attach containers to a running pod, meaning you don't need to bake the troubleshooting tools into every container (because you shouldn't be doing that). The Kubernetes docs go into a lot more detail.

In combination with this, I found some great premade debug containers. Lightrun makes these Koolkits in a variety of languages. I've really found them to be super useful and well-made. You do need Kubernetes v1.23 or above, but the actual running is super simple.

kubectl debug -it <POD_NAME> --image=lightrun-platform/koolkits/koolkit-node --image-pull-policy=Never --target=<CONTAINER_NAME>

In addition to node they have Golang, Python and JVM. With Python it's possible to debug most networking problems with simple commands. For example, a DNS lookup is just:

import socket

socket.getaddrinfo("matduggan.com", 443)  # the port argument is required; pass None if you only care about the addresses


I regularly use the socket library for tons of networking troubleshooting and ensuring ports and other resources are not blocked by a VPC or other security policy. You can find those docs here.
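For a sense of what that looks like, a TCP reachability check is only a few lines (the host and port here are just examples):

import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    # Attempt a TCP connection; success means nothing between us and the host is blocking it
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("matduggan.com", 443))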

For those in a datacenter consider using the excellent netmiko library.


Programming in the Apocalypse

A climate change what-if 

Over a few beers...

Growing up, I was obsessed with space. Like many tech workers, I consumed everything I could find about space and space travel. Young me was convinced we were on the verge of space travel, imagining I would live to see colonies on Mars and beyond. The first serious issue I encountered with my dreams of teleporters and warp travel was the disappointing results of the SETI program. Where was everyone?

In college I was introduced to the Great Filter. Basically a bucket of cold water was dumped on my obsession with the Drake equation as a kid.

When I heard about global warming, it felt like we had plenty of time. It was the 90s, anything was possible, we still had time to get ahead of this. Certainly this wouldn't be the thing that did our global civilization in. We had so much warning! Fusion reactors were 5-7 years away according to my Newsweek for kids in 1999. This all changed when I met someone who worked on wind farm design in Chicago who had spent years studying climate change.

She broke it down in a way I'll never forget. "People talk about global warming in terms of how it makes them feel. They'll discuss being nervous or unsure about it. We should talk about it like we talk about nuclear weapons. The ICBM is on the launch pad, it's fueled and going to take off. We probably can't stop it at this point, so we need to start planning for what life looks like afterwards."

The more I read about global warming, the more the scope of what we were facing sunk in. Everything was going to change, religion, politics, economics and even the kinds of food we eat. The one that stuck with me was saffron for some odd reason. The idea that my daughter might not get to ever try saffron was so strange. It exists, I can see it and buy it and describe what it tastes like. How could it and so many other things just disappear? Source

To be clear, I understand why people aren't asking the hard questions. There is no way we can take the steps to start addressing the planet burning up and the loss of so much plant and animal life without impacting the other parts of our lives. You cannot consume your way out of this problem, we have to accept immense hits to the global economy and the resulting poverty, suffering and impact. The scope of the change required is staggering. We've never done something like this before.

However let us take a step back. I don't want to focus on every element of this problem. What about just programming? What does this field, along with the infrastructure and community around it, look like in a world being transformed as dramatically as when electric light was introduced?

In the park a few nights ago (drinking in public is legal in Denmark), two fellow tech workers and I discussed just that. Our backgrounds are different enough that I thought the conversation would be interesting to the community at large and might spark some good discussion. It was a Danish person, a Hungarian and me, an American. We ended up debating for hours, which suggests to me that maybe folks online would end up doing the same.

The year is 2050

The date we attempted to target for these conversations is 2050. While we all acknowledged that it was possible some sort of magical new technology comes along which solves our problems, it felt extremely unlikely. With China and India not even starting the process of coal draw-down yet it doesn't appear we'll be able to avoid the scenario being outlined. The ICBM is going to launch, as it were.

It's quite a bit hotter

Language Choice

Language popularity based on GitHub as of Q4 2021 Source

We started with a basic question: "What languages are people using in 2050 to do their jobs?" Some came pretty quickly, with all of us agreeing that C, C# and PHP were likely to survive in 2050. There are too many systems out there that rely on those languages, and we all felt that wide-scale replacement of existing systems in 2050 was likely to slow or possibly stop. In a world dealing with food insecurity and constant power issues, the appetite to swap existing working systems will, we suspect, be low.

These graphs, and others like them, suggest a very different workflow from today's. Due to heat and power issues, it is likely that disruptions to home and office internet will be a much more common occurrence. As flooding and sea level rise disrupt commuting, working asynchronously is going to be the new norm. So we think the priorities in 2050 might look like this:

  • dependency management has to be easy and ideally not something you need to touch a lot, with caching similar to pip.
  • reproducibility without the use of bandwidth heavy technology like containers will be a high priority as we shift away from remote testing and go back to working on primarily local machines. You could also just be more careful about container size and reusing layers between services.
  • test coverage will be more important than ever as it becomes less likely that you will be able to simply ping someone to ask them a question about what they meant or just general questions about functionality. TDD will likely be the only way the person deploying the software will know whether it is ready to go.

Winners

Based on this I felt like some of the big winners in 2050 will be Rust, Clojure and Go. They mostly nail these criteria and are well-designed for not relying on containers or other similar technology for basic testing and feature implementation. They are also (reasonably) tolerant of spotty internet presuming the environment has been set up ahead of time. Typescript also seemed like a clear winner both based on the current enthusiasm and the additional capability it adds to JavaScript development.

There was a bit of disagreement here around whether languages like Java would thrive or decrease. We went back and forth, pointing to its stability and well-understood running properties. I felt like, in a world where servers wouldn't be endlessly getting faster and you couldn't just throw more resources at a problem, Clojure or something like it would be a better horse. We did agree that languages with lots of constantly updated packages like Node with NPM would be a real logistical struggle.

This all applies to new applications, which we all agreed probably won't be what most people are working on. Given a world flooded with software, it will consume a lot of the existing talent out there just to keep the existing stuff working. I suspect folks won't have "one or two" languages, but will work wherever the contracts are.

Financial Support

I think we're going to see the number of actively maintained programming languages drop. If we're all struggling to pay our rent and fill our cabinets with food, we aren't going to have a ton of time to donate to extremely demanding after-hours volunteer jobs, which is effectively what maintaining most programming languages is. Critical languages, or languages which give certain companies advantages, will likely have actual employees allocated for their maintenance, while others will languish.

While the above isn't the most scientific study, it does suggest that the companies who are "good" OSS citizens will be able to greatly steer the direction of which languages succeed or fail. While I'm not suggesting anyone is going to lock these languages down, their employees will be the ones actually doing the work and will be able to make substantive decisions about their future.

Infrastructure

In a 2016 paper in Global Environmental Change, Becker and four colleagues concluded that raising 221 of the world’s most active seaports by 2 meters (6.5 feet) would require 436 million cubic meters of construction materials, an amount large enough to create global shortages of some commodities. The estimated amount of cement — 49 million metric tons — alone would cost $60 billion in 2022 dollars. Another study that Becker co-authored in 2017 found that elevating the infrastructure of the 100 biggest U.S. seaports by 2 meters would cost $57 billion to $78 billion in 2012 dollars (equivalent to $69 billion to $103 billion in current dollars), and would require “704 million cubic meters of dredged fill … four times more than all material dredged by the Army Corps of Engineers in 2012.”
“We’re a rich country,” Becker said, “and we’re not going to have nearly enough resources to make all the required investments. So among ports there’s going to be winners and losers. I don’t know that we’re well-equipped for that.”

Source

Running a datacenter requires a lot of parts. You need replacement parts for a variety of machines from different manufacturers and eras, the ability to get new parts quickly and a lot of wires, connectors and adapters. As the world is rocked by a new natural disaster every day and seaports around the world become unusable, it will simply be impossible to run a datacenter without a considerable logistics arm. It will not be possible to call up a supplier and have a new part delivered in hours or even days.

Basic layout of what we mean when we say "a datacenter"

It isn't just the servers. The availability of data centers depends on the reliability and maintainability of many sub-systems (power supply, HVAC, safety systems, communications, etc). We currently live in a world where many cloud services have SLAs with numbers like 99.999% availability, along with heavy penalties for failing to reach the promised availability. For the datacenters themselves, we have standards like TIA-942 and GR-3160 which determine what a correctly run datacenter looks like and how we define reliable.

COVID-related factory shutdowns in Vietnam and other key production hubs made it difficult to acquire fans, circuit breakers and IGBTs (semiconductors for power switching), according to Johnson. In some cases, the best option was to look outside of the usual network of suppliers and be open to paying higher prices.
“We’ve experienced some commodity and freight inflation,” said Johnson. “We made spot buys, so that we can meet the delivery commitments we’ve made to our customers, and we have implemented strategies to recapture those costs.”
But there are limits, said Johnson.
“The market is tight for everybody,” said Johnson. “There’s been instances where somebody had to have something, and maybe a competitor willing to jump the line and knock another customer out. We’ve been very honorable with what we’ve told our customers, and we’ve not played that game.” Source
The map of areas impacted by Global Warming and who makes parts is pretty much a one to one. 

We already know what kills datacenters now and the list doesn't look promising for the world of 2050.

A recent report (Ponemon Institute 2013) states the top root causes of data center failures: UPS system failure, accidental/human error, cybercrime, weather-related incidents, water, heat or CRAC failure, generator failure and IT equipment failure.
Expect to see a lot more electronic storage cabinets for parts

In order to keep a service up and operational you are going to need to keep more parts in stock, standardize models of machines and expect to deal with a lot more power outages. This will cause additional strain on the diesel generator backup systems, which currently have 14.09 failures per million hours (IEEE std. 493 Gold Book) and an average repair time of 1257 hours. In a world where parts take weeks or months to arrive, these repair numbers might change completely.
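As a quick back-of-the-envelope using those numbers (standard MTBF/MTTR availability math, my arithmetic rather than anything from the Gold Book itself):

failures_per_million_hours = 14.09
mttr_hours = 1257  # mean time to repair from the figures above

mtbf_hours = 1_000_000 / failures_per_million_hours    # ~71,000 hours, about 8 years
availability = mtbf_hours / (mtbf_hours + mttr_hours)  # ~0.983

print(f"MTBF: {mtbf_hours:,.0f} hours")
print(f"Availability: {availability:.3%}")
print(f"Expected downtime: {(1 - availability) * 8760:.0f} hours per year")

Even with today's supply chains that works out to roughly 150 hours of generator downtime a year; stretch the repair time out by a factor of a few and the availability falls off quickly.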

So I would expect the cloud to be the only realistic way to run anything, but I would also anticipate that just keeping the basic cloud services up and running will be a massive effort for whoever owns the datacenters. Networking is going to be a real struggle as floods, fires and power outages make the internet more brittle than it has been since its modern adoption. Prices will be higher as the cost for the providers will be higher.

Expectations

  • You will need to be multi-AZ and expect to cut over often.
  • Prices will be insane compared to now
  • Expect CPU power, memory, storage, etc to mostly stay static or drop over time as the focus becomes just keeping the lights on as it were
  • Downtime will become the new normal. We simply aren't going to be able to weather this level of global disruption with anything close to the level of average performance and uptime we have now.
  • Costs for the uptime we take for granted now won't be a little higher, they'll be exponentially higher even adjusted for inflation. I suspect contractual uptime will go from being "mostly an afterthought" to an extremely expensive conversation taking place at the highest levels between the customer and supplier.
  • Out of the box distributed services like Lambdas, Fargate, Lightsail, App Engine and the like will be the default since then the cloud provider is managing what servers are up and running.

For a long time we treated uptime as a free benefit of modern infrastructure. I suspect soon the conversation around uptime will be very different as it becomes much more difficult to reliably staff employees around the world to get online and fix a problem within minutes.

Mobile

By 2050, under an RCP 8.5 scenario, the number of people living in areas with a nonzero chance of lethal heat waves would rise from zero today to between 700 million and 1.2 billion (not factoring in air conditioner penetration). Urban areas in India and Pakistan may be the first places in the world to experience such lethal heatwaves (Exhibit 6). For the people living in these regions, the average annual likelihood of experiencing such a heat wave is projected to rise to 14 percent by 2050. The average share of effective annual outdoor working hours lost due to extreme heat in exposed regions globally could increase from 10 percent today to 10 to 15 percent by 2030 and 15 to 20 percent by 2050.

Link

We are going to live in a world where the current refugee crisis is completely eclipsed. Hundreds of millions of people are going to be roving around the world trying to escape extreme weather. For many of those people, we suspect mobile devices will be the only way they can interact with government, social services, family and friends. Mobile networks will be under extreme strain, as even well-stocked telecoms struggle with floods of users attempting to escape the heat and weather disasters.

Mobile telecommunications, already essential to life in the smartphone era, is going to be as critical as power or water. Not just for users, but for the various ICT devices embedded in the world around us. 5G has already greatly increased our ability to serve many users, with some of that work happening in the transition from FDD to TDD (explanation). So in many ways we're already getting ready for a much higher level of dependence on mobile networks through the migration to 5G. The breaking of the service into "network slices" allows for effective abstraction away from the physical hardware, but obviously doesn't eliminate the physical element.

Mobile in 2050 Predictions

  • Phones are going to be in service for a lot longer. We all felt that both Android, with its aggressive backwards compatibility, and Apple, with its long support for software updates, were already laying the groundwork for this.
  • Even among users who have the financial means to buy a new phone, it might be difficult for companies to manage their logistics chain. Expect long waits for new devices unless there is domestic production of the phone in question.
  • The trick to hardware launches and maintenance is going to be less about innovation and more about securing multiple independent supply chains
  • Mobile will be the only computing people have access to and they'll need to be more aware of data pricing. There's no reason to think telecoms won't take advantage of their position to increase data prices under the guise of maintenance for the towers.
Current graph of mobile data per gig. I'd expect that to increase as it becomes more critical to daily life. Source
Products like smartphones possess hundreds of components whose raw materials are transported from all over the world; the cumulative mileage traveled by all those parts would “probably reach to the moon,” Mims said. These supply chains are so complicated and opaque that smartphone manufacturers don’t even know the identity of all their suppliers — getting all of them to adapt to climate change would mark a colossal achievement. Yet each node is a point of vulnerability whose breakdown could send damaging ripples up and down the chain and beyond it. source

Developers will be expected to make applications that can go longer between updates, can tolerate network disruptions better and in general are less dependent on server-side components. Games and entertainment will be more important as people attempt to escape this incredibly difficult century of human life. Mobile platforms will likely have backdoors forced in by various governments, and any concept of truly encrypted communication seems unlikely.

It is my belief (not the belief of the group) that this will strongly encourage asynchronous communication. Email seems like the perfect tool to me for writing responses and sending attachments in a world with shaky internet. It also isn't encrypted and can be run on very old hardware for a long time. The other folks felt like this was a bit delusional and that all messaging would be happening inside of siloed applications. It would be easier to add offline sync to already popular applications.

Work

Hey, Kiana,
Just wanted to follow up on my note from a few days ago in case it got buried under all of those e-mails about the flood. I’m concerned about how the Eastern Seaboard being swallowed by the Atlantic Ocean is going to affect our Q4 numbers, so I’d like to get your eyes on the latest earnings figures ASAP. On the bright side, the Asheville branch is just a five-minute drive from the beach now, so the all-hands meeting should be a lot more fun this year. Silver linings!
Regards,
Kevin

Source

Offices in 2050 for programmers will have been a thing of the past for a while. It isn't going to make sense to force people to all commute to a single location, not out of love for the employees but as a way to decentralize the business's risk from fires, floods and famines. Programmers, like everyone, will be focused on ensuring their families have enough to eat and somewhere safe to sleep.

Open Source Survival?

Every job I've ever had relies on a large collection of Open Source software, whose survival depends on a small number of programmers having the available free time to maintain and add features to it. We all worried about who, in the face of reduced free time and increased economic pressure, was going to be able to put in the many hours of unpaid labor to maintain this absolutely essential software. This is true for Linux, programming languages, etc.

To be clear, this isn't a problem just for 2050, it is a problem today. In a world where programmers need to work harder just to provide for their families and are facing the same intense pressure as everyone else, it seems unreasonable to imagine that unpaid labor continuing. My hope is that we figure out a way to easily financially support OSS developers, but I suspect it won't be a top priority.

Stability

We're all worried about what happens to programming as a realistic career when we hit an actual economic collapse, not just a recession. As stated above, it seems like a reasonable prediction that just keeping the massive amount of software in the world running will employ a fair amount of us. This isn't just a programming issue, but I would expect full-time employment roles to be few and far between. There will be a lot of bullshit contracting and temp gigs.

The group felt that automation would likely have gotten rid of most normal jobs by 2050 and we would be looking at a world where wage labor wasn't the norm anymore. I think that's optimistic and speaks to a lack of understanding of how flexible a human worker is. You can pull someone aside and effectively retrain them for a lot of tasks in less than 100 hours. Robots cannot do that and require a ton of expense to change their position or function even slightly.

Daily Work

Expect to see a lot more work that involves less minute-to-minute communication with other developers. Open up a PR, discuss the PR, show the test results and then merge in the branch without requiring teams to share a time zone. Full-time employees will serve mostly as reviewers, ensuring the work they are receiving is up to spec and meets standards. Corporate loyalty and perks will be gone; you'll just swap freelance gigs whenever you need to or someone offers more money. It wouldn't surprise me if serious security problems became more common as OSS frameworks and packages go longer and longer without maintenance. If nothing else, it should provide a steady stream of work.

AI?

I don't think it's gonna happen. Maybe after this version of civilization burns to the ground and we make another one.

Conclusion

Ideally this sparks some good conversations. I don't have any magic solutions here, but I hope you don't leave this completely without hope. I believe we will survive and that the human race can use this opportunity to fundamentally redefine our relationship to concepts like "work" and economic opportunity. But blind hope doesn't count for much.

For maintainers looking to future-proof their OSS or software platforms, start planning for ease of maintenance and deployment. Start discussing "what happens to our development teams if they can't get reliable internet for a day, a week, a month". It is great to deploy an application 100 times a day, but how long can it sit unattended? How does on-call work?

A whole lot is going to change in every element of life and our little part is no different. The market isn't going to save us, a startup isn't going to fix everything and capture all the carbon.


Fig Terminal Auto-complete Review


My entire career there has been a wide gap between CLI applications and GUI applications. GUIs are easier to use at first, they have options to provide more direct feedback to users, but they also can have odd or confusing behavior. Endlessly spinning wheels, error messages that don't mean anything to the user (or don't output any text) and changing UIs between versions has long meant that while GUIs might be easier to use at first they're a less stable target.

CLIs tend to be more stable, are easy to automate against and don't tend to experience as much mysterious behavior. This is why sys admins back in the day would always try to do as much as they could through PowerShell or bash even if a GUI was present. It was just a much better experience....once you got over the initial learning curve. Discovering the options, arguments and what they did was often a time-consuming task that exceeded the technical skill of a casual user.

At first Fig presents itself as just an auto-complete tool and, don't get me wrong, if that is all you are looking for, it does a great job. However there is clearly a much larger ambition with this product to expand the idea of terminal auto-completion into making CLIs much more approachable, baking documentation into the terminal and then allowing you to share that among your entire team for public or private tools. While at present the idea is just starting to become fleshed out, there is a lot of promise here.

So while Fig is a useful tool today, the promise of where Fig could go is really exciting to me. I'm going to focus mostly on what the product is now, but felt I needed to mention the obvious hooks they are building into the application and the overall framework they are constructing.

Disclaimer: I haven't been asked to write this, nor paid to do so. I have zero affiliation with Fig and I'm not important enough to "move the needle" on them.

TL;DR

Fig is a Node server/Rust client local application that runs on your local machine and inserts autocompletes for most of the commonly used CLIs and programming languages. It's basically an IDE autocomplete but across a wide range of standard tooling. It has well-written and clearly defined privacy rules, a robust selection of settings for personalization and in my testing seemed....remarkably feature complete. I have no idea how they intend to make money with this product, but give it a shot before they run out of VC money and end up charging you per autocomplete or something. Grab your download here.

What is auto-completion?

When we say "auto-completion", what we typically mean is one of two things for programmers. The first is the interface of an IDE like the one I use somewhat frequently, PyCharm. You start typing the name of something, hit ⌃ Space and bam you'll get a list of methods as shown below:

Now obviously IDEs do quite a bit more than just give you a list of methods; some apply machine learning to your project to improve the quality of the recommendations, as well as offering some pretty impressive framework-specific recommendations. This is often provided as a compelling reason to adopt IDE-specific workflows vs more traditional text editors. The point being the IDE, being aware of the syntax of that language, attempts to provide common situation-specific completions, be it file paths, methods, dictionary keys or whatever.

The other common place we see autocomplete is in the shell, be it bash or something else. For a long time the standard way in Bash to enable the famous "tab complete" functionality was through the use of the complete builtin. Basically, you pass the name of the command; if the command has a compspec, then a list of possible completions for that word is provided. If there is no default compspec, Bash will attempt to expand any aliases on the command word and then build off of that to find shortcuts.

You can also define your own in a simple bash script you source. The process is simple and reliable:

#!/usr/bin/env bash
complete -W "word1 word2 word3" program_these_words_will_be_used_with

Then you just run source ./name-of-file.bash and you will be able to use tab complete with the custom program program_these_words_will_be_used_with. You can get an extremely well-written write-up of how all this works here. Some programs support this tab-complete behavior and some do not, but typically you need to know the first letter of the argument to use it effectively.

Where does Fig fit in?

If your IDE or plugin for Vim/Emacs provides the auto-completion for your editor, what exactly does Fig do? Basically it is bringing the IDE experience of completion to the terminal emulator. Instead of getting just a basic tab-complete you get something that looks like the image below:

Not only do you get a really nice breakdown of all the arguments but you get a quick explanation of what it does along with what to pass to the command. In short, Fig is trying to replace my normal workflow of: read the man page, attempt the command, open the man page back up, second attempt, etc. Instead you get this really fast autocomplete that does feel a lot like using an IDE.

Plus you don't need to switch your terminal emulator. It worked out of the box with iTerm 2 and the setup was pretty straightforward. We'll get into that next, but if you still aren't sure where the value is, check out this screenshot of the aws cli, a notoriously dense CLI interface:

It's just a faster way to work in the terminal.

Installation and Customization

Installing Fig is pretty easy. I'm typically pretty paranoid about installing stuff that is running in my terminal, but I have to give a lot of credit to the Fig team. Their data usage explanation in the setup of the app is maybe the best I've ever seen.

All of this is explained in the installer. A++ work

There are some parts of the setup I didn't love. I didn't like that the download didn't have a version number. Instead I'm downloading "latest".

I'd love a version number

In terms of running the app, it's a menu bar app on the Mac. However the settings menu is a real delight. You have some interesting concepts I haven't seen before in a tool like this, like a "Plugin Store".

Tons of respect to the Fig team for the ease of turning on and off things like "updates" and telemetry. It's all laid out there, not hidden in any way, and has plain English explanations of what they're doing. I especially like being able to turn off AutoUpdate. Sometimes these tools get so many updates that seeing the prompt for a new update every time I open the terminal starts to make me go insane. I'm looking at you, Oh My Zsh.

You can also selectively enable or disable Fig for different terminals. I like this, especially if you run into an issue and need to troubleshoot where the problem is. Plus Fig provides debug logs to you as a relatively top-level option, not deeply nested.

This is how it looks by default

I appreciate how easy this was to get to. I haven't actually encountered any problems with Fig that required debugging, but especially for a tool that is this "in the critical path" it's nice to know I don't have to go hunting around endlessly looking for "is there something wrong with Fig or did I forget how this application works".

Plugins

So the auto-completes you are loading live here. They're open-source which is nice, but you'll notice that Fig itself doesn't appear to be an open-source app. Their documentation also provides tons of great information on how to write your own completes, which seems to be what the tool is geared towards. You can check out that tutorial here. Big shoutout to the Fig documentation folks, this is really well-written and easy to follow.

Fig also supports writing completes for internal tooling or scripts and sharing them with a team. I assume this is the plan to generate revenue and, if so, I hope it works out. I didn't get the chance to try this, in part because it wasn't clear to me how I join a team or if the team concept is live. But you can read their steps on the documentation site here to get a sense of how it's supposed to work in the future.

I definitely could see the utility of syncing this sort of stuff across a team, especially with documentation baked in. It is just so much easier to write a script or CLI to interact with an internal API or database as opposed to attempting to ship a GUI, even with some of the simplistic tooling that exists today. I'll be keeping an eye on Fig to see when and if they ever expand on this. (If someone from Fig wants to invite me to a team or something, I'd love to see it.)

Application

Like I stated before, Fig doesn't appear to be open-source, although let me know if I missed a repo somewhere. Therefore in order to learn more about how it works we need to dig around the app bundle, but other than learning ReactJS and Swift are involved I didn't get a lot of data there. I decided to poke around the running process a bit.

Going through the debug logs it appears the client is a Rust app:

2022-05-05T07:44:05.388194Z DEBUG send_operation{operation="InitiateAuth" service="cognitoidentityprovider"}: rustls::client::hs: 584: Using ciphersuite Tls12(Tls12CipherSuite { suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, bulk: Aes128Gcm })
2022-05-05T07:44:05.388305Z DEBUG send_operation{operation="InitiateAuth" service="cognitoidentityprovider"}: rustls::client::tls12::server_hello: 82: Server supports tickets
2022-05-05T07:44:05.388474Z DEBUG send_operation{operation="InitiateAuth" service="cognitoidentityprovider"}: rustls::client::tls12: 410: ECDHE curve is ECParameters { curve_type: NamedCurve, named_group: secp256r1 }
2022-05-05T07:44:05.388572Z  WARN send_operation{operation="InitiateAuth" service="cognitoidentityprovider"}: rustls::check: 66: Received a ServerHelloDone handshake message while expecting [CertificateRequest]
2022-05-05T07:44:05.388648Z DEBUG send_operation{operation="InitiateAuth" service="cognitoidentityprovider"}: rustls::client::tls12: 694: Server DNS name is DnsName(DnsName(DnsName("cognito-idp.us-east-1.amazonaws.com")))
2022-05-05T07:44:05.492802Z DEBUG send_operation{operation="InitiateAuth" service="cognitoidentityprovider"}: rustls::client::tls12: 1005: Session saved

Since the configs are Node-based, I suspected that the local server the CLI is interacting with is also Node, but to be fair I'm not sure. If anybody has more detail or documentation I would love to see it.

Overall as an app it seems to be a good citizen. Memory and CPU usage were good, nothing exciting to report there. I couldn't detect any additional latency with my terminal work, often a concern when introducing any sort of auto-complete. I tried to pull timings but due to how you interact with the software (with the interactive drop-downs) I wasn't able to do so. But in general it seemed quite good.

The only criticism I have of the Fig app is that the Settings menu doesn't follow any sort of standard macOS design template. First, you don't usually call it "Settings" on macOS, it's "Preferences". Arranging the options down the side is also not typical; you usually get a horizontal row of options. That isn't a hard and fast rule though. One thing I did notice that was odd: "Settings" seems to be a web app, or at least loading some sort of remote resource. If you close it and reopen it, you can see it loading in the options. Even just clicking around, there was considerable lag in switching between the tabs at times.

This is on a 16-inch MacBook Pro with 32 GB of RAM, so I would expect the same (or worse) on other hardware. But you don't need to spend a lot of time in Settings, so it isn't that big of a deal.

Conclusion

Fig is good as a tool. I have zero complaints as to how it works or its performance and I definitely see the utility in having a new and better way to write auto-completes. I worry there is such a huge gap between what it does now (serve as a way to allow for excellent auto-completes for CLI apps) and a business model that would make money, but presumably they have some sort of vision for how that could work. If Fig becomes popular enough to allow developers inside of a company to ship these kinds of detailed auto-completes to less technical users, it could really be a game changer. However if the business model is just to ship these custom configs for internal apps to technical users, I'm not convinced there is enough utility here.

But for you on a personal laptop? There doesn't seem to be any downside. A good privacy policy, excellent performance and an overall low-stress setup and daily usage are hard to beat. This is just pretty good software that works as advertised.


Don't Write Your Own Kubernetes YAML Generator

Me writing YAML

Modern infrastructure work is, by many measures, better than it has ever been. We live in a time when a lot of the routine daily problems have been automated away by cloud providers, tooling or just improved workflows. However, in place of watching OS upgrades has come the endless tedium of writing configuration files. Between Terraform, Kubernetes, AWS and various frameworks, I feel like the amount of time I spend actually programming gets dangerously low.

I'm definitely not alone in feeling the annoyance around writing configurations, especially for Kubernetes. One tool every business seems to make at some point in their Kubernetes journey is a YAML generator. I've seen these vary from internal web apps to CLI tools, but the idea is that developers who don't have time to learn how to write the k8s YAML for their app can quickly generate the basic template and swap out a few values. You know people hate writing YAML for k8s when different companies on different continents all somehow end up writing variations on the same tool.

These internal tools vary from good to bad, but I'm here to tell you there is a better way. I'm going to tell you what worked well for me based on the size of the company and the complexity of how they were using k8s. The internal tools are typically trying to solve the following common use-cases:

  • I need to quickly generate new YAML for launching a new service
  • I have the same service across different clusters or namespaces for a non-production testing environment and a production environment. I need to insert different environmental variables or various strings into those files.

We're going to try and avoid writing too much internal tooling and keep our process simple.

Why would I do any of this vs use my internal tool?

My experience with the internal tools goes something like this. Someone writes it, ideally on the dev tools team, but often it is just a random person who hates doing the research on what to include in the YAML. The tool starts out by generating the YAML for just deployments, then slowly expands the options to include things like DNS, volumes, etc. Soon the tool has a surprising amount of complexity in it.

Because we've inserted this level of abstraction between the app developer and the platform, it can be difficult to determine what exactly went wrong and what to do about it. The tool is mission critical because you are making configs that go to production, but also not really treated that way as leadership is convinced someone checks the files before they go out.

Spoiler: nobody ever checks the files before they go out. Then the narrative becomes how complicated and hard to manage k8s is.

Scenario 1: You are just starting out with k8s or don't plan on launching new apps a lot

In this case I really recommend against writing internal tools you are going to have to maintain. My approach for stacks like this is pretty much this:

  1. Go to https://k8syaml.com/ to generate the YAML I need per environment
  2. Define separate CD steps to run kubectl against the different testing and production namespaces or clusters with distinct YAML files (see the sketch after this list)
  3. Profit
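
A minimal sketch of what those separate CD steps might look like, assuming GitLab CI, a runner image with kubectl, per-environment YAML files and cluster contexts already present in the runner's kubeconfig (all names here are made up for illustration):

# .gitlab-ci.yml (sketch)
deploy-test:
  stage: deploy
  script:
    - kubectl --context test-cluster apply -f k8s/test/deployment.yaml
  environment: test

deploy-production:
  stage: deploy
  script:
    - kubectl --context production-cluster apply -f k8s/production/deployment.yaml
  environment: production
  when: manual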

One thing I see a lot of with k8s is over-complexity starting out. Just because you can create all this internal tooling on top of the platform doesn't mean you should. If you aren't anticipating making a lot of new apps or are just starting out and need somewhere to make YAML files that don't involve a lot of copy/pasting from the Kubernetes docs, check out k8syaml above.

Assuming you are pulling latest from some container registry and that strings like "testdatabase.url.com" won't change a lot, you'll be able to grow for a long time on this low effort approach.

Here's a basic example of a config generated using k8syaml that we'll use for the rest of the examples.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing
  labels:
    app: web
spec:
  selector:
    matchLabels:
      octopusexport: OctopusExport
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
        octopusexport: OctopusExport
    spec:
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: test-claim
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          env:
            - name: database
              value: testdata.example.com
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - web
                topologyKey: kubernetes.io/hostname

Scenario 2: You have a number of apps and maintaining YAML has started to turn into a hassle

This is typically where people start to over-complicate things. I advise folks to solve this problem as much as they can within the CI/CD process, as opposed to attempting to build tooling that builds configs on the fly based on parameters passed to your tool's CLI. That approach is often error-prone and, even if it works now, it's going to be a pain to maintain down the line.

Option 1: yq

This is typically the route I go. You create some baseline configuration, include the required environmental variables, and then insert the correct values at deploy time from the CD stack by defining a new job for each target and writing the values into the config file with yq. yq, like the far more famous jq, is a YAML processor that is easy to work with and write scripts around.

There are multiple versions floating around but the one I recommend can be found here. You will also probably want to set up shell completion which is documented in their help docs. Here are some basic examples based on the template posted above.

Running yq eval '.spec.template.spec.containers[0].name' example.yaml returns nginx. So in order to access our environmental variable we just need to run yq eval '.spec.template.spec.containers[0].env[0].value' example.yaml. Basically, if yq returns a - in the output, the element is an array and you need to specify which position in the array you want. Forgetting the index is a common first-time user error.

So again using our example above we can define the parameters we want to pass to our YAML in our CD job as environmental variables and then insert them into the file at the time of deployment using something like this:

NAME=awesomedatabase yq -i '.spec.template.spec.containers[0].env[0].value = strenv(NAME)' example.yaml

Ta-dah, our file is updated:

  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      env:
        - name: database
          value: awesomedatabase

If you don't want to manage this via environmental variables, it's equally easy to create a "config" YAML file, load the values from there and then insert them into your Kubernetes configuration. However I STRONGLY recommend adding a check to your CD job which confirms the values are actually set to something. Typically I do this by putting a garbage placeholder value into the YAML and then checking inside the stage to confirm I changed it (see the sketch below). You don't want your app to fail because somehow an environmental variable or value got removed and you didn't catch it.
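
Here is a minimal sketch of that placeholder check, assuming a bash-based CD job; the REPLACE_ME placeholder and the DATABASE_HOST variable are made up for illustration:

# Fail loudly if a required value is missing, then substitute it with yq
NAME="${DATABASE_HOST:?DATABASE_HOST must be set}" \
  yq -i '.spec.template.spec.containers[0].env[0].value = strenv(NAME)' example.yaml

# Refuse to deploy if any placeholder survived the substitution
if grep -q "REPLACE_ME" example.yaml; then
  echo "Placeholder value still present in example.yaml, refusing to deploy" >&2
  exit 1
fi

kubectl apply -f example.yaml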

yq has excellent documentation available here. It walks you through all the common uses and many less common ones.

Why not just use sed? You can, but doing a find-and-replace on a file that might be going to production makes me nervous. I'd prefer to specify the structure of the specific item I am trying to replace, then confirm that some garbage value has been removed from the file before deploying it to production. Don't let me stop you though.

Option 2: Kustomize

Especially now that it is built into kubectl, Kustomize is a robust option for doing what we're doing above with yq. It's a different design though, one favoring safety over direct file manipulation. The structure looks something like this.

Inside of where your YAML lives, you make a kustomization.yaml file. So you have the basics of your app structure defined in YAML and then you also have this custom file. The file will look something like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - service.yaml
  - deployment.yaml
  - test.yaml

Nested under your primary app folder would be an overlays directory containing, in this example, a sandbox overlay. Inside of sandbox you'd have another YAML, overlays/sandbox/kustomization.yaml, and that file would look like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- test.yaml
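
To make the relative paths concrete, here is one way the directory layout implied by the bases path above might look (the directory names are illustrative):

my-app/
├── base/
│   ├── kustomization.yaml
│   ├── service.yaml
│   ├── deployment.yaml
│   └── test.yaml
└── overlays/
    └── sandbox/
        ├── kustomization.yaml
        └── test.yaml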

So we have the base test.yaml and the overlay's test.yaml. In the overlay we can define whatever changes we want merged into the base test.yaml when the overlay is applied. So for example we could have a new set of options:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-test
spec:
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 90

Then when we deploy this specific variant we can keep it simple and just run kubectl apply -k overlays/sandbox. No need to manually merge anything, and it is all carefully sorted and managed.

This is really only scratching the surface of what Kustomize can do and it's well worth checking out. In my experience dev teams don't want to manage more YAML, they want less. However, spending the time to break configuration into these templates is both extremely safe (since we're explicitly declaring the configuration we want merged in) and has minimal tooling overhead, so it's worth exploring.

"I thought you said you use yq?" Yeah, I do, but I should probably use Kustomize. Typically though, when apps start to reach the level of complexity where Kustomize really shines, I start to push for option 3, which I think is the best of all of them.

Option 3: Use the client library

As time goes on and the complexity of what folks want to do in k8s increases, one issue is that the only way anyone knows how to interact with the cluster is through INSERT_CUSTOM_TOOL_NAME. Therefore the tool must be extended to add support for whatever new thing they are trying to do, but this is obviously dangerous and requires testing by the internal dev tools person or team to ensure the behavior is correct.

If you are an organization that lives and dies by Kubernetes, has an internal YAML or config generator that is starting to reach the limit of what it can do, and doesn't want to spend the time breaking all of your environments into Kustomize templates, just use the official client libraries you can find here. Especially for TDD shops, I don't know why this doesn't come up more often.

I've used the Python Kubernetes client library and it is great, with tons of good examples to jumpstart your work. Not only does it let you manage Kubernetes pretty much any way you would like, but frankly it's just a good overview of the Kubernetes APIs in general. I feel like every hour I've spent writing scripts with the library and referencing the docs, I learn something new (which is more than I can say for a lot of the Kubernetes books I've read in my life....). You can check out all those docs here.
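
As a small taste of what that looks like, here is a minimal sketch using the official Python client; the deployment name, namespace and image tag are made up for illustration:

# pip install kubernetes
from kubernetes import client, config

# Load credentials the same way kubectl does (~/.kube/config)
config.load_kube_config()

apps = client.AppsV1Api()

# Read an existing deployment, bump its image tag and patch it back
deployment = apps.read_namespaced_deployment(name="web", namespace="default")
deployment.spec.template.spec.containers[0].image = "nginx:1.23"
apps.patch_namespaced_deployment(name="web", namespace="default", body=deployment)

# Confirm what is now running in the namespace
for d in apps.list_namespaced_deployment(namespace="default").items:
    print(d.metadata.name, d.spec.template.spec.containers[0].image)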

Conclusion

The management of YAML configuration files for Kubernetes often comes up as a barrier to teams correctly managing their own application and its deployment. Writing these configs by hand is error-prone at first, and it is only natural to attempt to leverage the flexibility k8s provides and automate the problem away.

I would encourage folks starting out in k8s who have encountered their first headache writing YAML to resist the urge to generate their own tooling. You don't need one, either to make the base config file or to modify the file to meet the requirements of different environments. If you do need that level of certainty and confidence, write it like you would any other mission critical code.


Quickly find unused CIDR blocks in your AWS VPC

Finding unused CIDR blocks in a VPC can get increasingly time-consuming as you grow. I found a nice little tool which will quickly print the available ranges to STDOUT. You can find that here.

You can install it with: pip install aws-cidr-finder

Run with: aws-cidr-finder --profile aws-profile-name

Output looks like this:

CIDR               IP Count
---------------  ----------
172.31.96.0/19         8192
172.31.128.0/17       32768
Total                 40960

TIL The correct way to load SQL files into a Postgres and Mongo Docker Container

I know I'm probably the last one to find this out, but this has been an issue for a long time for me. I've been constantly building and storing SQL files in the actual container layers, thinking that was the correct way to do it. Thankfully, I was completely wrong.

Here is the correct way to do it.

Assuming you are in a directory with the SQL files you want.

FROM postgres:12.8

COPY *.sql /docker-entrypoint-initdb.d/

On first startup (when the data directory is empty), Postgres will load the SQL files into the database. I've wasted a lot of time doing it the other way and somehow missed this the entire time. Hopefully I save you a bunch of time!
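
A quick usage sketch, assuming the Dockerfile above; the image tag and password are placeholders:

docker build -t seeded-postgres .
docker run --rm -e POSTGRES_PASSWORD=localdev -p 5432:5432 seeded-postgres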

For Mongo

FROM mongo:4.2

COPY dump /docker-entrypoint-initdb.d/
COPY restore.sh /docker-entrypoint-initdb.d/

The restore.sh script is below.


#!/bin/bash

# Init scripts run from this directory on the container's first startup
cd /docker-entrypoint-initdb.d || exit
# Log what was copied into the image, handy for debugging
ls -ahl
# Restore everything that mongodump produced
mongorestore /docker-entrypoint-initdb.d/

Just do a mongodump into a directory called dump and then, when the container first starts, the restore.sh script will load the data for you.
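
Creating that dump directory locally before building the image looks something like this; the connection string is a placeholder for your source database:

mongodump --uri="mongodb://localhost:27017" --out=dump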


AWS Elastic Kubernetes Service (EKS) Review

Spoilers

A bit of history

Over the last 5 years, containers have become the de-facto standard by which code is run. You may agree with this paradigm or think it's a giant waste of resources (I waver between these two opinions on a weekly basis), but this is where we are. AWS, as perhaps the most popular cloud hosting platform, bucked the trend and launched their own container orchestration system called ECS. This was in direct competition with Kubernetes, the open-source project being embraced by organizations of every size.

ECS isn't a bad service, but unlike running EC2 instances inside of a VPC, switching to ECS truly removes any illusion that you will be able to migrate off of AWS. I think the service got a bad reputation for its deep tie-in with AWS, along with some of the high Fargate pricing for the actual CPU units. As the years went on, it became clear to the industry at large that ECS was not going to become the standard way we run containerized applications. This, in conjunction with the death of technologies like Docker Swarm, meant a clear winner started to emerge in k8s.

AWS EKS was introduced in 2017 and became generally available in 2018. When it launched, the response from the community was....tepid at best. It was missing a lot of the features found in competing products and seemed to be far behind what ECS offered at the time. The impression among folks I was talking to was: AWS was forced to launch a Kubernetes service to stay competitive, but they had zero interest in diving into the k8s world and this was to meet the checkbox requirement. The rumor being passed around various infrastructure teams was that AWS itself didn't really want the service to explode in popularity.

This is how EKS stacked up to its competition at launch

Not a super strong offering at launch

I became involved with EKS at a job where they had bet hard on Docker Swarm, it didn't work out, and they were seeking a replacement. Over the years I've used it quite a bit, both as someone whose background is primarily in AWS and as someone who generally enjoys working with Kubernetes. So I've used the service extensively for production loads over years and have seen it change and improve in that time.

EKS Today

So it's been years and you might think to yourself "certainly AWS has fixed all these problems and the service is stronger than ever". You would be...mostly incorrect. EKS remains both a popular service and a surprisingly difficult one to set up correctly. Unlike services like RDS or Lambda, where the quality increases by quite a bit on a yearly basis, EKS has mostly remained a product that feels internally disliked.

This is how I like to imagine how AWS discusses EKS

New customer experience

There exists out-of-the-box tooling for making a fresh EKS cluster that works well, but the tool isn't made or maintained by Amazon, which seems strange. eksctl, which is maintained by Weaveworks here, mostly works as promised. You will get a fully functional EKS cluster through a CLI interface that you can deploy to. However you are still pretty far from an actually functional k8s setup, which is a little bizarre.
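
For reference, spinning up a basic cluster with eksctl is roughly a one-liner; the cluster name, region and node count below are placeholders:

eksctl create cluster --name test-cluster --region eu-west-1 --nodes 3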

Typically the sales pitch for AWS services is that they are less work than doing it yourself. The pitch for EKS is that it's roughly the same amount of work to set up, but less maintenance in the long term. Once running, it keeps running, and AWS managing the control plane means it's difficult to get into a situation in which you cannot fix the cluster, which is very nice. Typically k8s control plane issues are the most serious, and everything else you can mostly resolve with tweaks.

However if you are considering going with EKS, understand you are going to need to spend a lot of time reading before you touch anything. You need to make hard-to-undo architectural decisions early in the setup process and probably want to experiment with test clusters before going with a full-on production option. This is not like RDS, where you can mostly go with the defaults, hit "new database" and continue on with your life.

Where do I go to get the actual setup information?

The best resource I've found is EKS Workshop, which walks you through tutorials of all the actual steps you are going to need to follow. Here is the list of things you would assume come out of the box, but strangely do not:

  • You are going to need to set up autoscaling. There are two options: the classic Autoscaler and the new AWS-made hotness, Karpenter. You want to use Karpenter; it's better and doesn't have the lack of awareness around AZs and EBS storage (basically, autoscaling doesn't work if you have state on your nodes unless you manually configure it to, which is a serious problem). AWS has a blog talking about it.
  • You will probably want to install some sort of DNS service, if for nothing else so that services not running inside of EKS can find your dynamic endpoints. You are gonna wanna start here.
  • You almost certainly want Prometheus and Grafana to see if stuff is working as outlined here.
  • You will want to do a lot of reading about IAM and RBAC to understand specifically how those work together. Here's a link. Also just read this entire thing from end to end: link.
  • Networking is obviously a choice you will need to make. AWS has a network plugin that I recommend, but you'll still need to install it: link.
  • Also you will need a storage driver. EBS is supported by kubernetes in the default configuration but you'll want the official one for all the latest and greatest features. You can get that here.
  • Maybe most importantly you are going to need some sort of ingress and load balancer controller. Enjoy setting that up with the instructions here.
  • OIDC auth and general cluster access is pretty much up to you. You get the IAM auth out of the box which is good enough for most use cases, but you need to understand RBAC and how that interacts with IAM. It's not as complicated as that sentence implies it will be, but it's also not super simple.

That's a lot of setup

I understand that part of the appeal of k8s is that you get to make all sorts of decisions on your own. There is some power in that, but in the same way that RDS became famous not because you could do anything you wanted, but because AWS stopped you from doing dumb stuff that would make your life hard, I'm surprised EKS is not more "out of the box". My assumption would have been that AWS launched the clusters with all their software installed by default and let you switch off those defaults in a config file or CLI option.

There is an ok Terraform module for this, but even that was not plug and play. You can find it here. It is what I use for most things, and it isn't the fault of the module maintainers: EKS touches a lot of systems and, because it is so customizable, it's hard to turn into a simple module. I don't really consider this the problem of the maintainers; they've already applied a lot of creativity to getting around limitations in what they can do through the Go APIs.

Overall Impression - worth it?

I would still say yes after using it for a few years. The AWS SSO integration is great, allowing folks to access the EKS cluster through their SSO access even from the CLI. Running aws sso login --profile name_of_profile and then aws eks --profile name_of_profile --region eu-west-1 update-kubeconfig --name name_of_cluster is all that is required for me to interact with the cluster through normal kubectl commands, which is great. I also appreciate the regular node AMI upgrades, removing the responsibility from me and the team for keeping on top of those.

Storage

There are some issues here. The EBS experience is serviceable but slow; resizing and other operations are heavily rate-limited and can be error-prone. So if your service relies a lot on resizing or changing volumes, EBS is not the right choice inside of EKS and you are probably going to want the EFS approach. I also feel like AWS could provide more feedback about the dangers of relying on EBS: since volumes are tied to AZs, some planning is required on your part to keep all those pieces working, otherwise the pods will simply be undeployable. By default the old autoscaler doesn't resolve this problem, requiring you to set node affinities.

Storage and Kubernetes have a bad history, like a terrible marriage. I understand the container purists who say "containers should never have state, it's against the holy container rules", but I live in the real world where things must sometimes save to disk. I understand we don't like that, but like so many things in life, we must accept the things we cannot change. That being said, it is still a difficult design pattern in EKS that requires planning. You might want to consider some node affinity for applications that do require persistent storage.

Networking

The AWS CNI link is quite good and I love that it allows me to generate IP addresses on the subnets in my VPCs. Not only does that simplify some of the security policy logic and the overall monitoring and deployment story, but mentally it is easier to not add another layer of abstraction to what is already an abstraction. The downside is they are actual network interfaces, so you need to monitor free IP addresses and ENIs.

AWS actually provides a nice tool to keep an eye on this stuff which you can find here. But be aware that if you intend on running a large cluster or if your cluster might scale up to be quite large, you are gonna get throttled with API calls. Remember the maximum number of ENIs you can attach to a node is determined by instance type, so its not an endlessly flexible approach. AWS maintains a list of ENI and IP limits per instance type here.

The attachment process also isn't instant, so you might be waiting a while for all this to come together. The point being you will need to understand the networking pattern of your pods inside of the cluster in a way you don't need to do as much with other CNIs.

Load Balancing and Ingress

This part of the experience is so bad it is kind of incredible. If you install the tool AWS recommends, you are going to get a new ALB for every single Ingress defined unless you group them together. Ditto with every single internal LoadBalancer target: you'll end up with an NLB for each one. Now you can obviously group them, but this process is incredibly frustrating in practice.

What do you mean?

You have a new application and you add this to your YAML:

apiVersion: networking.k8s.io/v1beta1  # pre-1.22 Ingress API, matching the serviceName/servicePort fields below
kind: Ingress
metadata:
  namespace: new-app
  name: new-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: new-app
              servicePort: 80

Every time you do this you are going to end up with another ALB, which isn't free to use. You can group them together as shown here (a rough sketch is below), but my strong preference would have been to have AWS do this. I get they are not a charity, but it seems kind of crazy that this becomes the default unless you and the rest of your organization know how to group stuff together (and what kinds of apps can even be grouped together).
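
Here is roughly what the grouping looks like with the load balancer controller's group annotation; the group name is made up, and you should check the docs linked above for the exact semantics:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: new-app
  name: new-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Ingresses sharing this group name are served by one shared ALB
    alb.ingress.kubernetes.io/group.name: shared-external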

I understand it is in AWS's best interest to set it up like this, but in practice it means you'll see ALB and NLB costs exceed EC2 costs, because by default you'll end up with so many of the damn things barely being used. My advice for someone setting up today would be to go with HAProxy as shown here. It's functionally a better experience and you'll save so much money you can buy the HAProxy enterprise license with the savings.

Conclusion in 2022

If you are going to run Kubernetes, don't manage it yourself unless you can dedicate staff to it. EKS is a very low-friction day to day experience but it frontloads the bad parts. Setup is a pain in the ass and requires a lot of understanding about how these pieces fit together that is very different from more "hands off" AWS products.

Things are looking better though: AWS seems to have realized it makes more sense to support k8s than to fight it. Karpenter as the new autoscaler is much better, I'm optimistic they'll do something about the load balancing situation, and the k8s add-on stack is relatively stable. If folks are interested in buying in today, I think you'll be in ok shape in 5 years. Clearly whatever internal forces at AWS wanted this service to fail have been pushed back by its success.

However if I were a very small company new to AWS I wouldn't touch this with a ten foot pole. It's far too complicated for the benefits, and any concept of "portability" is nonsense bullshit: you'll be so tied into the IAM elements of RBAC that you won't be able to lift and shift anyway, so if portability is your reason for buying in, don't. EKS is for companies already invested in k8s or looking for a deeply tweakable platform inside of AWS when they have outgrown services like Lightsail and Elastic Beanstalk. If that isn't you, don't waste your time.


How to make the Mac better for developers

A what-if scenario

A few years ago I became aware of the existence of the Apple Pro Workflow teams. These teams exist inside Apple to provide feedback to the owners of various professional-geared hardware and software products. This can be everything from advising the Mac Pro team on what kind of expansion a pro workstation might need, all the way to feedback to the Logic Pro and Final Cut teams on ways to make the software fit better into conventional creative workflows.

I think the idea is really clever and wish more companies did something like this. If I worked on accounting software, for instance, it would be amazing to have a few accountants in-house attempting to just use our software to solve problems. Shortening the feedback loop and relying less on customers reporting problems and concerns is a great way of demonstrating to people that you take their time seriously. However it did get me thinking: what would a Developer Pro Workflow team ask for?

The History of the MacBook Pro and me

I've owned MacBook Pros since they launched, with the exception of the disastrous TouchBar Mac with the keyboard that didn't work. While I use Linux every day for my work, I prefer to have macOS as a "base operating system". There are a lot of reasons for that, but I think you can break them down into a few big advantages to me:

  1. The operating system is very polished. "Well wait, maybe you haven't tried Elementary/Manjaro/etc." I have, and they're great for mostly volunteer efforts, but it's very hard to beat the overall sense of quality and "pieces fitting together" that comes with macOS. This is an OS maintained by, I suspect, a large team as software development teams go. Improvements and optimizations are pretty common and there is a robust community around security. It doesn't "just work", but for the most part it does keep ticking along.
  2. The hardware can be fixed around the world. One thing that bothered me about my switch to Linux: how do I deal with repairs and replacements? PCs change models all the time and, assuming I had a functional warranty, how long could I go without a computer? I earn my living on this machine; I can't wait six weeks for a charger or spend a lot of time fighting with some repair shop about what needs to be done to fix my computer. Worst case I can order the exact same model I have now from Apple, which is huge.
  3. The third-party ecosystem is robust, healthy and frankly time-tested. If I need to join a Zoom call, I don't want to think about whether Zoom is going to work. If someone calls me on Slack, I don't want to deal with quitting and opening it four times to get sound AND video to work. When I need to use third-party commercial software it's often non-optional (a customer requires that I use it) and I don't have a ton of time to debug it. On the Mac, commercial apps are just higher quality than on Linux. They get a lot more attention internally from companies and they have a much higher success rate of working.

Suggestions from my fake Workflow team

Now that we've established why I like the platform, what could be done to improve it for developers? How could we take this good starting point and refine it further? These suggestions are not ranked.

  • An official way to run Linux

Microsoft changed the conversation in developer circles with Windows Subsystem for Linux. Suddenly Windows machines, previously considered inferior for developing applications for Linux, mostly had that limitation removed. The combination of that and the rise of containerization has really reduced the appeal of Macs for developers, as frankly you just don't use the base OS UNIX tooling as much as you used to.

If Apple wants to be serious about winning back developer mindshare, they wouldn't need to make their own distro. I think something like Canonical Multipass would work great, still giving us a machine that resembles the cloud computers we're likely going to be deploying to. Making that more of a first-class citizen inside the Apple software stack, like Microsoft did, would be a big win.
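
For what it's worth, Multipass already gets surprisingly close to this today; the VM name below is arbitrary:

multipass launch --name dev-vm   # boots an Ubuntu VM on the Mac
multipass shell dev-vm           # drop into a shell inside it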

Well, why would Apple do this if there are already third-party tools? Part of it is marketing: Apple can just tell a lot more users about something like this. Part of it is that it signals a seriousness about the effort and a commitment on their part to keep it working. I would be hesitant to rely heavily on third-party tooling around the Apple hyperkit VM system just because I know at any point, without warning, Apple could release new hardware that doesn't work with my workflow. If Apple took the tool in-house there would be at least some assurance that it would continue to work.

  • Podman for the Mac

Docker Desktop is no longer a product you should be using. With the changes to their licensing, Docker has decided to take a much more aggressive approach to monetization. It's their choice, obviously, but to me it invalidates using Docker Desktop for even medium-sized companies. License terms of "fewer than 250 employees AND less than $10 million in annual revenue" mean I can accidentally violate the license by having a really good year, or by merging with someone. I don't need that stress in my life and would never accept that kind of aggressive license in business-critical software unless there was absolutely no choice.

Apple could do a lot to help me here by assisting with the porting of podman to the Mac. Container-based workflows aren't going anywhere, and if Apple installed podman as part of a "Container Developer Tools" command it would not only remove a lot of concerns from users about Docker licensing, it would also just be very nice. Again, this is solvable today by running a Linux VM, but it's a clunky solution and not something I think of as very Apple-like. If there is a team sitting around thinking about the button placement in Final Cut Pro, making my life easier when running podman run for the 12th time would be nice as well.
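
For context, this is roughly what the VM-based workaround looks like today with podman's own machine tooling; the image here is just an example:

podman machine init    # creates a small Linux VM for containers to run in
podman machine start
podman run --rm -it docker.io/library/alpine sh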

  • Xcode is the worst IDE I've ever used and needs a total rewrite

I don't know how to put this better. I'm sure the Xcode team is full of nice people and I bet they work hard. It's the worst IDE I've ever used. Every single thing in Xcode is too slow. I'm talking storyboards, suggestions, every single thing drags. A new single view app, just starting from scratch, takes multiple minutes to generate.

You'll see this screen so often, you begin to wonder where "Report" even goes

Basic shit in Xcode doesn't work. Interface Builder is useless, autocomplete is random in terms of how well or poorly it will do that second, even git commands don't work all the time for me inside Xcode. I've never experienced anything like it with free IDEs, so the idea that Apple ships this software for actual people to use is shocking to me.

If you are a software developer who has never tried to make a basic app in Xcode and are curious what people are talking about, give it a try. I knew mobile development was bad, but spending a week working inside Xcode after years of PyCharm and Vim/Tmux, I got shivers imagining paying my mortgage with this horrible tool. Every 40 GB update you must just sweat bullets, worried that this will be the one that stops letting you update your apps.

I also cannot imagine that people inside Apple use this crap. Surely there's some sort of special Xcode build that's better or something, right? I would bet money a lot of internal teams at Apple are using AppCode to do their jobs, leaving Xcode hidden away. But I digress: Xcode is terrible and needs a total rewrite or just call JetBrains and ask how much they would charge to license every single Apple Developer account with AppCode, then move on with your life. Release a smaller standalone utility just for pushing apps to the App Store and move on.

It is embarrassing that Apple asks people to use this thing. It's hostile to developers and it's been too many years and too many billions of dollars made off the back of App developers at this point. Android Studio started out bad and has gotten ok. In that time period Xcode started pretty bad and remains pretty bad. Either fix it or give up on it.

  • Make QuickLook aware of the existence of code

QuickLook, the Mac functionality where you click on a file and hit spacebar to get a quick look at it, is a great tool. You can quickly go through a ton of files and take a glance at all of them, or at least creative professionals can. I want to as well. There's no reason macOS can't understand what YAML is, or how to show me JSON in a non-terrible way. There are third-party tools that do this, but I don't see why Apple couldn't bring it in-house. It's a relatively low commitment and would be just another nice daily improvement to my workflow.

Look how great that is!
  • An Apple package manager

I like Homebrew and I have nothing but deep respect for the folks who maintain it. But it's gone on too long now. The Mac App Store is a failure: the best apps don't live there, and the restrictions and sandboxing, along with Apple's cut, mean nobody is rushing to get listed there. Not all ideas work and it's time to give up on that one.

Instead just give us an actual package manager. It doesn't need to be insanely complicated; heck, we can make it similar to how Apple managed their official podcast feed for years, with a very hands-off approach. Submit your package along with a URL and Apple will give users a simple CLI to install it along with declared dependencies. We don't need to start from scratch on this: you can take the great work from Homebrew and add some additional validation/hosting.

How great would it be if I could write a "setup script" for new developers when I hand them a MacBook Pro that went out and got everything they needed, official Apple apps, third-party stuff, etc? You wouldn't need some giant complicated MDM solution or an internal software portal and you would be able to add some structure and vetting to the whole thing.

Just being able to share a "new laptop setup" bash script with the internet would be huge. We live in a world where more and more corporations don't take backups of work laptops and they use tooling to block employees from maintaining things like Time Machine backups. While it would be optimal to restore my new laptop from my old laptop, sometimes it isn't possible. Or maybe you just want to get rid of the crap built up over years of downloading stuff, opening it once and never touching it again.

To me this is a no-brainer, taking the amazing work from the homebrew folks, bringing them in-house and adding some real financial resources to it, with the goal of making a robust CLI-based package installation process for the Mac. Right now Homebrew developers actively have to work around Apple's restrictions to get their tool to work, a tool tons of people rely on every day. That's just not acceptable. Pay the homebrew maintainers a salary, give it a new Apple name and roll it out at WWDC. Everybody will love you and you can count the downloads as "Mac App Store" downloads during investor calls.

  • A git-aware text editor

I love BBEdit so much I bought a sweatshirt for it and a pin I proudly display on my backpack. That's a lot of passion for a text editor, but BBEdit might be the best software in the world. I know, you are actively getting upset with me as you read that, but hear me out. It just works, it never loses files and it saves you so much time. Search and replace functionality in BBEdit has, multiple times, gotten me out of serious jams.

But for your average developer who isn't completely in love with BBEdit, there are still huge advantages to Apple shipping even a much more stripped-down text editor with Markdown and Git functionality. While you likely have your primary work editor, be it vim for me or VS Code for you, there are times when you need to make slight tweaks to a file or you just aren't working on something that needs the full setup. A stripped-down version of BBEdit, or something similar that would allow folks to write quick docs, small shell scripts or other things you do a thousand times a year, would be great.

A great example of this is Code for Elementary OS:

This doesn't have to be the greatest text editor ever made, but it would be a huge step forward for the developer community to have something that works out of the box. Plus formats like Markdown are becoming more common in non-technical environments.

Why would Apple do this if third-party apps exist?

For the same reason they make iPhoto even though photo editors exist. Your new laptop from Apple should be functional out of the box for a wide range of tasks and adding a text editor that can work on files that lots of Apple professional users interact with on a daily basis underscores how seriously Apple takes that market. Plus maybe it'll form the core of a new Xcode lite, codenamed Xcode functional.

  • An Apple monitor designed for text

This is a bit more of a reach, I know that. But Apple could really use a less-expensive entry into the market and in the same way the Pro Display XDR is designed for the top-end of the video editing community, I would love something like that but designed more for viewing text. It wouldn't have nearly the same level of complexity but it would be nice to be able to get a "full end to end" developer solution from Apple again.

IPS panels would be great for this, as the color accuracy and great viewing angles would be ideal for development, and you won't care about them being limited to 60 Hz. A 27-inch panel is totally sufficient and I'd love to be able to order them at the same time as my laptop for a new developer. There are lots of third-party alternatives, but frankly dealing with the endless word soup that is monitor naming these days is difficult to even begin to track. I love my Dell UltraSharp U2515H but I don't even know how long I can keep buying them or how to find its successor when it gets decommissioned.

Actually I guess it's already been decommissioned, and also no monitors are for sale.

What did I miss?

What would you add to the MacBook Pro to make it a better developer machine? Let me know at @duggan_mathew on twitter.