<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[matduggan.com]]></title><description><![CDATA[Old fashioned YAML farmer]]></description><link>https://matduggan.com/</link><image><url>https://matduggan.com/favicon.png</url><title>matduggan.com</title><link>https://matduggan.com/</link></image><generator>Ghost 5.75</generator><lastBuildDate>Fri, 23 Feb 2024 07:54:53 GMT</lastBuildDate><atom:link href="https://matduggan.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Tech Support Stories Part 2]]></title><description><![CDATA[<p>Since folks seemed to like the first one, I figured I would do another one. These are just interesting stories from my whole time doing IT-type work. Feel free to subscribe via RSS but know that this isn&apos;t the only kind of writing I do. </p><h3 id="getting-started">Getting Started</h3><p>I</p>]]></description><link>https://matduggan.com/tech-support-stories-part-2/</link><guid isPermaLink="false">659d070d84ab300001fe7f86</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 16 Feb 2024 11:00:58 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1551703599-6b3e8379aa8c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI2fHxzZXJ2ZXJ8ZW58MHx8fHwxNzA0Nzg5NzgzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1551703599-6b3e8379aa8c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI2fHxzZXJ2ZXJ8ZW58MHx8fHwxNzA0Nzg5NzgzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Tech Support Stories Part 2"><p>Since folks seemed to like the first one, I figured I would do another one. 
These are just interesting stories from my whole time doing IT-type work. Feel free to subscribe via RSS but know that this isn&apos;t the only kind of writing I do. </p><h3 id="getting-started">Getting Started</h3><p>I grew up in a place that could serve as the laboratory calibration standard for small town USA. It had a large courthouse, a diner, one liquor store infamous for serving underage teens and a library. When I turned 12 my dad asked if I wanted to work for free for a local computer shop. My parents, like the girlfriends, friends and spouse who came later, were worried about the amount of time I was spending in the basement surrounded by half-broken electronics. </p><p>The shop was on the outskirts of town, a steel warehouse structure converted into offices. It was a father and son business, the father running the counter and phones with the son mostly doing on-site visits to businesses. They were deeply religious, members of a religion where church on Sunday was an all-day affair. Despite agreeing to let me work there for free, the son mostly didn&apos;t acknowledge that I was there. He seemed content to let me be and focus on his dream of setting up a wireless ISP with large microwave radio links. </p><p>Bill was put in charge of training me. He was a Vietnam veteran who had lost a leg below the knee in the war. His favorite trick was to rotate the fake leg 180 degrees and then turn his chair around when kids walked in, laughing as they ran away screaming. He had been a radio operator and had spent most of his career working on radio equipment before getting this computer repair job as &quot;something to keep myself busy&quot;. I was put to work fixing Windows 98 and later XP desktop computers. </p><p>This was my introduction to &quot;Troubleshooting Theory&quot;, which Bill had honed over decades of fixing electronics.
It effectively boiled down to:</p><ul><li>Ask users what happened before the failure.</li><li>Develop a theory of the failure and a test to confirm it.</li><li>Record the steps you have taken so you don&apos;t repeat them.</li><li>Check the machine after every step to ensure you didn&apos;t make it worse.</li><li>Software is unreliable; remove it as a factor whenever possible.</li><li>Document the fix in your own notes.</li><li>If you make the problem worse in your testing, walk away for a bit and start from the top. You are probably missing something obvious.</li></ul><p>Nothing here is revolutionary, but the quiet consistency of his approach is still something I use today. He was someone who believed there was nothing exceptional about fixing technology, only that people were too lazy to read the instruction manual. I started with &quot;my PC is slow&quot; tickets, which are basically &quot;every computer that comes in&quot;. Windows 98 had a lot of bizarre behavior that was hard for normal users to understand. This was my first exposure to &quot;the Registry&quot;.</p><p><strong>The Registry</strong></p><p>For those of you blessed to have started your exposure to Windows after the era of hand-editing the registry, it is a hierarchical database that stores the information necessary to configure the system. User profiles, what applications are installed, what icons go with what folders, what hardware is in the system: it was all in this database. The database became the source of truth for everything in the system and also the only way to figure out what the system actually thought the value of something was. </p><p>The longer a normal person used a Windows device, the more cluttered this database became. Combine that with the fragmentation that adding and deleting files created on the spinning rust drives and you would get a constant stream of people attesting that their machine was slower than it was before.
The combination of some Registry Editor work to remove entries and de-fragmentation would buy you some time, but effectively there was a ticking clock hanging over every Windows install before you would need to reinstall. </p><p>In short order I learned troubleshooting Windows was a waste of time. Even if you knew why 98 was doing something, you could rarely fix it. So I would just run assembly lines of re-installs, backing up all the users&apos; files to a file-share and then clicking through the 98 install screen a thousand times. It sounds boring now but I was thrilled by the work, even though copying lots of files off of bogged-down Windows 98 machines was painfully, hilariously slow. </p><p>Since Bill believed in telling people they were (effectively) stupid and had broken their machines through an inability to understand simple instructions, I took over the delicate act of lying to users. A lot of Windows IT work is lying to people on the phone, walking a fine line. You can&apos;t blame the software <em>too much</em> because we still want them to continue buying computers, but at the same time you don&apos;t want to tell the truth, which was almost always &quot;you did something wrong and broke it&quot;. I felt the lying in this case was practically a public service. </p><p>As time went on I graduated to hardware repairs, which was fascinating in that era. Things like &quot;getting video to output onto a monitor&quot; or &quot;getting sound to come out of the sound card&quot; were still <em>minor miracles</em> that often didn&apos;t work. Hardware failures often showed up as blown capacitors. I lived on Bill&apos;s endless supply of cups of noodles, sparkling water bottles and his incredible collection of hot sauce. The man loved hot sauce, buying every random variation he could find. His entire workstation was lined with little bottles of threatening-sounding sauces.</p><p>The hardware repairs quickly became my favorite.
Bill taught me how to solder and I discovered most problems were pretty easy to fix. Capacitors of this time period were, for whatever reason, constantly exploding. Often even expensive components could be fixed right up by replacing a fan, soldering on a new capacitor or applying thermal paste correctly. Customers loved it because they didn&apos;t need to buy totally new components and I loved it because it made me feel like a real expert (even though I wasn&apos;t and this was mostly visual diagnosis of problems). </p><p>When Windows XP started to become a thing, I felt some level of frustration for the first time. XP was so broken when it came out that it effectively put us underwater as a team. After a while I felt like there wasn&apos;t much else for me to do in this space. Windows just broke all the time. I wasn&apos;t really getting better at fixing machines, because there wasn&apos;t anything to fix. As Dell took over the PC desktop market in the area, everything from the video card to the sound card was on the motherboard, meaning all repairs boiled down to &quot;replace the motherboard&quot;. </p><p>That was the end of my Windows career. I sold my PC gear, bought an iBook and from then on I was all-in on Mac. I haven&apos;t used Windows in any serious way since.</p><h3 id="high-school-ccna">High School CCNA</h3><p>While I was in high school, Cisco offered us a unique course. You could attend the Cisco Academy inside high school, where you would study and eventually sit for your CCNA certification. It was a weird era where everyone understood computers were important to how life was going to work in the future but nobody understood what that meant. Cisco was trying to insert the idea that every organization on earth was going to need someone to configure Cisco routers and switches. </p><p>So we went, learned how to write Cisco configurations, recover passwords, reset devices and configure ports.
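</p><p>The configurations themselves were refreshingly terse. As a rough sketch (written from memory in later IOS-style syntax, not a transcript of anything from the class, and not the menu-driven interface the oldest switches actually used), assigning a port to a VLAN looked something like this:</p><pre><code>! Hypothetical access-port setup, IOS-style syntax
interface FastEthernet0/1
 description lab-workstation
 switchport mode access
 switchport access vlan 10
 no shutdown
</code></pre><p>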
Switches at this point were painfully simple, with concepts like VLANs working but not fully baked. These were 10/100 port switches with PoE and had most of the basic features you would expect. It was a lot of fun to have a class where we would go down there and effectively mess with racks of networking equipment to try and get stuff to work. We&apos;d set up DHCP servers and try to get anything to be able to talk to anything else on our local networks.</p><p>We mostly worked with the Cisco Catalyst 1900, a model I would see well past its end of life in offices throughout my career. This class introduced me to a lot of the basic ideas I still use today. Designing network topology, the OSI model, having VLANs span switches, how routing tables work, IPv4 subnetting: all these concepts were introduced to me here and laid a good foundation for all the work I was to do later. More than the knowledge though, I appreciated the community. </p><p>This was the first time I discovered a group of people with the same interests and passions as me. Computer nerd was still very much an insult during this period, when admitting you enjoyed this sort of stuff opened you up to mockery. So you kinda didn&apos;t mention how much time you spent taking apart NESs from garage sales or you&apos;d invite a torrent of abuse. However, here was a place where we could chat, compare personal projects and troubleshoot. I looked forward to the 2 days a week I had the class. </p><p>To be clear, this was not a rich school. I grew up in a small town in Ohio whose primary industries were agriculture and making the Etch-A-Sketch. Our high school was full of asbestos to the extent that we were warned not to touch the ceiling tiles lest the dust get on us. The math teacher organized a prayer circle around the flagpole every morning, coming as close to the Supreme Court ruling on prayer in school as he could without actually violating it.
But somehow they threw this program together for a few years and I ended up benefiting from it. </p><p>The teacher also had contacts with lots of programmers and tech workers, giving me my first real exposure to people in the tech field. They would come into this class and tell us what it was like to be a programmer or a network engineer at the time. It really opened my eyes to what was possible, since people in my life still made fun of the idea of &quot;working with computers&quot;. Silicon Valley to people in the Midwest was long-haired hippies playing hacky sack, not doing actual work. These people looked way too tired to be accused of not doing real work. </p><p>Mostly though, I appreciated our teacher, Mr. Bohnlein. He was an old-school nerd who had been programming since the 70s. He had been a high school teacher for decades and was a very passionate Mac user in his personal life. I remember he was extremely skilled at letting us fail for a long time while still giving us hints towards the correct solution. When it came time to take the test, I sailed through it thanks to his help. The students in the class used to make fun of him for his insistence on buying Apple stock. We all thought the company was going to be dead in the next 5 years. &quot;The iPod is the inferior MP3 player,&quot; I remember stating <em>very confidently</em>.</p><p>He retired comfortably. </p><h3 id="playboy">Playboy</h3><p>One call I would get from time to time was to the Chicago Playboy office. This office was beautiful, high up overlooking the water with a very cool &quot;Mad Men&quot; layout. The writers and editors were up on a second-level catwalk, with little &quot;pod&quot; offices that had glass walls. They dressed great and were just very stylish people. I was surprised to discover so many of the photographers were female, but I mostly didn&apos;t interact with them.
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://s3-rd-prod.chicagobusiness.com/s3fs-public/styles/1024x512/public/CRED03-160519759-AR.jpeg" class="kg-image" alt="Tech Support Stories Part 2" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Playboy was on the top floors</span></figcaption></figure><p>The group I did spend time with was the website team, which unfortunately worked in conventional cubicles facing a giant monitor showing a dashboard of the website&apos;s performance and stats. I remember that the carpet was weirdly shaggy and purple, which made finding screws when I dropped them tricky. Often I had to wave a magnet over the ground and hope it sucked up the screw I had lost. The website crew was great to work with, super nice, but the route to their offices involved going by mountains of Playboy-branded stuff. </p><p>It was just rack after rack of Playboy clothes, lighters, swimsuits, underwear, water bottles. Every single item you could imagine had that rabbit logo on it. You see the logo around a lot, but I&apos;d never seen it all piled up together. Beyond that was a series of photo studios, with tons of lighting and props. I have no idea if they shot the content for the magazine there (I never saw anyone naked) but it seemed like a lot of the merchandise promo photos were shot there. The photo pipeline was a well-oiled machine, with SD cards almost immediately getting backed up to multiple locations. They had editing stations right by the photo shooting areas and the entire staff was zero-nonsense. </p><p>The repairs were pretty standard stuff, mostly iMac and Mac Pro fixes, but the reason it stood out to me was the weird amount of pornography they tried to give me. Magazines, posters, a book once (like an actual hardcover photo book), which was incredibly nice of the IT guy I worked with, but felt like a strange thing to end a computer repair session with.
He would give these to me in a cubicle filled with things made of animals. He had an antler letter opener, wore a ring that looked like it was made out of bone or antler, and displayed a lot of photos of himself holding up the heads of dead animals. </p><p>The IT field and the gun enthusiast community have a lot of overlap. It makes sense: both attract people who enjoy comparing and shopping for very specific equipment with long serial-number-type names, along with weirdly strong brand allegiances. I had no particularly strong stance on hunting guns, having grown up in a rural area where everyone had a shotgun somewhere in the house. As a kid it was common for every visit to a new house to involve being warned to stay away from the gun cabinet. However, hunting stories are a particular kind of boring, often beginning with a journey to somewhere I would never want to go and a lot of details I don&apos;t need. &quot;I was debating between bringing the Tikka T3 and the Remington 700 but you know the recoil on the T3x is crazy&quot;. &quot;Obviously it&apos;s a three-day drive from the airport to the hunting area in nowhere Texas but we passed the time talking about our favorite jerky&quot;. </p><p>I often spent this time trapped in cubicles or offices thinking about these men suddenly forced to fight these animals hand to hand. Are deer hard to knock out with your fists? Presumably they have a lot of brain protection from all the male jousting. I think it would quickly become the most popular show on television, just middle-aged IT managers trying to choke a white-tailed deer as it runs around an arena. We&apos;d sell big steins of beer and turkey legs, like Medieval Times, for spectators. You and a date would crash the giant glasses together and cheer as people run for their lives from a moose. </p><p>Once after a repair session, while waiting for the L, I tripped and some of the stuff in my bag spilled out.
A woman on the platform looked down at the thick stack of porn magazines sliding across it and then at me. I still think about what she must have thought of me. It&apos;s not just that I had a Playboy, but like 6, as if I was one of the secret sexual deviants you read about on the news. &quot;He looked like a normal person but everywhere he went he had a thick stack of porn.&quot; </p><h3 id="shedd-aquarium">Shedd Aquarium</h3><p>One of my favorite jobs in the entire city was the Shedd Aquarium. I would enter around the side by the loading dock, which is also where many of the animals would come in. Almost every morning there would be giant containers of miscellaneous seafood for the animals packed into the loading dock. It was actually really nice to see how high quality it was; I&apos;ve eaten dodgier seafood than what they serve the seals at Shedd. </p><p>It did make me laugh when you&apos;d see the care and attention that went into the food for the animals and then you&apos;d go by the cafeteria and see kids sucking down chicken nuggets and Diet Coke. But it was impossible not to be charmed by the intense focus these people had on the animals. I used to break some of the rules and spy on the penguins, my favorite animals. There is something endlessly amusing about seeing penguins in non-animal places. Try not to smile at penguins walking down a hallway; it&apos;s impossible. </p><p>The back area of the aquarium feels like a secret world, with lots of staircases going behind the tanks. Often I would be in a conversation and look through the exhibit, making eye contact with a guest on the other side of the water. It was a very easy place to get lost, often heading down a series of catwalks and down a few stairs to a random door.
Even after going there a few times, I appreciated an escort to ensure I didn&apos;t head down a random hallway and into an animal area or accidentally emerge in front of a crowd of visitors.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matduggan.com/content/images/2024/01/image-1.png" class="kg-image" alt="Tech Support Stories Part 2" loading="lazy" width="591" height="404"><figcaption><span style="white-space: pre-wrap;">The offices were tucked away up here overlooking the water show</span></figcaption></figure><p>I worked with the graphic design team, which was split between the visuals inside the aquarium and their ad campaigns. It was my introduction to concepts like color calibration and large format printing. The team was great and a lot of fun to work with, very passionate about their work. However, one part of their workflow threw me off a lot at first. Fonts. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://imag.malavida.com/mvimgbig/download-fs/fontexplorer-x-8007-1.jpg" class="kg-image" alt="Tech Support Stories Part 2" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Spent a lot of time figuring out how this software worked</span></figcaption></figure><p>I, like many people, had not spent a lot of time thinking about the existence of fonts. In school I wrote papers exclusively in Times New Roman for some reason that was never explained to me. However, on design teams, buying and licensing fonts for each employee and project was a big deal. The tool that most places used at the time to manage these fonts was FontExplorer X Pro, which had a server and a client side.</p><p>I quickly learned a lot about fonts because debugging font issues is one of the stranger corners of technical troubleshooting. First, some Adobe applications hijacked the system font directories, meaning even if you had installed the right font in the user directory, it might not appear.
Second, fonts themselves were weird. TrueType fonts, the older format and the one a lot of these companies still dealt with, are at their lowest level &quot;a sequence of points on a grid&quot;. You combine curves and straight lines into what we call a glyph. </p><figure class="kg-card kg-image-card"><img src="https://developer.apple.com/fonts/TrueType-Reference-Manual/RM01/fig04.gif" class="kg-image" alt="Tech Support Stories Part 2" loading="lazy" width="298" height="391"></figure><p>Most of the fonts I worked with had started out with the goal of printing on paper. Now many of those were being repurposed for digital assets as well as printing on paper, which introduced no end of weirdness. Here are just a few of the things I tried to help with:</p><ul><li>Print and screen use different color models (CMYK versus RGB)</li><li>DPI for print and PPI for digital aren&apos;t the same</li><li>No screen is the same. The differences between how a digital asset looked on a nice screen vs a cheap screen weren&apos;t trivial, even if we tried to color calibrate both</li></ul><p>In general though I liked working with designers. They often knew exactly what they wanted to get out of my technical assistance, providing me with a ton of examples of what was wrong. Their passion for the graphic design work they were doing inside the aquarium and outside was clear in everyone I spoke with. It&apos;s rare to find a group of people who truly enjoy their jobs. </p><p>My primary task though was managing and backing up the Mac servers onto tape. For those who haven&apos;t used tape backups, it&apos;s a slow way to back up a lot of data that requires a lot of testing of backups (along with a good labeling system so tapes don&apos;t get confused). I quickly came to despise running large-scale tape backups. The rate of errors discovered when attempting to restore backups as a test was horrifying. </p><p>The tape backup was overall a complete fucking disaster.
There were two tape drives from IBM and way too often a tape written by one drive wouldn&apos;t be readable by the other one. The sticker system used to track the tape backups got messed up when I went on vacation and when I came back I couldn&apos;t make heads or tails of what had happened. Every week I stopped by and basically tried anything I could think of to get the tape backups to work correctly. </p><p>Then I did something I&apos;m not proud of. The idea of them calling me and telling me all their hard work was gone was keeping me up at night. So without telling them, I tucked an external 3.5-inch drive with as much storage as I could afford behind the server and started copying everything to both the tapes and the drive. The IT department had vetoed this idea before but I did it without their permission and basically bet the farm that if the server drives failed and the tape didn&apos;t restore, they&apos;d forgive me for making another onsite backup. </p><p>I found out years later that their IT found the drive, assumed they had installed it and switched over to backing up on disks in a Drobo since it was much easier to keep running. </p><h3 id="united-airlines">United Airlines</h3><p>Another frequent customer of mine was United Airlines. They had a suburban office which remains the most strangely designed office I&apos;ve ever been in. There was a pretty normal lobby, with executive offices upstairs, a cafeteria and meeting room on the ground floor and then a nuclear-bunker-style basement. Most of the offices I went to were in the basement along cement corridors so long that they had those little carts with the orange flashing lights zooming down there. It sort of felt like you were at the airport. You could actually ask for a ride on the carts and get one, which I only did once but it was extremely fun. </p><p>The team that asked for technical support the most was the design team for the in-flight magazine, Hemispheres.
They were all-Mac and located in a side room with no windows in this massive basement complex. So you&apos;d go into this broiling hot little room with Mac Pros humming along and zero airflow. The walls were brown, the carpet was brown; it was one of the least pleasant offices I&apos;ve ever been in. Despite working for an in-flight magazine, these people were deadly serious about their work and had frequent contact with Apple about suggested improvements to workflows or tooling. </p><p>It was, to be frank, a baffling job. United Airlines IT didn&apos;t want anything to do with the Macs, so I was there to do extremely normal things. I&apos;m talking about applying software updates, installing Adobe products, things that anyone is capable of doing without any help. I&apos;d often be asked to wait in a conference room for hours until someone remembered I was there and would ask me to do something. Their internet was so slow I would download the Mac updates at home and bring them into the compound on a hard drive. I&apos;ve never seen corporate internet that slow in my life. </p><p>It wasn&apos;t the proudest I&apos;ve ever been of a job but I was absolutely broke. So I would spend hours watching the progress bar tick by on Mac updates and bill them for it. I tried to do anything to fill the time. I wrapped cables in velcro, refilled printers, reorganized ethernet cables. It was too lucrative for me to walk away but it was also the most bored I&apos;ve ever been in my life. I once emptied the recycling for everyone just to feel like I had done something that day, only to piss off the janitor. &quot;What, is this your job?&quot; he shouted as I handed him back the recycling bin. </p><p>The thing I remember the most was how impossibly hard it was to get paid. You would need to go to the end of this hallway, which had an Accounts Payable window slot with an older woman working there.
Then you would physically submit the invoice to her, and she would put it in an old-timey wooden invoice tracking system. I&apos;m talking sometimes months from when I submitted the invoice to when I got paid. I would borderline harass this woman, asking her on the way to the bathroom things like &quot;hey, any chance I could get paid before Christmas? I gotta get the kids presents this year.&quot;</p><p>I didn&apos;t have kids, but I figured it sounded more convincing. I shouldn&apos;t have bothered with the lie; she looked at me with zero expression and resumed reading a magazine. At this point I was so poor that I had a budget of $20 a day, so waiting months to get paid by United put me at serious risk of not being able to pay my rent. In the end I learned a super valuable lesson about working for giant corporations. It&apos;s a great way to get paid as long as time is no object, but it&apos;s a dangerous waiting game to play. </p><h3 id="schools">Schools</h3><p>Colleges hiring me to come out and do specific jobs wasn&apos;t uncommon. Setting up a media lab was probably the most common request, where I would show up, set up a bunch of Mac Pros with fiber and an Xserve somewhere to store the files. This was fine work, but it wasn&apos;t very exciting and typically involved a lot of unboxing stuff and figuring out how to run fiber. The weirdest niche I found myself in was that I somehow became the go-to person for Jewish schools in the Chicago suburbs. </p><p>It started with Ida Crown Jewish Academy in Skokie, IL. I went in to fix a bunch of white MacBooks and iMacs and while I was there I showed the teachers how to automate some of their tasks with Automator.
</p><figure class="kg-card kg-image-card"><img src="https://eshop.macsales.com/blog/wp-content/uploads/2021/09/automator-basics.jpeg" class="kg-image" alt="Tech Support Stories Part 2" loading="lazy" width="1398" height="787"></figure><p>Automator was a drag-and-drop automation tool that let you effectively write scripts to do certain tasks. I showed them how to automate some of the grading process with CSVs and after that I became the person they always called. Soon after, I started getting calls from all the Jewish schools in the area. To be clear, there are not a lot of these schools and they are extremely small.</p><p>On average I&apos;d say there were somewhere around 200-300 students in each school. They also had pretty intense security, probably the most I&apos;d seen at a high school before or since. To be honest I don&apos;t know why they picked me as the Mac IT guy; I don&apos;t have any particular feelings about the Jewish faith. The times when the school staff would ask questions about my faith, they seemed pleased by my complete lack of interest in the topic. As someone who grew up with Christian fundamentalist cults constantly trying to recruit me, I appreciated them dropping it and never mentioning it again.</p><p>I loved these jobs because the schools were well organized, the staff knew everyone and they had a list of specific tasks for me when I showed up. Half my life doing independent IT was spent sitting in waiting rooms until the person who hired me actually came and got me, so this was delightful. I started doing more &quot;teacher automation&quot; work, which was mostly AppleScript or Automator doing the repetitive tasks that these people were staying late to get done. </p><p>It wasn&apos;t until one of the schools offered me a full-time job that I realized my time in IT was coming to a close. The automation and AppleScript writing were so much more fun than anything I was doing related to Active Directory or printers.
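</p><p>That grading automation was nothing exotic. The Automator workflows mostly just called out to small scripts; a reconstruction of the general shape in Python (the file name and column names here are invented for illustration, not the originals):</p><pre><code>import csv
from statistics import mean

def average_scores(path):
    # Roll up each student's scores into a single average,
    # the kind of thing teachers were doing by hand at the time.
    averages = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            scores = [float(v) for k, v in row.items() if k != "student"]
            averages[row["student"]] = round(mean(scores), 1)
    return averages
</code></pre><p>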
With the changes Apple was making, it had started to become clear that they were less and less interested in the creative professional space, which was my bread and butter. This school was super nice, but I knew if I started working there I would stay forever, and it was too boring a job for that. </p><p>That&apos;s when I started transitioning to more traditional Linux sysadmin work. But I still think back fondly on a lot of those trips around Chicago. </p><p>Questions/comments/concerns: <a href="https://c.im/@matdevdug">https://c.im/@matdevdug</a></p>]]></content:encoded></item><item><title><![CDATA[Typewriters and WordPerfect]]></title><description><![CDATA[My love of WordPerfect and discovering the full written history of the product and company. ]]></description><link>https://matduggan.com/typewriters-and-wordperfect/</link><guid isPermaLink="false">65c217b4d071590001d7cf67</guid><category><![CDATA[Work]]></category><category><![CDATA[Personal]]></category><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Tue, 06 Feb 2024 11:31:46 GMT</pubDate><content:encoded><![CDATA[<p>The first and greatest trick of all technology is to make words appear. I will remember forever the feeling of writing my first paper on a typewriter as a kid. The tactile clunk and slight depression of the letters on the page made me feel like I was making something. It transformed my trivial thoughts into something more serious and weighty. I beamed with pride at being the only person to hand in typed documents instead of the cursive of my classmates.</p><p>I learned how to type on the school&apos;s Brother Charger 11 typewriters, which by the time I got there were one step away from being thrown away. They were among the last of their kind, manual portable typewriters from before electric typewriters took over the entire market. Our typing teacher was a nun who had learned how to type on them and insisted they be what we tried first.
Typewriters were heavy things, with a thunk and a clang going along with almost anything you did.</p><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/4dd10f34-18b4-458d-f2ba-db2fe4cc7600/public" class="kg-image" alt loading="lazy"></figure><p>Despite being used to teach kids to type for years, they were effectively the same as the day they had been purchased. The typewriters sat against the wall in their little attached cases, in colors that seemed to exist from the 1950s until the end of the 70s, when we stopped remembering how to mix them. The other kids in my class hated the typewriters since it was easier to just write on loose-leaf paper and hand that in, plus the typing tests involved your hands being covered with a cardboard shell to prevent you from looking.</p><p>I, like all tech people, decided that instead of fixing my terrible handwriting, I would put in 10x as much work to skip the effort. So I typed everything I could, trying to get out of as many cursive class requirements as possible. As I was doing that, my father was bringing me along to various courthouses and law offices in Ohio when I had snow days or days off school and he didn&apos;t want to leave me alone in the house.</p><p>These trips were great, mostly because people forgot I was there. I&apos;d watch violent criminal trials and sit in the secretary areas of courthouses eating cookies that were snuck over to me; the whole thing was wonderful. Multiple times I would be sitting on the bench outside of the holding cell for prisoners before they would appear in court (often for some procedural thing) and they&apos;d give me advice. I remember one guy who was just covered in tattoos advising me that &quot;stealing cars may look fun and it is fun, but don&apos;t crash because the police WILL COME and ask for registration information&quot;. 
Ten-year-old me would nod sagely and store this information for the future.</p><p>It was at one of these courthouses that I was introduced to something mind-blowing. It was a computer running WordPerfect.</p><h3 id="wordperfect">WordPerfect?</h3><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/16d145bc-7d12-4152-b804-8c7229c90c00/public" class="kg-image" alt loading="lazy"></figure><p>For a long time the word processor of choice for professionals was WordPerfect. I got to watch the transformation from machine-gun-sounding electric typewriters to the glow of CRT monitors. While the business world had switched over pretty quickly, it took a bit longer for government organizations to drop the typewriters and switch. I started learning how to use a word processor with WordPerfect 5.1, which came with an instruction manual big enough to stop a bullet.</p><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/ed1a13ac-68fc-451c-a130-7115f14ac000/public" class="kg-image" alt loading="lazy"></figure><p>For those unaware, WordPerfect introduced some patterns that have persisted through time as the best way to do things. It was very reliable software that came with two killer features that put the bullet in the head of typewriters: Move and Cancel. Ctrl-F4 let you grab a sentence and then hit Enter to move it anywhere else. In an era of dangerous menus, F1 would reliably back you out of any setting in WordPerfect and get you back to where you started without causing damage. Add in some basic file navigation with F5 and you had the beginnings of every text processing tool that came after.</p><p>I fell in love with it, eventually getting one of the old courthouse computers in my house to do papers on. 
We set it up on a giant table next to the front door and I would happily bang away at the thing, churning out papers with the correct date inserted via Shift-F5 (no looking it up required). In many ways this shaped my concept of how software should work more than anything else I would encounter.</p><p>WordPerfect was the first software I saw that understood the idea of WYSIWYG. If you changed the margins in the program, the view reflected that change. You weren&apos;t limited to one page of text at a time but could quickly wheel through all the text. It didn&apos;t have &quot;modes&quot; like Vim today, where you need to pick Create, Edit or Insert. In WordPerfect, if you started typing, it would insert text, pushing the other text out of the way instead of overwriting it. It clicked as a natural way for text to work on a screen.</p><p>Thanks to the magic of emulation, I&apos;m still able to run this software (and in fact am typing this on it right now). It turns out it is just as good as I remember, if not better. If you are interested in how, there <a href="https://mendelson.org/wpdos/" rel="noreferrer">is a great write-up here</a>. However, as good as the software is, it turns out there is an amazing history of WordPerfect available for free online.</p><p><em>Almost Perfect</em> is the story of WordPerfect&apos;s rise and fall from the perspective of someone who was there. I loved reading this and am so grateful that the entire text exists online. It contains some absolute gems like:</p><blockquote>One other serious problem was our growing reputation for buggy software. Any complex software program has a number of bugs which evade the testing process. We had ours, and as quickly as we found them, we fixed them. Every couple of months we issued improved software with new release numbers. By the spring of 1983, we had already sent out versions 2.20, 2.21, and 2.23 (2.22 was not good enough to make it out the door). 
Unfortunately, shipping these new versions with new numbers was taken as evidence by the press and by our dealers that we were shipping bad software. Ironically, our reputation was being destroyed because we were efficient at fixing our bugs.</blockquote><blockquote>Our profits were penalized as well. Every time we changed a version number on the outside of the box, dealers wanted to exchange their old software for new. We did not like exchanging their stock, because the costs of remanufacturing the software and shipping it back and forth were steep. This seemed like a waste of money, since the bug fixes were minor and did not affect most users.</blockquote><blockquote>Our solution was not to stop releasing the fixes, but to stop changing the version numbers. We changed the date of the software on the diskettes inside the box, but we left the outside of the box the same, a practice known in the industry as slipstreaming. This was a controversial solution, but our bad reputation disappeared. We learned that perception was more important than reality. 
Our software was no better or worse than it had been before, but in the absence of the new version numbers, it was perceived as being much better.</blockquote><p>You can find the entire thing here: <a href="http://www.wordplace.com/ap/index.shtml">http://www.wordplace.com/ap/index.shtml</a></p>]]></content:encoded></item><item><title><![CDATA[Fixing Macs Door to Door]]></title><description><![CDATA[Fun stories from my time working as an AppleCare Dispatch contractor going door to door in Chicago.]]></description><link>https://matduggan.com/fixing-macs-door-to-door/</link><guid isPermaLink="false">658be3e7cb7cf50001838ee8</guid><category><![CDATA[Apple]]></category><category><![CDATA[Work]]></category><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 05 Jan 2024 11:57:37 GMT</pubDate><media:content url="https://matduggan.com/content/images/2023/12/aerial-view-of-lake-michigan-near-chicago-frozen-during-the-winter-united-states-AAEF08238.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://matduggan.com/content/images/2023/12/aerial-view-of-lake-michigan-near-chicago-frozen-during-the-winter-united-states-AAEF08238.jpg" alt="Fixing Macs Door to Door"><p>When I graduated college in 2008, even our commencement speaker talked about how moving back in with your parents is nothing to be ashamed of. I sat there thinking <em>well that certainly can&apos;t be a good sign</em>. Since I had no aspirations and my girlfriend was moving to Chicago, I figured why not follow her. I had been there a few times and there were no jobs in Michigan. We found a cheap apartment near her law school and I started job hunting. </p><p>After a few weeks applying to every job on Craigslist, I landed an odd job working for an Apple Authorized Repair Center. The store was in a strip mall in the suburbs of Chicago with a Dollar Store and a Chinese buffet next door. 
My primary qualifications were that I was willing to work for not a lot of money and I would buy my own tools. My interview was with a deeply Catholic boss who focused on how I had been an altar boy growing up. Like all of my bosses early on, his primary quality was that he was a bad judge of character. </p><p>I was hired to do something that I haven&apos;t seen anyone else talk about on the Internet and wanted to record it before it was lost to time. It was a weird program, a throwback to the pre-Apple Store days of Apple Mac support, called AppleCare Dispatch. It still appears to exist (<a href="https://www.apple.com/support/products/mac/">https://www.apple.com/support/products/mac/</a>) but I don&apos;t know of any AASPs still dispatching employees. It&apos;s possible that Apple has subcontracted it out to someone else. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2024/01/image.png" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy" width="993" height="45" srcset="https://matduggan.com/content/images/size/w600/2024/01/image.png 600w, https://matduggan.com/content/images/2024/01/image.png 993w" sizes="(min-width: 720px) 720px"></figure><h2 id="applecare-dispatch">AppleCare Dispatch</h2><p>Basically, if you owned a desktop Mac and lived in certain geographic areas, when you contacted AppleCare to get warranty support they could send someone like me out with a part. Normally they&apos;d do this only for customers who were extremely upset or had a store repair go poorly. I&apos;d get a notice that AppleCare was dispatching a part, we&apos;d get it from FedEx and then I&apos;d fill a backpack full of tools and head out to you on foot. </p><p>While we had the same certifications as an Apple Genius, unlike the Genius Bar we weren&apos;t trained on any sort of &quot;customer service&quot; element. All we did was Mac hardware repairs all day, with pretty tight expectations of turnaround. 
So how it worked at the time was basically: if the Apple Store was underwater with in-house repairs, or you asked for at-home service, or the customer was Very Important, we would get sent out. I would head out to you on foot with my CTA card. </p><p>That&apos;s correct, I didn&apos;t own a car. AppleCare didn&apos;t pay a lot for each dispatch and my salary of $25,000 a year plus some for each repair didn&apos;t go far in Chicago even in the Great Recession. So this job involved me basically taking every form of public transportation in Chicago to every corner of the city. I&apos;d show up at your door within a 2-hour time window, take your desktop Mac apart in your house, swap the part, run the diagnostic and then take the old part with me and mail it back to Apple. </p><p>Apple provided a backend web panel which came with a chat client. Your personal Apple ID was linked with the web tool (I think it was called ASX) where you could order parts for repairs as well as open up a chat with the Apple rep there to escalate an issue or ask for additional assistance. The system worked pretty well, with Apple paying reduced rates for each additional part after the first part you ordered. This encouraged us all to get pretty good at specific diagnostics with a minimal number of swaps. </p><p>Our relationship with Apple was bizarre. Very few people at Apple even knew the program existed, seemingly only senior AppleCare support people. We could get audited for repair quality, but I don&apos;t remember that ever happening. Customer satisfaction was extremely important and basically determined the rate we got paid, so we were almost never late to appointments and typically tried to make the experience as nice as possible. Even Apple Store staff seemed baffled by us on the rare occasions we ran into each other. </p><p>There weren&apos;t a lot of us working in Chicago around 2008-2010, maybe 20 in total. 
The community was small and I quickly met most of my peers who worked at other independent retail shops. If our customer satisfaction numbers were high, Apple never really bothered us. They&apos;d provide all the internal PDF repair guides, internal diagnostic tools and that was it. </p><p>It is still surprising that Apple turned us loose onto strangers without anyone from Apple speaking to us or making us watch a video. Our exam was mostly about not ordering too many parts and ensuring we could read the PDF guide on how to fix a Mac. A lot of the program was a clear holdover from the pre-iPod Apple, where resources were scarce and oversight minimal. As Apple Retail grew, the relationship with Apple Authorized Service Providers got more adversarial and controlling. But that&apos;s a story for another time.</p><h3 id="tools-etc">Tools etc </h3><p>For the first two years I used a Manhattan Portage bag, which looked nice but was honestly a mistake. My shoulder ended up pretty hurt after carrying a heavy messenger bag for 6+ hours a day. </p><figure class="kg-card kg-image-card"><img src="https://www.manhattanportage.com/media/catalog/product/cache/5d6092f643b784d9c9e99823eee7dcab/1/7/1714_blk_angle_3_1.jpg" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy"></figure><p>The only screwdrivers I bothered with were Wiha precision screwdrivers. I tried all the brands and Wiha was consistently the best by a mile. Wiha has a list of screwdrivers by Apple model available here: <a href="https://www.wihatools.com/blogs/articles/apple-and-wiha-tools">https://www.wihatools.com/blogs/articles/apple-and-wiha-tools</a></p><p>Macs of this period booted off of FireWire, so that&apos;s what I had with me. FireWire 800 LaCie drives were the standard issue drives in the field. 
</p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/12/data-src-image-c131a17b-079c-4a33-8b40-cf18940b04ff.jpeg" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy" width="296" height="170"></figure><p>You&apos;d partition it to have a series of OS X Installers on there (so you could restore the customer back to what they had before) along with a few bootable installs of OS X. These were where you&apos;d run your diagnostic software. The most commonly used ones were as follows: </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://daisydiskapp.com/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">DaisyDisk, the most popular disk space analyzer</div><div class="kg-bookmark-description">Get a visual breakdown of your disk space in form of an interactive map, reveal the biggest space wasters, and remove them with a simple drag and drop.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://daisydiskapp.com/favicon/favicon.ico" alt="Fixing Macs Door to Door"><span class="kg-bookmark-author">DaisyDisk</span><span class="kg-bookmark-publisher">Software&#xA0;Ambience&#xA0;Corp. 
All&#xA0;rights&#xA0;reserved.</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://daisydiskapp.com/img/card-2023-12-02-14-58-14.jpg" alt="Fixing Macs Door to Door"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.alsoft.com/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">ALSOFT - Makers of DiskWarrior.</div><div class="kg-bookmark-description">DiskWarrior is a utility program designed from the ground up with a totally different approach to preventing and resolving directory damage.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://images.squarespace-cdn.com/content/v1/5ad6666345776e48c58629d5/1565815245086-8A693B3YCQYVD33QWQG1/favicon.ico?format=100w" alt="Fixing Macs Door to Door"><span class="kg-bookmark-author">ALSOFT</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://images.squarespace-cdn.com/content/v1/5ad6666345776e48c58629d5/1566324726836-MEWGZJR0RS86656MB262/Spinning-Wheel-Flat.png" alt="Fixing Macs Door to Door"></div></a></figure><p><a href="https://www.cleverfiles.com/pro.html">https://www.cleverfiles.com/pro.html</a></p><ul><li><a 
href="https://www.temu.com/ul/kuiper/un9.html?subj=goods-un&amp;_bg_fs=1&amp;_p_jump_id=894&amp;_x_vst_scene=adg&amp;goods_id=601099521402742&amp;sku_id=17592237085426&amp;adg_ctx=a-ecc88057~c-707e76bf~f-dc1865d9&amp;_x_ads_sub_channel=shopping&amp;_p_rfs=1&amp;_x_ns_prz_type=3&amp;_x_ns_sku_id=17592237085426&amp;mrk_rec=1&amp;_x_ads_channel=google&amp;_x_gmc_account=5076073866&amp;_x_login_type=Google&amp;_x_ads_account=5695467342&amp;_x_ads_set=20797576552&amp;_x_ads_id=155487865083&amp;_x_ads_creative_id=681708980119&amp;_x_ns_source=g&amp;_x_ns_gclid=EAIaIQobChMIiaKbz6eygwMVmIpQBh0jAwd2EAQYASABEgL4wPD_BwE&amp;_x_ns_placement=&amp;_x_ns_match_type=&amp;_x_ns_ad_position=&amp;_x_ns_product_id=17592237085426&amp;_x_ns_target=&amp;_x_ns_devicemodel=&amp;_x_ns_wbraid=CjgKCAiAs6-sBhANEigAWPOIhhN-ri0r4C3iH_5qtalU-pCY1av0tKJFNNUPXftHpOHMNU57GgKV-g&amp;_x_ns_gbraid=0AAAAAo4mICG1MeRLHQ8GZ9YCr_BliyPV-&amp;_x_ns_targetid=pla-2195477599320&amp;gad_source=1&amp;gclid=EAIaIQobChMIiaKbz6eygwMVmIpQBh0jAwd2EAQYASABEgL4wPD_BwE" rel="noreferrer">Kapton tape to hold cables in place</a></li><li><a href="https://www.amazon.com/Fixinus-Universal-Spudger-Opening-Tablets/dp/B01GNYK0K6?th=1" rel="noreferrer">Black spudgers</a></li><li><a href="https://eshop.macsales.com/shop/newertech-universal-drive-adapter" rel="noreferrer">Universal drive cable</a></li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://electroncomputers.com/images/Mac-Repair-Services-North-York-Toronto.jpg" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Remember back when Macs were something you could fix? Crazy times</span></figcaption></figure><p></p><h3 id="911-truther">9/11 Truther </h3><p>One of my first calls was for a Mac Pro at a private residence. It was a logic board, which means the motherboard of the Mac. 
I wasn&apos;t thrilled, because removing and replacing the Mac Pro logic board was a time-consuming repair that required a lot of light. Instead of a clean workspace with bright lights, I got a guy who would not let me go until I had watched a video about how 9/11 was an inside job. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://img.ricardostatic.ch/images/6cbd9f4b-7be8-4a91-abe4-5c7f90e12b1a/t_1000x750/mac-pro-41-2009-motherboard-logic-board" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy"><figcaption><span style="white-space: pre-wrap;">The logic board in question</span></figcaption></figure><p>&quot;Look, you don&apos;t really think the towers were blown up by planes do you?&quot; he said as he dug around this giant stack of papers to find...presumably some sort of Apple-related document. I had told him that I had everything I needed, but that I had a tight deadline and needed to start right now. &quot;Sure, but I&apos;ll put the video on in the background and you can just listen to it while you work.&quot; So while I took a Mac Pro down to the studs and rebuilt it, this poorly narrated video explained how the CIA was behind 9/11. </p><p>His office, or &quot;command center&quot;, looked like a set from The X-Files. There were folders and scraps of paper everywhere along with photos of buildings, planes, random men wearing sunglasses. I think it was supposed to come across as if he was doing an investigation, but it reminded me more of a neighbor who struggled with hoarding. If there was an organizational system, I couldn&apos;t figure it out. Why was this person so willing to dedicate a large portion of their house to &quot;solving a mystery&quot; the rest of us had long since moved on from?</p><p>The Mac Pro answered all my questions when it booted up. The desktop was full of videos he had edited of 9/11 truth material along with website assets for where he sold these videos. 
This guy wasn&apos;t just a believer; he produced the stuff. When I finished, we had to run a diagnostic test to basically confirm the thing still worked as well as move the serial number onto the logic board. When it cleared diagnostic I took off, thanking him for his time and wishing him a nice day. He looked devastated and asked if I wanted to go grab a drink at the bar and continue our conversation. I declined, jogging to the L. </p><h3 id="the-doctors">The Doctors </h3><p>One of the rich folks I was sent out to lived in one of those short, super expensive buildings on Lake Shore Drive. For those unfamiliar, these shorter buildings facing the water in Chicago are often divided into a few large houses. Basically you pass through an army of doormen and get shown into an elevator that opens into the person&apos;s house. That is, if you could get through the doormen. </p><p>The staff in rich people&apos;s houses want to immediately establish with any contractor coming into the home that they&apos;re superior to you. This happened to me <em>constantly</em>, from personal assistants to doormen, maids, nannies, etc. Doormen in particular liked to make a big deal of demonstrating that they could stop me from going up. This one stuck out because he made me take the freight elevator, letting me know &quot;the main elevator is for people who live here and people who work here&quot;. I muttered about how I was also working there and he rolled his eyes and called me an asshole. </p><p>On another visit to a different building I had a doorman physically threaten &quot;to throw me down&quot; if I tried to get on the elevator. The reason was all contractors had to have insurance registered with the building before they did work there, even though I wasn&apos;t...removing wires from the wall. The owner came down and explained that I wasn&apos;t going to do any work, I was just &quot;a friend visiting&quot;. 
I felt bad for the doorman in that moment, in a dumb hat and ill-fitting jacket with his brittle authority shattered. </p><p>So I took the freight elevator up, getting let into what I would come to see as &quot;the rich person&apos;s template home&quot;. My trips into rich people&apos;s houses were always disappointing, as they are often a collection of nice items sort of strewn around. I was shown by the husband into the library, a beautiful room full of books with what I assumed were prints of paintings in nice frames leaning against the bookshelves. There was an iMac with a dead hard drive, which is an easy repair.</p><p>The process for fixing a hard drive was &quot;boot to DiskWarrior, attempt to fix disk, have it fail, swap the drive&quot;. Even if DiskWarrior fixed the Mac and it booted, I would still swap the drive (why not, it&apos;s what I was paid to do), and that way I didn&apos;t have to have the talk. This is where I would need to basically sit someone down and tell them their data was gone. &quot;What about my taxes?!?&quot; I would shake my head sadly. Thankfully this time the drive was still functional so I could copy the data over with a SATA-to-USB adapter. </p><p>As I reinstalled OS X, I walked around the room and looked at the books. I realized they were old, really old, and the paintings on the floor were not prints. There were sketches by Picasso and other names I had heard in passing while going through art museums. When he came back in, I asked why there was so much art. &quot;Oh sure, my dad&apos;s, his big collection, I&apos;m going to hang it up once we get settled.&quot; He, like his wife, didn&apos;t really acknowledge my presence unless I directed a question right at him. I started to google some of the books, my eyes getting wide. There were millions of dollars in this room gathering dust. He never made eye contact with me during this period and quickly left the room. </p><p>This seems strange but was really common among these clients. 
I truly think many of the C-level type people whose houses I went to didn&apos;t really even see me. I had people turn the lights off in rooms I was in, forget I was there and leave (while arming the security system). For whatever reason I instantly became part of the furniture. When I went to the kitchen for a drink of water, the maid let me know that they had lived there for coming up on 5 years. </p><p>This was surprising to me because the apartment looked like they had moved in two weeks ago. There were still boxes on the floor, a TV sitting on the windowsill and what I would come to understand was a &quot;prop fridge&quot;. It had bottled water, a single bottle of expensive champagne, juices, fruit and often some sort of energy drink. No leftovers; everything got swapped out and replaced before it went bad. &quot;They&apos;re always at work,&quot; she explained, grabbing her bag and offering to let me out before she locked up. They were both specialist doctors and this was apparently where they recharged their batteries. </p><p>After the first AppleCare Dispatch visit they would call me back for years to fix random problems. I don&apos;t think either of them ever learned my name. </p><h3 id="harpo-studio">HARPO Studio</h3><p>I was once called to fix a &quot;high profile&quot; computer at HARPO Studios in Chicago. This was where they filmed the Oprah Winfrey Show, which I obviously knew existed but had never watched. Often these celebrity calls went to me, likely because I didn&apos;t care and didn&apos;t particularly want them. I was directed to park across the street and told that even though the signs said &quot;no parking&quot; they had a &quot;deal with the city&quot;. </p><p>This repair was suspicious and I got the sense that someone had name-dropped Oprah to maybe get it done. AppleCare rarely sent me multiple parts unless the case was unusual or the person had gotten escalated through the system. 
If you emailed Steve Jobs back in the day and his staff approved a repair, it attached a special code to the serial number that allowed us to order effectively unlimited items against the serial number. However, with the rare &quot;celebrity&quot; case, we would often find AppleCare did the same thing, throwing parts at us to make the problem go away. </p><p>The back area of HARPO was busy, with what seemed like an almost exclusively female staff. &quot;Look, it&apos;s important that if you see Oprah, you act normally. Please don&apos;t ask her for an autograph or a photo.&quot; I nodded, only somewhat paying attention because never in a million years would I do that. This office felt like the set of The West Wing, with people constantly walking and talking along with a lot of hand motions. My guide led me to a back office with a desk on one side and a long table full of papers and folders. The woman told me to &quot;fix the iMac&quot; and left the room. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.joshpabstphoto.com/wp-content/uploads/2015/01/harpo-studio-chicago-office-sterling-bay-joshpabstphoto-9-1920x1280.jpg" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Not the exact office but you get the gist</span></figcaption></figure><p>I swapped the iMac hard drive and screen, along with the memory and wifi, then dived under the desk the <em>second</em> Oprah walked in. The woman and Oprah had a conversation about scheduling someone at a farm, or how shooting at a farm was going, and then she was gone. When I popped my head up, the woman looked at me and said, &quot;can you believe you got to meet Oprah?&quot; She had a big smile, like she had given me the chance of a lifetime. </p><p>The bummer about the aluminum iMac repairs is you have to take the entire screen off to get anything done. 
This meant I couldn&apos;t just run away and hide my shame after effectively diving under a table to escape Oprah, a woman who I am certain couldn&apos;t have cared less what came out of my mouth. I could have said &quot;I love to eat cheese sometimes&quot; and she would have nodded and left the room.  </p><figure class="kg-card kg-image-card"><img src="https://i.ytimg.com/vi/QZdWvQoSCQc/hqdefault.jpg" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy"></figure><p>So you have to pop the glass off (with suction cups, not your hands like a psycho as shown above), then unscrew and remove the LCD, and then finally you get access to the actual components. Any dust that got on the LCD would stick and annoy people, so you had to try and keep it as clean as possible while moving quickly to get the swap done. The nightmare was breaking the thick cables that connected the screen to the logic board, something I did once, which required a late-night trip to an electronics repair guy who got me sorted out with a soldering iron. </p><p>The back-alley electronics repair guy is the dark secret of the Dispatch world. If you messed up a part, pulled a cable or broke a connector, Apple could ask you to pay for that part. Apple&apos;s list prices for parts were hilariously inflated. Logic boards were like $700-$900, and each stick of RAM was like $90 for ones you could buy on Crucial for $25. This could destroy your pay for that month, so you&apos;d end up going to Al, who ran basically a &quot;solder Apple stuff back together&quot; business in his garage. He wore overalls and talked a lot about old airplanes, which you&apos;d need to endure in order to get the part fixed. Then I&apos;d try to get the part swapped and just pray that the thing would turn on long enough for me to get off the property. Ironically his parts often lasted longer than the official Apple refurbished parts. 
</p><p>After deliberately hiding under the desk, I lied for years afterwards, telling people I didn&apos;t have time to say hi. In reality my mind completely blanked when she walked in. I stayed under the desk because I was nervous that everyone was going to look at me to be like &quot;I loved when you did X&quot; and my brain couldn&apos;t form a single memory of anything Oprah had ever done. I remembered Tom Cruise jumping on a couch but I couldn&apos;t recall if this was a good thing or a bad thing when it happened. </p><p>Oh, and the car that I parked in the area the city didn&apos;t enforce? It had a parking ticket, which was great because I had borrowed the car. Most of the payment from my brush with celebrity went to the ticket and a tank of gas. </p><h3 id="brownstone-moms">Brownstone Moms</h3><p>One of the most common calls I got was to rich people&apos;s houses in Lincoln Park, Streeterville, Old Town and a few other wealthy neighborhoods. They often lived in distinctive brownstone houses with small yards, with a &quot;public&quot; entrance in the front, a family entrance on the side and then a staff entrance through the back or in the basement. </p><p>These houses were owned by some of the richest people in Chicago. The houses themselves were beautiful, but they didn&apos;t operate like normal houses. Mostly they were run by the wives, who often had their own personal assistants. It was an endless sea of contractors coming in and out, coordinated by the mom and sometimes the nanny. </p><p>Once I was there, they&apos;d pay me to do whatever random technical tasks existed outside of the initial repair. I typically didn&apos;t mind since I was pretty fast at the initial repair and the other stuff was easy, mostly setting up printers or routers. The sense I got was if the household made the AppleCare folks&apos; lives a living hell, I would get sent out to make the problem disappear.  
These people often had extremely high expectations of customer service, which could be difficult at times.</p><p>There was a whole ecosystem of these small companies I started to run into more and more. They seemed to specialize in catering to rich people, providing tutoring services, in-house chefs, drivers, security and every other service under the sun. One of the AV installation companies and I worked together off the books after-hours to set up Apple TVs and Mac Minis as the digital media hubs in a lot of these houses. They&apos;d pay me to set up 200 iPods as party favors or wire an iPad into every room.</p><p>Often I&apos;d show up only to tell them their hard drive was dead and everything was gone. This was just how things worked before iCloud Photos: nobody kept backups and everything was constantly lost forever. Here they would often threaten or plead with me, sometimes insinuating they &quot;knew people&quot; at Apple or could get me fired. <em>Joke&apos;s on you, people, I don&apos;t even know people at Apple</em> was often what ran through my head. Threats quickly lost their power when you realized nobody at any point had asked your name or any information about yourself. It&apos;s hard to threaten an anonymous person. </p><p>The golden rule that <em>every single one</em> of these assistants warned me about was not to bother the husband when he got home. Typically these CEO-types would come in, say a few words to their kids and then retreat to their own area of the house. These were often TV rooms or home theaters, elaborate set pieces with $100,000+ of AV equipment, treated like the secret lair of the house. To be clear, none of these men ever cared at all that I was there. They didn&apos;t seem to care that anybody was there, often barely acknowledging their wives even though an <em>immense</em> amount of work had gone into preparing for his return. 
</p><p>As smartphones became more of a thing, the number of &quot;please spy on my teen&quot; requests exploded. These varied from installing what was basically spyware on their kids&apos; laptops to attempting to install early MDM software on the kids&apos; iPhones. I was always uncomfortable with these jobs, in large part because the teens were extremely mean to me. One girl waited until her mom left the room to casually turn to me and say &quot;I will pay you $500 to lie to my mom and say you set this up&quot;. </p><p>I was offended that this 15-year-old thought she could buy me, in large part because she was correct. I took the $500 and told the mom the tracking software was all set up. She nodded and told me she would check that it was working and &quot;call me back if it wasn&apos;t&quot;. I knew she was never going to check, so that part didn&apos;t spook me. I just hoped the kid didn&apos;t get kidnapped or something and I would end up on the evening news. But I was also a little short on rent that month, so what can you do. </p><p><strong>Tip for anyone reading this looking to get into this rich person Mac business</strong></p><p>So the short answer is Time Machine is how you get paid month after month. Nobody checks Time Machine or pays attention to the &quot;days since&quot; notification. I wrote an AppleScript back in the day to alert you to Time Machine failures through email, but there is an app now that does the same thing: <a href="https://tmnotifier.com/">https://tmnotifier.com/</a></p><p>Basically when the backups fail, you schedule a visit and fix the problem. When they start to run out of space, you buy a new bigger drive. Then you back up the Time Machine drive to some sort of encrypted external location so when the drive (inevitably) gets stolen you can restore the files. The reason they keep paying you is you&apos;ll get a call at some point to come to the house at a weird hour and recover a PDF or a school assignment. 
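</p><p>The staleness check at the heart of that monitoring can be sketched in a few lines. This is a hypothetical sketch, not the old AppleScript: it assumes Time Machine&apos;s <code>YYYY-MM-DD-HHMMSS</code> backup folder naming and leaves the email alert out. On a real Mac you&apos;d feed it the path printed by <code>tmutil latestbackup</code>.</p>

```python
# Hypothetical sketch: decide whether the newest Time Machine backup is stale.
# Assumes the backup folder uses Time Machine's YYYY-MM-DD-HHMMSS naming;
# on a real Mac the path would come from `tmutil latestbackup`.
from datetime import datetime, timedelta
from typing import Optional
import os


def backup_is_stale(latest_backup_path: str, max_age_days: int = 3,
                    now: Optional[datetime] = None) -> bool:
    """Return True if the newest backup is older than max_age_days."""
    name = os.path.basename(latest_backup_path.rstrip("/"))
    backup_time = datetime.strptime(name, "%Y-%m-%d-%H%M%S")
    return ((now or datetime.now()) - backup_time) > timedelta(days=max_age_days)
```

<p>Anything that comes back stale turns into a scheduled visit.</p><p>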
That one call is how you get permanent standing appointments. </p><p>Nobody will ever ask you how it works, so just find the system you like best and do that. I preferred local Time Machine over something like remote backup only because you&apos;ll be sitting there until the entire restore is done and nothing beats local. Executives will often fill the &quot;family computer&quot; with secret corporate documents they need printed off, so be careful with these backups. Encrypt, encrypt, encrypt then encrypt again. Don&apos;t bother explaining how the encryption works, just design the system with the assumption that someone will at some point put a CSV with your social security number onto this fun family iMac covered in dinosaur stickers. </p><h3 id="robbed-for-broken-parts">Robbed for Broken Parts</h3><p>A common customer for repairs would be schools, who would work with Apple to open a case for 100 laptops or 20 iMacs at a time. I liked these &quot;mass repair&quot; days, typically because the IT department for Chicago Public Schools would set us up with a nice clean area to work and I could just listen to a podcast and swap hard drives or replace top cases. However, this mass repair was in one of Chicago&apos;s rougher neighborhoods. </p><p>Personal safety was a common topic among the dispatch folks when we would get together for a pizza and a beer. Everyone had bizarre stories but I was the only one not working out of my car. The general sense in the community was that it was not a question of &quot;if&quot; but &quot;when&quot; you would be robbed. My rule was that if I started to get nervous I&apos;d &quot;call back to the office&quot; to check if a part had arrived. Often this would calm people down, reminding them that people knew where I was. Everyone had a story of getting followed back to their car and I had been followed back to the train once or twice. </p><p>On this trip, though, everything that could go wrong went wrong. 
My phone, the HTC Dream running v1 of Android, had decided to effectively &quot;stop phoning&quot;. It was still on but decided we were not, in fact, in the middle of a large city. I was instead in a remote forest miles away from a cell tower. I got to the school later than I wanted, showing up at noon. When I tried to push it and come back the next day, the staff let me know the janitors knew I would be there and would let me out. </p><p>So after replacing a ton of various Mac parts I walked out with boxes of broken parts in my bag and a bunch in an iMac box that someone had given me. My plan was to head back home, get them checked in and labeled and then drop them off at a FedEx store. When I got out and realized it was dark, I started to accept something bad was likely about to happen to me. Live in a city for any amount of time and you&apos;ll start to develop a subconscious odds calculator. The closing line on this wasn&apos;t looking great. </p><p>Sure enough, while waiting for the bus, I was approached by a man who made it clear he wanted the boxes. He didn&apos;t have a weapon but started to go on about &quot;kicking the shit&quot; out of me and I figured that was good enough for me. He clearly thought there was an iMac in the box and I didn&apos;t want to be there when he realized that wasn&apos;t true. I handed over my big pile of broken parts and sprinted to the bus that was just pulling up, begging the driver to keep driving. As a CTA bus driver, he had of course witnessed every possible horror a human can inflict on another human and was entirely unfazed by my outburst. &quot;Sit down or get off the bus&quot;. </p><p>When I got home I opened a chat with an Apple rep who seemed unsure of what to do. I asked if they wanted me to go to the police and the rep said if I wanted to I could, but after &quot;talking to some people on this side&quot; they would just mark the parts as lost in transit and it wouldn&apos;t knock my metrics. 
I thanked them and didn&apos;t think much more of the incident until weeks later when someone from Apple mailed me a small Apple notebook. </p><p>They never directly addressed the incident (truly the notebook might be unrelated) but I always thought the timing was funny. Get robbed, get a notebook. I still have the notebook. </p><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/1b5c908d-6f92-4b7d-5c49-ad5dbf4db500/public" class="kg-image" alt="Fixing Macs Door to Door" loading="lazy"></figure><p>Questions/comments/concerns? Find me on Mastodon: <a href="https://c.im/@matdevdug">https://c.im/@matdevdug</a></p>]]></content:encoded></item><item><title><![CDATA[Tech and the Twilight of Democracy]]></title><description><![CDATA[<p>We live in dangerous times. The average level of peacefulness around the world has dropped for the 9th straight year. The impact of violence on the global economy increased by $1 trillion to a record $17.5 trillion. This is equivalent to 13% of global GDP, approximately $2,200 per</p>]]></description><link>https://matduggan.com/tech-and-the-end-of-democracy/</link><guid isPermaLink="false">65855642cb7cf50001838d49</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 22 Dec 2023 13:35:20 GMT</pubDate><content:encoded><![CDATA[<p>We live in dangerous times. The average level of peacefulness around the world has dropped for the 9th straight year. The impact of violence on the global economy increased by $1 trillion to a record $17.5 trillion. This is equivalent to 13% of global GDP, approximately $2,200 per person. The graphs seem to be trending in the wrong direction by virtually any metric you can imagine. 
[<a href="https://www.visionofhumanity.org/conflict-deaths-at-highest-level-this-century-causing-world-peacefulness-to-decline/" rel="noreferrer">Source</a>]</p><figure class="kg-card kg-image-card"><img src="https://www.visionofhumanity.org/wp-content/uploads/2023/06/voh-2023-blog-gpi-2023-e.jpg" class="kg-image" alt loading="lazy"></figure><p>It can be difficult to say which countries are the &quot;most powerful&quot;, but I think by most metrics the following countries would certainly make the list. These leaders of the world paint a dire picture for the future of democratic rule. In no particular order:</p><ul><li>United States: currently ranked as a Deficient Democracy and a country that is facing the very real possibility of the upcoming presidential election being its last. The current president, despite low crime and good economic numbers, is facing a close race and a hard reelection. His challenger, Donald Trump, has promised the following:</li></ul><blockquote>
<p>&#x201C;We pledge to you that we will root out the Communists, Marxists, fascists, and the radical-left thugs that live like vermin within the confines of our country, that lie and steal and cheat on elections,&#x201D; Donald Trump said this past November, in a campaign speech that was ostensibly honoring Veterans Day. &#x201C;The real threat is not from the radical right; the real threat is from the radical left &#x2026; The threat from outside forces is far less sinister, dangerous, and grave than the threat from within. Our threat is from within.&#x201D;</p>
</blockquote>
<p>Given his strong polling, there is no reason to think the US will not fall from Deficient Democracy to Hybrid Regime or even further. </p><ul><li>China: in the face of increased economic opportunity and growth, there was a hope that China would become more open. If anything, China has trended in the opposite direction. China is considered to be amongst the least democratic countries in the world. </li></ul><blockquote>Over the past 10 years, the Communist Party has moved from collective leadership with the general secretary, considered first among equals on the elite Politburo Standing Committee &#x2014; a practice established in the &#x201C;reform and opening&#x201D; era after the Cultural Revolution &#x2014; to Xi&#x2019;s supreme leadership, analysts say.</blockquote><blockquote>In 2018, Chinese lawmakers amended the constitution abolishing presidential term limits - paving the way for Xi to rule for life. In a further move to assert his authority, the party pledged to uphold the &quot;Two Establishes,&#x201D; party-speak for loyalty to him, in a historical resolution passed in 2021. </blockquote><p>[<a href="https://www.voanews.com/a/xi-s-consolidation-of-power-equals-suppression-in-china-conflicts-abroad-analysts-say/7002873.html" rel="noreferrer">Source</a>]</p><ul><li>EU: Currently the EU stands alone in keeping the development of democracy alive. However, even here member states have begun to pass more extreme anti-immigration legislation as an attempt to appease right-leaning voters and keep the more extreme political parties out of office. France recently passed a hard-line anti-immigrant bill designed specifically to keep Le Pen supporters happy [<a href="https://www.theguardian.com/world/2023/dec/20/france-immigration-bill-passed-controversy-emmanuel-macron-marine-le-pen" rel="noreferrer">source</a>] and in Germany the desire for a dictator has continued to grow. 
Currently, across all age groups, between 5-7% of those surveyed support a dictatorship with a single strong party and leader for Germany. This result is double the long-term average.&#xA0;[<a href="https://www.dw.com/en/germany-right-wing-extremism-and-hostility-toward-democracy-growing/a-66881144" rel="noreferrer">source</a>]</li><li>India: Having been recently downgraded to a Hybrid Regime, India is currently in the process of an aggressive consolidation of power by the executive with the assistance of both old and new laws. </li></ul><blockquote>The Modi government has increasingly employed two kinds of laws to silence its critics&#x2014;colonial-era sedition laws and the Unlawful Activities Prevention Act (UAPA). Authorities have regularly booked individuals under sedition laws for dissent in the form of posters, social-media posts, slogans, personal communications, and in one case, posting celebratory messages for a Pakistani cricket win. Sedition cases rose by 28 percent<strong><em>&#xA0;</em></strong>between 2010 and 2021. Of the sedition cases filed against citizens for criticizing the government, 96 percent were filed after Modi came to power in 2014. One report estimates that over the course of just one year, ten-thousand tribal activists in a single district were charged with sedition for invoking their land rights.</blockquote><blockquote>The Unlawful Activities Prevention Act was amended in 2019 to allow the government to designate individuals as terrorists without a specific link to a terrorist organization. There is no mechanism of judicial redress to challenge this categorization. 
The law now specifies that it can be used to target individuals committing any act &#x201C;likely to threaten&#x201D; or &#x201C;likely to strike terror in people.&#x201D; Between 2015 and 2019, there was a 72 percent increase in arrests under the UAPA, with 98 percent of those arrested remaining in jail without bail.</blockquote><p>[<a href="https://www.journalofdemocracy.org/articles/why-indias-democracy-is-dying/" rel="noreferrer">Source</a>]</p><ul><li>Russia: There has been a long-standing debate over whether Russia was a full dictatorship or some hybrid model. The invasion of Ukraine seems to have put all those questions to bed. </li></ul><blockquote>On 8 December, Andrey Klishas, the Head of the Federation Council Committee on Constitutional Legislation, made a point in an <a href="https://www.vedomosti.ru/politics/articles/2022/12/06/954040-dlya-vtoroi-volni-mobilizatsii">interview with <em>Vedomosti</em></a> which was already tacitly understood by Russia-watchers, but still shocking to hear.&#xA0;&#xA0; In answer to a question on why the partial mobilisation decree had not been repealed now the process was completed, he explained to the Kremlin-friendly correspondent there was no need for legislation: &#x2018;There is no greater power than the President&#x2019;s words.&#x2019; So there it is &#x2013; Russia is by definition a dictatorship. For the unawares reader, <em>Vedomosti</em> was one of Russia&#x2019;s leading, intelligent and independent newspapers; it fell afoul of the authorities and today is a government propaganda channel.</blockquote><p>[<a href="https://wavellroom.com/2023/01/25/its-official-russia-is-a-dictatorship/" rel="noreferrer">Source</a>]</p><p>We have no reason at this point to think this trend will slow or reverse itself. It appears that, despite the constant refrain of my childhood that progression towards democracy was an inevitable result of free and open trade, this was another neoliberal fantasy. 
We live in a world where the most powerful countries are actively trending away from what we would consider to be core democratic values and towards more xenophobic and authoritarian governments. </p><p>However, I&apos;m not here to lecture, only to lay the foundation. In the face of this data, I thought it could be interesting to discuss some what-ifs, trying to imagine what the future of technology will look like in the face of this strong global anti-democratic trend. What technologies will we all be asked to make and what concessions will be forced upon us? </p><p><strong>Disclaimer</strong>: I am not an expert on foreign policy, or really anything. Approach these topics not as absolute truths but as discussion points. I will attempt to provide citations and factual basis for my guesses, but as always feel free to disagree. Don&apos;t send me threatening messages as sometimes happens when I write things like this. I don&apos;t care about you and don&apos;t read them. </p><p>So let&apos;s make some predictions. What kind of world are we heading into? What are the major trends and things to look out for? </p><h3 id="the-internet-stops-being-global">The Internet Stops Being Global</h3><p>The Internet has always been a fractured thing. Far from the dream of perfectly equal traffic being carried across the fastest route between user and service, the real internet is a complicated series of arrangements between the tiers of ISPs and the services that ride those rails. First, what is the internet?</p><p>The thing we call the Internet is a big collection of separate, linked systems, each of which is managed as a single domain called an Autonomous System (AS). There are over sixty thousand AS numbers (ASNs) assigned to a wide variety of companies, educational, non-profit and government entities. The AS networks that form the primary transport for the Internet are independently controlled by Internet Service Providers (ISPs). 
The BGP protocol binds these entities together. </p><p>When we talk about ISPs, we&apos;re talking about 3 tiers. Tier 1 providers are defined by not paying anyone for transit: they can deliver to the whole internet, peer with similar-sized networks on multiple continents and have direct access to undersea fiber cables. Tier 2 providers buy transit from Tier 1 networks and also peer with other Tier 2s. Tier 3 providers hook up end users and businesses and buy their connectivity from Tier 2s. </p><figure class="kg-card kg-image-card"><img src="https://notes.networklessons.com/attachments/excalidraw/internet-tier1-tier2-tier3-isps.excalidraw.svg" class="kg-image" alt="internet-tier1-tier2-tier3-isps.excalidraw" loading="lazy"></figure><p>The internet is not as reliable as some people pretend it is. Instead it&apos;s a very fragile entity well within the governmental scope of the countries where the pieces reside. As governments become less open, their Internet becomes less open. India regularly shuts down the Internet to stop dissent or to control protests or any civil unrest (<a href="https://www.theguardian.com/world/2023/sep/25/a-tool-of-political-control-how-india-became-the-world-leader-in-internet-blackouts" rel="noreferrer">source</a>) and I would expect that to grow into even more extreme regulations as time goes on. </p><blockquote>The &quot;IT Rules 2011&quot; were adopted in April 2011 as a supplement to the 2000 Information Technology Act (ITA). The new rules require Internet companies to remove within 36 hours of being notified by the authorities any content that is deemed objectionable, particularly if its nature is &quot;defamatory,&quot; &quot;hateful&quot;, &quot;harmful to minors&quot;, or &quot;infringes copyright&quot;. 
Cybercaf&#xE9; owners are required to photograph their customers, follow instructions on how their caf&#xE9;s should be set up so that all computer screens are in plain sight, keep copies of client IDs and their browsing histories for one year, and forward this data to the government each month.</blockquote><p>China has effectively made its own Internet and Russia is currently in the process of doing the same thing. The US has its infamous Section 702. </p><blockquote><a href="https://www.aclu.org/issues/national-security/privacy-and-surveillance/warrantless-surveillance-under-section-702-fisa">Section 702 of the Foreign Intelligence Surveillance Act</a> permits the U.S. government to engage in mass, warrantless surveillance of Americans&#x2019; international communications, including phone calls, texts, emails, social media messages, and web browsing. The government claims to be pursuing vaguely defined foreign intelligence &#x201C;targets,&#x201D; but its targets need not be spies, terrorists, or criminals. They can be virtually any foreigner abroad: journalists, academic researchers, scientists, or businesspeople. 
</blockquote><p>[<a href="https://www.aclu.org/issues/national-security/warrantless-surveillance-under-section-702-fisa?redirect=issues/national-security/privacy-and-surveillance/warrantless-surveillance-under-section-702-fisa" rel="noreferrer">source</a>]</p><p>As time progresses I would expect restrictions on Internet traffic to increase, not decrease. Much is made of the sanctity of encrypted messages between individuals, but this matters less than people think: even if the message body is encrypted, the metadata often isn&apos;t, and a graph of who talks to whom, when and how often can still be built from all the additional information around each message. </p><p><strong>Predictions</strong></p><ul><li>Expect to see more pressure placed on ISPs and less on tech companies. Google, Apple, Meta and others have shown some willingness to buck governmental pressure. However, given the growth in cellular data usage and the shift of consumers from laptops/desktops to mobile, expect to see more restrictions at the mobile network level, where even simple DNS blocking or tracking is harder to stop.  </li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://www.ericsson.com/en/reports-and-papers/mobility-report/dataforecasts/mobile-traffic-forecast"><img src="https://matduggan.com/content/images/2023/12/image-4.png" class="kg-image" alt loading="lazy" width="929" height="684" srcset="https://matduggan.com/content/images/size/w600/2023/12/image-4.png 600w, https://matduggan.com/content/images/2023/12/image-4.png 929w" sizes="(min-width: 720px) 720px"></a><figcaption><span style="white-space: pre-wrap;">[source]</span></figcaption></figure><ul><li>Widespread surveillance of all Internet traffic will continue to grow and governments will become more willing to turn off or greatly limit Internet access in the face of disruptions or threats. 
Expect to see even regional governments able to turn off mobile Internet in the face of protests or riots. </li><li>Look to the war in Gaza as an example of what this might look like. </li></ul><figure class="kg-card kg-image-card"><img src="https://pulse.internetsociety.org/wp-content/uploads/2023/10/cloudflare-radar-traffic-trends-xy-20231010-202310104-1024x576.png" class="kg-image" alt="Palestine-Israel Conflict Impacts Internet Access" loading="lazy"></figure><p>Shutting off the Internet will be a more common tactic to limit the flow of information out and to disrupt attempts to organize or communicate among members of the opposition. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/12/image-5.png" class="kg-image" alt loading="lazy" width="1626" height="900" srcset="https://matduggan.com/content/images/size/w600/2023/12/image-5.png 600w, https://matduggan.com/content/images/size/w1000/2023/12/image-5.png 1000w, https://matduggan.com/content/images/size/w1600/2023/12/image-5.png 1600w, https://matduggan.com/content/images/2023/12/image-5.png 1626w" sizes="(min-width: 720px) 720px"></figure><p>As of this writing there are 8 ongoing governmental Internet shutdowns and 119 in the last 12 months. I would expect this pace to dramatically increase. [<a href="https://pulse.internetsociety.org/shutdowns" rel="noreferrer">source</a>]</p><p>The end result of all of these disruptions will be an increasingly siloed Internet specific to your country. It&apos;ll be harder for normal people on the ground in a crisis or governmental crackdown to tell people what is happening and, with the next technology, easier for those forces to make telling what is happening on the ground next to impossible. </p><h3 id="llms-make-telling-the-truth-impossible">LLMs Make Telling the Truth Impossible</h3><p>Technology was supposed to usher in an age of less-centralized truth. No longer would we be reliant on the journalists of the past. 
Instead we could get our information directly from the people on the ground without filtering or editorializing. The goal was a fairer version of news that was more honest and less manipulated. </p><p>The actual product is far from that. Social media has become a powerful tool for propaganda, with algorithms designed to keep users engaged with content they find relevant, giving normal people access to conspiracy theories and propaganda with no filters or ethics. Russia and China, following a new version of their old Cold War playbooks, have excelled at this new world of disinformation, making it difficult to tell what is real and what is fake. </p><p>In 20 years we&apos;ll look back at this period as being the almost innocent beginning of this trend. With realistic deepfakes, it will soon be impossible to tell what a leader did or didn&apos;t say. Since China, Russia and increasingly the US have no concept of &quot;ethical journalism&quot; and answer either to government leaders or to a desire for more ratings, it will soon be possible to create entirely false news streams that cater to whatever viewpoint your audience finds appealing at that time. </p><p><strong>Predictions</strong></p><ul><li>Future conflicts will find social media immediately swamped with LLM-backed accounts attempting to create the perception that even a deeply disliked action (a Chinese blockade or invasion of Taiwan) is more nuanced. World leaders will find it difficult to tell what voters actually think and it will be hard to form consensus across political affiliations even on seemingly straightforward issues. </li><li>Politicians and their supporters will use the possibility of deepfakes to attempt to explain away any video or image of them engaging in nefarious actions. Even if deepfakes aren&apos;t widely deployed, the possibility of them will transition us into a post-truth reality. 
Even if you watch a video of the president giving a speech advocating something truly terrible, supporters will be able to dismiss it without consideration. </li><li>Technology companies, facing a closed Internet and an increasingly hostile financial landscape, will inevitably provide this technology as a service. Expect to see a series of cut-out companies but the underlying technology will be the same. </li><li>We won&apos;t ever find reliable LLM detection technology and there won&apos;t be a way to mass filter out this content from social media. </li><li>Even savvy consumers of media will find it very hard to tell truth from fabrication. Even if you are not swayed by the LLM-generated content, you will not be able to keep up with the sheer volume of output using conventional fact checking. </li></ul><h3 id="global-warming-and-war-kills-the-gadget">Global Warming (and War) Kills the Gadget </h3><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/a6dc7c53-ba03-4f26-fff3-db077bafc300/public" class="kg-image" alt loading="lazy"></figure><p>We know that Global Warming is going to have a devastating impact on shipping routes around the world. 
We&apos;re already seeing more storms impacting ports that are absolutely critical to the digital logistics chain.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matduggan.com/content/images/2023/12/image-6.png" class="kg-image" alt loading="lazy" width="903" height="470" srcset="https://matduggan.com/content/images/size/w600/2023/12/image-6.png 600w, https://matduggan.com/content/images/2023/12/image-6.png 903w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">[</span><a href="https://www.edf.org/sites/default/files/press-releases/RTI-EDF%20Act%20Now%20or%20Pay%20Later%20Climate%20Impact%20Shipping.pdf" rel="noreferrer"><span style="white-space: pre-wrap;">source</span></a><span style="white-space: pre-wrap;">]</span></figcaption></figure><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/12/image-7.png" class="kg-image" alt loading="lazy" width="439" height="340"></figure><p>With the COP28 conference a complete failure and none of the countries previously mentioned interested in addressing Global Warming, expect to see this trend continue unchecked. Without democratic pressures, we would expect to see countries like India, China, the US and others continue to take the most profitable course of action regardless of long-term cost. </p><p>The net result will be a widespread disruption in the complicated supply chain  that provides the hardware necessary to continue to grow the digital economy. It will be more difficult for datacenters, mobile network providers and individual consumers to get replacement parts for hardware or to upgrade that hardware. Since much of the manufacturing expertise required to make these parts is almost exclusively contained within the impacted zones, setting up alternative factories will be difficult or impossible. 
</p><blockquote>What&#x2019;s likely incentivizing semiconductor makers more than government dollars are geopolitical changes. Taiwan is potentially a major choke point in any electronics supply chain. Any electronic part, whether for a smart phone, a television, a home computer, or a data center likely includes critical components that came through Taiwan.<br><br>&#x201C;If you look across the Taiwan Strait, you&#x2019;ve got this 900-pound gorilla called China that is saying &apos;Taiwan belongs to us, and if you won&#x2019;t give it to us, we&#x2019;ll take it at some point,&apos;&#x201D; Johnson said. &#x201C;What would happen to the semiconductor industry if TSMCs fabs were destroyed? Disaster.&#x201D;<br><br>Before Chinese President Xi Jinping became president in 2012, Western nations had a relatively healthy trade relationship with China. Since that time, <a href="https://www.cfr.org/backgrounder/contentious-us-china-trade-relationship" rel="nofollow noopener">it has become more contentious</a>.<br><br>&#x201C;Before Xi came in power, we had this great trade relationship. And there was the belief that if you treated China like a grown-up partner, they&#x2019;d start acting like one; that turned out to be a very bad assumption,&#x201D; Johnson said. &#x201C;So yeah, the idea of bringing the entire supply chain back to the US? Probably not practical.<br><br>&quot;But you want to figure out how to diversify away from China as much as you can. 
I don&#x2019;t consider China a reliable business partner anymore.&#x201D;</blockquote><p>[<a href="https://www.computerworld.com/article/3692888/as-us-moves-to-regain-microchip-leadership-some-say-it-never-lost-it.html" rel="noreferrer">source</a>]</p><p><strong>Predictions</strong></p><ul><li>As relations with China continue to degrade, expect to see tech companies struggle to find replacements for difficult-to-manufacture parts.</li><li>Even among countries where relations are good, the decision to ignore Global Warming means we&apos;ll see increasingly severe disruption of maritime shipping, with destruction or flooding of vulnerable ports causing massive parts shortages. </li><li>It&apos;ll be harder to replace devices and harder to fix the ones you already have.</li><li>Expect to see a lot of &quot;right to repair&quot; bills as governments, unable to solve the logistical struggles, push the issue down to tech companies, who will need to change their designs and manufacturing locations. </li><li>Also expect to see the same model of device stay in the field for a lot longer. A cellphone or random IoT device will go from being easy to replace overnight to possibly involving a multi-week or even multi-month delay. Consumers will come to expect that they will be able to keep technology operational for longer. </li></ul><h3 id="tech-companies-will-be-pressured-to-comply">Tech Companies will be Pressured to Comply</h3><p>We currently live in a strange middle period where companies can still (mostly) say no to governments. While there are consequences, these are mostly financial or limitations on where the company can sell their products. However, that period appears to be coming to an end. Governments around the world are scrutinizing Big Tech and looking to apply regulations to those businesses. 
[<a href="https://www.forbes.com/sites/enriquedans/2021/05/02/around-the-world-governments-are-readying-to-regulate-bigtech/" rel="noreferrer">source</a>]</p><blockquote>More governments arrested users for nonviolent political, social, or religious speech than ever before. Officials suspended internet access in at least 20 countries, and 21 states blocked access to social media platforms. Authorities in at least 45 countries are suspected of obtaining sophisticated spyware or data-extraction technology from private vendors.</blockquote><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matduggan.com/content/images/2023/12/image-10.png" class="kg-image" alt loading="lazy" width="1080" height="1177" srcset="https://matduggan.com/content/images/size/w600/2023/12/image-10.png 600w, https://matduggan.com/content/images/size/w1000/2023/12/image-10.png 1000w, https://matduggan.com/content/images/2023/12/image-10.png 1080w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">[</span><a href="https://freedomhouse.org/report/freedom-net/2021/global-drive-control-big-tech" rel="noreferrer"><span style="white-space: pre-wrap;">source</span></a><span style="white-space: pre-wrap;">]</span></figcaption></figure><ul><li>Expect to see governments step up their expectations of what Tech is willing to do for them. Being told it is &quot;impossible&quot; to get information out of an encrypted exchange will get less and less traction. </li><li>Platforms like YouTube will be under immense pressure to either curtail fake video or promote fake video pushed by the government in question. Bans or slowdowns will be commonplace. 
</li><li>Getting users to provide more government ID, under the guise of protecting underage users, so that social media accounts can be tied to more effective criminal prosecution will become common. </li></ul><h3 id="conclusion">Conclusion</h3><p>Technology is not immune to changes in political structure. As we trend away from free and open communication across borders and towards more closed borders and war, we should expect to see technology reflect those changes. Hopefully this provides you with some interesting things to consider. </p><p>Whether these trends are reversible or not is not for me to say. I have no idea how to make a functional democracy, so fixing it is beyond my skills. I do hope I&apos;m wrong, but I feel my predictions fit within the data I was able to find. </p><p>As always I&apos;m open to feedback. The best place to find me is on Mastodon: <a href="https://c.im/@matdevdug">https://c.im/@matdevdug</a></p>]]></content:encoded></item><item><title><![CDATA[Why Kubernetes needs an LTS]]></title><description><![CDATA[<p>There is no denying that containers have taken over the mindset of most modern teams. With containers comes the need for orchestration to run those containers and currently there is no real alternative to Kubernetes. Love it or hate it, it has become the standard platform we have largely</p>]]></description><link>https://matduggan.com/why-kubernetes-needs-an-lts/</link><guid isPermaLink="false">656dc13317976e0001930c22</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Mon, 04 Dec 2023 12:58:47 GMT</pubDate><content:encoded><![CDATA[<p>There is no denying that containers have taken over the mindset of most modern teams. With containers comes the need for orchestration to run those containers and currently there is no real alternative to Kubernetes. Love it or hate it, it has become the standard platform we have largely adopted as an industry. 
If you exceed the size of docker-compose, k8s is the next step in that journey.</p><p>Despite the complexity and some of the hiccups around deploying, most organizations using k8s that I&apos;ve worked with seem to have positive feelings about it. It is reliable, and the depth and breadth of the community support means you are never the first to encounter a problem. However, Kubernetes is not a slow-moving target by infrastructure standards. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/12/image.png" class="kg-image" alt loading="lazy" width="742" height="470" srcset="https://matduggan.com/content/images/size/w600/2023/12/image.png 600w, https://matduggan.com/content/images/2023/12/image.png 742w" sizes="(min-width: 720px) 720px"></figure><p>Kubernetes follows an N-2 support policy (meaning that the 3 most recent minor versions receive security and bug fixes) along with a <a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-release/2572-release-cadence">15-week release cycle</a>. This results in a release being supported for 14 months (12 months of support and 2 months of upgrade period). If we compare that to Debian, the OS project a lot of organizations base their support cycles on, we can see the immediate difference. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/12/image-1.png" class="kg-image" alt loading="lazy" width="751" height="474" srcset="https://matduggan.com/content/images/size/w600/2023/12/image-1.png 600w, https://matduggan.com/content/images/2023/12/image-1.png 751w" sizes="(min-width: 720px) 720px"></figure><p>Red Hat, whose entire existence is based on organizations not being able to upgrade often, shows you at what cadence some orgs can roll out large changes. 
</p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/12/image-2.png" class="kg-image" alt loading="lazy" width="736" height="742" srcset="https://matduggan.com/content/images/size/w600/2023/12/image-2.png 600w, https://matduggan.com/content/images/2023/12/image-2.png 736w" sizes="(min-width: 720px) 720px"></figure><p>Now if Kubernetes adopted this cycle across OSS and cloud providers, I would say &quot;there is solid evidence that it can be done and these clusters can be kept up to date&quot;. However, cloud providers don&apos;t hold their customers to these extremely tight time windows. GCP, who has access to many of the Kubernetes maintainers and works extremely closely with the project, doesn&apos;t hold customers to anywhere near these timelines. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/12/image-3.png" class="kg-image" alt loading="lazy" width="795" height="358" srcset="https://matduggan.com/content/images/size/w600/2023/12/image-3.png 600w, https://matduggan.com/content/images/2023/12/image-3.png 795w" sizes="(min-width: 720px) 720px"></figure><p>Neither does AWS or Azure. The reality is that nobody expects companies to keep pace with that cadence of releases because the tooling to do so doesn&apos;t really exist. Validating that a cluster can be upgraded and that it is safe to do so requires either third-party tooling or a pretty good understanding of which APIs are getting deprecated when. Add in time for validating in staging environments, along with the sheer time involved in babysitting a Kubernetes cluster upgrade, and a clear problem emerges. 
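To make the deprecated-API bookkeeping concrete, here is a minimal sketch in Python of the kind of pre-upgrade check operators end up scripting for themselves. The manifest format is a simplified stand-in, and the removal table is an illustrative subset of real API removals, not an exhaustive list:

```python
# Sketch of a pre-upgrade check: flag manifests whose apiVersion/kind pair
# no longer exists at the target Kubernetes minor version.
# REMOVED_IN is a small illustrative subset of real removals, keyed by
# (apiVersion, kind) with the 1.x minor version where the API disappeared.
REMOVED_IN = {
    ("extensions/v1beta1", "Ingress"): 22,         # gone in 1.22
    ("networking.k8s.io/v1beta1", "Ingress"): 22,  # gone in 1.22
    ("batch/v1beta1", "CronJob"): 25,              # gone in 1.25
    ("policy/v1beta1", "PodSecurityPolicy"): 25,   # gone in 1.25
}

def blockers(manifests, target_minor):
    """Return the (apiVersion, kind) pairs that block an upgrade to the target 1.x version."""
    out = []
    for manifest in manifests:
        key = (manifest["apiVersion"], manifest["kind"])
        if key in REMOVED_IN and target_minor >= REMOVED_IN[key]:
            out.append(key)
    return out
```

Running this against a leftover batch/v1beta1 CronJob with a target of 1.25 flags it as a blocker, while the same manifest passes against 1.24. Tools like pluto and kubent do a more complete version of this same bookkeeping.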
</p><h3 id="what-does-upgrading-a-k8s-cluster-even-look-like">What does upgrading a k8s cluster even look like?</h3><p>For those unaware of what a manual upgrade looks like, this is the rough checklist.</p><ul><li>Check all third-party extensions such as network and storage plugins</li><li>Update etcd (all instances)</li><li>Update kube-apiserver (all control plane hosts)</li><li>Update kube-controller-manager</li><li>Update kube-scheduler</li><li>Update the cloud controller manager, if you use one</li><li>Update kubectl</li><li>Drain every node, then either replace the node or upgrade it, re-add it and monitor to ensure it continues to work</li><li>Run <code>kubectl convert</code> as required on manifests</li></ul><p>None of this is rocket science and all of it can be automated, but it still requires someone to be constantly on top of these releases. Most importantly, <strong>it is not substantially easier than making a new cluster. </strong>If upgrading is, at best, slightly easier than making a new cluster and often quite a bit harder, teams can get stuck, unsure of the correct course of action. However, given the aggressive pace of releases, spinning up a new cluster for every new version and migrating services over to it can be really logistically challenging. </p><p>Consider that you don&apos;t want to be on the .0 of a k8s release; most teams wait for .2. You lose a fair amount of your 14-month window waiting for that criterion to be met. Then you spin up the new cluster and start migrating services over to it. For most teams this involves a fair amount of duplication and wasted resources, since you will likely have double the number of nodes running for at least some period in there. CI/CD pipelines need to get modified, docs need to get changed, DNS entries have to get swapped. 
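One reason falling behind hurts so much: skipping releases is not an escape hatch, because upstream only supports moving the control plane one minor version at a time. A minimal sketch of what that constraint implies for a neglected cluster:

```python
def upgrade_path(current, target):
    """List every intermediate hop, since control planes upgrade one minor version at a time."""
    major, cur = (int(x) for x in current.split("."))
    tgt_major, tgt = (int(x) for x in target.split("."))
    if major != tgt_major or cur > tgt:
        raise ValueError("expected a forward, same-major upgrade")
    return [f"{major}.{minor}" for minor in range(cur + 1, tgt + 1)]

# A cluster left on 1.24 for a year faces four full upgrade cycles:
print(upgrade_path("1.24", "1.28"))  # ['1.25', '1.26', '1.27', '1.28']
```

Each hop in that list is the entire checklist above, run end to end, with its own validation pass.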
</p><p>None of this is impossible stuff, or even terribly difficult stuff, but it is critical, and even with automation the risk of one of these steps failing silently is high enough that few folks I know would fire and forget. Instead clusters seem to be in a state of constantly falling behind unless the teams are empowered to make keeping up with upgrades a key value they bring to the org. </p><p>My experience with this has been extremely bad, often joining teams where a cluster has been left to languish for too long and now we&apos;re running into concerns over whether it can be safely upgraded at all. Typically my first three months running an old cluster is telling leadership I need to blow our budget out a bit to spin up a new cluster and cut over to it namespace by namespace. It&apos;s not the most gentle onboarding process. </p><h3 id="proposed-lts">Proposed LTS</h3><p>I&apos;m not suggesting that the k8s maintainers attempt to keep versions around forever. Their pace of innovation and adding new features is a key reason the platform has thrived. What I&apos;m suggesting is a dead-end LTS with no upgrade path out of it. GKE allowed customers to be on 1.24 for 584 days and 1.26 for 572 days. Azure has a more generous LTS window of 2 years from the GA date, and EKS from AWS is sitting at around 800 days of support per version from launch to end of LTS. </p><p>These are more in line with the pace of upgrades that organizations can safely plan for. I would propose an LTS release with 24 months of support from GA and an understanding that the Kubernetes team can&apos;t offer an upgrade to the next LTS. The proposed workflow for operations teams would be clusters that live for 24 months, after which organizations need to migrate off of them and create a new cluster. </p><p>This workflow makes sense for a lot of reasons. 
First, creating fresh nodes at regular intervals is best practice, allowing organizations to upgrade the underlying Linux OS and hypervisor. While you should obviously be upgrading more often than every 2 years, this would be a good check-in point. It also means teams take a look at the entire stack, starting with a fresh etcd, new versions of Ingress controllers, all the critical parts that organizations might be loath to poke unless absolutely necessary. </p><p>I also suspect that the community would come in and offer a ton of guidance on how to upgrade from LTS to LTS, since this is a good injection point for either a commercial product or an OSS tool to assist with the process. But this wouldn&apos;t bind the maintainers to such a project, which I think is critical both for pace of innovation and just complexity. K8s is a complicated collection of software with a lot of moving pieces, and testing it as-is already reaches a scale most people won&apos;t need to think about in their entire careers. I don&apos;t think it&apos;s fair to put this on that same group of maintainers. </p><h3 id="lts-wg">LTS WG</h3><p>The k8s team is reviving the LTS workgroup, which was disbanded previously. I&apos;m cautiously optimistic that this group will have more success, and I hope that they can do something to make a happier middle ground between hosted platform and OSS stack. I haven&apos;t seen much from that group yet (the mailing list is empty: <a href="https://groups.google.com/a/kubernetes.io/g/wg-lts">https://groups.google.com/a/kubernetes.io/g/wg-lts</a>) and the Slack seems pretty dead as well. However, I&apos;ll attempt to follow along with them as they discuss the suggestion and update if there is any movement. </p><p>I really hope the team seriously considers something like this. It would be a massive benefit to operators of k8s around the world to not have to be in a state of constantly upgrading existing clusters. 
It would simplify the third-party ecosystem as well, allowing for easier validation against a known-stable target that will be around for a little while. It also encourages better workflows from cluster operators, pushing them towards the correct answer of getting in the habit of making new clusters at regular intervals vs keeping clusters around forever. </p>]]></content:encoded></item><item><title><![CDATA[AI is Already Killing Books]]></title><description><![CDATA[<p>I love reading. It is the thing on this earth that brings me the most joy. I attribute no small part of who I am and how I think to the authors who I have encountered in my life. The speed with which LLMs are destroying this ecosystem is a</p>]]></description><link>https://matduggan.com/ai-is-gonna-kill-books/</link><guid isPermaLink="false">655368a217976e0001930858</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 24 Nov 2023 14:00:57 GMT</pubDate><content:encoded><![CDATA[<p>I love reading. It is the thing on this earth that brings me the most joy. I attribute no small part of who I am and how I think to the authors who I have encountered in my life. The speed with which LLMs are destroying this ecosystem is a tragedy that we&apos;re not going to understand for a generation. We keep talking about it as an optimization, like writing is a factory and books are the products that fly off the line. I think it&apos;s a tragedy that will cause people to give up on the idea of writing as a career, closing off a vital avenue for human expression and communication.</p><p>Books, like everything else, have evolved in the face of the internet. For a long time publishers were the ultimate gatekeepers and authors tried to eke out an existence by submitting to anyone who would read their stuff. Most books were commercial failures, but some became massive hits. Then eBooks came out and suddenly authors could bypass the publishers and editors of the world to get directly to readers. 
This was promised to unleash a wave of quality the world had never seen before. </p><p>In practice, to put it kindly, eBooks are a mixed success. Some authors benefited greatly from the situation, able to establish strong followings and keep a much higher percentage of their revenue than they would with a conventional publisher. Most released a book, nobody ever read it and that was it. However there was a middle tier of success, where authors could find a niche and generate a pretty reliable stream of income. Not giant numbers, but even 100 copies sold or borrowed under Kindle Unlimited a month, spread out across enough titles, can let you survive. </p><p>AI-written text is quickly filling these niches, since scammers are able to identify lucrative subsections where it might not be worth a year of a person&apos;s life to try to write a book this audience will like, but having a machine generate a book and throw it up there is incredibly cheap. I&apos;m seeing them more and more, these free-on-Kindle-Unlimited books with incredibly specific topics that seem tailored towards getting recommended to users in sub-genres. </p><p>There is no feeling of betrayal like thinking you are about to read something that another person slaved over, only to discover you&apos;ve been tricked. They had an idea, maybe even a good idea, and instead of putting in the work and actually sitting there crafting something worth my precious hours on this Earth to read, they wasted my time with LLM drivel. Those too-formal, politically neutral, long-winded paragraphs stare back at me as the ultimate indictment of how little of a shit the person who &quot;wrote this&quot; cared about my experience reading it. It&apos;s like getting served a microwave dinner at a sit-down restaurant.</p><p>Maybe you don&apos;t believe me, or see the problem. Let me at least try to explain why this matters. Why the relationship between author and reader is important and requires mutual respect. 
Finally, why this destruction is going to matter in the decades to come. </p><h3 id="tldr">TLDR</h3><p>Since I know a lot of people aren&apos;t gonna read the whole thing (which is fine), let me just bulletpoint my responses to anticipated objections addressed later.</p><ul><li><strong>LLMs will let people who couldn&apos;t write books before do it. </strong>That isn&apos;t a perk. Part of the reason people invest so many hours into reading is because we know the author invested far more in writing. The sea of unread, maybe great books, was already huge. This is expanding the problem and breaking the relationship of trust between author and reader.</li><li><strong>It&apos;s not different from spellcheck or grammar check. </strong>It is, though, and you know that. Those tools made complex lookups easier against a large collection of rules; this is generating whole blobs of text. Don&apos;t be obtuse. </li><li><strong>They let me get my words down with less work. </strong>There is a key thing about any creative area, but especially writing, that people forget. Good writing kills its darlings. If you don&apos;t care enough about a section to write it, then I don&apos;t care enough to read it. Save us both time and just cut it. </li><li><strong>Your blog is very verbose. </strong>I never said I was a good writer.</li><li><strong>The market will fix the problem. </strong>The book market relies on a vast army of unpaid volunteers to effectively sacrifice their time and wade through a sea of trash to find the gems. Throwing more books at them just means more gems get lost. Like any volunteer effort, the pool of people doesn&apos;t grow at the same rate as the problem. </li><li><strong>How big of a problem is it? </strong>Considering how often I&apos;m seeing them, it feels big, but it is hard to calculate a number. 
It isn&apos;t just me <a href="https://www.extremetech.com/computing/amazon-is-full-of-ai-written-novels-that-dont-make-sense" rel="noreferrer">link</a></li></ul><h3 id="why-does-it-matter">Why Does It Matter?</h3><p>Allow me to veer into my personal background to provide context on why I care. I grew up in small towns across rural Ohio, places where the people who lived there either had no choice but to stay or chose to stay because of the simple lifestyle and absolute consensus on American Christian values. We said the Pledge of Allegiance aggressively, we all went to church on Sunday, gay people didn&apos;t exist and the only non-white people we saw were the migrant farm workers who we all pretended didn&apos;t exist living in the trailers around farms surrounding the town. As a kid it was fine, children are neither celebrated or hated in this culture, instead we were mostly left alone. </p><p>There is a violent edge to these places that people don&apos;t see right away. You aren&apos;t encouraged to ask a lot of questions about the world around you. We were constantly flooded with religious messaging, at school, home, church, church camp, weekly classes at night or bible studies, movies and television that was specifically encouraged because they had a religious element. Anything outside of this realm was met with a chilly reaction from most adults, if not outright threats of violence. My parents didn&apos;t hit me, but I was very much in the minority of my group. More than once we turned the sound up on a videogame or tv to drown out the sobs of a child being struck with a hand or belt while we were at a friends house. </p><p>Small town opinion turns on a dime and around 4th grade it turned on me. Everyone knows your status because there aren&apos;t a lot of people so I couldn&apos;t just go hang out in a new neighborhood. Suddenly I had a lot of alone time, which I filled with reading. These books didn&apos;t just fill time, they made me invisible. 
I had something to do during lunch, recess, whenever. Soon I had consumed everything within the children&apos;s section of the library I was interested in reading and graduated to the adult section. </p><p><strong>Adult Section</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.mywcpl.org/sites/default/files/migrated/Carnegie100th.JPG" class="kg-image" alt="Bryan Main Library | Williams County Public Library" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Not a terribly impressive building that I spent a lot of time in.</span></figcaption></figure><p>I was fortunate enough not to grow up today, where this loneliness and anger might have found an online community. They would reinforce my feelings, confirming that I was in the right and everyone else was in the wrong. If they rejected me, I would have wandered until I found another group. The power of the internet is the ability to self-select for your level of depravity. </p><p>Instead, wandering the poorly lit stacks of the only library in town, I came across a book that child me couldn&apos;t walk past. A heavy tome that seemed to contain exactly the sort of cursed knowledge that had been kept from me my entire life. The Book of the Dead.</p><figure class="kg-card kg-image-card"><img src="https://cdn10.bigcommerce.com/s-g9n04qy/products/259247/images/263934/517785VZJDL._SL1200___24538.1500738264.500.500.jpg?c=2" class="kg-image" alt="The Ancient Egyptian Book Of The Dead by University of Texas Press &amp;  Faulkner" loading="lazy"></figure><p>The version I read was an old hardcover, tucked away in a corner with a title that was too good to pass up. A book about <em>other religions</em>, old religions? From a <em>Muslim</em> country? I knew I couldn&apos;t take it home. If anyone saw me with this it would raise a lot of questions I couldn&apos;t answer. 
Instead I struggled through it sitting at the long wooden tables after school and on the weekends, trying to make sense of what was happening. </p><p>The text (for those that are curious: <a href="https://www.ucl.ac.uk/museums-static/digitalegypt/literature/religious/bdbynumber.html">https://www.ucl.ac.uk/museums-static/digitalegypt/literature/religious/bdbynumber.html</a>) is dense and hard to read. It took me forever to get through it, missing a lot of the meaning. I would spend days sitting there writing in my little composition notebook, looking up words and trying to parse hard-to-read sentences. The Book of the Dead is about two hundred &quot;spells&quot; (or maybe &quot;chants&quot; would be a better way to describe them) that basically take someone through the process of death. From preservation to the afterlife and finally to judgement, the soul was escorted through the process and each part was touched upon. </p><p>The part that blew my mind was the Hymn to Osiris:</p><blockquote>&quot;(1) Hail to thee, Osiris, lord of eternity, king of the gods, thou who hast many names, thou disposer of created things, thou who hast hidden forms in the temples, thou sacred one, thou KA who dwellest in Tattu, thou mighty (2) one in Sekhem, thou lord to whom invocations are made in Anti, thou who art over the offerings in Annu, thou lord who makest inquisition in two-fold right and truth, thou hidden soul, the lord of Qerert, thou who disposest affairs in the city of the White Wall, thou soul of Ra, thou very body of Ra who restest in (3) Suten-henen, thou to whom adorations are made in the region of Nart, thou who makest the soul to rise, thou lord of the Great House in Khemennu, thou mighty of terror in Shas-hetep, thou lord of eternity, thou chief of Abtu, thou who sittest upon thy throne in Ta-tchesert, thou whose name is established in the mouths of (4) men, thou unformed matter of the world, thou god Tum, thou who providest with food the ka&apos;s who are with the 
company of the gods, thou perfect <em>khu</em> among <em>khu&apos;s</em>, thou provider of the waters of Nu, thou giver of the wind, thou producer of the wind of the evening from thy nostrils for the satisfaction of thy heart. Thou makest (5) plants to grow at thy desire, thou givest birth to . . . . . . . ; to thee are obedient the stars in the heights, and thou openest the mighty gates. Thou art the lord to whom hymns of praise are sung in the southern heaven, and unto thee are adorations paid in the northern heaven. The never setting stars (6) are before thy face, and they are thy thrones, even as also are those that never rest. An offering cometh to thee by the command of Seb. The company of the gods adoreth thee, the stars of the <em>tuat</em> bow to the earth in adoration before thee, [all] domains pay homage to thee, and the ends of the earth offer entreaty and supplication. When those who are among the holy ones (7) see thee they tremble at thee, and the whole world giveth praise unto thee when it meeteth thy majesty. Thou art a glorious <em>sahu</em> among the <em>sahu&apos;s</em>, upon thee hath dignity been conferred, thy dominion is eternal, O thou beautiful Form of the company of the gods; thou gracious one who art beloved by him that (8) seeth thee. Thou settest thy fear in all the world, and through love for thee all proclaim thy name before that of all other gods. Unto thee are offerings made by all mankind, O thou lord to whom commemorations are made, both in heaven and in earth. Many are the shouts of joy that rise to thee at the Uak[*] festival, and cries of delight ascend to thee from the (9) whole world with one voice. Thou art the chief and prince of thy brethren, thou art the prince of the company of the gods, thou stablishest right and truth everywhere, thou placest thy son upon thy throne, thou art the object of praise of thy father Seb, and of the love of thy mother Nut. 
Thou art exceeding mighty, thou overthrowest those who oppose thee, thou art mighty of hand, and thou slaughterest thine (10) enemy. Thou settest thy fear in thy foe, thou removest his boundaries, thy heart is fixed, and thy feet are watchful. Thou art the heir of Seb and the sovereign of all the earth;</blockquote><p>To a child raised in a heavily Christian environment, this isn&apos;t just <em>close</em> to biblical writing, it&apos;s the same. The whole world praises and worships him, with a father and mother and woe to his foes who challenge him? I had assumed all of this was unique to Christianity. I knew there had been other religions but I didn&apos;t know they were saying <em>the exact same things. </em></p><p>As important as the text is <em>the surrounding context the academic sources put the text in</em>. An expert walks me through how translations work, the source of the material, how our understanding has changed over time. As a kid drawn in by a cool title, I&apos;m learning a lot about how to take in information. I&apos;m learning real history has citations, explanations, debates, ambiguity. Real academic writing has a style, which, when I stumble across the metaphysical Egyptian magic nonsense, makes it easy to spot. </p><p><strong>The reason this book mattered is the expert human commentary. </strong>The words themselves with some basic context wouldn&apos;t have meant anything. It&apos;s by understanding the amount of work that went into this translation, what it means, what it also <em>could mean, </em>that the importance sets in. That&apos;s the human element which creates all the value. You aren&apos;t reading old words, you are being taken on a guided tour by someone who has lived with this text for a long time and knows it up and down. </p><p>I quickly expanded, growing from this historical text to a wide range of topics. I soon found there is someone there to meet me at every stage of life. 
When I&apos;m lonely or angry as a teenager I find those authors and stories that speak to that, put those feelings into a context and bigger picture. <em>This isn&apos;t a new experience, people have felt this way going back to the very beginning.</em> So much of the value isn&apos;t just the words, it&apos;s the sense of a relationship between me and the author. When you encounter this in fiction or in historical text, you come to understand as overwhelming as it feels <em>in that second</em> it is part of being a human being. This person experienced it and lived, you will too. </p><p>You also get to experience emotions that you may never experience. <em>A Passage to India</em> was a book I enjoyed a lot as a teen, even though it is about the story of two British women touring around India and clashing with the colonial realities of British history. I know nothing about British culture, the 1920s, all of this is as alien to me as anything else. It&apos;s fiction but with so much historical backing you still feel like you are seeing something different, something new. </p><p>That&apos;s a powerful part of why books work. Even if you the author are just imagining those scenarios, real life bleeds in. You can make text that reads like A Farewell to Arms, but you would miss the point if you did. It&apos;s more interesting and more powerful because its Hemingway basically recanting his wartime experience through his characters (obviously pumping up the manliness as he goes). It is when writers draw on their personal lives that it hits hardest.</p><p>Instead of finding a community that reinforced how alone and sad I was in that moment, I found evidence it didn&apos;t matter. People had survived far worse and ultimately turned out to be fine. You can&apos;t read about the complex relationship of fear and respect Egyptians had with the Nile, where too little water was dead and too much was also death, then endlessly fixate on your own problems. 
Humanity is capable of adaptation and the promise is, so are you. </p><h3 id="why-ai-threatens-books">Why AI Threatens Books</h3><p>As readers get older and spend a few decades going through books, they discover authors they like and more importantly styles they like. However, you also like to see some experimentation in the craft, maybe with some rough edges. To me it&apos;s like people who listen to concert recordings instead of the studio album. Maybe it&apos;s a little rougher but there is also genius there from time to time. eBooks quickly became where you found the indie gems that would later get snapped up by publishers.</p><p>The key difference between physical books and eBooks is that bookstores and libraries are curated. They&apos;ll stock the shelves with things they like and things that will sell. Indie bookstores tend to veer a little more towards things they like, but in general it&apos;s not hard to tell the difference between the stack of books the staff loves and the ones they think the general population will buy. However, each one had to get read by a person. That is the key difference between music or film and books. </p><p>A music reviewer needs to invest 30-60 minutes to listen to an album. A movie reviewer needs somewhere between 1 and 3 hours. An owner of a bookstore in Chicago broke down his experience pretty well:</p><p>Average person: 4 books a year if they read at all</p><p>Readers (people who consider it a hobby): 30-50 books a year</p><p>Super readers: 80 books</p><p>80 books is not a lot of books. Adult novels clock in at about 90,000 words; at a reading speed of 200-300 words per minute, that&apos;s roughly 5-8 hours to get through a book. To combat this discrepancy, websites like Goodreads became popular, because frankly you cannot invest 8 hours of your life in shitty eBooks very often. At the very least your investment should hopefully scare off others considering doing the same (or at least they can make an informed choice). 
</p><p>The eBook market also stopped being somewhere you wanted to wade in randomly, due to the spike in New Age nonsense writing and openly racist or sexist titles. This book below was found by searching the term &quot;war&quot; and going to the second page. As a kid I would have had to send a money order to the KKK to get my hands on a book like this, but now it&apos;s in my virtual bookstore next to everything else. Since Amazon, despite their wealth and power, has no interest in policing their content, you are forced to solve the problem through community effort. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matduggan.com/content/images/2023/11/image.png" class="kg-image" alt loading="lazy" width="1488" height="279" srcset="https://matduggan.com/content/images/size/w600/2023/11/image.png 600w, https://matduggan.com/content/images/size/w1000/2023/11/image.png 1000w, https://matduggan.com/content/images/2023/11/image.png 1488w" sizes="(min-width: 720px) 720px"><figcaption><a href="https://www.amazon.com/s?k=WAR&amp;i=stripbooks-intl-ship&amp;page=2&amp;crid=2M4G32CTJBZ6V&amp;qid=1700229573&amp;sprefix=war%2Cstripbooks-intl-ship%2C248&amp;ref=sr_pg_2"><span style="white-space: pre-wrap;">https://www.amazon.com/s?k=WAR&amp;i=stripbooks-intl-ship&amp;page=2&amp;crid=2M4G32CTJBZ6V&amp;qid=1700229573&amp;sprefix=war%2Cstripbooks-intl-ship%2C248&amp;ref=sr_pg_2</span></a></figcaption></figure><p>The reason why AI books are so devastating to this ecosystem should be obvious, but let&apos;s lay it out. It breaks the emotional connection between reader and writer and creates a sense of paranoia. Is this real or fake? In order to discover it, someone else needs to invest <em>a full work day</em> into reading it to figure it out. 
Then you need to join a community with enough trusted reviewers willing to donate their time for free to tell you whether the book is good or bad. Finally you need to hope that you are a member of the right book reading community to discover the review. </p><p>So if we were barely surviving the flood of eBooks and missing <em>tons and tons</em> of good books, the last thing we needed was someone to crank up the volume of books shooting out into the marketplace. The chances that one of the sacred reviewers even finds a new author&apos;s book decrease, so the community won&apos;t find it and the author will see that they have no audience and will either stop writing or will ensure they don&apos;t write another book like the first. The feedback loop, which was already breaking under the load, completely collapses. </p><p>Now that AI books exist, the probability that I will ever blind purchase another eBook on Amazon from an unknown author drops to zero. Now more than ever I entirely rely on the reviews of others. Before I might have wandered through the virtual stacks, but no more. I&apos;m not alone in this assessment; friends and family have reported the same feeling, even if they haven&apos;t themselves knowingly been burned by an AI book. </p><p>AI books solve a problem that didn&apos;t exist, which is this presumption by tech people that what we needed was more people writing books. Instead, like so many technical solutions to problems that the architects never took any time to understand, the result doesn&apos;t help smaller players. It places all the power back into the hands of publishers and the small cadre of super reviewers, since they&apos;re willing to invest the time to check for at least some low benchmark of quality. </p><p>The sad part is this is unstoppable. eBooks are too easy to make with LLMs and no reliable detection systems exist to screen them before they&apos;re uploaded to the market. 
Amazon has no interest in setting realistic limits on how many books users can upload to the Kindle Store, still letting people upload a laughable <em>three books a day</em>. The Google Play Store seems to have no limit, same with Apple Books. It&apos;s depressing that another market will become so crowded with trash, but nobody in a position to change it seems to care. </p><h3 id="the-future">The Future</h3><p>So where does that leave us? Well, kind of back to where we started. If you are excellent at marketing and can get the name of your eBook out there, then people can go directly to it. But similar to how the App Store and Play Store are ruined for new app discoverability, it&apos;s a lopsided system which favors existing players and stacks the deck against anyone new. Publishers will still be able to get the authors to do the free market research through the eBook market and then snap up proven winners. </p><p>Since readers pay the price for this system by investing money and time into fake books, it both increases the amount of terrible content out there and further incentivizes the push down in eBook prices. If there are 600,000 &quot;free&quot; eBooks on Kindle Unlimited and you are trying to compete with a book that took a fraction of the time to produce, you are going to struggle to justify more than the $1.99-$2.99 price point. So not only are you selling a year (or years) of your life for the cost of a large soda, the probability of someone organically finding your book went from &quot;bad&quot; to &quot;grain of sand in the ocean&quot;. </p><p>Even if there are laws, there is no chance they&apos;ll be able to make a meaningful difference unless the laws mandate that AI-produced text is watermarked in some distinct way, which everyone will immediately remove. So what was a &quot;hard but possible&quot; dream turns into an &quot;attempting to become a professional athlete&quot; level of statistical improbability. 
The end result will be fewer people trying, so we get fewer good stories and instead just endlessly retread the writing of the past. </p>]]></content:encoded></item><item><title><![CDATA[Help Everyone Do Better Security]]></title><description><![CDATA[<p>One interesting thing about the contrast between infrastructure and security is the expectation of open-source software. When a common problem arises that we all experience, a company will launch a product to solve this problem. In infrastructure, typically the core tool is open-source and free to use, with some value-add services</p>]]></description><link>https://matduggan.com/security-feels-pointless/</link><guid isPermaLink="false">653383f4b217840001b4d9b8</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 27 Oct 2023 09:57:36 GMT</pubDate><content:encoded><![CDATA[<p>One interesting thing about the contrast between infrastructure and security is the expectation of open-source software. When a common problem arises that we all experience, a company will launch a product to solve this problem. In infrastructure, typically the core tool is open-source and free to use, with some value-add services or hosting put behind licensing and paid support contracts. On the security side, the expectation seems to be that the base technology will be open-source but any refinement is not. If I find a great tool to manage SSH certificates, I have to pay for it and I can&apos;t see how it works. If I rely on a company to handle my login, I can ask for their security audits (sometimes) but the actual nuts and bolts of &quot;how they solved this problem&quot; is obscured from me. </p><p>Instead of &quot;building on the shoulders of giants&quot;, it&apos;s more like &quot;You&apos;ve never made a car before. So you make your first car, load it full of passengers, send it down the road until it hits a pothole and detonates.&quot; Then someone will wander by and explain how what you did was wrong. 
People working on their first car to send down the road become scared because they have another example of how to make the car incorrectly, but are not that much closer to a correct one given the nearly endless complexity. They may have many examples of &quot;car&quot; but they don&apos;t know if this blueprint is a good car or a bad car (or an old car that was good and is now bad). </p><p>In order to be good at security, one has to see good security first. I can understand in the abstract how SSH certificates should work, but to implement it I would have to go through the work of someone with a deep understanding of the problem to grasp the specifics. I may understand in the abstract how OAuth works, but the low level &quot;how do I get this value/store it correctly/validate it correctly&quot; is different. You can tell me until you are blue in the face how to do logins wrong, but I have very few criteria by which I can tell if I am doing it right. </p><p>To be clear there is no shortage of PDFs and checklists telling me how my security should look at an abstract level. Good developers will look at those checklists, look at their code, squint and say &quot;yeah I think that makes sense&quot;. They don&apos;t necessarily have the mindset of &quot;how do I think like someone attempting to break this code&quot;, in part because they may have no idea how the code works. Their code presents the user a screen, they receive a token, that token is used for other things and they got an email address in the process. The massive number of moving parts they just used is obscured from them, code they&apos;ll never see.</p><p>Just to do session cookies correctly, you need to know about and check the following things:</p><ul><li>Is the expiration good and are you checking it on the server?</li><li>Have you checked that you never send the Cookie header back to the client and break the security model? Can you write a test for this? 
How time consuming will that test be?</li><li>Have you set the <code>Secure</code> flag? Did you set the <code>SameSite</code> flag? Can you use the <code>HttpOnly</code> flag? Did you set it?</li><li>Did you scope the domain and path?</li><li>Did you write checks to ensure you aren&apos;t logging or storing the cookies wrong?</li></ul><p>That is so many places to get <em>just one thing</em> wrong.</p><p>We have to come up with a better way of throwing flares up in people&apos;s way. More aggressive deprecation, more frequent spec bumps, some way of communicating to people &quot;the way you have done things is legacy and you should look at something else&quot;. On the other side we need a way to say &quot;this is a good way to do it&quot; and &quot;that is a bad way to do it&quot; with code I can see. Pen-testing, scanners, these are all fine, but without some concept of &quot;blessed good examples&quot; it can feel like patching a ship in the dark. I closed that hole, but I don&apos;t know how many more there are until a tool or attacker finds it. </p><p>I&apos;m gonna go through four examples of critical load-bearing security-related tooling or technology that is set up wrong by default or very difficult to do correctly. This is stuff everyone gets nervous touching because it doesn&apos;t help you set it up right. If we want people to do this stuff right, the spec needs to be more opinionated about right and wrong and we need to show people what right looks like on a code level. </p><h3 id="ssh-keys">SSH Keys</h3><p>This entire field of modern programming is built on the back of SSH keys. Starting in 1995 and continuing now with OpenSSH, the protocol uses an asymmetric encryption process with the Diffie-Hellman (DH) key exchange algorithm to form a shared secret key for the SSH connection. SFTP, deploying code from CI/CD systems, accessing servers, using git, all of this happens largely on the back of SSH keys. 
Now you might be thinking &quot;wait, SSH keys are great&quot;. </p><p>At a small scale SSH is easy and effortless. <code>ssh-keygen -t rsa</code>, select where to store it and if you want a passphrase. <code>ssh-copy-id username@remoteserverip</code> to move it to the remote box, assuming you set up the remote box with <code>cloud-init</code> or <code>ansible</code> or whatever. At the end of every ssh tutorial there is a paragraph that reads something like the following: &quot;please ensure you rotate, audit and check all SSH keys for permissions&quot;. This is where things get impossible. </p><p>SSH keys don&apos;t help administrators do the right thing. Here&apos;s all the things I don&apos;t know about the SSH key I would need to know to do it correctly:</p><ul><li>When was the key made? Is this a new SSH key or are they reusing a personal one or one from another job? I have no idea.</li><li>Was this key secured with a passphrase? Again, such a basic thing: can I ensure all the keys on my server were set up with a passphrase? Like just include some flag on the public key that says &quot;yeah the private key has a passphrase&quot;. I understand you <em>could</em> fake it but the massive gain in security for everyone outweighs the possibility that someone manipulates a public key to say &quot;this has a passphrase&quot;. </li><li>Expiration. I need a value that I can statelessly query to say &quot;is this public key expired or not&quot; and also to check when enrolling public keys &quot;does this key live too long&quot;. </li></ul><p>This isn&apos;t just a &quot;what-if&quot; conversation. I&apos;ve seen this and I bet you have too, or would if you looked at your servers.</p><ul><li>Many keys on servers are unused and represent access that was never properly terminated or shouldn&apos;t have been granted. I find across most jobs it&apos;s like 10% of the keys that ever get used. </li><li>Nobody knows who has the corresponding private keys. 
We know which user made them, but we don&apos;t know where they are now.</li><li>Alright, so we use certificates! Well, except they&apos;re specific to OpenSSH, they make auditing SSH key based access impossible since you don&apos;t know what keys the server will accept by looking at it, and all the granting and revoking tooling is on you to build. </li></ul><p><strong>OpenSSH Certificates solve almost all these problems. </strong>You get expiration, command restrictions, IP limits, etc. It&apos;s a step forward but we&apos;re not using them in small and medium orgs due to the complexity of setup, and we need to port some of these security concerns down the chain. It&apos;s exactly what I was talking about in the beginning. The default experience is terrible because of backwards compatibility, and the 1% who know of the existence of SSH Certificates and can operationally support the creation of this mission-critical tooling reap the benefits. </p><p>So sure, if I set up all of the infrastructure to do all the pieces, I can enforce ssh key rotation. I&apos;ll check the public key into object storage, sync it with all my servers, check the date the key was entered and remove it after a certain date. But seriously? We can&apos;t make a new version of the SSH key with some metadata? The entire internet operates off SSH keys and they&apos;re a half-done idea, fixed through the addition of certificates nobody uses because writing the tooling to handle the user certificate process is a major project where, if you break it, you can&apos;t get into the box.</p><p>This is a crazy state of affairs. We <em>know</em> SSH keys live in infrastructure forever, we know they&apos;re used for way too long all over the place and we know the <em>only</em> way to enforce rotation patterns is through the use of expiration. We also know that passphrases are absolutely essential for the use of keys. 
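</p><p>To make the certificate side concrete, here is a rough sketch of the minimal workflow using only stock <code>ssh-keygen</code>. The CA name, user name and validity window are made up for the example:</p>

```shell
# One-time: create a CA keypair used only for signing user keys (example path).
ssh-keygen -q -t ed25519 -f ./user_ca -N '' -C 'example user-key CA'

# The user generates a normal keypair.
ssh-keygen -q -t ed25519 -f ./alice -N '' -C 'alice@example.com'

# Sign the user's public key with a 4-week validity window and a named
# principal. This produces ./alice-cert.pub.
ssh-keygen -s ./user_ca -I alice -n alice -V +4w ./alice.pub

# Inspect the result: the "Valid:" line is exactly the expiration metadata
# that plain public keys never carry.
ssh-keygen -L -f ./alice-cert.pub
```

<p>The validity window travels inside the certificate itself, so an expired credential fails on its own with no server-side bookkeeping; the operational pain is everything around this, not the commands.</p><p>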
Effectively to use SSH keys you need to stick a PAM in there to enforce 2FA like <code>libpam-google-authenticator</code>. BTW, talking about &quot;critical infrastructure not getting a ton of time&quot;, this is the repo of the package every tutorial recommends. Maybe nothing substantial has happened in 3 years but that feels a little unlikely. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/10/image-5.png" class="kg-image" alt loading="lazy" width="903" height="726" srcset="https://matduggan.com/content/images/size/w600/2023/10/image-5.png 600w, https://matduggan.com/content/images/2023/10/image-5.png 903w" sizes="(min-width: 720px) 720px"></figure><h3 id="mobile-device-managementdevice-scanningnetwork-mitm-scanning">Mobile Device Management/Device Scanning/Network MITM Scanning</h3><p>Nothing screams &quot;security theater&quot; to me like the absolutely excessive MDM that has come to plague major companies. I have had the &quot;joy&quot; of working for 3 large companies that went all-in on this stuff and each time have gotten the pleasure of rip-your-hair-out levels of frustration. I&apos;m not an admin on my laptop, so now someone who has no idea what my job is or what I need to do it gets to decide what software I get to install. All my network traffic gets scanned, so forget privacy on the device. At random intervals my laptop becomes unusable because every file on the device needs to &quot;get scanned&quot; for something.</p><p>Now in theory the way this stuff is supposed to work is a back and forth between security, IT and the users. In practice it&apos;s a one-way street: once the stupid shit gets bought and turned on, it never gets turned off. All of the organizational incentives are there to keep piling this worthless crap on previously functional machines and then almost dare the employee to get any actual work done. It just doesn&apos;t make any sense to take this heavy of a hand with this stuff. 
</p><p><strong>What about stuff exploiting employee devices?</strong></p><p>I mean if you have a well-researched paper which shows that this stuff actually makes a difference, I&apos;d love to see it. Mostly it seems from my reading like vendors repeating sales talking points to IT departments until they accept it as gospel truth, mixed with various audits requiring the tooling be on. Also we know from recent security exploits that social engineering against the IT helpdesk is a new strategy that is paying off, so assuming your IT pros will catch the problems that normal users won&apos;t is clearly a flawed strategy. </p><p>The current design is so user-hostile and so aggressively invasive that there is just no way to think of it other than &quot;my employer thinks I&apos;m an idiot&quot;. So often in these companies you are told the strategies to work around stuff. I once worked with a team where everybody used a decommissioned desktop tucked away in a closet, connected to an Ethernet port with normal internet access, to do actual work. They were SSHing into it from their locked-down work computers because they didn&apos;t have to open a ticket every time they needed to do anything and <em>hid the desktop&apos;s existence from IT</em>. </p><p><strong>I&apos;m not blaming the people turning it on</strong></p><p>The incentives here are all wrong. There&apos;s no reward in security for not turning on the annoying or invasive feature so rank and file people are happy. On the off chance that is the vector by which you are attacked, you will be held responsible for that decision. So why not turn it all on? I totally understand it, especially when we all know every company has a VIP list of people for whom this shit isn&apos;t turned on, so the people who make the decisions about this aren&apos;t actually bearing the cost of it being on. </p><p><strong>&quot;Don&apos;t use your work laptop for personal stuff&quot;: </strong>hey, before you hit me up with this gem, save it. 
I spend too many hours of my life at work to never have the two overlap. I need to write emails, look up stuff, schedule appointments, so just take this horrible know-it-all attitude and throw it away. People use work devices for personal stuff and telling them not to is a waste of oxygen. </p><h3 id="jwts">JWTs</h3><p>You have users and you have services. The users need to access the things they are allowed to access, the services need to be able to talk to each other and share information in a way where you know the information wasn&apos;t tampered with. Enter the JWT: it&apos;s JSON, but special limited edition JSON. You have a header, which says what it is (a JWT) and the signing algorithm being used. </p><pre><code>{
  &quot;alg&quot;: &quot;HS256&quot;,
  &quot;typ&quot;: &quot;JWT&quot;
}</code></pre><p>You have a payload with claims. There are predefined (still optional) claims and then public and private claims. So here are some common ones:</p><ul><li>&quot;iss&quot; (Issuer) Claim: identifies the principal that issued the JWT</li><li>&quot;sub&quot; (Subject) Claim: identifies the principal that is the subject of the JWT</li></ul><p>You can see them all <a href="https://datatracker.ietf.org/doc/html/rfc7519#section-4.1">here.</a> The diagram below shows the design (<a href="https://pragmaticwebsecurity.com/img/articles/hard-parts-of-jwt/schematic_symmetric.png">source</a>).</p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/10/image-6.png" class="kg-image" alt loading="lazy" width="2000" height="372" srcset="https://matduggan.com/content/images/size/w600/2023/10/image-6.png 600w, https://matduggan.com/content/images/size/w1000/2023/10/image-6.png 1000w, https://matduggan.com/content/images/size/w1600/2023/10/image-6.png 1600w, https://matduggan.com/content/images/size/w2400/2023/10/image-6.png 2400w" sizes="(min-width: 720px) 720px"></figure><p><strong>Seems great. What&apos;s the problem?</strong></p><p>See that middle part where both things need access to the same secret key? That&apos;s the problem. The service that makes the JWT and the service that verifies the JWT are both reading and using the same key, so there&apos;s nothing stopping me from making my own JWT with new insane permissions on application 2 and having it get verified. That&apos;s only the beginning of the issues with JWTs. This isn&apos;t called out to people, so when you are dealing with micro-services or multiple APIs where you pass around JWTs, often there is an assumption of security where one doesn&apos;t exist. </p><p>Asymmetric JWT implementations exist and work well, but so often people do not think about it or realize such an option exists. 
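</p><p>For what it&apos;s worth, the asymmetric version is not much more code. A rough sketch with the PyJWT and cryptography libraries (assuming both are installed; the claim values here are made up):</p>

```python
import datetime

import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import rsa

# The issuing service is the only party that ever holds the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# Verifying services receive only the public half.
public_key = private_key.public_key()

token = jwt.encode(
    {
        "sub": "user-123",
        "iss": "auth-service",
        "exp": datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(minutes=15),
    },
    private_key,
    algorithm="RS256",
)

# Pin the expected algorithm list; never read it from the token header.
claims = jwt.decode(
    token, public_key, algorithms=["RS256"], issuer="auth-service"
)
print(claims["sub"])
```

<p>A compromised or malicious service holding the public key can check tokens but can never mint them, which is the whole problem with the shared-secret diagram above.</p><p>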
There is no reason to keep on-boarding people with this default dangerous design assuming they will &quot;figure out&quot; the correct way to do things later. We see this all over the place with JWTs though. </p><ul><li>Looking at the <code>alg</code> claim in the header and using it rather than hardcoding the algorithm that your application uses. Easy mistake to make, I&apos;ve seen it a lot. </li><li>Encryption vs signatures. So often with JWTs people think the payload is encrypted. Can we warn them to use JWEs? This is such a common misunderstanding among people starting with JWTs it seems insane to me to not warn people somehow. </li><li>Should I use a JWT? Or a JWE? Should I sign AND encrypt the thing where the JWS (the signed version of the JWT) is the encrypted payload of the JWE? Are normal people supposed to make this decision? </li><li>Who in the hell said <code>none</code> should be a supported algorithm? Are you drunk? Just don&apos;t let me use a bad one. (&quot;Well it is the right decision for my app because the encryption channel means the JWT doesn&apos;t matter&quot; &quot;Well then don&apos;t check the signature and move on if you don&apos;t care.&quot;)</li><li><code>several Javascript Object Signing and Encryption (JOSE) libraries fail to validate their inputs correctly when performing elliptic curve key agreement (the &quot;ECDH-ES&quot; algorithm). An attacker that is able to send JWEs of its choosing that use invalid curve points and observe the cleartext outputs resulting from decryption with the invalid curve points can use this vulnerability to recover the recipient&apos;s private key.</code> Oh sure, that&apos;s a problem I can check for. Thanks for the help. </li><li>Don&apos;t let the super important claims like expiration be optional. Come on folks, why let people pick and choose like that? It&apos;s just gonna cause problems. OpenID Connect went to great lengths to improve the security properties of a JWT. 
For example, the protocol mandates the use of the <code>exp</code>, <code>iss</code> and <code>aud</code> claims. To do it right, I need those claims, so don&apos;t make them optional. </li></ul><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/10/83ruhf.jpg" class="kg-image" alt loading="lazy" width="500" height="503"></figure><p>Quick, what&apos;s the right choice?</p><ul><li>HS256 - HMAC using SHA-256 hash algorithm</li><li>HS384 - HMAC using SHA-384 hash algorithm</li><li>HS512 - HMAC using SHA-512 hash algorithm</li><li>ES256 - ECDSA signature algorithm using SHA-256 hash algorithm</li><li>ES256K - ECDSA signature algorithm with secp256k1 curve using SHA-256 hash algorithm</li><li>ES384 - ECDSA signature algorithm using SHA-384 hash algorithm</li><li>ES512 - ECDSA signature algorithm using SHA-512 hash algorithm</li><li>RS256 - RSASSA-PKCS1-v1_5 signature algorithm using SHA-256 hash algorithm</li><li>RS384 - RSASSA-PKCS1-v1_5 signature algorithm using SHA-384 hash algorithm</li><li>RS512 - RSASSA-PKCS1-v1_5 signature algorithm using SHA-512 hash algorithm</li><li>PS256 - RSASSA-PSS signature using SHA-256 and MGF1 padding with SHA-256</li><li>PS384 - RSASSA-PSS signature using SHA-384 and MGF1 padding with SHA-384</li><li>PS512 - RSASSA-PSS signature using SHA-512 and MGF1 padding with SHA-512</li><li>EdDSA - Both Ed25519 signature using SHA-512 and Ed448 signature using SHA-3 are supported. Ed25519 and Ed448 provide 128-bit and 224-bit security respectively.</li></ul><p><strong>You are holding it wrong. </strong>Don&apos;t tell me to issue and use x509 certificates. Trying that for micro-services cut years off my life. </p><p><strong>But have you tried XML DSIG? 
</strong></p><figure class="kg-card kg-image-card"><img src="https://media.tenor.com/mlVNapbQJ_UAAAAC/jeff-goldblum-how-dare-you-speak-to-me.gif" class="kg-image" alt loading="lazy"></figure><p>I need to both give something to the user that I can verify that tells me what they&apos;re supposed to be able to do and I need some way of having services pass the auth back and forth. So many places have adopted JWTs because JSON = easy to handle. If there is a right (or wrong) algorithm, guide me there. It is <em>fine</em> to say &quot;this is now deprecated&quot;. That&apos;s a totally normal thing to tell developers and it happens all the time. But please help us all do the right thing.</p><h3 id="login">Login</h3><p>Alright, I am making a very basic application. It will provide many useful features for users around the world. I just need them to be able to log into the thing. I guess username and password, right? I want users to have a nice, understood experience. </p><p><strong>No, you stupid idiot, passwords are fundamentally broken</strong></p><p>Well you decide to try anyway. You find this helpful cheat sheet. 
</p><ul><li>Use <a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#argon2id">Argon2id</a> with a minimum configuration of 19 MiB of memory, an iteration count of 2, and 1 degree of parallelism.</li><li>If <a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#argon2id">Argon2id</a> is not available, use <a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#scrypt">scrypt</a> with a minimum CPU/memory cost parameter of (2^17), a minimum block size of 8 (1024 bytes), and a parallelization parameter of 1.</li><li>For legacy systems using <a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#bcrypt">bcrypt</a>, use a work factor of 10 or more and with a password limit of 72 bytes.</li><li>If FIPS-140 compliance is required, use <a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#pbkdf2">PBKDF2</a> with a work factor of 600,000 or more and set with an internal hash function of HMAC-SHA-256.</li><li>Consider using a <a href="https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#peppering">pepper</a> to provide additional defense in depth (though alone, it provides no additional secure characteristics).</li></ul><p>None of these mean anything to you but that&apos;s fine. It looks pretty straightforward at first. </p><pre><code>&gt;&gt;&gt; from argon2 import PasswordHasher
&gt;&gt;&gt; ph = PasswordHasher()
&gt;&gt;&gt; hash = ph.hash(&quot;correct horse battery staple&quot;)
&gt;&gt;&gt; hash  # doctest: +SKIP
&apos;$argon2id$v=19$m=65536,t=3,p=4$MIIRqgvgQbgj220jfp0MPA$YfwJSVjtjSU0zzV/P3S9nnQ/USre2wvJMjfCIjrTQbg&apos;
&gt;&gt;&gt; ph.verify(hash, &quot;correct horse battery staple&quot;)
True
&gt;&gt;&gt; ph.check_needs_rehash(hash)
False
&gt;&gt;&gt; ph.verify(hash, &quot;Tr0ub4dor&amp;3&quot;)
Traceback (most recent call last):
  ...
argon2.exceptions.VerifyMismatchError: The password does not match the supplied hash
</code></pre><p>Got it. But then you see this.</p><pre><code>Rather than a simple work factor like other algorithms, Argon2id has three different parameters that can be configured. Argon2id should use one of the following configuration settings as a base minimum which includes the minimum memory size (m), the minimum number of iterations (t) and the degree of parallelism (p).

    m=47104 (46 MiB), t=1, p=1 (Do not use with Argon2i)
    m=19456 (19 MiB), t=2, p=1 (Do not use with Argon2i)
    m=12288 (12 MiB), t=3, p=1
    m=9216 (9 MiB), t=4, p=1
    m=7168 (7 MiB), t=5, p=1
</code></pre><p>What the fuck does that mean? Do I want more memory and fewer iterations? That doesn&apos;t sound right. Then you end up here: <a href="https://www.rfc-editor.org/rfc/rfc9106.html">https://www.rfc-editor.org/rfc/rfc9106.html</a>, which says I should be using argon2.profiles.RFC_9106_HIGH_MEMORY. Ok, but it warns me that it requires 2 GiB, which seems like a lot? How does that scale with a lot of users? Does it change? Should I do low_memory? </p><p><strong>Alright I&apos;m sufficiently scared off. I&apos;ll use something else. </strong></p><p>I&apos;ve heard about passkeys and they seem easy enough. I&apos;ll do that. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/10/Screenshot-2023-10-26-at-10-06-37-Passkeys-Can-I-use...-Support-tables-for-HTML5-CSS3-etc-2.png" class="kg-image" alt loading="lazy" width="2000" height="393" srcset="https://matduggan.com/content/images/size/w600/2023/10/Screenshot-2023-10-26-at-10-06-37-Passkeys-Can-I-use...-Support-tables-for-HTML5-CSS3-etc-2.png 600w, https://matduggan.com/content/images/size/w1000/2023/10/Screenshot-2023-10-26-at-10-06-37-Passkeys-Can-I-use...-Support-tables-for-HTML5-CSS3-etc-2.png 1000w, https://matduggan.com/content/images/size/w1600/2023/10/Screenshot-2023-10-26-at-10-06-37-Passkeys-Can-I-use...-Support-tables-for-HTML5-CSS3-etc-2.png 1600w, https://matduggan.com/content/images/size/w2400/2023/10/Screenshot-2023-10-26-at-10-06-37-Passkeys-Can-I-use...-Support-tables-for-HTML5-CSS3-etc-2.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Alright, well that&apos;s ok. I got... most of the big ones. </p><pre><code>If you have Windows 10 or up, you can use passkeys. To store passkeys, you must set up Windows Hello. Windows Hello doesn&#x2019;t currently support synchronization or backup, so passkeys are only saved to your computer. 
If your computer is lost or the operating system is reinstalled, you can&#x2019;t recover your passkeys.</code></pre><p>Never mind, I can&apos;t use passkeys. Good to know. </p><p><strong>Well if you put the passkeys in 1Password then it works</strong></p><p>Great, so passkeys cost $5 a month per user and they get to pay for the privilege of using my site. Sounds totally workable. </p><h3 id="openid-connectoauth">OpenID Connect/OAuth </h3><p>Ok so first I need to figure out which kind of thing I need. I&apos;ll just read through all the initial information I need to make this decision. </p><ul><li><a href="https://datatracker.ietf.org/doc/html/rfc7636">Proof Key for Code Exchange by OAuth Public Clients</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc6819">OAuth 2.0 Threat Model and Security Considerations</a></li><li><a href="https://datatracker.ietf.org/doc/html/draft-ietf-oauth-security-topics">OAuth 2.0 Security Best Current Practice</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc9068">JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc8252">OAuth 2.0 for Native Apps</a></li><li><a href="https://datatracker.ietf.org/doc/html/draft-ietf-oauth-browser-based-apps">OAuth 2.0 for Browser-Based Apps</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc8628">OAuth 2.0 Device Authorization Grant</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc8414">OAuth 2.0 Authorization Server Metadata</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc7591">OAuth 2.0 Dynamic Client Registration Protocol</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc7592">OAuth 2.0 Dynamic Client Registration Management Protocol</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc9126">OAuth 2.0 Pushed Authorization Requests</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc8705">OAuth 2.0 Mutual-TLS Client Authentication and 
Certificate-Bound Access Tokens</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc9101">The OAuth 2.0 Authorization Framework: JWT-Secured Authorization Request (JAR)</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc7521">Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc7523">JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc7522">Security Assertion Markup Language (SAML) 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc6750">The OAuth 2.0 Authorization Framework: Bearer Token Usage</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc7009">OAuth 2.0 Token Revocation</a></li><li><a href="https://datatracker.ietf.org/doc/html/rfc7662">OAuth 2.0 Token Introspection</a></li><li><a href="https://openid.net/specs/openid-connect-session-1_0.html">OpenID Connect Session Management</a></li><li><a href="https://openid.net/specs/openid-connect-frontchannel-1_0.html">OpenID Connect Front-Channel Logout</a></li><li><a href="https://openid.net/specs/openid-connect-backchannel-1_0.html">OpenID Connect Back-Channel Logout</a></li><li><a href="https://openid.net/specs/openid-connect-federation-1_0.html">OpenID Connect Federation</a></li><li><a href="https://openid.net/wg/sse/">OpenID Connect SSE</a></li><li><a href="https://openid.net/specs/openid-caep-specification-1_0-ID1.html">OpenID Connect CAEP</a></li></ul><p>Now that I&apos;ve completed a master&apos;s degree in login, it&apos;s time for me to begin. </p><p><strong>Apple</strong></p><p>Sign in with Apple is only supported with paid developer accounts. 
I don&apos;t really wanna pay $100 a year for login. </p><p><strong>Facebook/Google/Microsoft</strong></p><p>So each one of these requires me to create an account, set up their tokens and embed the button. Not a huge deal, but I can never get rid of any of these, and if one were to get deactivated, it would be a problem. See what happened when Login with Twitter stopped being a thing people could use. Plus Google and Microsoft also offer email services, so presumably a lot of people will be using their email address, so then I&apos;ve gotta create a flow on the backend where I can associate the same user with multiple email addresses. Fine, no big deal.</p><p>I&apos;m also loading JavaScript from these companies on my page and telling them who my customers are. This is (of course) necessary, but seems overkill for the problem I&apos;m trying to solve. I need to know that the user is who they say they are, but I don&apos;t need to know what the user can do inside of their Google account. </p><p><strong>I don&apos;t really want this data</strong></p><p>Here&apos;s the default data I get with Login with Facebook after the user goes through a scary authorization page.</p><ul><li><code>id</code></li><li><code>first_name</code></li><li><code>last_name</code></li><li><code>middle_name</code></li><li><code>name</code></li><li><code>name_format</code></li><li><code>picture</code></li><li><code>short_name</code></li><li><code>email</code></li></ul><p>I don&apos;t need that. Same with Google:</p><ul><li><code>BasicProfile.getId()</code></li><li><code>BasicProfile.getName()</code></li><li><code>BasicProfile.getGivenName()</code></li><li><code>BasicProfile.getFamilyName()</code></li><li><code>BasicProfile.getImageUrl()</code></li><li><code>BasicProfile.getEmail()</code></li></ul><p>I&apos;m not trying to say this is bad. These are great tools and I think the Google one especially is well made. 
I just don&apos;t want to prompt users to give me access to data if I don&apos;t want the data, and I especially don&apos;t want the data if I have no idea whether it&apos;s the data you intended to give me. Who hasn&apos;t hit the &quot;Login with Facebook&quot; button and wondered &quot;what email is this company going to send to?&quot; My Microsoft account dates back to when I bought an original Xbox. I have no idea where it sends messages now.</p><p><strong>Fine, Magic Links</strong></p><p>I don&apos;t know how to hash stuff correctly in such a way that I am confident I won&apos;t mess it up. Passkeys don&apos;t work yet. I can use OpenID Connect, but really it is overkill for this use case since I don&apos;t want to operate as the user on the third party and I don&apos;t want access to all the user&apos;s information, since I intend to ask them how they want me to contact them. The remaining option is &quot;magic links&quot;. </p><p>How do we set up magic links securely?</p><ul><li>Short lifespan for the password. The one-time password issued will be valid for 5 minutes before it expires</li><li>The user&apos;s email is specified alongside login tokens to stop URLs being brute-forced</li><li>Each login token will be at least 20 digits</li><li>The initial request and its response must take place from the same IP address</li><li>The initial request and its response must take place in the same browser</li><li>Each one-time link can only be used once</li><li>Only the last one-time link issued will be accepted. Once the latest one is issued, any others are invalidated.</li></ul><p>The fundamental problem here is that email isn&apos;t a reliable system of delivery. It&apos;s a best-effort system. So if something goes wrong, takes a long time, etc., there isn&apos;t much I can really do to troubleshoot that. My advice to the user would be like &quot;I guess you need to try a different email address&quot;. 
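For what it&apos;s worth, the rules above sketch out to surprisingly little code. This is a hypothetical illustration (the function names, URL, and in-memory dict are invented stand-ins; a real system would persist hashed tokens in a database keyed by email):

```python
# Hypothetical magic-link sketch implementing the rules above.
import secrets
import time

LINK_TTL_SECONDS = 5 * 60  # five-minute lifespan

# email -> (token, issued_at); one entry per email means "latest link wins"
_pending: dict[str, tuple[str, float]] = {}

def issue_link(email: str) -> str:
    token = secrets.token_urlsafe(32)  # ~256 bits, well past "20 digits"
    _pending[email] = (token, time.monotonic())  # overwrites any older link
    return f"https://example.com/login?email={email}&token={token}"

def redeem(email: str, token: str) -> bool:
    entry = _pending.pop(email, None)  # pop enforces single use
    if entry is None:
        return False
    stored, issued_at = entry
    if time.monotonic() - issued_at > LINK_TTL_SECONDS:
        return False  # expired
    return secrets.compare_digest(stored, token)  # constant-time compare
```

The same-IP and same-browser rules are deliberately left out of this sketch, since those are exactly the checks the discussion below ends up turning off.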
</p><p>So in order to do this for actual normal people to use, I have to turn off a lot of those security settings. I can&apos;t guarantee people don&apos;t sign up on their phones and then go to their laptops (so no IP address or browser check). I can&apos;t guarantee when they&apos;ll get the email (so no 5 minute check). I also don&apos;t know the <em>order</em> in which they&apos;re gonna get these emails, so it will be super frustrating for people if I send them 3 emails and the second one is actually the most &quot;recent&quot;. </p><p>I also have no idea how secure this email account is. Effectively I&apos;m just punting on security because it is hard and saying &quot;well this is your problem now&quot;. </p><h3 id="i-could-go-on-and-on-and-on-and-on">I could go on and on and on and on</h3><p>I could write 20,000 words on this topic and still not be at the end. The word miserable barely does justice to how badly this stuff is designed for people to use. Complexity is an unavoidable side effect of flexibility in software. If your thing can do many things, it is harder to use. </p><p>We rely on expertise as a species to assist us with areas outside of our normal functions. I don&apos;t know anything about medicine, I go to a doctor. I have no idea how one drives a semi truck or flies a plane or digs a mine. Our ability to let people specialize is a key component to our ability to advance. So it is not reasonable to say &quot;if you do anything with security at all you must become an expert in security&quot;. </p><p>Part of that is you need to use your skill and intelligence to push me along the right path. Don&apos;t say &quot;this is the most recommended and this is less recommended and this one is third recommended&quot;. Show me what you want people to build and I bet most teams will jump at the chance to say &quot;oh thank God, I can copy and paste a good example&quot;. 
</p><p>Corrections/notes/&quot;I think you are stupid&quot;: <a href="https://c.im/@matdevdug">https://c.im/@matdevdug</a></p>]]></content:encoded></item><item><title><![CDATA[Can We Make Idiot-Proof Infrastructure pt1?]]></title><description><![CDATA[<p>One complaint I hear all the time online and in real life is how complicated infrastructure is. You either commit to a vendor platform like ECS, Lightsail, Elastic Beanstalk or Cloud Run or you go all in with something like Kubernetes. The former are easy to run but lock you</p>]]></description><link>https://matduggan.com/idiot-proof-infrastructure/</link><guid isPermaLink="false">650c310fa66cda0001544037</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 20 Oct 2023 13:03:48 GMT</pubDate><content:encoded><![CDATA[<p>One complaint I hear all the time online and in real life is how complicated infrastructure is. You either commit to a vendor platform like ECS, Lightsail, Elastic Beanstalk or Cloud Run or you go all in with something like Kubernetes. The former are easy to run but lock you in and also sometimes get abandoned by the vendor (looking at you, Beanstalk). Kubernetes runs everywhere, but it is hard and complicated and has a lot of moving parts. </p><p>The assumption seems to be that with containers there should be an easier way to do this. I thought it was an interesting thought experiment. Could I, a random idiot, design a simpler infrastructure? Something you could adapt to any cloud provider without doing a ton of work, that is relatively future-proof and that would scale to the point when something more complicated made sense? I have no idea but I thought it could be fun to try. </p><h3 id="fundamentals-of-basic-infrastructure">Fundamentals of Basic Infrastructure</h3><p>Here are the parameters we&apos;re attempting to work within:</p><ul><li>It should require minimal maintenance. 
You are a small crew trying to get a product out the door and you don&apos;t want to waste a ton of time.</li><li>You cannot assume you will detect problems. You lack the security and monitoring infrastructure to truly &quot;audit&quot; the state of the world and need to assume that you won&apos;t be able to detect a breach. Anything you put out there has to start as secure as possible and pretty much fix itself.</li><li>Controlling costs is key. You don&apos;t have the budget for surprises, and massive spikes in CPU usage are likely a problem and not organic growth (or if it is organic growth, you&apos;ll likely want to be involved in deciding what to do about it)</li><li>The infrastructure should be relatively portable. We&apos;re going to try and keep everything movable without too many expensive parts. </li><li>Perfect uptime isn&apos;t the goal. Restarting containers isn&apos;t a hitless operation, and while there are ways to queue up requests and replay them, we&apos;re gonna try to not bite off that level of complexity with the first draft. We&apos;re gonna drop some requests on the floor, but I think we can minimize that number. </li></ul><h3 id="basic-setup">Basic Setup</h3><p>You&apos;ve got your good idea, you&apos;ve written some code and you have a private repo in GitHub. Great, now you need to get the thing out onto the internet. Let&apos;s start with some good tips before we get anywhere near the internet itself. </p><ul><li>Semantic Versioning is your friend. If you get into the habit now of structuring commits and cutting releases, you are going to reap those benefits down the line. It seems silly <em>right this second</em> when the entirety of the application code fits inside of your head, but soon that won&apos;t be the case if you continue to work on it. 
I really like <a href="https://github.com/google-github-actions/release-please-action">Release-Please</a> as a tool to cut releases automatically based on commits; it lets you use the version number as a meaningful piece of data to work off of. </li><li>Containers are mandatory. Just don&apos;t overthink this and commit early. Don&apos;t focus on <em>container disk space usage</em>. Disk space is not our largest concern. We want an easy-to-work-with platform with a minimum amount of surface area for attacks. While <a href="https://github.com/GoogleContainerTools/distroless">Distroless</a> isn&apos;t actually....without a Linux distro (I&apos;m not entirely clear why that name was chosen), it is a great place to start. If you can get away with using these, this is what you want to do. <a href="https://github.com/GoogleContainerTools/distroless">Link</a></li><li>Be careful about what dependencies you rely on in the early phase. At so many jobs I&apos;ve had, there are a few unmaintained packages that are <em>mission critical</em>, impossible-to-remove, load-bearing weights around our necks. If you can do it with the standard library, great. When you find a dependency on the internet, look at what you need it to do and ask &quot;can I just copy-paste the 40 lines of code I need from this&quot; vs adding a new dependency forever. Dependency minimization isn&apos;t very cool right now, but I think especially when starting out it pays off big. </li><li>Healthcheck. You need some route on your app that you can hit which provides a good probability that the application is up and functional. /health or whatever, but this is gonna be pretty key to how the rest of this works. </li></ul><h3 id="deployment-and-orchestration">Deployment and Orchestration</h3><p>Alright so you&apos;ve made the app, you have some way of tracking major/minor etc. Everything works great on your laptop. 
How do we put it on the internet?</p><ul><li>You want a way to take a container and deploy it out to a Linux host</li><li>You don&apos;t want to patch or maintain the host</li><li>You need to know if the deployment has gone wrong</li><li>Either the deployment should roll back automatically or fail safe waiting for intervention</li><li>The whole thing needs to be as safe as possible. </li></ul><p>Is there a lightweight way to do this? Maybe!</p><h3 id="basic-design">Basic Design</h3><p>Cloudflare -&gt; Autoscaling Group -&gt; 4 instances set up with cloud-init -&gt; Docker Compose with Watchtower -&gt; DBaaS</p><p>When we deploy we&apos;ll be hitting the IP addresses of the instances on the Watchtower HTTP route with curl and telling it to connect to our private container registry and pull down new versions of our application. We shouldn&apos;t need to SSH into the boxes ever, and when a box dies or needs to be replaced, we can just delete it and run Terraform again to make a new one. SSL will be static long-lived certificates and we should be able to distribute traffic across different cloud providers however we&apos;d like. </p><p><strong>Cloudflare as the Glue</strong></p><p>I know, a lot of you are rolling your eyes. &quot;This isn&apos;t portable at all!&quot; Let me defend my work a bit. We need a WAF, we need SSL, we need DNS, we need a load balancer and we need metrics. I can do all of that with open-source projects, but it&apos;s not easy. As I was writing it out, it started to get (actually) quite difficult to do. </p><p>Cloudflare is very cheap for what they offer. We aren&apos;t using anything here that we couldn&apos;t move somewhere else if needed. It scales pretty well, up to 20 origins (which isn&apos;t amazing, but if you have hit 20 servers serving customer traffic you are ready to move up in complexity). You are free to change the backend CPU as needed (or even experiment with local machines, mix and match datacenter and cloud, etc). 
You also get a nice dashboard of what is going on without any work. It&apos;s a hard value proposition to fight against, especially when almost all of it is free. I also have no ideological dog in the fight of OSS vs SaaS.</p><p><strong>Pricing</strong></p><p>Up to 2 origin servers: $5 per month</p><p>Additional origins, up to 20: $5 per month per origin</p><p>First 500k DNS requests are free</p><p>$0.50 per every 500k DNS requests after</p><p>Compared to ALB pricing, we can see why this is more idiot proof. There we have 4 dimensions to cost: <strong>New connections (per second), Active connections (per minute), Processed bytes (GBs per hour), Rule evaluations (per second). </strong>The hourly bill is calculated by taking the maximum LCUs consumed across the four dimensions and we&apos;re charged on the highest one. Now ALBs can be much cheaper than Cloudflare, but it&apos;s harder to control the cost. If one element starts to explode in price, there isn&apos;t a lot you can do to bring it back down. </p><p>Cloudflare we&apos;re looking at $20 a month and then traffic. So if we get 60,000,000 requests a month we&apos;re paying $60 a month in DNS and $20 for the load balancer. For ALB it would largely depend on the type of traffic we&apos;re getting and how it is distributed. </p><p>BUT there are also much cheaper options. For &#x20AC;7 a month on Hetzner, you can get 25 targets and 20 TB of network traffic. &#x20AC; 1/TB for network traffic above that. So for our same cost we could handle a pretty incredible amount of traffic through Hetzner, but it commits us to them and violates the spirit of this thing. I just wanted to mention it in case someone was getting ready to &quot;actually&quot; me. </p><p>Also keep in mind we&apos;re just in the &quot;trying ideas out&quot; part of the exercise. Let&apos;s define a load balancer. </p><pre><code>provider &quot;cloudflare&quot; {
  email   = &quot;your_email@example.com&quot;
  api_key = &quot;your_api_key&quot;
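  # Hedged note: a scoped api_token is generally a safer choice here than
  # the global api_key + email pair, if your account has one available.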
}

resource &quot;cloudflare_load_balancer&quot; &quot;example_lb&quot; {
  name   = &quot;example-load-balancer.example.com&quot;
  zone_id = &quot;0da42c8d2132a9ddaf714f9e7c920711&quot;
  default_pool_ids = [cloudflare_load_balancer_pool.pool1.id, cloudflare_load_balancer_pool.pool2.id]
  fallback_pool_id = cloudflare_load_balancer_pool.pool1.id
  steering_policy = &quot;random&quot;
  session_affinity = &quot;none&quot;
  proxied = true

  # Add other load balancer settings here from https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/load_balancer
}</code></pre><p>Then we need a monitor.</p><pre><code>resource &quot;cloudflare_load_balancer_monitor&quot; &quot;example&quot; {
  account_id     = &quot;f037e56e89293a057740de681ac9abbe&quot;
  type           = &quot;http&quot;
  expected_body  = &quot;alive&quot;
  expected_codes = &quot;2xx&quot;
  method         = &quot;GET&quot;
  timeout        = 7
  path           = &quot;/health&quot;
  interval       = 60
  retries        = 2
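  # With a 60s interval and 2 retries, a dead origin can keep receiving
  # traffic for a couple of minutes before it is marked unhealthy.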
  description    = &quot;example http load balancer&quot;
  header {
    header = &quot;Host&quot;
    values = [&quot;example.com&quot;]
  }
  allow_insecure   = false
  follow_redirects = true
  probe_zone       = &quot;example.com&quot;
}</code></pre><p>Finally we need some pools</p><pre><code>resource &quot;cloudflare_load_balancer_pool&quot; &quot;pool1&quot; {
  account_id = &quot;f037e56e89293a057740de681ac9abbe&quot;
  name       = &quot;pool1&quot;
  monitor = cloudflare_load_balancer_monitor.example.id
  origins {
    name    = &quot;server01&quot;
    address = &quot;d9bb:3880:71b0:5fab:e426:8883:5a75:e82e&quot;
    enabled = false
    header {
      header = &quot;Host&quot;
      values = [&quot;server01&quot;]
    }
  }
  origins {
    name    = &quot;server02&quot;
    address = &quot;9726:61db:23a9:41d5:7eb0:649a:87b0:4291&quot;
    header {
      header = &quot;Host&quot;
      values = [&quot;server02&quot;]
    }
  }
  description        = &quot;example load balancer pool 1&quot;
  enabled            = false
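  # Placeholder: both this pool-level flag and the per-origin &quot;enabled&quot;
  # flags above must be true before the pool will actually serve traffic.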
  minimum_origins    = 1
  notification_email = &quot;someone@example.com&quot;
  load_shedding {
    default_percent = 55
    default_policy  = &quot;random&quot;
  }
  origin_steering {
    policy = &quot;random&quot;
  }
}

resource &quot;cloudflare_load_balancer_pool&quot; &quot;pool2&quot; {
  account_id = &quot;f037e56e89293a057740de681ac9abbe&quot;
  name       = &quot;pool2&quot;
  monitor = cloudflare_load_balancer_monitor.example.id
  origins {
    name    = &quot;server03&quot;
    address = &quot;3601:03b9:88b7:fa50:8163:818c:eceb:bc14&quot;
    enabled = false
    header {
      header = &quot;Host&quot;
      values = [&quot;server03&quot;]
    }
  }
  origins {
    name    = &quot;server04&quot;
    address = &quot;8118:87ef:6b50:099d:fc4a:e66d:a991:5d20&quot;
    header {
      header = &quot;Host&quot;
      values = [&quot;server04&quot;]
    }
  }
  description        = &quot;example load balancer pool 2&quot;
  enabled            = false
  minimum_origins    = 1
  notification_email = &quot;someone@example.com&quot;
  load_shedding {
    default_percent = 55
    default_policy  = &quot;random&quot;
  }
  origin_steering {
    policy = &quot;random&quot;
  }
}</code></pre><p>The addresses are just placeholders, but you&apos;ll need to swap values etc. This gives us a nice basic load balancer. Note that we don&apos;t have session affinity turned on, so we&apos;ll need to add Redis or something to help with state server-side. The IP addresses we point to will need to be reserved on the cloud provider side, but we can use IPv6, which should hopefully save us a few dollars a month there. </p><h3 id="how-much-uptime-is-enough-uptime">How much uptime is enough uptime</h3><p>So there are two paths here we have to discuss before we get much further. </p><p><strong>Path 1</strong></p><p>When we deploy to a server, we make an API call to Cloudflare to mark the origin as not enabled. Then we wait for the connections to drain, deploy the container, bring it back up, wait for it to be healthy and then we mark it enabled again. This is traditionally the way we would need to do things if we were targeting zero downtime. </p><p>Now we can do this. We have places later that we could stick such a script. But this is gonna be brittle. We&apos;d basically need to do something like the following. </p><ul><li>Run a GET against https://api.cloudflare.com/client/v4/user/load_balancers/pools</li><li>Take the result, look at the IP addresses, figure out which one is the machine in question and then mark it as not enabled IF all other origins were healthy. We wouldn&apos;t want to remove multiple machines at the same time. So we&apos;d then need to hit: https://api.cloudflare.com/client/v4/user/load_balancers/pools/{identifier}/health and confirm the health of the pools. </li><li>But &quot;health&quot; isn&apos;t an instant concept. There is a delay between when the origin becomes unhealthy and when I&apos;ll know about it, depending on how often I check and retries. So this isn&apos;t a perfect system, but it should work pretty well as long as I add a bit of jitter to it. 
</li></ul><p>I think this exceeds what I want to do for the first pass. We can do it, but it&apos;s not consistent with the uptime discussion we had before. This is brittle and is going to require a lot of babysitting to get right.</p><p><strong>Path 2</strong></p><p>We rely on the healthchecks to steer traffic and assume that our deployments are going to be pretty fast, so while we might drop some traffic on the floor, a user (with our random distribution and server-side sessions) should be able to reload the page and hopefully get past the problem. It might not scale forever but it does remove a lot of our complexity. </p><p>Let&apos;s go with Path 2 for now. </p><h3 id="server-setup-waf">Server setup + WAF</h3><p>Alright so we&apos;ve got the load balancer, it sits on the internet and takes traffic. Fabulous stuff. How do we set up a server? To do it cross-platform we have to use cloud-init. </p><p>The basics are pretty straightforward. We&apos;re gonna use the latest Debian and we&apos;re gonna update it and restart. Then we&apos;re gonna install Docker Compose and then finally stick a few files in there to run this. This is all pretty easy, but we do have a problem we need to tackle first. We need some way to do a level of secrets management so we can write out Terraform and cloud-init files, keep them in version control but also not have the secrets just kinda live there. </p><h3 id="sops">SOPS</h3><p>So typically for secret management we want to use whatever our cloud provider gives us, but since we don&apos;t have something like that, we&apos;ll need to do something more basic. </p><p>We&apos;ll use <code>age</code> for encryption, which is a great, simple encryption library. <a href="https://github.com/FiloSottile/age">You can install it here.</a> We run <code>age-keygen -o key.txt</code> which gives us our secret file. 
Then we need to set an environment variable with the path to the key like this: <code>SOPS_AGE_KEY_FILE=/Users/mathew.duggan/key.txt</code></p><p>For those unfamiliar with how SOPS (<a href="https://github.com/getsops/sops#encrypting-using-age">installed here</a>) works, basically you generate the age key as shown above and then you can encrypt files through a CLI or with Terraform locally. So:</p><pre><code>secrets.json
{
   &quot;username&quot;: &quot;admin&quot;,
   &quot;password&quot;: &quot;password&quot;
}</code></pre><p>Turns into:</p><pre><code>{
	&quot;username&quot;: &quot;ENC[AES256_GCM,data:+bGf/sI=,iv:J47szLfZ5wMWr6Ghc94VAABXs2Ec4Hi+e3ohc2HuF/Q=,tag:XIY1jOgDe9SBDMGxFhLwtw==,type:str]&quot;,
	&quot;password&quot;: &quot;ENC[AES256_GCM,data:RIHz14crqEk=,iv:H3g7/4Bd5vB/6U+Kf+rIR/xBRIGHGoZeN7U1zi5lgsM=,tag:+vD9BXb18rLhpf/sTsvYEA==,type:str]&quot;,
	&quot;sops&quot;: {
		&quot;kms&quot;: null,
		&quot;gcp_kms&quot;: null,
		&quot;azure_kv&quot;: null,
		&quot;hc_vault&quot;: null,
		&quot;age&quot;: [
			{
				&quot;recipient&quot;: &quot;age1j6dmaunhspfvh78lgnrtr6zkd7whcypcz6jdwypaydc6gaa79vtq5ryvzf&quot;,
				&quot;enc&quot;: &quot;-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA1YlcvdkpGc3pBbVFiUnhP\nYVJnalp0WlREVjlQZkFROGtvcWN2VWxsUUJnCmYvZ1ZPd3NzTjZxNHd6MEVNcmI1\nTTBZdnFaSEFSaXZRK28rc01VZGRxWHMKLS0tIGpZUjZCNDFDUnIvYXRJTDhtcGlu\nT3JJWlN1YlJYeU1ueEQ1cytDbDFXQ00K70mBEowf/AGgiFFNj3ocv0NfbI1IMJX/\nMJHMKtXPYJsoSKJla6Y+cXMXPe7LNNorSnmqvkNF7rgEMvONMNoEiA==\n-----END AGE ENCRYPTED FILE-----\n&quot;
			}
		],
		&quot;lastmodified&quot;: &quot;2023-10-19T13:06:42Z&quot;,
		&quot;mac&quot;: &quot;ENC[AES256_GCM,data:q8R8Zb+PtpBs6TBPu6VJsQXEKLwi2+WtpE3culIy1obUNdfjWaXyBtC/zbWI5eeh2Z4u//2p49G2bMv0jSzMJZnH4TLIzpHxnd6XFjzu4TqObM6FnI3ZW/SSoPwTRxgHqvooMffm3NO5pxoz3FhnJDHwYk+jTK+JoGxyZF5nBe4=,iv:Ey+so87o/kYbvOaSUXs+vyIrEQXEC39vmswdl0L3Gvw=,tag:5mWJTfBgCFjXVuoYBUiDCA==,type:str]&quot;,
		&quot;pgp&quot;: null,
		&quot;unencrypted_suffix&quot;: &quot;_unencrypted&quot;,
		&quot;version&quot;: &quot;3.8.1&quot;
	}
}</code></pre><p>That&apos;s the result of running: <code>sops --encrypt --age age1j6dmaunhspfvh78lgnrtr6zkd7whcypcz6jdwypaydc6gaa79vtq5ryvzf secrets.json &gt; secrets.enc.json</code></p><p>So we can use this with Terraform pretty easily. We run <code>export SOPS_AGE_KEY_FILE=/Users/mathew.duggan/key.txt</code> just to ensure everything is set and then the Terraform looks like the following:</p><pre><code>terraform {
  required_providers {
    sops = {
      source = &quot;carlpett/sops&quot;
      version = &quot;~&gt; 0.5&quot;
    }
  }
}

data &quot;sops_file&quot; &quot;secret&quot; {
  source_file = &quot;secrets.enc.json&quot;
}

output &quot;root-value-password&quot; {
  # Access the password variable from the map
  value = data.sops_file.secret.data[&quot;password&quot;]
  sensitive = true
}</code></pre><p>Now you can use SOPS with AWS, GCP, Azure, or use their own secrets system. I present this only as a &quot;we&apos;re small and looking for a way to easily encrypt configuration files&quot; option. </p><h3 id="cloud-init">Cloud-init </h3><p>So now we&apos;re at the last part of the server setup. We&apos;ll need to define a <code>cloud-init</code> YAML to set up the host and we&apos;ll need to define a Docker Compose file to set up the application that is going to handle all the pulling of images from here. Now thankfully we should be able to reuse this stuff for the foreseeable future. </p><pre><code>#cloud-config

package_update: true
package_upgrade: true
package_reboot_if_required: true

groups:
    - docker

users:
    - name: admin
      lock_passwd: true
      shell: /bin/bash
      ssh_authorized_keys:
      - ${init_ssh_public_key}
      groups: docker
      sudo: ALL=(ALL) NOPASSWD:ALL

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - unattended-upgrades
  - nginx
  
write_files:
  - owner: root:root
    encoding: b64
    path: /etc/ssl/cloudflare.crt
    content: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlHQ2pDQ0EvS2dBd0lCQWdJSVY1RzZsVmJDTG1Fd0RRWUpLb1pJaHZjTkFRRU5CUUF3Z1pBeEN6QUpCZ05WDQpCQVlUQWxWVE1Sa3dGd1lEVlFRS0V4QkRiRzkxWkVac1lYSmxMQ0JKYm1NdU1SUXdFZ1lEVlFRTEV3dFBjbWxuDQphVzRnVUhWc2JERVdNQlFHQTFVRUJ4TU5VMkZ1SUVaeVlXNWphWE5qYnpFVE1CRUdBMVVFQ0JNS1EyRnNhV1p2DQpjbTVwWVRFak1DRUdBMVVFQXhNYWIzSnBaMmx1TFhCMWJHd3VZMnh2ZFdSbWJHRnlaUzV1WlhRd0hoY05NVGt4DQpNREV3TVRnME5UQXdXaGNOTWpreE1UQXhNVGN3TURBd1dqQ0JrREVMTUFrR0ExVUVCaE1DVlZNeEdUQVhCZ05WDQpCQW9URUVOc2IzVmtSbXhoY21Vc0lFbHVZeTR4RkRBU0JnTlZCQXNUQzA5eWFXZHBiaUJRZFd4c01SWXdGQVlEDQpWUVFIRXcxVFlXNGdSbkpoYm1OcGMyTnZNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoTVNNd0lRWURWUVFEDQpFeHB2Y21sbmFXNHRjSFZzYkM1amJHOTFaR1pzWVhKbExtNWxkRENDQWlJd0RRWUpLb1pJaHZjTkFRRUJCUUFEDQpnZ0lQQURDQ0Fnb0NnZ0lCQU4yeTJ6b2pZZmwwYktmaHAwQUpCRmVWK2pRcWJDdzNzSG12RVB3TG1xRExxeW5JDQo0MnRaWFI1eTkxNFpCOVpyd2JML0s1TzQ2ZXhkL0x1akpuVjJiM2R6Y3g1cnRpUXpzbzB4emxqcWJuYlFUMjBlDQppaHgvV3JGNE9rWkt5ZFp6c2RhSnNXQVB1cGxESDVQN0o4MnEzcmU4OGpRZGdFNWhxanFGWjNjbENHN2x4b0J3DQpoTGFhem0zTkpKbFVmemRrOTdvdVJ2bkZHQXVYZDVjUVZ4OGpZT09lVTYwc1dxbU1lNFFIZE92cHFCOTFiSm9ZDQpRU0tWRmpVZ0hlVHBOOHROcEtKZmI5TEluM3B1bjNiQzlOS05IdFJLTU5YM0tsL3NBUHE3cS9BbG5kdkEyS3czDQpEa3VtMm1IUVVHZHpWSHFjT2dlYTlCR2pMSzJoN1N1WDkzelRXTDAydTc5OWRyNlhrcmFkL1dTaEhjaGZqalJuDQphTDM1bmlKVURyMDJZSnRQZ3hXT2JzcmZPVTYzQjhqdUxVcGhXLzRCT2pqSnlBRzVsOWoxLy9hVUdFaS9zRWU1DQpscVZ2MFA3OFFyeG94UitNTVhpSndRYWI1RkI4VEcvYWM2bVJIZ0Y5Q21rWDkwdWFSaCtPQzA3WGpUZGZTS0dSDQpQcE05aEIyWmhMb2wvbmY4cW1vTGRvRDVIdk9EWnVLdTIrbXVLZVZIWGd3Mi9BNndNN093cmlueFppeUJrNUhoDQpDdmFBREg3UFpwVTZ6L3p2NU5VNUhTdlhpS3RDekZ1RHU0L1pmaTM0UmZIWGVDVWZIQWI0S2ZOUlhKd01zeFVhDQorNFpwU0FYMkc2Um5HVTVtZXVYcFU1L1YrRFFKcC9lNjlYeXlZNlJYRG9NeXdhRUZsSWxYQnFqUlJBMnBBZ01CDQpBQUdqWmpCa01BNEdBMVVkRHdFQi93UUVBd0lCQmpBU0JnTlZIUk1CQWY4RUNEQUdBUUgvQWdFQ01CMEdBMVVkDQpEZ1FXQkJSRFdVc3JhWXVBNFJFemFsZk5Wemphbm4zRjZ6QWZCZ05WSFNNRUdEQVdnQlJEV1VzcmFZdUE0UkV6DQphbGZOVnpqYW5uM0Y2ekFOQmdrcWhraUc5dzBCQVEwRkFBT0NBZ0VBa1ErVDlucWNTbEF1Vy85MERlWW1RT1cxDQpRaHFPb3I1cHNCRUd2eGJOR1Yy
aGRMSlk4aDZRVXE0OEJDZXZjTUNoZy9MMUNrem5CTkk0MGkzLzZoZURuM0lTDQp6VkV3WEtmMzRwUEZDQUNXVk1aeGJRamtOUlRpSDhpUnVyOUVzYU5RNW9YQ1BKa2h3ZzIrSUZ5b1BBQVlVUm9YDQpWY0k5U0NEVWE0NWNsbVlISi9YWXdWMWljR1ZJOC85YjJKVXFrbG5PVGE1dHVnd0lVaTVzVGZpcE5jSlhIaGd6DQo2QktZRGwwL1VQMGxMS2JzVUVUWGVUR0RpRHB4WllJZ2JjRnJSRERrSEM2QlN2ZFdWRWlINWI5bUgyQk9ONjB6DQowTzBqOEVFS1R3aTlqbmFmVnRaUVhQL0Q4eW9Wb3dkRkRqWGNLa09QRi8xZ0loOXFyRlI2R2RvUFZnQjNTa0xjDQo1dWxCcVphQ0htNTYzanN2V2Iva1hKbmxGeFcrMWJzTzlCREQ2RHdlQmNHZE51cmdtSDYyNXdCWGtzU2REN3kvDQpmYWtrOERhZ2piaktTaFlsUEVGT0FxRWNsaXdqRjQ1ZWFiTDB0MjdNSlY2MU8vakh6SEwzZGtuWGVFNEJEYTJqDQpiQStKYnlKZVVNdFU3S01zeHZ4ODJSbWhxQkVKSkRCQ0ozc2NWcHR2aERNUnJ0cURCVzVKU2h4b0FPY3BGUUdtDQppWVdpY240Nm5QRGpnVFUwYlgxWlBwVHByeVhidmNpVkw1UmtWQnV5WDJudGNPTERQbFpXZ3haQ0JwOTZ4MDdGDQpBbk96S2daazRSelpQTkF4Q1hFUlZ4YWpuL0ZMY09oZ2xWQUtvNUgwYWMrQWl0bFEwaXA1NUQyL21mOG83MnRNDQpmVlE2VnB5akVYZGlJWFdVcS9vPQ0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
  - owner: root:root
    encoding: b64
    path: /etc/ssl/cert.pem
    content: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlFcGpDQ0E0NmdBd0lCQWdJVUgzZXMwaHVaQy8rTUNxQWRyWXEwTE05UFY4QXdEUVlKS29aSWh2Y05BUUVMDQpCUUF3Z1lzeEN6QUpCZ05WQkFZVEFsVlRNUmt3RndZRFZRUUtFeEJEYkc5MVpFWnNZWEpsTENCSmJtTXVNVFF3DQpNZ1lEVlFRTEV5dERiRzkxWkVac1lYSmxJRTl5YVdkcGJpQlRVMHdnUTJWeWRHbG1hV05oZEdVZ1FYVjBhRzl5DQphWFI1TVJZd0ZBWURWUVFIRXcxVFlXNGdSbkpoYm1OcGMyTnZNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoDQpNQjRYRFRJek1EY3pNVEUzTXprd01Gb1hEVE00TURjeU56RTNNemt3TUZvd1lqRVpNQmNHQTFVRUNoTVFRMnh2DQpkV1JHYkdGeVpTd2dTVzVqTGpFZE1Cc0dBMVVFQ3hNVVEyeHZkV1JHYkdGeVpTQlBjbWxuYVc0Z1EwRXhKakFrDQpCZ05WQkFNVEhVTnNiM1ZrUm14aGNtVWdUM0pwWjJsdUlFTmxjblJwWm1sallYUmxNSUlCSWpBTkJna3Foa2lHDQo5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdmtmbjB1eVZ3LzlSYlBDbDQ2dzhIeVZnTXZKREtVUWgvQUk0DQpIODRXRGRzM1hTRmxrbmFIK0FQdmJoM0Rsc3M5NEZnRDVGVVRMdENzQzRtSFpZVlNiRzJqeCtJbjJGcTdTSjdUDQp1QlJUbHBXWmNyVEViRjRBa00wRm53NGwwbEdQeFlZRjRaOG5uZm13YUtvNnlwb0Ftd3draXJWWXU3dWE4Mm01DQp3eWoyZHZKcWNkUExxTXdHRFVkYnlYemdwZE9IaXRBVFFoTE56VmtaOEI1L2RyODcweDR3TE8rRkVOOG92QUprDQpaNVZCRndSOEI5WEs4dUtEcmdBZkxYUVM5UVZ3WHpjcmQxQVp6S1RDVnBlMmlwemFiSGN5TUt1WDdpZjRTRGQ1DQpiZ2Ird1hycGY2dkNRWklDa3REdWJFcDdCVzlCNVhIUnlmMnJ2Yms2VEtjZ2xTbGNRUUlEQVFBQm80SUJLRENDDQpBU1F3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01DQmdnckJnRUZCUWNEDQpBVEFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU3pwcWpFOEJUK0FKYUg2c3VnRmwxajdqend4REFmDQpCZ05WSFNNRUdEQVdnQlFrNkZOWFhYdzBRSWVwNjVUYnV1RVdlUHdwcERCQUJnZ3JCZ0VGQlFjQkFRUTBNREl3DQpNQVlJS3dZQkJRVUhNQUdHSkdoMGRIQTZMeTl2WTNOd0xtTnNiM1ZrWm14aGNtVXVZMjl0TDI5eWFXZHBibDlqDQpZVEFwQmdOVkhSRUVJakFnZ2c4cUxtMWhkR1IxWjJkaGJpNWpiMjJDRFcxaGRHUjFaMmRoYmk1amIyMHdPQVlEDQpWUjBmQkRFd0x6QXRvQ3VnS1lZbmFIUjBjRG92TDJOeWJDNWpiRzkxWkdac1lYSmxMbU52YlM5dmNtbG5hVzVmDQpZMkV1WTNKc01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3VvUG9KV05VZ0xPRXVmendLRlprMHBvL2tNR29qDQoxYTdCSGEzcWtNWGUrN2J4aW1pQTBvYzcyVEhYSm8zVm82bTIwaGRpbDRiSzVPYzZoTGpiUTFOR2ZXNm84MXk2DQpyUXZEaXBXN3JuL3R3V3hPTkpHTFNDZDZFalpqWXpUUW5EdFBSQWQrVnBwV1BuNUtLZHRSNkM2ZjhaMFlqeldjDQp3b3JLdkRuV2E5b0gycEUzZUNS
RUZsc1lRUUtVNWxOYUpibm9nRXNaY2ZDa0MvU0JCaTRaN0lIRnJzWnd1YTU5DQorVDIxUWNOd3BKbExLZ2VRZlpLazMzTFc5MFlyYjRhNStMaTljQzZsVC9MRHdTc20ySkVVVm1nbDJOaC8wV2dpDQpBcHFxUjV5dmUwdUI2M0tTdW90Z2hyWlp0cnNhVW1OYytjRjhneHU4Si8rdXFhaWZQWk83NVZtVw0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
  - owner: root:root
    encoding: b64
    path: /etc/ssl/key.pem
    permissions: &apos;0600&apos;
    content: ${private_ssl_key}
  - owner: admin:docker
    path: /home/admin/docker-compose.yaml
    content: |
      version: &quot;3&quot;
      services:
        app:
          image: ghcr.io/&lt;org&gt;/&lt;image&gt;:&lt;tag&gt;
          restart: unless-stopped
          ports:
            - &quot;8000:2368&quot;
          labels:
            - &quot;com.centurylinklabs.watchtower.enable=true&quot;
        watchtower:
          image: containrrr/watchtower
          command: --debug --http-api-update
          restart: unless-stopped
          environment:
            - WATCHTOWER_HTTP_API_TOKEN=${watchtower_token}
          labels:
            - &quot;com.centurylinklabs.watchtower.enable=false&quot;
          ports:
            - &quot;8080:8080&quot;
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
            - /home/admin/.docker/config.json:/config.json
  - owner: www-data:www-data
    path: /etc/nginx/sites-available/default
    content: |
        server {
          listen 443 ssl http2;
          listen [::]:443 ssl http2;
          charset UTF-8;
          ssl_session_timeout 5m;
          ssl_prefer_server_ciphers on;
          ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
          ssl_protocols TLSv1.2;
          ssl_buffer_size 4k;
          ssl_certificate         /etc/ssl/cert.pem;
          ssl_certificate_key     /etc/ssl/key.pem;
          ssl_client_certificate /etc/ssl/cloudflare.crt;
          ssl_verify_client on;
          
          server_name hostname.com www.hostname.com;
          
          location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            proxy_buffering        on;
            proxy_pass http://127.0.0.1:8000;
            proxy_redirect off;
            }
            
          location /v1/update {
            proxy_http_version 1.1;
            proxy_buffering on;
            proxy_pass http://127.0.0.1:8080;
            proxy_redirect off;
            }
          }
  
runcmd:
  - install -m 0755 -d /etc/apt/keyrings
  - curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
  - echo &quot;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release &amp;&amp; echo $VERSION_CODENAME) stable&quot; | tee /etc/apt/sources.list.d/docker.list
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
  - systemctl start docker
  - systemctl enable docker
  - curl -L &quot;https://github.com/docker/compose/releases/download/v2.23.0/docker-compose-$(uname -s)-$(uname -m)&quot; -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
  - su admin -c &apos;docker login -u ${docker_username} -p ${docker_password} ${docker_repository}&apos;
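  # (Suggested addition, not in the original) validate the compose file before
  # starting the stack so indentation or typo errors in the write_files
  # section above fail loudly here rather than at container start
  - su admin -c &apos;docker compose -f /home/admin/docker-compose.yaml config --quiet&apos;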
  - su admin -c &apos;docker compose -f /home/admin/docker-compose.yaml up -d&apos;
</code></pre><p>Obviously you&apos;ll need to modify and test this; it took some tweaks to get working on my setup, and I&apos;m confident there are improvements we could make. However I think we can use it as a sample reference doc, with the understanding that it is NOT ready to copy and paste. </p><p>So here&apos;s the basic flow. We&apos;re going to use the SSL certificates Cloudflare gives us, as well as inserting their certificate for Authenticated Origin Pulls. This ensures all the traffic coming to our server is from Cloudflare. We could still receive traffic from another Cloudflare customer, a malicious one, but at least this gives us a good starting point for limiting traffic. Plus presumably if there is a malicious customer hitting you, at least you can reach out to Cloudflare and they&apos;ll do....something. </p><p>Now we put it together with Terraform and we have something we can deploy. We&apos;ll use DigitalOcean as our example, but the cloud provider part doesn&apos;t really matter. </p><p><strong>secrets.json</strong></p><figure class="kg-card kg-code-card"><pre><code>{
   &quot;private_ssl_key&quot;: &quot;LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlFcGpDQ0E0NmdBd0lCQWdJVUgzZXMwaHVaQy8rTUNxQWRyWXEwTE05UFY4QXdEUVlKS29aSWh2Y05BUUVMDQpCUUF3Z1lzeEN6QUpCZ05WQkFZVEFsVlRNUmt3RndZRFZRUUtFeEJEYkc5MVpFWnNZWEpsTENCSmJtTXVNVFF3DQpNZ1lEVlFRTEV5dERiRzkxWkVac1lYSmxJRTl5YVdkcGJpQlRVMHdnUTJWeWRHbG1hV05oZEdVZ1FYVjBhRzl5DQphWFI1TVJZd0ZBWURWUVFIRXcxVFlXNGdSbkpoYm1OcGMyTnZNUk13RVFZRFZRUUlFd3BEWVd4cFptOXlibWxoDQpNQjRYRFRJek1EY3pNVEUzTXprd01Gb1hEVE00TURjeU56RTNNemt3TUZvd1lqRVpNQmNHQTFVRUNoTVFRMnh2DQpkV1JHYkdGeVpTd2dTVzVqTGpFZE1Cc0dBMVVFQ3hNVVEyeHZkV1JHYkdGeVpTQlBjbWxuYVc0Z1EwRXhKakFrDQpCZ05WQkFNVEhVTnNiM1ZrUm14aGNtVWdUM0pwWjJsdUlFTmxjblJwWm1sallYUmxNSUlCSWpBTkJna3Foa2lHDQo5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdmtmbjB1eVZ3LzlSYlBDbDQ2dzhIeVZnTXZKREtVUWgvQUk0DQpIODRXRGRzM1hTRmxrbmFIK0FQdmJoM0Rsc3M5NEZnRDVGVVRMdENzQzRtSFpZVlNiRzJqeCtJbjJGcTdTSjdUDQp1QlJUbHBXWmNyVEViRjRBa00wRm53NGwwbEdQeFlZRjRaOG5uZm13YUtvNnlwb0Ftd3draXJWWXU3dWE4Mm01DQp3eWoyZHZKcWNkUExxTXdHRFVkYnlYemdwZE9IaXRBVFFoTE56VmtaOEI1L2RyODcweDR3TE8rRkVOOG92QUprDQpaNVZCRndSOEI5WEs4dUtEcmdBZkxYUVM5UVZ3WHpjcmQxQVp6S1RDVnBlMmlwemFiSGN5TUt1WDdpZjRTRGQ1DQpiZ2Ird1hycGY2dkNRWklDa3REdWJFcDdCVzlCNVhIUnlmMnJ2Yms2VEtjZ2xTbGNRUUlEQVFBQm80SUJLRENDDQpBU1F3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01DQmdnckJnRUZCUWNEDQpBVEFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU3pwcWpFOEJUK0FKYUg2c3VnRmwxajdqend4REFmDQpCZ05WSFNNRUdEQVdnQlFrNkZOWFhYdzBRSWVwNjVUYnV1RVdlUHdwcERCQUJnZ3JCZ0VGQlFjQkFRUTBNREl3DQpNQVlJS3dZQkJRVUhNQUdHSkdoMGRIQTZMeTl2WTNOd0xtTnNiM1ZrWm14aGNtVXVZMjl0TDI5eWFXZHBibDlqDQpZVEFwQmdOVkhSRUVJakFnZ2c4cUxtMWhkR1IxWjJkaGJpNWpiMjJDRFcxaGRHUjFaMmRoYmk1amIyMHdPQVlEDQpWUjBmQkRFd0x6QXRvQ3VnS1lZbmFIUjBjRG92TDJOeWJDNWpiRzkxWkdac1lYSmxMbU52YlM5dmNtbG5hVzVmDQpZMkV1WTNKc01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3VvUG9KV05VZ0xPRXVmendLRlprMHBvL2tNR29qDQoxYTdCSGEzcWtNWGUrN2J4aW1pQTBvYzcyVEhYSm8zVm82bTIwaGRpbDRiSzVPYzZoTGpiUTFOR2ZXNm84MXk2DQpyUXZEaXBXN3JuL3R3V3hPTkpHTFNDZDZFalpqWXpUUW5EdFBSQWQrVnBwV1BuNUtLZHRSNkM2Zj
haMFlqeldjDQp3b3JLdkRuV2E5b0gycEUzZUNSRUZsc1lRUUtVNWxOYUpibm9nRXNaY2ZDa0MvU0JCaTRaN0lIRnJzWnd1YTU5DQorVDIxUWNOd3BKbExLZ2VRZlpLazMzTFc5MFlyYjRhNStMaTljQzZsVC9MRHdTc20ySkVVVm1nbDJOaC8wV2dpDQpBcHFxUjV5dmUwdUI2M0tTdW90Z2hyWlp0cnNhVW1OYytjRjhneHU4Si8rdXFhaWZQWk83NVZtVw0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ&quot;,
   &quot;watchtower_token&quot;: &quot;tx#okr#n+8_wpf%#n9cxr@30vi7wy_@*@69bw+smfic&amp;k^zb8h&quot;,
   &quot;docker_username&quot;: &quot;username&quot;,
   &quot;docker_password&quot;: &quot;password&quot;,
   &quot;docker_repository&quot;: &quot;repository&quot;
}</code></pre><figcaption>Base64 encoded private key for SSL along with the watchtower token to access the API and everything else</figcaption></figure><p><strong>Terraform file</strong></p><pre><code>terraform {
  required_providers {
    digitalocean = {
      source  = &quot;digitalocean/digitalocean&quot;
      version = &quot;2.30.0&quot;
    }
    sops = {
      source  = &quot;carlpett/sops&quot;
      version = &quot;~&gt; 0.5&quot;
    }
    cloudflare = {
      source  = &quot;cloudflare/cloudflare&quot;
      version = &quot;4.17.0&quot;
    }
  }
}

variable &quot;ssh_public_key&quot; {
  type        = string
  description = &quot;SSH public key&quot;
  default     = &quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeogciUcb1roDZWVXaTFrMSqU66qlb4YT2GhDMZQm+cM6kxAgl5GY72Yiuir/Sml8pHMvTRPV5ezg+17gSntnBtIbf3wNwuB0F/21l7vGS2XteY6p557cRHZjSFuc2uPiysnI21FfZCrsEJ7uM3Ebyd/zJ394URcWQm54NtVh/QxuHzfuK9QCbxhlsXXFAfTnrWvLVGQkq/R+fjtKy12o42Y59JIsZT4aORSGujDiagBysGOCXonYqRhs9gmdZPkcKUe3r8j6fZRY2l8/QX3D6zhDZ8x74Gi70ojuvR8oCsWs9tB2sF/XQi806G/s/mbhh6hcj7ALyo5Th+jw7I8rj matdevdug@matdevdug-ThinkPad-X1-Carbon-5th&quot;
}

provider &quot;digitalocean&quot; {
  token = &quot;secret_api_key&quot;
}

data &quot;sops_file&quot; &quot;secret&quot; {
  source_file = &quot;secrets.enc.json&quot;
}

locals {
  virtual_machines = {
    &quot;server01&quot; = { vm_size = &quot;s-4vcpu-8gb&quot;, zone = &quot;nyc1&quot; },
    &quot;server02&quot; = { vm_size = &quot;s-4vcpu-8gb&quot;, zone = &quot;nyc1&quot; },
    &quot;server03&quot; = { vm_size = &quot;s-4vcpu-8gb&quot;, zone = &quot;nyc1&quot; },
    &quot;server04&quot; = { vm_size = &quot;s-4vcpu-8gb&quot;, zone = &quot;nyc1&quot; }
  }
}

resource &quot;digitalocean_droplet&quot; &quot;web&quot; {
  for_each = local.virtual_machines
  name     = each.key
  image    = &quot;debian-12-x64&quot;
  size     = each.value.vm_size
  region   = each.value.zone
  user_data = templatefile(&quot;${path.module}/cloud-init.yaml&quot;, {
    init_ssh_public_key = var.ssh_public_key
    private_ssl_key     = data.sops_file.secret.data[&quot;private_ssl_key&quot;]
    watchtower_token    = data.sops_file.secret.data[&quot;watchtower_token&quot;]
    docker_username     = data.sops_file.secret.data[&quot;docker_username&quot;]
    docker_password     = data.sops_file.secret.data[&quot;docker_password&quot;]
    docker_repository   = data.sops_file.secret.data[&quot;docker_repository&quot;]
  })
}

resource &quot;digitalocean_reserved_ip&quot; &quot;example&quot; {
  for_each   = digitalocean_droplet.web
  droplet_id = each.value.id
  region     = each.value.region
}</code></pre><h3 id="hooking-it-all-together">Hooking it all together</h3><p>So we&apos;ll need to go back to the Cloudflare terraform and set the reserved_ips we get from the cloud provider as the IPs for the origins. Then we should be able to go through and set up <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/">Authenticated Origin Pulls</a>, as well as setting SSL to &quot;Strict&quot;, in the Cloudflare control panel. Finally, since we have Watchtower set up, all we need to deploy a new version of the application is a simple deploy script that curls each of our servers&apos; IP addresses with the Watchtower HTTP API token set, telling it to pull a new version of our container from our registry and deploy it. <a href="https://containrrr.dev/watchtower/http-api-mode/">Read more about that here.</a></p><p>In my testing (which was somewhat limited), even though the scripts needed tweaks and modifications, the underlying concept actually worked pretty well. I was able to see all my traffic coming through Cloudflare easily, the SSL components all worked, and whenever I wanted to upgrade a host it was pretty simple to stop traffic to it in the web UI, reboot or destroy it, run Terraform again and then send traffic to it again. </p><p>In terms of encryption, while my <code>age</code> solution wasn&apos;t perfect, I think it&apos;ll hold together reasonably well. The encrypted value can be safely committed to source control and the secret rotated pretty easily whenever you want. 
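Returning to the deploy step for a moment, here is a sketch of what that Watchtower deploy script could look like. This is hypothetical: the IPs and token are placeholders, and it assumes Watchtower's API is reachable directly on the published port 8080 rather than through the nginx proxy.

```python
#!/usr/bin/env python3
"""Hypothetical deploy helper: ask Watchtower on every origin to pull the
latest image and restart the app. IPs and token are placeholders."""
import sys
import urllib.request

SERVERS = ["203.0.113.10", "203.0.113.11"]  # your reserved IPs from Terraform
TOKEN = "changeme"  # the watchtower_token value from secrets.enc.json


def update_request(host: str, token: str) -> urllib.request.Request:
    # Watchtower's HTTP API mode exposes a single update endpoint that
    # expects the token as a bearer credential.
    return urllib.request.Request(
        f"http://{host}:8080/v1/update",
        headers={"Authorization": f"Bearer {token}"},
    )


def deploy(servers, token):
    # Hit each origin in turn; Watchtower pulls the new image and restarts
    # the labeled containers.
    for host in servers:
        with urllib.request.urlopen(update_request(host, token)) as resp:
            print(host, resp.status)


if __name__ == "__main__" and "--run" in sys.argv:
    deploy(SERVERS, TOKEN)
```

You could do the same thing with curl in a shell loop; the point is that a deploy becomes one authenticated HTTP call per origin.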
</p><h3 id="next-steps">Next Steps</h3><ul><li>Put the whole thing together in a structured Terraform module so it&apos;s more reliable and less prone to random breakage</li><li>Write out a bunch of different cloud provider options to make it easier to switch between them</li><li>Write a simple CLI to remove an origin from the load balancer before running the deploy and then confirm the origin is healthy before sticking it back in (for the requirement of zero-downtime deployments)</li><li>Take a second pass at the encryption story</li></ul><p>Going through this is a useful exercise in explaining why these infrastructure products are so complicated. They&apos;re complicated because this is hard to do and there are a lot of moving parts. Even with the heavy use of existing tooling, this thing turned out to be more complicated than I expected. </p><p>Hopefully this has been an interesting thought experiment. I&apos;m excited to take another pass at this idea and potentially turn it into a more usable product. If this was helpful (or if I missed something obvious), I&apos;m always open to feedback. Especially if you thought of an optimization! 
&#xA0;<a href="https://c.im/@matdevdug">https://c.im/@matdevdug</a></p>]]></content:encoded></item><item><title><![CDATA[Terraform Cloud Review]]></title><description><![CDATA[Review of Terraform Cloud]]></description><link>https://matduggan.com/terraform-cloud-review/</link><guid isPermaLink="false">6501742fa66cda0001543ee9</guid><category><![CDATA[DevOps]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Wed, 13 Sep 2023 11:17:53 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matduggan.com/content/images/2023/09/SCR-20230913-hud-1.png" class="kg-image" alt loading="lazy" width="557" height="557"><figcaption><a href="https://www.teepublic.com/poster-and-art/48898845-dtf-disappointing-the-whole-family">Source</a></figcaption></figure><p>If I were told to go off and make a hosted Terraform product, I would probably end up with a list of features that looked something like the following:</p><ul><li>Extremely reliable state tracking</li><li>Assistance with upgrading between versions of Terraform and providers and letting users know when it looked safe to upgrade and when there might be problems between versions</li><li>Consistent running of Terraform with a fresh container image each time, providers and versions cached on the host VM so the experience is as fast as possible</li><li>As many linting, formatting and HCL optimizations I can offer, configurable on and off</li><li>Investing as much engineering work as I can afford in providing users an experience where, unlike with the free Terraform, if a plan succeeds on Terraform Cloud, the Apply will succeed</li><li>Assisting with Workspace creation. 
Since we want to keep the number of resources low, seeing if we can leverage machine learning to say &quot;we think you should group these resources together as their own workspace&quot; and showing you how to do that</li><li>Figure out some way for organizations to interact with the Terraform resources other than just running the Terraform CLI, so users can create richer experiences for their teams through easy automation that feeds back into the global source of truth that is my incredibly reliable state tracking</li><li>Try to do whatever I can to encourage more resources in my cloud. Unlimited storage, lots of workspaces, helping people set up workspaces. The more stuff in there the more valuable it is for the org to use (and also more logistically challenging for them to cancel)</li></ul><p>This to me would be a product I would feel confident charging a lot of money for. Terraform Cloud is not that product. It has some of these features locked behind the most expensive tiers, but not enough of them to justify the price. </p><p><a href="https://matduggan.com/terraform-is-dead-long-live-pulumi/">I&apos;ve written about my feelings around the Terraform license change before.</a> I won&apos;t bore you with that again. However since now the safest way to use Terraform is to pay Hashicorp, what does that look like? As someone who has used Terraform for years and Terraform Cloud almost daily for a year, it&apos;s a profoundly underwhelming experience.</p><p>Currently it is a little-loved product with lots of errors and sharp edges. This is as close to a v0.1 of a product as I could imagine, except the pace of development has been glacial. Terraform Cloud is a &quot;good enough&quot; platform that seems to understand that if you could do better, you would. Like a diner at 2 AM on the side of the highway, its primary selling point is the fact that it is there. That and the license terms you will need to accept soon. 
</p><h3 id="terraform-cloudbasic-walkthrough">Terraform Cloud - Basic Walkthrough</h3><p>At a high level Terraform Cloud allows organizations to centralize their Projects and Workspaces and store that state with Hashicorp. It also gives you access to a Registry for you to host your own private Terraform modules and use them in your workspaces. The top-level options look as follows:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/503c3ae3-b645-4830-acc1-3e09f6ccdd00/public" class="kg-image" alt loading="lazy"><figcaption>That&apos;s it!</figcaption></figure><p> </p><p>You may be wondering &quot;What does Usage do?&quot; I have no idea, as the web UI has never worked for me even though I appear to have all the permissions one could have. I have seen the following since getting my account:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matduggan.com/content/images/2023/09/SCR-20230913-erw.png" class="kg-image" alt loading="lazy" width="1763" height="512" srcset="https://matduggan.com/content/images/size/w600/2023/09/SCR-20230913-erw.png 600w, https://matduggan.com/content/images/size/w1000/2023/09/SCR-20230913-erw.png 1000w, https://matduggan.com/content/images/size/w1600/2023/09/SCR-20230913-erw.png 1600w, https://matduggan.com/content/images/2023/09/SCR-20230913-erw.png 1763w" sizes="(min-width: 720px) 720px"><figcaption>I&apos;m not sure what wasn&apos;t found.</figcaption></figure><p>I&apos;m not sure what access I lack or if the page was intended to work. It&apos;s very mysterious in that way.</p><p>There is Explorer, which lets you basically see &quot;what versions of things do I use across the different repos&quot;. You can&apos;t do anything with that information, like I can&apos;t say &quot;alright well upgrade these two to the version that everyone else uses&quot;. 
It&apos;s also a beta feature and not one that existed when I first started using the platform.</p><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/6f2db180-7b8d-41e0-2380-e3143020dd00/public" class="kg-image" alt loading="lazy"></figure><p>Finally there are the Workspaces, where you spend 99% of your time.</p><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/8cba47fb-1398-47cb-dec8-f887cf383700/public" class="kg-image" alt loading="lazy"></figure><p>You get some ok stats here. Up in the top left you see &quot;Needs Attention&quot;, &quot;Errors&quot;, &quot;Running&quot;, &quot;Hold&quot; and then &quot;Applied.&quot; Even though you may have many Workspaces, you cannot change how many you see here. 20 is the correct number I guess.</p><h3 id="creating-a-workspace">Creating a Workspace</h3><figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/7c963ad6-76b4-4dec-8d13-da4eb3a46900/public" class="kg-image" alt loading="lazy"></figure><p>Workspaces are either based on a repo, CLI driven or you call the API. You tell it what VCS, what repo, if you want to use the root of the repo or a sub-directory (which is good because soon you&apos;ll have too many resources for one workspace for everything). You tell it Auto Apply (which is checked by default) or Manual and when to trigger a run (whenever there&apos;s a change, whenever specific files in a path change or whenever you push a tag). That&apos;s it.</p><p>You can see all the runs, what their status is and basically what resources have changed or will change. Any plan that you run from your laptop also shows up here. Now you don&apos;t need to manage your runs here, you can still run things locally, but then there is absolutely no reason to use this product. 
Almost all of the features rely on your runs being handled by Hashicorp here inside of a Workspace.</p><h3 id="workspace-flow">Workspace flow</h3><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/09/SCR-20230913-i11-1.png" class="kg-image" alt loading="lazy" width="1277" height="1034" srcset="https://matduggan.com/content/images/size/w600/2023/09/SCR-20230913-i11-1.png 600w, https://matduggan.com/content/images/size/w1000/2023/09/SCR-20230913-i11-1.png 1000w, https://matduggan.com/content/images/2023/09/SCR-20230913-i11-1.png 1277w" sizes="(min-width: 720px) 720px"></figure><p>Workspaces show you when the run was, how long the plan took, and what resources are associated with it (10 resources at a time, even though you might have thousands). Details links you to the last run, and there are tags and run triggers. Run triggers allow you to link workspaces together, so this workspace would be dependent on the output of another workspace. </p><p>The settings are as follows:</p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/09/image.png" class="kg-image" alt loading="lazy" width="214" height="177"></figure><p>Runs is pretty straightforward. States allow you to inspect the state changes directly. So you can see the full JSON of a resource and roll back to this specific state version. This can be nice for reviewing what specifically changed on each resource, but in my experience you don&apos;t get much over looking at the actual code. But if you are in a situation where something has suddenly broken and you need a fast way of saying &quot;what was added and what was removed&quot;, this is where you would go. </p><p><strong>NOTE: BE SUPER CAREFUL WITH THIS</strong></p><p>The state inspector has the potential to show TONS of sensitive data. It&apos;s all the data in Terraform in the raw form. 
Just be aware it exists when you start using the service and take a look to ensure there isn&apos;t anything you didn&apos;t want there. </p><p>Variables are variables and the settings allow you to lock the workspace, apply Sentinel settings, set an SSH key for downloading private modules and finally if you want changes to the VCS to trigger an action here. So for instance, when you merge in a PR you can trigger Terraform Cloud to automatically apply this workspace. Nothing super new here compared to any CI/CD system, but still it is all baked in. </p><p>That&apos;s it!</p><h3 id="no-code-modules">No-Code Modules</h3><p>One selling point I heard a lot about, but haven&apos;t actually seen anyone use. The idea is good though, where you write premade modules and push them to your private registry. Then members of your organization can just run them to do things like &quot;stand up a template web application stack&quot;. Hashicorp has a <a href="https://developer.hashicorp.com/terraform/tutorials/cloud/no-code-provisioning">tutorial here</a> that I ran through and found it to work pretty much as expected. It isn&apos;t anywhere near the level of power that I would want, compared to something like Pulumi, but it is a nice step forward for automating truly constant tasks (like adding domain names to an internal domain or provisioning some SSL certificate for testing).</p><h3 id="dynamic-credentials">Dynamic Credentials</h3><p>You can link Terraform Cloud and Vault, if you use it, so you no longer need to stick long-lived credentials inside of the Workspace to access cloud providers. Instead you can leverage Vault to get short-lived credentials that improve the security of the Workspaces. I ran through this and did have problems getting it working for GCP, but AWS seemed to work well. 
It requires some setup inside of the actual repository, but it&apos;s a nice security improvement vs leaving production credentials in this random web application and hoping you don&apos;t mess up the user scoping.</p><p>User scoping is controlled primarily through &quot;projects&quot;, which basically trickle down to the user level. You make a project, which has workspaces, that have their own variables and then assign that to a team or business unit. That same logic is reflected inside of credentials. </p><h3 id="private-registry">Private Registry</h3><p>This is one thing Hashicorp nailed. It&apos;s very easy to hook up Terraform Cloud to allow your workspaces to access internal modules backed by your private repositories. It supports the same documentation options as public modules, tracks downloads and allows for easy version control through git tags. I have nothing but good things to say about this entire thing.</p><p>Sharing between organizations is something they lock at the top tier, but this seems like a very niche use case so I don&apos;t consider it to be too big of a problem. However if you are someone looking to produce a private provider or module for your customers to use, I would reach out to Hashicorp and see how they want you to do that. </p><p>The primary value for this is just to easily store all of your IaC logic in modules and then rely on the versioning inside of different environments to roll out changes. For instance, we do this for things like upgrading a system. Make the change, publish the new version to the private registry and then slowly roll it out. Then you can monitor the rollout through <code>git grep</code> pretty easily. </p><h3 id="pricing">Pricing</h3><p>$0.00014 per hour per resource. So a lot of money when you think &quot;every IAM custom role, every DNS record, every SSL certificate, every single thing in your entire organization&quot;. 
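To put that per-resource rate in perspective, a quick back-of-the-envelope sketch (the resource counts are invented examples, not figures from any real bill):

```python
# Terraform Cloud's standard tier bills $0.00014 per resource per hour.
RATE = 0.00014          # dollars per resource-hour
HOURS_PER_MONTH = 730   # average hours in a month


def monthly_cost(resources: int) -> float:
    """Approximate monthly bill for a given managed-resource count."""
    return resources * RATE * HOURS_PER_MONTH


# Hypothetical org sizes: every DNS record and IAM role adds up fast.
for count in (500, 5_000, 50_000):
    print(f"{count:>6} resources -> ${monthly_cost(count):,.2f}/month")
```

Even a modest 5,000 resources comes out to roughly $511 a month, and large orgs easily carry ten times that.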
You do get a lot of the nice features at this &quot;standard&quot; tier, but I&apos;m kinda shocked they don&apos;t unlock all the enterprise features at this price point. No-code provisioning is only available at the higher levels, as are Drift detection, Continuous validation (checks between runs to see if anything has changed) and Ephemeral workspaces. The last one is a shame, because it looks like a great feature. Set up your workspace to self-destruct at regular intervals so you can nuke development environments. I&apos;d love to use that but alas. </p><h3 id="problems">Problems</h3><p>Oh the problems. So the runners sometimes get &quot;stuck&quot;, which seems to usually happen after someone cancels a job in the web UI. You&apos;ll run into an issue, try to cancel a job, fix the problem and rerun the runner, only to have it get stuck forever. I&apos;ve sat there and watched it try to load the modules for 45 minutes. There isn&apos;t any way I have seen to tell Terraform Cloud &quot;this runner is broken, go get me another one&quot;. Sometimes they get stuck for an unknown reason.</p><p>Since you need to run all the plans and applies remotely to get any value out of the service, it can also sometimes cause traffic jams in your org. If you work with Terraform a lot, you know you need to run plans pretty regularly. Since you need to wait for a runner every single time, you can end up wasting a lot of time sitting there waiting for another job to finish. Again, I&apos;m not sure what triggers you getting another runner. You can self-host, but then I&apos;m truly baffled at what value this tool brings.</p><p>Even if that was an option for you and you wanted to do it, it&apos;s locked behind the highest subscription tier. So I can&apos;t even say &quot;add a self-hosted runner just for plans&quot; so I could unstick my team. 
This seems like an obvious add, along with a lot more runner controls so I could see what was happening and how to avoid getting it jammed up. </p><h3 id="conclusion">Conclusion</h3><p>I feel bad this is so short, but there just isn&apos;t anything else to write. This is a super bare-bones tool that does what it says on the box for a lot of money. It doesn&apos;t give you a ton of value over Spacelift or any of the others. I can&apos;t recommend it, it doesn&apos;t work particularly well and I haven&apos;t enjoyed my time with it. Managing it vs using an S3 bucket is an experience I would describe as &quot;marginally better&quot;. It&apos;s nice that it handles contention across teammates for me, but so do all the others at a lower price.</p><p>I cannot think of a single reason to recommend this over Spacelift, which has better pricing, better tooling and seems to have a better runner system, except for the license change. Which was clearly the point of the license change. However, for those evaluating options, head elsewhere. This thing isn&apos;t worth the money.</p>]]></content:encoded></item><item><title><![CDATA[We need a different name for non-technical tech conferences]]></title><description><![CDATA[<p>I recently returned from Google Cloud Next. Typically I wouldn&apos;t go to a vendor conference like this, since they&apos;re usually thinly veiled sales meetings wearing the trench-coat of a conference. 
However I&apos;ve been to a few GCP events and found them to be technical</p>]]></description><link>https://matduggan.com/we-need-a-difference-name-for-non-technical-tech-conferences/</link><guid isPermaLink="false">64f835ada404cb00014f5dc1</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Wed, 06 Sep 2023 11:37:42 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1571645163064-77faa9676a46?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDM3fHxjb25mZXJlbmNlfGVufDB8fHx8MTY5Mzk4ODI1NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1571645163064-77faa9676a46?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDM3fHxjb25mZXJlbmNlfGVufDB8fHx8MTY5Mzk4ODI1NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="We need a different name for non-technical tech conferences"><p>I recently returned from Google Cloud Next. Typically I wouldn&apos;t go to a vendor conference like this, since they&apos;re usually thinly veiled sales meetings wearing the trench-coat of a conference. However I&apos;ve been to a few GCP events and found them to be technical and well-run, so I rolled the dice and hopped on the 11 hour flight from London to San Francisco. &#xA0;</p><p>We all piled into Moscone Center and I was pretty hopeful. There were a lot of engineers from Google and other reputable orgs, the list of talks we had signed up for before showing up sounded good, or at least useful. I figured this could be a good opportunity to get some idea of where GCP was going and perhaps hear about some large customers technical workarounds to known limitations and issues with the platform. Then we got to the keynote. </p><p>AI. The only topic discussed and the only thing anybody at the executive level cared about was AI. This would become a theme, a constant refrain among every executive-type I spoke to. 
AI was going to replace customer service, programmers, marketing, copywriters, seemingly every single person in the company except for the executives. It seemed only the VPs and the janitors were safe. None of the leaders I spoke to afterwards seemed to appreciate my observation that if they spent most of their day in meetings being shown slide decks, wouldn&apos;t they be <em>the easiest to replace with a robot?</em> Or maybe their replacement could be a mop with sunglasses leaned against an office chair if no robot was available.</p><p>I understand keynotes aren&apos;t for engineers, but the sense I got from this was &quot;nothing has happened in GCP anywhere else except for AI&quot;. This isn&apos;t true, like objectively I know new things have been launched, but it sends a pretty clear message that they&apos;re not a priority if nobody at the executive level seems to care about them. This is also a concern because Google famously has institutional ADHD, with an inability to maintain long-term focus on slowly incrementing and improving a product. Instead it launches amazing products years ahead of the competition, then, like a child bored with a toy, drops them into the backyard and wanders away. But whatever, let&apos;s move on from the keynote. </p><p>Over the next few days, what I experienced was an event with some fun moments, mostly devoid of any technical discussion whatsoever. Rarely were talks geared towards technical staff, and when technical questions came up during the recorded events, they were almost never answered. Most importantly, there was no presentation I heard that even remotely touched on roadmaps or the long-known features GCP is missing when compared to peers. When I asked technical questions, often Google employees would come up to me after the talk with the answer, which I appreciate. But everyone at home and in the future won&apos;t get that experience and will miss out on the benefit. 
</p><p>Most talks were a GCP product&apos;s marketing page turned into slides, with a seemingly mandatory reference to AI in each one. Several presenters joked about &quot;that was my required AI callout&quot;, which started funny, but as time went on I began to worry...maybe they <em>were actually</em> required to mention AI? There were almost no live demos (pre-recorded is ok, but live is more compelling), zero code shown, and mostly a tour of existing things the GCP web console could do along with a few new features. I ended up getting more value from finding the PMs of various products on the floor and subjecting these poor souls to my many questions. </p><p>This isn&apos;t just a Google problem. Every engineer I spoke to about this talked about a similar time they got burned going to a &quot;not a conference conference&quot;. From AWS to Salesforce and Facebook, these organizations pitch people on getting facetime with engineers and concrete answers to questions. Instead they&apos;re an opportunity to pitch you on more products, letting executives feel loved by ensuring they get one-on-one time with senior folks in the parent company. They sound great, but mostly it&apos;s an opportunity to collect stickers. </p><p>We need to stop pretending these types of conferences are technical conferences. They&apos;re not. It&apos;s an opportunity for non-technical people inside of your organization who interact with your technical SaaS providers to get facetime with employees of that company and ask basic questions in a shame-free environment. That has value and should be something that exists, but you should also make sure engineers don&apos;t wander into these things. </p><p>Here are the 7 things I think you shouldn&apos;t do if you call yourself a tech conference. </p><h3 id="7-deadly-sins-of-tech-conferences">7 Deadly Sins of &quot;Tech&quot; Conferences</h3><ul><li>Discussing internal tools that aren&apos;t open source and that I can&apos;t see or use. 
It&apos;s great if X corp has worked together with Google to make the perfect solution to a common problem. It doesn&apos;t mean shit to me if I can&apos;t use it or at least see it and ask questions about it. Don&apos;t let it into the slide deck if it has zero value to the community outside of showing that &quot;solving this problem is possible&quot;. </li><li>Not letting people who work with customers talk about common problems. I know, from talking to Google folks and from lots of talks with other customers, common issues people experience with GCP products. Some are misconfigurations or not understanding what the product is good at and designed to do. If you talk about a service, you need to discuss something about &quot;common pitfalls&quot; or &quot;working around frequently seen issues&quot;. </li><li>Pretending a sales pitch is a talk. Nothing makes me see red like a talk that, halfway through, invites the head of sales onto the stage to pitch me on their product. Jesus Christ, there&apos;s a whole section of sales stuff, you gotta leave me alone in the middle of talks. </li><li>Not allowing a way for people to get questions into the livestream. Now this isn&apos;t true for every conference, but if this is the one time a year people can ask questions of the PM for a major product and see if they intend to fix a problem, let me ask that question. I&apos;ll gladly submit it beforehand and let people vote on it, or whatever you want. It can&apos;t be a free-for-all but there has to be something.</li><li>Skipping all specifics. If you are telling me that X service is going to solve all my problems and you have 45 minutes, don&apos;t spend 30 explaining how great it is in the abstract. Show me how it solves those problems in detail. Some of the Google presenters did this and I&apos;m extremely grateful to them, but it should have been standard. 
I saw the &quot;Google is committed to privacy and safety&quot; generic slides so many times across different presentations that I remembered the stock photo of two women looking at code and started trying to read what she had written. I think it was JavaScript. </li><li>Blurring the line between presenter and sponsor. Most well-run tech conferences I&apos;ve been to make it super clear when you are hearing from a sponsor vs when someone is giving an unbiased opinion. A lot of these not-tech tech conferences don&apos;t, where it sounds like a Google employee is endorsing a third-party solution that has also sponsored the event. For folks new to this environment, it&apos;s misleading. Is Google saying this is the only way they endorse doing X? </li><li>Keeping all the real content behind NDAs. Now during Next there were a lot of super useful meetings that happened, but I wasn&apos;t in them. I had to learn about them from people at the bar who had signed NDAs and were invited to learn actual information. If you aren&apos;t going to talk about roadmap or any technical details or improvements publicly, don&apos;t bother having the conference. Release a PDF with whatever new sales content you want me to read. The folks who are invited to the real meetings can still go to those. No judgement, you don&apos;t want to have those chats publicly, but don&apos;t pretend you might this year. </li></ul><p>One last thing: if you are going to have a big conference with people meeting with your team, figure out some way you want them to communicate with that team. Maybe temporary email addresses or something? Most people won&apos;t use them, but it means a lot to people to think they have some line of communication with the company. If they get weird, just deactivate the temp email. It&apos;s weird to tell people &quot;just come find me afterwards&quot;. 
Where?</p><h3 id="what-are-big-companies-supposed-to-do">What are big companies supposed to do?</h3><p>I understand large companies are loath to share details unless forced to. I also understand that companies hate letting engineers speak directly to the end users, for fear that the people who make the sausage and the people who consume the sausage might learn something terrible about how it&apos;s made. That is the cost of holding a tech conference about your products. You have to let these two groups of people interact with each other and ask questions. </p><p>Now obviously there are plenty of great conferences based on open-source technology or about more general themes. These tend to be really high quality and I&apos;ve gone to a ton I love. However there is value, as we all become more and more dependent on cloud providers, in letting me know more about what this platform is moving towards. I need to know what platforms like GCP are working on so I know which technologies inside the stack are on the rise and which are on the decline. </p><p>Instead, these conferences are for investors and the business community rather than anyone interested in the products. The point of Next was to show the community that Google is serious about AI. Just like the point of the last Google conference was to show investors that Google is serious about AI. I&apos;m confident the next conference on any topic Google has will also be asked to demonstrate their serious commitment to AI technology. </p><p>You can still have these. Call them something else. Call them &quot;leadership conferences&quot; or &quot;vision conferences&quot;. Talk to Marketing and see what words you can slap in there that convey &quot;you are an important person we want to talk about our products with&quot; while also telling me, a technical peon, that you don&apos;t want me there. I&apos;ll be overjoyed not to fly 11 hours and you&apos;ll be thrilled not to have me asking questions of your engineers. Everybody wins. 
</p>]]></content:encoded></item><item><title><![CDATA[Terraform is dead; Long live Pulumi?]]></title><description><![CDATA[<p>The best tools in tech scale. They&apos;re not always easy to learn, they might take some time to get good with but once you start to use them they just stick with you forever. On the command line, things like <code>gawk</code> and <code>sed</code> jump to mind, tools that</p>]]></description><link>https://matduggan.com/terraform-is-dead-long-live-pulumi/</link><guid isPermaLink="false">64db30b487e1660001377e9b</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 18 Aug 2023 09:29:01 GMT</pubDate><media:content url="https://matduggan.com/content/images/2023/08/prompthero-prompt-7df1c945ab1.webp" medium="image"/><content:encoded><![CDATA[<img src="https://matduggan.com/content/images/2023/08/prompthero-prompt-7df1c945ab1.webp" alt="Terraform is dead; Long live Pulumi?"><p>The best tools in tech scale. They&apos;re not always easy to learn, they might take some time to get good with but once you start to use them they just stick with you forever. On the command line, things like <code>gawk</code> and <code>sed</code> jump to mind, tools that have saved me more than once. I&apos;ve spent a decade now using Vim and I work with people who started using Emacs in university and still use it for 5 hours+ a day. You use them for basic problems all the time but when you need that complexity and depth of options, they scale with your problem. In the cloud when I think of tools like this, things like s3 and SQS come to mind, set and forget tooling that you can use from day 1 to day 1000. </p><p>Not every tool is like this. I&apos;ve been using Terraform at least once a week for the last 5 years. I have led migrating two companies to Infrastructure as Code with Terraform from using the web UI of their cloud provider, writing easily tens of thousands of lines of HCL along the way. 
At first I loved Terraform: HCL felt easy to write, the providers from places like AWS and GCP are well maintained, and there are tons of resources on the internet to get you out of any problem. </p><p>As the years went on, our relationship soured. Terraform has warts that, at this point, either aren&apos;t solvable or aren&apos;t something that can be solved without throwing away a lot of previous work. In no particular order, here are my big issues with Terraform:</p><ul><li>It scales poorly. Terraform often starts with <code>dev</code>, <code>stage</code> and <code>prod</code> as different workspaces. However, since both <code>terraform plan</code> and <code>terraform apply</code> make API calls to your cloud provider for each resource, it doesn&apos;t take long for runs to start taking a long time. You run <code>plan</code> a lot when working with Terraform, so this isn&apos;t a trivial thing. </li><li>Then you don&apos;t want to repeat yourself, so you start moving more complicated logic into Modules. At this point the environments are completely isolated state files with no mixing, and if you try to cross accounts things get more complicated. The basic structure you quickly adopt looks like this.<br> </li></ul><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/08/image-5.png" class="kg-image" alt="Terraform is dead; Long live Pulumi?" loading="lazy" width="441" height="391"></figure><ul><li>At some point you need better DRY coverage, better environment handling, different backends for different environments, and you need to work with multiple modules concurrently. Then you explore Terragrunt, which is a great tool, but it&apos;s now another tool on top of the first Infrastructure as Code tool, and while it works with Terraform Cloud, it requires some tweaks to do so. 
</li><li>Now you and your team realize that Terraform can destroy the entire company if you make a mistake, so you start to subdivide different resources out into different states. Typically you&apos;ll have the &quot;stateless resources&quot; in one area and the &quot;stateful&quot; resources in another, but actually dividing stuff up into one or the other isn&apos;t completely straightforward. Destroying an SQS queue is really bad, but is it stateful? Kubernetes nodes don&apos;t have state, but they&apos;re not instantaneous to fix either. </li><li>HCL isn&apos;t a programming language. It&apos;s a fine alternative to YAML or JSON, but it lacks a lot of the tooling you want when dealing with more complex scenarios. You can do many of the normal things like conditionals, joins, <code>try</code> expressions, loops and <code>for_each</code>, but they&apos;re clunky and limited when compared to something like Golang or Python. </li><li>The tooling around HCL is pretty barebones. You get some syntax checking, but otherwise it&apos;s a lot of switching tmux panes to figure out why something worked in one place and didn&apos;t work in another. </li><li><code>terraform validate</code> and <code>terraform plan</code> don&apos;t mean the thing is going to work. You can write something, it&apos;ll pass both check stages and fail on <code>apply</code>. This can be really bad, as your team needs to basically wait for you to fix whatever you did so the infrastructure isn&apos;t in an inconsistent place or half working. This shouldn&apos;t happen in theory, but it&apos;s a common problem. </li><li>If an <code>apply</code> fails, it&apos;s not always possible to back out. This is especially scary when there are timeouts, when something is still happening inside of the provider&apos;s stack but Terraform has given up on knowing what state it was left in. </li><li>Versioning is bad. Typically whatever version of Terraform you started with is what you have until someone decides to try to upgrade and hope nothing breaks. 
<code>tfenv</code> becomes a mission-critical tool. Provider version drift is common, again typically &quot;whatever the latest version was when someone wrote this module&quot;. </li></ul><h3 id="license-change">License Change</h3><p>All of this is annoying, but I&apos;ve learned to grumble and live with it. Then HashiCorp decided to pull the panic lever of &quot;open-source&quot; companies: a big license change. Even though Terraform Cloud, their money-making product, was never open-source, they decided that the Terraform CLI needed to fall under the BSL. <a href="https://github.com/hashicorp/terraform/blob/main/LICENSE">You can read it here.</a> The specific clause people are getting upset about is below:</p><blockquote>You may make production use of the Licensed Work,<br>provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp&apos;s products.</blockquote><p>Now this clause, combined with the 4-year expiration date, effectively kills the Terraform ecosystem. Nobody is going to authorize internal teams to open-source any complementary tooling with the BSL in place, and there certainly isn&apos;t going to be any competitive pressure to improve Terraform. While it doesn&apos;t, at least as I (not a lawyer) read it, really impact most usage of Terraform as just a tool that you run on your laptop, it does make the future of Terraform development directly tied to Terraform Cloud. This wouldn&apos;t be a problem except Terraform Cloud is bad. </p><h3 id="terraform-cloud">Terraform Cloud</h3><p>I&apos;ve used it for a year, and it&apos;s extremely bare-bones software. It picks the latest version of Terraform when you make the workspace and then that&apos;s it. It doesn&apos;t help you upgrade Terraform, and it doesn&apos;t really do any checking or optimizations, structure suggestions or anything else you need as Terraform scales. It sorta integrates with Terragrunt but not really. 
Basically it is identical to the CLI output of Terraform with some slight visual dressing. Then there&apos;s the kicker: the price. </p><p>$0.00014 per resource per <em>hour</em>. This is predatory pricing. First, because Terraform drops in value to zero if you can&apos;t put everything into Infrastructure as Code. HashiCorp knows this, hence the per-resource price. Second, because they know it&apos;s impossible for me, the maintainer of the account, to police. What am I supposed to do, tell people &quot;no you cannot have a custom IAM policy because we can&apos;t have people writing safe scoped roles&quot;? Maybe I should start forcing subdomain sharing, make sure we don&apos;t get too spoiled with all these free hostnames. Finally, it&apos;s especially grating because we&apos;re talking about sticking small collections of JSON onto object storage. There&apos;s no engineering per resource, no scaling concerns on HashiCorp&apos;s side, and disk space is cheap to boot. </p><p>This combined with the license change is enough for me. I&apos;m out. I&apos;ll deal with some grief to use your product, but at this point HashiCorp has overplayed the value of Terraform. It&apos;s a clunky tool that scales poorly, and I need to do all the scaling and upgrade work myself with third-party tools, even if I pay you for your cloud product. The per-hour pricing is just the final nail in the coffin from HashiCorp. </p><p>I asked around for an alternative and someone recommended Pulumi. I&apos;d never heard of them before, so I thought this could be a super fun opportunity to try them out. </p><h3 id="pulumi">Pulumi</h3><p>Pulumi and Terraform are similar, except unlike Terraform with HCL, Pulumi has lots of scale built in. Why? Because you can use a real programming language to write your Infrastructure as Code. It&apos;s a clever concept, letting you scale up the complexity of your project from writing just YAML to writing Golang or Python. 
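As a taste of what a real language buys you, here&apos;s a small illustrative sketch (the environment names, machine types and function here are mine, not anything Pulumi ships): building per-environment resource definitions with an ordinary loop and a one-line conditional, the kind of logic that gets clunky fast in HCL.

```python
# Hypothetical per-environment settings; in a real Pulumi program these
# dicts would feed resource constructors (e.g. compute instances).
ENVIRONMENTS = {
    "dev":   {"instances": 1, "machine_type": "e2-small"},
    "stage": {"instances": 2, "machine_type": "e2-medium"},
    "prod":  {"instances": 4, "machine_type": "e2-standard-4"},
}

def build_resources(env: str) -> list[dict]:
    """Return the resource definitions for one environment."""
    cfg = ENVIRONMENTS[env]
    return [
        {
            "name": f"web-{env}-{i}",
            "machine_type": cfg["machine_type"],
            # a one-line conditional: only prod gets deletion protection
            "deletion_protection": env == "prod",
        }
        for i in range(cfg["instances"])
    ]
```

In an actual Pulumi program those dicts would be passed to resource constructors, but the point stands: loops, conditionals and unit tests are all just normal Python.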
</p><p>Here is the basic outline of how Pulumi structures infrastructure. </p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/08/image-6.png" class="kg-image" alt="Terraform is dead; Long live Pulumi?" loading="lazy" width="620" height="456" srcset="https://matduggan.com/content/images/size/w600/2023/08/image-6.png 600w, https://matduggan.com/content/images/2023/08/image-6.png 620w"></figure><p>You write programs inside of projects with Node.js, Python, Golang, .NET, Java or YAML. Programs define resources. You then run the programs inside of stacks, which are different environments. It&apos;s nice that Pulumi comes with the project structure defined, vs Terraform where you define it yourself. Every stack has its own state out of the box, which again is a built-in optimization. </p><p>Installation was easy and <a href="https://www.pulumi.com/docs/install/">they had all the expected install options</a>. Going through the <a href="https://github.com/pulumi/pulumi">source code</a> I was impressed with the quality, but was concerned about the 1,718 open issues as of this writing. Clicking around, it does seem like they&apos;re actively working on them, and it has your normal percentage of the &quot;not real issues but just people opening them as issues&quot; problem. Also, a lot of open issues with comments suggests an engaged user base. The setup on my side was very easy and I opted not to use their cloud product, mostly because it has the same problem that Terraform Cloud has.</p><blockquote>A Pulumi Credit is the price for managing one resource for one hour. If using the Team Edition, each credit costs $0.0005. For billing purposes, we count any resource that&apos;s declared in a Pulumi program. 
This includes <a href="https://www.pulumi.com/docs/concepts/resources#custom-resources">provider resources</a> (e.g., an Amazon S3 bucket), <a href="https://www.pulumi.com/docs/concepts/resources#components">component resources</a> which are groupings of resources (e.g., an Amazon EKS cluster), and <a href="https://www.pulumi.com/docs/concepts/stack">stacks</a> which contain resources (e.g., dev, test, prod stacks).</blockquote><blockquote>You consume one Pulumi Credit to manage each resource for an hour. For example, one stack containing one S3 bucket and one EC2 instance is three resources that are counted in your bill. Example: If you manage 625 resources with Pulumi every month, you will use 450,000 Pulumi Credits each month. Your monthly bill would be $150 USD = (450,000 total credits - 150,000 free credits) * $0.0005.</blockquote><p>My mouth was actually agape when I got to that monthly bill. I get 150k credits for &quot;free&quot; with Teams, which is about 208 resources a month. That is <em>absolutely nothing</em>. That&apos;s &quot;my DNS records live in Infrastructure as Code&quot;. But paying per hour doesn&apos;t even unlock all the features! I&apos;m limited on team size, I don&apos;t get SSO, I don&apos;t get support. Also, you are the smaller player; how do you charge <em>more</em> than HashiCorp? Disk space is real cheap and these files are very small. Charge me $99 a month per runner or per user or whatever you need to, but I don&apos;t want to ask the question &quot;are we putting too much of our infrastructure into code&quot;. It&apos;s either all in there or there&apos;s zero point, and this pricing works directly against that goal. </p><p>Alright, so Pulumi Cloud is out. Maybe the Enterprise pricing is better, but that&apos;s not on the website so I can&apos;t make a decision based on that. I can&apos;t mentally handle getting on another sales email list. 
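The arithmetic behind that quoted example is simple enough to sketch as a quick sanity check (the constants are the Team Edition rates quoted above plus the 720-hour month their example assumes; check Pulumi&apos;s current pricing page before relying on any of it):

```python
# Pulumi Team Edition pricing as quoted above: one credit is one resource
# managed for one hour, $0.0005 per credit, 150,000 free credits a month.
HOURS_PER_MONTH = 720      # the month length the quoted example assumes
PRICE_PER_CREDIT = 0.0005  # USD, Team Edition
FREE_CREDITS = 150_000     # monthly free allowance

def monthly_bill(resources: int) -> float:
    """USD cost of keeping this many resources under management all month."""
    credits = resources * HOURS_PER_MONTH
    return max(0, credits - FREE_CREDITS) * PRICE_PER_CREDIT

# 625 resources -> 450,000 credits -> the $150/month from their example,
# while the free tier tops out at 150_000 / 720, roughly 208 resources.
```

Which makes the scaling painfully clear: every extra IAM role, DNS record and log sink is another 720 credits a month.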
Thankfully Pulumi has state locking with S3 now, <a href="https://github.com/pulumi/pulumi/pull/2697">according to this</a>, so this isn&apos;t a deal-breaker. Let&apos;s see what running it just locally looks like.</p><h3 id="pulumi-open-source-only">Pulumi Open-Source only</h3><p>Thankfully they make that pretty easy. <code>pulumi login --local</code> means your state is stored locally, encrypted with a passphrase. To use S3, just switch that to <code>pulumi login s3://</code>. Now managing state locally or using S3 isn&apos;t a new thing, but it&apos;s nice that switching between them is pretty easy. You can start local, grow to S3 and then migrate to their Cloud product as you need. Run <code>pulumi new python</code> for a new blank Python setup. </p><pre><code>&#x276F; pulumi new python
This command will walk you through creating a new Pulumi project.

Enter a value or leave blank to accept the (default), and press &lt;ENTER&gt;.
Press ^C at any time to quit.

project name: (test) test
project description: (A minimal Python Pulumi program)
Created project &apos;test&apos;

stack name: (dev)
Created stack &apos;dev&apos;
Enter your passphrase to protect config/secrets:
Re-enter your passphrase to confirm:

Installing dependencies...

Creating virtual environment...
Finished creating virtual environment
Updating pip, setuptools, and wheel in virtual environment...</code></pre><p>I love that it does all the correct Python things. We have a <code>venv</code>, we&apos;ve got a <code>requirements.txt</code> and we&apos;ve got a simple configuration file. Working with it was delightful. Setting my Hetzner API key as a secret was easy and straight-forward with: <code>pulumi config set hcloud:token XXXXXXXXXXXXXX --secret</code>. So what does working with it look like. Let&apos;s look at an error. </p><pre><code>&#x276F; pulumi preview
Enter your passphrase to unlock config/secrets
    (set PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE to remember):
Previewing update (dev):
     Type                 Name               Plan     Info
     pulumi:pulumi:Stack  matduggan.com-dev           1 error


Diagnostics:
  pulumi:pulumi:Stack (matduggan.com-dev):
    error: Program failed with an unhandled exception:
    Traceback (most recent call last):
      File &quot;/opt/homebrew/bin/pulumi-language-python-exec&quot;, line 197, in &lt;module&gt;
        loop.run_until_complete(coro)
      File &quot;/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py&quot;, line 653, in run_until_complete
        return future.result()
               ^^^^^^^^^^^^^^^
      File &quot;/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py&quot;, line 137, in run_in_stack
        await run_pulumi_func(lambda: Stack(func))
      File &quot;/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py&quot;, line 49, in run_pulumi_func
        func()
      File &quot;/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py&quot;, line 137, in &lt;lambda&gt;
        await run_pulumi_func(lambda: Stack(func))
                                      ^^^^^^^^^^^
      File &quot;/Users/mathew.duggan/Documents/work/pulumi/venv/lib/python3.11/site-packages/pulumi/runtime/stack.py&quot;, line 160, in __init__
        func()
      File &quot;/opt/homebrew/bin/pulumi-language-python-exec&quot;, line 165, in run
        return runpy.run_path(args.PROGRAM, run_name=&apos;__main__&apos;)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File &quot;&lt;frozen runpy&gt;&quot;, line 304, in run_path
      File &quot;&lt;frozen runpy&gt;&quot;, line 240, in _get_main_module_details
      File &quot;&lt;frozen runpy&gt;&quot;, line 159, in _get_module_details
      File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1074, in get_code
      File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1004, in source_to_code
      File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed
      File &quot;/Users/mathew.duggan/Documents/work/pulumi/__main__.py&quot;, line 14
        )], user_data=&quot;&quot;&quot;
                      ^
SyntaxError: unterminated triple-quoted string literal (detected at line 17)</code></pre><p>We get all the super clear output of a Python error message, we still get the secrets encryption, and we get all the options of Python when writing the file. However, things get a little unusual when I go to inspect the state files. </p><h3 id="local-state-files">Local State Files</h3><p>For some reason when I select local, Pulumi doesn&apos;t store the state files in the directory where I&apos;m working. Instead it stores them as a user preference at <code>~/.pulumi</code>, which is odd. I understand I selected local, but it&apos;s weird to assume I don&apos;t want to store the state in git or something. It is also storing a lot of things in my user directory: <code>358 directories, 848 files</code>. Every template is its own directory. </p><p>How can you set it up to work correctly? </p><pre><code># wipe the state from the earlier local experiment first
rm -rf ~/.pulumi
# create a project directory with a state directory inside it
mkdir test &amp;&amp; cd test
mkdir pulumi
# point the backend at that directory instead of the default ~/.pulumi
pulumi login file://pulumi/
# --force, because the target directory already has contents in it
pulumi new --force python
# and yet the home directory still fills up with templates and plugins
cd ~/.pulumi
336 directories, 815 files
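# Side note: moving this local state to an object store later is just an
# export and import away (the bucket name below is hypothetical):
#   pulumi stack export --file dev-stack.json
#   pulumi login s3://my-pulumi-state
#   pulumi stack init dev
#   pulumi stack import --file dev-stack.json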
</code></pre><p>If you go back into the directory and go to <code>/test/pulumi/.pulumi</code> you do see the state files. The force flag is required to let it create the new project inside a directory with stuff already in it. It all ends up working, but it&apos;s clunky. </p><p>Maybe I&apos;m alone on this, but I feel like this is unnecessarily complicated. If I&apos;m going to work locally, the assumption should be that I&apos;m going to sit this inside of a repo. Or at the very least, I&apos;m going to expect the directory to be a self-contained thing. Also, don&apos;t put stuff at $HOME/.pulumi. The correct location is <code>~/.config</code>. I understand nobody follows that rule, but the right places to put it are in the directory where I make the project or in <code>~/.config</code>. </p><h3 id="s3-compatible-state">S3-compatible State</h3><p>Since this is the more common workflow, let me talk a bit about the S3 remote backend. I tried to do a lot of testing to cover as many use-cases as possible. The lockfile works and is per stack, so you do have that basic level of functionality. Stacks cannot reference each other&apos;s outputs unless they are in the same bucket as far as I can tell, so you would need to plan for one bucket. Sharing stack names across multiple projects works, so you don&apos;t need to worry that every project has a <code>dev</code>, <code>stage</code> and <code>prod</code>. State encryption is your problem, but that&apos;s pretty easy to deal with in modern object storage. </p><p>The login process is basically <code>pulumi login &apos;s3://?region=us-east-1&amp;awssdk=v2&amp;profile=&apos;</code> and for GCP <code>pulumi login gs://</code>. You can see all the custom backend setup docs <a href="https://www.pulumi.com/docs/concepts/state/#aws-s3">here.</a> I also moved between custom backends, going from <code>local</code> to <code>s3</code> and from <code>s3</code> to GCP. It all functioned like I would expect, which was nice. 
</p><p>Otherwise nothing exciting to report. In my testing it worked as well as local, and trying to break it with a few folks working on the same repo didn&apos;t reveal any obvious problems. It seems as reliable as Terraform in S3, which is to say not perfect but pretty good. </p><h3 id="daily-use">Daily use</h3><p>Once Pulumi was set up to use object storage, I tried to use it to manage a non-production project in Google Cloud along with someone else who agreed to work with me on it. I figured with at least two people doing the work, the experience would be more realistic. </p><p>Compared to working with Terraform, I felt like Pulumi was easier to use. Having all of the options and autocomplete of an IDE available to me when I wanted it really sped things up, plus handling edge cases that previously would have required a lot of very sensitive HCL was very simple with Python. I also liked being able to write tests for infrastructure code, which made things like database operations feel less dangerous. In Terraform the only safety check is whoever is looking at the output, so having another level of checking before potentially destroying resources was nice. </p><p>While Pulumi does provide more opinions on how to structure it, even with two of us there were quickly some disagreements on the right way to do things. I prefer more of a monolithic design and my peer prefers smaller stacks, which you can do, but I find chaining together the stack output to be more work than it&apos;s worth. I found the micro-service style in Pulumi to be a bit grating and easy to break, while the monolithic style was much easier for me to work in. </p><p>Setting up a CI/CD pipeline wasn&apos;t too challenging, basing everything <a href="https://hub.docker.com/r/pulumi/pulumi">off of this image</a>. All the CI/CD docs on their website presuppose you are using the Cloud product, which again makes sense and I would be glad to do if they changed the pricing. 
However rolling your own isn&apos;t hard, it works as expected, but I want to point out one sticking point I ran into that isn&apos;t really Pulumi&apos;s fault so much as it is &quot;the complexity of adding in secrets support&quot;. </p><h3 id="pulumi-secrets">Pulumi Secrets</h3><p>So Pulumi integrates with a lot of secret managers, which is great. It also has its own secret manager which works fine. The key things to keep in mind are: if you are adding a secret, make sure you flag it as a secret to keep it from getting printed on the output. If you are going to use an external secrets manager, set aside some time to get that working. It took a bit of work to get the permissions such that CI/CD and everything else worked as expected, especially with the micro-service design where one program relied on the output of another program. <a href="https://www.pulumi.com/docs/concepts/secrets/">You can read the docs here.</a></p><h3 id="unexpected-benefits">Unexpected Benefits</h3><p>Here are some delightful (maybe obvious) things I ran into while working with Pulumi. </p><ul><li>We already have experts in these languages. It was great to be able to ask someone with years of Python development experience &quot;what is the best way to structure large Python projects&quot;. There is so much expertise and documentation out there vs the wasteland that is Terraform project architecture. </li><li>Being able to use a database. Holy crap, this was a real game-changer to me. I pulled down the GCP IAM stock roles, stuck them in SQLite and then was able to query them depending on the set of permissions the service account or user group required. Very small thing, but a massive time-saver vs me going to the website and searching around. 
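A rough sketch of that lookup, with a toy hand-typed subset standing in for the real role list you would pull from the IAM API:

```python
import sqlite3

# Toy subset of GCP's predefined role -> permission mapping. In reality you'd
# pull the full list from the IAM API and load it once; the query is the point.
ROLE_PERMS = [
    ("roles/storage.objectViewer", "storage.objects.get"),
    ("roles/storage.objectViewer", "storage.objects.list"),
    ("roles/storage.objectAdmin", "storage.objects.get"),
    ("roles/storage.objectAdmin", "storage.objects.list"),
    ("roles/storage.objectAdmin", "storage.objects.delete"),
]

def roles_covering(needed):
    """Return every role that grants all of the needed permissions."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE role_perms (role TEXT, permission TEXT)")
    con.executemany("INSERT INTO role_perms VALUES (?, ?)", ROLE_PERMS)
    marks = ",".join("?" * len(needed))
    rows = con.execute(
        f"SELECT role FROM role_perms WHERE permission IN ({marks}) "
        "GROUP BY role HAVING COUNT(DISTINCT permission) = ?",
        (*needed, len(needed)),
    )
    return sorted(r[0] for r in rows)

print(roles_covering(["storage.objects.get", "storage.objects.delete"]))
# -> ['roles/storage.objectAdmin']
```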
It also lets me automate the entire process of Ticket -&gt; PR for IAM role.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matduggan.com/content/images/2023/08/automation-api.png" class="kg-image" alt="Terraform is dead; Long live Pulumi?" loading="lazy" width="2000" height="1116" srcset="https://matduggan.com/content/images/size/w600/2023/08/automation-api.png 600w, https://matduggan.com/content/images/size/w1000/2023/08/automation-api.png 1000w, https://matduggan.com/content/images/size/w1600/2023/08/automation-api.png 1600w, https://matduggan.com/content/images/size/w2400/2023/08/automation-api.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>This is what I&apos;m talking about.</figcaption></figure><ul><li>You can set up easy APIs. Making a website that generates HCL to stick into a repo and then make a PR? Nightmare. Writing a simple Flask app that runs Pulumi against your infrastructure with scoped permissions? Not bad at all. If your org does something like &quot;add a lot of DNS records&quot; or &quot;add a lot of SSH keys&quot;, this really has the potential to change your workday. Also it&apos;s easy to set up an abstraction for your entire infrastructure. <a href="https://www.pulumi.com/docs/using-pulumi/automation-api/getting-started-automation-api/">Pulumi has docs on how to get started with all of this here.</a> Slack bots, simple command-line tools, all of it was easy to do. </li><li>Tests. It&apos;s nice to be able to treat infrastructure like it&apos;s important. </li><li>Getting better at a real job skill. Every hour I get more skilled at writing Golang, I become more valuable to my organization. I&apos;m also just getting more hours writing code in an actual programming language, which is always good. Every hour I invest in HCL is an hour invested in something that no other tool will ever use. </li><li>Speed seemed faster than Terraform. 
I don&apos;t know why that would be, but it did feel like, especially on successive previews, the results just came back much faster. This was true on our CI/CD jobs as well; timing them against Terraform, it seemed like Pulumi was faster most of the time. Take this with a pile of salt, I didn&apos;t do a real benchmark and ultimately we&apos;re hitting the same APIs, so I doubt there&apos;s a giant performance difference. </li></ul><h3 id="conclusion">Conclusion</h3><p>Do I think Pulumi can take over the Terraform throne? There&apos;s a lot to like here. The product is one of those great ideas, a natural evolution from where we started in DevOps to where we want to go. Moving towards treating infrastructure like everything else is the next logical leap and they have already done a lot of the groundwork. I want Pulumi to succeed, I like it as a product. </p><p>However it needs to get out of its own way. The pricing needs a rethink: make it a no-brainer for me to use your cloud product and get fully integrated into it. If you give me a reliable, consistent bill I can present to leadership, I don&apos;t have to worry about Pulumi as a service I need to police. The entire organization can be let loose to write whatever infra they need, which benefits us and Pulumi as we&apos;ll be more dependent on their internal tooling. </p><p>If cost management is a big issue, have me bring my own object storage and VMs for runners. Pulumi can still thrive and be very successful without being a zero-setup business. This is a tool for people <em>who maintain large infrastructures</em>. We can handle some infrastructure requirements if that is the sticking point. </p><p>Hopefully the folks running Pulumi see this moment as the opportunity it is, both for the field at large to move past markup languages and for them to make a grab for a large share of the market. </p><p>If there is interest I can do more write-ups on sample Flask apps or Slack bots or whatever. 
Also if I made a mistake or you think something needs clarification, feel free to reach out to me here: <a href="https://c.im/@matdevdug">https://c.im/@matdevdug</a>.</p>]]></content:encoded></item><item><title><![CDATA[Adventures in IPv6 Part 2]]></title><description><![CDATA[<p>As I discussed in <a href="https://matduggan.com/ipv6-is-a-disaster-and-its-our-fault/">Part 1</a> I&apos;ve converted this site over to pure IPv6. Well at least as pure as I could get away with. I still have some problems though, chief among them that I cannot send emails with the Ghost CMS. I&apos;ve switched from</p>]]></description><link>https://matduggan.com/adventures-in-ipv6-part-2/</link><guid isPermaLink="false">64d1f92fcebc9400015bd5e5</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Tue, 08 Aug 2023 11:17:20 GMT</pubDate><content:encoded><![CDATA[<p>As I discussed in <a href="https://matduggan.com/ipv6-is-a-disaster-and-its-our-fault/">Part 1</a> I&apos;ve converted this site over to pure IPv6. Well at least as pure as I could get away with. I still have some problems though, chief among them that I cannot send emails with the Ghost CMS. I&apos;ve switched from Mailgun to Scaleway which does have IPv6 for their SMTP service. </p><pre><code>smtp.tem.scw.cloud has IPv6 address 2001:bc8:1201:21:d6ae:52ff:fed0:418e
smtp.tem.scw.cloud has IPv6 address 2001:bc8:1201:21:d6ae:52ff:fed0:6aac</code></pre><p>I&apos;ve also confirmed that my docker-compose stack running Ghost can successfully reach IPv6 external addresses with no issues. </p><pre><code>matdevdug-busy-1      | PING google.com (2a00:1450:4002:411::200e): 56 data bytes
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=0 ttl=113 time=15.079 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=1 ttl=113 time=14.607 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=2 ttl=113 time=14.540 ms
matdevdug-busy-1      | 64 bytes from 2a00:1450:4002:411::200e: seq=3 ttl=113 time=14.593 ms
matdevdug-busy-1      |
matdevdug-busy-1      |
matdevdug-busy-1      | --- google.com ping statistics ---
matdevdug-busy-1      | 4 packets transmitted, 4 packets received, 0% packet loss
matdevdug-busy-1      | round-trip min/avg/max = 14.540/14.704/15.079 ms</code></pre><p>I&apos;ve also confirmed that Scaleway is reachable by the container no problem with the domain name, so it isn&apos;t a DNS problem. </p><pre><code>PING smtp.tem.scw.cloud(ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac)) 56 data bytes
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=1 ttl=53 time=23.1 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=2 ttl=53 time=22.2 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=3 ttl=53 time=22.2 ms
64 bytes from ff6ad116-d710-4726-b5d3-1687dceb56cb.fr-par-2.baremetal.scw.cloud (2001:bc8:1201:21:d6ae:52ff:fed0:6aac): icmp_seq=4 ttl=53 time=22.1 ms

--- smtp.tem.scw.cloud ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 22.086/22.397/23.063/0.388 ms</code></pre><p>At this point I have three theories. </p><ol><li>It&apos;s an SMTP problem. Possible, but unlikely given how long SMTP has supported IPv6. A quick check by running it over bash <a href="https://mailtrap.io/blog/bash-send-email/">by following the instructions here</a> shows that works fine. </li><li>Something is blocking the port. </li></ol><pre><code>telnet smtp.tem.scw.cloud 587
Trying 2001:bc8:1201:21:d6ae:52ff:fed0:6aac...
Connected to smtp.tem.scw.cloud.
Escape character is &apos;^]&apos;.
220 smtp.scw-tem.cloud ESMTP Service Ready</code></pre><p>Alright it&apos;s not that. </p><p>3. Nodemailer is being stupid. <a href="https://github.com/TryGhost/Ghost/blob/cb21763865f6b417a45c54c8ac2b9bdbf8570302/ghost/core/core/server/services/mail/GhostMailer.js#L45">It looks like Ghost relies on Nodemailer so let&apos;s check it out.</a> Let&apos;s install Node and NPM on my debian junk machine. </p><figure class="kg-card kg-code-card"><pre><code>sudo apt install npm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  eslint gyp handlebars libjs-async libjs-events libjs-inherits libjs-is-typedarray libjs-prettify libjs-regenerate libjs-source-map
  libjs-sprintf-js libjs-typedarray-to-buffer libjs-util libnode-dev libssl-dev libuv1-dev node-abbrev node-agent-base node-ajv node-ajv-keywords
  node-ampproject-remapping node-ansi-escapes node-ansi-regex node-ansi-styles node-anymatch node-aproba node-archy node-are-we-there-yet
  node-argparse node-arrify node-assert node-async node-async-each node-babel-helper-define-polyfill-provider node-babel-plugin-add-module-exports
  node-babel-plugin-lodash node-babel-plugin-polyfill-corejs2 node-babel-plugin-polyfill-corejs3 node-babel-plugin-polyfill-regenerator node-babel7
  node-babel7-runtime node-balanced-match node-base64-js node-binary-extensions node-brace-expansion node-braces node-browserslist node-builtins
  node-cacache node-camelcase node-caniuse-lite node-chalk node-chokidar node-chownr node-chrome-trace-event node-ci-info node-cli-table node-cliui
  node-clone node-clone-deep node-color-convert node-color-name node-colors node-columnify node-commander node-commondir node-concat-stream
  node-console-control-strings node-convert-source-map node-copy-concurrently node-core-js node-core-js-compat node-core-js-pure node-core-util-is
  node-css-loader node-css-selector-tokenizer node-data-uri-to-buffer node-debbundle-es-to-primitive node-debug node-decamelize
  node-decompress-response node-deep-equal node-deep-is node-defaults node-define-properties node-defined node-del node-delegates node-depd
  node-diff node-doctrine node-electron-to-chromium node-encoding node-end-of-stream node-enhanced-resolve node-err-code node-errno node-error-ex
  node-es-abstract node-es-module-lexer node-es6-error node-escape-string-regexp node-escodegen node-eslint-scope node-eslint-utils
  node-eslint-visitor-keys node-espree node-esprima node-esquery node-esrecurse node-estraverse node-esutils node-events node-fancy-log
  node-fast-deep-equal node-fast-levenshtein node-fetch node-file-entry-cache node-fill-range node-find-cache-dir node-find-up node-flat-cache
  node-flatted node-for-in node-for-own node-foreground-child node-fs-readdir-recursive node-fs-write-stream-atomic node-fs.realpath
  node-function-bind node-functional-red-black-tree node-gauge node-get-caller-file node-get-stream node-glob node-glob-parent node-globals
  node-globby node-got node-graceful-fs node-gyp node-has-flag node-has-unicode node-hosted-git-info node-https-proxy-agent node-iconv-lite
  node-icss-utils node-ieee754 node-iferr node-ignore node-imurmurhash node-indent-string node-inflight node-inherits node-ini node-interpret
  node-ip node-ip-regex node-is-arrayish node-is-binary-path node-is-buffer node-is-extendable node-is-extglob node-is-glob node-is-number
  node-is-path-cwd node-is-path-inside node-is-plain-obj node-is-plain-object node-is-stream node-is-typedarray node-is-windows node-isarray
  node-isexe node-isobject node-istanbul node-jest-debbundle node-jest-worker node-js-tokens node-js-yaml node-jsesc node-json-buffer
  node-json-parse-better-errors node-json-schema node-json-schema-traverse node-json-stable-stringify node-json5 node-jsonify node-jsonparse
  node-kind-of node-levn node-loader-runner node-locate-path node-lodash node-lodash-packages node-lowercase-keys node-lru-cache node-make-dir
  node-memfs node-memory-fs node-merge-stream node-micromatch node-mime node-mime-types node-mimic-response node-minimatch node-minimist
  node-minipass node-mkdirp node-move-concurrently node-ms node-mute-stream node-n3 node-negotiator node-neo-async node-nopt
  node-normalize-package-data node-normalize-path node-npm-bundled node-npm-package-arg node-npm-run-path node-npmlog node-object-assign
  node-object-inspect node-once node-optimist node-optionator node-osenv node-p-cancelable node-p-limit node-p-locate node-p-map node-parse-json
  node-path-dirname node-path-exists node-path-is-absolute node-path-is-inside node-path-type node-picocolors node-pify node-pkg-dir node-postcss
  node-postcss-modules-extract-imports node-postcss-modules-values node-postcss-value-parser node-prelude-ls node-process-nextick-args node-progress
  node-promise-inflight node-promise-retry node-promzard node-prr node-pump node-punycode node-quick-lru node-randombytes node-read
  node-read-package-json node-read-pkg node-readable-stream node-readdirp node-rechoir node-regenerate node-regenerate-unicode-properties
  node-regenerator-runtime node-regenerator-transform node-regexpp node-regexpu-core node-regjsgen node-regjsparser node-repeat-string
  node-require-directory node-resolve node-resolve-cwd node-resolve-from node-resumer node-retry node-rimraf node-run-queue node-safe-buffer
  node-schema-utils node-semver node-serialize-javascript node-set-blocking node-set-immediate-shim node-shebang-command node-shebang-regex
  node-signal-exit node-slash node-slice-ansi node-source-list-map node-source-map node-source-map-support node-spdx-correct node-spdx-exceptions
  node-spdx-expression-parse node-spdx-license-ids node-sprintf-js node-ssri node-string-decoder node-string-width node-strip-ansi node-strip-bom
  node-strip-json-comments node-supports-color node-tapable node-tape node-tar node-terser node-text-table node-through node-time-stamp
  node-to-fast-properties node-to-regex-range node-tslib node-type-check node-typedarray node-typedarray-to-buffer
  node-unicode-canonical-property-names-ecmascript node-unicode-match-property-ecmascript node-unicode-match-property-value-ecmascript
  node-unicode-property-aliases-ecmascript node-unique-filename node-uri-js node-util node-util-deprecate node-uuid node-v8-compile-cache
  node-v8flags node-validate-npm-package-license node-validate-npm-package-name node-watchpack node-wcwidth.js node-webassemblyjs
  node-webpack-sources node-which node-wide-align node-wordwrap node-wrap-ansi node-wrappy node-write node-write-file-atomic node-y18n node-yallist
  node-yargs node-yargs-parser terser webpack
Suggested packages:
  node-babel-eslint node-esprima-fb node-inquirer libjs-angularjs libssl-doc node-babel-plugin-polyfill-es-shims node-babel7-debug javascript-common
  livescript chai node-jest-diff node-opener
Recommended packages:
  javascript-common build-essential node-tap
The following NEW packages will be installed:
  eslint gyp handlebars libjs-async libjs-events libjs-inherits libjs-is-typedarray libjs-prettify libjs-regenerate libjs-source-map
  libjs-sprintf-js libjs-typedarray-to-buffer libjs-util libnode-dev libssl-dev libuv1-dev node-abbrev node-agent-base node-ajv node-ajv-keywords
  node-ampproject-remapping node-ansi-escapes node-ansi-regex node-ansi-styles node-anymatch node-aproba node-archy node-are-we-there-yet
  node-argparse node-arrify node-assert node-async node-async-each node-babel-helper-define-polyfill-provider node-babel-plugin-add-module-exports
  node-babel-plugin-lodash node-babel-plugin-polyfill-corejs2 node-babel-plugin-polyfill-corejs3 node-babel-plugin-polyfill-regenerator node-babel7
  node-babel7-runtime node-balanced-match node-base64-js node-binary-extensions node-brace-expansion node-braces node-browserslist node-builtins
  node-cacache node-camelcase node-caniuse-lite node-chalk node-chokidar node-chownr node-chrome-trace-event node-ci-info node-cli-table node-cliui
  node-clone node-clone-deep node-color-convert node-color-name node-colors node-columnify node-commander node-commondir node-concat-stream
  node-console-control-strings node-convert-source-map node-copy-concurrently node-core-js node-core-js-compat node-core-js-pure node-core-util-is
  node-css-loader node-css-selector-tokenizer node-data-uri-to-buffer node-debbundle-es-to-primitive node-debug node-decamelize
  node-decompress-response node-deep-equal node-deep-is node-defaults node-define-properties node-defined node-del node-delegates node-depd
  node-diff node-doctrine node-electron-to-chromium node-encoding node-end-of-stream node-enhanced-resolve node-err-code node-errno node-error-ex
  node-es-abstract node-es-module-lexer node-es6-error node-escape-string-regexp node-escodegen node-eslint-scope node-eslint-utils
  node-eslint-visitor-keys node-espree node-esprima node-esquery node-esrecurse node-estraverse node-esutils node-events node-fancy-log
  node-fast-deep-equal node-fast-levenshtein node-fetch node-file-entry-cache node-fill-range node-find-cache-dir node-find-up node-flat-cache
  node-flatted node-for-in node-for-own node-foreground-child node-fs-readdir-recursive node-fs-write-stream-atomic node-fs.realpath
  node-function-bind node-functional-red-black-tree node-gauge node-get-caller-file node-get-stream node-glob node-glob-parent node-globals
  node-globby node-got node-graceful-fs node-gyp node-has-flag node-has-unicode node-hosted-git-info node-https-proxy-agent node-iconv-lite
  node-icss-utils node-ieee754 node-iferr node-ignore node-imurmurhash node-indent-string node-inflight node-inherits node-ini node-interpret
  node-ip node-ip-regex node-is-arrayish node-is-binary-path node-is-buffer node-is-extendable node-is-extglob node-is-glob node-is-number
  node-is-path-cwd node-is-path-inside node-is-plain-obj node-is-plain-object node-is-stream node-is-typedarray node-is-windows node-isarray
  node-isexe node-isobject node-istanbul node-jest-debbundle node-jest-worker node-js-tokens node-js-yaml node-jsesc node-json-buffer
  node-json-parse-better-errors node-json-schema node-json-schema-traverse node-json-stable-stringify node-json5 node-jsonify node-jsonparse
  node-kind-of node-levn node-loader-runner node-locate-path node-lodash node-lodash-packages node-lowercase-keys node-lru-cache node-make-dir
  node-memfs node-memory-fs node-merge-stream node-micromatch node-mime node-mime-types node-mimic-response node-minimatch node-minimist
  node-minipass node-mkdirp node-move-concurrently node-ms node-mute-stream node-n3 node-negotiator node-neo-async node-nopt
  node-normalize-package-data node-normalize-path node-npm-bundled node-npm-package-arg node-npm-run-path node-npmlog node-object-assign
  node-object-inspect node-once node-optimist node-optionator node-osenv node-p-cancelable node-p-limit node-p-locate node-p-map node-parse-json
  node-path-dirname node-path-exists node-path-is-absolute node-path-is-inside node-path-type node-picocolors node-pify node-pkg-dir node-postcss
  node-postcss-modules-extract-imports node-postcss-modules-values node-postcss-value-parser node-prelude-ls node-process-nextick-args node-progress
  node-promise-inflight node-promise-retry node-promzard node-prr node-pump node-punycode node-quick-lru node-randombytes node-read
  node-read-package-json node-read-pkg node-readable-stream node-readdirp node-rechoir node-regenerate node-regenerate-unicode-properties
  node-regenerator-runtime node-regenerator-transform node-regexpp node-regexpu-core node-regjsgen node-regjsparser node-repeat-string
  node-require-directory node-resolve node-resolve-cwd node-resolve-from node-resumer node-retry node-rimraf node-run-queue node-safe-buffer
  node-schema-utils node-semver node-serialize-javascript node-set-blocking node-set-immediate-shim node-shebang-command node-shebang-regex
  node-signal-exit node-slash node-slice-ansi node-source-list-map node-source-map node-source-map-support node-spdx-correct node-spdx-exceptions
  node-spdx-expression-parse node-spdx-license-ids node-sprintf-js node-ssri node-string-decoder node-string-width node-strip-ansi node-strip-bom
  node-strip-json-comments node-supports-color node-tapable node-tape node-tar node-terser node-text-table node-through node-time-stamp
  node-to-fast-properties node-to-regex-range node-tslib node-type-check node-typedarray node-typedarray-to-buffer
  node-unicode-canonical-property-names-ecmascript node-unicode-match-property-ecmascript node-unicode-match-property-value-ecmascript
  node-unicode-property-aliases-ecmascript node-unique-filename node-uri-js node-util node-util-deprecate node-uuid node-v8-compile-cache
  node-v8flags node-validate-npm-package-license node-validate-npm-package-name node-watchpack node-wcwidth.js node-webassemblyjs
  node-webpack-sources node-which node-wide-align node-wordwrap node-wrap-ansi node-wrappy node-write node-write-file-atomic node-y18n node-yallist
  node-yargs node-yargs-parser npm terser webpack
0 upgraded, 349 newly installed, 0 to remove and 1 not upgraded.
Need to get 13.8 MB of archives.
After this operation, 106 MB of additional disk space will be used.
Do you want to continue? [Y/n]</code></pre><figcaption>Jesus Christ NPM, what is happening</figcaption></figure><p>Now that I have that nightmare factory installed. </p><pre><code>&quot;use strict&quot;;
const nodemailer = require(&quot;nodemailer&quot;);

const transporter = nodemailer.createTransport({
  host: &quot;smtp.tem.scw.cloud&quot;,
  port: 587,
  // Just so I don&apos;t need to worry about it
  secure: false,
  auth: {
    // TODO: replace `user` and `pass` values from &lt;https://forwardemail.net&gt;
    user: &apos;scaleway-user-name&apos;,
    pass: &apos;scaleway-password&apos;
  }
});

// async..await is not allowed in global scope, must use a wrapper
async function main() {
  // send mail with defined transport object
  const info = await transporter.sendMail({
    from: &apos;&quot;Dead People &#x1F47B;&quot; &lt;noreply@matduggan.com&gt;&apos;, // sender address
    to: &quot;mat@matduggan.com&quot;, // list of receivers
    subject: &quot;Hello&quot;, // Subject line
    text: &quot;Hello world&quot;, // plain text body
    html: &quot;&lt;b&gt;Hello world?&lt;/b&gt;&quot;, // html body
  });

  console.log(&quot;Message sent: %s&quot;, info.messageId);
}

main().catch(console.error);</code></pre><p>Looks like Nodemailer doesn&apos;t seem to understand this is an IPv6 box. </p><pre><code>node example.js
Error: connect ENETUNREACH 51.159.99.81:587 - Local (0.0.0.0:0)
    at internalConnect (node:net:1060:16)
    at defaultTriggerAsyncIdScope (node:internal/async_hooks:464:18)
    at node:net:1244:9
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
  errno: -101,
  code: &apos;ESOCKET&apos;,
  syscall: &apos;connect&apos;,
  address: &apos;51.159.99.81&apos;,
  port: 587,
  command: &apos;CONN&apos;
}</code></pre><p>It looks like this should have been fixed here: <a href="https://github.com/nodemailer/nodemailer/pull/1311">https://github.com/nodemailer/nodemailer/pull/1311</a> but clearly isn&apos;t. What happens if I just manually set the IPv6 address. </p><p><code>Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate&apos;s altnames: IP: 2001:bc8:1201:21:d6ae:52ff:fed0:6aac is not in the cert&apos;s list:</code></p><p>However if you set it to use an IP for host and a DNS entry for hostname, everything seems to work great. </p><pre><code>&quot;use strict&quot;;
const nodemailer = require(&quot;nodemailer&quot;);

const transporter = nodemailer.createTransport({
  host: &quot;2001:bc8:1201:21:d6ae:52ff:fed0:6aac&quot;,
  port: 587,
  secure: false,
  tls: {
    rejectUnauthorized: true,
    servername: &quot;smtp.tem.scw.cloud&quot;
  },
  auth: {
    user: &apos;scaleway-username&apos;,
    pass: &apos;scaleway-password&apos;
  }
});

// async..await is not allowed in global scope, must use a wrapper
async function main() {
  // send mail with defined transport object
  const info = await transporter.sendMail({
    from: &apos;&quot;Test&quot; &lt;noreply@matduggan.com&gt;&apos;, // sender address
    to: &quot;mat@matduggan.com&quot;, // list of receivers
    subject: &quot;Hello &#x2714;&quot;, // Subject line
    text: &quot;Hello world?&quot;, // plain text body
    html: &quot;&lt;b&gt;Hello world?&lt;/b&gt;&quot;, // html body
  });

  console.log(&quot;Message sent: %s&quot;, info.messageId);
}

main().catch(console.error);</code></pre><p>Alright well issue submitted here: <a href="https://github.com/TryGhost/Ghost/issues/17627">https://github.com/TryGhost/Ghost/issues/17627</a></p><p>It is a little alarming that the biggest Node email package doesn&apos;t work with IPv6 and seemingly only one person noticed and tried to fix it. Well whatever, we have a workaround. </p><h3 id="python">Python</h3><p>Alright let&apos;s try to fix the pip problems I was seeing before in various scripts. </p><pre><code>pip3 install requests
error: externally-managed-environment

&#xD7; This environment is externally managed
&#x2570;&#x2500;&gt; To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.</code></pre><p>Right I forgot Python was doing this now. Fine, I&apos;ll use <code>venv</code>, not a problem. I guess first I compile a version of Python if I want the latest? I don&apos;t see any newer ARM packages out there. Alright, compiling Python. </p><p><code>sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev wget libbz2-dev</code></p><p><code>wget <a href="https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz">https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz</a></code></p><p><code>tar -xzf Python-3.11.4.tgz</code></p><p><code>cd Python-3.11.4/</code></p><p><code>./configure</code></p><p><code>sudo make -j 2</code></p><p><code>sudo make altinstall</code></p><p>Alright now pip works great on the latest version inside of a <code>venv</code>. My scripts all seem to work fine and there appear to be no issues. Whatever problem there was before is resolved. Specific shoutout to <code>requests</code>, where I&apos;m doing some strange things with network traffic and it seems to have no problems. </p><h3 id="conclusion">Conclusion</h3><p>So the amount of work to get a pretty simple blog up was nontrivial, but we&apos;re here now. I have a patch for Ghost that I can apply to the container, Python seems to be working fine/great now and Docker seems to work as long as I use a user-created network with IPv6 strictly defined. The Docker default bridge also works if you specify the links inside of the docker-compose file, but that seems to be deprecated so let&apos;s not waste too much time on that. For those looking for instructions on the Docker part <a href="https://dev.to/joeneville_/build-a-docker-ipv6-network-dfj">I just followed the guide outlined here.</a></p><p>Now that everything is up and running it seems fine, but again if you are thinking of running an IPv6-only server infrastructure, set aside a lot of time for problem solving. 
Even simple applications like this require a <em>lot</em> of research to get up and running successfully with outbound network functioning and everything linked up in the correct way. </p>]]></content:encoded></item><item><title><![CDATA[IPv6 Is A Disaster (but we can fix it)]]></title><description><![CDATA[<p>IP addresses have been in the news a lot lately and not for good reasons. AWS has <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwijs_melLuAAxUBSfEDHROmA_8QFnoECA8QAQ&amp;url=https%3A%2F%2Faws.amazon.com%2Fblogs%2Faws%2Fnew-aws-public-ipv4-address-charge-public-ip-insights%2F&amp;usg=AOvVaw2fcdbcV9UuAHAi05mTITp8&amp;opi=89978449">announced they are charging $.005 per IPv4 address per hour</a>, joining other cloud providers in charging for the luxury of a public IPv4 address. GCP charges $.004, same with Azure and Hetzner charges</p>]]></description><link>https://matduggan.com/ipv6-is-a-disaster-and-its-our-fault/</link><guid isPermaLink="false">64c8d1b4798dbb0001b3b66f</guid><category><![CDATA[Networking]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 04 Aug 2023 11:43:47 GMT</pubDate><content:encoded><![CDATA[<p>IP addresses have been in the news a lot lately and not for good reasons. AWS has <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwijs_melLuAAxUBSfEDHROmA_8QFnoECA8QAQ&amp;url=https%3A%2F%2Faws.amazon.com%2Fblogs%2Faws%2Fnew-aws-public-ipv4-address-charge-public-ip-insights%2F&amp;usg=AOvVaw2fcdbcV9UuAHAi05mTITp8&amp;opi=89978449">announced they are charging $.005 per IPv4 address per hour</a>, joining other cloud providers in charging for the luxury of a public IPv4 address. GCP charges $.004, same with Azure and Hetzner charges &#x20AC;0.001/h. 
Clearly the era of cloud providers going out and purchasing more IPv4 space is coming to an end. As time goes on, the addresses only get more valuable and it makes less sense to give them out for free. </p><p>So the writing is on the wall. We need to switch to IPv6. Now I was first told that we were going to need to switch to IPv6 when I was in high school in my first Cisco class, and I&apos;m 36 now, to give you some perspective on how long this has been &quot;coming down the pipe&quot;. Up to this point I haven&apos;t done much at all with IPv6; there has been almost no market demand for those skills and I&apos;ve never had a job where anybody seemed all that interested in doing it. So I skipped learning about it, which is a shame because it&apos;s actually a great advancement in networking. </p><p>Now is the second best time to learn though, so I decided to migrate this blog to IPv6 only. We&apos;ll stick it behind a CDN to handle the IPv4 traffic, but let&apos;s join the cool kids club. What I found was horrifying: almost nothing works out of the box. Major dependencies cease functioning right away and the workarounds cannot be described as production-ready. The migration process for teams to IPv6 is going to be very rocky, mostly because almost nobody has done the work. We all skipped it for years and now we&apos;ll need to pay the price. </p><h3 id="why-is-ipv6-worth-the-work">Why is IPv6 worth the work?</h3><p>I&apos;m not gonna do a whole explainer on what IPv4 vs IPv6 is. There are plenty of great articles on the internet about that. Let&apos;s just quickly recap though: &quot;why would anyone want to make the jump to IPv6?&quot; 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.redhat.com/sysadmin/sites/default/files/styles/embed_large/public/2019-09/IPv6 packet headers.jpg?itok=PjMvf8HW" class="kg-image" alt loading="lazy"><figcaption>An IPv6 packet header</figcaption></figure><ul><li>Address space (obviously)</li><li>Smaller number of header fields (8 vs 13 on v4)</li></ul><figure class="kg-card kg-image-card"><img src="https://assets-global.website-files.com/61027bb0bc31fc6cafefbc0c/6372c4e0ad50813e10081cac_YQffi9iGZvl4JyFKu5cof9TF9ERYVm4VXEdeykDUd_MS6cFcCqcxMw0vL_5V1CxbDOysTimd5W92eC4oSLHUyG6HckZLMzxnPl9_6o4B8onWKEanxEC36soaWvVqgCqWgqpP6knl5biJVXzpHKDUX0EPAS7YYmR5w6CoHX4ymMo31G4SwPuO-cg-HFkfsA.png" class="kg-image" alt loading="lazy"></figure><ul><li>Faster processing: No more checksum, so routers don&apos;t have to do a recalculation for every packet. </li><li>Faster routing: More summary routes and hierarchical routes. (Don&apos;t know what that is? No stress. Summary route = combining multiple IPs so you don&apos;t need all the addresses, just the general direction based on the first part of the address. Ditto with routes, since IPv6 is globally unique you can have small and efficient backbone routing.)</li><li>QoS: Traffic Class and Flow Label fields make QoS easier. </li><li>Auto-addressing. This allows IPv6 hosts on a LAN to connect without a router or DHCP server. </li><li>You can add IPsec to IPv6 with the Authentication Header and Encapsulating Security Payload. </li></ul><p>Finally the biggest one: <strong>because IPv6 addresses are free and IPv4 ones are not. </strong></p><h3 id="setting-up-an-ipv6-only-server">Setting up an IPv6-Only Server</h3><p>The actual setup process was simple. I provisioned a Debian box and selected &quot;IPv6&quot;. Then I got my first surprise. My box didn&apos;t get <em>an</em> IPv6 address. I was given a /64 of addresses, which is <strong>18,446,744,073,709,551,616. 
</strong>It is good to know that my small ARM server could scale to run all the network infrastructure for every company I&apos;ve ever worked for, on all public addresses. </p><p>Now this sounds wasteful, but when you look at how IPv6 works it really isn&apos;t. Since IPv6 is much less &quot;chatty&quot; than IPv4, even if I had 10,000 hosts on this network it wouldn&apos;t matter. As discussed <a href="https://datatracker.ietf.org/doc/rfc5375/">here</a>, it actually makes sense to keep all the IPv6 space, even if at first it comes across as insanely wasteful. So just don&apos;t think about how many addresses are getting handed to each device. </p><p><strong>Important: resist the urge to optimize address utilization. </strong>Talking to more experienced networking folks, this seems to be a common trap people fall into. We&apos;ve all spent so much time worrying about how much space we have remaining in an IPv4 block and designing around that problem. That issue doesn&apos;t exist anymore. A /64 is the smallest subnet you should configure on an interface. </p><p>Attempting to use a longer prefix (meaning a smaller subnet), which is something I&apos;ve heard people try with a /68 or a /96, can break stateless address auto-configuration. Your mentality should be a /48 per site: that&apos;s what the Regional Internet Registries hand out when allocating IPv6. When thinking about network organization, you also need to think about the nibble boundary. (I know, it sounds like I&apos;m making shit up now.) A nibble is four bits, one hex character, so subnetting on nibble boundaries keeps your prefixes aligned to whole hex digits and makes IPv6 much easier to read. </p><p>Let&apos;s say you have 2402:9400:10::/48.
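</p><p>You can sanity-check this kind of nibble-boundary carving with Python&apos;s <code>ipaddress</code> module (purely illustrative, using the prefix below):</p><pre><code>import ipaddress

site = ipaddress.ip_network("2402:9400:10::/48")

# Carve the /48 into /64s: the fourth hextet becomes the subnet ID
lans = site.subnets(new_prefix=64)
print(next(lans))  # 2402:9400:10::/64
print(next(lans))  # 2402:9400:10:1::/64

# Same idea one nibble higher up: /52s step by a whole hex digit
blocks = list(site.subnets(new_prefix=52))
print(blocks[1])   # 2402:9400:10:1000::/52</code></pre><p>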
You would divide it up as follows if you wanted a /64 for each box on a flat network.</p><!--kg-card-begin: html--><table id="v6SubnetTable"><tbody><tr><th id="leftcol">Subnet #</th><th>Subnet Address</th></tr><tr><td id="leftcol">0</td><td>2402:9400:10<span style="color:#1f77b4">:</span>:/64</td></tr><tr><td id="leftcol">1</td><td>2402:9400:10<span style="color:#1f77b4">:1</span>::/64</td></tr><tr><td id="leftcol">2</td><td>2402:9400:10<span style="color:#1f77b4">:2</span>::/64</td></tr><tr><td id="leftcol">3</td><td>2402:9400:10<span style="color:#1f77b4">:3</span>::/64</td></tr><tr><td id="leftcol">4</td><td>2402:9400:10<span style="color:#1f77b4">:4</span>::/64</td></tr><tr><td id="leftcol">5</td><td>2402:9400:10<span style="color:#1f77b4">:5</span>::/64</td></tr></tbody></table><!--kg-card-end: html--><p>A /52 works a similar way. </p><!--kg-card-begin: html--><table id="v6SubnetTable"><tbody><tr><th id="leftcol">Subnet #</th><th>Subnet Address</th></tr><tr><td id="leftcol">0</td><td>2402:9400:10<span style="color:#1f77b4">:</span>:/52</td></tr><tr><td id="leftcol">1</td><td>2402:9400:10<span style="color:#1f77b4">:1</span>000::/52</td></tr><tr><td id="leftcol">2</td><td>2402:9400:10<span style="color:#1f77b4">:2</span>000::/52</td></tr><tr><td id="leftcol">3</td><td>2402:9400:10<span style="color:#1f77b4">:3</span>000::/52</td></tr><tr><td id="leftcol">4</td><td>2402:9400:10<span style="color:#1f77b4">:4</span>000::/52</td></tr><tr><td id="leftcol">5</td><td>2402:9400:10<span style="color:#1f77b4">:5</span>000::/52</td></tr></tbody></table><!--kg-card-end: html--><p>You can still tell at a glance which subnet you are looking at. </p><p>Alright, I&apos;ve got my box ready to go. Let&apos;s try to set it up like a normal server.</p><p><strong>Problem 1 - I can&apos;t SSH in</strong></p><p>This was a predictable problem. Neither my work nor my home ISP supports IPv6.
So it&apos;s great that I have this box set up, but now I can&apos;t really do anything with it. Fine, I attach an IPv4 address for now, SSH in, and set up <code>cloudflared</code> to run a tunnel. Presumably they&apos;ll handle the conversion on their side. </p><p>Except that isn&apos;t how Cloudflare rolls. Imagine my surprise when the tunnel collapsed the moment I removed the IPv4 address. By default the <code>cloudflared</code> utility assumes IPv4, and you need to go in and edit the systemd service file to add: <code>--edge-ip-version 6</code>. After this, the tunnel is up and I&apos;m able to SSH in. </p><p><strong>Problem 2 - I can&apos;t use GitHub</strong></p><p>Alright, so I&apos;m on the box. Now it&apos;s time to start setting things up. I run my server setup script and it immediately fails. It&apos;s failing to pull the installation script for <a href="https://github.com/ddworken/hishtory">hishtory</a>, a great shell history utility I use on all my personal stuff, from GitHub. &quot;Certainly that can&apos;t be right. GitHub must support IPv6?&quot; </p><p><a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwjmhd_jm7uAAxWeR_EDHQrcA1YQFnoECCEQAQ&amp;url=https%3A%2F%2Fgithub.com%2Forgs%2Fcommunity%2Fdiscussions%2F10539&amp;usg=AOvVaw3OKRXbLJZuGALGB7e7VzMi&amp;opi=89978449">Nope.</a> Alright, fine. It seems REALLY bad that the service the entire internet uses to release software doesn&apos;t work with IPv6, but you know Microsoft is broke and also only cares about fake AI now, so whatever. <a href="https://www.transip.eu/knowledgebase/entry/5277-using-transip-github-ipv6-proxy/">I ended up using the TransIP GitHub Proxy, which worked fine.</a> Now I have access to GitHub. But then Python fails with <code>urllib.error.URLError: &lt;urlopen error [Errno 101] Network is unreachable&gt;</code>. Alright, I give up on this.
My guess is the version of Python 3 in Debian doesn&apos;t like IPv6, but I&apos;m not in the mood to troubleshoot it right now. </p><p><strong>Problem 3 - Can&apos;t set up Datadog</strong></p><p>Let&apos;s do something more basic. Certainly I can set up Datadog to keep an eye on this box. I don&apos;t need a lot of metrics, just a few historical load numbers. Go to Datadog, log in and start to walk through the process. Immediately collapses. The simple setup has you run <code>curl -L <a href="https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh">https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh</a></code>. Now S3 supports IPv6, so what the fuck?</p><pre><code>curl -v https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh
*   Trying [64:ff9b::34d9:8430]:443...
*   Trying 52.216.133.245:443...
* Immediate connect fail for 52.216.133.245: Network is unreachable
*   Trying 54.231.138.48:443...
* Immediate connect fail for 54.231.138.48: Network is unreachable
*   Trying 52.217.96.222:443...
* Immediate connect fail for 52.217.96.222: Network is unreachable
*   Trying 52.216.152.62:443...
* Immediate connect fail for 52.216.152.62: Network is unreachable
*   Trying 54.231.229.16:443...
* Immediate connect fail for 54.231.229.16: Network is unreachable
*   Trying 52.216.210.200:443...
* Immediate connect fail for 52.216.210.200: Network is unreachable
*   Trying 52.217.89.94:443...
* Immediate connect fail for 52.217.89.94: Network is unreachable
*   Trying 52.216.205.173:443...
* Immediate connect fail for 52.216.205.173: Network is unreachable</code></pre><p>It&apos;s not S3 or the box, because I can connect to the test S3 bucket AWS provides just fine.</p><pre><code>curl -v  http://s3.dualstack.us-west-2.amazonaws.com/
*   Trying [2600:1fa0:40bf:a809:345c:d3f8::]:80...
* Connected to s3.dualstack.us-west-2.amazonaws.com (2600:1fa0:40bf:a809:345c:d3f8::) port 80 (#0)
&gt; GET / HTTP/1.1
&gt; Host: s3.dualstack.us-west-2.amazonaws.com
&gt; User-Agent: curl/7.88.1
&gt; Accept: */*
&gt;
&lt; HTTP/1.1 307 Temporary Redirect
&lt; x-amz-id-2: r1WAG/NYpaggrPl3Oja4SG1CrcBZ+1RIpYKivAiIhiICtfwiItTgLfm6McPXXJpKWeM848YWvOQ=
&lt; x-amz-request-id: BPCVA8T6SZMTB3EF
&lt; Date: Tue, 01 Aug 2023 10:31:27 GMT
&lt; Location: https://aws.amazon.com/s3/
&lt; Server: AmazonS3
&lt; Content-Length: 0
&lt;
* Connection #0 to host s3.dualstack.us-west-2.amazonaws.com left intact</code></pre><p>Fine, I&apos;ll do it the manual way through apt. </p><p><code>0% [Connecting to apt.datadoghq.com (18.66.192.22)]</code></p><p>Goddamnit. Alright, Datadog is out. It&apos;s at this point I realize the experiment of trying to go IPv6 only isn&apos;t going to work. Almost nothing seems to work right without proxies and hacks. I&apos;ll try to stick as much as I can on IPv6, but going exclusive isn&apos;t an option at this point. </p><h3 id="nat64">NAT64</h3><p>In order to access IPv4 resources from an IPv6-only host, you need to go through a DNS64+NAT64 service: DNS64 synthesizes AAAA records that point into a translation prefix (the well-known one is 64:ff9b::/96, which you can spot in the curl output above), and the NAT64 gateway translates that traffic to IPv4. I ended up using this one: <a href="https://nat64.net/">https://nat64.net/</a>. Immediately all my problems stopped and I was able to access resources normally. I am a little nervous about relying exclusively on what appears to be a hobby project for accessing critical internet resources, but since nobody upstream of me seems to care about IPv6 I don&apos;t think I have a lot of choice. </p><p>I am surprised there aren&apos;t more of these. This is the best list I was able to find:</p><figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/08/image-4.png" class="kg-image" alt loading="lazy" width="546" height="272"></figure><p>Most of them seem to be gone now. Dresel&apos;s link doesn&apos;t work, Trex in my testing had problems, August Internet is gone, most of the Go6lab test devices are down, and Tuxis worked but launched the service in 2019 and seems to have had no further interaction with it. Basically, Kasper Dupont seems to be the only person on the internet with any sort of widespread interest in allowing IPv6 to actually work. Props to you, Kasper. </p><p>One person props up this entire part of the internet.</p><h3 id="kasper-dupont">Kasper Dupont</h3><p>So I was curious about Kasper and emailed him to ask a few questions. You can see that back and forth below.
</p><blockquote>Me: I found the Public NAT64 service super useful in the transition but would love to know a little bit more about why you do it.<br><br>Kasper: I do it primarily because I want to push IPv6 forward. For a few years I had the opportunity to have a native IPv6-only network at home with DNS64+NAT64, and I found that to be a pleasant experience which I wanted to give more people a chance to experience.<br><br>When I brought up the first NAT64 gateway it was just a proof of concept of a NAT64 extension I wanted to push. The NAT64 service took off, the extension - not so much.<br><br>A few months ago I finally got native IPv6 at my current home, so now I can use my own service in a fashion which much more resembles how my target users would use it.<br><br>Me: You seem to be one of the few remaining free public services like this on the internet and would love to know a bit more about what motivated you to do it, how much it costs to run, anything you would feel comfortable sharing.<br><br>Kasper: For my personal products I have a total of 7 VMs across different hosting providers. Some of them I purchase from Hetzner at 4.51 Euro per month: <a href="https://hetzner.cloud/?ref=fFum6YUDlpJz" rel="noopener noreferrer">https://hetzner.cloud/?ref=fFum6YUDlpJz</a><br><br>The other VMs are a bit more expensive, but not a lot.<br><br>Out of those VMs the 4 are used for the NAT64 service and the others are used for other IPv6 transition related services. For example I also run this service on a single VM: <a href="http://v4-frontend.netiter.com/" rel="noopener noreferrer">http://v4-frontend.netiter.com/</a><br><br>I hope to eventually make arrangements with transit providers which will allow me to grow the capacity of the service and make it profitable such that I can work on IPv6 full time rather than as a side gig.
The ideal outcome of that would be that IPv4-only content providers pay the cost through their transit bandwidth payments.<br><br>Me: Any technical details you would like to mention would also be great<br><br>Kasper: That&apos;s my kind of audience :-)<br><br>I can get really really technical.<br><br>I think what primarily sets my service aside from other services is that each of my DNS64 servers is automatically updated with NAT64 prefixes based on health checks of all the gateways. That means the outage of any single NAT64 gateway will be mostly invisible to users. This also helps with maintenance. I think that makes my NAT64 service the one with the highest availability among the public NAT64 services.<br><br>The NAT64 code is developed entirely by myself and currently runs as a user mode daemon on Linux. I am considering porting the most performance critical part to a kernel module.<br></blockquote><h3 id="this-site">This site</h3><p>Alright, so I got the basics up and running. In order to pull Docker containers over IPv6 you need to add <code>registry.ipv6.docker.com/library/</code> to the front of the image name. So for instance: <br><code>image: mysql:8.0</code> becomes <code>image: registry.ipv6.docker.com/library/mysql:8.0</code></p><p>Docker warns you this setup isn&apos;t production ready. I&apos;m not really sure what that means for Docker. Presumably if it were to stop working you should be able to just pull normally?</p><p>Once that was done, I set up the site with an AAAA DNS record and allowed Cloudflare to proxy it, meaning they advertise the site over both IPv4 and IPv6 and bring all the traffic to my IPv6-only origin. One thing I did change: previously I was using the Caddy webserver, but since I now have a hard reliance on Cloudflare for most of my traffic, I switched to Nginx. One nice thing you can do now that you know all traffic is coming from Cloudflare is switch how SSL works.
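</p><p>For reference, the Authenticated Origin Pulls half of that looks roughly like this in the Nginx config (the file paths are illustrative; the directives are the real ones):</p><pre><code># Terminate TLS with the long-lived Cloudflare Origin Certificate
ssl_certificate     /etc/nginx/ssl/cloudflare-origin.pem;
ssl_certificate_key /etc/nginx/ssl/cloudflare-origin.key;

# Authenticated Origin Pulls: only clients presenting a certificate
# signed by the Cloudflare origin-pull CA (i.e. Cloudflare itself)
# are allowed to connect
ssl_client_certificate /etc/nginx/ssl/origin-pull-ca.pem;
ssl_verify_client on;</code></pre><p>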
</p><p>Now I have an Origin Certificate from Cloudflare hard-loaded into Nginx with Authenticated Origin Pulls set up so that I know for sure all traffic is running through Cloudflare. The certificate is signed for 15 years, so I can feel pretty confident sticking it in my secrets management system and not thinking about it ever again. For those that are interested there is a tutorial here on how to do it: <a href="https://www.digitalocean.com/community/tutorials/how-to-host-a-website-using-cloudflare-and-nginx-on-ubuntu-22-04">https://www.digitalocean.com/community/tutorials/how-to-host-a-website-using-cloudflare-and-nginx-on-ubuntu-22-04</a></p><p>Alright the site is back up and working fine. It&apos;s what you are reading right now, so if it&apos;s up then the system works. </p><h3 id="unsolved-problems">Unsolved Problems</h3><ul><li>My containers still can&apos;t communicate with IPv4 resources even though they&apos;re on an IPv6 network with an IPv6 bridge. The DNS64 resolution is working, and I&apos;ve added fixed-cidr-v6 into Docker. I can talk to IPv6 resources just fine, but the NAT64 conversion process doesn&apos;t work. I&apos;m going to keep plugging away at it. </li><li>Before you ping me I did add NAT with ip6tables. </li><li>SMTP server problems. I haven&apos;t been able to find a commercial SMTP service that has an AAAA record. Mailgun and SES were both duds as were a few of the smaller ones I tried. Even Fastmail didn&apos;t have anything that could help me. If you know of one please let me know: <a href="https://c.im/@matdevdug">https://c.im/@matdevdug</a></li></ul><h3 id="why-not-stick-with-ipv4">Why not stick with IPv4?</h3><p>Putting aside &quot;because we&apos;re running out of addresses&quot; for a minute. If we had adopted IPv6 earlier, the way we do infrastructure could be radically different. 
So often companies use technology like load balancers and tunnels not because they actually need anything that these things do, but because they need some sort of logical division between private IP ranges and a public IP address they can stick in a DNS A record. </p><p>If you break a load balancer into its basic parts, it is doing two things: it is distributing incoming packets onto the back-end servers, and it is checking the health of those servers and taking unhealthy ones out of the rotation. Nowadays they often handle things like SSL termination and metrics, but those aren&apos;t requirements to be called a load balancer. </p><p>There are many ways to load balance, but the most common are as follows:</p><ol><li>Round-robin of connection requests. </li><li>Weighted Round-Robin, with different servers getting more or less of the traffic.</li><li>Least-Connection, with servers that have the fewest connections getting more requests. </li><li>Weighted Least-Connection, same thing but you can tilt it towards certain boxes. </li></ol><p>What you notice is there isn&apos;t anything there that requires, or really even benefits from, a private IP address vs a public IP address. Configuring the hosts to accept traffic from only one source (the load balancer) is pretty simple and relatively cheap to do, computationally speaking. A lot of the infrastructure designs we&apos;ve been forced into, things like VPCs, NAT gateways, public vs private subnets, could have been skipped or relied on less. </p><p>The other irony is that IP whitelisting, which is currently a broken security practice that is mostly a waste of time since we all use IP addresses owned by cloud providers, would actually be something that mattered.
The process for companies to purchase a /44 for themselves would have gotten easier with demand, and it would have been more common for people to go and buy a block of IPs from the American Registry for Internet Numbers (ARIN), R&#xE9;seaux IP Europ&#xE9;ens Network Coordination Centre (RIPE), or the Asia-Pacific Network Information Centre (APNIC). </p><p>You would never need to think &quot;well, is Google going to buy more IP addresses&quot; or &quot;I need to monitor GitHub&apos;s support page to make sure they don&apos;t add more later&quot;. A company would have one block it would use for its entire business until the end of time. Container systems wouldn&apos;t need to assign internal IP addresses on each host; it would be trivial to allocate chunks of public IPs for them to use and advertise over standard public DNS as needed. </p><p>Obviously I&apos;m not saying private networks serve no function. My point is that a lot of the network design we&apos;ve adopted isn&apos;t based on necessity but on forced design. I suspect we would have ended up designing applications with the knowledge that they sit on the open internet vs relying entirely on the security of a private VPC. Given how security exploits work, this probably would have been a benefit to overall security and design. </p><p>So even if cost and availability aren&apos;t concerns for you, allowing your organization more ownership and control over how your network functions has real, measurable value. </p><h3 id="is-this-gonna-get-better">Is this gonna get better?</h3><p>So this sucks. You either pay cloud providers more money or you get a broken internet. My hope is that the folks who don&apos;t want to pay push more IPv6 adoption, but it&apos;s also a shame that it has taken so long for us to get here. All these problems and issues could have been addressed gradually; instead it&apos;s going to be something where people freak out until the teams that own these resources make the required changes.
</p><p>I&apos;m hopeful the end result might be better. I think at the very least it might open up more opportunities for smaller companies looking to establish themselves permanently with an IP range that they&apos;ll own forever, plus as IPv6 gets more mainstream it will (hopefully) get easier for customers to live with. But I have to say, right now this is so broken it&apos;s kind of amazing. </p><p>If you are a small company looking to not pay the extra IP tax, set aside a lot of time to solve the myriad of problems you are going to encounter. </p><p>Thoughts/corrections/objections: <a href="https://c.im/@matdevdug">@matdevdug@c.im</a></p>]]></content:encoded></item><item><title><![CDATA[Serverless Functions Post-Mortem]]></title><description><![CDATA[<p>Around 2016, the term &quot;serverless functions&quot; started to take off in the tech industry. In short order, it was presented as the undeniable future of infrastructure. It&apos;s the ultimate solution to redundancy, geographic resilience, load balancing and autoscaling. Never again would we need to patch, tweak</p>]]></description><link>https://matduggan.com/serverless-functions-post-mortem/</link><guid isPermaLink="false">64c8b834c9ed23000104a942</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 28 Jul 2023 14:00:19 GMT</pubDate><content:encoded><![CDATA[<p>Around 2016, the term &quot;serverless functions&quot; started to take off in the tech industry. In short order, it was presented as the undeniable future of infrastructure. It&apos;s the ultimate solution to redundancy, geographic resilience, load balancing and autoscaling. Never again would we need to patch, tweak or monitor an application. The cloud providers would do it; all we had to do was hit a button and deploy to the internet.</p><p>I was introduced to it the way most infrastructure technology is presented to me: as a veiled threat.
&quot;Looks like we won&apos;t need as many Operations folks in the future with X&quot; is typically how executives discuss it. Early in my career this talk filled me with fear, but now that I&apos;ve heard it 10+ times, I adopt a &quot;wait and see&quot; mentality. I was told the same thing about VMs, going from IBM and Oracle to Linux, going from owning the datacenter to renting a cage to going to the cloud. Every time, it seems, I survive.</p><p>Even as far as tech hype goes, serverless functions picked up steam fast. Technologies like AWS Lambda and GCP Cloud Functions were adopted by orgs I worked at unusually fast compared to other technology. Conference after conference and expert after expert proclaimed that serverless was inevitable. It felt like AWS Lambda and others were being adopted for production workloads at a breakneck pace.</p><p>Then, without much fanfare, it stopped. Other serverless technologies like GKE Autopilot and ECS are still going strong, but the idea of a serverless function replacing the traditional web framework or API has almost disappeared. Even cloud providers pivoted, positioning the tools as more &quot;glue between services&quot; than the services themselves. The addition of being able to run Docker containers as functions seemed to help a bit, but it remains a niche component of the API world.</p><p>What happened? Why were so many smart people wrong? What can we learn as a community about hype and marketing around new tools?</p><h3 id="promise-of-serverless">Promise of serverless</h3><figure class="kg-card kg-image-card"><img src="https://dashbird.io/wp-content/uploads/2020/10/when-serverless-apps-will-fail-typical-architecture.png" class="kg-image" alt loading="lazy"></figure><p>Above we see a serverless application as initially pitched. Users would ingress through the API Gateway technology, which handles everything from traffic management, CORS, authorization and API version management.
It basically serves as the web server and framework all in one. Easy to test new versions with multiple versions of the same API at the same time, easy to monitor and easy to set up.</p><p>After that comes the actual serverless function. These could be written in whatever language you wanted and could run for up to 15 minutes as of 2023. So instead of having, say, a Rails application where you are combining the Model-View-Controller into a monolith, you can break it into each route and use different tools to solve for each situation.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2020/10/02/Lambda-CRUD.png" class="kg-image" alt loading="lazy"><figcaption>This suggests how one might structure a new PHP application, for instance.</figcaption></figure><p>Since these were only invoked in response to a request coming from a user, it was declared a cost savings. You weren&apos;t paying for server resources you weren&apos;t using, unlike traditional servers where you would provision the expected capacity beforehand based on a guess. The backend would also endlessly scale, meaning it would be impossible to overwhelm the service with traffic. No more needing to worry about DDoS or floods of traffic.</p><p>Finally, at the end, would be a database managed by your cloud provider. All in all you aren&apos;t managing any element of this process, so no servers or software updates. You could deploy a thousand times a day and precisely control the rollout and rollback of code. Each function could be written in the language that best suited it. So maybe your team writes most things in Python or Ruby but then goes back through for high-volume routes and does those in Golang.</p><p>Combined with technologies like S3 and DynamoDB along with SNS you have a compelling package. You could still send messages between functions with SNS topics.
Storage was effectively unlimited with S3 and you had a reliable and flexible key-value store with DynamoDB. Plus you ditched the infrastructure folks, the monolith, any dependency on the host OS, and you were billed by your cloud provider for your actual usage down to the millisecond.</p><h3 id="initial-problems">Initial Problems</h3><p>The initial adoption of serverless was challenging for teams, especially teams used to monolith development.</p><ul><li>Local development. Typically a developer pulls down the entire application they&apos;re working on and runs it on their device to be able to test quickly. With serverless, that doesn&apos;t really work since the application is potentially thousands of different services written in different languages. You <em>can</em> do this with serverless functions but it&apos;s way more complicated.</li><li>Hard to set resources correctly. How much memory a function needs under testing can be very different from how much it needs in production. Developers tended to set their limits high to avoid problems, wiping out much of the cost savings. There is no easy way to adjust functions based on real-world data outside of doing it by hand one by one.</li><li>AWS did make this process easier with <a href="https://docs.aws.amazon.com/lambda/latest/operatorguide/profile-functions.html">AWS Lambda Power Tuning</a> but you&apos;ll still need to roll out the changes yourself function by function. Since even a medium-sized application can be made up of 100+ functions, this is a non-trivial thing to do. Plus these aren&apos;t static things; changes can get rolled out that dramatically change the memory usage with no warning.</li><li>Is it working? Observability is harder with a distributed system vs a monolith, and serverless just added to that. Metrics are less useful, as are old systems like uptime checks. You need, certainly in the beginning, to rely on logs and traces a lot more.
For smaller teams especially, the monitoring shift from &quot;uptime checks + grafana&quot; to a more complex log-based profile of health was a rough adjustment.</li></ul><p>All these problems were challenges, but it seems many teams were able to get through them with momentum intact. We started to see a lot of small applications launch that were serverless-function based, from APIs to hobby developer projects. All of this is reflected by the Datadog State of Serverless report for 2020, which <a href="https://www.datadoghq.com/state-of-serverless-2020/">you can see here.</a></p><p>At this point everything seems great. 80% of AWS container users have adopted Lambda in some capacity, paired with SQS and DynamoDB. NodeJS and Python are the dominant languages, which is a little eyebrow raising. This suggests that picking the right language for the job didn&apos;t end up happening; instead teams picked the language easiest for the developer. But that&apos;s fine, that is also an optimization.</p><p>What happened? What went wrong?</p><h3 id="production-problems">Production Problems</h3><p>Across the industry we started to hear feedback from teams that had gone hard into serverless functions backing out. I started to see problems in my own teams that had adopted serverless. The following trends came up in no particular order.</p><ul><li>Latency. Traditional web frameworks and containers are fast at processing requests, typically hitting latency in database calls. Serverless functions could be slow to respond, depending on when they were last invoked. This led to teams needing to keep &quot;functions warm.&quot; What does this mean?</li></ul><figure class="kg-card kg-image-card"><img src="https://docs.aws.amazon.com/images/lambda/latest/operatorguide/images/perf-optimize-figure-1.png" class="kg-image" alt loading="lazy"></figure><p>When the function gets a request it downloads the code and gets ready to run it.
After that, the function stays warm and ready to rerun for a period of time, until it is recycled and the whole process has to happen again. The way around this at first was typically an EventBridge rule invoking the function every minute to keep it warm. This kind of works, but not really.</p><p>Later Provisioned Concurrency was added, which is effectively... a server. It&apos;s a VM where your code is already loaded. You are limited per account in how many functions you can have set to Provisioned Concurrency, so it&apos;s hardly a silver bullet. Again, none of this happens automatically, so it&apos;s up to someone to go through and carefully tune each function to ensure it is in the right category.</p><ul><li>Scaling. Serverless functions don&apos;t scale to infinity. You can scale concurrency levels up every minute by an additional 500 microVMs. But it is very possible for one function to eat all of the capacity for every other function. Again it requires someone to go through and understand what Reserved Concurrency each function needs and divide that up as a component of the whole.</li></ul><p>In addition, serverless functions don&apos;t magically get rid of database concurrency limits. So you&apos;ll hit situations where a spike of traffic somewhere else kills your ability to access the database. This is also true of monoliths, but it is typically easier to see this happening when the logs and metrics are all flowing from the same spot.</p><p>In practice it is <em>far harder</em> to scale serverless functions than an autoscaling group. With autoscaling groups I can just add more servers and be done with it. With serverless functions I need an in-depth understanding of each route of my app and where those resources are being spent. Traditional VMs give you a lot of flexibility in dealing with spikes, but serverless functions don&apos;t.</p><p>There are also tiers of scaling.
You need to think about KMS throttling, serverless function concurrency limits, database connection limits, and slow queries. Some of these don&apos;t go away with traditional web apps, but many do. Solutions started to pop up but they often weren&apos;t great.</p><p>Teams switched from always having a detailed response from the API to just returning a 200 showing that the request had been received. That allowed teams to stick stuff into an SQS queue and process it later. This works unless there is a problem in processing, breaking most clients&apos; expectation that a 200 means the request succeeded, not merely that it was received.</p><p>Functions often needed to be rewritten as you went, moving everything you could to the initialization phase and keeping all the connection logic out of the handler code. The initial momentum of serverless was crashing into the rewrites as teams learned painful lesson after painful lesson.</p><ul><li>Price. Instead of being fire and forget, serverless functions proved to be <em>very expensive</em> at scale. Developers don&apos;t think of routes of an API in terms of how many seconds they need to run and how much memory they use. It was a change in thinking, and certainly compared to flat per-month EC2 pricing, the spikes in traffic and usage were an unpleasant surprise for a lot of teams.</li></ul><p>Combined with the cost of RDS and API Gateway, you are looking at a lot of cash going out every month.</p><p>The other cost was the requirement that you have a full suite of cloud services identical to production for testing. How do you test your application end to end with serverless functions? You need to stand up the exact same thing as production. You could test traditional applications on your laptop and run tests against them in the CI/CD pipeline before deployment. With serverless stacks you need to rely a lot more on Blue/Green deployments and monitoring failure rates.</p><ul><li>Slow deployments.
Pushing out a ton of new Lambdas is a time-consuming process. I&apos;ve waited 30+ minutes for a medium-sized application. God knows how long people running massive stacks were waiting.</li><li>Security. Not running the server is great, but you still need to run all the dependencies. It&apos;s possible for teams to spawn tons of functions with different versions of the same dependencies, or even to choose different libraries. This makes auditing your dependency security very hard, even with automation checking your repos. It is more difficult to guarantee that every compromised version of X dependency is removed from production than it would be for a smaller number of traditional servers.</li></ul><h3 id="why-didnt-this-work">Why didn&apos;t this work?</h3><p>I think three primary mistakes were made.</p><ol><li>The complexity of running a server in a modern cloud platform was massively overstated. Especially with containers, running a Linux box of some variety and pushing containers to it isn&apos;t that hard. All the cloud platforms offer load balancers, letting you offload SSL termination, so really <em>any</em> Linux box with Podman or Docker can run listening on that port until the box has some sort of error.<br><br>Setting up Jenkins to be able to monitor Docker Hub for an image change and trigger a deployment is not that hard. If the servers are just doing that, setting up a new box doesn&apos;t require the deep infrastructure skills that serverless function advocates were talking about. The &quot;skill gap&quot; just didn&apos;t exist in the way that people were talking about. <br></li><li>People didn&apos;t think critically about price. Serverless functions <em>look</em> cheap, but we never think about how many seconds or minutes a server is busy. That isn&apos;t how we&apos;ve been conditioned to think about applications and it showed.
Often the first bill was a shocker, meaning the savings from maintenance had to be massive and they just weren&apos;t.</li><li>Really hard to debug problems. Relying on logs and X-Ray to figure out what went wrong is just much harder than pulling the entire stack down to your laptop and triggering the same requests. It is a new skill and one that people had not developed up to that point. The first time you have a production issue that would have been trivial to fix in the old monolith design style but drags on for a long time in the serverless function world, the enthusiasm from leadership evaporates very quickly.</li></ol><h3 id="conclusion">Conclusion</h3><p>Serverless functions fizzled out and it&apos;s important for us as an industry to understand why the hype wasn&apos;t real. Important questions were skipped over in an attempt to increase buy-in to cloud platforms and simplify the deployment and development story for teams. Hopefully this provides us a chance to be more skeptical of promises like this in the future. We should have taken a much more wait-and-see approach to this technology instead of rushing straight in and hitting all the sharp edges right away.</p><p>Currently serverless functions live on as what they&apos;re best at: glue between different services, triggers for longer-running jobs, or very simple platforms that allow tight cost control for single developers putting together something for public use. If you want to use something serverless for more, you would be better off looking at something like ECS with Fargate or Cloud Run in GCP.</p>]]></content:encoded></item><item><title><![CDATA[CodePerfect 95 Review]]></title><description><![CDATA[<p>I have a long history of loving text editors. Their simplicity and purity of design is appealing to me, as is their long lifespans.
Writing a text editor that becomes popular is really a lifelong responsibility and opportunity, which is just very cool to me. They become subcultures unto themselves.</p>]]></description><link>https://matduggan.com/codeperfect-95-review/</link><guid isPermaLink="false">64c8b834c9ed23000104a93e</guid><dc:creator><![CDATA[Mathew Duggan]]></dc:creator><pubDate>Fri, 14 Jul 2023 14:00:08 GMT</pubDate><media:content url="https://matduggan.com/content/images/2023/07/SCR-20230705-fdb.png" medium="image"/><content:encoded><![CDATA[<img src="https://matduggan.com/content/images/2023/07/SCR-20230705-fdb.png" alt="CodePerfect 95 Review"><p>I have a long history of loving text editors. Their simplicity and purity of design is appealing to me, as is their long lifespans. Writing a text editor that becomes popular is really a lifelong responsibility and opportunity, which is just very cool to me. They become subcultures unto themselves. IDEs I have less love for. </p>
<p>There&apos;s nothing wrong with using one; in fact, I use them for troubleshooting on a pretty regular basis. I just haven&apos;t found one I love yet. They either have a million plugins (so I&apos;m constantly getting notifications for updates) or thousands upon thousands of features, so even to get started I need to watch a few YouTube tutorials and read a dozen pages of docs. I love JetBrains products, but the first time I tried to use PyCharm for a serious project I felt like I was launching a shuttle into space. </p>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/6b234c5e-53ff-41fa-04d1-dce81b290500/public" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"><figcaption><span>Busy is a bit of an understatement</span></figcaption></figure>
<p>However I find myself writing a lot of Golang lately, as it has become the common microservice language across a couple of jobs now. I actually like it, but I&apos;m always looking for an IDE to help me write it faster and better. My workflow is typically to write it in Helix or Vim and then use the IDE for inspecting the code before putting it in a commit, or for faster debugging than have two tabs open in the Tmux and switching between them. It works, but it&apos;s not exactly an elegant solution. </p>
<p>I stumbled across CodePerfect 95 and fell in love with the visual styling. So I had to give it a try. Their site is here: <a href="https://codeperfect95.com/">https://codeperfect95.com/</a></p>
<h3 id="visuals">Visuals</h3>
<p>It&apos;s hard to overstate how much I love this design. It is very Mac OS 9 in a way that I just was instantly drawn to. Everything from the atypical color choices to the fonts are just classic Apple design. </p>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/19f4a04c-71b7-46cf-8ca2-5c42d6afd600/public" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"><figcaption><span>Mac OS 9</span></figcaption></figure>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-6.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="654" height="805" srcset="https://matduggan.com/content/images/size/w600/2023/07/image-6.png 600w, https://matduggan.com/content/images/2023/07/image-6.png 654w"></figure>
<p>Whoever picked this logo, I was instantly delighted with it. </p>
<figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/71175327-b561-454c-32d6-d8662e696200/public" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"></figure>
<figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/0fffafd5-0c0d-4e94-9df1-01a885e53100/public" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"></figure>
<p>There were a few quibbles. It should respect the system dark/light mode, even if that goes against the design of the application. That&apos;s a user&apos;s preference and should be reflected in some way. </p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-7.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="1278" height="278" srcset="https://matduggan.com/content/images/size/w600/2023/07/image-7.png 600w, https://matduggan.com/content/images/size/w1000/2023/07/image-7.png 1000w, https://matduggan.com/content/images/2023/07/image-7.png 1278w" sizes="(min-width: 720px) 720px"></figure>
<p>Also, as far as I could tell, nothing about the font used or any of the design elements was customizable. This is fine for me, as I actually prefer when tools have strong opinions and present them to me, but I know for some people the ability to switch the monospace font is a big deal. In general there are just not a lot of options, which is great for me but something you should be aware of. </p>
<figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/9bf2469e-190a-42a8-e759-f220299e4a00/public" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"></figure>
<h3 id="usage">Usage</h3>
<p>Alright, so I got a free 7-day trial when I downloaded it and really tried to kick the tires as much as possible, converting over to it for all my work during that period. This app promises speed and delivers. It is as fast as a terminal application and comes with most of the window and tab customization you would typically turn to a tool like Tmux for. </p>
<p>It apparently indexes the project when you open it, but honestly it happened so fast I didn&apos;t even notice what it was doing. As fast as I could open the project and remember what the project was, I could search or do whatever. I&apos;m sure if you work on giant projects that might not be the case, but nothing I threw at the index process seemed to choke it at all. </p>
<p>It supports panes and tabs, with <code>Cmd+number</code> to switch panes. It&apos;s super fast and I found it very comfortable. The only thing that is slightly strange is that when you open a new pane, it shows absolutely nothing. No file path, no &quot;click here to open&quot;. You need to understand that when you switch to an empty pane you have to open a file. This is what the pane view looks like:</p>
<figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/958380b6-53e0-420a-274f-96b841143b00/public" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"></figure>
<p><code>Cmd+P</code> is fuzzy find and works as expected. So if you are used to using Vim to search and open files, this is going to feel very familiar to you. <code>Cmd+T</code> is the Symbol GoTo which works like all of these you have ever used:</p>
<figure class="kg-card kg-image-card"><img src="https://imagedelivery.net/zTZJzgDLaZ7u1hvTz4LleQ/28f3e722-699d-48bc-3a94-d31432e9e900/public" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"></figure>
<p>You can jump to the definition of an identifier, trigger completion, etc. All of this worked exactly like you would expect, and it was very fast and easy to do. I really liked some of the completion features. For instance, Generate Function actually saved me a fair amount of time. </p>
<p>Given:</p>
<pre><code>dog := Dog{}
bark(dog, 1, false)</code></pre>
<p>You can mouse over and generate this:</p>
<pre><code>func bark(v0 Dog, v1 int, v2 bool) {
  panic(&quot;not implemented&quot;)
}</code></pre>
<p>This is their docs example but when I tested it, it seemed to work well. </p>
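<p>To make the workflow concrete: after the stub is generated, you edit the body in place like any other function. Here is a minimal, self-contained sketch of what a finished version might look like (the <code>Dog</code> type and the bark behavior are my own invention for illustration, and I changed the generated signature to return a string so the result is easy to check):</p>

```go
package main

import (
	"fmt"
	"strings"
)

// Dog is a stand-in type matching the docs example.
type Dog struct{ Name string }

// bark fills in the generated stub: it returns the dog's bark
// repeated `times` times, uppercased when loud is true.
func bark(d Dog, times int, loud bool) string {
	w := "woof"
	if loud {
		w = strings.ToUpper(w)
	}
	return d.Name + ": " + strings.TrimSpace(strings.Repeat(w+" ", times))
}

func main() {
	dog := Dog{Name: "Rex"}
	fmt.Println(bark(dog, 1, false)) // prints "Rex: woof"
}
```

<p>The point is less the body itself and more that the IDE writes the signature for you from the call site, so you only supply the logic.</p>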
<p>The font is pretty easy to read but I would have loved to tweak the colors a bit. They went with kind of a muted color scheme, whereas I prefer a strong visual difference between comments and actual code. All the UI elements are black and white with very strong contrast, so making the actual workspace muted and a little hard to read is strange. </p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/SCR-20230706-fjt.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="1464" height="1194" srcset="https://matduggan.com/content/images/size/w600/2023/07/SCR-20230706-fjt.png 600w, https://matduggan.com/content/images/size/w1000/2023/07/SCR-20230706-fjt.png 1000w, https://matduggan.com/content/images/2023/07/SCR-20230706-fjt.png 1464w" sizes="(min-width: 720px) 720px"></figure>
<p>VSCode defaults to a more aggressive and easier to read design, especially in sunlight. </p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-13.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="735" height="612" srcset="https://matduggan.com/content/images/size/w600/2023/07/image-13.png 600w, https://matduggan.com/content/images/2023/07/image-13.png 735w" sizes="(min-width: 720px) 720px"></figure>
<h3 id="builds">Builds</h3>
<p>So one of the primary reasons IDEs are so nice to use is the integrated build system. However, Golang builds are typically pretty straightforward, so there isn&apos;t a lot to report here. It&apos;s basically &quot;what arguments do you pass to <code>go build</code>&quot; saved as a profile. </p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-8.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="621" height="383" srcset="https://matduggan.com/content/images/size/w600/2023/07/image-8.png 600w, https://matduggan.com/content/images/2023/07/image-8.png 621w"></figure>
<p>It works well though. No complaints and stepping through the build errors was easy and fast to do. Not fancy but works like it says on the box. </p>
<h3 id="work-impressions">Work Impressions</h3>
<p>I was able to do everything I would need to do with a typical Golang application inside the IDE, which is not a small task. I liked features like the <a href="https://docs.codeperfect95.com/postfix-completion">Postfix completion</a> which did actually save me a fair amount of time once I started using them. </p>
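<p>For a sense of why completions like this matter in Go specifically: the error-check boilerplate you type dozens of times a day is exactly the kind of thing a postfix completion expands for you. A sketch of the pattern (my own example, not taken from the CodePerfect docs):</p>

```go
package main

import (
	"errors"
	"fmt"
)

// readConfig is a stand-in function that can fail.
func readConfig(path string) (string, error) {
	if path == "" {
		return "", errors.New("empty path")
	}
	return "config for " + path, nil
}

func main() {
	// A postfix completion turns the bare call expression into the
	// full `if err != nil` block below with a couple of keystrokes.
	cfg, err := readConfig("app.yaml")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(cfg)
}
```

<p>Typing that block by hand a few hundred times a week is where the time savings come from.</p>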
<p>However I ended up missing a few of the GoLand features, like Code Coverage checking for tests and built-in support for Kubernetes and Terraform, just because it&apos;s common to touch all the subsystems when I&apos;m working on something, not just Go code. You definitely see some value in having a tool customized for one environment over a general-purpose tool with plugins, but it was a little hard to give up all the customization options of GoLand. Then again it reduces complexity and onboarding time, so it&apos;s a trade-off. </p>
<h3 id="pricing-and-license">Pricing and License</h3>
<p>First with a product like this I like to check the Terms and Conditions. I was surprised that they....basically don&apos;t have any.</p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-9.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="675" height="612" srcset="https://matduggan.com/content/images/size/w600/2023/07/image-9.png 600w, https://matduggan.com/content/images/2023/07/image-9.png 675w"></figure>
<p>Clearly no lawyers were involved in this process, which seems odd. This reads like a Ron Swanson ToS.</p>
<figure class="kg-card kg-image-card"><img src="https://i.kym-cdn.com/photos/images/newsfeed/001/276/727/7e7.jpg" class="kg-image" alt="CodePerfect 95 Review" loading="lazy"></figure>
<p>The way you buy licenses is also a little unusual. It&apos;s an attempt to bridge JetBrains&apos; old perpetual licenses and their current perpetual fallback license. </p>
<pre><code>A key has two parts: a one-time perpetual license, and subscription-based automatic updates. You can choose either one, or both:

    License only
        A perpetual license locked to a particular version.
        After 3 included months of updates, locked to the final version.
    License and subscription
        A perpetual license with access to ongoing updates.
        When your subscription ends, your perpetual license is locked to the final version.
    Subscription only
        Access to the software during your subscription.
        You lose access when your subscription ends.</code></pre>
<p>I&apos;m also not clear what they mean by &quot;cannot be expensed&quot;. </p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-10.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="579" height="759"></figure>
<p>Why can&apos;t I expense it? According to what? You writing on a webpage &quot;you cannot expense it&quot;? This seems like a way to extract more money from people depending on whether they&apos;re using it at work or home. </p>
<p>Jetbrains does something similar but <a href="https://www.jetbrains.com/legal/docs/toolbox/license_personal/">they have an actual license you agree to.</a> There&apos;s no documentation of a license here, so I don&apos;t know if this matters at all. If CodePerfect wants to run their business like this, I guess they can, but they&apos;re going to need to have a document that says something like this:</p>
<pre><code>3.4. This subscription is only for natural persons who are purchasing a subscription to Products using only their own funds. Notwithstanding anything to the contrary in this Agreement, you may not use any of the Products, and this grant of rights shall not be in effect, in the event that you do not pay Subscription fees using your own funds. If any third party pays the Subscription fees or if you expect or receive reimbursement for those fees from any third party, this grant of rights shall be invalid and void.</code></pre>
<p>I feel like $40 for software where I only get 3 months of updates is not an amazing deal. Sublime Text is $99 for 3 years. Nova is $99 for one year. Examining the changelog, it appears they&apos;re still closing relatively big bugs even now, so I would be a tiny bit nervous about being locked forever into whatever version I&apos;m on in three months. <a href="https://docs.codeperfect95.com/changelog/23.06.7">Changelog</a></p>
<p>The subscription was also not a great deal. </p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-11.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="554" height="795"></figure>
<p>So I mean the easiest comparison would be GoLand. </p>
<figure class="kg-card kg-image-card"><img src="https://matduggan.com/content/images/2023/07/image-12.png" class="kg-image" alt="CodePerfect 95 Review" loading="lazy" width="289" height="257"></figure>
<p>$10 a month = $120 for the year and I get the perpetual fallback license. $100 for the year and I get CodePerfect (I understand the annual price break). The pricing isn&apos;t crazy but JetBrains is an established company with a known track record of shipping IDEs. I would be a bit hesitant to shell out for this based on a 7 day trial for a product that has existed for 302 days as of July 5th. I&apos;d rather they charge me $99 for a license with 12 months of updates that just ends instead of a subscription. It&apos;s also strange that they don&apos;t seem to change the currency based on the location of the user. </p>
<p>My issue with all this is that while getting a one-time payment reimbursed is not bad, subscriptions are typically frowned upon as expenses at most places I&apos;ve worked unless they&apos;re training for the entire department. For my own personal usage, I would be hesitant to sign up for a new subscription from an unknown entity, especially when the ToS is a paragraph and the &quot;license&quot; I am agreeing to doesn&apos;t seem to exist. A lot of this is just new-software growing pains, but I hope they&apos;re aware. </p>
<h3 id="conclusion">Conclusion</h3>
<p>CodePerfect 95 is my favorite kind of software. It&apos;s functional yet fun, with some whimsy and joy mixed in with the practical features. It works well and is as fast as promised. I enjoyed my week of using it, finding it nearly as usable as JetBrains GoLand but in a much lighter piece of software. So would I buy it?</p>
<p>I&apos;m hesitant. I want to buy it, but there&apos;s zero chance I could get a legal department to approve this for an enterprise purchase. So my options would be to buy the more expensive version and expense it, or just pay for it myself. Subscription fatigue is a real thing and I will typically pay a 20% premium to not have to deal with it. To avoid a subscription I would need to buy a new license every 3 months, $160 a year in total. </p>
<p>I can&apos;t get there yet. I&apos;ve joined their newsletter and I&apos;ll keep an eye on it. If it is still an actively developed product in six months, I&apos;ll pull the trigger. Switching workflows is a lot of work for me and requires enough time to mentally adjust that I don&apos;t want to fall in love with a tool and then have it disappear. If they did $99 for a one-year license that simply expired, I&apos;d buy it today. </p>