Recently, I’ve been on a search for interactive runbooks. My team has several use-cases for such a tool including:

  • repeatable, runnable procedures
  • a REST notebook for querying and validating APIs
  • documentation of complex API integrations

First, a bit about my team. We’re globally distributed and maintain a growing number of microservices, built mostly with Mulesoft but with some AWS services, Drupal, and Symfony in the mix. We’ve been transitioning from a mostly PHP-based fleet of apps to Mulesoft and an API-led strategy. This means there’s a good deal of PHP knowledge on the team, along with several other languages and technologies (Go, Java, DataWeave, JavaScript, SQL, Python, Bash).

Available Tools

There are a growing number of tools that provide runbook functionality, and there are tradeoffs with each option. Some of the current tools in the runbook category include VS Code Custom Notebooks, Elixir Livebook, Jupyter Notebook, and Emacs org-mode. Below is a table comparing some of the characteristics of each option.

Trait / Tool       VS Code Custom Notebooks    Elixir Livebook    Jupyter Notebook    Emacs org-mode
Supported Tech     Pluggable                   Elixir/Erlang      Python              Polyglot
Format             JSON                        Markdown           JSON                Org Markup
Human Readable     No                          Yes                No                  Yes
GitHub Readable    No                          Yes                Yes                 Yes
Software           VS Code Insiders + Plugin   Elixir Livebook    Jupyter Notebook    Emacs

Criteria Breakdown

The criteria that are most important to my team (not necessarily in this order):

  1. readability as plain text
  2. ease of install and setup
  3. minimal friction to adopt
  4. can execute the technologies we use

Readability as plain text

This means we don’t need to rely on any special technology to use a runbook: we can simply read it and perform the steps needed to complete a procedure. Elixir Livebook and Emacs org-mode both meet this criterion.

Ease of install and setup

VS Code Custom Notebooks requires installing the Insiders release (which carries inherent instability), plus a separate plugin for each type of notebook. We would need to develop custom plugins to cover some of our procedures. Custom Notebooks also leans heavily on the JavaScript packaging ecosystem, which we use infrequently these days.

Elixir Livebook requires installing Elixir. We would need to assemble a custom solution for running our automatable tasks.

Jupyter Notebook requires Python and Pip. We would need to assemble a custom solution for running our automatable tasks.

Emacs with org-mode requires installing Emacs.

Minimal friction to adopt

VS Code Custom Notebooks would involve friction around lack of guarantees with stability, the requirement to create custom notebook plugins, and maintaining NPM dependencies.

Elixir Livebook would require learning a new programming language, and creating a custom solution to run our tasks.

Jupyter Notebook would require many team members to learn a new programming language, and we’d need to create a custom solution to run our tasks.

Emacs org-mode would require learning some basic Emacs conventions and maybe a bit of Emacs Lisp.

Can execute the technologies we use

While any of these options is capable of executing the technologies we use, Emacs org-mode is the most capable out of the box. Org-mode is also mature, with a wide ecosystem of plugins.
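For example, a single org file can mix code blocks in several languages (a sketch; each language’s babel backend must be enabled via org-babel-do-load-languages, as shown in the setup block later in this post):

```
#+BEGIN_SRC shell
echo "from shell"
#+END_SRC

#+BEGIN_SRC python :results output
print("from python")
#+END_SRC
```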

Emacs org-mode

Given the current team context, Emacs org-mode seems like a good choice. If we were working more with Python, JavaScript, Elixir, and/or machine learning, one of the other options would rank higher.

Below is an org file that demonstrates a basic runbook implementation with a code block dependency chain.

* Hello World

  This is a hello world notebook.

  To execute any of the code blocks, place the cursor in the block and press C-c C-c.

** Hello World Setup

   The below code block adds shell execution capability to the org babel session (default is emacs-lisp only). This block has been added as a dependency for subsequent code blocks.
   
   #+NAME: setup
   #+BEGIN_SRC emacs-lisp
   (org-babel-do-load-languages
    'org-babel-load-languages
    '((shell . t)))
   #+END_SRC

   #+RESULTS: setup

** Hello World Shell

   The below code block prints the current Unix timestamp.
   
   #+NAME: hello-shell
   #+BEGIN_SRC shell :var preflight=setup
   date +%s
   #+END_SRC

   #+RESULTS: hello-shell
   : 1649002346

** Hello World REST

   The below code block executes a cURL command to convert the current Unix timestamp (from the above code block) to a date-time via a public API (contrived, but useful as a demonstration).

   #+NAME: hello-rest
   #+BEGIN_SRC shell :var shellstamp=hello-shell
     curl "https://showcase.api.linx.twenty57.net/UnixTime/fromunix?timestamp=$shellstamp"
   #+END_SRC

   #+RESULTS: hello-rest
   : 2022-04-03 16:16:45
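As a local alternative to the API call, date can do the same conversion on its own (a sketch; the -d "@…" syntax is GNU date, so on macOS/BSD you’d use date -u -r "$ts" instead):

```shell
# Convert a Unix timestamp to a UTC date-time locally (GNU date syntax).
ts=1649002346
date -u -d "@$ts" '+%Y-%m-%d %H:%M:%S'
# → 2022-04-03 16:12:26
```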

And here’s a link to the same document as a GitHub Gist (note the results sections are not printed but can be viewed in the raw document):

I recently assembled a CLI app using Go and OpenAPI and wanted to share some of the process.

Summary of Steps

  1. Acquire an OpenAPI spec
  2. Generate a Go API client using OpenAPI Generator
  3. Generate a Go CLI using Cobra
  4. Integrate the API Client into the CLI

Build Detail

I wanted to automate pulling search results from Sumo Logic, and took a look at the Search Job API. Sumo Logic provides a fairly robust OpenAPI spec for most of the APIs they offer, but unfortunately the Search Job API is not covered. I started from their existing API spec and adapted it for the Search Job API using the provided Search Job API documentation. The API spec can be referenced here.

After the API spec was assembled, I was able to use that to generate a Go API client library using the OpenAPI Generator command line tool:

openapi-generator-cli generate -i sumologic-search-job-api.yaml -g go -o client

The openapi-generator tool supports many programming languages besides Go. This is great because it makes it easy to create whatever API client you need from a single spec.

I published the API client to a new GitHub repository, which I was then able to reference as a dependency.

For the CLI, I initialized a new Go project and pulled in the API client:

mkdir sumo-search-job-cli
cd sumo-search-job-cli
go mod init sumo-search-job-cli
go get github.com/nhoag/sumologic-search-job-client-go

I then added the Cobra Go CLI framework as a dependency and got to work generating the CLI and commands:

cobra init --viper
cobra add command-name

I added a command for each available operation, including a validate and execute function for each. I also created an interface to the API client library for handling shared configuration injection logic that might be coming from a config file or from command options. I then created a “full process” command to tie everything together into a single user action. This was pretty convenient, as I could reference validate and execute functions from atomic commands when creating the “full process” command.

The final product for the CLI can be found on GitHub. Instructions for installation and usage are provided in the README and via command help (sumo -h).

I recently attended Laracon Online 2018 and got a lot out of it! I happened to see a tweet about early bird tickets in January and couldn’t pass up the opportunity. I’ve admittedly been sheltered in Drupal/Symfony over the last several years, and it was a bit of a shock to see the popularity and rich features of Laravel (like Rails for PHP). The presenters were knowledgeable, practiced, and engaging. The format flowed well and was expertly M.C.’d by Ian Landsman. I’ve come away with a lot to mull over, and following are some thoughts, takeaways, and references.

It was nice for so many reasons to be able to attend a conference from my home! It’s tough to travel right now with two young kids, and this format makes it possible to get a conference experience without sacrificing family obligations. It’s kinda surprising to me that it’s not more common, or offered as a cheaper option for in-person conferences.

The conference used a Discourse instance for chat, which seemed like a good choice. I heard grumblings about the lack of a Slack channel (some folks managed to assemble on Slack anyway), but I didn’t feel like I needed anything more. The Discourse site is easy to navigate and will be available for a year. The conference was broadcast via Zoom, which was solid throughout, with a minor exception of slow-down/speed-ups on one presentation. Thanks to @ianlandsman for showcasing Zoom green screen capability (TIL)!

I started out taking some notes, but soon learned that Michael Roderick was sharing much more detailed conference notes in a GitHub repo (including code snippets and diagrams!). Thanks! Nicely done.

The talks spanned a range of topics, from front-end to back-end, with a good dose of “human experience” sprinkled in. Adam Wathan kicked things off with a nail-biter of a live coding presentation demonstrating a Vue.js component refactor. Steve Schoger demonstrated modern design principles by transforming a typical app design to be more streamlined and professional. Taylor Otwell walked through contributions from the newly minted Laravel 5.6 (complete with attributions), and the proprietary Spark project. Chris Fidao reviewed common bottlenecks and solutions when scaling Laravel apps. Wes Bos discussed modern and upcoming JavaScript features. Jonathan Reinink gave a fast-paced live coding demo of implementing increasingly complex Eloquent queries without sacrificing performance. Sandi Metz gave a moving talk on the common element of effective teams. Matt Stauffer ended the day with a compelling reminder to have fun!

Some big takeaways from the conference (for me):

  • Presenters were mostly using Sublime Text (o_O?) and getting it done!
  • Look out for the “n+1” query performance problem, and remedy it with eager loading
  • Psychological safety is crucial to effective engineering/teamwork
  • I need a frontend project so I can flex my JavaScript muscles
  • Laravel (software, popularity, momentum) is HUGE

A few small critiques:

  • A nice enhancement to the current conference setup would be to include a stenographer for real-time captions and improved accessibility
  • I would love to see a more diverse group of speakers
  • I saw at least one image that didn’t belong in a professional presentation

Coming from the Drupal/Symfony world, it was awesome to get a window into another realm of PHP. Laravel is definitely something I’ll be evaluating more fully, and Laracon Online has provided a number of reference pointers for learning more about the framework.

Several years ago, I created a fleet of bots that pull data from one API and post to another API. Over the years, the bots have needed various interventions to get them back up and running, but I’ve shied away from upgrading underlying technologies. Thankfully, when I put these bots together, I used a system of layered Docker builds. I didn’t remember having done this, but after a quick review it became clear that this update was going to be easy.

Regarding the layered Docker builds: I had started with my own custom base image, then built the framework dependency on top of it, followed by the application image, and finally a Dockerfile for per-bot app configuration. In this case, I needed to convert http to https in the application layer, as http support had been removed from one of the APIs. After updating the app, I rebuilt the application image from the latest app release and rerolled the individual bot configurations on top.
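The layering can be sketched as a chain of Dockerfiles (image names here are illustrative, not the actual repositories):

```
# base/Dockerfile: custom base image
FROM debian:stable-slim

# framework/Dockerfile: pins the framework version (built on the base image)
FROM me/bot-base:1.0

# app/Dockerfile: the application release (built on the framework image)
FROM me/bot-framework:1.0

# bot/Dockerfile: per-bot configuration (built on the application image)
FROM me/bot-app:1.0
```

With this structure, the http-to-https fix only required rebuilding from the app layer upward; the base and framework images stayed untouched.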

I didn’t need to touch the framework image, which is fortunate because the framework version used is no longer available through supported channels. The level of effort would have gone up significantly if I didn’t have a ready custom-built image with the needed version. I highly recommend taking this layered image approach if you have toy projects you can’t pour loads of time and effort into updating and maintaining. It’s slightly more work up front, but will provide a stable bedrock for future project iteration.

Over the last few weeks I’ve been digging away on my dotfiles and figured I’d write up the interesting aspects of the various decisions, methods and changes involved. The high-level overview is that I moved to a bare Git repository structure, added an installer script and some light multi-OS support, and added automated linting and a few basic tests.

After using a symlinked dotfiles system for a couple of years, I got curious as to whether there might be decent alternatives. I tried rcm, but started having issues with it and lost interest. Then I found the following comment thread that describes using a bare Git repository, the simplicity of which caught my attention:

Making the conversion was easy enough, and the process helped clear out some cruft that had built up. After standardizing my home directory and removing unnecessary symlinks, I created a one-liner to replace the remaining symlinks with regular files. Then I set up the bare Git repo format, switched everything over, and haven’t looked back!
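That one-liner might look roughly like this (a sketch, not the exact command I used; cp --remove-destination and readlink -f are GNU coreutils):

```shell
# Replace each top-level dotfile symlink in $HOME with a copy of its target.
find "$HOME" -maxdepth 1 -name '.*' -type l \
  -exec sh -c 'for f; do cp --remove-destination "$(readlink -f "$f")" "$f"; done' _ {} +
```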

I opted to alias git operations in the home directory to ‘dots’. It hasn’t been too bad to rewire my brain to use git as ‘dots’ in this instance. The only thing to watch out for is not accidentally adding untracked files (a habit that’s worthwhile to break), as doing so here could expose sensitive info!
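The setup itself is only a few commands (a sketch; the repo path and the ‘dots’ name are whatever you choose, and in practice the alias lives in your shell rc; a shell function behaves the same way in scripts):

```shell
# One-time setup: a bare repo whose work tree is $HOME.
git init --bare "$HOME/.dotfiles"
dots() { git --git-dir="$HOME/.dotfiles/" --work-tree="$HOME" "$@"; }

# Hide the many untracked files $HOME contains from `dots status`.
dots config status.showUntrackedFiles no

# Day-to-day usage:
dots add "$HOME/.zshrc"
dots commit -m "Track zshrc"
```

Hiding untracked files also reduces the temptation to bulk-add, which is exactly where sensitive info could slip in.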

I wanted to make my dotfiles installable on a non-macOS computer. I started out using a snapshot of a basic Ubuntu GUI install on VirtualBox. This is where the installer script initially became useful, as it saved running long-ish command sequences over and over. It later proved useful for automated testing! There were several fun issues to work out, and in the end my dotfiles gained robustness and a few bug fixes.

I added Travis CI integration, initially just for formatting and linting checks, and later got into some basic tests via bats. Travis CI has several tools built-in, including shellcheck, shfmt, and bats. So far, I haven’t found good lint and test tooling for zsh. Thankfully, most of my stuff is bash compatible, and these Travis CI built-ins have proven useful.
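The CI configuration for this is small; a sketch of the .travis.yml (script and directory names are illustrative):

```
language: shell
script:
  - shellcheck install.sh
  - shfmt -d -i 2 install.sh
  - bats test/
```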

In adding bats tests, there were some delightful surprises. A fingerprints function I’d written a while back ultimately proved to be useless: it was an overly complex wrapper for ssh-keygen that tried to add functionality that was already present (showing fingerprints for multi-key files). I might never have discovered this if it weren’t for attempting to create a bats test. A less delightful surprise was that the OpenSSH version on Travis CI is older and does not support the hash option (-E).
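For reference, the built-in ssh-keygen behavior that made my wrapper redundant (this assumes an OpenSSH recent enough to support -E, which as noted wasn’t the case on Travis CI):

```shell
# Print a SHA-256 fingerprint for every key in a multi-key file,
# one line per key.
ssh-keygen -l -E sha256 -f ~/.ssh/authorized_keys
```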

In the end, my dotfiles feel lighter and leaner. There’s still a lot to do, but it’s comforting to have some basic tests in place - even with the slight mismatch of tools to code.