I recently attended Laracon Online 2018 and got a lot out of it! I happened to see a tweet about early bird tickets in January and couldn’t pass up the opportunity. I’ve admittedly been sheltered in Drupal/Symfony over the last several years, and it was a bit of a shock to see the popularity and rich features of Laravel (like Rails for PHP). The presenters were knowledgeable, practiced, and engaging. The format flowed well and was expertly M.C.’d by Ian Landsman. I’ve come away with a lot to mull over; what follows are some thoughts, takeaways, and references.

It was nice for so many reasons to be able to attend a conference from my home! It’s tough to travel right now with two young kids, and this format makes it possible to get a conference experience without sacrificing family obligations. It’s kinda surprising to me that this isn’t more common, or offered as a cheaper remote option alongside in-person conferences.

The conference used a Discourse instance for chat, which seemed like a good choice. I heard grumblings about the lack of a Slack channel (some folks managed to assemble on Slack anyway), but I didn’t feel like I needed anything more. The Discourse site is easy to navigate and will be available for a year. The conference was broadcast via Zoom, which was solid throughout, with the minor exception of some slow-downs/speed-ups during one presentation. Thanks to @ianlandsman for showcasing Zoom’s green screen capability (TIL)!

I started out taking some notes, but soon learned that Michael Roderick was sharing much more detailed conference notes in a GitHub repo (including code snippets and diagrams!). Thanks! Nicely done.

The talks spanned a range of topics, from front-end to back-end, with a good dose of “human experience” sprinkled in. Adam Wathan kicked things off with a nail-biter of a live-coding presentation demonstrating a Vue.js component refactor. Steve Schoger demonstrated modern design principles by transforming a typical app design to be more streamlined and professional. Taylor Otwell walked through contributions to the newly minted Laravel 5.6 (complete with attributions), plus the proprietary Spark project. Chris Fidao reviewed common bottlenecks and solutions when scaling Laravel apps. Wes Bos discussed modern and upcoming JavaScript features. Jonathan Reinink gave a fast-paced live-coding demo of implementing increasingly complex Eloquent queries without sacrificing performance. Sandi Metz gave a moving talk on the common element of effective teams. Matt Stauffer ended the day with a compelling reminder to have fun!

Some big takeaways from the conference (for me):

  • Presenters were mostly using Sublime Text (o_O?) and getting it done!
  • Look out for the “n+1” query problem, and remedy it with eager loading
  • Psychological safety is crucial to effective engineering/teamwork
  • I need a frontend project so I can flex my JavaScript muscles
  • Laravel (software, popularity, momentum) is HUGE

A few small critiques:

  • A nice enhancement to the current conference setup would be to include a stenographer for real-time captions and improved accessibility
  • I would love to see a more diverse group of speakers
  • I saw at least one image that didn’t belong in a professional presentation

Coming from the Drupal/Symfony world, it was awesome to get a window into another realm of PHP. Laravel is definitely something I’ll be evaluating more fully, and Laracon Online has provided a number of reference pointers for learning more about the framework.

Several years ago, I created a fleet of bots that pull data from one API and post to another API. Over the years, the bots have needed various interventions to get them back up and running, but I’ve shied away from upgrading underlying technologies. Thankfully, when I put these bots together, I used a system of layered Docker builds. I didn’t remember having done this, but after a quick review it became clear that this update was going to be easy.

Regarding the layered Docker builds: I had started with my own custom base image, built the framework dependency on top of that, followed by the application image, and finally a Dockerfile for each bot’s configuration. In this case, I needed to convert use of http to https in the application layer, as http support was removed from one of the APIs. After updating the app, I rebuilt the application image from the latest app release, and then rerolled the individual bot configurations on top.
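
In shell terms, the rebuild amounted to re-running a docker build per layer. Here’s a rough sketch, with image names and paths that are illustrative rather than what I actually used:

docker build -t me/base ./base             # custom base image (untouched)
docker build -t me/framework ./framework   # FROM me/base (untouched)
docker build -t me/app ./app               # FROM me/framework; app now speaks https
docker build -t me/bot-alpha ./bots/alpha  # FROM me/app; per-bot configuration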

I didn’t need to touch the framework image, which is fortunate because the framework version used is no longer available through supported channels. The level of effort would have gone up significantly if I didn’t have a ready custom-built image with the needed version. I highly recommend this layered image approach if you have toy projects you can’t pour loads of time and effort into updating and maintaining. It’s slightly more work up front, but will provide a stable bedrock for future project iteration.

Over the last few weeks I’ve been digging away on my dotfiles and figured I’d write up the interesting aspects of the various decisions, methods and changes involved. The high-level overview is that I moved to a bare Git repository structure, added an installer script and some light multi-OS support, and added automated linting and a few basic tests.

After using a symlinked dotfiles system for a couple of years, I got curious as to whether there might be decent alternatives. I tried rcm, but started having issues with it and lost interest. Then I found the following comment thread that describes using a bare Git repository, the simplicity of which caught my attention:

Making the conversion was easy enough, and the process helped alleviate some cruft that had built up. After standardizing my home directory and removing unnecessary symlinks, I used a one-liner to replace the remaining symlinks with regular files. Then I set up the bare Git repo format, switched everything over, and haven’t looked back!
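
Expanded for readability, the one-liner was along these lines (a sketch, assuming the symlink targets resolve from the home directory):

cd "$HOME"
find . -maxdepth 1 -type l | while read -r link; do
  target="$(readlink "$link")"
  rm "$link" && cp -R "$target" "$link"  # swap the symlink for a copy of its target
done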

I opted to alias git operations in the home directory to ‘dots’. It hasn’t been too bad to rewire my brain to use git as ‘dots’ in this context. The only thing to watch out for is accidentally adding untracked files (a habit worth breaking anyway), as doing so here could expose sensitive info!
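
The whole arrangement boils down to something like this (the .dotfiles path is illustrative; the showUntrackedFiles setting keeps the rest of the home directory out of status output and makes accidental adds less likely):

git init --bare "$HOME/.dotfiles"
alias dots='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
dots config status.showUntrackedFiles no  # don't list all of $HOME as untracked
dots add .vimrc && dots commit -m "Track vimrc"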

I wanted to make my dotfiles installable on a non-macOS computer. I started out using a snapshot of a basic Ubuntu GUI install on VirtualBox. This is where the installer script initially became useful, as it saved running long-ish command sequences over and over. It later proved useful for automated testing! There were several fun issues to work out, and in the end my dotfiles gained robustness and a few bug fixes.
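
The multi-OS support is deliberately light: the installer just branches on uname. A sketch, where the setup functions are hypothetical stand-ins for the real steps:

case "$(uname -s)" in
  Darwin) setup_macos ;;  # e.g. Homebrew packages, macOS defaults
  Linux)  setup_linux ;;  # e.g. apt packages
  *)      echo "unsupported OS" >&2; exit 1 ;;
esac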

I added Travis CI integration, initially just for formatting and linting checks, and later got into some basic tests via bats. Travis CI has several tools built in, including shellcheck, shfmt, and bats. So far, I haven’t found good lint and test tooling for zsh. Thankfully, most of my stuff is bash-compatible, and these Travis CI built-ins have proven useful.
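
The CI checks boil down to a few commands like these (file paths are illustrative):

shellcheck install.sh   # catch common shell pitfalls
shfmt -d install.sh     # diff against canonical formatting
bats test/              # run the bats test files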

In adding bats tests, there were some delightful surprises. A fingerprints function I’d written a while back ultimately proved to be useless: it was an overly complex wrapper for ssh-keygen that tried to add functionality the tool already has (showing fingerprints for multi-key files). I might never have discovered this if it weren’t for attempting to write a bats test. A less delightful surprise was that the OpenSSH version on Travis CI is older and does not support the hash option (-E).
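
A sketch of the kind of test that surfaced this (the fixture path is hypothetical):

@test "ssh-keygen fingerprints every key in a multi-key file" {
  run ssh-keygen -l -f test/fixtures/multi_key.pub
  [ "$status" -eq 0 ]
  [ "${#lines[@]}" -gt 1 ]  # one fingerprint line per key
}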

In the end, my dotfiles feel lighter and leaner. There’s still a lot to do, but it’s comforting to have some basic tests in place - even with the slight mismatch of tools to code.

This week I had occasion to help resolve a customer support case. As part of troubleshooting, I stood up a local copy of the site using Docker4Drupal. This worked out really well, and I’d like to take a moment to discuss some of the benefits.

It initially took a little time to configure the site with Docker4Drupal and to get Xdebug linked up, but I’m sure that if I were doing this regularly it would take no more than a few minutes. Once Docker4Drupal was in place, it was incredibly easy and cheap to burn down my local copy of the site and start fresh. Two commands get back to the initial state (with maybe a little lag for database population):

docker-compose down
docker-compose up

Another perk is distributed reproducibility in troubleshooting. With a docker-compose.yml and a few other assets, anyone can spin up their own copy of the site to dig in and contribute to solving a problem. Each site build will be functionally equivalent, so it’s fairly safe to assume everyone is seeing the same issue.

The speed benefits hinge a bit on a site being amenable to quick standup. A large database can be mitigated somewhat through clever host:guest volume management, but the larger the database, the more beneficial it is to generate a slimmed-down development copy from production (perhaps as a daily task - wink wink nudge nudge).
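
For Drupal sites, one way to build that slimmed-down copy is drush sql-dump with bulky tables reduced to structure only; a sketch, with a table list that will vary per site:

drush sql-dump \
  --structure-tables-list=cache,cache_*,sessions,watchdog \
  --result-file=/tmp/dev-slim.sql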

Using Docker4Drupal, I was also able to quickly compare the site build to a vanilla Drupal installation with a minimal set of contrib modules for reproducing the issue. simplytest.me is another option here, but Docker4Drupal is comparatively faster and more configurable (especially with regard to memory and CPU allocation).

That’s all for this installment. I hope this is sufficiently convincing for everyone to include Docker4Drupal assets in their Drupal repos. Happy troubleshooting!

I’m still kicking with tmux, and have rounded off a few rough edges since the last installment. One area of advancement has been learning about defining default sessions. To be clear, this doesn’t entail attaching to indefinitely running background sessions; it means firing up a pre-defined session on demand.

In researching support for this, I found teamocil and tmuxinator. These look interesting, but ultimately didn’t appeal to me since they are Ruby gems that introduce additional dependencies. I don’t like dependencies. It turns out this can be handled natively in tmux!

Here’s a Stack Overflow answer that provides a rough explanation of the concept:

Here’s an example of the session file I’m using to write this blog post:

new-window -n blog -c ~/blog                                 # new window named "blog", cwd ~/blog
send-keys -t blog.1 "vi ." Enter                             # start vi in the first pane
split-window -h -d -c ~/blog                                 # side-by-side split; -d keeps focus on vi
send-keys -t blog.2 "hugo serve -D -b localhost:1313" Enter  # run hugo in the second pane

The above session file opens a single window called “blog” and starts vi in its first pane. The window is then split into two side-by-side panes (with the -d flag keeping focus on the vi pane), and hugo serve is started in the second pane.

In order to make use of the session defined above, we can call it directly:

# Whether you need the leading tmux depends on whether you're inside or outside tmux at execution time.
[tmux] new-session "tmux source-file ~/.tmux/sess.blog"

We can also add a binding in .tmux.conf so that the session can be invoked with 3 keys:

bind H source-file ~/.tmux/sess.blog