I’ve been playing with Hubot a bit lately, and decided to up the ante on the endeavor by creating a Hubot Docker container. There were a couple of misadventures before landing on a stable source container from which it will now be crazy easy to extend and deploy.

I set up Docker using the instructions here.

You may notice from reviewing the Dockerfile that line 6 imports a ‘Universe’ apt source. This supports dependencies associated with installing a newer version of Node.js (v0.10.19) in a container; to keep deployments fast, the base image doesn’t include this repository by default. Further down, line 9 adds another apt repository pointing to a newer Node.js package. You can find more information about this here.

The Dockerfile includes several conventions, including RUNning commands, ADDing files, and setting ENVironment variables in the container.
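
To make that concrete, here’s a stripped-down sketch of what such a Dockerfile might look like. The base image, the chris-lea Node.js PPA, and the file names are my assumptions for illustration, not a copy of the actual Dockerfile:

``` dockerfile Hubot Dockerfile (sketch)
# Assumed base image for the era; ships with a minimal apt source list
FROM ubuntu:12.04

# Enable the 'universe' apt source to satisfy Node.js dependencies
RUN echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list

# Add a repository carrying a newer Node.js package (this PPA is an assumption)
RUN apt-get update && apt-get install -y python-software-properties
RUN add-apt-repository -y ppa:chris-lea/node.js && \
    apt-get update && apt-get install -y nodejs

# ADD copies files from the build context into the container
ADD . /hubot

# ENV sets environment variables the bot reads at runtime
ENV HUBOT_NAME hubot

RUN cd /hubot && npm install
CMD ["/hubot/bin/hubot"]
```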

I had initially tried setting up an S3 Brain with the aws2js Hubot script. I got it working several times, but ultimately found the npm package handling for this and a few other Hubot scripts far too brittle to recommend at present. Most often, package retrieval would hang, fail outright, or occasionally fail silently :/ In each case the following error would appear:

```
Error: Cannot find module 'aws2js'
```
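
If you want to experiment with it anyway, one thing to try is installing the module explicitly from the Hubot root so npm resolves it before the bot boots, rather than leaning on the script’s own dependency handling. A sketch, assuming a standard Hubot checkout; given the flakiness above, no guarantees:

``` bash Explicit Install (sketch)
# Record aws2js in package.json so 'npm install' fetches it up front
npm install aws2js --save
```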

Same problem, one tool: this time the whole mashup is done with ImageMagick alone (and this would be a great time to set up an S3 media bucket to show results). In addition to ImageMagick, I installed Ghostscript with brew install ghostscript to enable working with EPS vector images.

Starting with a square profile photo:

``` bash Identify Profile Photo
identify nhoag-bw-sq.jpg
/path/to/nhoag-bw-sq.jpg JPEG 480x480 480x480+0+0 8-bit sRGB 65.5KB 0.000u 0:00.000
```

Generate a black rectangle to match the width of the profile photo:

``` bash Black Rectangle
convert -size 480x100 xc:black black.jpg
```

Reduce the opacity of the new rectangle:

``` bash Reduce Opacity
convert black.jpg -alpha on -channel alpha -evaluate set 25% shade-25.png
```

Overlay the modified rectangle on the photo:

``` bash Overlay Shade
composite -gravity center shade-25.png nhoag-bw-sq.jpg nhoag-shade.jpg
```

Convert black text eps file to png:

``` bash EPS to PNG
convert -colorspace RGB -density 300 black_text.eps -resize x95 black_text.png
```

Reduce the opacity of the black text png:

``` bash Fade Black Text
convert black_text.png +flatten -alpha on -channel A -evaluate set 10% +channel black_text_opac.png
```

Create a slightly larger canvas for the black text png (to accommodate blur):

``` bash Text Canvas
convert -size 475x110 xc:transparent black_text_canvas.png
```

Overlay the black text png on the new canvas:

``` bash Center Text on Canvas
composite -gravity center black_text_opac.png black_text_canvas.png black_text_opac_canvas.png
```

Blur the black text png:

``` bash Blur Text
convert black_text_opac_canvas.png -blur 0x4 black_text_opac_canvas_blur.png
```

Overlay the black text png on the profile photo:

``` bash Overlay Black Text
composite -gravity center black_text_opac_canvas_blur.png nhoag-shade.jpg nhoag-shade-black_text.jpg
```

Convert white text eps file to png:

``` bash White EPS to PNG
convert -colorspace RGB -density 300 white_text.eps -resize x95 white_text.png
```

Overlay white text png on the profile photo:

``` bash Overlay White Text
composite -gravity center white_text.png nhoag-shade-black_text.jpg nhoag-shade-black_text-white_text.jpg
```

Generate a new png with the text ‘profile’:

``` bash Profile Label
montage -background none -fill white -font Courier \
	-pointsize 72 label:'Profile' +set label \
	-shadow -background transparent -geometry +5+5 \
	profile_text.png
```

Overlay the ‘profile’ text on the profile photo:

``` bash Overlay Label
composite -gravity center -geometry +0+90 profile_text.png nhoag-shade-black_text-white_text.jpg nhoag-shade-black_text-white_text-profile.jpg
```
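
For convenience, the whole sequence can be collected into a single script. This is just the commands from above in one file; filenames and geometry are unchanged:

``` bash Full Pipeline
#!/usr/bin/env bash
set -e

# Shaded bar: black rectangle, faded to 25%, centered on the photo
convert -size 480x100 xc:black black.jpg
convert black.jpg -alpha on -channel alpha -evaluate set 25% shade-25.png
composite -gravity center shade-25.png nhoag-bw-sq.jpg nhoag-shade.jpg

# Black text: rasterize EPS, fade, pad onto a larger canvas, blur, overlay
convert -colorspace RGB -density 300 black_text.eps -resize x95 black_text.png
convert black_text.png +flatten -alpha on -channel A -evaluate set 10% +channel black_text_opac.png
convert -size 475x110 xc:transparent black_text_canvas.png
composite -gravity center black_text_opac.png black_text_canvas.png black_text_opac_canvas.png
convert black_text_opac_canvas.png -blur 0x4 black_text_opac_canvas_blur.png
composite -gravity center black_text_opac_canvas_blur.png nhoag-shade.jpg nhoag-shade-black_text.jpg

# White text overlay
convert -colorspace RGB -density 300 white_text.eps -resize x95 white_text.png
composite -gravity center white_text.png nhoag-shade-black_text.jpg nhoag-shade-black_text-white_text.jpg

# 'Profile' label with drop shadow, offset below center
montage -background none -fill white -font Courier \
  -pointsize 72 label:'Profile' +set label \
  -shadow -background transparent -geometry +5+5 \
  profile_text.png
composite -gravity center -geometry +0+90 profile_text.png \
  nhoag-shade-black_text-white_text.jpg \
  nhoag-shade-black_text-white_text-profile.jpg
```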

You can view the finished product on my Acquia Google+ profile.

I recently modified the profile image on my work Google+ account to make it clearer that it’s a work profile rather than a personal one. I added a few layers to my existing profile image, including a work logo and a couple of simple transparencies. In the past I would have relied on Photoshop, GIMP, or another heavyweight GUI image editor to accomplish this task. This time around, I decided to use the OSX Preview app in conjunction with the command line tool ImageMagick.

Getting ImageMagick running on OSX is as simple as running brew install imagemagick with Homebrew. This gives you tons of image manipulation powers, all described under man convert.

Preview is pretty much a terrible image editing interface, but with patience you can use it to minimally mash up several layers of images. To start, I had a profile JPG image and a logo PNG file. To add to the mix, I created a couple of lightly shaded transparencies to encapsulate the logo. Following is a set of commands to generate a simple shaded transparency.

Generate a black rectangle:

``` bash New Black Rectangle
convert -size 20x80 xc:black black.jpg
```

Add transparency to the black rectangle:

``` bash Add Transparency
convert black.jpg -alpha on -channel alpha -evaluate set 25% black-25.png
```

Add more transparency:

``` bash More Transparency
convert black-25.png -alpha on -channel alpha -evaluate set 50% black-12.5.png
```

Now, to mash the images together: open them all in Preview, ‘select all’ on an image you want to overlay, paste it into the profile image, then resize and/or reposition the overlay as desired.

ncdu for the Win

While reading through some internal tool enhancement tickets at work the other day, I happened across a quick mention of a command line tool that I’d not yet seen, but which proved to have immediate value. The tool is ncdu, ‘NCurses Disk Usage’, which, as the man page states, “…provides a fast way to see what directories are using your disk space.”

In the process of onboarding new sites to Acquia Cloud, it’s not always clear where the lines have been drawn with regard to separating out code and media assets for a site. Drupal itself is quite flexible about where media can be stored, and custom PHP opens up the possibilities completely. Version control is not so forgiving, as loading media into a VCS can make a repository unusably slow. In order to maintain holistic efficiency for a project, it’s helpful to know whether there are bulky files stashed somewhere in an application. With this piece of the puzzle, it’s possible to divert media assets out of the repo and into the file system.

This is where ncdu comes in. Regular old du is a handy tool indeed, but it requires a lot of iterative manual steps to walk an entire directory tree. By contrast, ncdu drops you into an interactive screen with a simple graph showing where the heaviest files live. You can quickly navigate the tree and find those big files in no time! Note: calculating disk usage across a large file system will still take some time to crunch.
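
For a sense of the workflow, the whole interaction boils down to pointing ncdu at a directory and browsing (the path here is just an example):

``` bash ncdu in Practice
# Scan a docroot and browse the results interactively
ncdu /var/www/html

# Handy keys once the scan completes:
#   arrows / enter   walk the directory tree
#   n / s            sort by name / by size
#   d                delete the selected item
#   q                quit
```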

GoAccess Plugin: UTC Support Added

In other news, the GoAccess Shell Plugin is freshly outfitted with UTC time specification. In addition to filtering by a start time X hours or minutes ago, you can now pass a UTC argument to hit a specific time range in an access log. To have the script return values starting at UTC 9:30am, pass --time=0930. The new change is particularly helpful for homing in on that 3-5 minute period of downtime where you want to determine whether there were any anomalies in the traffic pattern.
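
Put together with the existing options, an invocation against a specific window might look like the following. The script name and log path are placeholders; --time is from this release, and -d is covered below:

``` bash Example Invocation
# Traffic from 09:30 UTC spanning the 10 minutes that follow
./goaccess_plugin.sh -l /var/log/apache2/access.log --time=0930 -d10M
```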

I made a bunch of big changes to the GoAccess shell plugin, including improved argument handling, a more environment-agnostic design, and more configurable time filtering. I converted the plugin from a sourced script to a regular bash script. This meant the script’s components had to be reordered into a sequential flow, but it also makes the script more portable and easier to fire up.

The script options now support short and long call forms, courtesy of Stack Overflow. The log file location can be designated with either -l or --location, and the same is true of each option. I also made the options more intuitive by ensuring that flags such as --report and --open can be called without an associated extra value such as ‘1’ or ‘yes’.
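
The pattern for accepting both forms is essentially the while/case loop from that Stack Overflow answer. This is a simplified reconstruction of the approach, not the plugin’s actual code:

``` bash Option Parsing (sketch)
while [[ $# -gt 0 ]]; do
  case "$1" in
    -l|--location)
      LOCATION="$2"
      shift 2
      ;;
    -r|--report)
      REPORT=true   # boolean flag; no extra value required
      shift
      ;;
    -o|--open)
      OPEN=true
      shift
      ;;
    *)
      echo "Unknown option: $1" >&2
      exit 1
      ;;
  esac
done
```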

The time filter defaults to showing results for the past hour, but this can be altered in various ways. You could specify that you want results from 3 hours ago with -b3. By default, the end value is ‘now’, but this can also be customized by passing -d10M to specify that results should span a 10-minute period following the start time. Time units and integer values are parsed with regex and sed, respectively.
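
The unit/value split can be done in a couple of lines; this is a sketch of the approach rather than the plugin’s exact code:

``` bash Time Argument Parsing (sketch)
# Hypothetical input like '10M' (10 minutes) or '3H' (3 hours)
arg="10M"

# Unit: the trailing letter, captured with a bash regex match
if [[ "$arg" =~ ([HM])$ ]]; then
  unit="${BASH_REMATCH[1]}"
fi

# Integer value: strip the non-digits with sed
value=$(echo "$arg" | sed 's/[^0-9]//g')

echo "value=${value} unit=${unit}"   # prints: value=10 unit=M
```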

Many of the big changes were made over the course of several hours at a location where I was unable to iteratively test. It was a bit scary to diverge so far from a working copy of the script, but in the end I think it allowed me to be more adventurous with the direction of the script. The subsequent debugging wasn’t as involved as I had anticipated.

The remaining TODOs are to add support for parallel remote connections, compression support for gzipped log files, and filtering by arbitrary UTC (HH:MM) time values.