Rubber duck focusing

Contrary to popular opinion about work interruptions, I’ve found that being interrupted with the simple prompt of “explain what you are doing” is actually often helpful.

Being forced to describe in words how I’m spending my time helps me to remember the big picture and notice when I’ve gone off-track.

To this end, I’ve set up a graphical prompt for maintaining regular jrnl entries in the style of the old Facebook status line:

Prompt: Andrew is…

The prompt only appears while I’m actively using the computer, is preceded by a passive notification so I’m not caught by surprise, and allows alarm clock–style snoozing in case I’m doing something that shouldn’t be interrupted.

#!/bin/bash

set -o errtrace

prompt() {
    notify-send --hint int:transient:1 \
        --icon 'appointment-new' \
        'Upcoming journal entry'
    sleep 4s
    yad --center \
        --entry \
        --entry-label "$1" \
        --no-buttons \
        --on-top \
        --sticky \
        --timeout 60 \
        --timeout-indicator left \
        --title 'Journal' \
        --undecorated \
        --width 600
}

while sleep 5m; do
    if (( $(xprintidle) < 60000 )) && status="$(prompt 'Andrew is')"; then
        jrnl "@prompt Andrew is $status"
        sleep 15m
    fi
done

Hairball, artist's rendition

I maintain an OpenPGP key server in a public server pool and regularly receive peering requests from other server operators. To help me better understand the network and make more informed decisions about which requests to respond to, I’ve compiled a network graph using the monitoring data collected by Kristian Fiskerstrand.

A preliminary look at its degree distribution reveals that most servers are well connected, and several are very much so:
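In graph terms, a degree distribution is just a tally of how many peers each server has. A minimal sketch of the computation, using a hypothetical hard-coded adjacency list rather than the real monitoring data:

```ruby
# Hypothetical adjacency list; the real data comes from the pool's
# monitoring dumps, not from this toy sample.
PEERS = {
  'hkps.pool.a' => %w[b c d],
  'b'           => %w[hkps.pool.a c],
  'c'           => %w[hkps.pool.a b d],
  'd'           => %w[hkps.pool.a c]
}

# Degree of each server, then a tally: degree => number of servers.
degrees      = PEERS.transform_values(&:size)
distribution = degrees.values.tally
```

A heavy right tail in `distribution` is what makes the drawn graph a hairball.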

Degree distribution of SKS key server network

While this is good news regarding the health of the pool, it does pose a challenge to efforts at visualization. The graph is a hairball:

Graph of SKS key server network

In that light, I’ve taken considerable artistic license to make the network’s features more distinguishable:

Hover over a node to clarify its connections.

The size of a node denotes its closeness centrality, a rough predictor of how quickly data might be able to propagate from that server to the rest of the network.
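Closeness centrality can be sketched in a few lines: run a breadth-first search from a node and divide the number of reachable peers by the sum of their distances. The adjacency list below is a toy example, not the real network:

```ruby
# Toy graph for illustration only.
ADJACENCY = {
  'a' => %w[b c],
  'b' => %w[a c d],
  'c' => %w[a b],
  'd' => %w[b]
}

# Shortest-path distance (in hops) from `source` to every reachable node.
def distances(graph, source)
  dist  = { source => 0 }
  queue = [source]
  until queue.empty?
    node = queue.shift
    graph[node].each do |neighbor|
      next if dist.key?(neighbor)
      dist[neighbor] = dist[node] + 1
      queue << neighbor
    end
  end
  dist
end

# Closeness = reachable peers / sum of distances to them.
def closeness(graph, source)
  dist   = distances(graph, source)
  others = dist.size - 1
  return 0.0 if others.zero?
  others.to_f / dist.values.sum
end

graph_closeness = ADJACENCY.keys.to_h { |n| [n, closeness(ADJACENCY, n)] }
```

In this toy graph, `b` sits one hop from everyone and so scores highest, which is exactly the "data propagates quickly from here" intuition.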

Color is determined by a server’s priorities in the three regional pools, Europe (EU), North America (NA), and Oceania (OC):

  • Hue indicates the bias between regions, as shown in the key above.
  • Saturation corresponds with the strength of that bias. Well-balanced servers thus lose any discernible hue.
  • Lightness is determined by priority regardless of region. Servers holding a high priority in any region are lighter.
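One way to realize such a mapping is a weighted circular mean over three anchor hues. This is only a sketch of the idea: the anchor hues, the 0–1 priority scale, and the exact formulas are my assumptions here, not the graph’s actual implementation:

```ruby
# Assumed anchor hues (degrees), one per regional pool.
REGION_HUES = { eu: 0, na: 120, oc: 240 }

# priorities: hash of region => priority in 0.0..1.0 (assumed scale).
def hsl_for(priorities)
  total   = priorities.values.sum.to_f
  weights = priorities.transform_values { |p| p / total }

  # Weighted circular mean of the anchor hues.
  x = weights.sum { |r, w| w * Math.cos(REGION_HUES[r] * Math::PI / 180) }
  y = weights.sum { |r, w| w * Math.sin(REGION_HUES[r] * Math::PI / 180) }
  hue = (Math.atan2(y, x) * 180 / Math::PI) % 360

  # The mean vector's length is 0 for perfect balance and 1 for a
  # single-region bias, so well-balanced servers lose their hue.
  saturation = Math.sqrt(x**2 + y**2)

  # Lightness tracks the highest priority held in any region.
  lightness = priorities.values.max

  { h: hue.round, s: saturation.round(2), l: lightness.round(2) }
end
```

A server with equal priorities everywhere maps to zero saturation (grey), while one serving only EU maps to a fully saturated hue at the EU anchor.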

From this graph I can see that my server is relatively well balanced across regions and has a high priority, but has few connections compared to other servers.

I’d hoped that it would be more visually apparent which new connections I should make to strengthen the network, but the graph has turned out to be too complex for me to follow by eye.

I’ll have to take a more analytical approach; to be continued.

Smarter timestamps in Middleman

The feed template in middleman-blog uses file modification times to determine when articles were updated. If your articles are under version control, however, you can do better.

Your version control system already describes precisely the modifications that are meaningful: those that involve the articles’ content. Arbitrary file system activity such as cloning a repository or synchronizing data between computers is irrelevant.

To tap into this more reliable data source, I’ve created a Middleman extension that provides an mtime attribute on each article. Behind the scenes it queries the version control system to find the last recorded change to the article’s content.
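The Git half of such a query can be sketched as a call to git log, asking for the author date of the last commit that touched the file and falling back to the file system for untracked files. The method name here is illustrative, not the extension’s actual API:

```ruby
require 'time'

# Last meaningful modification time of `path`: the author date of the
# most recent commit touching it, per git log's %aI (strict ISO 8601)
# format. Untracked files (or a missing git) yield empty output, in
# which case we fall back to the plain file system mtime.
def article_mtime(path)
  iso = `git log --max-count=1 --format=%aI -- '#{path}' 2>/dev/null`.strip
  return Time.parse(iso) unless iso.empty?
  File.mtime(path) # plain file system fallback
end
```

Because the date comes from the commit, cloning or rsyncing the repository no longer perturbs the feed’s timestamps.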

With the extension in place, the atom:updated element for each article can be written as:

xml.updated article.mtime.to_datetime.rfc3339

and for the entire feed, taking the most recent update across all articles (the snippet was missing here; assuming the usual blog.articles collection available to middleman-blog feed templates):

xml.updated blog.articles.map(&:mtime).max.to_datetime.rfc3339
The extension is modular with respect to version control systems. I’ve implemented support for Git and a plain file system fallback.