Matthew Ernisse

January 05, 2019 @17:10

I own my own cable modem and have for the past 10 or so years. At first it was to save on the rental fee for the garbage equipment the local cable company supplied, but since they stopped charging that it has become more of a control thing. I have no need for the garbage router or WiFi access point that they supply; I used to work for an ISP, so I'm well aware of the quality and support these devices receive. (Fun fact: when I was in the business, the average cost-per-unit target for a residential CPE (customer premise equipment) device was between US $3 and US $8. For business devices it went up to a whopping US $25 or so...) I also greatly prefer the features and power that my OpenBSD routers give me, and I've written more than a few posts about the various UniFi devices I've scattered around to provide WiFi. A few months ago the old Motorola/Arris SurfBoard 6141 I've had started playing up, needing periodic reboots to maintain the speeds provisioned. It was probably close to 7 years old, and even though it's still a supported DOCSIS 3.0 modem the specs are starting to leave a bit to be desired...

I've used the SurfBoard products since I first got cable internet in the late 1990s and have always had good luck with them so I figured why change now and went and bought a new SB8200. The specs seem to be enough of an upgrade that I'll likely get another 6 or 7 years out of it barring fiber to the home finally coming to my area.

While playing around with the new modem I decided that I wanted to monitor the statistics it provides. Sadly it seems that the cable company, probably in response to the various information disclosure vulnerabilities, decided to block the SNMP agent on the device. I'm all for good security practice, but it would be nice to provide SNMP on the LAN side at least. Thankfully it still lets you access the web server, so once again it was Python and BeautifulSoup to the rescue.
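The scraping itself is nothing fancy. Here is a rough sketch of the approach; note that the table id, column layout, and sample HTML below are illustrative stand-ins, not the SB8200's actual markup, so the selectors would need adjusting against the real status page.

```python
# Sketch: pull per-channel stats out of a modem status page with
# BeautifulSoup.  SAMPLE stands in for the real SB8200 page, whose
# markup differs.
from bs4 import BeautifulSoup

SAMPLE = """
<table id="downstream">
  <tr><th>Channel</th><th>Power (dBmV)</th><th>SNR (dB)</th></tr>
  <tr><td>1</td><td>2.3</td><td>40.1</td></tr>
  <tr><td>2</td><td>1.9</td><td>39.4</td></tr>
</table>
"""

def parse_channels(html):
    ''' Return a list of per-channel stats dicts from the status page. '''
    soup = BeautifulSoup(html, 'html.parser')
    table = soup.find('table', id='downstream')
    rows = table.find_all('tr')[1:]  # skip the header row
    channels = []
    for row in rows:
        cells = [td.get_text(strip=True) for td in row.find_all('td')]
        channels.append({
            'channel': int(cells[0]),
            'power': float(cells[1]),
            'snr': float(cells[2]),
        })
    return channels

if __name__ == '__main__':
    for ch in parse_channels(SAMPLE):
        print(ch)
```

From there each dict maps naturally onto an InfluxDB point with the channel number as a tag.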

Arris SB8200 Web Interface

I pulled all the data from the web interface into the same InfluxDB and Grafana platform that I have been talking about lately and put together the following dashboard.

Arris Modem Statistics Dashboard

This is strictly a layer 2 look at the health of my circuit; there are separate dashboards monitoring my OpenBSD router for layer 3 information. It gives me a good look at what is going on and adds to the toolbox of data that I have available to troubleshoot issues. Now that the better three-quarters is working from home full time this is even more important: while I have always thought of the network here at home as being in production, her income now depends on it.

I've got to clean up the script a bit but once I do I'll post it in my miscellaneous script git repository if you want to look at it. It probably won't work with other versions of the Arris SurfBoard modems so be warned it won't be a copy and paste job.


December 28, 2018 @10:37

I like metrics. I've been working lately to convert some aging parts of my monitoring and alerting infrastructure over to something slightly more modern (in this case Grafana, InfluxDB, Telegraf and collectd). As part of this project I'm looking broadly at my environment and trying to decide what else to monitor (and what I don't need to monitor anymore). One of the things that came up was website performance.


Now if you go all the way back to my first post on this iteration of the blog you'll see that this website is static. In fact if you dig around a bit you'll see that I render all the pages from Jinja templates via a git post-receive hook so all my webserver is doing is sending you data off disk. Since server side metrics are practically useless in this case I naturally gravitated towards collecting RUM or Real User Metrics.

Privacy Goals

So now we enter what is (for me at least) the slippery slope. Once you decide to ship information back from the user agent to the server you open yourself up to a huge number of options. Do I try to do cute stuff with cookies and LocalStorage to assign each user a unique ID (or maybe do some browser fingerprinting) to track 'visits'? Do I try to gather detailed device information (ostensibly so I can test the site based on what people use)? Do I try to determine how people got to my site by history snooping? The possibilities for data collection in this fashion are pretty intense. For me, though, the answer is almost entirely a resounding no.

As someone with a three-tiered malware / tracking blocking infrastructure I don't want to be part of the problem. I firmly believe that privacy and security go hand in hand on the web, so I refuse to collect any more information than I need; I also think that this is just plain good security practice. Sadly this is in direct conflict with the urge and the ability to just slurp everything into storage somewhere, which seems to be a pretty successful business model (and generally horrific practice) these days. Realistically I don't think that someone is going to hack my site to get browsing details on the 20 or 30 visitors per day that I get, but I see it as a chance to lead by example here: to be able to say that you can collect useful real user metrics easily, quickly, and with an eye towards privacy.

Browser Side

If you view the source of this page, just before the footer you'll see a script tag loading 'timing.js'. It is pretty self-explanatory: all it does is fire the sendLoadTiming() function once the DOM readyState hits the complete phase, which ensures that all the performance metrics are available in your browser. sendLoadTiming() then simply takes the PerformanceTiming data from your browser, adds the current page location and your user agent string, and sends it all along to a Flask application.

Server Side

The Flask application receives the data (a JSON object) from timing.js and is responsible for sanitizing it and sending it off to InfluxDB so I can graph it. The first thing that happens to a request is that I compute 6 timing measurements from the data.

  1. How long DNS resolution took
  2. How long connection setup took
  3. How long the SSL handshake took
  4. How long the HTTP request took
  5. How long the HTTP response took
  6. How long the DOM took to load
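The arithmetic behind those six numbers is simple subtraction over the W3C Navigation Timing attributes. A sketch of the computation (the field names are the standard PerformanceTiming ones; the exact slicing in my script may differ slightly):

```python
# Sketch: derive the six measurements from PerformanceTiming-style
# millisecond timestamps as sent by the browser.
def compute_measurements(t):
    ''' t is a dict of W3C Navigation Timing attributes. '''
    return {
        'dns': t['domainLookupEnd'] - t['domainLookupStart'],
        'connect': t['connectEnd'] - t['connectStart'],
        # secureConnectionStart is 0 for plain-HTTP loads.
        'ssl': (t['connectEnd'] - t['secureConnectionStart']
                if t.get('secureConnectionStart') else 0),
        'request': t['responseStart'] - t['requestStart'],
        'response': t['responseEnd'] - t['responseStart'],
        'dom': t['domComplete'] - t['domLoading'],
    }
```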

I then sanitize the user agent string into browser and platform using the Request object provided by Flask, which exposes the underlying Werkzeug useragents helper. Based on the platform name I (crudely) determine if the device is a mobile device or not. Finally I collect the URL and HTTP version used in the request. All of this information becomes tags on the 6 measurements that get stored in InfluxDB.
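Assembled into points ready for the InfluxDB client, that looks roughly like the following. This is a sketch: the tag and measurement names here are my guesses at a reasonable schema, not necessarily what the real application uses.

```python
# Sketch: turn the measurements into tagged InfluxDB points.
# Tag names are illustrative.
def build_points(measurements, url, browser, platform, mobile, http_version):
    ''' Return a list of point dicts in the shape the InfluxDB
    Python client accepts. '''
    tags = {
        'url': url,
        'browser': browser,
        'platform': platform,
        'mobile': mobile,
        'http_version': http_version,
    }
    return [
        {
            'measurement': name,
            'tags': tags,
            'fields': {'value': value},
        }
        for name, value in measurements.items()
    ]
```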



Once the data is in InfluxDB it is a pretty simple task to create a Grafana dashboard. The JSON for the dashboard I use is available here, though it will likely need some customization for your environment.


So far I think it's been a success. Leveraging JavaScript for this has kept all of the automated traffic out of the data set meaning that I'm really only looking at data with people on the other end (which is the point). It has enough information to be useful but not so much as to be a potential invasion of privacy. All of the parts are open source if you want to look at them or use them yourself.

I may add a measurement for DOM interactive time as well as the DOM complete time. It looks like a LOT of people have some strange onLoad crap happening in their browser.

Some strange timing from Android

Also, once the new PerformanceNavigationTiming API is broadly supported (once Safari supports it, at least) I'll need to migrate to that but I don't see that as a major change. I'd love to hear from you if you found this useful or if you have any suggestions on expanding on it.


December 21, 2018 @09:28

If you have read my previous post about monitoring my ADS-B receiver it probably won't come as a surprise that the impetus for this whole project has been to deprecate MRTG from my environment. MRTG was a fine enough tool when it was basically all we had (though I had rolled a few iterations of a replacement for personal projects over the years), but these days it is woefully dated. The biggest issues lie in the data gathering engine: even a moderately sized environment is asking for trouble, dropped polls, and stuck Perl processes. MRTG also fails to provide any information beyond the aggregated traffic statistics.


Years ago I wrote a small script that renders some web pages to display the switchports on the network linked to their MRTG graphs. Each port is enumerated by operational status and description to make it easy to find what you are looking for. It turns out it also makes it pretty easy to throw MRTG out and switch to something else.

I had already settled on Grafana and InfluxDB for a large part of the new monitoring infrastructure with most of the data being collected via collectd running on all my physical and virtual hosts. I am monitoring containers with cAdvisor which also feeds into InfluxDB, so I very much wanted to keep data going into InfluxDB yet I needed something to bridge the gap to the SNMP monitoring that the switches and UPSes in my network require. Enter Telegraf.

My only complaint is that the configuration for the SNMP input module in Telegraf is garbage. It took a bunch of trial and error to figure out the most efficient way to get everything in and working. I do very much like the results though...


Setting up Telegraf as a SNMP agent

There are a number of blog posts kicking around with fragments of information and copy/paste chunks of configuration files but not much in the way of well written documentation. I guess I'll just pile more of the former on.

I deployed Telegraf as a Docker container, though the configuration is largely the same if you deploy directly on a host. I did install all the SNMP MIBs I needed on my Docker host so I could mount them into the container (in Debian, the snmp-mibs-downloader package covered most of them; I added the APC PowerNet MIB for my UPSes and the Synology MIBs for my work NAS). I pulled the official container and extracted the default configuration file.

docker run --rm telegraf telegraf config > telegraf.conf

With that in hand I set about killing the vast majority of it, leaving only the [agent] section. Since I am only doing SNMP collection, the only change I made there was to back the interval off from 10s to 120s.
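After trimming, the [agent] section ends up being little more than:

```toml
[agent]
  ## Poll every two minutes instead of the default 10s.
  interval = "120s"
```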

I then configured Telegraf to send metrics to InfluxDB:

# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]
  urls = [ "http://influxdb:8086" ]
  database = "telegraf"
  skip_database_creation = true
  username = "[REDACTED]"
  password = "[REDACTED]"

This just left the SNMP input configuration, which I'll break up and describe a bit inline.

[[inputs.snmp]]
  agents = [ "" ]
  community = "[REDACTED]"
  version = 2

This is pretty self-explanatory: the basic information needed to poll the agent. You can pass a list into agents and it will use the same configuration for all of the targets, and you can have multiple inputs.snmp stanzas.
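For example, polling a set of switches with one community string and a UPS with another might look like this (hostnames made up):

```toml
[[inputs.snmp]]
  agents = [ "switch-1.example.net", "switch-2.example.net" ]
  community = "switches"
  version = 2

[[inputs.snmp]]
  agents = [ "ups-1.example.net" ]
  community = "ups"
  version = 2
```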

  [[inputs.snmp.field]]
    name = "hostname"
    oid = "SNMPv2-MIB::sysName.0"
    is_tag = true

This collects the value of the SNMPv2-MIB::sysName.0 OID and makes it available as a tag.

  [[inputs.snmp.table]]
    inherit_tags = [ "hostname" ]
    oid = "IF-MIB::ifXTable"

This is the meat: it walks the IF-MIB::ifXTable and collects all the leaf OIDs as metrics. It inherits the hostname tag from above.

    [[inputs.snmp.table.field]]
      name = "ifName"
      oid = "IF-MIB::ifName"
      is_tag = true

    [[inputs.snmp.table.field]]
      name = "ifDescr"
      oid = "IF-MIB::ifDescr"
      is_tag = true

    [[inputs.snmp.table.field]]
      name = "ifAlias"
      oid = "IF-MIB::ifAlias"
      is_tag = true

These specify additional OIDs to use as tags on the metrics. The difference between these and the hostname tag is that they are scoped to the index in the walk of the IF-MIB::ifXTable, so if you are looking at index 0 in IF-MIB::ifXTable, it will fetch IF-MIB::ifName.0 and use that.

I put the configuration and a docker-compose file in Puppet, let the agent crank the wheel, and was rewarded with a happy stack of containerized monitoring goodness.

Telegraf, InfluxDB and Grafana in Containers

The compose file is below, but I'll leave the configuration management bits up to you, dear reader.

version: '2'

services:
  telegraf:
    image: telegraf:latest
    environment:
      - MIBDIRS=/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/snmp/mibs/syno
    networks:
      - grafana_backend
    volumes:
      - /var/local/docker/data/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /usr/share/snmp/mibs:/usr/share/snmp/mibs:ro
      - /var/lib/snmp/mibs/iana:/usr/share/snmp/mibs/iana
      - /var/lib/snmp/mibs/ietf:/usr/share/snmp/mibs/ietf

networks:
  grafana_backend:
    external:
      name: grafana_backend

Gluing it to Grafana

The last piece was updating the links to the new graphs. Happily, if you set up a variable in a dashboard you can pass it in the URL, so I was able to simply change the URL in the template file and regenerate the page.

Graph Homepage

In my case the new URL was

https://[REDACTED]/grafana/d/[REDACTED]/switch-statistics?var-host=[SWITCH NAME]&var-port=[PORT NAME]

Hopefully this makes it a little clearer if you are trying to achieve a complex SNMP configuration in Telegraf.


November 29, 2018 @10:57

I have been running a FlightAware / FlightRadar24 ADS-B feeder for almost 4 years now. It is an older Raspberry Pi B with an RTL-SDR stick running dump1090 at its core. These days it is mounted in my garage with the antenna on the roof. When I built it I stuffed a Honeywell HIH6130 temperature and humidity sensor in the enclosure. At the time it was mounted on a fence in my back yard where it would be in full sun for much of the day, so I hooked it up to Icinga to alert me if it ever got too hot or too wet inside.

adsb-feeder in August 2014

Lately I've been investigating ways to get more information into a central location for my infrastructure as a whole. I have a lot of one-off, largely custom built systems to collect and aggregate system status data. While this has worked for the last 12 years, it is most certainly starting to show its age. At the moment I'm working with a stack that includes collectd, InfluxDB, and Grafana. The latter two run as Docker containers, while the former is deployed by Puppet to all my physical and virtual hosts.

I wanted to pull together some additional monitoring information from the ADS-B feeder to see just how far I can go with this setup. Luckily the dump1090 web interface works by reading JSON files from the receiver daemon, so all the interesting statistics are available on disk to read.

dump1090-fa web interface

I was able to pull together a quick python script that loads the JSON and emits the statistics to collectd (which forwards them onto InfluxDB for Grafana to work with). I need to get the script into git somewhere but for now, here is the currently running copy.

#!/usr/bin/env python3
''' (c) 2018 Matthew Ernisse <>
 All Rights Reserved.

Collect statistics from dump1090-fa and send to collectd.  Uses the collectd
Exec plugin.

Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:

    * Redistributions of source code must retain the
      above copyright notice, this list of conditions
      and the following disclaimer.
    * Redistributions in binary form must reproduce
      the above copyright notice, this list of conditions
      and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
'''
import json
import os
import socket
import time

def print_aircraft(stats):
    ''' Parse and emit information from the aircraft.json file. '''
    aircraft = len(stats.get('aircraft', []))
    messages = stats.get('messages')
    if not messages:
        raise ValueError('JSON stats undefined')

    m = "PUTVAL \"{}/dump1090/counter-messages\" interval={} N:{}".format(
        hostname, interval, messages)
    print(m)

    m = "PUTVAL \"{}/dump1090/gauge-aircraft\" interval={} N:{}".format(
        hostname, interval, aircraft)
    print(m)

def print_stats(stats):
    ''' Parse and emit information from the stats.json file. '''
    # Counter- and gauge-style values from the 'local' section of
    # stats.json; adjust the lists for your dump1090 version.
    counters = [
        'samples_processed',
        'samples_dropped',
    ]

    gauges = [
        'signal',
        'noise',
        'peak_signal',
        'strong_signals',
    ]

    values = stats.get('local')
    if not values or not type(values) == dict:
        raise ValueError('JSON stats undefined')

    for k in counters:
        value = values.get(k)
        if not value:
            value = 'U'

        m = "PUTVAL \"{}/dump1090/counter-{}\" interval={} N:{}".format(
            hostname, k, interval, value)
        print(m)

    for k in gauges:
        value = values.get(k)
        if not value:
            value = 'U'

        m = "PUTVAL \"{}/dump1090/gauge-{}\" interval={} N:{}".format(
            hostname, k, interval, value)
        print(m)

if __name__ == '__main__':
    interval = float(os.environ.get('COLLECTD_INTERVAL', 10))
    hostname = os.environ.get('COLLECTD_HOSTNAME', socket.getfqdn())

    while True:
        with open('/var/run/dump1090-fa/stats.json') as fd:
            stats = json.load(fd)

        stats = stats.get('total')
        print_stats(stats)

        with open('/var/run/dump1090-fa/aircraft.json') as fd:
            stats = json.load(fd)

        print_aircraft(stats)
        time.sleep(interval)

I also wanted to pull in the temperature / humidity sensor readings, and that ended up being a similarly easy task since I had already written a script for Icinga to use. A quick modification to the script to emit the values in the way that collectd wants and the data was flowing in. I created a user in the i2c group so the script can use the i2c interface on the Raspberry Pi.

The script currently looks like this.

#!/usr/bin/env python3

import os
import socket
import time
import smbus

def read_sensor():
    ''' Read the HIH6130.  Protocol guide: the Honeywell I2C
    communication technical note for the HIH-61xx series.
    '''
    bus = smbus.SMBus(0)
    devid = 0x27

    # writing the device id to the bus triggers a measurement request.
    bus.write_quick(devid)

    # wait for the measurement, section 3.0 says it is usually
    # 36.65mS but roll up to 50 to be sure.
    time.sleep(0.05)

    # data is 4 bytes
    data = bus.read_i2c_block_data(devid, 0, 4)

    # bits 8,7 of the first byte received are the status bits.
    # 00 - normal
    # 01 - stale data
    # 10 - device in command mode
    # 11 - diagnostic mode [ ignore all data ]
    health = (data[0] & 0xC0) >> 6

    # section 4.0
    humidity = (((data[0] & 0x3F) << 8) + data[1]) * 100.0 / 16383.0

    # section 5.0
    tempC = ((data[2] << 6) + ((data[3] & 0xFC) >> 2)) * 165.0 / 16383.0 - 40.0

    return (tempC, humidity)

if __name__ == '__main__':
    interval = float(os.environ.get('COLLECTD_INTERVAL', 10))
    hostname = os.environ.get('COLLECTD_HOSTNAME', socket.getfqdn())

    while True:
        retval = read_sensor()
        print("PUTVAL \"{}/hih6130/gauge-temperature\" interval={} N:{:.2f}".format(
            hostname, interval, retval[0]))

        print("PUTVAL \"{}/hih6130/gauge-humidity\" interval={} N:{:.2f}".format(
            hostname, interval, retval[1]))

        time.sleep(interval)


The collectd plugin configuration is pretty easy; the dump1090 files are readable by nogroup so you can execute that script as nobody. As I said, I made an i2c user that is a member of the i2c group so the Python SMBus module can communicate with the sensor.

LoadPlugin exec
<Plugin exec>
    Exec "nobody:nogroup" "/usr/bin/"
    Exec "i2c:i2c" "/usr/bin/"
</Plugin>

Once the statistics were flowing into InfluxDB, it was just a matter of putting together a dashboard in Grafana.

Summary Panels

Host Status Panels

SDR Status Panels

The JSON from Grafana for the dashboard is here, though it may require some tweaking to work for you.

So far I'm pretty happy with the way this all went together. I still have a bunch of network equipment that I'd like to bring over and a stack of ancient MRTG graphs to replace. Hopefully it will be a similarly simple experience.


March 24, 2018 @16:41

I mentioned offhandedly at the end of my post on how Docker and Flask are helping me sleep at night a potential use case for an iOS share extension. I finally started working on that idea.

iOS Share Extension Picker

In general I have to say I'm pleased with the iOS development workflow. The APIs seem fairly robust, and Swift 4 has clearly had a lot of thought and work put into it to make it much more accessible and readable than Objective C, which I always kind of felt was... a bit overloaded with punctuation. I feel like XCode is the real boat anchor on the whole process, most specifically the interface building tools. I found myself dreading having to work on the UI portion of my app; there are so many quirks in manipulating the various constraints and connections between UI elements and code that it just... hurt.

Getting my head back around the event-driven nature of GUI programming took a little while, and combined with the nuances of the Apple GCD threading model that their application framework makes use of, it does feel quite a bit different than the much more straightforward, mostly web-based programming that I have done recently. The only other real irritating piece isn't strictly speaking Apple's fault: Swift is a strongly-typed language and JavaScript isn't, which lends itself to some machinations converting one to the other (my API uses JSON for communication). I do feel a bit that, given the popularity of JSON for web APIs, this should have been more of a solved problem. I ended up using SwiftlyJSON but I still pine for the ease of Python's json.loads and json.dumps methods.

So the app started out as little more than a settings page and a share extension, which technically ticked the boxes that I originally set out to tick, but once I had gotten there scope creep set in. I justified it to myself by saying that since I was trying to reduce the back and forth between tabs in Safari to start a job, I could just go a few steps further and put all the track and container management into the app as well.

Original ContainerStatusViewController

Honestly it didn't take too terribly long to do. I initially used FRadioPlayer to handle the playback but after having to make a bunch of changes to suit my needs I decided to simply re-implement it. As a bonus it ended up about half the size, since I'm not playing a live stream.

New TrackTableViewController

It does really make me sad that Apple has chosen to completely ignore the open source community in their development programs. I don't have a way to distribute the resulting application in any way other than as an XCode project. To use it you will have to get a copy of the project from the git repository and then build and install it from a Mac running XCode. I can't just give you something you can drop into iTunes. In a way I empathize with the desire to prevent random malware by requiring signed bundles to come from the App Store, but not giving the user the ability to choose to install from alternative repositories does mean that there really isn't a good way to distribute free software on the platform.


Of course it's not like this will be useful without the rest of the infrastructure, so you'll need the Flask application, the Docker containers, and all of the requisite configuration to make all those things talk. Also be aware that iOS will refuse to make a non-HTTPS connection and will also refuse to verify a CA or certificate that has a SHA-1 hash anywhere in the trust chain (all very good things).

Lock Screen Now Playing

So far it has been an interesting journey down the rabbit hole. There is a lot of maturity on the Apple side of things. Containers certainly have their uses though I'm still afraid of people who think they are a good way to distribute software outside of their organizations. I still think that for most things native applications are at best a step back on the client side. There is a lot less code to make the website part go versus what is required to make the iOS app go and if the Share Extension was exposed via JavaScript I never would have needed to write an app in the first place. 🍺

February 06, 2018 @12:56

The Background

Right out of the gate I'll admit that my implementation is a bit naive, but it is, if nothing else, an example of what can be accomplished with a little bit of work. In general my microcontroller development workflow has been tied to a particular system, largely using the vendor-supplied tools like MPLAB X or Atmel Studio. This is usually OK as I need to have physical access to either the prototype or production hardware for testing and verification. From a high level it generally looks like this:

Three Swans Inn Lighting Controller

Lately I have switched most of my Atmel development away from their IDE and just use a Makefile to build and flash chips. This actually lets me write and build the code from anywhere, as it is all stored in a git repository which is checked out on a system that I can access from anywhere. The build tool chains are a bit obtuse, though, and keeping all the moving parts around just to ensure the code compiles has been a challenge.

The Goal

So, containers! I made a couple of containers with the Atmel and Microchip tool chains in them; with a little glue I was able to connect the post-receive hook of my git repository to my Docker instance to produce on-commit builds of the appropriate project.

Part of the glue stems from the fact that I have all the firmware source in a single git repository for ease of maintenance, so I try to determine what changed in order to avoid rebuilding everything all at once. I also have several different microcontrollers that I target, so there is a little logic in the hook to launch the right container for the right code.

The Hook

I snagged most of this from the post-receive hook that handles the deployment of this website. The biggest change was detecting which project within the repository needs to be built.

#!/bin/sh
# microcode post-receive hook
# (c) 2018 Matthew J. Ernisse <>
# All Rights Reserved.

set -e

GIT_DIR=$(git rev-parse --git-dir 2>/dev/null)

try_container() {
    # project:builder map; the builder name is handed off to the
    # container launcher.
    local mapping="\
        led-timer:atmelbuilder \
        led-strand:atmelbuilder \
        bar-lighting:atmelbuilder \
        led-gadget:microchipbuilder \
    "
    if [ -z "$1" ]; then
        echo "usage: try_container project"
        return 1
    fi

    for x in $mapping; do
        if [ "$1" = "${x%%:*}" ]; then
                "${x##*:}" "$1" "$REV" "$GIT_AUTHOR"
        fi
    done
}

if [ -z "$GIT_DIR" ]; then
    echo >&2 "fatal: post-receive GIT_DIR not set"
    exit 1
fi

while read oldrev newrev refname; do
    REV="$newrev"
    GIT_BRANCH="$refname"
done

GIT_AUTHOR=$(git show --format='%ae' --no-patch $REV)

for fn in $(git diff-tree --no-commit-id --name-only -r $REV); do
    PROJECTS="$PROJECTS $(dirname $fn)"
done

if [ ! "$GIT_BRANCH" = "refs/heads/master" ]; then
    exit 0
fi

for project in $PROJECTS; do
    try_container "$project"
done

The Container Launcher

This is basically a stripped down version of the container module from my Flask youtubedown front end.

#!/usr/bin/env python3
''' (c) 2018 Matthew J. Ernisse <>
All Rights Reserved.

Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:

    * Redistributions of source code must retain the
      above copyright notice, this list of conditions
      and the following disclaimer.
    * Redistributions in binary form must reproduce
      the above copyright notice, this list of conditions
      and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
'''
import docker
import os
import sys

# Configuration -- placeholder values, adjust for your environment.
DOCKER_HOST = "tcp://docker.example.com:2376"
DOCKER_REPOSITORY = "registry.example.com"
TLS_BASEDIR = os.path.expanduser("~/.docker")

def run_container(container_name, image, args):
    ''' Execute a container. '''
    tls_config = docker.tls.TLSConfig(
        client_cert=(
            os.path.join(TLS_BASEDIR, 'cert.pem'),
            os.path.join(TLS_BASEDIR, 'key.pem')),
        ca_cert=os.path.join(TLS_BASEDIR, 'ca.pem'))

    client = docker.DockerClient(base_url=DOCKER_HOST, tls=tls_config)
    client.containers.run(
        image,
        args,
        name=container_name,
        detach=True,
        volumes={
            '/var/www/autobuild': {
                'bind': '/output',
                'mode': 'rw',
            },
        })

def usage():
    print("usage: {} image_name project_name git_rev author".format(
        os.path.basename(sys.argv[0])), file=sys.stderr)

if __name__ == "__main__":
    if not len(sys.argv) == 5:
        print("Invalid number of arguments.", file=sys.stderr)
        usage()
        sys.exit(1)

    builder = sys.argv[1]
    project = sys.argv[2]
    git_rev = sys.argv[3]
    author = sys.argv[4]

    container_name = "{}-builder--{}--{}".format(
        project, git_rev, author)

    image = "{}/{}:latest".format(DOCKER_REPOSITORY, builder)

    try:
        print("*** Running {}...".format(image))
        run_container(container_name, image, project)
    except Exception as e:
        print("!!! Failed to start container: {}".format(str(e)))
        sys.exit(1)


A short while after I push a commit a container will run, build the project, and emit an archive containing the relevant .hex files for the microcontroller as well as a log of the process. I still have some work to do on the Microchip Makefile, but for the most part this makes things a lot easier. I can develop from any workstation as long as I have the programming utilities for the chips, and if I don't I can at least ensure that the code builds every time I commit. The plumbing is pretty generic so I'm sure I'll find other places to use it; for example, I was thinking I should try to find a way to build and push the Docker images to my private registry upon commit.

January 10, 2018 @10:37

YouTube Dead Job Running

I don't want to start out by bitching about yet another crappy Internet thing, but I think I have to. YouTube Red is this play from YouTube to try to get into the paid streaming business, and one of the 'features' they tout is the ability to play videos in the background on mobile devices... something that totally worked JUST FINE before.

This is a dumpster on fire

Over the last year or so I figured out a rather complex work-around for this on the iPad.


  1. go to 'desktop mode' in the browser
  2. hit the PiP button, slide the video off the screen
  3. wait a few seconds
  4. lock the device
  5. playback will pause a few seconds later
  6. hit play from the lock screen

If you did it right the iPad goes back to making noise, if you screwed up the process or the timing the nasty JavaScript takes notice and stops playback (causing the 'playing' UI on the lock screen to go away either when you lock or when you hit play from the lock screen). Since this needs PiP it doesn't work on the iPhone. 😦

Old Man Yells at Cloud

Doing this dance is annoying, and yet from time to time I like to listen to random music mixes as I'm falling asleep so I have put up with it this far. (As an aside, lately I've been listening to Mike Duncan's Revolutions podcast before bed. He did The History of Rome, which I also loved, so check it out.) Always on the lookout for reasons to wield a big hammer at these kinds of problems, I started thinking about potential solutions and realized that I had been doing a different manual dance to get tracks loaded onto my phone.


That dance looks like:

  1. youtubedown the video.
  2. ffmpeg to strip the video track out and leave the audio.
  3. copy into iTunes, cry, flail, gnash teeth
  4. ???
  5. have audio on my phone... somewhere

There has to be a better way... right? Obv.

It turns out tossing ffmpeg and youtubedown into a Docker container was stupidly easy, and that gives me a nice way to automatically do steps 1 and 2 above. Cool. Now the trick was how to make this all just happen from the phone, so I needed some sort of interface. I just happen to have a bunch of boilerplate Flask code laying around from other projects that leans on Bootstrap, so I dusted that off and started plugging away.
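For a sense of how small that container is, here is a sketch of the sort of Dockerfile involved. The base image, package list, and the youtubedown fetch below are illustrative, not my actual build; jwz's script URL in particular may have moved.

```dockerfile
# Illustrative sketch: Debian base with ffmpeg plus jwz's
# youtubedown (a Perl script) on the PATH.
FROM debian:stretch-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ffmpeg perl curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

RUN curl -o /usr/local/bin/youtubedown \
        https://www.jwz.org/hacks/youtubedown \
    && chmod +x /usr/local/bin/youtubedown

# Downloads land in /output, which the Flask app bind-mounts.
WORKDIR /output
ENTRYPOINT ["/usr/local/bin/youtubedown"]
```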

To take a quick step back, it is useful to know that most of my Internet facing infrastructure runs in a colocation facility. All of my internal stuff then connects to the colo via VPN and through the magic of OSPF and BGP it ensures that all the IPv4 and IPv6 traffic for 'my things' crosses only those VPN links. Almost all of the time this is a good thing. In fact this is the basis of how I ad-block and content filter all my devices including my iPhone and iPad. In this case though having to traverse the VPN and therefore my colocation provider's uplink twice isn't really useful for streaming audio that originally came from the Internet. I do have some servers at home though so I tossed Docker on one of the VMs with the idea that external requests can proxy over the VPN but if I am in bed I just have to traverse my WiFi. Sweet.

After a weekend of what felt like mostly fighting with JavaScript I came up with YouTube Dead.

How this now works:

  1. find video I want to listen to.
  2. copy URL
  3. paste URL
  4. press start
  5. listen

Starting a job

Being able to launch the worker containers close to where their output will be used is a win for me. It solved both problems without the typical 'you now have n+1 problems' fallout. The Flask app uses a worker thread to watch the container, and if it finishes successfully it stores the metadata so I can listen to previously downloaded tracks with the click of a link. It would be trivial to detect the location of the user request and launch the container at any of my sites, letting me keep the data closest to the user that is requesting it. It would also be pretty trivial to extend this model to basically anything that I can shove into a container that I might want to trigger from the web. Next though, I think I'll start earnestly looking at the dumpster fire that is Apple iOS development to see if I can put together a share extension to eliminate #2, #3 and #4 in the list. 🍺 🐳
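The worker-thread pattern described above can be sketched roughly like this. The `ContainerWatcher` class and its `poll`/`on_success` callbacks are hypothetical names of my own; the real app presumably polls the Docker API for the container's exit status rather than an injected callable.

```python
import threading
import time

class ContainerWatcher(threading.Thread):
    '''Daemon thread that polls a running job container and fires a
    callback once it exits cleanly.'''

    def __init__(self, poll, on_success, interval=1.0):
        # poll() returns None while the container is still running,
        # or its integer exit code once it has stopped.
        super().__init__()
        self.daemon = True
        self.poll = poll
        self.on_success = on_success
        self.interval = interval

    def run(self):
        while True:
            status = self.poll()
            if status is None:
                time.sleep(self.interval)
                continue
            if status == 0:
                # Container finished OK; this is where the app would
                # persist the track metadata for the download list.
                self.on_success()
            return
```

Making the thread a daemon means a crashed or restarted Flask process doesn't hang waiting on orphaned watchers; the containers themselves are cleaned up by Docker.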

December 16, 2017 @12:32


I don't remember where I ran across it, but I thought it was a pretty rad idea. I was able to find an old version of my website there and enjoy it in a browser similar to what I would have been running back then...

Old Screenshot

Inspired by their work and wanting to fool around more with Docker containers, I set about making some containers filled with old browser goodness.

If you saw my earlier post on Docker you might have noticed that, like a reasonable human who values his time, I'm running macOS on my workstation and not Linux. So now that I want to run X11 apps, the trick of just passing the local X11 socket through to the container isn't viable. I could install something like XQuartz but... no. That sounds awful, and it also totally ignores Windows users. I'm told Windows can run Docker too... So the first hurdle was figuring out how to overcome my... opinions.

I ended up making a container called x11base that does a few things. For as wide support as possible, it checks to see if you have passed an X11 socket into the container and if so it will use it. If you have not, then it launches an X11 server, then x11vnc, so you can attach to it with any VNC client.
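The dispatch logic x11base performs can be sketched like this. The real container presumably does this in a shell entrypoint; the function name, socket path, and display numbers here are my own assumptions.

```python
import os

def choose_display(x11_socket='/tmp/.X11-unix/X0'):
    '''Decide how the container should get an X11 display.

    Returns ('passthrough', display) if the host passed its X11
    socket into the container, otherwise ('vnc', display) meaning
    the entrypoint should start its own X server plus x11vnc.'''
    if os.path.exists(x11_socket):
        # Host socket is mounted; reuse the host's display directly.
        return ('passthrough', ':0')
    # No socket passed in; run a private display for x11vnc to export.
    return ('vnc', ':1')
```

Checking for the mounted socket first keeps the Linux passthrough path fast while still giving macOS and Windows users a working VNC fallback.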

All the browsers inherit from this container.

You can find them in my git repository. At the moment I have Netscape 3.04 Gold and Netscape 4.79 working. I'm having some frustrating problems with Wine, so getting Netscape 1.22/win32 working is proving to be difficult.

Netscape 3.04 Gold Netscape 4.79

I included jwz's HTTP/1.0 proxy so they sort of actually work, though HTTPS has changed a lot since these browsers were made, so I don't think any secure websites will load without further hacking. I've been meaning to hack together an sslstrip-type proxy, but that feels kind of dirty (not that it will stop me, mind you...).


For the eagle eyed, there is also a version of Elite Plus in there, it works over VNC... sorta, but you probably should run it on Linux...

See you in the black...

Edited: October 18, 2017 @14:03

I have been meaning to play around with containers for a while, but for the life of me I have not found a real reason to. I feel like without a real use case, any attempt I'd make to learn anything useful would be a huge waste of time. There are a bunch of neat toys out there, from random ASCII art commands to a crazy script that 'emulates' the insane Hollywood-style computer screens, as well as base images for all manner of application stacks and frameworks, but all of those are easily installable using your favorite package manager.

None of this really made me care enough to install and learn anything about any of the container ecosystems. I do like the idea of containers as sandboxes but as a macOS user I have that built in for free, so I have no impetus there either.

Still, there is a lot of talk about containers in the development community, so I have been keeping an eye out for a use case where I could justify investing time in them. Lately my primary development work has been creating various bespoke Flask applications. Flask comes with Werkzeug and a simple server built in, so I typically just run the internal server, iterate on the code, and then commit to my git repository. Eventually Puppet comes along and does the heavy lifting to deploy the changes to production. This works really well and I can't really figure out a reason to shoehorn a container into the process.

Docker on Aramaki

Turns out the excuse came from this web site. As I have written about before, this entire site is generated by a home-brew Python script. It takes the design from templates and the blog articles from Markdown files, and it is triggered from a git post-receive hook on the web server. This lets me make a very fast web site that doesn't rely on any dynamic pages or API calls. The one drawback of this method lies in the differences between viewing pages over HTTP/HTTPS versus off the local filesystem. To test the site locally I was hand-editing some of the output to change some of the URLs from paths that would work on the website to paths that work on the local filesystem. This was getting annoying and frankly is just the thing to replace with a very small shell script.
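The post-receive hook mentioned above boils down to checking the pushed ref out into the web root and re-running the site builder. This is my own sketch of that idea; the paths and branch name are assumptions, not the author's actual hook.

```python
def checkout_cmd(git_dir, work_tree, branch='master'):
    '''Build the git argv a post-receive hook would run to
    force-checkout the freshly pushed branch into the web
    server's document root.'''
    return [
        'git',
        '--git-dir', git_dir,      # the bare repository pushed to
        '--work-tree', work_tree,  # the docroot to deploy into
        'checkout', '--force', branch,
    ]
```

A hook like this runs after every push, which is what makes the "git push to deploy" flow feel instantaneous.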

I initially thought about modifying the build script to use filesystem paths when building locally, but that would just add complexity and potential for breakage. I then thought about fooling around with the web server built into macOS, but I am generally loath to mess around with things in the bowels of the OS lest I do something that Apple breaks in an update. In the end I figured this might finally be a good excuse to pull together a Docker container running Apache that included the Python bits the site builder needed, and then, in true ex-sysadmin fashion, wrap it up in a nice shell script.

This resulted in a pretty reasonable work flow.

  1. Update working copy of site.
  2. Run
    • build Docker image
    • copy working copy into Docker image
    • launch an instance of this image.
    • Open a browser to the URL of the local Docker instance.
  3. Verify things are the way we want.
  4. Fix and GOTO 1 or continue.
  5. git add, commit, push to remote.
    • git hook deploys to production.

Now, to be fair, there are probably easier ways to do this, including using a staging branch served on another domain name, in another directory, or on an internal VM. That would save me from building, launching, and cleaning up images. I could use my normal publishing work flow and scripts to simply do the right thing and then merge back to master when I'm ready to deploy the site to production.

But that doesn't give me an excuse to play with 🐳 Docker. 😁


As of the time of writing these are the main pieces that make this work flow possible.


FROM debian:latest
LABEL version="0.3.0" \
    vendor="Matthew Ernisse <>" \
    description="Build and serve"

RUN apt-get update \
    && apt-get install -y \
    apache2 \
    python \
    python-pip \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /var/www/ \
    && a2dissite 000-default

COPY docker/going-flying.conf /etc/apache2/sites-available
COPY . /var/www/

RUN a2ensite going-flying \
    && pip install \
    --requirement /var/www/ \
    && /var/www/

CMD ["/usr/sbin/apachectl", "-DFOREGROUND"]

This is pretty straightforward. I take the Debian base Docker image and install the bits I need to build and serve the site. I also have a very basic apache configuration fragment that points the server to the location I will be copying the site files to (the same location as in production so the script doesn't have to care). I then simply copy the working copy of the site into the image and run on it.

#!/bin/sh
# (c) 2017 Matthew J. Ernisse <>
# All Rights Reserved.
# Build and run a copy of the website inside a Docker container.

set -e

echo " test builder."

if ! which docker >/dev/null 2>&1; then
    echo "docker not found."
    exit 1
fi
if [ "$(uname -s)" != "Darwin" ]; then
    echo "Not running on macOS.  Exiting."
    exit 2
fi
cat << EOF

                    ##         .
              ## ## ##        ==
           ## ## ## ## ##    ===
       /"""""""""""""""""\___/ ===
      {                       /  ===-
       \______ O           __/
         \    \         __/
EOF

echo "Building image..."
_image=$(docker build --force-rm --squash . -t going-flying:latest | \
    awk '/^Successfully built [0-9a-f]+/ { print $3 }')

docker run --rm -d -p 8080:80 --name going-flying $_image > /dev/null

open "http://localhost:8080"

echo "Container running, Press [RETURN] to end."
read _
echo "Stopping..."

docker stop going-flying > /dev/null
echo "OK."

This just does the docker build and docker run dance to get a container running. It could probably be simplified even further, but it gets the job done. The biggest things were making sure I wasn't leaving a pile of images and whatnot lying around, and not having to remember the different command line switches needed to make it all Just Work.

The only other change was a hook in that changes the base URL of the site from the normal URL to http://localhost:8080/. It does this by simply detecting if it is running in a Docker container and changing an instance variable.

def is_docker():
        '''Return True if we're running in Docker.'''
        if not os.path.exists('/proc/self/cgroup'):
                return None

        with open('/proc/self/cgroup') as fd:
                for line in fd:
                        if 'docker' in line:
                                return True

        return None

[ ... later in main() ... ]

        if is_docker():
                BuildConfig.urlbase = "http://localhost:8080/"
                print ":whale:  container detected."

I was skeptical at first about whether this was going to be worth it, but after using it for a few site updates I honestly feel it was easier than many of the alternatives, and in the end it let me go back to fixing a bunch of style and template bugs that had been on the TODO list for some time. I'd call that a result worth the effort. I look forward to finding more places where a container fits into my work flow. It might even turn into an excuse to run a private registry and start playing with some of the CI tools to run builds.


It turns out that Safari doesn't like to autoplay videos that aren't in view when the page loads. I tried to slam together some JavaScript to 'fix' this, but your mileage may vary. If the videos aren't playing you should be able to right click on one of them, say 'show controls', then hit play.

Subscribe via RSS. Send me a comment.