Matthew Ernisse

April 04, 2019 @09:35

I have mentioned a few times that I rely on OpenBSD VPNs to ensure that clients outside of my home network get the same level of protection as they do inside. This means that I can use my existing DNS and proxy infrastructure to prevent malvertising, tracking, beacons, and poorly behaved applications and websites from leaking personal information, and I can prevent wifi hotspots from analyzing my traffic or injecting JavaScript. Creating the actual infrastructure is out of scope for this post, but I did previously post some information about what the DNS configuration looks like.

Part 1: OpenBSD

Setting up a VPN with OpenBSD is extremely simple compared to the many alternatives. This is a large part of why I like OpenBSD so much. I have several site-to-site VPN tunnels as well as the road warrior configuration all terminating on the same iked(8) instance. In my case I use an internal certificate authority for all on-network SSL/TLS so it was fairly easy to extend that to authentication for my VPNs. You can run your VPN with a pre-shared key; however, for security purposes I cannot recommend that and will instead talk about the configuration as I have implemented it.

I'm going to assume you have generated a CA and certificate and key pair for your VPN server as well as your clients in PEM (PKCS1) format.

  1. Your CA's certificate goes in /etc/iked/ca/ca.crt. This can be a bundle of CA certificates to trust.
  2. Your VPN server's certificate goes in /etc/iked/certs/ and is named based on the ID you will be using. I use FQDNs for peer identification so mine is simply the hostname of the server, something like /etc/iked/certs/vpn.example.com.
  3. The private key for your VPN server goes in /etc/iked/private/private.key.
  4. Ensure you have pf(4) set up to properly allow traffic both to iked(8) and from the client tunnel(s) to the Internet (with NAT if you need it). For more information see pf.conf(5).
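As a sketch, the relevant pf.conf(5) pieces might look like the following. The egress interface group, the enc0 interface, and the 10.99.0.0/24 client pool are assumptions; adjust them to your network:

```
# IKEv2 negotiation and NAT-T to the VPN endpoint
pass in on egress proto udp from any to (egress) port { isakmp, ipsec-nat-t }
pass in on egress proto esp from any to (egress)

# Decapsulated client traffic arrives on enc(4); NAT it out to the Internet
pass in on enc0 from 10.99.0.0/24 to any keep state (if-bound)
match out on egress inet from 10.99.0.0/24 to any nat-to (egress)
```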

You should now be ready to tackle your iked.conf(5). Here you will need to make some choices based on your devices. You may need to look up their capabilities and test some of the options. I have used the following configuration with clients running iOS 12.2 and macOS 10.14.4. I consider this to provide the minimum viable security which is why I have the re-key lifetime fairly short.

ikev2 "RoadWarrior" passive esp \
        from 0.0.0.0/0 to 0.0.0.0/0 \
        peer any \
        ikesa enc aes-256 \
                prf hmac-sha2-256 \
                auth hmac-sha2-256 \
                group modp2048 \
        childsa enc aes-256 \
                auth hmac-sha2-256 \
                group modp2048 \
        srcid [ Your VPN server's FQDN ] \
        lifetime 180m bytes 16G \
        config address [ Addresses to assign to clients in CIDR ] \
        config name-server [ Your DNS server 1 ] \
        config name-server [ Your DNS server 2 ] \
        config netmask [ Dotted quad netmask for your client network ] \
        tag "$name-$id"

This rule should be first in your iked.conf(5) so that it only matches when no more specific rule does. Please familiarize yourself with the iked.conf(5) manpage; it explains all the options available to you. The key here is that I'm setting up the tunnel to capture all IPv4 traffic from the client, and the client can come from any IPv4 address. This is part of why I don't use a pre-shared key for this. A pre-shared key could be easily compromised, turning your VPN endpoint into an open proxy for all sorts of Internet ne'er-do-wells; with public key cryptography, an attacker would have to obtain a valid, signed certificate and corresponding private key. In that unlikely event you can always revoke that certificate and place a CRL in /etc/iked/crls/.

Make sure you read the iked(8) manpage; depending on your network setup you may find that you need one or more of the flags for reliable operation. In particular -6 and the -t/-T pair may be important to you.

Once you reach this point, enable and start iked(8); on a modern OpenBSD that is rcctl enable iked followed by rcctl start iked.

Part 2: macOS and iOS clients

Apple has included a reasonably full-featured VPN client in both macOS and iOS, though most of the connection configuration is not exposed via a GUI. I used Apple Configurator 2 to generate a configuration profile that can be installed on both macOS and iOS clients. You will need a CA certificate, and a client certificate and key pair. For clients I issue client-only certificates, which adds the benefit that the client cert is not considered valid for running a server: in the extremely unlikely case of a compromised client, the certificate cannot be used to impersonate a server. You will want the CA certificate as a .PEM file (PKCS1) and the client certificate and key bundled in a .p12 (PKCS12) file to make Apple Configurator happy.
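If you need to produce that .p12 bundle, openssl(1) can do it. This is a sketch: the filenames are assumptions, and the self-signed certificate generated here is just a stand-in for your CA-issued client certificate:

```shell
# Stand-in client key and certificate (in practice, use your CA-issued pair)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=client.example.com" \
    -keyout client.key -out client.crt

# Bundle the certificate and key into a PKCS12 file for Apple Configurator;
# the Configurator will ask for the export password when you import it
openssl pkcs12 -export -in client.crt -inkey client.key \
    -passout pass:changeme -out client.p12
```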

Apple Configurator 2

The workflow for the Apple Configurator 2 utility is fairly straightforward. There are many more options available to you than what I will cover here. For example, in my production deployments I include various device restrictions, preload the WPA2-Enterprise configuration for my wifi SSID, and configure my AirPrint printers, all alongside the VPN and certificate configuration shown here. This is just what you need to get a VPN connection going.

  1. Fill out the mandatory General page.
  2. Add your CA and client certificate bundles.
  3. Configure the VPN client, matching the authentication and transport settings from iked.conf(5) above.

Apple Configurator 2 General

Add Certificates

VPN Page 1

VPN Page 2

VPN Page 3

Manual Additions for Connect on Demand

I want my VPN to connect any time I'm not on a trusted wifi network. Thankfully Apple lets you connect to your VPN on demand with some simple rule-based matching (see the VPN section of the Configuration Profile reference guide). Unfortunately the Configurator does not allow you to set up those rules in the GUI, so once you have your profile made you will need to add some keys to the plist. Open up the .mobileconfig you just created in a text editor and look for the chunk that looks like this.

            <key>PayloadDescription</key>
            <string>Configures VPN settings</string>

To that dictionary you will need to add some additional keys to setup an OnDemand configuration. An example of the policy I use is below, but the reference document above describes the rules available in more detail. The example will start the VPN connection any time the device is not connected to the list of trusted wifi SSIDs and will automatically disconnect when the device connects to one of those trusted SSIDs.

            <key>OnDemandEnabled</key>
            <integer>1</integer>
            <key>OnDemandRules</key>
            <array>
                <dict>
                    <key>Action</key>
                    <string>Disconnect</string>
                    <key>SSIDMatch</key>
                    <array>
                        <string>Your Home SSID</string>
                        <string>Your Other SSID</string>
                    </array>
                </dict>
                <dict>
                    <key>Action</key>
                    <string>Connect</string>
                </dict>
            </array>

Now install the profile and you should see your VPN appear in the Settings application. Once you leave your SSIDMatch networks you should see the icon at the top of the screen showing that you are now connected.

iOS VPN Indicator

Part 3: Errata and Caveats

This has worked really well for me for years now; however, there have been edge cases. In particular I have found some cases where the captive portal detection in iOS doesn't work with the on demand VPN connection.

To disable your VPN in iOS open Settings and tap General, then scroll down to VPN and then tap the "i" next to your connection name. On the following screen de-select Connect On Demand. This will disable the OnDemandRules and let traffic flow out the normal wifi / cellular interfaces.

Real clear tap target, Apple...

Disable Connect On Demand

Since your device's Internet access will now rely on your VPN tunnel, you may want to look into adding high availability to your OpenBSD endpoint. The manpages for carp(4), pfsync(4), and sasyncd(8) are good places to start.

April 02, 2019 @18:10

I really don't want to sound like the old man yelling at a cloud here; however, sometimes you need to. When DRM first appeared as a way to sell digital goods on the Internet and prevent the dreaded piracy and sharing that was certain to be the downfall of all capitalism and hurl us into the darkest night, the Internet was, as you might expect, quite put out.

Books Burning

The problem of course is that the digital age is littered with the corpses of companies that couldn't possibly disappear. To make matters worse, the morass that spawns companies in the post-dot-com era has such a death grip on anything that might even smell slightly like intellectual property that in death these companies often live on, like zombies, to eventually be sold off to recoup some of the venture capital that they were founded upon. What that ultimately means is that the DRM protected digital assets that customers licensed are almost always rendered useless.

Over the decades I have been involved in technology I have continued to prefer to have physical, or DRM-free digital versions of the things that I buy. Sometimes this means spending more money than the DRM encumbered, all digital versions; however, this is fine with me because at the end of the day these things will survive the almost certain death of the distributor from whom I purchased the items.

Sure, the cloud, and the digital economy are convenient. I will happily affirm the ease of use that Amazon, Apple, Spotify, Pandora, and others all bring to the consumer, often with minimal admonishment. At the end of the day though, the risk of the inevitable failure of the companies that store everything "for me" vastly outweighs the ease of use. The anxiety I would feel knowing that thousands of dollars worth of content, entertainment, culture, history, literature, and art could disappear from my grasp at the whim of a faceless corporation trying to fulfill its mandate to provide shareholder value is frankly not something I want in my life.

I more often than not find Cory Doctorow a little 'out there', but in this case he's spot on.

February 20, 2019 @10:21


Ubiquiti's UniFi platform has the ability to run scheduled speed tests to keep an eye on your ISP's throughput from their USG router at a site. I discovered this back when I finished converting the network at the office over to UniFi and have been wanting to replicate this functionality at my other locations where I use OpenBSD routers. Currently I aggregate the data from those devices into my new Grafana-based monitoring platform, which I wanted to continue to use so that I have a consolidated view into the infrastructure.

UniFi Speedtest


The Ubiquiti speed test functionality connects to an echo server running in Amazon AWS and reports back to the controller, so the first thing I needed to do was either find an existing way to replicate that functionality or build something similar. Thankfully Debian ships a CLI application, speedtest-cli, that functions similarly to the Ubiquiti tester but uses already existing public infrastructure. I ended up wrapping it in a really quick and dirty speedtester container that I could run periodically from cron(8).


Every hour cron fires the following script on one of my Docker engine hosts.

Container launch job


#!/bin/sh
set -e

# The image name on the last line is an assumption; use whatever you
# tagged your speedtester build as.
docker run --rm \
    -e INFLUX_HOST=[redacted] \
    -e INFLUX_USER=[redacted] \
    -e INFLUX_PASS=[redacted] \
    -e INFLUX_DB=[redacted] \
    --network grafana_backend \
    speedtester
Inside the container the entry point runs speedtest-cli with a few arguments to select the closest server and provide the output as a JSON formatted string which then gets piped into a small Python script to ship that data off to InfluxDB.

JSON to InfluxDB logger

#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
''' (c) 2019 Matthew J Ernisse <>
All Rights Reserved.

Log the results of a run of speedtest-cli(1) to an InfluxDB database.

Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:

    * Redistributions of source code must retain the
      above copyright notice, this list of conditions
      and the following disclaimer.
    * Redistributions in binary form must reproduce
      the above copyright notice, this list of conditions
      and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
'''
import json
import os
import sys

from influxdb import InfluxDBClient


class SpeedtestLogger(object):
    ''' Parse the JSON output of speedtest-cli and post the statistics
    up to an InfluxDB.  DB configuration is stored in the environment.
    '''
    def __init__(self):
        ''' Read the configuration environment variables and setup
        the InfluxDB client.

            INFLUX_HOST - Hostname of InfluxDB server
            INFLUX_PORT - Port (8086 by default)
            INFLUX_USER - Username to authenticate with
            INFLUX_PASS - Password to authenticate with
            INFLUX_DB   - Database to log to

        The Measurement will be called: speedtest

        The fields will be:
            download, upload, and ping

        The measurements will be tagged with:
            server_id, country_code, city, sponsor
        '''
        self.influx_config = {
            'host': os.environ.get('INFLUX_HOST'),
            'port': int(os.environ.get('INFLUX_PORT', 8086)),
            'user': os.environ.get('INFLUX_USER'),
            'pass': os.environ.get('INFLUX_PASS'),
            'db': os.environ.get('INFLUX_DB')
        }

        self.fields = {
            'download': 0.0,
            'upload': 0.0,
            'ping': 0.0
        }

        self.tags = {
            'city': '',
            'country_code': '',
            'server_id': '',
            'sponsor': ''
        }

        self.timestamp = 0.0

        self.ifclient = InfluxDBClient(
            self.influx_config['host'],
            self.influx_config['port'],
            self.influx_config['user'],
            self.influx_config['pass'],
            self.influx_config['db']
        )

    def parseJson(self, s):
        obj = json.loads(s)
        self.fields['download'] = obj['download']
        self.fields['upload'] = obj['upload']
        self.fields['ping'] = obj['ping']

        self.tags['city'] = obj['server']['name']
        self.tags['country_code'] = obj['server']['cc']
        self.tags['server_id'] = obj['server']['id']
        self.tags['sponsor'] = obj['server']['sponsor']

        self.timestamp = obj['timestamp']

    def postToInflux(self):
        if not self.timestamp:
            raise ValueError('No timestamp, did you call parseJson first?')

        measurements = [{
            'measurement': 'speedtest',
            'tags': self.tags,
            'time': self.timestamp,
            'fields': self.fields
        }]

        self.ifclient.write_points(measurements)


if __name__ == '__main__':
    if sys.stdin.isatty():
        print('stdin is a tty, aborting!')
        sys.exit(1)

    logger = SpeedtestLogger()

    with sys.stdin as fd:
        input = fd.read()

    logger.parseJson(input)
    logger.postToInflux()

Now that the data is available to Grafana I was able to easily add some panels to my existing router dashboard to include the test measurements.

Grafana Router Dashboard


Start to finish it only took a few hours to get all of this put together. I didn't need to put the speed tester into a container, but it seems like a reasonable idea to ensure I have a wide array of future deployment options open to myself. I already have some cloud hosted assets, so it makes sense to be able to extend the monitoring into those environments if the need arises. Even though I have less than a week of data I find it a bit surprising that my ISP has been fairly reliable. I'm currently on an up to 100Mbps / 10Mbps plan and the 95th percentile results over the last few days have been within 15% or so of meeting that claim. I remain impressed with the ease of use and flexibility of the tools; back when I worked for a national ISP we collected similar information for billing dedicated Internet customers, and it was all done with a large web of custom code.

Grafana panel configuration

Grafana made visualizing the information almost shockingly easy. Especially nice are the built-in transforms that allow you to calculate the 95th percentiles over arbitrary time windows.
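Under the hood that percentile panel boils down to an InfluxQL query along these lines; the measurement and field names match the logger script, while the time window and grouping here are arbitrary:

```
SELECT PERCENTILE("download", 95) FROM "speedtest"
  WHERE time > now() - 7d GROUP BY time(1d)
```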

Honestly I'd love to see features like this built into consumer endpoint gear, I think it would help keep ISPs honest.

January 28, 2019 @21:01

Why are you a green bubble?

People often ask me why I have so many of the features of my phones turned off. My iPhone has iCloud, Siri, FaceTime and iMessage all firmly disabled, and they have been since I originally set up the phone; my Mac has never signed into iCloud, and my Android phone has just about everything, including Google Play Services, disabled. My personal philosophy is that if it doesn't provide me with value, I disable it.


I will just leave this here

Add in all the lock screen bypass bugs and the fact that Apple still won't keep HomeKit turned off properly and I have to wonder why anyone involved in technology trusts any of this crap.

This is how you get ants

So I'm just going to leave this here so I can link it the next time someone asks me why I'm a green bubble.

January 05, 2019 @17:10

I own my own cable modem and have for the past 10 or so years. At first it was to save on the rental fee for the garbage equipment the local cable company supplied, but since they have stopped charging that it became more of a control thing. I have no need for the garbage router or wifi access point that they supply. I used to work for an ISP so I'm well aware of the quality and support these devices receive. (Fun fact: the average per-unit cost target when I was in the business for a residential CPE device (customer premise equipment) was between US $3 and US $8. For business devices it went up to a whopping US $25 or so...) I also greatly prefer the features and power that my OpenBSD routers give me, and I've written more than a few posts about the various UniFi devices I've scattered around to provide WiFi.

A few months ago the old Motorola/Arris SurfBoard 6141 I've had started playing up. It needed rebooting periodically to maintain the speeds provisioned. It was probably close to 7 years old and even though it's still a supported DOCSIS 3.0 modem the specs are starting to leave a bit to be desired...

I've used the SurfBoard products since I first got cable internet in the late 1990s and have always had good luck with them so I figured why change now and went and bought a new SB8200. The specs seem to be enough of an upgrade that I'll likely get another 6 or 7 years out of it barring fiber to the home finally coming to my area.

While playing around with the new modem I decided that I wanted to monitor the statistics it provided. Sadly it seems that the cable company, probably in response to the various information disclosure vulnerabilities, decided to block the SNMP agent on the device. I'm all for good security practice but it would be nice to provide SNMP on the LAN side at least. Thankfully it still lets you access the web server, so once again Python and BeautifulSoup to the rescue.
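I used BeautifulSoup for the real thing, but the core idea, walking the status page's table cells into rows, can be sketched with nothing but the standard library. The HTML below is a made-up stand-in for the modem's markup, not the actual SB8200 page:

```python
from html.parser import HTMLParser


class TableScraper(HTMLParser):
    '''Collect the text of every <td> cell, grouped into rows.'''
    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._row = []
        elif tag == 'td':
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == 'tr' and self._row:
            self.rows.append(self._row)
        elif tag == 'td':
            self._in_cell = False

    def handle_data(self, data):
        # Only keep text that appears inside a cell.
        if self._in_cell and data.strip():
            self._row.append(data.strip())


html = '''<table>
<tr><td>Channel 1</td><td>Locked</td><td>38.9 dB</td></tr>
<tr><td>Channel 2</td><td>Locked</td><td>39.1 dB</td></tr>
</table>'''

scraper = TableScraper()
scraper.feed(html)
print(scraper.rows)
```

From there it is just a matter of mapping rows to InfluxDB fields and tags, which is what the BeautifulSoup version does against the real page.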

Arris SB8200 Web Interface

I pulled all the data from the web interface into the same InfluxDB and Grafana platform that I have been talking about lately and pulled together the following dashboard.

Arris Modem Statistics Dashboard

This is strictly a layer 2 look at the health of my circuit. There are separate dashboards monitoring my OpenBSD router for layer 3 information. It does give me a good look at what is going on and adds to the toolbox of data that I have available to troubleshoot issues. Now that the better three-quarters is working from home full time this is even more important: while I have always thought of the network here at home as being in production, her income now depends upon it.

I've got to clean up the script a bit but once I do I'll post it in my miscellaneous script git repository if you want to look at it. It probably won't work with other versions of the Arris SurfBoard modems so be warned it won't be a copy and paste job.


December 28, 2018 @10:37

I like metrics. I've been working lately to convert some aging parts of my monitoring and alerting infrastructure over to something slightly more modern (in this case Grafana, InfluxDB, Telegraf and collectd). As part of this project I'm looking broadly at my environment and trying to decide what else to monitor (and what I don't need to monitor anymore). One of the things that came up was website performance.


Now if you go all the way back to my first post on this iteration of the blog you'll see that this website is static. In fact if you dig around a bit you'll see that I render all the pages from Jinja templates via a git post-receive hook so all my webserver is doing is sending you data off disk. Since server side metrics are practically useless in this case I naturally gravitated towards collecting RUM or Real User Metrics.

Privacy Goals

So now we enter what is (for me at least) the slippery slope. Once you decide to ship information back from the user agent to the server you open yourself up to a huge number of options. Do I try to do cute stuff with cookies and LocalStorage to assign a user a unique ID (or maybe do some browser fingerprinting) to track 'visits'? Do I try to gather detailed device information (ostensibly so I can test the site based on what people use)? Do I try to determine how people got to my site by history snooping? The possibilities for data collection in this fashion are pretty intense.

For me, though, the answer is almost entirely a resounding no. As someone with a three-tiered malware / tracking blocking infrastructure I don't want to be part of the problem. I firmly believe that privacy and security go hand in hand on the web, so I refuse to collect any more information than what I need. I also think that this is just plain good security practice. Sadly this is in direct conflict with the urge and ability to just slurp everything into storage somewhere, which seems to be a pretty successful business model (and generally horrific practice) these days.

Realistically I don't think that someone's going to hack my site to get browsing details on the 20 or 30 visitors per day that I get, but I see it as an excuse to lead by example: to show that you can collect useful real user metrics easily, quickly, and with an eye towards privacy.

Browser Side

If you view the source of this page, just before the footer you'll see a script tag loading 'timing.js'. It is pretty self-explanatory: all this does is fire the sendLoadTiming() function once the DOM readyState hits the complete phase. This ensures that all the performance metrics are available in your browser. sendLoadTiming() then simply takes the PerformanceTiming data in your browser, adds the current page location and your user agent string, and sends it along to a Flask application.

Server Side

The Flask application receives the data (a JSON object) from timing.js and is responsible for sanitizing it and sending it off to InfluxDB so I can graph it. The first thing that happens to a request is that I compute 6 timing measurements from the data.

  1. How long DNS resolution took
  2. How long connection setup took
  3. How long the SSL handshake took
  4. How long the HTTP request took
  5. How long the HTTP response took
  6. How long the DOM took to load

I then sanitize the user agent string into browser and platform using the Request object provided by Flask which exposes the underlying Werkzeug useragents helper. Based on the platform name I (crudely) determine if the device is a mobile device or not. Finally I collect the URL, and HTTP version used in the request. All of this information becomes tags on the 6 measurements that get stored in InfluxDB.
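The six measurements above can be derived from the W3C PerformanceTiming attributes the browser reports. The field names below come from the Navigation Timing spec; the exact arithmetic is my sketch of the idea, not necessarily the author's code:

```python
def compute_timings(t):
    '''Return the six durations (in ms) from a PerformanceTiming dict.'''
    timings = {
        'dns': t['domainLookupEnd'] - t['domainLookupStart'],
        'connect': t['connectEnd'] - t['connectStart'],
        'request': t['responseStart'] - t['requestStart'],
        'response': t['responseEnd'] - t['responseStart'],
        'dom': t['domComplete'] - t['domLoading'],
    }
    # secureConnectionStart is 0 when the page was not served over TLS.
    if t.get('secureConnectionStart'):
        timings['ssl'] = t['connectEnd'] - t['secureConnectionStart']
    else:
        timings['ssl'] = 0
    return timings


# Example with made-up millisecond epoch offsets:
sample = {
    'domainLookupStart': 100, 'domainLookupEnd': 120,
    'connectStart': 120, 'secureConnectionStart': 130, 'connectEnd': 160,
    'requestStart': 160, 'responseStart': 200, 'responseEnd': 220,
    'domLoading': 225, 'domComplete': 400,
}
print(compute_timings(sample))
```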



Once the data is in InfluxDB it is a pretty simple task to create a Grafana dashboard. The JSON for the dashboard I use is available here, though it will likely need some customization for your environment.


So far I think it's been a success. Leveraging JavaScript for this has kept all of the automated traffic out of the data set meaning that I'm really only looking at data with people on the other end (which is the point). It has enough information to be useful but not so much as to be a potential invasion of privacy. All of the parts are open source if you want to look at them or use them yourself.

I may add a measurement for DOM interactive time as well as the DOM complete time. It looks like a LOT of people have some strange onLoad crap happening in their browser.

Some strange timing from Android

Also, once the new PerformanceNavigationTiming API is broadly supported (once Safari supports it, at least) I'll need to migrate to that but I don't see that as a major change. I'd love to hear from you if you found this useful or if you have any suggestions on expanding on it.


December 26, 2018 @14:52


Last year I wrote about my favorite podcasts so I figured I'd do the same this year. In no particular order, though I will call out the ones that I'm still listening to first.

Favorite Podcasts 2018

Still Listening

I have about 20 podcasts on my iPhone right now, but more than half are sitting there waiting to be listened to. Digging out of the backlog of some of these podcasts is quite the epic task.

King Falls AM

Though they have been on hiatus between seasons for a good part of the last few months this is still one of the few podcasts that I stay up to date with. It is epic. If you enjoy audio dramas, listen to this. I can't state enough how awesome this podcast is. 📻

Troy Hunt's Weekly Update Podcast

I wrote a good synopsis of this last year, and it all still holds. This is one of my touchstones for the InfoSec community. If you are interested in technology and security at all, it is worth checking Troy out.

Downloadable Content

Penny Arcade has been doing a lot in the last few years, but the behind-the-scenes podcast is still going strong.

Above & Beyond: Group Therapy

Above and Beyond continue to provide the background music for any yard or house work that I end up having to do. Sit back, relax, and enjoy the chill vibes from the electronic music world... or get some work done. 🎧

New this year

Elite: Escape Velocity

From the world of Elite: Dangerous, set in the 33rd century, Escape Velocity is a full-cast audio sci-fi serial by Christopher Jarvis and The Radio Theatre Workshop. I think I blew through this in a weekend. I am anxiously waiting for Series 4.
Even if you aren't an Elite: Dangerous player check this out, I think the story still holds up. 🍸

Jason Scott Talks His Way Out of It

If you don't know Jason Scott I don't think I can do him justice in a short intro. A Free Range Archivist at the venerable Internet Archive, curator of software oddities, documentarian, textfiles enthusiast, DEFCON speaker, and now podcaster. Jason tells lots of stories on a weekly basis in an effort to educate, entertain, and get himself out of debt.


Revolutions

I can't believe I only started listening to this podcast this year. I was a massive The History of Rome fan and Mike Duncan is back at it with his dry wit and silky voice, dropping wonderful narrative history in a weekly(ish) podcast. Focused on the great revolutions of history, this is worth a listen even if you aren't a history buff. I'll also toss in a quick plug for his book The Storm Before the Storm, which I read, listened to (he narrates the Audible version so it is like getting a huge bonus episode of The History of Rome), and loved every minute of.

Risky Business

I stumbled across this at some point this year while hunting for more InfoSec content. It is a really informative and well produced weekly look into what is going on in the information security world. If you are in IT at all, this is worth a listen.

The Adventure Zone

Another one that I can't believe I started listening to this year. I blew through the back catalog in like 3 months and listen to each episode as soon as it comes out. I wrote a dedicated article earlier this year about it so go check that out if you want to know more.

The History of Byzantium

Robin Pierson decided to do something amazing. He took it upon himself to take up the mantle of Mike Duncan and pick up the story of the Romans where The History of Rome left off. Styled as the spiritual successor, Robin has done a flat out amazing job. I'm currently only in the 10th century AD, furiously trying to catch up, but this podcast truly lives up to the creator's intent. If you like history, listen to this.

The Pirate History Podcast

I think pirates are cool. I like to know things about the things that I think are cool. This was a recommendation from a friend and it has been so worth it. Give it a listen and let Matt Albers take you through the golden age of piracy. 🏴‍☠️

The Invisible Network

Last but not least, a podcast I found quite literally two days prior to writing this. I listened to all 6 episodes currently released in one sitting. A really lovely look behind the scenes on how NASA has and continues to work to bring humanity to the stars. Produced by the Space Communications and Navigation program office at NASA this is a must listen for all space geeks out there. 🚀


If a podcast from the 2017 list isn't here it means I've stopped keeping up to date on it this year, most likely because of the vast back catalogs of the new podcasts I have been chewing through, so don't take a podcast's absence as a negative review. I have something like 16GB of unlistened-to episodes on my phone right now that I'm working my way through. The ones that are back this year are something special though.

Hopefully 2019 will be as filled with great podcast content as 2018. 🍻

December 22, 2018 @16:10

Merry Christmas, Happy Hanukkah, Happy Saturnalia, Happy Festivus, Joyous Yule, and congratulations on surviving the solstice to everyone. Be safe and enjoy some time with the people that are important to you this holiday season as the Earth hurtles towards perihelion.

🎄 🎁

Merry Christmas from Bennie and I

December 21, 2018 @09:28

If you have read my previous post about monitoring my ADS-B receiver it probably won't come as a surprise that the impetus for this whole project has been to deprecate MRTG from my environment. MRTG was a fine enough tool when it was basically all we had (though I had rolled a few iterations of a replacement for personal projects over the years), but these days it is woefully dated. The biggest issues lie in the data gathering engine: in even a moderately sized environment you are asking for trouble, dropped polls, and stuck Perl processes. MRTG also fails to provide any information beyond the aggregated traffic statistics.


Years ago I wrote a small script that renders some web pages to display the switchports on the network linked to their MRTG graphs. Each port is enumerated by operational status and description to make it easy to find what you are looking for. It turns out it also makes it pretty easy to throw MRTG out and switch to something else.

I had already settled on Grafana and InfluxDB for a large part of the new monitoring infrastructure with most of the data being collected via collectd running on all my physical and virtual hosts. I am monitoring containers with cAdvisor which also feeds into InfluxDB, so I very much wanted to keep data going into InfluxDB yet I needed something to bridge the gap to the SNMP monitoring that the switches and UPSes in my network require. Enter Telegraf.

My only complaint is that the configuration for the SNMP input module in Telegraf is garbage. It took a bunch of trial and error to figure out the most efficient way to get everything in and working. I do very much like the results though...


Setting up Telegraf as a SNMP agent

There are a number of blog posts kicking around with fragments of information and copy/paste chunks of configuration files but not much in the way of well written documentation. I guess I'll just pile more of the former on.

I deployed Telegraf as a Docker container, though the configuration is largely the same if you deploy directly on a host. I did install all the SNMP MIBs I needed (in Debian, the snmp-mibs-downloader package covered most of them, I added the APC PowerNet MIB for my UPSes and the Synology MIBs for my work NAS) on my Docker host so I could mount them into the container. I pulled the official container and extracted the default configuration file.

docker run --rm telegraf telegraf config > telegraf.conf

With that in hand I set about killing the vast majority of it, leaving only the [agent] section. Since I am only doing SNMP collection the only change I made there was to back the interval off to 120s instead of 10s.
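For reference, the surviving [agent] section ended up looking something like the sketch below. The only deliberate change is the interval; the other keys shown are just the generated defaults, which may differ between Telegraf versions.

```toml
# Sketch of the surviving [agent] section; only interval was changed
# from the generated defaults.
[agent]
  interval = "120s"
  round_interval = true
  flush_interval = "10s"
  omit_hostname = false
```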

I then configured Telegraf to send metrics to InfluxDB:

# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]
  urls = [ "http://influxdb:8086" ]
  database = "telegraf"
  skip_database_creation = true
  username = "[REDACTED]"
  password = "[REDACTED]"

This just left the SNMP input configuration, which I'll break up and describe a bit inline.

[[inputs.snmp]]
  agents = [ "" ]
  community = "[REDACTED]"
  version = 2

This is pretty self-explanatory: the basic information needed to poll the agent. You can pass a list of targets into agents and the same configuration will be used for all of them, and you can have multiple inputs.snmp stanzas.
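For example (hypothetical hostnames), one stanza can poll several devices with shared settings while a second stanza covers a different device family:

```toml
# Hypothetical example: two stanzas, each with its own settings.
[[inputs.snmp]]
  agents = [ "switch1.example.com", "switch2.example.com" ]
  community = "[REDACTED]"
  version = 2

[[inputs.snmp]]
  agents = [ "ups1.example.com" ]
  community = "[REDACTED]"
  version = 1
```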

  [[inputs.snmp.field]]
    name = "hostname"
    oid = "SNMPv2-MIB::sysName.0"
    is_tag = true

This collects the value of the SNMPv2-MIB::sysName.0 OID and makes it available as a tag.

  [[inputs.snmp.table]]
    inherit_tags = [ "hostname" ]
    oid = "IF-MIB::ifXTable"

This is the meat: it walks IF-MIB::ifXTable and collects all the leaf OIDs as metrics, inheriting the hostname tag from above.

    [[inputs.snmp.table.field]]
      name = "ifName"
      oid = "IF-MIB::ifName"
      is_tag = true

    [[inputs.snmp.table.field]]
      name = "ifDescr"
      oid = "IF-MIB::ifDescr"
      is_tag = true

    [[inputs.snmp.table.field]]
      name = "ifAlias"
      oid = "IF-MIB::ifAlias"
      is_tag = true

These specify additional OIDs to use as tags on the metrics. The difference between these and the hostname tag is that they are scoped to the index in the walk of IF-MIB::ifXTable; so if you are looking at index 0 in IF-MIB::ifXTable, it will fetch IF-MIB::ifName.0 and use that.

I put the configuration and a docker-compose file in Puppet, let the agent crank the wheel, and was rewarded with a happy stack of containerized monitoring goodness.
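Putting it together, each interface lands in InfluxDB as a point shaped roughly like this. The host and interface names below are hypothetical, not captured output, and the measurement name follows the walked table unless you override it:

```
ifXTable,hostname=switch1,ifName=Gi1/0/1,ifAlias=uplink ifHCInOctets=123456789i,ifHCOutOctets=987654321i 1543500000000000000
```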

Telegraf, InfluxDB and Grafana in Containers

The compose file is below, but I'll leave the configuration management bits up to you, dear reader.

version: '2'

services:
  telegraf:
    # The middle of this service definition was lost; the image name
    # here is a reasonable guess, the rest is from the original.
    image: telegraf
    environment:
      - MIBDIRS=/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/snmp/mibs/syno
    networks:
      - grafana_backend
    volumes:
      - /var/local/docker/data/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /usr/share/snmp/mibs:/usr/share/snmp/mibs:ro
      - /var/lib/snmp/mibs/iana:/usr/share/snmp/mibs/iana
      - /var/lib/snmp/mibs/ietf:/usr/share/snmp/mibs/ietf

networks:
  grafana_backend:
    name: grafana_backend

Gluing it to Grafana

The last piece was updating the links to the new graphs. Happily, if you set up a variable in a dashboard you can pass it in the URL to the dashboard, so I was able to simply change the URL in the template file and regenerate the page.

Graph Homepage

In my case the new URL was

https://[REDACTED]/grafana/d/[REDACTED]/switch-statistics?var-host=[SWITCH NAME]&var-port=[PORT NAME]

Hopefully this makes it a little clearer if you are trying to achieve a complex SNMP configuration in Telegraf.


December 04, 2018 @11:00

I'm trying to figure out a way to balance the lack of surprise and schadenfreude I have at Tumblr/Verizon's decision to paint all sexual content with the regressive and transparent 'but think of the children' brush. Tumblr grew largely thanks to the alternative and adult communities that found its permissive and accepting nature welcoming. It became what it is today because of the LGBTQ+ and sex worker communities, and now it has decided to break up. Their post paints a pretty picture full of platitudes, inclusiveness, acceptance, and love of community, but it is obvious to the most casual of observers that it is just a sham. Tumblr is breaking up with the people that helped it grow because it is easier than trying to actually make the service a better place.


I feel bad for the users that are being displaced, some of whom I have followed for close to a decade. I admit I do feel a bit like pointing and laughing at the management of Tumblr, who just signed their product's own death warrant, but most of all I feel like this is yet another billboard for retaining control of your community and your content. In an era filled with social media companies promising to help you build communities and share content, it is more important than ever to remember that at the end of the day they all will betray you eventually because you aren't their customer, you are their product. At some point your needs will clash with theirs and they will, without remorse, choose themselves every single time.

Anyone who creates anything on the Internet needs to relentlessly protect their community by ensuring that they have control. I am sure it sounds a bit bizarre to some, but if you are going to use services like social media to engage people (which you basically have to right now) you need to act like any day you could wake up and find them gone. You need to ensure that people can find you again, that your content and community don't just disappear, and that you can move on to whatever comes next.

Tumblr will die, the Internet will move on. In a couple of years it will be another story the olds tell the kids, but hopefully... we learn. In the meantime, I'm firing up grab-site and archiving the people I have followed on Tumblr for the last 10 years. Hopefully we will cross paths again.

To The Archive Mobile!

For creators, find a way to root your community in something you control. Go pay Ghost or a similar host to house your blog. Domain names are cheap these days; I like Gandi, but there are many places that will sell you one. Resist the urge to get a free blog with a free URL; a free subdomain on one platform is no less risky than a free subdomain on the next. It's not expensive or difficult anymore to maintain a presence online where you are the customer. It isn't perfect, but at least when you own the domain name, if you need to change providers your address stays the same and your community can follow you to your new home. Link everything to your blog and link your blog to everything. Make it the clearing house for all that you are doing; make it easy for your community to follow you when the inevitable happens.

For members of communities, and followers of creators, if it isn't clear where to go next, reach out to the creators. Many of them are scrambling to find a place to land or to let all their followers know where else they can be found. If you don't know, ask, and politely suggest they think about creating a place they own to anchor their community if they haven't already.

November 29, 2018 @10:57

I have been running a FlightAware / FlightRadar24 ADS-B feeder for almost 4 years now. It is an older Raspberry Pi B with an RTL-SDR stick running dump1090 at its core. These days it is mounted in my garage with the antenna on the roof. When I built it I stuffed a Honeywell HIH6130 temperature and humidity sensor in the enclosure. At the time it was mounted on a fence in my back yard where it would be in full sun for much of the day, so I hooked it up to Icinga to alert me if it ever got too hot or too wet inside.

asdb-feeder in August 2014

Lately I've been investigating ways to get more information into a central location for my infrastructure as a whole. I have a lot of one-off, largely custom built systems to collect and aggregate system status data. While this has worked for the last 12 years, it is most certainly starting to show its age. At the moment I'm working with a stack that includes collectd, InfluxDB, and Grafana. The latter two run as Docker containers, while the former is deployed by Puppet to all my physical and virtual hosts.

I wanted to pull together some additional monitoring information from the ADS-B feeder to see just how far I can go with this setup. Luckily the dump1090 web interface works by reading JSON files from the receiver daemon, so all the interesting statistics are available on disk to read.

dump1090-fa web interface
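Since the files are plain JSON, a few lines of Python are enough to see the shape of the data. The sample below is made up for illustration; the field names mirror aircraft.json but the values are invented.

```python
import json

# A trimmed-down, made-up sample of what dump1090-fa writes out to
# aircraft.json; field names follow the real file, values are invented.
sample = '''{
    "now": 1543500000.0,
    "messages": 1234567,
    "aircraft": [
        {"hex": "a1b2c3", "alt_baro": 38000},
        {"hex": "d4e5f6", "alt_baro": 12500}
    ]
}'''

stats = json.loads(sample)
print(len(stats["aircraft"]))  # aircraft currently tracked -> 2
print(stats["messages"])       # total messages received -> 1234567
```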

I was able to pull together a quick Python script that loads the JSON and emits the statistics to collectd (which forwards them on to InfluxDB for Grafana to work with). I need to get the script into git somewhere, but for now here is the currently running copy.

#!/usr/bin/env python3
''' (c) 2018 Matthew Ernisse <>
 All Rights Reserved.

Collect statistics from dump1090-fa and send to collectd.  Uses the collectd
Exec plugin.

Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:

    * Redistributions of source code must retain the
      above copyright notice, this list of conditions
      and the following disclaimer.
    * Redistributions in binary form must reproduce
      the above copyright notice, this list of conditions
      and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
'''

import json
import os
import socket
import time

def print_aircraft(stats):
    ''' Parse and emit information from the aircraft.json file. '''
    aircraft = len(stats.get('aircraft', []))
    messages = stats.get('messages')
    if not messages:
        raise ValueError('JSON stats undefined')

    m = "PUTVAL \"{}/dump1090/counter-messages\" interval={} N:{}".format(
        hostname, interval, messages)
    print(m)

    m = "PUTVAL \"{}/dump1090/gauge-aircraft\" interval={} N:{}".format(
        hostname, interval, aircraft)
    print(m)

def print_stats(stats):
    ''' Parse and emit information from the stats.json file. '''
    # The list contents were lost from the original post; these are the
    # counter- and gauge-type keys found in the 'local' section of
    # dump1090-fa's stats.json.
    counters = [
        'samples_processed',
        'samples_dropped',
        'modes',
        'bad',
        'unknown_icao',
    ]

    gauges = [
        'signal',
        'noise',
        'peak_signal',
        'strong_signals',
    ]

    values = stats.get('local')
    if not values or not type(values) == dict:
        raise ValueError('JSON stats undefined')

    for k in counters:
        value = values.get(k)
        if not value:
            value = 'U'

        m = "PUTVAL \"{}/dump1090/counter-{}\" interval={} N:{}".format(
            hostname, k, interval, value)
        print(m)

    for k in gauges:
        value = values.get(k)
        if not value:
            value = 'U'

        m = "PUTVAL \"{}/dump1090/gauge-{}\" interval={} N:{}".format(
            hostname, k, interval, value)
        print(m)

if __name__ == '__main__':
    interval = float(os.environ.get('COLLECTD_INTERVAL', 10))
    hostname = os.environ.get('COLLECTD_HOSTNAME', socket.getfqdn())

    while True:
        with open('/var/run/dump1090-fa/stats.json') as fd:
            stats = json.load(fd)

        stats = stats.get('total')
        print_stats(stats)

        with open('/var/run/dump1090-fa/aircraft.json') as fd:
            stats = json.load(fd)

        print_aircraft(stats)
        time.sleep(interval)


I also wanted to pull the temperature / humidity sensor readings; that ended up being a similarly easy task since I had already written a script for Icinga to use. A quick modification to emit the values the way collectd wants, and that was flowing in. I created a user in the i2c group so the script can use the i2c interface on the Raspberry Pi.

The script currently looks like this.

#!/usr/bin/env python3

import os
import socket
import time
import smbus

def read_sensor():
    ''' Protocol guide:
    '''
    bus = smbus.SMBus(0)
    devid = 0x27

    # writing the device id to the bus triggers a measurement request.
    bus.write_quick(devid)

    # wait for the measurement, section 3.0 says it is usually
    # 36.65mS but roll up to 50 to be sure.
    time.sleep(0.05)

    # data is 4 bytes
    data = bus.read_i2c_block_data(devid, 0)

    # bits 8,7 of the first byte received are the status bits.
    # 00 - normal
    # 01 - stale data
    # 10 - device in command mode
    # 11 - diagnostic mode [ ignore all data ]
    health = (data[0] & 0xC0) >> 6

    # section 4.0
    humidity = (((data[0] & 0x3F) << 8) + data[1]) * 100.0 / 16383.0

    # section 5.0
    tempC = ((data[2] << 6) + ((data[3] & 0xFC) >> 2)) * 165.0 / 16383.0 - 40.0

    return (tempC, humidity)

if __name__ == '__main__':
    interval = float(os.environ.get('COLLECTD_INTERVAL', 10))
    hostname = os.environ.get('COLLECTD_HOSTNAME', socket.getfqdn())

    while True:
        retval = read_sensor()
        print("PUTVAL \"{}/hih6130/gauge-temperature\" interval={} N:{:.2f}".format(
            hostname, interval, retval[0]))

        print("PUTVAL \"{}/hih6130/gauge-humidity\" interval={} N:{:.2f}".format(
            hostname, interval, retval[1]))

        time.sleep(interval)


The collectd plugin configuration is pretty easy: the dump1090 files are readable by nogroup so you can execute that script as nobody, and as I said, I made an i2c user that is a member of the i2c group so the Python SMBus module can communicate with the sensor.

LoadPlugin exec
<Plugin exec>
    Exec "nobody:nogroup" "/usr/bin/"
    Exec "i2c:i2c" "/usr/bin/"
</Plugin>
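Under the Exec plugin, collectd just reads PUTVAL lines from each script's stdout, one per value, along the lines of this illustrative (made-up) example:

```
PUTVAL "adsb-feeder.example.com/dump1090/gauge-aircraft" interval=10 N:12
```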

Once the statistics were flowing into InfluxDB, it was just a matter of putting together a dashboard in Grafana.

Summary Panels

Host Status Panels

SDR Status Panels

The JSON from Grafana for the dashboard is here, though it may require some tweaking to work for you.

So far I'm pretty happy with the way this all went together. I still have a bunch of network equipment that I'd like to bring over and a stack of ancient MRTG graphs to replace. Hopefully it will be a similarly simple experience.


November 26, 2018 @15:30

While I was waiting for new tires to be put on my car today, I was able to watch the landing of Mars InSight, which was relayed via the MarCo A&B experimental interplanetary cube sats.

Mission Control as touchdown was confirmed

Since everything worked so well we even got back a picture from the lander mere moments after landing was confirmed.

Hello, Mars

Congratulations to everyone involved in this mission, I'm excited to see what we learn not only about our friend the red planet but also about the continued feasibility of the cube sat program. Maybe we'll see something like the PlanetLabs Dove cube sat streaking towards Mars someday.

MarCo Relay Animation from NASA/JPL-Caltech

November 25, 2018 @23:40

I know it's not particularly uncommon for web sites these days to drastically change things, and in fact most people consider this a feature. The fail-fast mentality is great and all, except that it means you are often failing more than not, and the general consensus seems to be that it's perfectly acceptable to do so in public, to the detriment of your users.

There are a few patterns that I really wish would die. One of the worst offenders is the single page "app" that hijacks the history navigation controls of your browser to keep you from having to download the 20MB of JavaScript on every page load, making it next to impossible to refresh the page after the inevitable crash of the script somewhere in the dark mess of minified JavaScript source. The other is the continued fad of styling the native browser video controls and overriding the built-in functionality to provide "a consistent look and feel across platforms." This wouldn't be quite so annoying if it didn't almost always break in some unique and fun ways and omit features that the developer didn't have on their personal laptop. I don't think I've seen a single video site that skinned the HTML video element and still provided a native full screen view or picture-in-picture on macOS. I find those features really useful, and Apple worked very hard to optimize them for performance and battery life, so it would be great if people would just leave them alone.

To move from a general rant to something slightly more specific, here are a few examples from the latest YouTube redesign that really drive home the level of amateur hour that keeps infesting the web.

This video was not vertically letter boxed

No, this video was not letter boxed

I know CSS is hard, but come on Google...

I guess even Google doesn't get CSS

Yes, I always wanted to scroll left and right to see my whole video

Why would you ever scroll part of a video off screen?

Insert sad trombone here.

November 07, 2018 @23:30

A little over six and a half years ago I left the Linux as a desktop community for the Mac community. I replaced a Lenovo ThinkPad T500 with an Apple refurbished late 2011 MacBook Pro and honestly have not regretted it.

Over the years I maxed out the memory, went from the 500G SATA HDD to a Crucial 256GB SSD, then put the 500G HDD in the optical bay, then upgraded to a Samsung EVO 512GB SSD with the optical drive back in there. I replaced the keyboard twice, the battery twice, and had the logic board replaced out of warranty for free by Apple thanks to the recall for an issue with the discrete graphics. Through all that it quite happily chugged along and for the most part just worked. Even now it's in better shape than most of my old laptops, but the lid hinge is starting to get weak (it will often just slowly close itself on me), it needs yet another new battery, and the inability to run the latest macOS led me to conclude that it is time to look for an upgrade.

Old Laptops

It ended up being a bit of a challenge to decide on an upgrade, though. I really like the 13" Retina MacBook Pro I have for work, I really like the portability of the MacBook, and the new MacBook Air looks like a great compromise between the two. I fought with myself for quite some time over what would come next for me and finally settled on a 15" Mid-2015 Retina MacBook Pro. Essentially the bigger brother of what I have for work.

Hello, Kitsune

Now I won't blame you if you are wondering why I'd pick a 3 year old laptop over the latest and greatest. In the end it was almost the only choice. I wanted a 15" class laptop because I spend most of my time computing sitting on the couch. The 13" is really nice and portable but it's actually a little too light and a tad too small to comfortably use on my lap. That basically ruled out the lighter and smaller MacBook and MacBook Air. As for the newer 15" MacBook Pro, I almost exclusively use the vi text editor so not having a hardware escape key is just not something I feel I can get used to. I've also heard many people at work who do have the new MacBook Pros complain loudly about the keyboard so that was another nail in the coffin of the new models.

Given all that, the last non-touchbar 15" MacBook Pro is... the Mid 2015. I found a nice example with the 2.5GHz i7 and the Radeon R9 on eBay for a real good price after a few weeks of looking and snapped it up.

Since this is the second Mac I've ever had as my primary workstation it was the first time I got to use Migration Assistant. I have previously used recovery mode to recover from Time Machine which works a treat so I had high expectations. In the end I'd say the experience lived up to them. The only real problem I had seems to be related to how I have my WiFi configured. I use WPA2 Enterprise (so username and password authentication) on my main SSID which I configure using a profile in macOS (which also serves to disable Siri, a bunch of iCloud stuff I don't use, sets up my internal certificate trust settings, and my VPN). Every time I started up Migration Assistant it would drop off the WiFi with no explanation. After flailing around a bit it looks like that was because it couldn't access the authentication information after logging me out to start the migration, so I figured I'd use Ethernet. That would have worked except that the laptop showed up on a Saturday and the only Thunderbolt to Ethernet adapter I own was at the office. Thankfully my guest WiFi uses WPA2 PSK and that actually appears to work just fine.


It took about 4 hours to transfer the 210GB or so of data, but afterwards the new Mac looked just the same as the old Mac. After a quick run through my customization script to catch the few settings that changed in the new version of macOS, the automounter, and the applications I have installed via homebrew, I have not had to go back. Sunday evening I shut off the old laptop. I do plan on re-deploying it as a test workstation if I ever get around to building a dedicated test environment, but for now it is sitting in a drawer under my desk.

It's been a good laptop and this new one has big shoes to fill.

Goodbye, Aramaki


November 06, 2018 @13:40

It's probably too late to change anyone's mind, but I saw a particularly salient tweet come across my feed this morning.


Also particularly poignant for me is this morning's post over at McMansion Hell

Nub Sez Vote

So please, vote. This is basically the bare minimum required of all citizens in this republic other than paying your taxes (maybe). In any case, it is our only chance to directly influence the policies of this nation and is a right that thousands died to secure for us. Your voice counts, but to get it heard you have to show up.

I Voted 2018

🇺🇸🍻 🎉

Subscribe via RSS. Send me a comment.