Matthew Ernisse

July 25, 2018 @20:00

UniFi Security Gateway in the NMS

A couple of days ago I wrote a bit about setting up a new Ubiquiti UniFi Security Gateway, and after living with it for a bit I have a few additional notes.

/config/user-data is preserved through resets

I'm not exactly sure why this happened, but I fat-fingered the JSON and during a provisioning cycle the USG wiped the certificates from /config/auth (where it seems to want you to put them). While rebuilding I noticed that /config/user-data doesn't get wiped. The restore-default command seems to have set -x in it somewhere and emits this:

mernisse@ubnt:~$ set-default
+ cmd=restore-default
+ shift
+ case $cmd in
+ exit_if_fake restore-default
++ uname -a
++ grep mips
+ '[' 'Linux ubnt 3.10.20-UBNT #1 SMP Fri Nov 3 15:45:37 MDT 2017 mips64 GNU/Linux' = '' -o -f /tmp/FAKE ']'
+ exit_if_busy restore-default
+ '[' -f /var/run/system.state ']'
++ cat /var/run/system.state
+ state=ready
+ '[' ready '!=' ready ']'
+ state_lock
+ lockfile /var/run/system.state
+ TEMPFILE=/var/run/system.state.4478
+ LOCKFILE=/var/run/system.state.lock
+ ln /var/run/system.state.4478 /var/run/system.state.lock
+ rm -f /var/run/system.state.4478
+ return 0
+ echo 120
+ echo 3
+ rm -f /config/mgmt
+ apply_restore_default
++ cut -c -8
++ echo 7080 27092 31310 11976 31941
++ /usr/bin/md5sum
+ local R=eb2c7606
+ prune_old_config
+ find / -type d -iname 'w.????????' -exec rm -rf '{}' ';'
+ rm -f /config/config.boot
+ rm -f /config/unifi
+ rm -f /config/auth/ca.crt /config/auth/server.crt /config/auth/server.key
+ mv / /
+ state_unlock
+ /bin/rm -f /var/run/system.state.lock
+ reboot

I made a copy of the certificates for the VPN in /config/user-data to ensure that if this happens again I can simply copy them back into place.
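Since /config/user-data survives a reset, the copy step is easy to script. A minimal sketch (the auth-backup destination directory is my invention; the certificate file names are the ones config.gateway.json references):

```shell
# Stash the IPsec certificates somewhere that survives restore-default.
backup_certs() {
    src="$1"    # normally /config/auth
    dst="$2"    # normally /config/user-data/auth-backup
    mkdir -p "$dst" &&
    cp "$src/ca.crt" "$src/server.crt" "$src/server.key" "$dst/"
}

# e.g. backup_certs /config/auth /config/user-data/auth-backup
```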

You can load a local JSON config file

The core of the UniFi system is the integration with the NMS; without it, the USG would just be an EdgeRouter Lite. The provisioning process merges the controller's configuration with your config.gateway.json file and sends the result to the device. The downside is that you can't just push the JSON down to the USG yourself; you need the entire rendered payload. Luckily you do have access to the underlying commands to import and export the configuration.

Once you have the USG up and working you can back up the JSON from the ssh console by running:

mca-ctrl -t dump-cfg > /config/user-data/backup.json

If for some reason the configuration gets messed up and you can no longer talk to the controller because the VPN is down you can simply reload it with:

mca-ctrl -t apply -c /config/user-data/backup.json

All in all I'm still happy with it minus two things that I've sent to Ubiquiti using their feedback form:

  1. I would really like to be able to put the PEM-encoded certificates in config.gateway.json. This would certainly help if you need to reload the device.
  2. I would like a checkbox to bridge eth1 and eth2. Almost everything at the office is wireless, but I do have a Synology NAS that I want wired. Thankfully the UniFi UAP-AC-IW there has a built-in 2-port switch, but if I wanted to use a different AP it would be really handy to be able to easily use the WAN 2 port as a switched LAN port.

🍺 👍

July 20, 2018 @16:45


I have several physical locations linked together with VPN tunnels. The central VPN server runs OpenBSD with iked(8). I also have several roaming clients (iOS and macOS) that terminate client access tunnels on this system, so I am loath to make breaking changes to it. The site-to-site tunnels run a gif(8) tunnel in IP-over-IP mode to provide a layer 3 routable interface on top of the IKEv2 tunnel. My internal tunnels run ospfd(8) and ospf6d(8) to exchange routes, and my external site-to-site tunnels run bgpd(8). Most of my internal sites use OpenBSD as endpoints, so configuration is painfully simple; however, at my office at work I have been using a MikroTik RouterBoard RB951-2HnD. It has worked well enough, but lately it has been showing its age, randomly requiring manual intervention to re-establish tunnels and flirting with periods of unexplained high latency.

Old Work Network


This is not meant to be a comprehensive HOWTO. I doubt your setup will be close enough to mine to translate directly but hopefully you will find some useful information since this isn't a particularly well documented use case for the Ubiquiti UniFi USG product line.

It is also worth noting that under the covers the USG runs the same EdgeOS as their EdgeRouter line of products, with the caveat that the controller will overwrite the configuration any time it provisions the device. Fortunately Ubiquiti has foreseen this and lets you supply advanced configuration via a JSON file on the controller.

I manage all of my sites from a centralized UniFi controller instance, so I need the VPN to work before I can swap out the RouterBoard for the USG. This is an overview of how I did that.


Since I already had a working VPN tunnel at the site, I had all the X.509 certificates and IP addresses needed to configure the new router. Starting at home, where the controller is located, I plugged the USG WAN port into my LAN and connected my laptop to the USG LAN port. I was able to adopt the gateway into the controller with no trouble.

I fiddled around with the config until I got it working and stuffed the changes into the config.gateway.json file. Finally I blew the device away and forgot it from the controller. At this point it is important to reload the certificates onto the factory-defaulted router (put them in /config/auth) before re-adopting the gateway in the controller: if it cannot find the files, the gateway will go into a reboot loop, much the same as if you had typoed the config.gateway.json file. Once the certificates were loaded, I re-adopted the gateway and the configuration was applied.

I was then able to take it into work and swap out the MikroTik.


I will simply annotate the config.gateway.json file inline to explain how this all ended up going together.

    "service": {
        "dns": {
            "forwarding": {
                "options": [

This sets the DNS domain name handed out by the gateway; not strictly needed in this context, but handy.

        "nat": {
            "rule": {
                "6004": {
                    "description": "VPN Link Local NAT",
                    "destination": {
                        "address": "!"
                    "log": "disable",
                    "outbound-interface": "tun0",
                    "outside-address": {
                        "address": ""
                    "source": {
                        "address": ""
                    "type": "source"

NAT any traffic coming from the tunnel or IPSec endpoint addresses to the canonical address of the router. This prevents local daemons from selecting the wrong source IP (most frequently done by syslogd).

    "interfaces": {
        "loopback": {
            "lo": {
                "address": [

This is the IPsec endpoint. I use policy-based IPsec, so this address needs to exist somewhere for the traffic to get picked up by the kernel and sent across the tunnel.

        "tunnel": {
            "tun0": {
                "address": [
                "description": "ub3rgeek vpn",
                "encapsulation": "ipip",
                "ip": {
                    "ospf": {
                        "network": "point-to-point"
                "local-ip": "",
                "mtu": "1420",
                "multicast": "enable",
                "remote-ip": "",
                "ttl": "255"

This sets up the IP-over-IP tunnel. Note that I could not, for the life of me, get the OSPF session to come up using my normal /32-addressed tunnel, so I switched to a /30; after that, OSPF came right up. If you debug ospf events and see complaints that the peer address of tun0 is not an OSPF address, you might be hitting this too.
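For reference, the same tunnel in EdgeOS CLI terms looks roughly like this. The 10.0.0.0/30 addressing is made up for illustration (the real values are elided above); the set paths map one-for-one onto the JSON keys.

```
configure
set interfaces tunnel tun0 encapsulation ipip
set interfaces tunnel tun0 address
set interfaces tunnel tun0 ip ospf network point-to-point
commit
```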

    "protocols": {
        "ospf": {
            "area": {
                "": {
                    "network": [
            "parameters": {
                "abr-type": "cisco",
                "router-id": ""
            "passive-interface": [

This is rather straightforward: I'm redistributing the local networks and the tunnel address in a pretty simple OSPF configuration. Since I have no other routers on the Ethernet side of things, I set both interfaces to passive.

    "vpn": {
        "ipsec": {
            "auto-firewall-nat-exclude": "enable",
            "esp-group": {
                "ub3rgeek": {
                    "compression": "disable",
                    "lifetime": "3600",
                    "mode": "tunnel",
                    "pfs": "dh-group14",
                    "proposal": {
                        "1": {
                            "encryption": "aes256",
                            "hash": "sha256"
            "ike-group": {
                "ub3rgeek": {
                    "ikev2-reauth": "no",
                    "key-exchange": "ikev2",
                    "lifetime": "28800",
                    "proposal": {
                        "1": {
                            "dh-group": "14",
                            "encryption": "aes256",
                            "hash": "sha256"
            "site-to-site": {
                "peer": {
                    "": {
                        "authentication": {
                            "id": "",
                            "mode": "x509",
                            "remote-id": "",
                            "x509": {
                                "ca-cert-file": "/config/auth/ca.crt",
                                "cert-file": "/config/auth/server.crt",
                                "key": {
                                    "file": "/config/auth/server.key"
                        "connection-type": "initiate",
                        "ike-group": "ub3rgeek",
                        "ikev2-reauth": "inherit",
                        "local-address": "default",
                        "tunnel": {
                            "0": {
                                "allow-nat-networks": "disable",
                                "allow-public-networks": "disable",
                                "esp-group": "ub3rgeek",
                                "local": {
                                    "prefix": ""
                                "protocol": "all",
                                "remote": {
                                    "prefix": ""

This is the real meat and potatoes of the configuration. It corresponds to the following configuration on the OpenBSD side of things.

ikev2 "work" passive esp \
        from to \
        peer $PEER_WORK \
        ikesa enc aes-256 \
                auth hmac-sha2-256 \
                prf hmac-sha2-256 \
                group modp2048 \
        childsa enc aes-256 \
                auth hmac-sha2-256 \
                group modp2048 \
        srcid dstid \
        lifetime 360m bytes 32g


In the end I am very happy with the whole thing. The USG is pretty slick, and for simple configurations I imagine it is super easy to get going. Other than the lack of documentation for some of the things that aren't exposed in the controller UI, it was not too hard to figure out. If you are stuck trying to figure out the CLI, you might want to explore the documentation for EdgeOS or Vyatta (the upstream open source project EdgeOS is based on); I found both helpful.

New Work Network


July 12, 2018 @20:47

I enabled HTTPS on this website just under a year ago. If you follow my blog you know that this is a static website, and since there appears to be a bit of an uproar in the web community over HTTPS right now I figured I'd simply weigh in.

Do you need HTTPS for your website?


There are lots of good reasons for this, and not many reasons not to do it, but the point that resonates most with me is not the risk to your website but the risk to the Internet at large. Actors (both malicious and benign) can inject content into any site served over HTTP and cause the web browsers of its visitors to do... essentially whatever they want. This doesn't have to be targeted at your site; anyone in the middle can simply target ALL HTTP traffic out there, regardless of the content.

This isn't a user agent (browser) problem, and it isn't a server problem: anyone with access to ANY part of the network between the server and the user agent can inject anything they want without the authenticity provided by TLS.

HTTPS is easy, and for most sites it is free. It also enables HTTP/2, which is faster, even for static sites like this one. Really, it is. If you aren't convinced, let me also point you at Troy Hunt's excellent demo of what people can do to your static website.

April 06, 2018 @14:30

I had occasion today to install some updates on one of my macOS systems and found myself inconvenienced by a number of applications adding a pile of dock icons without asking. I don't keep much in the dock on my systems, preferring clover+space to launch applications, and I don't think I have touched the dock layout in literally years at this point, so I went searching for a solution.

Clean Dock

From chflags(1), the 'schg' flag makes a file system-immutable, meaning not even the super-user (root) can alter it.

A quick cleanup of my dock and chflags schg on ~/Library/Preferences/ seems to have prevented further changes by installers.

You will have to chflags noschg the plist file to make any changes to the dock stick in the future.
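For reference, the dock layout lives in the dock preferences plist; the post elides the file name, but on current macOS versions it is The sequence is roughly this (macOS only, and setting schg, the system immutable flag, may require sudo):

```
cd ~/Library/Preferences
chflags schg       # lock it down
ls -lO    # should show 'schg' among the flags
chflags noschg     # unlock before making intentional changes
```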

March 30, 2018 @10:06

I spent a few hours this week taking a break from Surviving Mars (which scratches the same itch that Sim City / Sim Tower scratch for me) and finally got around to playing VA-11 HALL-A. I really like this kind of game: a mechanically simple, story-driven world with interesting characters and design.

Jill gets a drink...

The game is pretty simple but tells an interesting and nuanced set of stories in an approachable 10 hour playthrough. The music is really good and there are some genuinely heartfelt and hilarious moments throughout.

I think I snagged this on a Steam sale -- well worth it.

March 24, 2018 @16:41

At the end of my post on how Docker and Flask are helping me sleep at night, I mentioned offhandedly a potential use case for an iOS share extension. I finally started working on that idea.

iOS Share Extension Picker

In general I have to say I'm pleased with the iOS development workflow. The APIs seem fairly robust, and Swift 4 has clearly had a lot of thought and work put into making it much more accessible and readable than Objective-C, which I always felt was... a bit overloaded with punctuation. I feel like Xcode is the real boat anchor on the whole process, most specifically the interface building tools. I found myself dreading having to work on the UI portion of my app; there are so many quirks in manipulating the various constraints and connections between UI elements and code that it just... hurt.

Getting my head back around the event-driven nature of GUI programming took a little while, and combined with the nuances of Apple's GCD threading model, which their application frameworks make heavy use of, it does feel quite different from the mostly web-based programming I have done recently. The only other irritating piece isn't strictly speaking Apple's fault: Swift is a strongly typed language and JavaScript isn't, which lends itself to some machinations converting one to the other (my API uses JSON for communication). Given the popularity of JSON for web APIs, I do feel this should have been more of a solved problem. I ended up using SwiftyJSON, but I still pine for the ease of Python's json.loads and json.dumps methods.

So the app started out as little more than a settings page and a share extension, which technically ticked the boxes I originally set out to tick, but once I had gotten there scope creep set in. I justified it to myself by saying that since I was trying to reduce the back-and-forth between tabs in Safari to start a job, I could just go a few steps further and put all of the track and container management into the app as well.

Original ContainerStatusViewController

Honestly it didn't take too terribly long to do. I initially used FRadioPlayer to handle the playback but after having to make a bunch of changes to suit my needs I decided to simply re-implement it. As a bonus it ended up about half the size, since I'm not playing a live stream.

New TrackTableViewController

It does really make me sad that Apple has chosen to completely ignore the open source community in their development programs. I don't have a way to distribute the resulting application other than as an Xcode project. To use it you will have to get a copy of the project from the git repository and then build and install it from a Mac running Xcode; I can't just give you something you can drop into iTunes. In a way I empathize with the desire to prevent random malware by requiring signed bundles to come from the App Store, but not giving the user the ability to install from alternative repositories means that there really isn't a good way to distribute free software on the platform.


Of course it's not like this will be useful without the rest of the infrastructure, so you'll need the Flask application, the Docker containers, and all of the requisite configuration to make all those things talk. Also be aware that iOS will refuse to make a non-HTTPS connection and will also refuse to verify a CA or certificate that has a SHA-1 hash anywhere in the trust chain (all very good things).

Lock Screen Now Playing

So far it has been an interesting journey down the rabbit hole. There is a lot of maturity on the Apple side of things. Containers certainly have their uses though I'm still afraid of people who think they are a good way to distribute software outside of their organizations. I still think that for most things native applications are at best a step back on the client side. There is a lot less code to make the website part go versus what is required to make the iOS app go and if the Share Extension was exposed via JavaScript I never would have needed to write an app in the first place. 🍺

February 06, 2018 @12:56

The Background

Right out of the gate I'll admit that my implementation is a bit naive, but it is, if nothing else, an example of what can be accomplished with a little bit of work. In general my microcontroller development workflow has been tied to a particular system, largely using vendor-supplied tools like MPLAB X or Atmel Studio. This is usually OK, as I need physical access to either the prototype or production hardware for testing and verification. From a high level it generally looks like this:

Three Swans Inn Lighting Controller

Lately I have switched most of my Atmel development away from their IDE and just use a Makefile to build and flash chips. This lets me write and build the code from anywhere, as it is all stored in a git repository checked out on a system I can access remotely. The build toolchains are a bit obtuse, though, and keeping all the moving parts around so I can at least ensure the code compiles has been a challenge.
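A minimal sketch of what such a Makefile might look like for the AVR parts. The MCU, programmer, and file names here are my assumptions for illustration, not the author's actual build:

```make
# Hypothetical avr-gcc Makefile of the kind described above.
MCU     ?= atmega328p
CC       = avr-gcc
OBJCOPY  = avr-objcopy
CFLAGS   = -mmcu=$(MCU) -Os -Wall

firmware.hex: firmware.elf
	$(OBJCOPY) -O ihex -R .eeprom $< $@

firmware.elf: main.c
	$(CC) $(CFLAGS) -o $@ $<

# 'make flash' programs the chip; the avrdude flags depend on the
# programmer actually in use.
flash: firmware.hex
	avrdude -p $(MCU) -c usbasp -U flash:w:$<

.PHONY: flash
```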

The Goal

So, containers! I made a couple of containers with the Atmel and Microchip toolchains in them, and with a little glue I was able to connect the post-receive hook of my git repository to my Docker instance to produce on-commit builds of the appropriate project.

Part of the glue stems from the fact that I keep all the firmware source in a single git repository for ease of maintenance, so the hook tries to determine what changed in order to avoid rebuilding everything at once. I also target several different microcontrollers, so there is a little logic in the hook to launch the right container for the right code.

The Hook

I snagged most of this from the post-receive hook that handles the deployment of this website. The biggest change was detecting which project within the repository needs to be built.

#!/bin/sh
# microcode post-receive hook
# (c) 2018 Matthew J. Ernisse <>
# All Rights Reserved.

set -e

GIT_DIR=$(git rev-parse --git-dir 2>/dev/null)

try_container() {
    # Map project directory -> builder wrapper to invoke.
    local mapping="\
        led-timer:atmelbuilder \
        led-strand:atmelbuilder \
        bar-lighting:atmelbuilder \
        led-gadget:microchipbuilder \
    "

    if [ -z "$1" ]; then
        echo "usage: try_container project"
        return 1
    fi

    for x in $mapping; do
        if [ "$1" = "${x%%:*}" ]; then
            "${x##*:}" "$1" "$REV" "$GIT_AUTHOR"
        fi
    done
}

if [ -z "$GIT_DIR" ]; then
    echo >&2 "fatal: post-receive GIT_DIR not set"
    exit 1
fi

while read oldrev newrev refname; do
    REV="$newrev"
    GIT_BRANCH="$refname"

    GIT_AUTHOR=$(git show --format='%ae' --no-patch $REV)

    # Collect the project directories touched by this commit.
    for fn in $(git diff-tree --no-commit-id --name-only -r $REV); do
        PROJECTS="$PROJECTS $(dirname $fn)"
    done

    # Only build commits on master.
    if [ ! "$GIT_BRANCH" = "refs/heads/master" ]; then
        continue
    fi

    for project in $PROJECTS; do
        try_container "$project"
    done
done
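The project-to-builder mapping relies on POSIX parameter expansion to split each `project:builder` pair. Isolated from the hook, the trick looks like this (builder names taken from the mapping above):

```shell
# Each entry is "project:builder"; ${x%%:*} keeps the part before
# the colon, ${x##*:} keeps the part after it.
lookup_builder() {
    mapping="led-timer:atmelbuilder led-gadget:microchipbuilder"
    for x in $mapping; do
        if [ "$1" = "${x%%:*}" ]; then
            echo "${x##*:}"
            return 0
        fi
    done
    return 1
}

lookup_builder led-gadget   # prints "microchipbuilder"
```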

The Container Launcher

This is basically a stripped down version of the container module from my Flask youtubedown front end.

#!/usr/bin/env python3
''' (c) 2018 Matthew J. Ernisse <>
All Rights Reserved.

Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:

    * Redistributions of source code must retain the
      above copyright notice, this list of conditions
      and the following disclaimer.
    * Redistributions in binary form must reproduce
      the above copyright notice, this list of conditions
      and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
'''
import docker
import os
import sys

# Configuration.  The real values are elided; these are placeholders.
DOCKER_HOST = "tcp://docker.example.com:2376"
DOCKER_REPOSITORY = "registry.example.com"
TLS_BASE = os.path.expanduser("~/.docker")


def run_container(container_name, image, args):
    ''' Execute a container. '''
    tls_config = docker.tls.TLSConfig(
        client_cert=(
            os.path.join(TLS_BASE, "cert.pem"),
            os.path.join(TLS_BASE, "key.pem")),
        ca_cert=os.path.join(TLS_BASE, "ca.pem"))

    client = docker.DockerClient(
        base_url=DOCKER_HOST,
        tls=tls_config)

    client.containers.run(
        image,
        args,
        name=container_name,
        detach=True,
        volumes={
            '/var/www/autobuild': {
                'bind': '/output',
                'mode': 'rw',
            },
        })


def usage():
    print("usage: {} image_name project_name git_rev author".format(
        os.path.basename(sys.argv[0])))
    sys.exit(1)


if __name__ == "__main__":
    if not len(sys.argv) == 5:
        print("Invalid number of arguments.", file=sys.stderr)
        usage()

    builder = sys.argv[1]
    project = sys.argv[2]
    git_rev = sys.argv[3]
    author = sys.argv[4]

    container_name = "{}-builder--{}--{}".format(
        project, git_rev, author)

    image = "{}/{}:latest".format(DOCKER_REPOSITORY, builder)

    try:
        print("*** Running {}...".format(image))
        run_container(container_name, image, project)
    except Exception as e:
        print("!!! Failed to start container: {}".format(str(e)))
        sys.exit(1)


A short while after I push a commit, a container runs, builds the project, and emits an archive containing the relevant .hex files for the microcontroller as well as a log of the process. I still have some work to do on the Microchip Makefile, but for the most part this makes things a lot easier. I can develop from any workstation as long as I have the programming utilities for the chips, and even if I don't, I can at least ensure the code builds every time I commit. The plumbing is pretty generic, so I'm sure I'll find other places to use it; for example, I was thinking I should find a way to build and push the Docker images to my private registry on commit.

February 01, 2018 @11:13

I was headed back from the California Nebula last night in Elite: Dangerous to try to sneak in a few runs on the just finished community goal in the Wangal system. It was a little over 1000ly worth of travel... about 71 jumps in the old Type-6 to make it out of California Sector BV-Y c7. I had just found a non-human signal source in Aries Dark Region IM-V c2-15 and poked around a bit at the wreckage.

This looks ominous

One jump later, leaving Aries Dark Region QY-Q b5-4 I was yanked out of witchspace into Aries Dark Region VE-P b6-3. Sadly I ended up facing away from the Thargoid in a completely disabled ship and couldn't get turned around until it was about to jump away, so no chance to use the fancy new Xeno scanner.


Stay safe out there cmdrs.


EDSM route data

January 10, 2018 @10:37

YouTube Dead Job Running

I don't want to start out by bitching about yet another crappy Internet thing, but I think I have to. YouTube Red is a play from YouTube to try to get into the paid streaming business, and one of the 'features' they tout is the ability to play videos in the background on mobile devices... something that totally worked JUST FINE before.

This is a dumpster on fire

Over the last year or so I figured out a rather complex work-around for this on the iPad.


  1. go to 'desktop mode' in the browser,
  2. hit the PiP button, slide the video off the screen
  3. wait a few seconds
  4. lock the device.
  5. playback will pause a few seconds later
  6. hit play from the lock screen

If you did it right the iPad goes back to making noise, if you screwed up the process or the timing the nasty JavaScript takes notice and stops playback (causing the 'playing' UI on the lock screen to go away either when you lock or when you hit play from the lock screen). Since this needs PiP it doesn't work on the iPhone. 😦

Old Man Yells at Cloud

Doing this dance is annoying and yet from time to time I like to listen to random music mixes as I'm falling asleep so I have put up with it this far. (As an aside lately I've been listening to Mike Duncan's Revolutions podcast before bed. He did The History of Rome which I also loved, so check it out). Always on the look-out for reasons to wield a big hammer at these kinds of problems I started thinking about potential solutions and realized that I had been doing a different manual dance to get tracks loaded on my phone.


That dance looks like:

  1. youtubedown the video.
  2. ffmpeg to strip the video track out and leave the audio.
  3. copy into iTunes, cry, flail, gnash teeth
  4. ???
  5. have audio on my phone... somewhere

There has to be a better way... right? Obv.

It turns out tossing ffmpeg and youtubedown into a Docker container was stupidly easy, so that gives me a nice way to automatically do 1 and 2 above. Cool. Now the trick was how to make this all just happen from the phone, so I needed some sort of interface. I happen to have a bunch of boilerplate Flask code laying around from other projects that leans on Bootstrap, so I dusted that off and started plugging away.
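The worker image can be as simple as a Dockerfile along these lines. This is a sketch: the base image and the way youtubedown (jwz's perl script) gets into the image are my guesses, not the actual build:

```dockerfile
FROM debian:stretch-slim
# ffmpeg strips the video track; perl runs youtubedown, which is
# assumed to be present in the build context.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ffmpeg perl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
COPY youtubedown /usr/local/bin/youtubedown
RUN chmod +x /usr/local/bin/youtubedown
WORKDIR /output
ENTRYPOINT ["/usr/local/bin/youtubedown"]
```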

To take a quick step back, it is useful to know that most of my Internet facing infrastructure runs in a colocation facility. All of my internal stuff then connects to the colo via VPN and through the magic of OSPF and BGP it ensures that all the IPv4 and IPv6 traffic for 'my things' crosses only those VPN links. Almost all of the time this is a good thing. In fact this is the basis of how I ad-block and content filter all my devices including my iPhone and iPad. In this case though having to traverse the VPN and therefore my colocation provider's uplink twice isn't really useful for streaming audio that originally came from the Internet. I do have some servers at home though so I tossed Docker on one of the VMs with the idea that external requests can proxy over the VPN but if I am in bed I just have to traverse my WiFi. Sweet.

After a weekend of what felt like mostly fighting with JavaScript, I came up with YouTube Dead.

How this now works:

  1. find video I want to listen to.
  2. copy URL
  3. paste URL
  4. press start
  5. listen

Starting a job

Being able to launch the worker containers close to where their output will be consumed is a win for me. It solved both problems without the typical 'you now have n+1 problems' fallout. The Flask app uses a worker thread to watch the container, and if it finishes successfully it stores the metadata so I can listen to previously downloaded tracks with the click of a link. It would be trivial to detect the location of the user request and launch the container at any of my sites, keeping the data closest to the user requesting it. It would also be pretty trivial to extend this model to basically anything I can shove into a container that I might want to trigger from the web. Next, though, I think I'll start earnestly looking at the dumpster fire that is Apple iOS development to see if I can put together a share extension to eliminate #2, #3, and #4 in the list. 🍺 🐳

January 02, 2018 @11:36

It seems like the blog is turning into an alternating stream of screaming about things Apple is doing wrong and gushing about how great Ubiquiti's UniFi product line is... I have a backlog of ideas for things to write about other than those; it just seems like life keeps getting in the way, and out the other end either a rant or praise naturally flows.

I suppose it is also easiest to write about the things that have most recently consumed a few hours of your life. I'd write about how I just re-wrote the entire website generation code in Jinja2 and Python 3, but that's not really all that interesting as it was basically a drop-in replacement.

So rolling back to things that I have worked with recently: you might remember this post from just before the holidays, wherein I fought a bit with the two UniFi softwares to get them to use the same SSL certificate. I also hinted that this was coming over here, where I talked a bit about the experience of extending my UniFi WiFi network infrastructure to my office at work.

I bought the UVC-G3 camera in the same order as the newest AP, with plans of mounting it on my garage. If you saw my original post on setting up UniFi in the first place, you may have seen on the map view that I have a detached garage. Having a view of the driveway, sidewalk, yard, and a bit of the front is certainly useful, but this is also the most challenging location where I intend to have a camera. Currently the uplink is over the WiFi connection between the garage and basement APs, and if you have been following the weather, it has averaged about 9°F up here; for an unheated and uninsulated garage, this is the most environmentally difficult spot I've got.

cam01 in place


I'm happy to report that the initial setup is very similar to the WiFi products. The controller software (apparently via a UDP broadcast) sees the cameras as they come up on the network and gives you the option to 'manage' them. As an alternative you can manually configure a camera to connect to your controller if they aren't on the same layer 2 network segment, or simply use the camera as a standalone RTSP server. In managed mode the whole process is very similar to adopting a UniFi access point. Once the camera is managed, the controller will push any new firmware it needs along with some base configuration, rebooting the camera a few times. Once that has settled down you can move on to the rest of the setup for the camera(s). The configuration is pretty slick and easy: you end up with two tabs to go through, and in most cases the defaults are sane.

UniFi Video Camera Setup


There are a couple of options for recording, as you might expect in NVR software. You can record always, never, on a schedule, or on motion. You also have a few options for retention: time-based, space-based, or both. This ends up being pretty powerful, and again the defaults are reasonably sane.

UniFi Video Recording Setup

The most amazing feature that is bundled in the NVR software (for free, without any cloud nonsense, and without any strings attached other than needing to buy their otherwise very good cameras I might add...) is the motion based recording. From the camera configuration screen you can configure your motion detection zone. Once you hit configure you are presented with a live view from the camera that you can draw a boundary box on.

UniFi Video Motion Zone Setup

After adjusting the border of the area you can smack 'test zone' and... more awesomeness happens. The zone border disappears and you get the same live image, but now detected motion is overlaid in red and a nice histogram appears showing the trigger threshold versus the amount of motion in the frame (red is exceeding the threshold, green is not). This lets you fine-tune the motion trigger sensitivity and hopefully keep false positives low.

UniFi Video Motion Zone Test

Once you are happy with your new camera settings you can tell the software to alert you when a recording is triggered, and you will be presented with a nice e-mail with a snap of the frame that triggered the event.

UniFi Video Alert Mail

Software Review

So the setup process was reasonably painless. The software installed about as easily as the WiFi software and configuration was almost alarmingly easy. It has been almost a month since I got this all up and running and I have to say it has been basically hands off. The iOS mobile application works great, and thanks to the power of VPN I can watch live video and recordings from just about anywhere without exposing any of this to the Internet at large. The camera streams h.264 and uses a little over 1 Mbps of network bandwidth. So far there have been some hiccups in connectivity thanks to the weather and the WiFi link, but nothing major and nothing lasting more than a few seconds.

Traffic Graph For Garage

Hardware Review

The camera itself is really quite nice. It feels solid and comes with a very versatile mounting system. It was easy to aim and secure and has held up without complaint to our delightful weather thus far. The only irritation is that unlike most of the WiFi products, the cameras are still supplied with 24V 'Passive PoE'. The garage switch does have 802.3af PoE, but I still have to use an inline injector. Not a huge deal here, but I have some other locations where I'd really like to power the camera from the local switch without more hardware in the line. There does appear to be a SKU for 802.3af-capable UVC-G3 cameras but I can't actually find anyone selling them yet. Perhaps in the near future they will appear and my only hardware gripe will go away. (fingers crossed)


So, tl;dr? Ok. This is just as rad as the WiFi. If you are in the market for a slightly more complex than 'consumer grade', powerful, and most assuredly not cloud connected surveillance solution, give this a serious look. You might be surprised. I sure was.

Edited: December 30, 2017 @14:10

Seriously, It Isn't a Problem

There has been a bunch of discussion around the 'revelation' that a software update to the iPhone was purposefully slowing older phones. I believe they should have been more transparent with users about what was happening, perhaps even adopting the UI from the MacBook for when the battery has aged and requires replacement (I had to do this about a year ago on my 2011 MacBook Pro; macOS will toss a little ! by the battery icon and of course System Report will give you further information).

macOS Battery Info

Sadly on this front Apple opted for a pretty inconspicuous note in their release notes for the iOS 10.2.1 update...

iOS 10.2.1 Release Notes

I don't see any of this as being a problem. Lithium cells age in charge/discharge cycles. The chemistry of the cell changes slightly as energy is pulled out of and pushed back into it, and this change is irreversible. Most manufacturers rate their cells in the 300 to 500 cycle range, after which it is typical to have lost 20% of the original capacity. One of the things that happens as a cell ages is that its internal resistance increases, meaning it becomes harder to get energy into and out of the battery.

If we do a little back of the napkin math, this all seems very reasonable. If you charge your phone nightly from 50% (low for me, high for a lot of other people I know who always seem to be in the red at the end of the day) then you put about 182 cycles on the battery per year. At that rate you hit 500 cycles in under 3 years. At the time of writing the iPhone 6 is over 3 years old, the 6S is a little over 2, and the 7 (which I have) is a little over a year old.

There is also some evidence that the harder you work the phone, the higher it will drive the internal resistance of the cell over its lifespan, which might be what led Apple to throttle the CPU speeds on aged phones. The software only appears to throttle phones as battery capacity drops, so the performance of the device can be restored by simply replacing the aged battery.

Which brings me nicely to the real point of this.

Non-Replaceable Batteries ARE A Problem

If Apple had never decided to go with a non user serviceable battery then this never would have been a problem. Battery getting older? No problem! The thing is, I can't lay all the blame for this at the feet of Apple: EVERYONE is doing this now. There is nary a flagship device on the market that lets you pull the battery out. Even my previous phones, the oft-scoffed-at BlackBerry Passport and BlackBerry Classic, had non-removable batteries. It is understandable that not having to accommodate a removable battery makes design and construction of the phone easier, means fewer parts to manufacture and assemble, and can certainly lead to smaller and lighter devices, but I believe we have reached the point where the devices are small and light enough. With the resurgence of the larger phone and 'phablet' form factors, surely you can take the hit in the profit margin to put a replaceable battery in a $1000 device... right?

On the bright side it seems that (if you trust a reddit post) Apple charges a fairly nominal fee to replace the battery in your phone. Honestly it is about what the battery would probably cost you retail, but I can't help but feel like this whole thing could have been avoided if they had just made the battery removable.


I think Apple is doing the right thing. Bullet 2 in the article should really have been a no-brainer in the first place, but it is good seeing them recognize that some things you can't just hide behind the UI and hand waving. I still would really like this trend of non user serviceable batteries to die in a fire though.

December 22, 2017 @11:45

Merry Christmas to everyone. Be safe and enjoy some time with the people that are important to you.

🎄 🎁

Merry Christmas from Bennie and I

December 18, 2017 @20:48

I run UniFi to manage my various Ubiquiti access points, now across multiple sites, and I try to set everything up HTTPS-only with certificates signed by my internal CA. I followed the instructions provided by Ubiquiti for UniFi back when I installed it.

Recently I added UniFi Video into the mix and am running that application on the same VM as UniFi (yeah, the names of the applications are a bit confusing), so I wanted to use the same certificate since the hostname and IP are the same.

The problem is that in the Ubiquiti documentation you use the Java keystore to create a CSR and sign it. This means you never get the private key, so you can't import the resulting certificate into a different keystore. You can, however, import a keystore entry into another keystore, so this is how I worked around the lack of a private key.


If all you want to do is use a custom certificate with UniFi Video and not copy the certificate from UniFi you can look here, which are the instructions that I based the installation phase of this procedure on.


I have the software installed on a VM running Debian 8, with the following versions of the Ubiquiti software installed from their apt repositories. The process should be similar for other distributions and versions, but the paths are likely to be different so go poking around before trying this.

> dpkg -l unifi\* | awk '/^ii/ { printf "%s - %s\n", $2, $3 }'
unifi - 5.6.22-10205
unifi-video - 3.8.5


Since I use Puppet for configuration management, I built the VM using my normal Debian PXEBoot installer which automagically configures the new system with Puppet as a postinst task. The entire manifest set will configure all the base things (auto-updates, Icinga monitoring, NTP, DNS, SSL Certificate trust, NFS, LDAP and more!), but this manifest is all it takes to get a combined UniFi and UniFi Video system (with auto-update). It is really nice when software plays nice together.

# Setup the UBNT NMS for the UniFi wifi gear.
class unifi_nms {
    include 'apt'

    apt::source { 'ubnt':
        location    => '',
        repos       => 'ubiquiti',
        release     => 'stable',
        key         => '4A228B2D358A5094178285BE06E85760C0A52C50',
        key_server  => '',
        include_src => false,
    }

    apt::source { 'unifi-video':
        location    => '',
        repos       => 'ubiquiti',
        release     => 'jessie',
        key         => '795C6027520643F0BA02297F97B46B8582C6571E',
        key_server  => '',
        include_src => false,
    }

    package { 'haveged':
        ensure => latest,
    }

    package { 'unifi':
        ensure  => latest,
        # the matching apt source must be configured first
        require => [
            Apt::Source['ubnt'],
        ],
    }

    package { 'unifi-video':
        ensure  => latest,
        require => [
            Apt::Source['unifi-video'],
        ],
    }
}

In short the process is:

  1. Stop unifi-video
  2. Move the existing keystore out of the way
  3. Export the private key and certificate from unifi
  4. Convert the certificate to the appropriate formats and move into place
  5. Start unifi-video

This is the tricky bit; a few things are worth documenting for clarity:

For UniFi

For UniFi Video

You may want to unmanage your cameras first; the directions are a bit unclear in this exact case and I chose to.

This is what Worked For Me

Stop Services and Backup Keystore

> sudo invoke-rc.d unifi-video stop
> sudo mv /usr/lib/unifi-video/data/{keystore,keystore-orig}

Export Certificate and Key

>sudo keytool -importkeystore -srckeystore /usr/lib/unifi/data/keystore -destkeystore unifi.p12 -deststoretype pkcs12
Importing keystore /usr/lib/unifi/data/keystore to unifi.p12...
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:
Entry for alias cert1 successfully imported.
Entry for alias unifi successfully imported.
Import command completed:  2 entries successfully imported, 0 entries failed or cancelled

Use the UniFi password for all 3 password prompts or keytool will complain.

Now convert the PKCS12 store into DER encoded files with OpenSSL.

>openssl pkcs12 -in unifi.p12 -nokeys -clcerts -passin pass:aircontrolenterprise | openssl x509 -outform der -out unifi_cert.der
>openssl pkcs12 -in unifi.p12 -nocerts -passin pass:aircontrolenterprise -passout pass:123456 -out unifi_key.pem
>openssl pkcs8 -topk8 -inform PEM -in unifi_key.pem -passin pass:123456 -outform DER -nocrypt -out unifi_key_decrypted.der
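If you want to sanity-check the same PEM-to-DER conversions without touching the real UniFi keystore, you can exercise them with a throwaway self-signed pair first (the file names and the CN here are hypothetical):

```shell
# Generate a throwaway key and self-signed cert (hypothetical CN)
openssl req -x509 -newkey rsa:2048 -nodes -keyout test_key.pem \
    -out test_cert.pem -days 1 -subj "/CN=unifi.example.com"

# Certificate: PEM -> DER, as above
openssl x509 -in test_cert.pem -outform der -out test_cert.der

# Key: PEM -> unencrypted PKCS#8 DER, as above
openssl pkcs8 -topk8 -inform PEM -in test_key.pem -outform DER \
    -nocrypt -out test_key.der

# Both should parse back cleanly
openssl x509 -inform der -in test_cert.der -noout -subject
openssl pkcs8 -inform der -nocrypt -in test_key.der -out /dev/null && echo "key OK"
```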

Prepare and Install Certificate and Key

Now these get moved into place as specified by the documentation...

>sudo rm /usr/lib/unifi-video/data/{keystore,ufv-truststore}
>sudo rm /usr/lib/unifi-video/conf/evostream/server.*
>sudo mkdir /usr/lib/unifi-video/data/certificates
>sudo mv unifi_cert.der /usr/lib/unifi-video/data/certificates/ufv-server.cert.der
>sudo mv unifi_key_decrypted.der /usr/lib/unifi-video/data/certificates/ufv-server.key.der
>sudo chown -R unifi-video:unifi-video /usr/lib/unifi-video/data/certificates
>sudoedit /usr/lib/unifi-video/data/


>sudo invoke-rc.d unifi-video start


If all goes well you should see something like this in /var/log/unifi-video/server.log:

1513647038.643 2017-12-18 20:30:38.643/EST: INFO   >>>> unifi-video v3.8.5+a24428.171030.1542 is starting in main
1513647038.713 2017-12-18 20:30:38.713/EST: INFO   Loading camera keystore from /usr/lib/unifi-video/data/cam-keystore... in main
1513647038.792 2017-12-18 20:30:38.792/EST: INFO   Creating a new app key store and import custom certs in main
1513647038.792 2017-12-18 20:30:38.792/EST: INFO   Importing custom app key/cert pair in keystore in main
1513647038.792 2017-12-18 20:30:38.792/EST: INFO   importPrivateKey: loading keystore /usr/lib/unifi-video/data/keystore in main
1513647038.793 2017-12-18 20:30:38.793/EST: INFO   importPrivateKey: loading key /usr/lib/unifi-video/data/certificates/ufv-server.key.der in main
1513647038.835 2017-12-18 20:30:38.835/EST: INFO   importPrivateKey: loaded cert chain /usr/lib/unifi-video/data/certificates/ufv-server.cert.der - 1 certs found in main
1513647038.854 2017-12-18 20:30:38.854/EST: INFO   importPrivateKey: stored the key in main
1513647038.854 2017-12-18 20:30:38.854/EST: INFO   Custom app keystore created and loaded sucessfully in main
1513647038.863 2017-12-18 20:30:38.863/EST: INFO   Loading app keystore from /usr/lib/unifi-video/data/keystore... in main
1513647038.877 2017-12-18 20:30:38.877/EST: INFO   loadTrustStore load existing file: ufv-truststore in main
1513647039.064 2017-12-18 20:30:39.064/EST: INFO   SSL Keystore initialized in main
1513647039.145 2017-12-18 20:30:39.145/EST: INFO   Controller starting in main


Success Screen Shot

Now you can re-manage your cameras. I suspect that since cam-keystore is left in place, un-managing and re-managing your cameras may not be needed, but I'm going to err on the side of caution here.

All of my previously configured settings for the camera were re-applied (recording settings, motion zones, etc..), so it was only like 3 extra clicks for a little bit of safety.

December 16, 2017 @12:32


I don't remember where I ran across it, but I thought it was a pretty rad idea. I was able to find an old version of my website there and enjoy it in a browser similar to what I would have been running back then...

Old Screenshot

Inspired by their work and wanting to fool more around with Docker containers I set about to make some containers filled with old browser goodness.

If you saw my earlier post on Docker you might have noticed that, like a reasonable human who values his time, I'm running macOS on my workstation and not Linux. So now that I want to run X11 apps, the trick of just passing the local X11 socket through to the container isn't viable. I could install something like XQuartz but... no. That sounds awful, and also totally ignores Windows users. I'm told Windows can run Docker too... So the first hurdle was to figure out how to overcome my... opinions.

I ended up making a container called x11base that does a few things. For the widest possible support, it checks to see if you have passed an X11 socket into the container and if so it will use it. If you have not, it launches an X11 server and then x11vnc so you can attach to it with any VNC client.
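A minimal sketch of that entrypoint logic (illustrative only; the real script lives in my git repository, and the commented-out Xvfb/x11vnc invocations are assumptions about reasonable defaults):

```shell
#!/bin/sh
# Entrypoint sketch: if the host's X11 socket was bind-mounted into
# the container, use it directly; otherwise fall back to a virtual
# framebuffer exposed over VNC.
if [ -S /tmp/.X11-unix/X0 ]; then
    export DISPLAY=:0
    echo "using host X11 socket on $DISPLAY"
else
    export DISPLAY=:1
    echo "no host socket; starting Xvfb + x11vnc on $DISPLAY"
    # Xvfb :1 -screen 0 1280x800x24 &
    # x11vnc -display :1 -forever -nopass &
fi
```

On Linux you'd typically run with `-v /tmp/.X11-unix:/tmp/.X11-unix` to hit the first branch; on macOS or Windows you'd take the VNC path.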

All the browser containers inherit from this one.

You can find them in my git repository. At the moment I have Netscape 3.04 Gold and Netscape 4.79 working. I'm having some frustrating problems with Wine, so getting Netscape 1.22/win32 working is proving to be difficult.

Netscape 3.04 Gold Netscape 4.79

I included jwz's HTTP 1.0 proxy so they sort of actually work, though HTTPS has changed a lot since these browsers were made so I don't think any secure websites will work without more effort. I've been meaning to hack together an sslstrip-type proxy, but that feels kind of dirty (not that it will stop me, mind you...).


For the eagle eyed, there is also a version of Elite Plus in there, it works over VNC... sorta, but you probably should run it on Linux...

See you in the black...

Edited: December 13, 2017 @20:47

I'm not currently subscribed to Patreon, largely because where money on the Internet is concerned I have a long wait-and-see cool down period. A lot of Internet start-ups come and go like a flash in the pan, and a lot get bought quickly and morphed into something else. If you are going to have some way to charge me money, I need some stability. I have no problem being an early adopter, as long as you don't have a link to my bank account or credit card (even through a third party).

Seems like a safe and sane option to me.

That being said, since Patreon seemed like it was gaining traction, especially with people that I respect who are creating things, I started collecting links to the Patreon profiles I was interested in backing in my private wallabag instance, with the intention of eventually subscribing and throwing some beer money into the hat.

Of course Patreon goes and screws it up, so I'm at the very least putting that idea on hold.

Dave Jones of the EEVBlog just posted a good video about what they are doing from the creator's point of view.

You can go read Patreon's explanation and decide for yourself, but I get a huge waft of crap off this. I have a hard time trusting the direction this is going, and until that trust is restored I won't be giving them money.


I'll just leave this here...
