Matthew Ernisse

Edited: September 04, 2018 @23:30

So, I mentioned a while back that I watch Acquisitions Inc on the yubtubs. Well, through there I also started watching Dice, Camera, Action. During the Stream of Many Eyes event there was a DCA episode featuring Travis McElroy, and that reminded me that I have had The Adventure Zone languishing away on my iPhone for a while now, un-listened to. Now, I'm pretty terrible about keeping up with podcasts (there is so much good stuff out there to listen to and watch these days), so I just wanted to toss out a few words about what happened next.

The Adventure Zone: Balance

I started listening on July 23rd (according to Podcasts.app) and have basically been blowing through several episodes per day since. There are something north of 90 episodes in the feed and I am within 20 of being current. If that isn't a glowing enough review to interest you then here is a brief synopsis.

The Adventure Zone starts out as a comedy roundtable / actual-play-ish podcast as three brothers and their father take a stab at playing a starter module for D&D. It's light and funny and clearly a learning experience for all involved. I stuck with it, and after the first story, 'Here There Be Gerblins', wraps up, the first hints of what it will become are unveiled. By the end of the Balance adventure I was hooked. I very literally laughed, cried, and cheered aloud while listening. The production quality went through the roof somewhere about a third of the way through, and the story very quickly left the known world and became something special and unique unto itself. The chemistry of the McElroy family is something delightful to behold, and they do an amazing job of morphing D&D into something more consumable in podcast form. It really focuses on the collaborative storytelling aspect, with the often hilarious (and sometimes serious) consequences of perpetually unpredictable dice rolls. I think almost anyone would take a shine to the Tres Horny Boys (as they named their group, accidentally).

As I said, I am not entirely caught up; I am just up to the start of the new 'season', which was preceded by several short story arcs written by each of the McElroys using different RPG systems and settings as they tried to figure out what they wanted to do for 'season two'. All of the mini-stories so far have been really, really enjoyable.

The Adventure Zone: Amnesty

I implore anyone who might be reading this to check this out.

Apple Podcasts, RSS Feed

Edit:

I also forgot to mention that the music in this podcast is completely off the chain.

September 02, 2018 @12:45

Recently I had a rental VW with the fancy new radio in it and I figured I'd give CarPlay a shot.

Welp.

Welp, I guess I won't be needing that feature when I buy a new car.

August 29, 2018 @09:20

I've been stewing about this for a while and have not yet found an alternative so this is part rant part dear lazyweb plea.

Goodbye, Sonos.

Sonos recently released the 9.0 version of their software, which now requires you to have a Sonos account. I have zero desire to sign up for an account or be in a situation where my home stereo equipment needs to connect to the Internet just to work, so I'm actively looking to replace all the Sonos equipment in my home with something else. At the moment the leading idea is to just sprinkle Bluetooth speakers around the house. Since you need to use a phone or tablet to control the Sonos system anyway, I don't see any real drawback to just streaming Bluetooth audio directly to a speaker.

Honestly, since they never got AirPlay or the Android audio streaming equivalent working (for no clear reason, since both have been available on Raspberry Pis for YEARS now), never supported anything other than optical Dolby Digital on the Play:Bar and Play:Base TV speaker products, and since their controller applications just keep getting worse and worse, I am not sad about leaving them. For me, the only nice thing about their hardware that is missing from most modern network speakers is the inclusion of Ethernet.

So if anyone out there dear lazyweb has an idea of a replacement that doesn't need the cloud to provide base functionality I'd be interested in hearing about it.

🔊 🍸

August 27, 2018 @17:10

For a long time now the core of my ad blocking strategy has been squid and privoxy running on my OpenBSD routers. Mobile devices VPN into the network and receive a proxy.pac that routes all traffic to these proxies, which reject connections to known ad hosts. With the growing adoption of HTTPS (thankfully), privoxy is becoming less and less useful, so I have been trying to find better ways to block ads at the network level.
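
For anyone unfamiliar, a proxy.pac is just a JavaScript function the device evaluates for every request; a stripped-down sketch of the idea (the host and port here are placeholders, not my actual config) looks like this:

function FindProxyForURL(url, host) {
    // Send everything through the filtering proxy, falling back to a
    // direct connection if the proxy is unreachable.
    return "PROXY proxy.example.com:3128; DIRECT";
}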

I'm not going to get into the ethics of ad blocking; it's my choice to make, but I will leave this here.

Tay Tay says block ads (source)

Around the same time, CloudFlare announced 1.1.1.1, a privacy-focused anycast DNS service. I've been using the Level 3 anycast DNS resolvers for a while now, but that's not exactly optimal. With CloudFlare's resolvers you get not only a geographically distributed DNS resolver cluster but also DNS-over-TLS and DNS-over-HTTPS support.

Now, I run ISC BIND for my resolvers, which at this point supports neither encrypted DNS method. I do support and validate DNSSEC, but that doesn't keep people from eavesdropping on me.
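
Validation is easy enough to spot-check with dig(1), assuming your resolver is listening on localhost; the 'ad' flag shows up on validated answers, and a deliberately broken test zone like dnssec-failed.org should SERVFAIL:

# The 'ad' flag in the response header means the answer validated.
dig @::1 +dnssec isc.org A | grep flags

# dnssec-failed.org is intentionally mis-signed; expect SERVFAIL here.
dig @::1 dnssec-failed.org A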

Enter unbound

For a while now OpenBSD has had unbound as the recursive resolver in the base installation, so I've been aware of it and trust it. Since I do both recursive and authoritative DNS on the same servers I have not had a reason to introduce it. Until CloudFlare.

I added the unbound packages to my DNS server's puppet manifest so the default Debian package got installed. I then added the following configuration to /etc/unbound/unbound.conf.d/cloudflare.conf. Since I'm going to have BIND actually listen to and respond to queries from clients, I bind unbound only to localhost (::1 is the IPv6 loopback address) and listen on a non-standard DNS port (5300, since it was open and semi-obvious). This does mean that I have two layers of cache to worry about if I ever need to clear the DNS cache, but I almost never have to do that so I will worry about it later.
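
(For reference, flushing both layers would look something like the following, assuming unbound's remote-control interface were enabled, which it is not in the config below.)

rndc flush                      # clear BIND's cache
unbound-control flush_zone .    # drop everything at or below the root, i.e. unbound's whole cache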

unbound configuration

# This file is managed by Puppet.
#
# Forward DNS requests to CloudFlare using DNS over TLS.
server:
    verbosity: 1
    use-syslog: yes
    do-tcp: yes
    prefetch: yes
    port: 5300
    interface: ::1
    do-ip4: yes
    do-ip6: yes
    prefer-ip6: yes
    rrset-roundrobin: yes
    use-caps-for-id: yes
forward-zone:
    name: "."
    forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com
    forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
    forward-ssl-upstream: yes

I then switched the forwarders section of my named.conf from:

    forwarders {
        4.2.2.2;
        4.2.2.1;
    };

to:

    // Unbound listens on [::1]:5300 and forwards to CloudFlare
    forwarders {
        ::1 port 5300;
    };

After letting puppet apply the new configuration I checked the outbound WAN interface of my router with tcpdump(8) and verified that all DNS resolution was heading off to CloudFlare.
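
Something along these lines, where the interface is whatever faces your upstream (vio0 here is just an example):

# All resolution should now show up as DNS-over-TLS on port 853;
# any hits on plain port 53 mean something is bypassing unbound.
tcpdump -ni vio0 'port 853 or port 53'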

Adding adblocking

unbound(8) has a really nice feature where you can override recursion fairly easily. This can be leveraged to block malicious sites at the DNS layer. I found a couple of lists that I was able to plug in, and so far they have worked really well for me.

The first one is a malware block list that is already provided in the unbound config format. So I just used puppet-vcsrepo to ensure an up-to-date copy is always checked out in /usr/local/etc/unbound/blocks. I was then able to add include: "/usr/local/etc/unbound/blocks/blocks.conf" to the server: section of my unbound config.
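
(The entries in that file are just stock unbound directives, a pair per blocked host, along the lines of this hypothetical:)

local-zone: "malware.example.com" redirect
local-data: "malware.example.com A 0.0.0.0"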

Since I also wanted ad blocking I continued my search and came across Steven Black's curated list, which consolidates a number of different sources into a hosts-format file. Since this isn't exactly the format unbound wants, I had to do a little more work.

  1. Checked that repository out with puppet-vcsrepo into /usr/local/etc/unbound/stevenblack.
  2. Wrote the script below to convert the list format from a hosts file to an unbound configuration file.
  3. Configured puppet to exec that script when the vcsrepo pulls an update and then notify (restart) the unbound service.
  4. Added include: /usr/local/etc/unbound/stevenblack.conf to my unbound configuration.

unbound-blocklist script

#!/bin/sh
# unbound-blacklist (c) 2018 Matthew J Ernisse <matt@going-flying.com>
#
# Generate an unbound style config from a hosts list.

set -e

SRC="/usr/local/etc/unbound/stevenblack/hosts"
OUTPUT="/usr/local/etc/unbound/stevenblack.conf"


if [ ! -f "$SRC" ]; then
    echo "Could not open $SRC"
    exit 1
fi

awk '/^0\.0\.0\.0/ {
    print "local-zone: \""$2"\" redirect"
    print "local-data: \""$2" A 0.0.0.0\""
}' "$SRC" > "$OUTPUT"

The entire puppet manifest for the unbound configuration is as follows. It is included by the rest of the manifests that set up BIND on my name servers.

unbound Puppet manifest

# Unbound - This is the caching recursor.  Uses DNS-over-TLS
# to CloudFlare to provide secure and private DNS resolution.
class auth_dns::unbound {
    package { 'unbound':
        ensure => latest,
    }

    service { 'unbound':
        ensure => running,
    }

    file { '/etc/unbound/unbound.conf.d/cloudflare.conf':
        source => 'puppet:///modules/auth_dns/unbound.conf',
        owner => 'root',
        group => 'root',
        mode => '0644',
        require => [
            Package['unbound'],
        ],
        notify => [
            Service['unbound'],
        ],
    }

    exec { 'rebuild unbound blacklist':
        command => '/usr/bin/unbound-blacklist',
        refreshonly => true,
        require => [
            Package['unbound'],
            File['/usr/bin/unbound-blacklist'],
            Vcsrepo['/usr/local/etc/unbound/stevenblack'],
        ],
        notify => Service['unbound'],
    }

    file { '/usr/bin/unbound-blacklist':
        ensure => present,
        source => 'puppet:///modules/auth_dns/unbound-blacklist',
        owner => root,
        group => root,
        mode => '0755',
    }

    file { '/usr/local/etc/unbound':
        ensure => directory,
        owner => root,
        group => root,
        mode => '0755',
    }

    vcsrepo { '/usr/local/etc/unbound/blocks':
        ensure => present,
        provider => git,
        source => 'https://github.com/k0nsl/unbound-blocklist.git',
        revision => 'master',
        require => [
            Package['unbound'],
            File['/etc/unbound/unbound.conf.d/cloudflare.conf'],
            File['/usr/local/etc/unbound'],
        ],
        notify => Service['unbound'],
    }

    vcsrepo { '/usr/local/etc/unbound/stevenblack':
        ensure => present,
        provider => git,
        source => 'https://github.com/StevenBlack/hosts.git',
        revision => 'master',
        require => [
            Package['unbound'],
            File['/etc/unbound/unbound.conf.d/cloudflare.conf'],
            File['/usr/local/etc/unbound'],
        ],
        notify => Exec['rebuild unbound blacklist'],
    }
}

Conclusion

So far it feels like a lot of things load faster. I am noticing fewer requests being blocked by privoxy and squid, to the point that I'm thinking I may be able to completely deprecate them. It is also nice that devices on the network that don't listen to proxy.pac files are now being protected from malware and malvertising as well.

🍺

August 26, 2018 @11:30

iPictureFrame and Xcode

I know I'm not 'average' when it comes to my opinions about technology. I imagine this has to do with growing up with technology that was much simpler than it is today. Compared to modern software and hardware, the NEC PowerMate 286 running DOS 6.0 that I learned to program on was extremely simple. Not that it wasn't powerful, but it didn't have any designs to hide things from you. You had direct access to the hardware, and all the memory, and all the peripheral I/O space. You were able to completely control the system, and even understand exactly what was going on.

Not today.

Don't get me wrong, this isn't a bad thing. The protections in modern operating systems are required for the interconnected (and hostile) world we live in. Computers are also powerful enough that you can afford to give the user an API instead of direct access to the hardware (with all the risks that come along with that). The real problem I have is when vendors decide to lock down their consumer hardware to prevent the user from running whatever software they would like on it.

I could easily go off on a rant about Android devices with locked boot loaders, or "smart" TVs with unnecessary, non-removable, often poorly supported, and underpowered guts, or a myriad of the other unfortunate decisions manufacturers are making these days. But that's not what has been bugging me lately. I, like many people if Apple's quarterly filings and trillion-dollar valuation are to be believed, have spent a fair amount of money on iOS powered hardware. I expect that when I buy a thing I can basically do whatever I want with it. Now, I really do love the security-by-default stance of iOS, but I also believe firmly that as the owner of the device, if I want to shoot myself in the foot, I should be allowed to peel off the warranty-void-if-removed sticker and fire away.

Fucking Apple...

Of course the worst part is that it's not that I'm not allowed to run my own code on my iOS devices. If I have a Mac, and install Xcode, and sign up for an Apple Developer account, then for 6 days at a time I can run something I wrote on the thing I bought. To be clear, I'm 100% fine with that being the App Store development experience; however, what I want to do is write code for my own personal use on my own personal devices. I don't want any of this software to be transmitted to Apple to be put on the store, or sent in binary form to another person. All I want to do is run my own stuff on things I own.

Now I do understand that my particular use-case might be a bit outside the middle of the bell curve, but I think this is an expectation that isn't unreasonable. I would also point out that if you want to encourage people to learn to code, it might be a good idea to let them actually run their code, and live with it before trying to make a buck off it. In this world of launch early, release often and fix it in a patch release we really do need more people who are used to living with the choices they make. In my case I wrote a silly streaming audio player to help me fall asleep at night that requires a fair amount of infrastructure behind it, so I would never distribute it as a compiled binary, but I'd really like to not have to reload it on my device every 6 days. Similarly I have an iPad 1 and an iPad 2 that are basically useless but would make nice digital picture frames... if only I could run the app that I wrote for more than a few days without having to reload the code on them.

If anyone out there at Apple is listening, I'd really like a way to make my iOS devices trust my internal CA for code signing. Is that really so much to ask?

🍻

August 25, 2018 @12:00

UniFi Switches in the NMS

Since I installed the first bits of the Ubiquiti UniFi family of products in my network I have been impressed; they have never failed to meet my expectations. I have written several articles about some rather advanced configuration and implementation details, and of course several generally glowing reviews about the product line.

The only thing I've been missing is the UniFi switch products. Until recently I have not had a burning need for switches; I had plenty and they worked fine. However, similar to the story of the UniFi USG that replaced a wonky MikroTik RouterBoard, I started to have problems with the switch in my detached garage.

Old Garage Linksys

I did actually look around a bit when choosing a switch. There are three devices in the garage powered by PoE that I really wanted the switch to power directly. The Linksys did 802.3af PoE, but two of the three devices use "passive PoE", which isn't compatible, so I really wanted to either find a dual-mode switch or a source for 802.3af-to-passive-PoE adapters.

Well, the UniFi Switch 8-150W fit the bill perfectly. It can provide 802.3af, 802.3at (PoE+), and passive PoE over its 8 RJ-45 ports and sports 2 SFP cages as well. As with all the other UniFi stuff, installation and configuration were a breeze (in fact I did it using the iOS app while standing in my garage during a downpour), and it has been completely problem-free since installation (despite several 100+ degree days where I'm sure the poor thing has roasted).

New Garage USW8-150W

In fact, it worked so well that I kept poking around the UniFi switch line and discovered that there was another switch that scratched a rather odd itch.

At the office, the LAN is provided by the landlord. It is a combination of cable Internet and a bunch of Cisco Meraki gear. I have a PoE+ powered jack in my office that I use for my personal equipment, but I have not been able to take advantage of the PoE, so I have had an unsightly mess of power cables and PoE injectors hanging about. It turns out the UniFi Switch 8 (USW8) has just the thing: port 1 can consume PoE and power the device, while port 8 can provide PoE to a downstream device. I was able to eliminate a bunch of crap by dropping in one of these, as it is powered from the Meraki switch and in turn powers the downstream access point.

USW8 at work

I think it all came out rather smart. Chalk up another well designed product from Ubiquiti. I actually have another USW8 sitting and waiting to be deployed at home, but I have several holes to cut before that goes in.

🍻

July 25, 2018 @20:00

UniFi Security Gateway in the NMS

A couple days ago I wrote a bit about setting up a new Ubiquiti UniFi Security Gateway, and after living with it for a bit I have a few additional notes.

/config/user-data is preserved through resets

I'm not exactly sure why this happened, but I fat-fingered the JSON and during a provisioning cycle the USG wiped the certificates from /config/auth (where it seems to want you to put them). While rebuilding, I noticed that /config/user-data doesn't get wiped. The restore-default command seems to have set -x in it somewhere, so when you run it, it emits this:

mernisse@ubnt:~$ set-default
+ cmd=restore-default
+ shift
+ case $cmd in
+ exit_if_fake restore-default
++ uname -a
++ grep mips
+ '[' 'Linux ubnt 3.10.20-UBNT #1 SMP Fri Nov 3 15:45:37 MDT 2017 mips64 GNU/Linux' = '' -o -f /tmp/FAKE ']'
+ exit_if_busy restore-default
+ '[' -f /var/run/system.state ']'
++ cat /var/run/system.state
+ state=ready
+ '[' ready '!=' ready ']'
+ state_lock
+ lockfile /var/run/system.state
+ TEMPFILE=/var/run/system.state.4478
+ LOCKFILE=/var/run/system.state.lock
+ ln /var/run/system.state.4478 /var/run/system.state.lock
+ rm -f /var/run/system.state.4478
+ return 0
+ echo 120
+ echo 3
+ rm -f /config/mgmt
+ apply_restore_default
++ cut -c -8
++ echo 7080 27092 31310 11976 31941
++ /usr/bin/md5sum
+ local R=eb2c7606
+ prune_old_config
+ find /root.dev/ -type d -iname 'w.????????' -exec rm -rf '{}' ';'
+ rm -f /config/config.boot
+ rm -f /config/unifi
+ rm -f /config/auth/ca.crt /config/auth/server.crt /config/auth/server.key
+ mv /root.dev/w /root.dev/w.eb2c7606
+ state_unlock
+ /bin/rm -f /var/run/system.state.lock
+ reboot

I made a copy of the certificates for the VPN in /config/user-data to ensure that if this happens again I can simply copy them back into place.
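
Nothing fancy; something like this from the USG's shell (the certs subdirectory name is just my choice):

mkdir -p /config/user-data/certs
cp /config/auth/ca.crt /config/auth/server.crt /config/auth/server.key /config/user-data/certs/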

You can load a local JSON config file

The core of the UniFi system is the integration with the NMS; otherwise it would just be an EdgeRouter LITE. It appears that the provisioning process causes the controller's configuration to be merged with your config.gateway.json file and sent to the device. The downside is that you can't just push the JSON down to the USG; you need the entire rendered payload. Luckily you do have access to the underlying commands to import and export the configuration.

Once you have the USG up and working you can backup the JSON from the ssh console by running:

mca-ctrl -t dump-cfg > /config/user-data/backup.json

If for some reason the configuration gets messed up and you can no longer talk to the controller because the VPN is down, you can simply reload it with:

 mca-ctrl -t apply -c /config/user-data/backup.json

All in all I'm still happy with it minus two things that I've sent to Ubiquiti using their feedback form:

  1. I would really like to have the PEM-encoded certificates in the config.gateway.json. This would certainly help if you need to reload the device.
  2. I would like to have a checkbox to bridge eth1 and eth2. Almost everything at the office is wireless, but I do have a Synology NAS that I want wired. Thankfully the UniFi UAP-AC-IW that is there has a built-in 2-port switch, but if I wanted to use a different AP it seems like it would be really handy to be able to easily use the WAN 2 port as a switched LAN port.

🍺 👍

July 20, 2018 @16:45

Background

I have several physical locations linked together with VPN tunnels. The central VPN server runs OpenBSD with iked(8). I also have several roaming clients (iOS and macOS) that terminate client access tunnels on this system, so I am loath to make breaking changes to it. The site-to-site tunnels run a gif(8) tunnel in IP-over-IP mode to provide a layer 3 routable interface on top of the IKEv2 tunnel. My internal tunnels run ospfd(8) and ospf6d(8) to exchange routes, and my external site-to-site tunnels run bgpd(8). Most of my internal sites use OpenBSD as endpoints so configuration is painfully simple; however, in my office at work I have been using a MikroTik RouterBoard RB951-2HnD. This has worked well enough, but lately it has been showing its age, randomly requiring manual intervention to re-establish tunnels and flirting with periods of unexplainable high latency.

Old Work Network

Notes

This is not meant to be a comprehensive HOWTO. I doubt your setup will be close enough to mine to translate directly but hopefully you will find some useful information since this isn't a particularly well documented use case for the Ubiquiti UniFi USG product line.

It is also worth noting that under the covers the USG runs the same EdgeOS as their EdgeRouter line of products, with the caveat that the controller will overwrite the configuration any time it provisions the device. Fortunately Ubiquiti has foreseen this and offers a way to supply advanced configuration via a JSON file on the controller.

I manage all of my sites from a centralized UniFi controller instance, so I need the VPN to work before I can swap out the RouterBoard for the USG. This is an overview of how I did that.

Pre-Deployment

Since I already had a working VPN tunnel at the site, I had all the X.509 certificates and IP addresses needed to configure the new router. Starting at home, where the controller is located, I plugged the USG WAN port into my LAN and connected my laptop to the USG LAN port. I was able to adopt the gateway into the controller with no trouble.

I fiddled around with the config until I got it working and stuffed the changes into the config.gateway.json file. Finally, I blew the device away and forgot it from the controller. It is important at this point to reload the certificates into the factory-defaulted router (put them in /config/auth) before adopting the gateway in the controller; if it cannot find the files, the gateway will go into a reboot loop much the same way as if you had typo-ed the config.gateway.json file. Once the certificates were loaded, I re-adopted the gateway and the configuration was applied.

I was then able to take it into work and swap out the MikroTik.

Configuration

I will simply annotate the config.gateway.json file inline to explain how this all ended up going together.

{
    "service": {
        "dns": {
            "forwarding": {
                "options": [
                    "domain=work.ub3rgeek.net"
                ]
            }
        },

Set the DNS domain name handed out by the gateway, not strictly needed in this context, but handy.

        "nat": {
            "rule": {
                "6004": {
                    "description": "VPN Link Local NAT",
                    "destination": {
                        "address": "!172.16.0.0/15"
                    },
                    "log": "disable",
                    "outbound-interface": "tun0",
                    "outside-address": {
                        "address": "192.168.197.97"
                    },
                    "source": {
                        "address": "172.16.0.0/15"
                    },
                    "type": "source"
                }
            }
        }
    },

NAT any traffic coming from the tunnel or IPSec endpoint addresses to the canonical address of the router. This prevents local daemons from selecting the wrong source IP (most frequently done by syslogd).

    "interfaces": {
        "loopback": {
            "lo": {
                "address": [
                    "172.16.197.96/32"
                ]
            }
        },

This is the IPSec endpoint. I use policy-based IPSec, so this address needs to exist somewhere for the traffic to get picked up by the kernel and sent across the tunnel.

        "tunnel": {
            "tun0": {
                "address": [
                    "172.17.197.198/30"
                ],
                "description": "ub3rgeek vpn",
                "encapsulation": "ipip",
                "ip": {
                    "ospf": {
                        "network": "point-to-point"
                    }
                },
                "local-ip": "172.16.197.96",
                "mtu": "1420",
                "multicast": "enable",
                "remote-ip": "172.16.197.32",
                "ttl": "255"
            }
        }
    },

This sets up the IP-over-IP tunnel. Note that I could not get the OSPF session to come up for the life of me using my normal /32-addressed tunnel, so I switched to a /30; after that, OSPF came right up. If you debug ospf events and get complaints that the peer address of tun0 is not an ospf address, then you might be hitting this too.

    "protocols": {
        "ospf": {
            "area": {
                "0.0.0.0": {
                    "network": [
                        "192.168.197.96/28",
                        "172.17.197.196/30",
                        "10.10.10.0/24"
                    ]
                }
            },
            "parameters": {
                "abr-type": "cisco",
                "router-id": "192.168.197.97"
            },
            "passive-interface": [
                "eth0",
                "eth1"
            ]
        }
    },

This is rather straightforward; I'm redistributing the local networks and the tunnel address. This is a pretty simple OSPF configuration. Since I have no routers on the Ethernet end of things I set both interfaces to passive.

    "vpn": {
        "ipsec": {
            "auto-firewall-nat-exclude": "enable",
            "esp-group": {
                "ub3rgeek": {
                    "compression": "disable",
                    "lifetime": "3600",
                    "mode": "tunnel",
                    "pfs": "dh-group14",
                    "proposal": {
                        "1": {
                            "encryption": "aes256",
                            "hash": "sha256"
                        }
                    }
                }
            },
            "ike-group": {
                "ub3rgeek": {
                    "ikev2-reauth": "no",
                    "key-exchange": "ikev2",
                    "lifetime": "28800",
                    "proposal": {
                        "1": {
                            "dh-group": "14",
                            "encryption": "aes256",
                            "hash": "sha256"
                        }
                    }
                }
            },
            "site-to-site": {
                "peer": {
                    "69.55.65.182": {
                        "authentication": {
                            "id": "bdr01.work.ub3rgeek.net",
                            "mode": "x509",
                            "remote-id": "bdr01.colo.ub3rgeek.net",
                            "x509": {
                                "ca-cert-file": "/config/auth/ca.crt",
                                "cert-file": "/config/auth/server.crt",
                                "key": {
                                    "file": "/config/auth/server.key"
                                }
                            }
                        },
                        "connection-type": "initiate",
                        "ike-group": "ub3rgeek",
                        "ikev2-reauth": "inherit",
                        "local-address": "default",
                        "tunnel": {
                            "0": {
                                "allow-nat-networks": "disable",
                                "allow-public-networks": "disable",
                                "esp-group": "ub3rgeek",
                                "local": {
                                    "prefix": "172.16.197.96/32"
                                },
                                "protocol": "all",
                                "remote": {
                                    "prefix": "172.16.197.32/32"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

This is the real meat and potatoes of the configuration. It corresponds to the following configuration on the OpenBSD side of things.

#
# bdr01.work.ub3rgeek.net
#
ikev2 "work" passive esp \
        from 172.16.197.32 to 172.16.197.96 \
        peer $PEER_WORK \
        ikesa enc aes-256 \
                auth hmac-sha2-256 \
                prf hmac-sha2-256 \
                group modp2048 \
        childsa enc aes-256 \
                auth hmac-sha2-256 \
                group modp2048 \
        srcid bdr01.colo.ub3rgeek.net dstid bdr01.work.ub3rgeek.net \
        lifetime 360m bytes 32g

Conclusion

In the end I am very happy with the whole thing. The USG is pretty slick, and for simple configurations I imagine it is super easy to get going; other than the lack of documentation for some of the things that aren't exposed in the controller UI, it was not too hard to figure out. If you are stuck trying to figure out the CLI, you might want to explore the EdgeOS or Vyatta (the upstream open source project EdgeOS is based on) documentation. I found those helpful.

New Work Network

🍺

July 12, 2018 @20:47

I enabled HTTPS on this website just under a year ago. If you follow my blog you know that this is a static website, and since there appears to be a bit of an uproar in the web community over HTTPS right now, I figured I'd simply weigh in.

Do you need HTTPS for your website?

Yes.

There are lots of good reasons for this and not many reasons not to do it, but the major point that resonates with me is not the risk to your website, but the risk to the Internet at large. Actors (both malicious and benign) can inject content into any HTTP-served site and cause their visitors' web browsers to do... essentially whatever they want. This doesn't have to be targeted at your site; anyone in the middle can simply target ALL HTTP traffic out there, regardless of the content.

This isn't a user agent (browser) problem, and it isn't a server problem; anyone with access to ANY part of the network between the server and the user agent can inject anything they want without the authenticity provided by TLS.

HTTPS is Easy, and for most it is free. It also allows HTTP/2, which is faster, even for static sites like this one. Really it is. If you aren't convinced, let me also point you at Troy Hunt's excellent demo of what people can do to your static website.

April 06, 2018 @14:30

I had occasion today to install some updates on one of my macOS systems and found myself inconvenienced by a number of applications adding a pile of dock icons without asking. I don't keep much in the dock on my systems, preferring to use clover+space to launch applications, and I don't think I have touched the dock layout in literally years at this point, so I went searching for a solution.

Clean Dock

From chflags(1), the 'schg' flag sets the system immutable flag on a file, meaning not even the super-user (root) can alter it.

A quick cleanup of my dock and chflags schg on ~/Library/Preferences/com.apple.dock.plist seems to have prevented further changes by installers.

You will have to chflags noschg the plist file to make any changes to the dock stick in the future.
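
For reference, the whole dance is just:

# Arrange the dock to taste, then lock the preferences file.
chflags schg ~/Library/Preferences/com.apple.dock.plist

# Later, to make dock changes stick again:
chflags noschg ~/Library/Preferences/com.apple.dock.plist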

March 30, 2018 @10:06

I spent a few hours this week taking a break from Surviving Mars (which scratches the same itch that Sim City / Sim Tower scratch for me) and finally got around to playing VA-11 HALL-A. I really like this kind of game: a mechanically simple, story-driven world with interesting characters and design.

Jill gets a drink...

The game is pretty simple but tells an interesting and nuanced set of stories in an approachable 10 hour playthrough. The music is really good and there are some genuinely heartfelt and hilarious moments throughout.

I think I snagged this on a Steam sale -- well worth it.

March 24, 2018 @16:41

I mentioned offhandedly at the end of my post on how Docker and Flask are helping me sleep at night a potential use case for an iOS share extension. I finally started working on that idea.

iOS Share Extension Picker

In general I have to say I'm pleased with the iOS development workflow. The APIs seem fairly robust, and Swift 4 has clearly had a lot of thought and work put into it to make it much more accessible and readable than Objective-C, which I always kind of felt was... a bit overloaded with punctuation. I feel like Xcode is the real boat anchor on the whole process, most specifically the interface building tools. I found myself dreading having to work on the UI portion of my app; there are so many quirks in manipulating the various constraints and connections between UI elements and code that it just... hurt.

Getting my head back around the event-driven nature of GUI programming took a little while, and the nuances of the Apple GCD threading model that their application framework uses feel quite a bit different from the much more straightforward, mostly web-based programming that I have done recently. The only other real irritating piece isn't strictly speaking Apple's fault. Swift is a strongly-typed language and JavaScript isn't, which lends itself to some machinations converting one to the other (my API uses JSON for communication). I do feel a bit that, given the popularity of JSON for web APIs, this should have been more of a solved problem. I ended up using SwiftyJSON, but I still pine for the ease of Python's json.loads and json.dumps methods.

So the app started out as little more than a settings page and a share extension, which technically ticked the boxes that I originally set out to tick, but once I had gotten there, scope creep set in. I justified it to myself by saying that since I was trying to reduce the back and forth between tabs in Safari to start a job, I could just go a few steps further and put all the track and container management into the app as well.

Original ContainerStatusViewController

Honestly it didn't take too terribly long to do. I initially used FRadioPlayer to handle the playback but after having to make a bunch of changes to suit my needs I decided to simply re-implement it. As a bonus it ended up about half the size, since I'm not playing a live stream.

New TrackTableViewController

It does really make me sad that Apple has chosen to completely ignore the open source community in their development programs. I don't have a way to distribute the resulting application other than as an Xcode project. To use it you will have to get a copy of the project from the git repository and then build and install it from a Mac running Xcode. I can't just give you something you can drop into iTunes. In a way I empathize with the desire to prevent random malware by requiring signed bundles to come from the App Store, but not giving the user the ability to choose to install from alternative repositories does mean that there really isn't a good way to distribute free software on the platform.

NowPlayingViewController

Of course it's not like this will be useful without the rest of the infrastructure, so you'll need the Flask application, the Docker containers, and all of the requisite configuration to make all those things talk. Also be aware that iOS will refuse to make a non-HTTPS connection and will also refuse to verify a CA or certificate that has a SHA-1 hash anywhere in the trust chain (all very good things).

Lock Screen Now Playing

So far it has been an interesting journey down the rabbit hole. There is a lot of maturity on the Apple side of things. Containers certainly have their uses though I'm still afraid of people who think they are a good way to distribute software outside of their organizations. I still think that for most things native applications are at best a step back on the client side. There is a lot less code to make the website part go versus what is required to make the iOS app go and if the Share Extension was exposed via JavaScript I never would have needed to write an app in the first place. 🍺

February 06, 2018 @12:56

The Background

Right out of the gate I'll admit that my implementation is a bit naive, but it is, if nothing else, an example of what can be accomplished with a little bit of work. In general my microcontroller development workflow has been tied to a particular system, largely using vendor-supplied tools like MPLAB X or Atmel Studio. This is usually OK, as I need physical access to either the prototype or production hardware for testing and verification. From a high level it generally looks like this:

Three Swans Inn Lighting Controller

Lately I have switched most of my Atmel development away from their IDE and just use a Makefile to build and flash chips. This actually lets me write and build the code from anywhere, as it is all stored in a git repository which is checked out on a system that I can access from anywhere. The build toolchains are a bit obtuse, though, and keeping all the moving parts around just so I can ensure the code compiles has been a challenge.
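
A Makefile-driven AVR build boils down to the usual avr-gcc / avrdude steps; a minimal sketch (the MCU, programmer, and file names here are illustrative, not my actual targets):

avr-gcc -mmcu=attiny85 -Os -o firmware.elf main.c    # compile and link for the target MCU
avr-objcopy -O ihex firmware.elf firmware.hex        # extract a flashable Intel HEX image
avrdude -p t85 -c usbtiny -U flash:w:firmware.hex    # program the chip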

The Goal

So, containers! I made a couple of containers with the Atmel and Microchip toolchains in them; with a little glue I was able to connect the post-receive hook of my git repository to my Docker instance to produce on-commit builds of the appropriate project.

Part of the glue is based on the fact that I keep all the firmware source in a single git repository for ease of maintenance, so the hook tries to determine what changed in order to avoid rebuilding everything all at once. I also target several different microcontrollers, so there is a little logic in the hook to launch the right container for the right code.

The Hook

I snagged most of this from the post-receive hook that handles the deployment of this website. The biggest change was detecting which project within the repository needs to be built.

#!/bin/sh
# microcode post-receive hook
# (c) 2018 Matthew J. Ernisse <matt@going-flying.com>
# All Rights Reserved.
#

set -e

GIT_AUTHOR=""
GIT_BRANCH=""
GIT_DIR=$(git rev-parse --git-dir 2>/dev/null)
PROJECTS=""
REV=0

try_container()
{
    local mapping="\
        led-timer:atmelbuilder \
        led-strand:atmelbuilder \
        bar-lighting:atmelbuilder \
        led-gadget:microchipbuilder \
        "
    if [ -z "$1" ]; then
        echo "usage: try_container project"
        return 1
    fi

    for x in $mapping; do
        if [ "$1" = "${x%%:*}" ]; then
            start-build-container.py \
                "${x##*:}" "$1" "$REV" "$GIT_AUTHOR"
            return
        fi
    done
}

if [ -z "$GIT_DIR" ]; then
    echo >&2 "fatal: post-receive GIT_DIR not set"
    exit 1
fi


while read oldrev newrev refname; do
    GIT_BRANCH=$refname
    REV=$newrev
done

GIT_AUTHOR=$(git show --format='%ae' --no-patch $REV)

for fn in $(git diff-tree --no-commit-id --name-only -r $REV); do
    PROJECTS="$PROJECTS $(dirname $fn)"
done

if [ ! "$GIT_BRANCH" = "refs/heads/master" ]; then
    exit
fi

for project in $PROJECTS; do
    try_container "$project"
done

The Container Launcher

This is basically a stripped down version of the container module from my Flask youtubedown front end.

#!/usr/bin/env python3
'''
start-build-container.py (c) 2018 Matthew J. Ernisse <matt@going-flying.com>
All Rights Reserved.

Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:

    * Redistributions of source code must retain the
      above copyright notice, this list of conditions
      and the following disclaimer.
    * Redistributions in binary form must reproduce
      the above copyright notice, this list of conditions
      and the following disclaimer in the documentation
      and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
import docker
import os
import sys

# Configuration
DOCKER_CA="CA.crt"
DOCKER_CLIENT=(
    "cert.pem",
    "key.pem"
)
DOCKER_ENGINE="whale.example.com:2367"
DOCKER_REPOSITORY="whale.example.com"

def run_container(container_name, image, args):
    ''' Execute a container. '''
    global DOCKER_CA, DOCKER_CLIENT, DOCKER_ENGINE
    tls_config = docker.tls.TLSConfig(
        client_cert=DOCKER_CLIENT,
        verify=DOCKER_CA
    )

    client = docker.DockerClient(
        base_url=DOCKER_ENGINE,
        tls=tls_config
    )

    client.containers.run(
        image,
        args,
        auto_remove=True,
        detach=True,
        name=container_name,
        volumes={
            '/var/www/autobuild': {
                'bind': '/output',
                'mode': 'rw',
            }
        }
    )

def usage():
    print(
        "usage: {} image_name project_name git_rev author".format(
            os.path.basename(sys.argv[0])
        ),
        file=sys.stderr
    )


if __name__ == "__main__":
    if not len(sys.argv) == 5:
        print("Invalid number of arguments.", file=sys.stderr)
        usage()
        sys.exit(1)

    builder = sys.argv[1]
    project = sys.argv[2]
    git_rev = sys.argv[3]
    author = sys.argv[4]

    container_name = "{}-builder--{}--{}".format(
        project,
        git_rev,
        author
    )

    image = "{}/{}:latest".format(DOCKER_REPOSITORY, builder)

    try:
        run_container(container_name, image, project)
        print("*** Running {}...".format(image))
    except Exception as e:
        print("!!! Failed to start container: {}".format(str(e)))

Conclusion

A short while after I push a commit, a container will run, build the project, and emit an archive containing the relevant .hex files for the microcontroller as well as a log of the process. I still have some work to do on the Microchip Makefile, but for the most part this makes things a lot easier. I can develop from any workstation as long as I have the programming utilities for the chips, and if I don't, I can at least ensure that the code builds every time I commit. The plumbing is pretty generic so I'm sure I'll find other places to use it; for example, I was thinking I should try to find a way to build and push the Docker images to my private registry upon commit.

February 01, 2018 @11:13

I was headed back from the California Nebula last night in Elite: Dangerous to try to sneak in a few runs on the just finished community goal in the Wangal system. It was a little over 1000ly worth of travel... about 71 jumps in the old Type-6 to make it out of California Sector BV-Y c7. I had just found a non-human signal source in Aries Dark Region IM-V c2-15 and poked around a bit at the wreckage.

This looks ominous

One jump later, leaving Aries Dark Region QY-Q b5-4 I was yanked out of witchspace into Aries Dark Region VE-P b6-3. Sadly I ended up facing away from the Thargoid in a completely disabled ship and couldn't get turned around until it was about to jump away, so no chance to use the fancy new Xeno scanner.

Hyperdicted!

Stay safe out there cmdrs.

o7

EDSM route data

January 10, 2018 @10:37

YouTube Dead Job Running

I don't want to start out by bitching about yet another crappy Internet thing, but I think I have to. YouTube Red is this play from YouTube to try to get into the paid streaming business, and one of the 'features' they tout is the ability to play videos in the background on mobile devices... something that totally worked JUST FINE before.

This is a dumpster on fire

Over the last year or so I figured out a rather complex work-around for this on the iPad.

Basically:

  1. go to 'desktop mode' in the browser
  2. hit the PiP button and slide the video off the screen
  3. wait a few seconds
  4. lock the device
  5. playback will pause a few seconds later
  6. hit play from the lock screen

If you did it right the iPad goes back to making noise; if you screwed up the process or the timing, the nasty JavaScript takes notice and stops playback (causing the 'playing' UI on the lock screen to go away either when you lock or when you hit play from the lock screen). Since this needs PiP it doesn't work on the iPhone. 😦

Old Man Yells at Cloud

Doing this dance is annoying, and yet from time to time I like to listen to random music mixes as I'm falling asleep, so I have put up with it this far. (As an aside, lately I've been listening to Mike Duncan's Revolutions podcast before bed. He did The History of Rome, which I also loved, so check it out.) Always on the look-out for reasons to wield a big hammer at these kinds of problems, I started thinking about potential solutions and realized that I had been doing a different manual dance to get tracks loaded on my phone.

Tools

That dance looks like:

  1. youtubedown the video.
  2. ffmpeg to strip the video track out and leave the audio.
  3. copy into iTunes, cry, flail, gnash teeth
  4. ???
  5. have audio on my phone... somewhere

There has to be a better way... right? Obv.

It turns out tossing ffmpeg and youtubedown into a Docker container was stupidly easy, so that gives me a nice way to automatically do 1 and 2 above. Cool. Now the trick was how to make this all just happen from the phone, so I needed some sort of interface. I just happen to have a bunch of boilerplate Flask code lying around from other projects that leans on Bootstrap, so I dusted that off and started plugging away.
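
The container really just automates the first two steps of the dance above; roughly this (the file names are placeholders, youtubedown picks its own by default):

youtubedown 'https://www.youtube.com/watch?v=...'   # step 1: grab the video
ffmpeg -i video.mp4 -vn -c:a copy audio.m4a         # step 2: drop the video track, keep the audio as-is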

To take a quick step back, it is useful to know that most of my Internet facing infrastructure runs in a colocation facility. All of my internal stuff then connects to the colo via VPN and through the magic of OSPF and BGP it ensures that all the IPv4 and IPv6 traffic for 'my things' crosses only those VPN links. Almost all of the time this is a good thing. In fact this is the basis of how I ad-block and content filter all my devices including my iPhone and iPad. In this case though having to traverse the VPN and therefore my colocation provider's uplink twice isn't really useful for streaming audio that originally came from the Internet. I do have some servers at home though so I tossed Docker on one of the VMs with the idea that external requests can proxy over the VPN but if I am in bed I just have to traverse my WiFi. Sweet.

After a weekend of what felt like mostly fighting with JavaScript, I came up with YouTube Dead.

How this now works:

  1. find video I want to listen to.
  2. copy URL
  3. paste URL
  4. press start
  5. listen

Starting a job

Being able to launch the worker containers close to where their output will be used is a win for me. It solved both problems without the typical 'you now have n+1 problems' fallout. The Flask app uses a worker thread to watch the container, and if it finishes successfully it stores the metadata so I can listen to previously downloaded tracks with the click of a link. It would be trivial to detect the location of the user request and launch the container at any of my sites, letting me keep the data closest to the user requesting it. It would also be pretty trivial to extend this model to basically anything that I can shove into a container and might want to trigger from the web. Next, though, I think I'll start earnestly looking at the dumpster fire that is Apple iOS development to see if I can put together a share extension to eliminate #2, #3 and #4 in the list. 🍺 🐳
