Matthew Ernisse

Edited: September 14, 2017 @22:03

It is funny. In this day and age of disposable everything, where people are more than happy to shell out money for things that don't actually exist, you might think that we've finally left nostalgia behind. There is no point in wishing for the past if it is all still there on some drive somewhere in the cloud.

I am finding that I have a form of nostalgia for old software. More elegant ways to transmit information, closer to the metal as it were. It's probably the same feeling as for a well-worn hand tool that has been replaced by a bulky power tool: one that does the job way faster but makes a big mess, a lot of noise, needs tremendous care and feeding, and breaks down in spectacular ways, periodically killing people. (I think I just accidentally took a shot at the web browser there...) 👾

But I digress. I started writing this because somewhere in my wandering I happened upon some more of archive.org's amazing work, and at the risk of falling off into another rambling tangent I have to admit that a part of me envies those folks. Working to preserve the whirlwind of ephemera that is the Internet so that, hopefully, those that come after us will be able to see all the hideous mistakes we made on our free GeoCities pages back in the mid-1990s, play all our old text adventure games, and witness the unbridled hubris as we created what we thought would be an anarchic, academic utopia.

If you are still reading this and want to learn a little bit about some of the events that shaped the form and function of the network we take for granted today, or you just want to marvel at the ability to still use a dead file format brought back to life by an emulator written in a language that deserves to die...

The Hacker Crackdown by Bruce Sterling

Edit

archive.org on an iPad

After fighting with the JavaScript emulation on archive.org, I decided to put together a package that essentially mimics what is running in the browser, but as a 'native' app. So if you would rather read the stack on your computer, or want a starting point for a working Mac SE emulator, go grab sterling.tar.gz.

August 25, 2017 @17:30

Why new WiFi?

Back in May I closed on a house, leaving my old apartment of 10 years behind. The house was built in 1856 and, as you might expect, it is built like a tank. This is lovely for many reasons but poses a bit of an impediment to having good WiFi.

As a bit of background, one of the things that I did during the nearly ten years I worked for the local phone company's ISP arm was help build and deploy various WiFi installations. These ranged from single-room, single-access-point coffee shops to small cities. We evaluated a number of vendors to standardize around when developing these solutions, looking at RF performance, number of concurrent clients, authentication and management infrastructure, robustness, and client roaming. This was back when 802.11g was brand new, so things have changed, but the lessons were well learned.

The Search

For the last 6 years or so I used a very nice access point from Ruckus Wireless. They have one of the nicest radio and antenna combinations on the market which let me cover my entire 1100 sqft apartment with one access point (and a fair bit of the parking lot... 😊) but they are a bit spendy and I couldn't justify buying 3 or 4 of them.

I also use MikroTik RouterBoard access points and routers for some smaller deployments, but honestly I'm not a huge fan of their CAPsMAN WiFi management software, and for whatever reason they don't seem to believe that standard PoE (802.3af) is worth supporting.

Also on the list of brands that my supplier, ISPSupplies, stocks is Ubiquiti. I had initially ruled them out because they also suffered from the lack of 802.3af, but I happened to see that they had just released some new access points, so I dug up a data sheet to see if they had finally seen the light and ditched passive PoE. It turns out they did, and they support the new 802.3at PoE+ standard as well. I was interested.

Features

Unifi Marketing Image

Why no how?

This isn't a tutorial on how to implement WiFi. There are many of those available online, and Troy Hunt made a rather nice one for Ubiquiti that is pretty close to what I ended up doing. He does a good job of going through the process, so feel free to check that out if you want to know how to deploy this stuff. This is meant to be more of an explanation of my experience with the product. Once I decided to go with the UniFi system I ordered the bits from my friendly supplier.

Bits I bought

Setup

I have a pretty complex network already so I didn't get the security gateway or any of their switches. The Cisco 3750 PoE switch that I have works just fine, and I very much like my OpenBSD router. I also don't trust the cloud very much so I chose to deploy the Linux version of the UniFi controller software. All in all it took me about 20 minutes to create a puppet manifest and deploy the software on a new VM. Taking ownership of the access points was a breeze and within 30 minutes I had the latest firmware on them and was ready to provision the network.
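The Puppet manifest boils down to a few provisioning steps. Here is a hand-run sketch of the equivalent on a Debian-family VM; the repository URL and package name are from memory and may have changed, so check Ubiquiti's current install notes rather than trusting this verbatim:

```shell
# Hypothetical manual equivalent of the Puppet manifest: add the
# UniFi apt repository, then install and start the controller.
echo 'deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti' \
    > /etc/apt/sources.list.d/unifi.list
# (import Ubiquiti's repository signing key here, per their docs)
apt-get update
apt-get install -y unifi
systemctl enable --now unifi
```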

UniFi device list

Configuration of my SSIDs, VLANs, and RADIUS profiles (I use WPA2-Enterprise for my internal SSID and have a WPA2-PSK guest network on a separate VLAN) was simple and intuitive. I'd say that I had a working WiFi network within an hour and a half, including opening the boxes and putting the access points roughly where I wanted them.

UniFi Map

Results

This was a couple months ago, and after living with the system for a while I can honestly say I'm extremely happy 😄. Installation, configuration, and firmware updates have been easy. All of the clients I have had on the network (Windows 10 laptop, macOS laptops, iPhone 7, Samsung Galaxy S6, BlackBerry Passport, BlackBerry Classic, iPad Mini 2, Kindle Fire, and Kindle PaperWhite) work great and, most importantly, roam between access points seamlessly. The previous Ruckus Wireless network performed really well in the old apartment, so unlike Troy I don't have glowing things to say about a huge performance boost...

Garage access point

but I can successfully cover about 1.75 acres with just 3 access points with no slowdowns or dropouts.

Garage AP statistics

UniFi Mobile App

View from the client location

View from test above

Conclusion

So, tl;dr: consumer grade router / access point combos are heaping piles of 💩 garbage. Don't use them; use something that was designed to be an access point. These Ubiquiti jobbies are pretty good, and I'd buy them again.

👍 💯 🍺

August 17, 2017 @13:40

There has been a lot of buzz about how quickly the web is moving towards HTTPS everywhere. For quite a while the EFF has had extensions for the popular browsers to enforce HTTPS Everywhere, and security bloggers like Troy Hunt have written a bunch about impending browser changes that are going to make life a lot harder for people with websites that do not support HTTPS.

I've been running HTTPS on ssl.ub3rgeek.net for a while now, since that site serves several applications (OwnCloud, tt-rss and wallabag for example) and I have good reason to want that to be secure, but I figured this was a good time to pull the trigger and put SSL on going-flying.com.

SSL Labs Test Result

The reality is that I'm unlikely to get the 'insecure' warnings from the coming browser updates anyway, and thankfully SNI is pretty well supported these days, so pulling that trigger was pretty damn easy. 👍

In my case I buy DV certificates from my registrar (a rad French company called Gandi). Before people start screaming about LetsEncrypt: I may switch to them at some point, but frankly I don't really feel like they are "there yet". I use certificates for a lot of things that you don't see, including signing Apple MobileConfig bundles for deployment to my iOS devices. LetsEncrypt certificates are still not trusted everywhere by default, and integrating their ecosystem into all those automated backend tools is... well, it's work I'm not getting paid for. 😂
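For the curious, that MobileConfig signing is plain S/MIME. A hypothetical illustration follows; a throwaway self-signed certificate stands in for the real DV cert, and every file name is a placeholder:

```shell
# Make a throwaway key and self-signed cert to stand in for the
# purchased certificate, plus a trivial stand-in profile.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.test" -keyout /tmp/key.pem -out /tmp/cert.pem
printf '<plist/>' > /tmp/profile.mobileconfig

# Sign the profile; iOS expects a DER-encoded, opaque (-nodetach)
# S/MIME signature wrapping the payload.
openssl smime -sign -nodetach -outform der \
    -signer /tmp/cert.pem -inkey /tmp/key.pem \
    -in /tmp/profile.mobileconfig -out /tmp/profile-signed.mobileconfig
```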

🍺

April 14, 2017 @16:08

I have been going through my ~/TODO list recently, and one item was to figure out why my Sonos indexing has been failing lately. I sync my iTunes library from my Time Machine backups into a shared space on my NAS so other things can get to it without my Mac having to be on.

I tried to re-add the UNC path and it would consistently return error 900.

Google wasn't helpful at all on what error 900 actually meant.

So I cranked up debugging on samba and this came across:

No protocol supported !

I had recently disabled SMB1 on my NAS, but hadn't connected that change with the start of my indexing failures.

So, tl;dr: it looks like Sonos uses SMB1 to connect to your NAS, so make sure that you leave it enabled.
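If the NAS is running Samba, re-enabling the old dialect is a couple of lines in smb.conf. A sketch, assuming a reasonably current Samba where the options are spelled this way (see smb.conf(5)):

```ini
[global]
    ; Allow the SMB1/NT1 dialect the Sonos needs while still
    ; offering newer dialects to every other client.
    server min protocol = NT1
    server max protocol = SMB3
```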

Dear Sonos... please use a newer version of SMB... SMB1 is terrible.

🍺 🔉

April 11, 2017 @20:08

I just wanted to quickly mention a change I ran into today while upgrading my OpenBSD routers to 6.1.

As a quick bit of background, I use OpenIKED to terminate VPN connections from OpenBSD routers, iOS devices, macOS devices, and MikroTik RouterOS devices. The OpenBSD and RouterOS systems are site-to-site links with ipip(4) interfaces running on top of the IKEv2 tunnels. Routing is handled by the ospfd(8) and ospf6d(8) daemons provided by OpenBSD.

The tunnel to my RouterOS device stopped working today with a rather strange message:

Apr 11 11:49:12 bdr01 iked[60779]: ikev2_ike_auth_recv: unexpected auth method RSA_SIG, was expecting SIG

Searching around in the debug output of iked(8) there was some indication that the daemon could only use RFC 7427 signatures:

Apr 11 10:01:23 bdr01 iked[64964]: set_policy: could not find pubkey for /etc/iked/pubkeys/fqdn/hostname

I checked RouterOS and it only has an RSA signature option for IKEv2 certificate-based authentication.

The fix?

Get the public key for the connection and put it where iked(8) expects it.

openssl rsa -in <private key> -pubout > <public key>

This allowed the tunnel to come right up without any changes on the MikroTik end.
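A self-contained sketch of that extraction; the paths here are hypothetical, and iked(8) looks for the peer's key under /etc/iked/pubkeys/fqdn/ named after the peer's hostname:

```shell
# Generate a throwaway RSA key to stand in for the tunnel's real
# private key, then extract the public half the way iked(8) wants it.
openssl genrsa -out /tmp/local.key 2048
openssl rsa -in /tmp/local.key -pubout > /tmp/vpn.example.com

# The result is a standard PEM public key.
head -1 /tmp/vpn.example.com   # prints -----BEGIN PUBLIC KEY-----
```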

March 10, 2017 @20:00

Over the years I have had many different BlackBerry phones. I started with a 7100t, one of the first candybar-style BlackBerry devices and just finished up a several-year relationship with a Passport.

I loved every minute of it.

I still think that RIM/BlackBerry had the best device for communication out there, but as they sunset the BlackBerry 10 operating system, there is no longer any reason to continue.

Yes, BlackBerry now makes Android software and TCL makes BlackBerry branded hardware but if you are going to switch away from a platform, you might as well evaluate all the options.

I chose an iPhone.

There are lots of reasons, and none of them are perfect, but at the end of the day it works for me, and that's what is important.

The tl;dr of it all is that I trust Apple more than I trust Google.

They are both huge multinational corporations that don't really care about anything but driving shareholder value... but Google basically only makes money by selling out its users.

My Collection

I will miss you, you crazy Canadians.

My BlackBerry Collection

September 19, 2016 @16:00

I have actually been building the static content of the site from a python(1) script for a while, though until recently it ran from cron(8) and rebuilt all the pages every hour. This wasn't too bad, since there were a few internal pages that also got rebuilt, including my graphing pages, which are built from SNMP queries of various network gear.

So a little bit about the page generation. The script uses the Cheetah Template engine to assemble the files for each static page. There is some logic in each template to ensure the proper elements are included based on which page is being created.

ScreenShot of code.html

For example, code.html is made up of 4 files.

  1. header.html.tmpl - Not visible; everything up to the closing head tag.
  2. nav.html.tmpl - The nav element, including the buttons for the other pages. It is even included on index.html, but it hides itself there since it knows it is not needed.
  3. code.html.tmpl - The content of the page.
  4. footer.html.tmpl - The footer element and the closing body and html tags.

This lets me build a wide variety of content out of the same style. There are configuration provisions in build.py that allow me to add additional JavaScript and CSS links in header.html.tmpl if I need to. This is used by the network information page to include additional style and the JavaScript that allows for dynamic hiding of the lists.

        elif page == "network.html.tmpl":
            extras["custom_css"] = [
                '/css/lists-ok.css',
                '/css/network.css'
            ]
            extras["custom_js"] = [
                '/js/jquery.js',
                '/js/network.js'
            ]
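The assembly itself is simple concatenation. Here's a stdlib-only sketch of the idea; the real build.py uses Cheetah, and all names below are made up for illustration:

```python
# Sketch of the four-part page assembly using string.Template instead
# of Cheetah. HEADER covers everything up to the closing head tag and
# FOOTER closes the body and html tags, mirroring the template layout.
from string import Template

HEADER = Template('<html><head><title>$title</title></head>')
NAV = '<nav>...</nav>'
FOOTER = '</body></html>'

def build_page(title: str, content: str) -> str:
    """Glue header, nav, page content, and footer into one document."""
    return HEADER.substitute(title=title) + '<body>' + NAV + content + FOOTER

html = build_page('code', '<p>projects</p>')
```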

The whole build process is fired off by the following post-receive hook in git.

#!/bin/sh
# going-flying.com post-receive hook
# (c) 2016 Matthew J. Ernisse <matt@going-flying.com>
# All Rights Reserved.
#
# Update the on-disk representation of my website when I push a new
# revision up to the git repository.

set -e

BUILD_DIR="/var/www/going-flying.com"
GIT_DIR=$(git rev-parse --git-dir 2>/dev/null)
REV=0

if [ -z "$GIT_DIR" ]; then
    echo >&2 "fatal: post-receive GIT_DIR not set"
    exit 1
fi

echo "updating $BUILD_DIR"
GIT_WORK_TREE=$BUILD_DIR git checkout -f

echo "building html from templates"
$BUILD_DIR/build.py

while read oldrev newrev refname; do
    REV="$newrev"
done

echo "optimizing JPGs."
find "$BUILD_DIR" -name \*.jpg -print0 | xargs -0 jpegoptim -qpst

echo "optimizing PNGs."
find "$BUILD_DIR" -name \*.png -print0 | xargs -0 pngcrush -reduce \
    -rem alla -q -dir "$BUILD_DIR"

echo "setting rev to $REV"
sed -e "s/GIT_REV/${REV}/" "$BUILD_DIR/index.html" > "$BUILD_DIR/index.html.new"
mv "$BUILD_DIR/index.html.new" "$BUILD_DIR/index.html"

echo "site deployed."

The result is that a git push looks like this:

Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 195.70 KiB | 0 bytes/s, done.
Total 11 (delta 2), reused 0 (delta 0)
remote: updating /var/www/going-flying.com
remote: building html from templates
remote: optimizing JPGs.
remote: optimizing PNGs.
remote: setting rev to 3ac149f570d379bf71ed78a7734042af2200591a
remote: site deployed.
To git@repo.ub3rgeek.net:going-flying.com.git
   197843c..3ac149f  master -> master

It works pretty well: it lets me serve static files with a long Expires: header, and in the end the pages load reasonably fast.
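Setting that long Expires: header is a one-liner in most web servers. A sketch assuming nginx (the post doesn't say which server actually fronts the site, so treat this as illustrative):

```nginx
# Hypothetical: cache static assets aggressively, since a new deploy
# rewrites the files on disk anyway.
location ~* \.(css|js|jpg|png)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```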

First test using GTMetrix from San Jose

Result of GTMetrix Page test

Even if I test from Australia using PingDom...

Result of PingDom Page test

Next time we will talk about the gallery generator. In the mean time... 🍺

September 06, 2016 @16:00

I am hoping this will be the first of three or four posts detailing some of the technical bits under the covers of the new website. In this particular post I'll talk mostly about the design decisions that went into the whole infrastructure.

All of this works for me, and is based on my use-case. It is entirely possible that your application may be different and some of the decisions I made won't work for you, but at least you can hopefully understand the reasons behind it all.

So first, the givens

What I chose to do

Why

This allowed me to hand-create a single HTML template that gets applied virtually everywhere (the gallery has its own bespoke templates). I was able to craft a responsive design with almost zero JavaScript (only the mobile interface for the gallery uses jQuery), which makes me happy. The site looks reasonable across desktops, phones, and tablets. It doesn't expose any data to third-party sites. It is fast to load and render. It takes almost no server resources to serve.

Most of the pieces (which I will go into in detail in the next few posts) have been around for a while, but it is how I'm putting them together that makes everything so much easier to maintain. I collapsed a lot of the templates down to a single base template class and only customize the bits needed for each page. I also went from triggering it all out of cron(8) on the hour to building only when a change is pushed to the server. This not only saves server resources by not rebuilding when nothing has changed, it also makes errors immediately noticeable (in the git push output instead of in a cron mail that I might ignore).

Hopefully this makes sense. Next time I'll start talking about the oldest part of the site -- the template builder.

Subscribe via RSS. Send me a comment.