Edited: September 14, 2017 @22:03
It is funny. In this day and age of disposable everything, where people
are more than happy to shell out money for things that don't actually
exist, you might think that we've finally left nostalgia behind. There
is no point in wishing for the past if it is all still there on some
drive somewhere in the cloud.
Why new WiFi?
Back in May I closed on a house, leaving my old apartment of 10 years behind. The house was built in 1856 and, as you might expect, is built like a tank. This is lovely for many reasons but poses a bit of an impediment to good WiFi.
There has been a lot of buzz about how quickly the web is moving towards HTTPS everywhere. For quite a while the EFF has offered its HTTPS Everywhere extension for the popular browsers, and security bloggers like Troy Hunt have written at length about impending browser changes that are going to make life a lot harder for people with websites that do not support HTTPS.
I have been going through my ~/TODO list recently, and one item was to figure out why my Sonos indexing has been failing lately. I sync my iTunes Library from my Time Machine backups into a shared space on my NAS so other things can get to it without my Mac having to be on.
I just wanted to quickly mention a change I ran into today while upgrading my OpenBSD routers to 6.1.
Over the years I have had many different BlackBerry phones. I started with a 7100t, one of the first candybar-style BlackBerry devices and just finished up a several-year relationship with a Passport.
I have actually been building the static content of the site from a python(1) script for a while, though until recently it ran from cron(8) and rebuilt all the pages every hour. This wasn't too bad since there were a few internal pages that also got rebuilt, including my graphing pages that are built from SNMP queries of various network gear.
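The old hourly-rebuild arrangement described above amounts to a single crontab entry; a minimal sketch, assuming a hypothetical script path and log location:

```
# crontab(5) fragment: rebuild every static page at the top of each hour
# (script path and log file are illustrative, not the actual ones)
0 * * * * /usr/local/bin/build-site.py >> /var/log/build-site.log 2>&1
```

The move away from this pattern, presumably, is rebuilding only what changed instead of everything on a timer.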
I am hoping this will be the first of three or four posts detailing some of the technical bits under the covers of the new website. In this particular post I'll talk mostly about the design decisions that went into the whole infrastructure.

I have lately started using MikroTik RouterBoards for various remote sites on my network, mostly the RB951Ui-2HnD, as they are inexpensive, powerful, and an all-in-one remote-access solution. I typically only route prefixes for my network and networks I have direct VPN links to, but there are a few sites where I don't trust the local Internet provider and will route everything via the VPN.
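On RouterOS the route-everything case reduces to pointing the default route at the tunnel, with a host route keeping the VPN endpoint itself reachable via the local provider. A minimal sketch — the interface name, endpoint address, and ISP gateway here are hypothetical stand-ins:

```
# Untrusted local ISP: push all traffic down the VPN tunnel
/ip route add dst-address=0.0.0.0/0 gateway=vpn-tunnel1 comment="default via VPN"
# ...but keep the tunnel endpoint reachable through the ISP so the VPN can come up
/ip route add dst-address=203.0.113.10/32 gateway=198.51.100.1 comment="VPN endpoint via ISP"
```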

So I found myself stumbling across /r/unixporn/ the other day and a fair number of people seem to use screenfetch to display information about their systems in the screenshots they post.

I found myself needing to make roughly 100 DNS records for a DHCP pool. In BIND I usually accomplish this with some fancy vi(1) work, but sadly this was for a Windows-based lab at work.
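One way to fake the "fancy vi(1) work" on Windows is to generate the dnscmd invocations with a short script and paste the output into an elevated prompt on the DNS server. A sketch, assuming a hypothetical zone, server name, hostname pattern, and pool subnet:

```python
# Emit one dnscmd /RecordAdd command per address in the DHCP pool.
# Zone, server, hostname pattern, and subnet are illustrative stand-ins.
zone = "lab.example.com"
server = "dns1"

commands = [
    f"dnscmd {server} /RecordAdd {zone} dhcp-{host} A 10.0.50.{host}"
    for host in range(100, 200)  # ~100 pool addresses: .100 through .199
]

print("\n".join(commands))
```

The same trick works for the reverse zone by swapping the record type to PTR and iterating the in-addr.arpa names.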

In the previous post on this topic I talked about building my ADS-B receiver to feed FlightAware and FlightRadar24. I got decent results but was waiting for some final pieces to put the unit outdoors and attach the LNA and filter (some ethernet cables and an antenna in the end).

Being a student pilot I have been aware of the FAA's NextGen project which happens to include ADS-B. Most of what I have been poking around at has been the in-cockpit stuff, evaluating various "ADS-B in" (broadcasts TO airplanes in flight) products that provide things like FIS-B and TIS-B (weather and traffic, for non-pilot types). A few months ago I ran into a number of projects for making receivers that allow you to receive the "ADS-B out" traffic (broadcasts FROM airplanes in flight) and was interested. I then found out that FlightAware (my favorite flight tracking website) is interested in consuming the data streams from ADS-B receivers. So I built one.

I provide BaaS (Backup as a Service) on a NetApp FAS2020 to a number of friends using a vFiler on the system that hosts my public virtual machines (such as the one that runs this particular website). This provides separation and allows me to delegate administration of the backup location to the users that actually consume this data. When it came time to monitor the vFiler, though, I found that check_nac (which I use to monitor my instance) does not have access to vFiler resources. It looks like this is a limitation of the SNMP agent, so the solution was to use the wonderful NetApp Manageability API.

So it's not a secret that I am a big fan of Debian Linux, and also not a secret that I am a big fan of NetApp's storage technology (I did go work for them when given the chance, after all). However, in the "Enterprise" world Debian is kind of a second-class citizen: most people have heard of it, but RedHat kinda rules the day... Thankfully, if you do it right, Linux is pretty much Linux from a compiled-binary standpoint.