Matthew Ernisse

Sonos Music Library NAS error 900

Published on: 2017-04-14 16:08

I have been going through my ~/TODO list recently, and one item on it was to figure out why my Sonos indexing has been failing lately. I sync my iTunes library from my Time Machine backups into a shared space on my NAS so other things can get to it without having my Mac on.

I tried to re-add the UNC path and it would consistently return error 900.

Google wasn't helpful at all on what error 900 actually meant.

So I cranked up debugging on samba and this came across in the logs:

No protocol supported !

I had recently disabled SMB1 on my NAS but didn't realize that change coincided with my indexing failures.

So tl;dr, it looks like Sonos uses SMB1 to connect to your NAS, so make sure that you leave it enabled.
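If your NAS runs Samba, re-enabling SMB1 comes down to the minimum protocol version. Something like this in the [global] section of smb.conf should do it (the exact option names vary a bit between Samba versions, so check your own documentation):

```
[global]
    # Sonos (as of this writing) only speaks SMB1/NT1, so allow it.
    server min protocol = NT1
```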

Dear Sonos... please use a newer version of SMB... SMB1 is terrible.

🍺 🔉

IKEv2 with OpenBSD (OpenIKED) 6.1 and MikroTik RouterOS.

Published on: 2017-04-11 20:08

I just wanted to quickly mention a change I ran into today while upgrading my OpenBSD routers to 6.1.

As a quick background I use OpenIKED to terminate VPN connections from OpenBSD routers, iOS devices, macOS devices and MikroTik RouterOS devices. The OpenBSD and RouterOS systems are site-to-site links with ipip(4) interfaces running on top of the IKEv2 tunnels. Routing is handled by the ospfd(8) and ospf6d(8) daemons provided by OpenBSD.

The tunnel to my RouterOS device stopped working today with a rather strange message:

Apr 11 11:49:12 bdr01 iked[60779]: ikev2_ike_auth_recv: unexpected auth method RSA_SIG, was expecting SIG

Searching around in the debug output of iked(8) there was some indication that the daemon could only use RFC 7427 signatures:

Apr 11 10:01:23 bdr01 iked[64964]: set_policy: could not find pubkey for /etc/iked/pubkeys/fqdn/bdr01.work.ub3rgeek.net

I checked RouterOS and it only has an rsa signature option for IKEv2 certificate-based authentication.

The fix?

Get the public key for the connection and put it where iked(8) expects it:

openssl rsa -in <private key> -pubout > <public key>

This allowed the tunnel to come right up without any changes on the MikroTik end.

Pouring out a little for my BlackBerry addiction

Published on: 2017-03-10 20:00

Over the years I have had many different BlackBerry phones. I started with a 7100t, one of the first candybar-style BlackBerry devices and just finished up a several-year relationship with a Passport.

I loved every minute of it.

I still think that RIM/BlackBerry had the best device for communication out there, but as they sunset the BlackBerry 10 operating system, there is no longer any reason to continue.

Yes, BlackBerry now makes Android software and TCL makes BlackBerry branded hardware but if you are going to switch away from a platform, you might as well evaluate all the options.

I chose an iPhone.

There are lots of reasons, and none of them are perfect, but at the end of the day it works for me, and that's what is important.

The tl;dr of it all is that I trust Apple more than I trust Google.

They are both huge multinational corporations who don't really care about anything but driving shareholder value... but Google basically only makes money on selling out its users.

My Collection

  • BlackBerry 7100t
  • BlackBerry 8100 (Pearl)
  • BlackBerry Bold 9000
  • BlackBerry Bold 9700
  • BlackBerry Playbook
  • BlackBerry Bold 9900
  • BlackBerry Bold 9930 (Work)
  • BlackBerry Q10
  • BlackBerry Passport
  • BlackBerry Classic (Work)

I will miss you, you crazy Canadians.

My BlackBerry Collection

Site reboot part two: building the static pages.

Published on: 2016-09-19 16:00

I have actually been building the static content of the site from a python(1) script for a while, though until recently it ran from cron(8) and rebuilt all the pages every hour. This wasn't too wasteful since there were a few internal pages that benefited from the regular rebuilds, including my graphing pages that are built from SNMP queries of various network gear.

So a little bit about the page generation. The script uses the Cheetah Template engine to assemble the files for each static page. There is some logic in each template to ensure the proper elements are included based on which page is being created.

ScreenShot of code.html

For example, code.html is made up of 4 files.

  1. header.html.tmpl - This is not visible; it is everything up to the closing head tag.
  2. nav.html.tmpl - This is the nav element, including the other page buttons. This is actually even included on the index.html page but it hides itself since it knows it is not needed.
  3. code.html.tmpl - The content of the page.
  4. footer.html.tmpl - the footer element and the closing body and html tags.

This lets me build a wide variety of content out of the same style. There are configuration provisions in build.py that allow me to add additional JavaScript and CSS links in header.html.tmpl if I need to. This is used by the network information page to include additional style and the JavaScript that allows for dynamic hiding of the lists.

        elif page == "network.html.tmpl":
            extras["custom_css"] = [
                '/css/lists-ok.css',
                '/css/network.css'
            ]
            extras["custom_js"] = [
                '/js/jquery.js',
                '/js/network.js'
            ]
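Conceptually the assembly step is just substitution plus concatenation. Here is a rough sketch of the idea using the stdlib's string.Template as a stand-in (the real build.py uses Cheetah, and these template and function names are hypothetical):

```python
from string import Template

# Hypothetical stand-in templates; the real ones are the *.html.tmpl
# files described above.
TEMPLATES = {
    'header': Template('<html><head><title>$title</title></head><body>'),
    'nav': Template('<nav>$page</nav>'),
    'footer': Template('</body></html>'),
}

def build_page(page, content_tmpl, values):
    """Assemble a page from header, nav, content and footer templates."""
    parts = [
        TEMPLATES['header'].substitute(values),
        TEMPLATES['nav'].substitute(page=page),
        Template(content_tmpl).substitute(values),
        TEMPLATES['footer'].template,  # no placeholders in the footer
    ]
    return '\n'.join(parts)

html = build_page('code.html', '<main>$title</main>', {'title': 'Code'})
```

Cheetah adds per-template logic (like the nav hiding itself on index.html), but the overall shape is the same.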

The whole build process is fired off by the following post-receive hook in git.

#!/bin/sh
# going-flying.com post-receive hook
# (c) 2016 Matthew J. Ernisse <matt@going-flying.com>
# All Rights Reserved.
#
# Update the on-disk representation of my website when I push a new
# revision up to the git repository.

set -e

BUILD_DIR="/var/www/going-flying.com"
GIT_DIR=$(git rev-parse --git-dir 2>/dev/null)
REV=0

if [ -z "$GIT_DIR" ]; then
    echo >&2 "fatal: post-receive GIT_DIR not set"
    exit 1
fi

echo "updating $BUILD_DIR"
GIT_WORK_TREE=$BUILD_DIR git checkout -f

echo "building html from templates"
$BUILD_DIR/build.py

while read oldrev newrev refname; do
    REV="$newrev"
done

echo "optimizing JPGs."
find "$BUILD_DIR" -name \*.jpg -print0 | xargs -0 jpegoptim -qpst

echo "optimizing PNGs."
find "$BUILD_DIR" -name \*.png -print0 | xargs -0 pngcrush -reduce \
    -rem alla -q -dir "$BUILD_DIR"

echo "setting rev to $REV"
sed -e "s/GIT_REV/${REV}/" "$BUILD_DIR/index.html" > "$BUILD_DIR/index.html.new"
mv "$BUILD_DIR/index.html.new" "$BUILD_DIR/index.html"

echo "site deployed."

The result is that a git push looks like this:

Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 195.70 KiB | 0 bytes/s, done.
Total 11 (delta 2), reused 0 (delta 0)
remote: updating /var/www/going-flying.com
remote: building html from templates
remote: optimizing JPGs.
remote: optimizing PNGs.
remote: setting rev to 3ac149f570d379bf71ed78a7734042af2200591a
remote: site deployed.
To git@repo.ub3rgeek.net:going-flying.com.git
   197843c..3ac149f  master -> master

It works pretty well: it allows me to serve static files with a long Expires: header, and in the end the pages load reasonably fast.
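The long Expires: header is plain web-server configuration; for instance, in nginx it would look something like this (a hypothetical example, not my actual server config):

```
# Cache static assets aggressively; the content only changes on a push.
location ~* \.(css|js|jpg|png)$ {
    expires 30d;
}
```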

First test using GTMetrix from San Jose

Result of GTMetrix Page test

Even if I test from Australia using PingDom...

Result of PingDom Page test

Next time we will talk about the gallery generator. In the mean time... 🍺

The re-birth of a website, a technical deep dive.

Published on: 2016-09-06 16:00

I am hoping this will be the first of three or four posts detailing some of the technical bits under the covers of the new website. In this particular post I'll talk mostly about the design decisions that went into the whole infrastructure.

All of this works for me, and is based on my use-case. It is entirely possible that your application may be different and some of the decisions I made won't work for you, but at least you can hopefully understand the reasons behind it all.

So first, the givens

  • I want to host this myself, with as few external dependencies as possible.
  • My site is fairly small.
  • I have some images I'd like to have a home for.
  • Sometimes I write things, but not a lot.
  • I have some code splashed around that is fun to share.

What I chose to do

  • The entire site is made up of static pages served off disk.
  • All the HTML and CSS is hand-written.
  • The gallery generator is also pre-processed and served directly from disk.
  • This entire blog is rendered from the same templates as the base site and markdown fragments to contain the article content.
  • I trigger all of this off a post-receive hook in my git repository.
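The markdown-fragment step in the list above can be sketched like this. It is a toy stand-in: the real build uses a proper markdown renderer and the same Cheetah templates as the rest of the site, and the names here are hypothetical.

```python
# Each blog post is a markdown fragment that gets rendered to HTML and
# wrapped in the site template.
PAGE = '<html><body><article>{body}</article></body></html>'

def to_html(fragment):
    """Toy markdown stand-in: wrap blank-line-separated chunks in <p>."""
    paragraphs = [p.strip() for p in fragment.split('\n\n') if p.strip()]
    return '\n'.join('<p>%s</p>' % p for p in paragraphs)

def build_post(fragment):
    return PAGE.format(body=to_html(fragment))

html = build_post('First paragraph.\n\nSecond paragraph.')
```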

Why

This allowed me to hand-craft a single HTML template that gets applied virtually everywhere (the gallery has bespoke templates). I was able to craft a responsive design with almost zero JavaScript (only the gallery's mobile interface uses jQuery), which makes me happy. The site looks reasonable across desktops, phones, and tablets. It doesn't expose any data to any third-party sites. It is fast to load and render. It takes almost no server resources to serve.

Most of the pieces (which I will go into in detail in the next few posts) have been around for a while, but it is how I'm putting them together that makes it so much easier to maintain. I collapsed a lot of the templates down to a single base template class and only customize the bits needed to make each page. I also went from triggering this all out of cron(8) on the hour to building it when a change is pushed to the server. This not only saves server resources rebuilding things when nothing has changed, but also makes it so errors are immediately noticed (in the git push output instead of in a cron mail that I may ignore).

Hopefully this makes sense. Next time I'll start talking about the oldest part of the site -- the template builder.