It has been almost three years now since I built
Thoughts, my microblogging
platform, and I'd say it's been a success. Over the years I've tweaked it a bit
for cost and usability, integrating some quality-of-life features into the
posting interface (I swapped out my own editor for Trix),
changing the attachment processing pipeline to support videos, and taking
a rudimentary swing at rich link previews (which does need a revisit). As
it stands today it costs me between 9¢ and 11¢ per month to run.
One of the things that landed in the ~/TODO
after using it for a while
was the ability to reply to an existing Thought. Sometimes
I throw something out there and later want to follow up on it, but there
was no great way to do that. Thoughts are, generally speaking, immutable
and independent, and while this is an intended feature it led to
situations like this.
I've been trying to size and design a portable solar power system for
camping, so I needed to figure out a way to get the data from the
charge controller. Renogy sells some silly
Bluetooth module that can connect your charge controller to their app,
but that doesn't appear to provide any sort of long-term logging or
analysis functions, so it's not what I want. It turns out that, as is
the case with so many things, the answer was a quick Python script.
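The script itself isn't much. Here's a minimal sketch of the shape of it,
using the bleak BLE library; note that the device address, characteristic
UUIDs, and request bytes below are placeholders, not Renogy's actual
protocol details.

```python
# Rough sketch: poll a BLE charge controller and dump what it reports.
# The address, UUIDs, and request frame are placeholders, not values
# from Renogy's documentation.
import asyncio
from bleak import BleakClient  # pip install bleak

ADDRESS = "AA:BB:CC:DD:EE:FF"                         # controller's BLE address (placeholder)
NOTIFY_UUID = "0000fff1-0000-1000-8000-00805f9b34fb"  # notify characteristic (placeholder)
WRITE_UUID = "0000ffd1-0000-1000-8000-00805f9b34fb"   # write characteristic (placeholder)

def on_data(_sender, data: bytearray) -> None:
    # A real script would decode the response into battery voltage,
    # charge current, and so on; here we just dump the raw frame.
    print(data.hex())

async def main() -> None:
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(NOTIFY_UUID, on_data)
        # The module speaks a Modbus-like protocol; a real "read
        # registers" request (with its CRC) would be written here.
        await client.write_gatt_char(WRITE_UUID, bytes.fromhex("ff0301000007"))
        await asyncio.sleep(5.0)  # wait for the notification(s)

asyncio.run(main())
```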
The frequent reader of this blog will likely know what's coming next,
as the combination of InfluxDB and Grafana is a popular one here.
I got into it to
replace MRTG
then expanded it to monitor
my ADSB feeder,
a Mikrotik Wireless Wire,
an Arris DOCSIS cable modem,
my Internet speeds,
my bespoke sensor network,
the performance of all my systems including my
Windows gaming PC,
and of course
the performance of this website.
So it seems that despite nftables being The Way Forward for the Linux kernel firewall since kernel 3.13 or so, the CADT over at Docker don't seem to have bothered supporting nftables, mostly assuming that people will keep using the iptables compatibility shims. This manifested as build failures for a container on one of the new systems I'm building, due to a build step's inability to reach my DNS servers.
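If you hit the same wall, one workaround is to punch DNS through for the
Docker bridge yourself. A sketch, assuming an inet filter table with a
forward chain and the default docker0 bridge (names will vary with your
ruleset):

```
# Sketch of a workaround, not the fix from this post: allow DNS out of
# the Docker bridge. Table, chain, and interface names are assumptions.
nft add rule inet filter forward iifname "docker0" udp dport 53 accept
nft add rule inet filter forward iifname "docker0" tcp dport 53 accept
```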
When I get seriously involved in writing things on the computer I tend to
go to a full-screen terminal window and bring out tmux. I was a very heavy
user of GNU screen for many years, but I found the pane splitting in
tmux to be more flexible, so at some point I switched. I ported
much of my screen configuration over to maintain the muscle memory of
the keybindings. While I was at it I added several widgets to the status
bar at the bottom of the screen. These have served various purposes over the
years, but are mostly just scripts accreting atop one another.
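For a flavour of what those look like: each widget is just a program that
prints a short line, wired into the status bar with tmux's #()
interpolation. A hypothetical example, not one of my actual scripts:

```python
#!/usr/bin/env python3
# Hypothetical status-bar widget: print the 1-minute load average in
# the single short line tmux expects from a #() command.
# Wired into .tmux.conf with something like:
#   set -g status-right '#(~/bin/loadavg.py) %H:%M'
import os

one_min, _five, _fifteen = os.getloadavg()
print(f"load {one_min:.2f}")
```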
If you follow my microblog that I named Thoughts, you may have noticed that I added rich link previews. I found myself taking screenshots of links that I'd post, and that is just a silly duplication of work, which means it's time to write some software.
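The core of a link preview is just fetching the page and pulling out its
Open Graph tags. A minimal sketch of that idea, assuming requests and
BeautifulSoup rather than whatever the Thoughts pipeline actually uses:

```python
# Sketch: fetch a URL and extract the Open Graph metadata that a rich
# link preview is built from. Falls back to the <title> element.
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def fetch_preview(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    preview = {}
    for prop in ("og:title", "og:description", "og:image"):
        tag = soup.find("meta", property=prop)
        if tag and tag.get("content"):
            preview[prop.split(":", 1)[1]] = tag["content"]
    # fall back to the <title> tag when there's no Open Graph data
    if "title" not in preview and soup.title:
        preview["title"] = soup.title.get_text(strip=True)
    return preview

print(fetch_preview("https://example.com/"))
```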
When I originally built Thoughts, I only supported images as attachments. I used the accept parameter of the file input tag in the posting interface to prevent myself from trying to attach anything other than images, leaving the rest of the pipeline simply unimplemented. This made video support easy to add later.
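For reference, that filter is just the standard accept attribute on the
file input, along these lines (the name attribute here is an assumption):

```html
<input type="file" name="attachment" accept="image/*">
```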
When I sat down and put together the requirements for my micro-blog thing that I've dubbed "Thoughts", I decided that I wanted to provide a simple and clean way to link to and embed individual Thoughts. I explain a bit in a previous post how the embed works and the importance of keeping my infrastructure out of the critical display path. When you click the embed button on a Thought you get a bunch of standard HTML that includes the full content of the Thought. The only thing the JavaScript does is apply a custom stylesheet; it can be omitted (or blocked by something like CSP) and you will still get a decent-looking representation of the Thought, as you can see below.
I currently have a handful of containerized apps that I maintain in a shared repository and a few more that are in their own repositories. I wanted to be able to trigger builds of all my container projects from a single post-receive hook, so I leaned on the work I did previously cleaning up my git hooks and created a script that looks in the root of the repository for a Dockerfile and, if it finds one, launches a builder container using the same Python script that I wrote about previously.
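The shape of that hook is roughly this; a sketch with placeholder paths
and a plain docker build standing in for my builder container:

```python
#!/usr/bin/env python3
# Sketch of the post-receive idea: export the pushed revision, look for
# a Dockerfile in the repository root, and kick off a build if one
# exists. The paths and the builder invocation are placeholders.
import os
import subprocess
import sys

REPO = "/srv/git/myapp.git"    # bare repository (placeholder)
WORKTREE = "/tmp/myapp-build"  # scratch checkout (placeholder)

# export the pushed revision into a scratch work tree
os.makedirs(WORKTREE, exist_ok=True)
subprocess.run(
    ["git", "--git-dir", REPO, "--work-tree", WORKTREE,
     "checkout", "-f", "master"],
    check=True,
)

# only build projects that actually ship a Dockerfile
if not os.path.exists(os.path.join(WORKTREE, "Dockerfile")):
    sys.exit(0)

subprocess.run(["docker", "build", "-t", "myapp:latest", WORKTREE], check=True)
```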
It turns out that describing my new Thoughts system has turned into a three-part series. You probably want to go back and read the previous two articles before reading this one.
Ages ago I built
a FlightAware ADS-B feeder on a
Raspberry Pi Model B Rev 1.
To this day it is still running and happily feeding data to both
FlightAware and FlightRadar24. Earlier this
year I even built another feeder for the
UAT variant. Well, FlightAware finally
released support for
Raspbian (Debian) 10.0 (Buster),
so I decided that it was time to upgrade. At first I started down the path of
simply making a new manifest for Puppet, which readers of
this blog might recognize as my preferred configuration management utility.
Well, the two feeders I have are both rather under-powered and have very
little memory. Since the SDR decoding process takes up so much CPU time and
memory is already very thin, running the Puppet agent just didn't make a lot
of sense. It turns out that "look at Ansible again" has been sitting
around aging nicely in my ~/TODO,
so I figured why not.
Last Friday I deployed my new Azure Functions based Thoughts application to this website and wrote about the Python bits of it. Towards the end of that entry I mentioned that quite a bit of JavaScript and some Web Components technology went into pulling all this together. I figured I'd talk a little bit about the JavaScript side of things. Since there is a lot of it, I will start with the reading side, it being the more straightforward part.
Introduction
Being sequestered in the house for the last month and a bit has given me
(as I am sure it has most of us) an opportunity to go through the old
~/TODO
list. One of the things that has been aging on there has been
to finally explore "Serverless Computing" (whoever coined that phrase
has forgotten the face of their father). When evaluating the various
options available I decided to look at
Azure Functions
for a variety of reasons. Firstly, of the big three, I find Microsoft the
least distasteful. Their business model isn't "harvest everyone's data and
sell it while also sometimes doing other things"; instead they are an old-world
corporation whose go-to-market strategy seems to basically be to
exchange goods and services for money. Secondly, when I first started
looking into this they were the only provider to support
Python, which is my preferred language.
I did also look at Cloudflare Workers briefly, as running functions at the edge
makes a lot more sense to me than running them in a central datacenter, but
the lack of Python support and of a couple other features (more
on that as I talk about requirements) meant I'd need to combine their
technology with something else, which isn't what I was looking to do.
A while back I began working on replacing MRTG and RRDtool. I have written about the major parts of this previously, but the one feature of RRDtool that I still needed to support was its summarization and retention policies. An RRDtool database will automatically consolidate and roll off stored values based on the definitions set up when the database is created. MRTG uses this to generate the 'Daily' graph with a 5 minute average, the 'Weekly' graph with a 30 minute average, the 'Monthly' graph with a 2 hour average, and the 'Yearly' graph with a daily average.
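In InfluxDB 1.x terms, that maps onto retention policies plus continuous
queries. A sketch of the 'Weekly' tier, assuming the 1.x Python client;
the database, policy, and duration values here are made up:

```python
# Sketch: recreate RRDtool-style consolidation in InfluxDB 1.x with a
# retention policy and a continuous query. Database and policy names
# (and the durations) are assumptions.
from influxdb import InfluxDBClient  # pip install influxdb

client = InfluxDBClient(host="localhost", database="mrtg")

# keep 30-minute averages around long enough for a 'Weekly' graph
client.query(
    'CREATE RETENTION POLICY "weekly" ON "mrtg" DURATION 14d REPLICATION 1'
)

# downsample every measurement into the weekly policy at 30m resolution
client.query(
    'CREATE CONTINUOUS QUERY "cq_30m" ON "mrtg" BEGIN '
    'SELECT mean(*) INTO "mrtg"."weekly".:MEASUREMENT '
    'FROM /.*/ GROUP BY time(30m), * END'
)
```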
Goal
Ubiquiti's UniFi platform has the ability to
run scheduled speed tests to keep an eye on your ISP's throughput from
their USG router at a site. I discovered this back when I finished
converting the network at the office
over to UniFi and have been wanting to replicate this functionality at my
other locations where I use OpenBSD routers.
Currently I aggregate the data from those devices into my
new Grafana-based monitoring platform, which I wanted to continue using so I
could have a consolidated view of the infrastructure.
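The replacement amounts to a cron job on each router that runs a speed
test and writes the results into InfluxDB. A minimal sketch, assuming the
speedtest-cli library and the 1.x InfluxDB Python client; the measurement
and field names are made up:

```python
# Sketch: run a speed test and push the results into InfluxDB, the way
# a scheduled UniFi speed test would. Database and field names are
# assumptions.
import speedtest  # pip install speedtest-cli
from influxdb import InfluxDBClient  # pip install influxdb

st = speedtest.Speedtest()
st.get_best_server()   # pick the lowest-latency test server
st.download()
st.upload()
result = st.results.dict()

client = InfluxDBClient(host="localhost", database="telemetry")
client.write_points([{
    "measurement": "speedtest",
    "fields": {
        "download_bps": float(result["download"]),
        "upload_bps": float(result["upload"]),
        "ping_ms": float(result["ping"]),
    },
}])
```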
Edited: August 13, 2019 @15:00

I own my own cable modem and have for the past 10 or so years. At first it was to save on the rental fee for the garbage equipment the local cable company supplied, but since they stopped charging that it became more of a control thing. I have no need for the garbage router or WiFi access point that they supply. I used to work for an ISP, so I'm well aware of the quality and support these devices receive. (Fun fact: when I was in the business, the average per-unit cost target for a residential CPE (customer premises equipment) device was between US $3 and US $8. For business devices it went up to a whopping US $25 or so...) I also greatly prefer the features and power that my OpenBSD routers give me, and I've written more than a few posts about the various UniFi devices I've scattered around to provide WiFi. A few months ago the old Motorola/Arris SurfBoard 6141 I've had started playing up, needing periodic reboots to maintain the speeds provisioned. It was probably close to 7 years old, and even though it's still a supported DOCSIS 3.0 modem the specs are starting to leave a bit to be desired...