Goal
Ubiquiti's UniFi platform can run scheduled speed tests from the USG router at a site to keep an eye on your ISP's throughput. I discovered this back when I finished converting the network at the office over to UniFi, and I have been wanting to replicate the functionality at my other locations, where I use OpenBSD routers. I already aggregate the data from those devices into my new Grafana-based monitoring platform, which I wanted to keep using so I could have a consolidated view of the infrastructure.
Design
The Ubiquiti speed test connects to an echo server running in Amazon AWS and reports the results back to the controller, so the first thing I needed to do was either find an existing way to replicate that functionality or build something similar. Thankfully Debian ships a speedtest.net CLI application that works much like the Ubiquiti tester but leverages the existing Speedtest.net infrastructure. I ended up wrapping it in a really quick and dirty speedtester container that I can run periodically from cron(8).
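There isn't much to the container; a minimal sketch of the sort of Dockerfile involved looks like the following. The base image, package names, and paths here are illustrative, and the entry point and logger script are shown further down.
Dockerfile sketch
FROM debian:stretch-slim

# speedtest-cli and the Python InfluxDB client both ship as Debian packages.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        speedtest-cli python3 python3-influxdb && \
    rm -rf /var/lib/apt/lists/*

COPY entrypoint.sh upload-to-influx.py /
RUN chmod +x /entrypoint.sh /upload-to-influx.py

ENTRYPOINT ["/entrypoint.sh"]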
Implementation
Every hour cron fires the following script on one of my Docker engine hosts.
Container launch job
#!/bin/sh
set -e
CONTAINER_NAME="[redacted]/speedtester:latest"
docker run --rm \
    -e INFLUX_HOST=[redacted] \
    -e INFLUX_USER=[redacted] \
    -e INFLUX_PASS=[redacted] \
    -e INFLUX_DB=[redacted] \
    --network grafana_backend \
    ${CONTAINER_NAME}
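The schedule itself is just an ordinary hourly crontab(5) entry along these lines (the script path is a placeholder):
Crontab entry
# m h dom mon dow  command
0 * * * * /usr/local/bin/run-speedtester.sh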
Inside the container the entry point runs speedtest-cli with a few arguments to select the closest server and emit the results as a JSON formatted string, which then gets piped into a small Python script that ships the data off to InfluxDB.
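A minimal sketch of that entry point, assuming the stock speedtest-cli --json flag and that the logger script lives at /upload-to-influx.py, looks like this:
Entry point sketch
#!/bin/sh
set -e

# Test against the closest server (speedtest-cli's default), emit the
# results as JSON, and hand them to the logger on stdin.
speedtest-cli --json | /upload-to-influx.py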
JSON to InfluxDB logger
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
'''upload-to-influx.py (c) 2019 Matthew J Ernisse <matt@going-flying.com>
All Rights Reserved.
Log the results of a run of speedtest-cli(1) to an InfluxDB database.
Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:
* Redistributions of source code must retain the
above copyright notice, this list of conditions
and the following disclaimer.
* Redistributions in binary form must reproduce
the above copyright notice, this list of conditions
and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
import json
import os
import sys
from influxdb import InfluxDBClient
class SpeedtestLogger(object):
    ''' Parse the JSON output of speedtest-cli and post the statistics
    up to an InfluxDB.  DB configuration is stored in the environment.
    '''
    def __init__(self):
        ''' Read the configuration environment variables and setup
        the InfluxDB client.

        Variables:
            INFLUX_HOST - Hostname of InfluxDB server
            INFLUX_PORT - Port (8086 by default)
            INFLUX_USER - Username to authenticate with
            INFLUX_PASS - Password to authenticate with
            INFLUX_DB   - Database to log to

        The measurement will be called: speedtest
        The fields will be:
            download, upload, and ping
        The measurements will be tagged with:
            server_id, country_code, city, sponsor
        '''
        self.influx_config = {
            'host': os.environ.get('INFLUX_HOST'),
            'port': int(os.environ.get('INFLUX_PORT', 8086)),
            'user': os.environ.get('INFLUX_USER'),
            'pass': os.environ.get('INFLUX_PASS'),
            'db': os.environ.get('INFLUX_DB')
        }

        self.fields = {
            'download': 0.0,
            'upload': 0.0
        }

        self.tags = {
            'city': '',
            'country_code': '',
            'server_id': '',
            'sponsor': ''
        }

        self.timestamp = 0.0

        self.ifclient = InfluxDBClient(
            self.influx_config['host'],
            self.influx_config['port'],
            self.influx_config['user'],
            self.influx_config['pass'],
            self.influx_config['db']
        )

    def parseJson(self, s):
        ''' Pull the interesting fields and tags out of the JSON
        emitted by speedtest-cli.
        '''
        obj = json.loads(s)
        self.fields['download'] = obj['download']
        self.fields['upload'] = obj['upload']
        self.fields['ping'] = obj['ping']

        self.tags['city'] = obj['server']['name']
        self.tags['country_code'] = obj['server']['cc']
        self.tags['server_id'] = obj['server']['id']
        self.tags['sponsor'] = obj['server']['sponsor']

        self.timestamp = obj['timestamp']

    def postToInflux(self):
        ''' Write the parsed measurement out to InfluxDB. '''
        if not self.timestamp:
            raise ValueError('No timestamp, did you call parseJson first?')

        measurements = [{
            'measurement': 'speedtest',
            'tags': self.tags,
            'time': self.timestamp,
            'fields': self.fields
        }]

        self.ifclient.write_points(measurements)


if __name__ == '__main__':
    if sys.stdin.isatty():
        print('stdin is a tty, aborting!')
        sys.exit(1)

    logger = SpeedtestLogger()
    with sys.stdin as fd:
        data = fd.read().strip()

    logger.parseJson(data)
    logger.postToInflux()
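To sanity check the logger outside the container you can hand it a canned result on stdin. The JSON below contains only the keys parseJson() actually reads, and every value, along with the connection details, is a placeholder:
Manual test run
# Placeholder connection details for a throwaway test database.
export INFLUX_HOST=influxdb.example.com INFLUX_PORT=8086 \
       INFLUX_USER=speedtest INFLUX_PASS=secret INFLUX_DB=telemetry

./upload-to-influx.py <<'EOF'
{"download": 94000000.0, "upload": 9400000.0, "ping": 12.3,
 "timestamp": "2019-02-21T12:00:00.000000Z",
 "server": {"name": "Example City", "cc": "US", "id": "1234",
            "sponsor": "Example ISP"}}
EOF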
Dashboard
Now that the data is available to Grafana, it was easy to add a few panels to my existing router dashboard for the test measurements.
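Under the hood those panels are just InfluxQL queries against the speedtest measurement. As an illustration (the database name is a placeholder, and speedtest-cli reports throughput in bits per second, hence the division), the same data can be pulled by hand with the influx CLI:
Ad-hoc throughput query
influx -database 'telemetry' -execute \
    'SELECT mean("download") / 1000000 AS "download_mbps",
            mean("upload") / 1000000 AS "upload_mbps"
     FROM "speedtest" WHERE time > now() - 7d GROUP BY time(1h)'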
Conclusion
Start to finish, it only took a few hours to put all of this together. I didn't need to put the speed tester into a container, but it seemed like a reasonable way to keep a wide array of future deployment options open. I already have some cloud hosted assets, so it makes sense to be able to extend the monitoring into those environments if the need arises.

Even though I have less than a week of data, I find it a bit surprising that my ISP has been fairly reliable. I'm currently on an up to 100Mbps / 10Mbps plan, and the 95th percentile results over the last few days have been within 15% or so of meeting that claim.

I remain impressed with the ease of use and flexibility of the tools. Back when I worked for a national ISP we collected similar information for billing dedicated Internet customers, and it was all done with a large web of custom code.
Grafana made visualizing the information almost shockingly easy. Especially nice are the built-in transforms that let you calculate 95th percentiles over arbitrary time windows.
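For example, the 95th percentile download speed over the last week can be pulled straight out of InfluxDB with something like the following (again, the database name is a placeholder):
95th percentile query
influx -database 'telemetry' -execute \
    'SELECT PERCENTILE("download", 95) / 1000000 AS "p95_download_mbps"
     FROM "speedtest" WHERE time > now() - 7d'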
Honestly, I'd love to see features like this built into consumer endpoint gear; I think it would help keep ISPs honest.