Matthew Ernisse

December 04, 2018 @11:00

I'm trying to figure out a way to balance the lack of surprise against the schadenfreude I feel at Tumblr/Verizon's decision to paint all sexual content with the regressive and transparent 'but think of the children' brush. Tumblr grew largely thanks to the alternative and adult communities that found its permissive and accepting nature welcoming. It became what it is today because of the LGBTQ+ and sex worker communities, and now it has decided to break up. Their post paints a pretty picture full of platitudes, inclusiveness, acceptance, and love of community, but it is obvious to the most casual of observers that it is just a sham. Tumblr is breaking up with the people that helped it grow because it is easier than trying to actually make the service a better place.

IF I DON'T SEE IT, IT ISN'T THERE

I feel bad for the users that are being displaced, some of whom I have followed for close to a decade. I admit I do feel a bit like pointing and laughing at the management of Tumblr, who just signed their product's own death warrant, but most of all I feel like this is yet another billboard for retaining control of your community and your content. In an era filled with social media companies promising to help you build communities and share content, it is more important than ever to remember that at the end of the day they will all betray you eventually because you aren't their customer, you are their product. At some point your needs will clash with theirs and they will, without remorse, choose themselves every single time.

Anyone who creates anything on the Internet needs to relentlessly protect their community by ensuring that they have control. I am sure it sounds a bit bizarre to some, but if you are going to use services like social media to engage people (which you basically have to right now) you need to act like any day you could wake up and find them gone. You need to ensure that people can find you again, that your content and community don't just disappear, and that you can move on to whatever comes next.

Tumblr will die, the Internet will move on. In a couple of years it will be another story the olds tell the kids these days, but hopefully... we learn. In the meantime, I'm firing up grab-site and archiving the people I have followed on Tumblr for the last 10 years. Hopefully we will cross paths again.

To The Archive Mobile!

For creators, find a way to root your community in something you control. Go pay Ghost or a similar host to house your blog. Domain names are cheap these days; I like Gandi but there are many places that will sell you one. Resist the urge to get a free blog with a free URL. Being www.example.com/you is no less risky than being you.tumblr.com. It isn't expensive or difficult anymore to maintain a presence online where you are the customer. It isn't perfect, but at least when you own the domain name, if you need to change providers your address stays the same and your community can follow you to your new home. Link everything to your blog and link your blog to everything. Make it the clearing house for all that you are doing, and make it easy for your community to follow you when the inevitable happens.

For members of communities, and followers of creators: if it isn't clear where to go next, reach out to the creators. Many of them are scrambling to find a place to land or to let all their followers know where else they can be found. If you don't know, ask, and politely suggest they think about creating a place they own to anchor their community if they haven't already.

November 29, 2018 @10:57

I have been running a FlightAware / FlightRadar24 ADS-B feeder for almost 4 years now. It is an older Raspberry Pi B with an RTL-SDR stick running dump1090 at its core. These days it is mounted in my garage with the antenna on the roof. When I built it I stuffed a Honeywell HIH6130 temperature and humidity sensor in the enclosure. At the time it was mounted on a fence in my back yard where it would be in full sun for much of the day, so I hooked it up to Icinga to alert me if it ever got too hot or too wet inside.

asdb-feeder in August 2014

Lately I've been investigating ways to get more information into a central location for my infrastructure as a whole. I have a lot of one-off, largely custom built systems to collect and aggregate system status data. While this has worked for the last 12 years, it is most certainly starting to show its age. At the moment I'm working with a stack that includes collectd, InfluxDB, and Grafana. The latter two run as Docker containers, while the former is deployed by Puppet to all my physical and virtual hosts.

I wanted to pull together some additional monitoring information from the ADS-B feeder to see just how far I can go with this setup. Luckily the dump1090 web interface works by reading JSON files from the receiver daemon, so all the interesting statistics are available on disk to read.
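
Everything the script below uses is right there in those files; if you happen to have jq installed you can eyeball the same fields from a shell first (the paths are the dump1090-fa defaults that the script reads):

$ jq '.messages, (.aircraft | length)' /var/run/dump1090-fa/aircraft.json
$ jq '.total.local | {signal, noise, samples_processed, samples_dropped}' /var/run/dump1090-fa/stats.json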

dump1090-fa web interface

I was able to pull together a quick Python script that loads the JSON and emits the statistics to collectd (which forwards them on to InfluxDB for Grafana to work with). I need to get the script into git somewhere, but for now here is the currently running copy.

#!/usr/bin/env python3
''' collectd-dump1090-fa.py (c) 2018 Matthew Ernisse <matt@going-flying.com>
 All Rights Reserved.

Collect statistics from dump1090-fa and send to collectd.  Uses the collectd
Exec plugin.

Redistribution and use in source and binary forms,
with or without modification, are permitted provided
that the following conditions are met:

    * Redistributions of source code must retain the
      above copyright notice, this list of conditions
      and the following disclaimer.
    * Redistributions in binary form must reproduce
      the above copyright notice, this list of conditions
      and the following disclaimer in the documentation
      and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''
import json
import os
import socket
import time

def print_aircraft(stats):
    ''' Parse and emit information from the aircraft.json file. '''
    aircraft = len(stats.get('aircraft', []))
    messages = stats.get('messages')
    if messages is None:
        raise ValueError('JSON stats undefined')

    m = "PUTVAL \"{}/dump1090/counter-messages\" interval={} N:{}".format(
        hostname,
        interval,
        messages
    )
    print(m)

    m = "PUTVAL \"{}/dump1090/gauge-aircraft\" interval={} N:{}".format(
        hostname,
        interval,
        aircraft
    )
    print(m)


def print_stats(stats):
    ''' Parse and emit information from the stats.json file. '''
    counters = [
        'samples_processed',
        'samples_dropped',
    ]

    gauges = [
        'signal',
        'noise'
    ]

    values = stats.get('local')
    if not values or not isinstance(values, dict):
        raise ValueError('JSON stats undefined')

    for k in counters:
        value = values.get(k)
        if value is None:
            value = 'U'

        m = "PUTVAL \"{}/dump1090/counter-{}\" interval={} N:{}".format(
            hostname,
            k,
            interval,
            value
        )
        print(m)

    for k in gauges:
        value = values.get(k)
        if value is None:
            value = 'U'

        m = "PUTVAL \"{}/dump1090/gauge-{}\" interval={} N:{}".format(
            hostname,
            k,
            interval,
            value
        )
        print(m)


if __name__ == '__main__':
    interval = float(os.environ.get('COLLECTD_INTERVAL', 10))
    hostname = os.environ.get('COLLECTD_HOSTNAME', socket.getfqdn())

    while True:
        with open('/var/run/dump1090-fa/stats.json') as fd:
            stats = json.load(fd)

        stats = stats.get('total')
        print_stats(stats)

        with open('/var/run/dump1090-fa/aircraft.json') as fd:
            stats = json.load(fd)

        print_aircraft(stats)
        time.sleep(interval)

I also wanted to pull in the temperature / humidity sensor readings; that ended up being a similarly easy task since I had already written a script for Icinga to use. A quick modification to emit the values in the way that collectd wants and that data was flowing in as well. I created a user in the i2c group so the script can use the i2c interface on the Raspberry Pi.
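
On Debian creating that user is quick; something along these lines should do (the account name just needs to match what you put in the collectd Exec line further down):

$ sudo adduser --system --no-create-home --ingroup i2c i2c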

The script currently looks like this.

#!/usr/bin/env python3


import os
import socket
import time
import smbus

def read_sensor():
    ''' Protocol guide:
    https://phanderson.com/arduino/I2CCommunications.pdf
    '''
    bus = smbus.SMBus(0)
    devid = 0x27

    # writing the device id to the bus triggers a measurement request.
    bus.write_quick(devid)

    # wait for the measurement, section 3.0 says it is usually
    # 36.65mS but roll up to 50 to be sure.
    time.sleep(0.05)

    # data is 4 bytes
    data = bus.read_i2c_block_data(devid, 0)

    # bits 8,7 of the first byte received are the status bits.
    # 00 - normal
    # 01 - stale data
    # 10 - device in command mode
    # 11 - diagnostic mode [ ignore all data ]
    health = (data[0] & 0xC0) >> 6

    # section 4.0
    humidity = (((data[0] & 0x3F) << 8) + data[1]) * 100.0 / 16383.0

    # section 5.0
    tempC = ((data[2] << 6) + ((data[3] & 0xFC) >> 2)) * 165.0 / 16383.0 - 40.0

    return (tempC, humidity)


if __name__ == '__main__':
    interval = float(os.environ.get('COLLECTD_INTERVAL', 10))
    hostname = os.environ.get('COLLECTD_HOSTNAME', socket.getfqdn())

    while True:
        retval = read_sensor()
        print("PUTVAL \"{}/hih6130/gauge-temperature\" interval={} N:{:.2f}".format(
            hostname,
            interval,
            retval[0]
        ))

        print("PUTVAL \"{}/hih6130/gauge-humidity\" interval={} N:{:.2f}".format(
            hostname,
            interval,
            retval[1]
        ))

        time.sleep(interval)

The collectd plugin configuration is pretty easy. The dump1090 files are readable by nogroup, so you can execute that script as nobody. As I said, I made an i2c user that is a member of the i2c group so the Python SMBus module can communicate with the sensor.

LoadPlugin exec
<Plugin exec>
    Exec "nobody:nogroup" "/usr/bin/collectd-dump1090-fa.py"
    Exec "i2c:i2c" "/usr/bin/collectd-hih6130.py"
</Plugin>

Once the statistics were flowing into InfluxDB, it was just a matter of putting together a dashboard in Grafana.
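
Before fiddling with panels it is worth confirming the measurements are actually landing in InfluxDB. Something like this works against the 1.x HTTP API; the host, port, and database name here are assumptions that depend on how your collectd output and InfluxDB container are configured:

$ curl -G 'http://localhost:8086/query' \
    --data-urlencode 'db=collectd' \
    --data-urlencode 'q=SHOW MEASUREMENTS WITH MEASUREMENT =~ /dump1090/'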

Summary Panels

Host Status Panels

SDR Status Panels

The JSON from Grafana for the dashboard is here, though it may require some tweaking to work for you.

So far I'm pretty happy with the way this all went together. I still have a bunch of network equipment that I'd like to bring over and a stack of ancient MRTG graphs to replace. Hopefully it will be a similarly simple experience.

🍻

November 26, 2018 @15:30

While I was waiting for new tires to be put on my car today I was able to watch the landing of Mars InSight, which was relayed via the MarCO A&B experimental interplanetary cube sats.

Mission Control as touchdown was confirmed

Since everything worked so well we even got back a picture from the lander mere moments after landing was confirmed.

Hello, Mars

Congratulations to everyone involved in this mission. I'm excited to see what we learn not only about our friend the red planet but also about the continued feasibility of the cube sat program. Maybe we'll see something like the Planet Labs Dove cube sat streaking towards Mars someday.

MarCO Relay Animation from NASA/JPL-Caltech

November 25, 2018 @23:40

I know it's not particularly uncommon for web sites these days to drastically change things, and in fact most people consider this a feature. The fail-fast mentality is great and all, except that it means you are often failing more than not, and the general consensus seems to be that it's perfectly acceptable to do it in public to the detriment of your users.

There are a few patterns that I really wish would die. One of the worst offenders is single page "apps" that hijack the history navigation controls of your browser to keep you from having to download the 20MB of JavaScript on every page load, making it next to impossible to refresh the page after the inevitable crash of the script somewhere in the dark mess of the minified JavaScript source. The other is the continued fad of styling the native browser video controls and overriding the functionality that is built in to provide "a consistent look and feel across platforms." This wouldn't be quite so annoying if it didn't almost always break in some unique and fun ways and omit features that the developer didn't have on their personal laptop. I don't think I've seen a single video site that skinned the HTML video element and still provided a native full screen view or picture in picture on macOS. I find those features really useful and Apple worked very hard to optimize them for performance and battery life, so it would be great if people would just leave them alone.

To move from a general rant to something slightly more specific, here are a few examples from the latest YouTube redesign that really drive home the level of amateur hour that keeps infesting the web.

This video was not vertically letter boxed

No, this video was not letter boxed

I know CSS is hard, but come on Google...

I guess even Google doesn't get CSS

Yes, I always wanted to scroll left and right to see my whole video

Why would you ever scroll part of a video off screen?

Insert sad trombone here.

November 07, 2018 @23:30

A little over six and a half years ago I left the Linux-as-a-desktop community for the Mac community. I replaced a Lenovo ThinkPad T500 with an Apple refurbished late 2011 MacBook Pro and honestly have not regretted it.

Over the years I maxed out the memory, went from the 500GB SATA HDD to a Crucial 256GB SSD, then put the 500GB HDD in the optical bay, then upgraded to a Samsung EVO 512GB SSD with the optical drive back in there. I replaced the keyboard twice, the battery twice, and had the logic board replaced out of warranty for free by Apple thanks to the recall for an issue with the discrete graphics. Through all that it quite happily chugged along and for the most part just worked. Even now it's in better shape than most of my old laptops, but the lid hinge is starting to get weak (it will often just slowly close itself on me), it needs yet another new battery, and the inability to run the latest macOS led me to conclude that it is time to look for an upgrade.

Old Laptops

It ended up being a bit of a challenge to decide on an upgrade, though. I really like the 13" Retina MacBook Pro I have for work, I really like the portability of the MacBook, and the new MacBook Air looks like a great compromise between the two. I fought with myself for quite some time over what would come next for me and finally settled on a 15" Mid-2015 Retina MacBook Pro. Essentially the bigger brother of what I have for work.

Hello, Kitsune

Now I won't blame you if you are wondering why I'd pick a 3 year old laptop over the latest and greatest. In the end it was almost the only choice. I wanted a 15" class laptop because I spend most of my time computing sitting on the couch. The 13" is really nice and portable but it's actually a little too light and a tad too small to comfortably use on my lap. That basically ruled out the lighter and smaller MacBook and MacBook Air. As for the newer 15" MacBook Pro, I almost exclusively use the vi text editor so not having a hardware escape key is just not something I feel I can get used to. I've also heard many people at work who do have the new MacBook Pros complain loudly about the keyboard so that was another nail in the coffin of the new models.

Given all that, the last non-touchbar 15" MacBook Pro is... the Mid 2015. I found a nice example with the 2.5GHz i7 and the Radeon R9 on eBay for a really good price after a few weeks of looking and snapped it up.

Since this is the second Mac I've ever had as my primary workstation it was the first time I got to use Migration Assistant. I have previously used recovery mode to recover from Time Machine which works a treat so I had high expectations. In the end I'd say the experience lived up to them. The only real problem I had seems to be related to how I have my WiFi configured. I use WPA2 Enterprise (so username and password authentication) on my main SSID which I configure using a profile in macOS (which also serves to disable Siri, a bunch of iCloud stuff I don't use, sets up my internal certificate trust settings, and my VPN). Every time I started up Migration Assistant it would drop off the WiFi with no explanation. After flailing around a bit it looks like that was because it couldn't access the authentication information after logging me out to start the migration, so I figured I'd use Ethernet. That would have worked except that the laptop showed up on a Saturday and the only Thunderbolt to Ethernet adapter I own was at the office. Thankfully my guest WiFi uses WPA2 PSK and that actually appears to work just fine.

Migrating!

It took about 4 hours to transfer the 210GB or so of data, but afterwards the new Mac looked just the same as the old Mac. After a quick run through my customization script to catch the few settings in the new version of macOS, the automounter, and the applications I have installed via homebrew, I have not had to go back. Sunday evening I shut off the old laptop. I do plan on re-deploying it as a test workstation if I ever get around to building a dedicated test environment, but for now it is sitting in a drawer under my desk.

It's been a good laptop and this new one has big shoes to fill.

Goodbye, Aramaki

🍺

November 06, 2018 @13:40

It's probably too late to change anyone's mind, but I saw a particularly salient tweet come across this morning.

@SwiftOnSecurity

Also particularly poignant for me is this morning's post over at McMansion Hell

Nub Sez Vote

So please, vote. This is basically the bare minimum required of all citizens in this republic other than paying your taxes (maybe). In any case, it is our only chance to directly influence the policies of this nation and is a right that thousands died to secure for us. Your voice counts, but to get it heard you have to show up.

I Voted 2018

🇺🇸🍻 🎉

October 31, 2018 @21:50

Getting Started

Tor in Containers

I have been looking for reasons to try Docker on one of the random stack of unused Raspberry Pis that I have laying around and thought it might be fun to build a little travel router. Somehow that morphed into "let's get Tor working on here," and then, well, if I can get a client and a relay, why not an onion service?

Getting the Tor relay / proxy working was pretty easy. The entrypoint script is a little bit long because I wanted to allow for a fair bit of configuration flexibility in how the container is deployed.

You can find the container in the 'tor-relay' directory of my git repo.

I chose to also put polipo in a container to provide an HTTP proxy front-end. This made it pretty easy to get on the Tor network from a machine anywhere on my LAN. I even threw together a docker-compose.yml to bring up both the Tor client and polipo. You can find that in the tor-proxy-bundle directory of my git repo. Then I decided to go exploring, err, researching.
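
As a quick sanity check from another machine on the LAN, pointing curl at the polipo container should show your traffic exiting via Tor (the host name here is a placeholder and 8123 is just polipo's default listen port):

$ curl -x http://tor-proxy.local:8123 https://check.torproject.org/api/ip

If everything is wired up correctly the Tor check service should report that the request came from a Tor exit.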

Tor works!

The "Dark" Web

"Onion" services, often called "hidden" services, are addresses that end in .onion and allow operators to provide services over the Tor network without having to disclose their location or IP address. There is no centralized directory of these services like there is with DNS on the 'regular' Internet, so discovering what is out there is a bit tricky. After some searching I found that, much like the regular Internet, there are various directories of links and search engines available. The big difference in search engine tech is that they seem to start by crawling the regular Internet looking for .onion addresses; primed with that, they can start crawling and indexing links to other .onion addresses just like any Internet search engine would.

My favorite so far is Fresh Onions because I can sort by 'last seen' and just keep poking at whatever it crawled most recently. Things seem to come and go rather frequently on the Tor network; when I was looking at the link directories I kept finding that something like 60% of the links were dead, so this provided a better experience.

After a day of poking around I came to realize that, as with most technology, the general understanding of the dark web is pretty far from reality. The idea that the dark web is awash with black markets, hit men for hire, and hackers appears to be about as true as it was when people talked about the Internet back in the early 1990s. In fact, the reality is that the dark web even looks an awful lot like the 1990s Internet.

Some Examples

What a nice retriever we have here!

No really, that's the name of the site

There are a lot of sites like this, though this is probably the cutest... The HTML is very rudimentary, quite literally the minimum you need to get an image on the screen.

Placeholder, Placeholder, Everywhere.

Placeholder

Lots of placeholders too... Sometimes not even a page but an empty directory listing from a freshly configured webserver with nothing on it.

Under construction, but no .gif.. yet

Under Construction

If you remember the Internet of the 1990s you almost certainly ran across (or maybe even used) one of the many under construction animated gif images that were out there. While I have yet to see one of those pop up on the dark web, there are lots of pages that purport to be under construction.

Sign my Guestbook!

No really, please sign it!

If you remember the under construction GIF you probably also remember guestbooks. A rudimentary precursor to the blog comment box, these let visitors leave public notes for the site owner. Oftentimes these devolved into... well, what you might expect from an anonymous board where anyone can post anything...

Turns out those exist on the "Dark" web too.

"Dark" Thoughts

Design aside, the "anonymity" of the dark web is very similar to the feeling of the Internet back in the 1990s and early 2000s. Before advertisers could track you all across the Internet it had a "wild west" feel in places. There were lots of aliases (the hacker nom-de-plume or 'handle') and strange usernames (I was mee156 at one point thanks to a particularly uncreative corporate IT department), and often they were ephemeral. There were plenty of sites purporting to offer the same sort of potentially illegal (often fake) products and services attributed to the dark web, all because by and large you were anonymous (sorta). In a way, as someone who grew up in those early days, it is actually sort of heartening to see a bit of a renaissance so that maybe the kids today will get a shot at making some of the same mistakes I did and not have that follow them forever.

Hidden service in a (pair of) container(s)

Docker, Tor, Raspberry Pi

There are a lot of reasons people might run an onion service. Nefarious purposes aside, if you aren't just using it for research or as a way to provide a link back into your private network then you are probably concerned about anonymity. There have been a few good DEF CON talks about anonymity and security on Tor and how people often screw it up.

While not a silver bullet, it seems like putting your service and the Tor client that provides your .onion address into isolated containers is a reasonable first step towards operational security. By isolating the network to just the two containers you can reduce the attack surface and the information leaked if the service is compromised. You can also develop the service in isolation (say on your laptop) and then transport it to another machine to deploy it, providing an airgap. Beyond that, wrapping this into containers makes it simpler to deploy just about anywhere. You could even put them up on a public cloud provider (if you can get an anonymous account), or since this works on a Raspberry Pi you could hide the Pi somewhere other than your home or work and simply snag an open Ethernet port or WiFi network (obviously with permission from the owner...).

Similar to the proxy and relay stuff, you can see an example docker-compose.yml, hidden service client, and Apache instance over in my git repo (under onion-service-bundle, onion-service, apache-rpi respectively).
The example onion service that I have on my Pi right now is available here if you are interested.
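
If you want to poke at your own onion service from a box that already has a Tor client running, curl can talk straight to the SOCKS port (9050 is the default SocksPort; the onion address is obviously a placeholder for whatever hostname your hidden service generated):

$ curl --socks5-hostname 127.0.0.1:9050 http://youronionaddressgoeshere.onion/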

Conclusions?

Containers bring a lot of interesting possibilities to systems like Tor, where you are essentially creating an overlay network that you are then isolating and keeping largely ephemeral. The onion service keeps a little state (public/private key pairs) but for the most part there isn't anything that needs to be kept around between container runs. There are also other ways to create tunnel connections from inside a container to the world, opening up many different possibilities.

The other interesting thing is that while there are a lot of sites claiming to offer the kinds of services you read about in typical reports on "The Dark Web", the vast majority of what is out there is either legitimate attempts to provide anonymous services (eg: The New York Times via Tor, SecureDrop to pass sensitive tips to journalists, and publications or collections of written works like zines) or research / experimentation like the examples above (and my own test service). There is even a streaming radio service over Tor out there.

I think demystifying things helps normalize them. There are plenty of people who use Tor to be able to access the free and open Internet in ways that those of us in countries that don't censor the Internet take for granted, and people who live under regimes so oppressive that reading certain things or posting certain opinions can earn them real jail time. It is important for more people to use Tor for the usual everyday things, provide relays, and run onion services to ensure that the people who are under real threat have more noise to hide in.

Final Thoughts

Peeking around Tor onion services did leave me with one other piece of advice I'd like to pass along. If you have not already I'd urge you to do two things.

  1. Use a password manager. I generally recommend LastPass, but people I know and trust like 1Password as well. This is your best defense the next time some service gets breached and your data ends up out there (more often than not it is found on the regular Internet).
  2. Sign up for Troy Hunt's very good and free service Have I Been Pwned. This will alert you when your data has been found in a data breach.

🍻

September 27, 2018 @11:30

I was making some firewall changes last weekend and while watching the logs I discovered that every now and then some host would try to connect to 169.254.169.254 on port 80. This was peculiar since I don't use the IPv4 link local addresses anywhere in my network. It seemed to be happening randomly from all of my Linux hosts, both physical and virtual.

If the processes originated on my firewall, which is running OpenBSD, I'd be able to track down the process that was doing this by adding a more specific rule with "log (user)" to pf.conf(5), but it seems that Linux dropped this ability from Netfilter sometime back in the 2.6 time frame. 😢

The part that makes this a bit unique is that this is a connection that will certainly fail. It trips a rule in my firewall that blocks bogons outbound on my WAN interface, which means that the normal tools like netstat(1) and lsof(8) will only reveal anything if I somehow catch the process between the execution of the connect(2) system call and it failing. What I need to be able to do is log what is happening in real time, which I could do with something like strace(1), but I'd need to know the PID of the process and that is what I'm trying to find out.

So off I went looking for other things that might be helpful and stumbled upon the Linux kernel audit system. The audit system has been around for a while and lets you ask the kernel to communicate the details of syscalls as they happen. There is a filtering mechanism built in so that you don't end up dumping too much information or dramatically impacting performance and the raw data is sent to a userland program via a netlink socket. By default most distributions ship auditd, which listens on that netlink socket and dumps all the messages into a log.

Since I am looking at an attempted TCP connection, the connect system call is the one I am interested in. I don't know much else about it though, so it turns out a pretty simple filter rule is what I was looking for.

$ sudo auditctl -a exit,always -F arch=b64 -S connect

This asks the kernel to log, on exit of the syscall, any calls to connect(2). This immediately started flooding the audit log with entries like:

type=SYSCALL msg=audit(1538057732.986:13752): arch=c000003e syscall=42 success=yes exit=0 a0=12 a1=7ffc987e28c0 a2=6e a3=7f20abf93dd0 items=1 ppid=9803 pid=19584 auid=4294967295 uid=33 gid=33 euid=33 suid=33 fsuid=33 egid=33 sgid=33 fsgid=33 tty=(none) ses=4294967295 comm="apache2" exe="/usr/sbin/apache2" key=(null)
type=SOCKADDR msg=audit(1538057732.986:13752): saddr=01002F746D702F7...
type=CWD msg=audit(1538057732.986:13752): cwd="/"
type=PATH msg=audit(1538057732.986:13752): item=0 name="/tmp/passenger.PIT9MCV/agents.s/core" inode=1499146 dev=fe:01 mode=0140666 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
type=PROCTITLE msg=audit(1538057732.986:13752): proctitle=2F7573722F7362696E2F617...

OK, so I'm getting closer, but obviously some of the data is coming out in a packed hex format and the things I want aren't all on the same line, so I need to figure out how to decode this. While searching for the format of the messages in hopes of writing a quick and dirty parser I found ausearch(8), which includes the handy -i option.
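
Something along these lines will decode the records for a single syscall; -i interprets the numeric and hex fields into something human readable and -sc limits the results to one syscall (the exact flags may vary a bit between versions of the audit userspace tools):

$ sudo ausearch -i -sc connect --start recent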

I fired up tcpdump(8) on the pflog(4) interface and waited for one of the 169.254.169.254 packets to be dropped. That let me find what I was looking for in the audit log... the culprit.

ausearch and tcpdump to the rescue

It turns out it was a puppet agent run. Now, I know none of my modules try to talk to that address, but puppet does a lot of things, including running facter to get information about the system and the environment it is running in. I know some cloud infrastructure has standardized on that address as a location for guest agents to pick up metadata from, so I suspected some default module was trying to see if we are running on a cloud provider. A quick locate(1) and grep(1) later and it turns out that the built-in facter ec2 module does in fact try to pull metadata from 169.254.169.254.

apollo@10:16:49 1.8T ~ >locate facter
[ ... many paths elided for brevity ...]
/usr/lib/ruby/vendor_ruby/facter
[ ... many more paths ...]
apollo@10:16:53 1.8T ~ >grep -R 169.254.169.254 /usr/lib/ruby/vendor_ruby/facter/*
/usr/lib/ruby/vendor_ruby/facter/ec2/rest.rb:      DEFAULT_URI = "http://169.254.169.254/latest/meta-data/"
/usr/lib/ruby/vendor_ruby/facter/ec2/rest.rb:      DEFAULT_URI = "http://169.254.169.254/latest/user-data/"
/usr/lib/ruby/vendor_ruby/facter/util/ec2.rb:      url = "http://169.254.169.254:80/"
/usr/lib/ruby/vendor_ruby/facter/util/ec2.rb:  # GET request for the URI http://169.254.169.254/latest/user-data/  If the
/usr/lib/ruby/vendor_ruby/facter/util/ec2.rb:    uri = "http://169.254.169.254/#{version}/user-data/"

So in the end, the Linux audit system is our friend. There is a lot of other cool stuff in there; I ran across a post from the Slack engineering team that talks about how they use the audit system and how they leverage this information to alert on and challenge user actions in real time. It is also a cautionary tale that good network hygiene is important, since you never know what random things you might leak out onto the Internet (or your ISP's network) if you aren't careful.

🍻

September 16, 2018 @15:00

I installed one of the Mojave public betas last week on the Mac Mini I have in the office. I used it as an excuse to finally tweak a script I wrote for customizing macOS out of the box.

Mojave

I'll annotate it inline below; you can snag the original if it looks useful to you. The first hunk is just standard shell boilerplate. I tend to write POSIX shell and eschew any bash-specific nonsense for maximum compatibility.

#!/bin/sh
# install-macos (c) 2017-2018 Matthew J. Ernisse <matt@going-flying.com>
# All Rights Reserved.
#
# Customize a base macOS install.
#
# Redistribution and use in source and binary forms,
# with or without modification, are permitted provided
# that the following conditions are met:
#
#     * Redistributions of source code must retain the
#       above copyright notice, this list of conditions
#       and the following disclaimer.
#     * Redistributions in binary form must reproduce
#       the above copyright notice, this list of conditions
#       and the following disclaimer in the documentation
#       and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

set -e

I don't use iCloud, I run ownCloud instead. This just makes the directory I use to sync my files to/from.

create_owncloud_dir()
{
    if [ ! -d "$HOME/Documents/cloud" ]; then
        echo "Creating ownCloud directory"
        mkdir -p "$HOME/Documents/cloud"
    fi
}

Disable more things I don't use or care about. Touristd is a bit annoying because they can push more crap to bother you with down the line. This at least shuts it up after initial install.

disable_siri()
{
    echo "Disabling Siri"
    defaults write com.apple.Siri StatusMenuVisible -bool false
    defaults write com.apple.Siri UserHasDeclinedEnable -bool true
    defaults write com.apple.assistant.support 'Assistant Enabled' 0
}

disable_touristd()
{
    defaults write com.apple.touristd \
        seed-https://help.apple.com/osx/mac/10.13/whats-new \
        -date "$(date)"
}

Set a whole bunch of system preferences. Disable automatic spelling and unicode quote correction. Set my preferred Finder style (list view), and enable daily update check and installation. Also try to keep finder from crapping .DS_Store folders all over the network shares.

set_defaults()
{
    echo "Writing various macOS defaults"
    # I suspect I am missing some...
    defaults write NSGlobalDomain AppleKeyboardUIMode -int 3
    defaults write NSGlobalDomain AppleICUForce24HourTime -int 1
    defaults write NSGlobalDomain AppleAquaColorVariant -int 6
    defaults write NSGlobalDomain AppleInterfaceStyle "Dark"
    defaults write NSGlobalDomain \
        AppleMiniturizeOnDoubleClick -bool false
    defaults write NSGlobalDomain AppleShowScrollBars "Always"
    defaults write NSGlobalDomain ApplePressAndHoldEnabled -bool false
    defaults write NSGlobalDomain \
        NSAutomaticCapitalizationEnabled -int 0
    defaults write NSGlobalDomain \
        NSAutomaticDashSubstitutionEnabled -int 0
    defaults write NSGlobalDomain \
        NSAutomaticPeriodSubstitutionEnabled -int 0
    defaults write NSGlobalDomain \
        NSAutomaticQuoteSubstitutionEnabled -int 0
    defaults write NSGlobalDomain \
        NSAutomaticSpellingCorrectionEnabled -int 0
    defaults write NSGlobalDomain \
        NSAutomaticTextCompletionEnabled -int 1
    defaults write NSGlobalDomain NSCloseAlwaysConfirmsChanges -int 1

    defaults write com.apple.keyboard.fnState -int 1
    defaults write com.apple.screencapture disable-shadow -bool true
    defaults write com.apple.finder FXPreferredViewStyle -string '"Nlsv"'

    # Check for updates automatically, daily, and auto-install
    # security updates
    defaults write com.apple.SoftwareUpdate \
        AutomaticCheckEnabled -bool true
    defaults write com.apple.SoftwareUpdate ScheduleFrequency -int 1
    defaults write com.apple.SoftwareUpdate AutomaticDownload -int 1
    defaults write com.apple.SoftwareUpdate CriticalUpdateInstall -int 1

    # Don't shit .DS_Store all over the show.
    defaults write com.apple.desktopservices \
        DSDontWriteNetworkStores true
}

As it says, install my internal CA into the trust store.

# Install and trust my local CA.
install_ca()
{
    echo "Installing CA, you will be prompted for your password"
    local tmpfile=$(mktemp)
    curl --fail \
        --silent \
        --location \
        --insecure \
        --output $tmpfile \
        http://apollo.internal.ub3rgeek.net/ca/ub3rgeek_Internal_CA.pem

    security add-trusted-cert \
        -k "$HOME/Library/Keychains/login.keychain-db" \
        $tmpfile

    rm $tmpfile
}

This gets called to install homebrew and a bunch of applications. There is a nasty hack later as the script needs to be run with sudo, but homebrew won't work that way.

# I hate you so much homebrew for having a fucking trustmeprompt shell
# pipe to a thing installer.  Fuck you.
install_homebrew()
{
    echo "Installing Homebrew and sundry applications"
    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    brew analytics off
    brew install caskroom/cask/docker
    brew install caskroom/cask/firefox
    brew install caskroom/cask/iterm2
    brew install caskroom/cask/owncloud
    brew install caskroom/cask/visual-studio-code
    brew install caskroom/cask/vnc-viewer
    brew install caskroom/cask/vlc
#   broken?!
#   brew install caskroom/fonts/font-inconsolata
    brew install imagemagick
    brew install flake8
    brew install ffmpeg
    brew install fdk-aac-encoder
    brew install gnupg
    brew install gnutls
# this doesn't seem to function correctly.
#   brew install opensc
    brew install pass
    brew install tmux
    brew install telnet
    brew install pwgen
    brew install wget
}

Set up my default shell environment.

install_profile()
{
    echo "Installing .profile"
    curl --fail \
        --silent \
        --location \
        --output "$HOME/.profile" \
        "https://ssl.ub3rgeek.net/git/?p=misc.git;a=blob_plain;f=profile;hb=HEAD"
}

Set up autofs; this gets customized a bit for different sites. It also sets up the client to match my NFSv4 configuration.

create_automount()
{
    local nfs_server="tardis.internal.ub3rgeek.net"
    local shares="/vol/staff/mernisse /vol/backup /vol/media"
    echo "Setting up NFS automount."

    if [ ! -d "$HOME/Shares" ]; then
        mkdir $HOME/Shares
    fi

    if [ ! -f /etc/auto_nfs ]; then
        touch /etc/auto_nfs
        chown root:wheel /etc/auto_nfs
        chmod 644 /etc/auto_nfs
    fi

    for share in $shares; do
        if ! grep -q "nfs://$nfs_server$share" /etc/auto_nfs; then
            echo "$(basename $share)        nfs://$nfs_server$share" \
                >> /etc/auto_nfs
        fi
    done

    if ! grep -q "$HOME/Shares  auto_nfs" /etc/auto_master; then
        echo "$HOME/Shares  auto_nfs" >> /etc/auto_master
    fi

    if ! grep -q nfs.client.default_nfs4domain /etc/nfs.conf; then
        echo "nfs.client.default_nfs4domain = localdomain" >> /etc/nfs.conf
    fi

    automount -c
}

This just says hello and is called at the start of the script.

say_hello()
{
    local reset="\033[0m"
    local green="\033[32;1;m"
    local yellow="\033[33;1;m"
    local red="\033[31;1;m"
    local magenta="\033[35;1;m"
    local blue="\033[34;1;m"
    local cyan="\033[36;1;m"

    local hello=" ${green}H${yellow}e${red}l${magenta}l${blue}o${reset}"
    hello="${hello}, I am the Macintosh. "

    hello="${hello} $(sysctl -n hw.model)"
    hello="${hello} ${cyan}macOS $(sw_vers -productVersion)${reset}"

    printf "$hello\n"
}

The default uid/gid doesn't match my network so I change it here. This is why the script needs to be run as root. You do have to be careful with this since it can do wonky things to your login session once it does its thing.

update_uid_and_groups()
{
    if [ "$(id -u mernisse)" -eq 1000 ]; then
        return
    fi

    echo "Setting uid to 1000 and creating media group"
    dscl . -change $HOME UniqueID $SUDO_UID 1000
    dseditgroup -o create -i 1042 media
    dseditgroup -o edit -a mernisse media
    echo "Changing ownership of $HOME to reflect new uid"
    chown -R 1000 $HOME
}

This is the start of execution. It checks to see if you are running as root or doing the homebrew step.

# TODO:
# https://download.panic.com/transmit/Transmit%204.4.13.zip
# ublock origin?
if [ ! "$UID" -eq 0 ] && [ ! "$1" = "homebrew" ]; then
    echo "Please run this script with sudo(8)."
    exit 1
fi
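
In practice that means the whole thing gets kicked off under sudo and later re-invokes itself as the regular user for the homebrew step (assuming you saved it as install-macos):

$ chmod +x install-macos
$ sudo ./install-macos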

Catch the homebrew install which needs to be run as the user, not as root.

if [ "$1" = "homebrew" ]; then
    install_homebrew
    echo "Returning to sudo session..."
    exit
fi

Call all the stuff above.

say_hello
set_defaults
disable_siri
disable_touristd
install_ca
create_owncloud_dir

I replaced all the spinning disks with SSDs a while ago, so I don't need the sudden motion sensor.

echo "Disabling SuddenMotionSensor"
pmset -a sms 0

Some finder related things here. I don't like things being hidden.

echo "Unhiding /Volumes and ~/Library"
chflags nohidden ~/Library
chflags nohidden /Volumes

locate is a good thing to have.

echo "Enabling locatedb"
launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist
install_profile

This re-executes the script as the user that ran sudo. This is done to make homebrew happy.

echo "Dropping privs to $SUDO_USER to install homebrew"
echo "************************************************"
# This is a hack...
sudo -u $SUDO_USER $0 homebrew

Finally, change my UID and GID if needed.

# Do this late, because my window session will still have the old UID cached
# it gets... wonky.
update_uid_and_groups
create_automount

And that's it. There are still a few things I have to do manually, including installing a Configuration Profile with my WiFi and VPN settings, but it really reduces the amount of work that I have to do to get a new macOS install up and running. 👍 🥃

September 15, 2018 @16:40

For a while now I've used a Yubikey Neo as a PIV card to authenticate to my public facing hosts. This is fairly straightforward but requires a host with OpenSC on it. In my .profile I have a function called add_smartcard which will add the PIV driver to the ssh-agent. This means I actually authenticate with the key that was generated in the Yubikey and not my password.

Yubikey in my laptop

# add_smartcard - Add the PIV key to the current ssh-agent if available.
# Requires a opensc compatible smartcard and associated libs and binaries.
add_smartcard()
{
    # Set to a string in the opensc-tool(1) -l output for your card.
    local _card_name="Yubikey"

    # set to the installed location of the opensc libraries.
    # on OSX with HomeBrew this is /usr/local/lib
    local _lib_dir="/usr/local/lib"

    if ! quiet_which opensc-tool; then
        return
    fi

    if [ -z "$SSH_AUTH_SOCK" ]; then
        return
    fi

    if opensc-tool -l | grep -q "$_card_name"; then
        if ssh-add -l | grep -q opensc-pkcs11; then
            return
        fi

        ssh-add -s "$_lib_dir/opensc-pkcs11.so"
        return

    fi

    # If card is no longer present, remove the key.
    if ssh-add -l | grep -q opensc-pkcs11; then
        ssh-add -e "$_lib_dir/opensc-pkcs11.so" > /dev/null
    fi
}

Yubikey PIV Authentication

This is all well and good, but I wanted stronger authentication for scenarios where I'm not on one of my computers. I also wanted to ensure other users of my systems were protected since I can't force them to use PIV cards for authentication. I did a little research and found pam_oath, which supports both sequence-based and time-based one time passwords. This means it is compatible with the OTP profile on the Yubikey and with authenticator apps like Google Authenticator.

The parts I did with Puppet

The first part is pretty straightforward so I set up a Puppet module to do it for me. You need to have the PAM module installed, add it to the sshd pam.d policy, and update your sshd_config.

I am using Debian 9 but you should be able to adapt the following to most Puppet setups and distributions.

# Setup OATH (HOTP) modules/oath/manifests/init.pp
class oath {
  package { 'libpam-oath':
    ensure => latest,
  }

  package { 'oathtool':
    ensure => latest,
  }

  service { 'sshd':
  }

  file { '/etc/users.oath':
    ensure => present,
    owner => root,
    group => root,
    mode => '0600',
  }

  augeas { 'add pam_oath.so':
    context => "/files/etc/pam.d/sshd",
    changes => [
      'ins 01 after include[. = "common-auth"]',
      'set 01/type auth',
      'set 01/control required',
      'set 01/module pam_oath.so',
      'set 01/argument[last()+1] usersfile=/etc/users.oath',
      'set 01/argument[last()+1] window=20',
    ],
    onlyif => 'match /files/etc/pam.d/sshd/*/module[. = "pam_oath.so"] size == 0',
  }

  augeas { 'set ChallengeResponseAuthentication':
    context => '/files/etc/ssh/sshd_config',
    changes => [
      'set ChallengeResponseAuthentication yes',
    ],
    onlyif => 'match /files/etc/ssh/sshd_config/ChallengeResponseAuthentication != "yes"',
    notify => Service['sshd'],
  }
}

Things I didn't do with Puppet

The last thing you need is to initialize your shared secrets. I didn't want to do this with Puppet since I felt the need to control where the secrets were and minimize their exposure. The way I have pam_oath configured they will ultimately live in a file called /etc/users.oath. Make sure this is owned by root and has mode 0600. There are two examples that follow for creating the secret. One is for the OATH-HOTP mode used in the Yubikey; the other is for TOTP, which most authenticator apps use (I use the One Time Password feature built into the Hurricane Electric Network Tools app on iOS, but I tested this with Google Authenticator as well).
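
If you are creating the file by hand rather than letting Puppet manage it, that just looks like:

> touch /etc/users.oath
> chown root:root /etc/users.oath
> chmod 600 /etc/users.oath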

Create a secret for the Yubikey

> dd if=/dev/urandom count=1 bs=1k 2>/dev/null | sha256sum

Put the returned hexadecimal string into your Yubikey as the shared secret.

Yubico Personalization Tool

Create a secret for an authenticator app

> dd if=/dev/urandom count=1 bs=1k 2>/dev/null | sha256sum | cut -b 1-30
> oathtool --totp -v <hexadecimal key from above>

So if your randomly generated key is 01ab5d053493a266172b16248a8377 then you would see:

imladris@15:27:29 ~ >oathtool --totp -v 01ab5d053493a266172b16248a8377
Hex secret: 01ab5d053493a266172b16248a8377
Base32 secret: AGVV2BJUSORGMFZLCYSIVA3X
Digits: 6
Window size: 0
Step size (seconds): 30
Start time: 1970-01-01 00:00:00 UTC (0)
Current time: 2018-09-15 19:37:58 UTC (1537040278)
Counter: 0x30DC773 (51234675)

852412

I used the Python module qrcode, which includes a command line utility called qr, to generate the configuration QR code for the authenticator app. Using the above output of oathtool as an example, this is how I made the QR code.

qr "otpauth://totp/<user>@<host>?secret=AGVV2BJUSORGMFZLCYSIVA3X" > qr.png
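
If you don't already have the qr utility, it ships with the qrcode package; installing it with the [pil] extra (an assumption on my part, it pulls in Pillow so it can write PNGs) is enough:

> pip3 install 'qrcode[pil]'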

You can find more information about the otpauth:// uri format on Google's GitHub wiki.

users.oath

Regardless of which mode you are using you'll need to add the secret to the users.oath file (I am using the example secret from above).

# Yubikey
HOTP    user1   -   01ab5d053493a266172b16248a8377
# Authenticator App
HOTP/T30/6  user2   -   01ab5d053493a266172b16248a8377    

The second line automatically increments the code every 30 seconds. In both cases they will expect a 6 digit code but in the second form it is explicit.

SSH with One Time Password

You can have different users with different methods. There are more complex PAM methods available if you don't want everyone to be required to use MFA or key based authentication, for example this is a good writeup that includes group or host exclusions. I feel like that provides an attacker the ability to work around your MFA.

In the end this probably took me longer to write about than to actually do, and it has enhanced the security of my systems without any negative impact, with one exception.

I use Panic's Transmit from time to time and I'm cheap so I have not upgraded to 5.0 yet. It turns out that they don't support the OTP prompt (in 4.0 at least). You can use a custom SSH key, and I believe you can restrict that key to sftp only so I may look into that as a workaround.

In any case there is really no reason you can't secure your servers now.

🍻

Edited: September 04, 2018 @23:30

So, I mentioned a while back that I watch Acquisitions Inc on the yubtubs. Well through there I also started watching Dice, Camera, Action. During the Stream of Many Eyes event there was a DCA episode featuring Travis McElroy and that reminded me of the fact that I have had The Adventure Zone languishing away on my iPhone for a while now, un-listened to. Now I'm pretty terrible about keeping up with podcasts (there is so much good stuff out there to listen to and watch these days) so I just wanted to toss out a few words about what happened next.

The Adventure Zone: Balance

I started listening on July 23rd (according to Podcasts.app) and have basically been blowing through several episodes per day since. There are something north of 90 episodes in the feed and I am within 20 of being current. If that isn't a glowing enough review to interest you then here is a brief synopsis.

The Adventure Zone starts out as a comedy roundtable / actual-play-ish podcast as three brothers and their father take a stab at playing a starter module for D&D. It's light and funny and clearly a learning experience for all involved. I stuck with it, and after the first story 'Here There Be Gerblins' wraps up the first hints of what it will become are unveiled. By the end of the Balance adventure I was hooked. I very literally laughed, cried, and cheered aloud while listening. The production quality went through the roof somewhere about a third of the way through and the story very quickly left the known world and became something special and unique unto itself. The chemistry of the McElroy family is something delightful to behold and they do an amazing job of morphing D&D into something more consumable in podcast form. It really focuses on the collaborative storytelling aspect, with often hilarious and serious consequences from perpetually unpredictable dice rolls. I think almost anyone would take a shine to the Tres Horny Boys (as they named their group, accidentally).

As I said, I am not entirely caught up; I am just up to the start of the new 'season', which was preceded by several short story arcs written by each of the McElroys using different RPG systems and settings as they tried to figure out what they wanted to do for 'season two', but all of the mini-stories so far have been really, really enjoyable.

The Adventure Zone: Amnesty

I implore anyone who might be reading this to check this out.

Apple Podcasts, RSS Feed

Edit:

I also forgot to mention that the music in this podcast is completely off the chain.

September 02, 2018 @12:45

Recently I had a rental VW with the fancy new radio in it and I figured I'd give CarPlay a shot.

Welp.

Welp, I guess I won't be needing that feature when I buy a new car.

August 29, 2018 @09:20

I've been stewing about this for a while and have not yet found an alternative so this is part rant part dear lazyweb plea.

Goodbye, Sonos.

Sonos recently released the 9.0 version of their software, which now requires you to have a Sonos account. I have zero desire to sign up for an account or be in a situation where my home stereo equipment needs to connect to the Internet just to work, so I'm actively looking to replace all the Sonos equipment in my home with something else. At the moment the leading idea is to just sprinkle Bluetooth speakers around the house. Since you need to use a phone or tablet to control the Sonos system anyway, I don't see any real drawback to just streaming Bluetooth audio directly to a speaker.

Honestly since they never got AirPlay or the Android audio streaming equivalent working (for no clear reason since both have been available on Raspberry Pis for YEARS now), nor did they ever support anything other than optical Dolby Digital on the Play:Bar and Play:Base TV speaker products, and since their controller applications just keep getting worse and worse, I am not sad about leaving them. For me, the only nice thing about their hardware that I have found over the years that is missing from most modern network speakers is the inclusion of Ethernet.

So, dear lazyweb, if anyone out there has an idea for a replacement that doesn't need the cloud to provide base functionality, I'd be interested in hearing about it.

🔊 🍸

August 27, 2018 @17:10

For a long time now the core of my ad blocking strategy has been squid and privoxy running on my OpenBSD routers. Mobile devices VPN into the network and receive a proxy.pac which routes all traffic to these proxies, which reject connections to known ad hosts. With the growing adoption of HTTPS (thankfully) privoxy is becoming less and less useful, so I have been trying to find better ways to block ads at the network level.

I'm not going to get into the ethics of ad blocking, it's my choice to make but I will leave this here.

Tay Tay says block ads (source)

Around the same time CloudFlare announced 1.1.1.1, a privacy focused anycast DNS service. I've been using the Level 3 anycast DNS resolvers for a while now but that's not exactly optimal. With CloudFlare's resolvers you get not only a geographically distributed DNS resolver cluster but DNS-over-TLS and DNS-over-HTTPS support.

Now I run ISC BIND for resolvers, which at this point doesn't support either encrypted DNS method. I do support and validate DNSSEC but that doesn't keep people from eavesdropping on me.

Enter unbound

For a while now OpenBSD has had unbound as the recursive resolver in the base installation, so I've been aware of it and trust it. Since I do both recursive and authoritative DNS on the same servers I have not had a reason to introduce it. Until CloudFlare.

I added the unbound packages to my DNS server's puppet manifest so the default Debian package got installed. I then added the following configuration to /etc/unbound/unbound.conf.d/cloudflare.conf. Since I'm going to have BIND actually listen to and respond to queries from clients, I bind unbound only to localhost (::1 is the IPv6 loopback address) and listen on a non-standard DNS port (5300, since it was open and semi-obvious). This does mean that I have two layers of cache to worry about if I need to clear the DNS cache for any reason, but I almost never have to do that so I will worry about it later.

unbound configuration

# This file is managed by Puppet.
#
# Forward DNS requests to CloudFlare using DNS over TLS.
server:
    verbosity: 1
    use-syslog: yes
    do-tcp: yes
    prefetch: yes
    port: 5300
    interface: ::1
    do-ip4: yes
    do-ip6: yes
    prefer-ip6: yes
    rrset-roundrobin: yes
    use-caps-for-id: yes
forward-zone:
    name: "."
    forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com
    forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
    forward-ssl-upstream: yes

I then switched the forwarders section of my named.conf from:

    forwarders {
        4.2.2.2;
        4.2.2.1;
    };

to:

    // Unbound listens on [::1]:5300 and forwards to CloudFlare
    forwarders {
        ::1 port 5300;
    };

After letting puppet apply the new configuration I checked the outbound WAN interface of my router with tcpdump(8) and verified that all DNS resolution was heading off to CloudFlare.
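
For the curious, the check amounts to something along these lines; the interface name is whatever your WAN-facing interface happens to be:

    # With everything working, no plain port 53 traffic should leave the
    # WAN interface; only TLS on port 853 toward CloudFlare should show up.
    tcpdump -ni eth0 'port 53 or port 853'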

Adding ad blocking

unbound(8) has a really nice feature where you can override recursion fairly easily, which can be leveraged to block malicious sites at the DNS layer. I found a couple of lists to plug in that have worked really well for me so far.

The first one is a malware block list that is already provided in the unbound config format. So I just used puppet-vcsrepo to ensure an up-to-date copy is always checked out in /usr/local/etc/unbound/blocks. I was then able to add include: "/usr/local/etc/unbound/blocks/blocks.conf" to the server: section of my unbound config.
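
Whenever the checkout updates and the include changes, it is worth letting unbound validate the whole configuration before restarting it. A minimal sketch, using the default Debian paths:

    # Parse the full configuration (including the block list) and only
    # restart unbound if it is valid.
    unbound-checkconf && systemctl restart unbound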

Since I also wanted ad blocking I continued my search and came across Steven Black's curated list, which consolidates a number of different sources into a hosts.txt format file. Since this isn't exactly the format unbound wants I had to do a little more work.

  1. Checked that repository out with puppet-vcsrepo into /usr/local/etc/unbound/stevenblack.
  2. Wrote the script below to convert the list format from a hosts file to an unbound configuration file.
  3. Configured puppet to exec that script when the vcsrepo pulls an update and then notify (restart) the unbound service.
  4. Added include: "/usr/local/etc/unbound/stevenblack.conf" to my unbound configuration.

unbound-blacklist script

#!/bin/sh
# unbound-blacklist (c) 2018 Matthew J Ernisse <matt@going-flying.com>
#
# Generate an unbound style config from a hosts list.

set -e

SRC="/usr/local/etc/unbound/stevenblack/hosts"
OUTPUT="/usr/local/etc/unbound/stevenblack.conf"


if [ ! -f "$SRC" ]; then
    echo "Could not open $SRC"
    exit 1
fi

awk '/^0\.0\.0\.0/ {
    print "local-zone: \""$2"\" redirect"
    print "local-data: \""$2" A 0.0.0.0\""
}' "$SRC" > "$OUTPUT"

The entire puppet manifest for the unbound configuration is as follows. It is included by the rest of the manifests that set up BIND on my name servers.

unbound Puppet manifest

# Unbound - This is the caching recursor.  Uses DNS-over-TLS
# to CloudFlare to provide secure and private DNS resolution.
class auth_dns::unbound {
    package { 'unbound':
        ensure => latest,
    }

    service { 'unbound':
        ensure => running,
    }

    file { '/etc/unbound/unbound.conf.d/cloudflare.conf':
        source => 'puppet:///modules/auth_dns/unbound.conf',
        owner => 'root',
        group => 'root',
        mode => '0644',
        require => [
            Package['unbound'],
        ],
        notify => [
            Service['unbound'],
        ],
    }

    exec { 'rebuild unbound blacklist':
        command => '/usr/bin/unbound-blacklist',
        refreshonly => true,
        require => [
            Package['unbound'],
            File['/usr/bin/unbound-blacklist'],
            Vcsrepo['/usr/local/etc/unbound/stevenblack'],
        ],
        notify => Service['unbound'],
    }

    file { '/usr/bin/unbound-blacklist':
        ensure => present,
        source => 'puppet:///modules/auth_dns/unbound-blacklist',
        owner => 'root',
        group => 'root',
        mode => '0755',
    }

    file { '/usr/local/etc/unbound':
        ensure => directory,
        owner => 'root',
        group => 'root',
        mode => '0755',
    }

    vcsrepo { '/usr/local/etc/unbound/blocks':
        ensure => present,
        provider => git,
        source => 'https://github.com/k0nsl/unbound-blocklist.git',
        revision => 'master',
        require => [
            Package['unbound'],
            File['/etc/unbound/unbound.conf.d/cloudflare.conf'],
            File['/usr/local/etc/unbound'],
        ],
        notify => Service['unbound'],
    }

    vcsrepo { '/usr/local/etc/unbound/stevenblack':
        ensure => present,
        provider => git,
        source => 'https://github.com/StevenBlack/hosts.git',
        revision => 'master',
        require => [
            Package['unbound'],
            File['/etc/unbound/unbound.conf.d/cloudflare.conf'],
            File['/usr/local/etc/unbound'],
        ],
        notify => Exec['rebuild unbound blacklist'],
    }
}

Conclusion

So far it feels like a lot of things load faster. I am noticing fewer requests being blocked by privoxy and squid, to the point that I'm thinking I may be able to deprecate them completely. It is also nice that devices on the network that don't honor proxy.pac files are now protected from malware and malvertising as well.
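
A quick way to spot-check the whole chain is to resolve something that appears in the block list and make sure 0.0.0.0 comes back, both from unbound directly and through BIND. A sketch, with ads.example.com standing in for a real entry and 192.0.2.53 standing in for one of my name servers:

    # Ask unbound directly on its non-standard port...
    dig -p 5300 @::1 ads.example.com A +short
    # ...and through BIND the way clients on the network would.
    dig @192.0.2.53 ads.example.com A +short
    # Both should print 0.0.0.0 for anything on the block list.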

🍺

August 26, 2018 @11:30

iPictureFrame and XCode

I know I'm not 'average' when it comes to my opinions about technology. I imagine this has to do with growing up with technology that was much simpler than it is today. Compared to modern software and hardware, the NEC PowerMate 286 running DOS 6.0 that I learned to program on was extremely simple. Not that it wasn't capable, but it made no attempt to hide things from you. You had direct access to the hardware, all the memory, and all the peripheral I/O space. You could completely control the system, and even understand exactly what was going on.

Not today.

Don't get me wrong, this isn't a bad thing. The protections in modern operating systems are required for the interconnected (and hostile) world we live in. Computers are also powerful enough that you can afford to give the user an API instead of direct access to the hardware (with all the risks that come along with that). The real problem I have is when vendors decide to lock down their consumer hardware to prevent the user from running whatever software they would like on it.

I could easily go off on a rant about Android devices with locked boot loaders, or "smart" TVs with unnecessary, non-removable, poorly supported, and underpowered guts, or any of a myriad of unfortunate decisions manufacturers are making these days. But that's not what has been bugging me lately. I, like many people if Apple's quarterly filings and trillion dollar valuation are to be believed, have spent a fair amount of money on iOS powered hardware. I expect that when I buy a thing I can basically do whatever I want with it. Now I really do love the security-by-default stance of iOS, but I also firmly believe that as the owner of the device, if I want to shoot myself in the foot, I should be allowed to peel off the "warranty void if removed" sticker and fire away.

Fucking Apple...

Of course the worst part is that it's not that I can't run my own code on my iOS devices at all. If I have a Mac, install XCode, and sign up for an Apple Developer account, then for 6 days at a time I can run something I wrote on the thing I bought. To be clear, I'm 100% fine with that being the App Store development experience; what I want, however, is to write code for my own personal use on my own personal devices. I don't want any of this software transmitted to Apple to be put on the store, or sent in binary form to another person. All I want is to run my own stuff on things I own.

Now I do understand that my particular use-case might be a bit outside the middle of the bell curve, but I don't think it's an unreasonable expectation. I would also point out that if you want to encourage people to learn to code, it might be a good idea to let them actually run their code and live with it before trying to make a buck off of it. In this world of launch early, release often, and fix it in a patch release, we really do need more people who are used to living with the choices they make. In my case I wrote a silly streaming audio player to help me fall asleep at night; it requires a fair amount of infrastructure behind it, so I would never distribute it as a compiled binary, but I'd really like not to have to reload it on my device every 6 days. Similarly, I have an iPad 1 and an iPad 2 that are basically useless but would make nice digital picture frames... if only I could run the app that I wrote for more than a few days without having to reload the code on them.

If anyone out there at Apple is listening, I'd really like a way to make my iOS devices trust my internal CA for code signing. Is that really so much to ask?

🍻

Subscribe via RSS. Send me a comment.