Matthew Ernisse

August 27, 2018 @17:10

For a long time now the core of my ad blocking strategy has been squid and privoxy running on my OpenBSD routers. Mobile devices VPN into the network and receive a proxy.pac which routes all traffic through these proxies, which reject connections to known ad hosts. With the growing adoption of HTTPS (thankfully), privoxy is becoming less and less useful, so I have been trying to find better ways to block ads at the network level.

I'm not going to get into the ethics of ad blocking; it's my choice to make, but I will leave this here.

Tay Tay says block ads (source)

Around the same time CloudFlare announced 1.1.1.1, a privacy focused anycast DNS service. I've been using the Level 3 anycast DNS resolvers for a while now but that's not exactly optimal. With CloudFlare's resolvers you get not only a geographically distributed DNS resolver cluster but DNS-over-TLS and DNS-over-HTTPS support.

Now I run ISC BIND for my resolvers, which at this point supports neither encrypted DNS method. I do support and validate DNSSEC, but that doesn't keep people from eavesdropping on me.
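For reference, enabling validation in modern BIND is a one-line option; this is just a sketch of the relevant fragment, not my full named.conf:

```
options {
    dnssec-validation auto;
};
```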

Enter unbound

For a while now OpenBSD has shipped unbound as the recursive resolver in the base installation, so I have been aware of it and trust it. Since I do both recursive and authoritative DNS on the same servers I have not had a reason to introduce it. Until CloudFlare.

I added the unbound package to my DNS servers' puppet manifest so the default Debian package got installed, then added the following configuration as /etc/unbound/unbound.conf.d/cloudflare.conf. Since BIND is still going to listen for and answer queries from clients, unbound binds only to localhost (::1 is the IPv6 loopback address) and listens on a non-standard DNS port (5300, since it was open and semi-obvious). This does mean that I have two layers of cache to worry about if I ever need to clear the DNS cache, but I almost never have to do that so I will worry about it later.

unbound configuration

# This file is managed by Puppet.
#
# Forward DNS requests to CloudFlare using DNS over TLS.
server:
    verbosity: 1
    use-syslog: yes
    do-tcp: yes
    prefetch: yes
    port: 5300
    interface: ::1
    do-ip4: yes
    do-ip6: yes
    prefer-ip6: yes
    rrset-roundrobin: yes
    use-caps-for-id: yes
forward-zone:
    name: "."
    forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com
    forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
    forward-ssl-upstream: yes

I then switched the forwarders section of my named.conf from:

    forwarders {
        4.2.2.2;
        4.2.2.1;
    };

to:

    // Unbound listens on [::1]:5300 and forwards to CloudFlare
    forwarders {
        ::1 port 5300;
    };
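One gotcha worth checking here: BIND's default forwarding mode is "forward first", which means it will happily fall back to doing its own iterative resolution (in the clear) if the forwarder fails to answer. To force everything through unbound, the options section needs forward only; a sketch:

```
options {
    forward only;
    forwarders {
        ::1 port 5300;
    };
};
```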

After letting puppet apply the new configuration I checked the outbound WAN interface of my router with tcpdump(8) and verified that all DNS resolution was heading off to CloudFlare.

Adding adblocking

unbound(8) has a really nice feature where you can override recursion fairly easily. This can be leveraged to block malicious sites at the DNS layer. I found a couple of lists that I was able to plug in, and so far they have worked really well for me.

The first one is a malware block list that is already provided in the unbound config format. So I just used puppet-vcsrepo to ensure an up-to-date copy is always checked out in /usr/local/etc/unbound/blocks. I was then able to add include: "/usr/local/etc/unbound/blocks/blocks.conf" to the server: section of my unbound config.

Since I also wanted ad blocking I continued my search and came across Steven Black's curated list, which consolidates a number of different sources into a hosts.txt-format file. Since this isn't exactly the format unbound wants, I had to do a little more work.

  1. Checked that repository out with puppet-vcsrepo into /usr/local/etc/unbound/stevenblack.
  2. Wrote the script below to convert the list format from a hosts file to an unbound configuration file.
  3. Configured puppet to exec that script when the vcsrepo pulls an update and then notify (restart) the unbound service.
  4. Added include: "/usr/local/etc/unbound/stevenblack.conf" to my unbound configuration.
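Putting both lists together, the server: section of the unbound configuration ends up carrying two include lines (paths as above):

```
server:
    # ... settings from cloudflare.conf ...
    include: "/usr/local/etc/unbound/blocks/blocks.conf"
    include: "/usr/local/etc/unbound/stevenblack.conf"
```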

unbound-blacklist script

#!/bin/sh
# unbound-blacklist (c) 2018 Matthew J Ernisse <matt@going-flying.com>
#
# Generate an unbound style config from a hosts list.

set -e

SRC="/usr/local/etc/unbound/stevenblack/hosts"
OUTPUT="/usr/local/etc/unbound/stevenblack.conf"


if [ ! -f "$SRC" ]; then
    echo "Could not open $SRC"
    exit 1
fi

awk '/^0\.0\.0\.0/ {
    print "local-zone: \""$2"\" redirect"
    print "local-data: \""$2" A 0.0.0.0\""
}' "$SRC" > "$OUTPUT"

The entire puppet manifest for the unbound configuration is as follows. It is included by the rest of the manifests that setup BIND on my name servers.

unbound Puppet manifest

# Unbound - This is the caching recursor.  Uses DNS-over-TLS
# to CloudFlare to provide secure and private DNS resolution.
class auth_dns::unbound {
    package { 'unbound':
        ensure => latest,
    }

    service { 'unbound':
        ensure => running,
    }

    file { '/etc/unbound/unbound.conf.d/cloudflare.conf':
        source => 'puppet:///modules/auth_dns/unbound.conf',
        owner => 'root',
        group => 'root',
        mode => '0644',
        require => [
            Package['unbound'],
        ],
        notify => [
            Service['unbound'],
        ],
    }

    exec { 'rebuild unbound blacklist':
        command => '/usr/bin/unbound-blacklist',
        refreshonly => true,
        require => [
            Package['unbound'],
            File['/usr/bin/unbound-blacklist'],
            Vcsrepo['/usr/local/etc/unbound/stevenblack'],
        ],
        notify => Service['unbound'],
    }

    file { '/usr/bin/unbound-blacklist':
        ensure => present,
        source => 'puppet:///modules/auth_dns/unbound-blacklist',
        owner => root,
        group => root,
        mode => '0755',
    }

    file { '/usr/local/etc/unbound':
        ensure => directory,
        owner => root,
        group => root,
        mode => '0755',
    }

    vcsrepo { '/usr/local/etc/unbound/blocks':
        ensure => present,
        provider => git,
        source => 'https://github.com/k0nsl/unbound-blocklist.git',
        revision => 'master',
        require => [
            Package['unbound'],
            File['/etc/unbound/unbound.conf.d/cloudflare.conf'],
            File['/usr/local/etc/unbound'],
        ],
        notify => Service['unbound'],
    }

    vcsrepo { '/usr/local/etc/unbound/stevenblack':
        ensure => present,
        provider => git,
        source => 'https://github.com/StevenBlack/hosts.git',
        revision => 'master',
        require => [
            Package['unbound'],
            File['/etc/unbound/unbound.conf.d/cloudflare.conf'],
            File['/usr/local/etc/unbound'],
        ],
        notify => Exec['rebuild unbound blacklist'],
    }
}

Conclusion

So far it feels like a lot of things load faster. I am noticing fewer requests being blocked by privoxy and squid, to the point that I think I may be able to completely deprecate them. It is also nice that devices on the network that don't honor proxy.pac files are now protected from malware and malvertising as well.

🍺

August 26, 2018 @11:30

iPictureFrame and Xcode

I know I'm not 'average' when it comes to my opinions about technology. I imagine this has to do with growing up with technology that was much simpler than it is today. Compared to modern software and hardware, the NEC PowerMate 286 running DOS 6.0 that I learned to program on was extremely simple. Not that it wasn't powerful, but it didn't have any designs to hide things from you. You had access to the hardware directly, and all the memory, and all the peripheral I/O space. You were able to completely control the system, and to understand exactly what was going on.

Not today.

Don't get me wrong, this isn't a bad thing. The protections in modern operating systems are required for the interconnected (and hostile) world we live in. Computers are also powerful enough that you can afford to give the user an API instead of direct access to the hardware (with all the risks that come along with that). The real problem I have is when vendors decide to lock down their consumer hardware to prevent the user from running whatever software they would like on it.

I could easily go off on a rant about Android devices with locked boot loaders, or "smart" TVs with their unnecessary, non-removable, often poorly supported, and underpowered guts, or a myriad of other unfortunate decisions manufacturers are making these days. But that's not what has been bugging me lately. I, like many people if Apple's quarterly filings and trillion dollar valuation are to be believed, have spent a fair amount of money on iOS powered hardware. I expect that when I buy a thing I can basically do whatever I want with it. Now I really do love the security-by-default stance of iOS, but I also believe firmly that as the owner of the device, if I want to shoot myself in the foot, I should be allowed to peel off the "warranty void if removed" sticker and fire away.

Fucking Apple...

Of course the worst part is that it's not that I'm entirely forbidden from running my own code on my iOS devices. If I have a Mac, install Xcode, and sign up for an Apple Developer account, then for 6 days at a time I can run something I wrote on the thing I bought. To be clear, I'm 100% fine with that being the App Store development experience; however, what I want to do is write code for my own personal use on my own personal devices. I don't want any of this software to be transmitted to Apple to be put on the store, or sent in binary form to another person. All I want to do is run my own stuff on things I own.

Now I do understand that my particular use-case might be a bit outside the middle of the bell curve, but I think this is an expectation that isn't unreasonable. I would also point out that if you want to encourage people to learn to code, it might be a good idea to let them actually run their code, and live with it before trying to make a buck off it. In this world of launch early, release often and fix it in a patch release we really do need more people who are used to living with the choices they make. In my case I wrote a silly streaming audio player to help me fall asleep at night that requires a fair amount of infrastructure behind it, so I would never distribute it as a compiled binary, but I'd really like to not have to reload it on my device every 6 days. Similarly I have an iPad 1 and an iPad 2 that are basically useless but would make nice digital picture frames... if only I could run the app that I wrote for more than a few days without having to reload the code on them.

If anyone out there at Apple is listening, I'd really like a way to make my iOS devices trust my internal CA for code signing. Is that really so much to ask?

🍻

August 25, 2018 @12:00

UniFi Switches in the NMS

Since I installed the first bits of the Ubiquiti UniFi family of products in my network I have been impressed. They have never failed to meet my expectations. I have written several articles about some rather advanced configuration and implementation details:

And of course I have written several generally glowing reviews about the product line.

The only thing I've been missing is the UniFi switch products. Until recently I have not had a burning need for switches; I had plenty and they worked fine. However, much like the story of the UniFi USG that replaced a wonky MikroTik RouterBoard, I started to have problems with the switch in my detached garage.

Old Garage Linksys

I did actually look around a bit when choosing a switch. There are 3 devices in the garage powered by PoE that I really wanted the switch to drive. The Linksys did 802.3af PoE, but two of the 3 devices use passive PoE, which isn't compatible, so I wanted to either find a dual-mode switch or a source of 802.3af-to-passive-PoE adapters.

Well, the UniFi Switch 8-150W fit the bill perfectly. It has the ability to provide 802.3af, 802.3at (PoE+), and passive PoE over its 8 RJ-45 ports and sports 2 SFP cages as well. As with all the other UniFi stuff installation and configuration was a breeze (in fact I did it using the iOS app while standing in my garage during a downpour), and it has been completely problem free since installation (despite several 100+ degree days where I'm sure the poor thing has roasted).

New Garage USW8-150W

In fact, it worked so well that I kept poking around the UniFi switch line and discovered that there was another switch that scratched a rather odd itch.

At the office, the LAN is provided by the landlord. It is a combination of cable Internet and a bunch of Cisco Meraki gear. I have a PoE+ powered jack in my office that I use for my personal equipment but have not been able to take advantage of it, so I had an unsightly mess of power cables and PoE injectors hanging about. It turns out the UniFi Switch 8 (USW8) has just the thing: port 1 can consume PoE and power the device while port 8 can provide PoE to a downstream device. I was able to eliminate a bunch of crap by dropping in one of these, as it is powered from the Meraki switch and in turn powers the downstream access point.

USW8 at work

I think it all came out rather smart. Chalk up another well designed product from Ubiquiti. I actually have another USW8 sitting and waiting to be deployed at home, but I have several holes to cut before that goes in.

🍻

July 25, 2018 @20:00

UniFi Security Gateway in the NMS

A couple days ago I wrote a bit about setting up a new Ubiquiti UniFi Security Gateway, and after living with it for a bit I have a few additional notes.

/config/user-data is preserved through resets

I'm not exactly sure why this happened, but I fat-fingered the JSON and during a provisioning cycle the USG wiped the certificates from /config/auth (where it seems to want you to put them). While rebuilding I noticed that /config/user-data does not get wiped. The restore-default command seems to have a set -x in it somewhere and emits this when you run it:

mernisse@ubnt:~$ set-default
+ cmd=restore-default
+ shift
+ case $cmd in
+ exit_if_fake restore-default
++ uname -a
++ grep mips
+ '[' 'Linux ubnt 3.10.20-UBNT #1 SMP Fri Nov 3 15:45:37 MDT 2017 mips64 GNU/Linux' = '' -o -f /tmp/FAKE ']'
+ exit_if_busy restore-default
+ '[' -f /var/run/system.state ']'
++ cat /var/run/system.state
+ state=ready
+ '[' ready '!=' ready ']'
+ state_lock
+ lockfile /var/run/system.state
+ TEMPFILE=/var/run/system.state.4478
+ LOCKFILE=/var/run/system.state.lock
+ ln /var/run/system.state.4478 /var/run/system.state.lock
+ rm -f /var/run/system.state.4478
+ return 0
+ echo 120
+ echo 3
+ rm -f /config/mgmt
+ apply_restore_default
++ cut -c -8
++ echo 7080 27092 31310 11976 31941
++ /usr/bin/md5sum
+ local R=eb2c7606
+ prune_old_config
+ find /root.dev/ -type d -iname 'w.????????' -exec rm -rf '{}' ';'
+ rm -f /config/config.boot
+ rm -f /config/unifi
+ rm -f /config/auth/ca.crt /config/auth/server.crt /config/auth/server.key
+ mv /root.dev/w /root.dev/w.eb2c7606
+ state_unlock
+ /bin/rm -f /var/run/system.state.lock
+ reboot

I made a copy of the certificates for the VPN in /config/user-data to ensure that if this happens again I can simply copy them back into place.

You can load a local JSON config file

The core of the UniFi system is the integration with the NMS; otherwise it would just be an EdgeRouter LITE. It appears that the provisioning process merges the controller's configuration with your config.gateway.json file and sends the result to the device. The downside is that you can't just push the JSON down to the USG yourself; it needs the entire rendered payload. Luckily you do have access to the underlying commands to import and export the configuration.

Once you have the USG up and working you can backup the JSON from the ssh console by running:

mca-ctrl -t dump-cfg > /config/user-data/backup.json

If for some reason the configuration gets messed up and you can no longer talk to the controller because the VPN is down you can simply reload it with:

mca-ctrl -t apply -c /config/user-data/backup.json

All in all I'm still happy with it, minus two things that I've sent to Ubiquiti using their feedback form:

  1. Would really like to have the PEM encoded certificates in the config.gateway.json. This would certainly help if you need to reload the device.
  2. Would like to have a checkbox to bridge eth1 and eth2. Almost everything at the office is wireless, but I do have a Synology NAS that I want wired, thankfully the UniFi UAP-AC-IW that is there has a built in 2 port switch but if I wanted to use a different AP it seems like it would be really handy to be able to easily use the WAN 2 port as a switched LAN port.

🍺 👍

July 20, 2018 @16:45

Background

I have several physical locations linked together with VPN tunnels. The central VPN server runs OpenBSD with iked(8). I also have several roaming clients (iOS and macOS) that terminate client access tunnels to this system, so I am loath to make breaking changes to it. The site to site tunnels run a gif(8) tunnel in IP-over-IP mode to provide a layer 3 routable interface on top of the IKEv2 tunnel. My internal tunnels run ospfd(8) and ospf6d(8) to exchange routes and my external site to site tunnels run bgpd(8). Most of my internal sites use OpenBSD as endpoints so configuration is painfully simple; however, in my office at work I have been using a MikroTik RouterBoard RB951-2HnD. This has worked well enough, but lately it has been showing its age, randomly requiring manual intervention to re-establish tunnels and flirting with periods of unexplainable high latency.

Old Work Network
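For the curious, the OpenBSD end of one of those gif(8) tunnels is only a few lines of hostname.if(5) config. This sketch assumes the addressing from the USG configuration later in this post (the outer addresses are the IPsec endpoint loopbacks, and .197 is my guess at the OpenBSD side of the inner /30):

```
# /etc/hostname.gif0 (sketch)
# outer IP-over-IP endpoints carried inside the IKEv2 tunnel
tunnel 172.16.197.32 172.16.197.96
# inner point-to-point /30 the routing daemons speak over
inet 172.17.197.197 255.255.255.252 172.17.197.198
up
```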

Notes

This is not meant to be a comprehensive HOWTO. I doubt your setup will be close enough to mine to translate directly but hopefully you will find some useful information since this isn't a particularly well documented use case for the Ubiquiti UniFi USG product line.

It is also worth noting that under the covers the USG runs the same EdgeOS as their EdgeRouter line of products, with the caveat that the controller will overwrite the configuration any time it provisions the device. Fortunately Ubiquiti has foreseen this and provides a way to supply advanced configuration via a JSON file on the controller.

I manage all of my sites from a centralized UniFi controller instance, so I need the VPN to work before I can swap out the RouterBoard for the USG. This is an overview of how I did that.

Pre-Deployment

Since I already had a working VPN tunnel at the site, I already had all the X.509 certificates and IP addresses needed to configure the new router. Starting at home, where the controller is located, I plugged the USG WAN port into my LAN and connected my laptop to the USG LAN port. I was able to adopt the gateway into the controller with no trouble.

I fiddled around with the config until I got it working and stuffed the changes into the config.gateway.json file. Finally I blew the device away and forgot it from the controller. It is important at this point to reload the certificates onto the factory-defaulted router (put them in /config/auth) before adopting the gateway in the controller; if it cannot find the files, the gateway will go into a reboot loop much the same way as if you had typoed the config.gateway.json file. Once the certificates were loaded, I re-adopted the gateway and the configuration was applied.

I was then able to take it into work and swap out the MikroTik.

Configuration

I will simply annotate the config.gateway.json file inline to explain how this all ended up going together.

{
    "service": {
        "dns": {
            "forwarding": {
                "options": [
                    "domain=work.ub3rgeek.net"
                ]
            }
        },

Set the DNS domain name handed out by the gateway, not strictly needed in this context, but handy.

        "nat": {
            "rule": {
                "6004": {
                    "description": "VPN Link Local NAT",
                    "destination": {
                        "address": "!172.16.0.0/15"
                    },
                    "log": "disable",
                    "outbound-interface": "tun0",
                    "outside-address": {
                        "address": "192.168.197.97"
                    },
                    "source": {
                        "address": "172.16.0.0/15"
                    },
                    "type": "source"
                }
            }
        }
    },

NAT any traffic coming from the tunnel or IPSec endpoint addresses to the canonical address of the router. This prevents local daemons from selecting the wrong source IP (most frequently done by syslogd).

    "interfaces": {
        "loopback": {
            "lo": {
                "address": [
                    "172.16.197.96/32"
                ]
            }
        },

This is the IPsec endpoint. I use policy-based IPsec, so this address needs to exist somewhere for the traffic to get picked up by the kernel and sent across the tunnel.

        "tunnel": {
            "tun0": {
                "address": [
                    "172.17.197.198/30"
                ],
                "description": "ub3rgeek vpn",
                "encapsulation": "ipip",
                "ip": {
                    "ospf": {
                        "network": "point-to-point"
                    }
                },
                "local-ip": "172.16.197.96",
                "mtu": "1420",
                "multicast": "enable",
                "remote-ip": "172.16.197.32",
                "ttl": "255"
            }
        }
    },

This sets up the IP-over-IP tunnel. Note that I could not get the OSPF session to come up for the life of me using my normal /32-addressed tunnel, so I switched to a /30; after that OSPF came right up. If you debug ospf events and get complaints that the peer address of tun0 is not an ospf address, you might be hitting this too.

    "protocols": {
        "ospf": {
            "area": {
                "0.0.0.0": {
                    "network": [
                        "192.168.197.96/28",
                        "172.17.197.196/30",
                        "10.10.10.0/24"
                    ]
                }
            },
            "parameters": {
                "abr-type": "cisco",
                "router-id": "192.168.197.97"
            },
            "passive-interface": [
                "eth0",
                "eth1"
            ]
        }
    },

This is rather straightforward, I'm redistributing the local networks and the tunnel address. This is a pretty simple OSPF configuration. Since I have no routers on the Ethernet end of things I set both interfaces to passive.

    "vpn": {
        "ipsec": {
            "auto-firewall-nat-exclude": "enable",
            "esp-group": {
                "ub3rgeek": {
                    "compression": "disable",
                    "lifetime": "3600",
                    "mode": "tunnel",
                    "pfs": "dh-group14",
                    "proposal": {
                        "1": {
                            "encryption": "aes256",
                            "hash": "sha256"
                        }
                    }
                }
            },
            "ike-group": {
                "ub3rgeek": {
                    "ikev2-reauth": "no",
                    "key-exchange": "ikev2",
                    "lifetime": "28800",
                    "proposal": {
                        "1": {
                            "dh-group": "14",
                            "encryption": "aes256",
                            "hash": "sha256"
                        }
                    }
                }
            },
            "site-to-site": {
                "peer": {
                    "69.55.65.182": {
                        "authentication": {
                            "id": "bdr01.work.ub3rgeek.net",
                            "mode": "x509",
                            "remote-id": "bdr01.colo.ub3rgeek.net",
                            "x509": {
                                "ca-cert-file": "/config/auth/ca.crt",
                                "cert-file": "/config/auth/server.crt",
                                "key": {
                                    "file": "/config/auth/server.key"
                                }
                            }
                        },
                        "connection-type": "initiate",
                        "ike-group": "ub3rgeek",
                        "ikev2-reauth": "inherit",
                        "local-address": "default",
                        "tunnel": {
                            "0": {
                                "allow-nat-networks": "disable",
                                "allow-public-networks": "disable",
                                "esp-group": "ub3rgeek",
                                "local": {
                                    "prefix": "172.16.197.96/32"
                                },
                                "protocol": "all",
                                "remote": {
                                    "prefix": "172.16.197.32/32"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

This is the real meat and potatoes of the configuration. It corresponds to the following configuration on the OpenBSD side of things.

#
# bdr01.work.ub3rgeek.net
#
ikev2 "work" passive esp \
        from 172.16.197.32 to 172.16.197.96 \
        peer $PEER_WORK \
        ikesa enc aes-256 \
                auth hmac-sha2-256 \
                prf hmac-sha2-256 \
                group modp2048 \
        childsa enc aes-256 \
                auth hmac-sha2-256 \
                group modp2048 \
        srcid bdr01.colo.ub3rgeek.net dstid bdr01.work.ub3rgeek.net \
        lifetime 360m bytes 32g

Conclusion

In the end I am very happy with the whole thing. The USG is pretty slick, and for simple configurations I imagine it is super easy to get going; other than the lack of documentation for some of the things that aren't exposed in the controller UI, it was not too hard to figure out. If you are stuck trying to figure out the CLI, you might want to explore the EdgeOS or Vyatta (the upstream open source project EdgeOS is based on) documentation. I found those helpful.

New Work Network

🍺

July 12, 2018 @20:47

I enabled HTTPS on this website just under a year ago. If you follow my blog you know that this is a static website, and since there appears to be a bit of an uproar in the web community over HTTPS right now I figured I'd simply weigh in.

Do you need HTTPS for your website?

Yes.

There are lots of good reasons for this and not many reasons not to do it, but the point that resonates most with me is not the risk to your website, it is the risk to the Internet at large. Actors (both malicious and benign) can inject content into any site served over HTTP and cause their visitors' web browsers to do... essentially whatever they want. This doesn't have to be targeted at your site; anyone in the middle can simply target ALL HTTP traffic out there, regardless of the content.

This isn't a user agent (browser) problem, this isn't a server problem, anyone with access to ANY part of the network between the server and the user agent can inject anything they want without the authenticity provided by TLS.

HTTPS is easy, and for most it is free. It also allows HTTP/2, which is faster, even for static sites like this one. Really, it is. If you aren't convinced, let me also point you at Troy Hunt's excellent demo of what people can do to your static website.

April 06, 2018 @14:30

I had occasion today to install some updates on one of my macOS systems and found myself inconvenienced by a number of applications adding a pile of dock icons without asking. I don't keep much in the dock on my systems preferring to use clover+space to launch applications and I don't think I have touched the dock layout in literally years at this point so I went searching for a solution.

Clean Dock

From chflags(1), the 'schg' flag makes a file immutable, meaning not even the super-user (root) can alter its contents while the flag is set.

A quick cleanup of my dock and chflags schg on ~/Library/Preferences/com.apple.dock.plist seems to have prevented further changes by installers.

You will have to chflags noschg the plist file to make any changes to the dock stick in the future.

January 02, 2018 @11:36

It seems like the blog is turning into an alternating stream of screaming about things Apple is doing wrong and gushing about how great the UniFi line of products from Ubiquiti is... I have a backlog of ideas for things to write about other than those; it just seems like life keeps getting in the way, and out the other end either a rant or praise naturally flows.

I suppose it is also easiest to write about the things that have most recently consumed a few hours of your life. I'd write about how I just re-wrote the entire website generation code in Jinja2 and Python3 but that's not really all that interesting as it was basically drop-in.

So rolling back to things that I have worked with recently, you might remember this post from just before the holidays wherein I fought a bit with the two UniFi applications to get them to use the same SSL certificate. I also hinted that this was coming over here, where I talked a bit about the experience of extending my UniFi WiFi network infrastructure to my office at work.

I bought the UVC-G3 camera in the same order as the newest AP, with plans of mounting it on my garage. If you saw my original post on setting up UniFi in the first place you may have seen on the map view that I have a detached garage. Having a view of the driveway, sidewalk, yard, and a bit of the front is certainly useful, but this is also the most challenging location where I intend to have a camera. Currently the uplink is over the WiFi connection between the garage and basement APs, and if you have been following the weather it has averaged about 9°F up here, so with the garage being unheated and uninsulated this is the most environmentally difficult spot I've got.

cam01 in place

Setup

I'm happy to report that the initial setup is very similar to the WiFi products. The controller software (apparently via a UDP broadcast) sees the cameras as they come up on the network and gives you the option to 'manage' the camera. As an alternative you can manually configure the camera to connect to your controller if they aren't on the same layer 2 network segment, or use the camera simply as an RTMP server. In managed mode the whole process is very similar to adopting a UniFi access point. Once you have the camera managed it will upload any new firmware that it needs along with some base configuration and reboot it a few times. Once that has settled down you can move on to the rest of the setup for the camera(s). The configuration is pretty slick and easy. You end up with 2 tabs to go through and in most cases the defaults are sane.

UniFi Video Camera Setup

Recording

There are a couple options for recording as you might expect in NVR software. You can record always, never, on a schedule, or on motion. You also have a few options for retention, either time based, space based, or both. This ends up being pretty powerful and again the defaults are reasonably sane.

UniFi Video Recording Setup

The most amazing feature that is bundled in the NVR software (for free, without any cloud nonsense, and without any strings attached other than needing to buy their otherwise very good cameras I might add...) is the motion based recording. From the camera configuration screen you can configure your motion detection zone. Once you hit configure you are presented with a live view from the camera that you can draw a boundary box on.

UniFi Video Motion Zone Setup

After adjusting the border of the area you can smack 'test zone' and... more awesomeness happens. The zone border disappears and instead you get the same live image, but now detected motion is overlaid in red and a nice histogram appears showing the trigger threshold versus the amount of motion in the frame (red is exceeding the threshold, green is not). This lets you fine-tune the motion trigger sensitivity and hopefully keep false positives low.

UniFi Video Motion Zone Test
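I have no idea what Ubiquiti's actual motion algorithm looks like, but the threshold idea the test view visualizes can be sketched as simple frame differencing: count how much of the zone changed between frames and compare that against a sensitivity cutoff. Everything below (names, numbers, the tiny 8x8 "frames") is my own toy illustration, not anything from UniFi Video:

```python
def motion_fraction(prev, curr, zone, diff_threshold=25):
    """Fraction of pixels inside `zone` that changed between two grayscale
    frames (lists of rows of 0-255 ints). A pixel counts as motion when its
    absolute difference exceeds diff_threshold."""
    (x0, y0), (x1, y1) = zone
    changed = total = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            total += 1
            if abs(curr[y][x] - prev[y][x]) > diff_threshold:
                changed += 1
    return changed / total

# A recording would trigger when the motion fraction exceeds the zone's
# sensitivity -- the green/red histogram in the test view, more or less.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
curr[2][2] = curr[2][3] = 200          # a small "moving object"
frac = motion_fraction(prev, curr, ((0, 0), (8, 8)))
print(frac > 0.1)  # False: only 2 of 64 pixels changed, below a 10% cutoff
```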

Once you are happy with your new camera settings you can tell the software to alert you once a recording is triggered and you will be presented with a nice e-mail with a snap of the frame that triggered the event.

UniFi Video Alert Mail

Software Review

So the setup process was reasonably painless. The software installed just about as easily as the WiFi software and configuration was almost alarmingly easy. It has been almost a month since I got this all up and running and I have to say it has been basically hands off. The iOS mobile application works great, and thanks to the power of VPN I can watch live video and recordings from just about anywhere without having any of this accessible to the Internet at large. The camera streams h.264 and uses a little over 1 Mbps of network bandwidth. So far there have been some hiccups in connectivity thanks to the weather and the WiFi link, but nothing major and nothing lasting more than a few seconds.

Traffic Graph For Garage
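As an aside, that bitrate makes it easy to estimate how much disk continuous recording would eat. A quick sketch (the 1.2 Mbps figure is just my eyeball reading of the traffic graph, not a measurement):

```python
# Rough storage math for continuous recording at the camera's observed bitrate.
bitrate_mbps = 1.2          # "a little over 1 Mbps" -- my estimate, not measured
bytes_per_day = bitrate_mbps * 1e6 / 8 * 86400   # bits/s -> bytes/s -> per day
gb_per_day = bytes_per_day / 1e9
print(f"{gb_per_day:.1f} GB/day")   # ~13.0 GB/day per camera
```

So a week of always-on recording from one camera lands somewhere around 90 GB, which is why the motion-triggered mode and the retention options matter.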

Hardware Review

The camera itself is really quite nice. It feels solid and comes with a very versatile mounting system. It was easy to aim and secure and has held up without complaint to our delightful weather thus far. The only irritation is that unlike most of the WiFi products, the cameras are still supplied with 24V 'Passive PoE'. The garage switch does have 802.3af PoE, but I still have to use an injector in line. Not a huge deal here, but I have some other locations where I'd really like to be able to power the camera via the local switch without more hardware in the line. There does appear to be a SKU for 802.3af capable UVC-G3 cameras but I can't actually find anyone selling them yet. Perhaps in the near future they will appear and my only hardware gripe will go away. (fingers crossed)

Conclusion

So, tl;dr? Ok. This is just as rad as the WiFi. If you are in the market for a slightly-more-complex-than-consumer-grade, powerful, and most assuredly not cloud connected surveillance solution, give this a serious look. You might be surprised. I sure was.

Edited: December 30, 2017 @14:10

Seriously, It Isn't a Problem

There has been a bunch of discussion around the 'revelation' that a software update to the iPhone was purposefully slowing older phones. I believe that Apple should have been more transparent with users about what was happening, perhaps even adopting the UI from the MacBook for when the battery has aged and requires replacement (I had to do this about a year ago on my 2011 MacBook Pro; macOS will toss a little ! by the battery icon and of course System Report will give you further information).

macOS Battery Info

Sadly on this front Apple opted for a pretty inconspicuous note in their release notes for the iOS 10.2.1 update...

iOS 10.2.1 Release Notes

I don't see any of this as being a problem. Lithium cells age in charge/discharge cycles. The chemistry of the cell changes slightly as energy is pulled out of and pushed back into the cell, and this change is irreversible. Most manufacturers rate their cells in the 300 to 500 cycle range, after which it is typical to have lost 20% of the original capacity of the cell. One of the things that happens as the cells age is that the internal resistance increases, meaning it becomes harder to get energy into and out of the battery.

If we do a little back of the napkin math here, suddenly this all seems very reasonable. If you charge your phone nightly from 50% (low for me, high for a lot of other people I know who always seem to be in the red at the end of the day) then you will be putting about 182 cycles on the battery per year. At this rate you will hit 500 cycles in under 3 years. At the time of writing the iPhone 6 is over 3 years old, the 6S is a little over 2, and the 7 (which I have) is a little over a year old. There is also some evidence that the harder you work the phone, the higher the internal resistance of the cell will be driven over its lifespan, which might be what caused Apple to throttle the CPU speeds on aged phones. The software only appears to throttle phones as battery capacity drops, so the performance of the device can be restored by simply replacing the aged battery.
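If you want to check my arithmetic, the napkin math boils down to a couple of lines:

```python
# Back-of-the-napkin battery aging: a nightly charge from 50% is half a
# cycle, so roughly 0.5 * 365 cycles per year.
daily_depth = 0.5                 # charge from 50% every night
cycles_per_year = daily_depth * 365
rated_cycles = 500                # top of the typical 300-500 cycle rating
years_to_rated = rated_cycles / cycles_per_year
print(round(cycles_per_year))     # 182
print(round(years_to_rated, 1))   # 2.7 -- i.e. under 3 years
```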

Which brings me nicely to the real point of this.

Non-Replaceable Batteries ARE A Problem

If Apple had never decided to go with a non user serviceable battery then this never would have been a problem. Battery getting older? No problem! The thing is, I can't lay all the blame for this at the feet of Apple. EVERYONE is doing this now. There is nary a flagship device on the market that lets you pull the battery out. Even my previous phones, the oft scoffed at BlackBerry Passport and BlackBerry Classic, had non-removable batteries. It is understandable that not having to accommodate removable batteries makes design and construction of the phones easier, means fewer parts to manufacture and assemble, and can certainly lead to smaller and lighter devices, but I believe that we have reached the point where the devices are small and light enough. With the resurgence of the larger phone and 'phablet' form factors, surely you can take the hit in the profit margin to put a replaceable battery in a $1000 device... right?

On the bright side it seems that (if you trust a reddit post) Apple charges a fairly nominal fee to replace the battery in your phone. Honestly it is about what the battery would probably cost you retail, but I can't help but feel like this whole thing could have been avoided if they had just made the battery removable.

Edit

I think Apple is doing the right thing. Bullet 2 in the article should really have been a no-brainer in the first place, but it is good to see them recognize that some things can't just be hidden behind the UI and hand waving. I still would really like this trend of non user serviceable batteries to die in a fire though.

December 18, 2017 @20:48

I run UniFi to manage my various Ubiquiti access points, now across multiple sites, and I try to set up everything with HTTPS only and with certificates signed by my internal CA. I followed the instructions provided by Ubiquiti for UniFi back when I installed it.

Recently I added UniFi Video into the mix and am running that application on the same VM as UniFi (yeah, the names of the applications are a bit confusing) so I wanted to use the same certificate since the hostname and IP are the same.

The problem is that the Ubiquiti documentation has you create a CSR with the Java keytool and sign that, so you never handle the private key and therefore can't import the resulting certificate into a different keystore. You can, however, import a keystore entry into another keystore. This is how I used that to work around the lack of a private key.

Note

If all you want to do is use a custom certificate with UniFi Video, without copying the certificate from UniFi, you can look here; those are the instructions that I based the installation phase of this procedure on.

Background

I have the software installed on a VM running Debian 8, with the following versions of the Ubiquiti software installed from their apt repositories. The process should be similar for other distributions and versions, but the paths are likely to be different so go poking around before trying this.

> dpkg -l unifi\* | awk '/^ii/ { printf "%s - %s\n", $2, $3 }'
unifi - 5.6.22-10205
unifi-video - 3.8.5

Tangent

Since I use Puppet for configuration management, I built the VM using my normal Debian PXEBoot installer which automagically configures the new system with Puppet as a postinst task. The entire manifest set will configure all the base things (auto-updates, Icinga monitoring, NTP, DNS, SSL Certificate trust, NFS, LDAP and more!), but this manifest is all it takes to get a combined UniFi and UniFi Video system (with auto-update). It is really nice when software plays nice together.

# Setup the UBNT NMS for the UniFi wifi gear.
class unifi_nms {
    include 'apt'
    apt::source { 'ubnt':
        location   => 'http://www.ubnt.com/downloads/unifi/debian',
        repos      => 'ubiquiti',
        release    => 'stable',
        key        => '4A228B2D358A5094178285BE06E85760C0A52C50',
        key_server => 'keyserver.ubuntu.com',
        include_src => false,
    }

    apt::source { 'unifi-video':
        location    => 'http://www.ubnt.com/downloads/unifi-video/apt-3.x',
        repos       => 'ubiquiti',
        release     => 'jessie',
        key         => '795C6027520643F0BA02297F97B46B8582C6571E',
        key_server  => 'keyserver.ubuntu.com',
        include_src => false,
    }

    package { 'haveged':
        ensure => latest,
    }

    package { 'unifi':
        ensure => latest,
        require => [
            Apt::Source['ubnt'],
            Package['haveged'],
        ],
    }

    package { 'unifi-video':
        ensure => latest,
        require => [
            Apt::Source['unifi-video'],
        ],
    }
}

Overview

In short the process is:

  1. Stop unifi-video
  2. Move the existing keystore out of the way
  3. Export the private key and certificate from unifi
  4. Convert the certificate to the appropriate formats and move into place
  5. Start unifi-video

This is the tricky bit; a few things are worth documenting for clarity:

For UniFi

For UniFi Video

You may want to unmanage your cameras first; the directions are a bit unclear in this exact case and I chose to.

This Is What Worked For Me

Stop Services and Backup Keystore

> sudo invoke-rc.d unifi-video stop
> sudo mv /usr/lib/unifi-video/data/{keystore,keystore-orig}

Export Certificate and Key

> sudo keytool -importkeystore -srckeystore /usr/lib/unifi/data/keystore -destkeystore unifi.p12 -deststoretype pkcs12
Importing keystore /usr/lib/unifi/data/keystore to unifi.p12...
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:
Entry for alias cert1 successfully imported.
Entry for alias unifi successfully imported.
Import command completed:  2 entries successfully imported, 0 entries failed or cancelled

Use the UniFi password for all three password prompts or keytool will complain.

Now convert the PKCS12 store into DER encoded files with OpenSSL.

> openssl pkcs12 -in unifi.p12 -nokeys -clcerts -passin pass:aircontrolenterprise | openssl x509 -outform der -out unifi_cert.der
> openssl pkcs12 -in unifi.p12 -nocerts -passin pass:aircontrolenterprise -passout pass:123456 | openssl pkcs8 -topk8 -inform PEM -passin pass:123456 -outform DER -nocrypt -out unifi_key_decrypted.der

Prepare and Install Certificate and Key

Now these get moved into place as specified by the documentation...

> sudo rm /usr/lib/unifi-video/data/{keystore,ufv-truststore}
> sudo rm /usr/lib/unifi-video/conf/evostream/server.*
> sudo mkdir /usr/lib/unifi-video/data/certificates
> sudo mv unifi_cert.der /usr/lib/unifi-video/data/certificates/ufv-server.cert.der
> sudo mv unifi_key_decrypted.der /usr/lib/unifi-video/data/certificates/ufv-server.key.der
> sudo chown -R unifi-video:unifi-video /usr/lib/unifi-video/data/certificates
> sudoedit /usr/lib/unifi-video/data/system.properties
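The sudoedit step is where the linked Ubiquiti instructions have you tell UniFi Video to load the custom certificate instead of generating its own. Going from memory, the line to add looks like the following; the exact property name is an assumption on my part, so verify it against the Ubiquiti documentation:

```
# /usr/lib/unifi-video/data/system.properties
# Assumed property name -- check the Ubiquiti custom-SSL instructions.
ufv.custom.certs.enable=true
```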

Restart

> sudo invoke-rc.d unifi-video start

Verify

If all goes well you should see something like this in /var/log/unifi-video/server.log:

1513647038.643 2017-12-18 20:30:38.643/EST: INFO   >>>> unifi-video v3.8.5+a24428.171030.1542 is starting in main
1513647038.713 2017-12-18 20:30:38.713/EST: INFO   Loading camera keystore from /usr/lib/unifi-video/data/cam-keystore... in main
1513647038.792 2017-12-18 20:30:38.792/EST: INFO   Creating a new app key store and import custom certs in main
1513647038.792 2017-12-18 20:30:38.792/EST: INFO   Importing custom app key/cert pair in keystore in main
1513647038.792 2017-12-18 20:30:38.792/EST: INFO   importPrivateKey: loading keystore /usr/lib/unifi-video/data/keystore in main
1513647038.793 2017-12-18 20:30:38.793/EST: INFO   importPrivateKey: loading key /usr/lib/unifi-video/data/certificates/ufv-server.key.der in main
1513647038.835 2017-12-18 20:30:38.835/EST: INFO   importPrivateKey: loaded cert chain /usr/lib/unifi-video/data/certificates/ufv-server.cert.der - 1 certs found in main
1513647038.854 2017-12-18 20:30:38.854/EST: INFO   importPrivateKey: stored the key in main
1513647038.854 2017-12-18 20:30:38.854/EST: INFO   Custom app keystore created and loaded sucessfully in main
1513647038.863 2017-12-18 20:30:38.863/EST: INFO   Loading app keystore from /usr/lib/unifi-video/data/keystore... in main
1513647038.877 2017-12-18 20:30:38.877/EST: INFO   loadTrustStore load existing file: ufv-truststore in main
1513647039.064 2017-12-18 20:30:39.064/EST: INFO   SSL Keystore initialized in main
1513647039.145 2017-12-18 20:30:39.145/EST: INFO   Controller starting in main

Enjoy

Success Screen Shot

Now you can re-manage your cameras. I suspect since cam-keystore is left in place that un-managing and re-managing your cameras may not be needed but I'm going to err on the side of caution here.

All of my previously configured settings for the camera were re-applied (recording settings, motion zones, etc..), so it was only like 3 extra clicks for a little bit of safety.

Edited: December 13, 2017 @20:47

I'm not currently subscribed to Patreon, largely because where money on the Internet is concerned I have a long 'wait and see what happens' cool down period. There are a lot of Internet start ups that come and go like a flash in the pan and a lot that get bought quickly and morphed into something else. If you are going to have some way to charge me money, I need some stability. I have no problem being an early adopter, as long as you don't have a link to my bank account or credit card (even through a third party).

Seems like a safe and sane option to me.

That being said, since Patreon seemed like it was gaining traction, especially with people that I respect who are creating things, I started collecting links to the Patreon profiles I was interested in backing in my private wallabag instance, with the intention of eventually subscribing and throwing some beer money into the hat.

Of course Patreon goes and screws it up, so I'm at the very least putting that idea on hold.

Dave Jones of the EEVBlog just posted a good video about what they are doing from the creator's point of view.

You can go and read Patreon's explanation and decide for yourself, but I get a huge waft of crap off this. I have a hard time trusting the direction this is going in and until that trust is restored I won't be giving them money.

Update

I'll just leave this here...

December 11, 2017 @13:37

Summary

A while back I posted an initial review of iOS 11 and a follow up, along with what I admit was a bit of a rant about a beta of iOS 11.2.

The long and short of my complaints was basically:

I'm happy to admit that of the 6 or so grievances, the two that really hurt my daily usability of the device are fixed.

They also restored the force touch app switching on the iPhone 7 much to my delight.

Sadly... the Home Control bug seems to remain. At this point I'm going to stay away from anything HomeKit since I do not want to risk a stranger being able to control my home even with my phone locked.

Home Control WTF

Podcast app returned to form!

Not much to say, just works now. I was delighted the other evening while listening to King Falls AM when the next episode just started playing.

App Search, searches!

App Search Setting

I stumbled on this a bit accidentally. Sometime after I upgraded to iOS 11.2 I installed a new app (something I pretty rarely do). In doing so my natural motion is to go into Settings and disable Siri and Search, since it defaults to on (even when everything else is off...). As I ticked the switch to off I was shocked to see a new option appear! Turns out it does exactly what I want. My 'use the phone like clover+space on the Mac' workflow is now back at my fingertips. Incidentally this is when I tried the force touch app switch and found it has been restored to my loving embrace. I'm sure my thumb will be thankful that the days of the double tap are numbered.

Conclusion

This is really the release that 11.0 should have been. I still think that Apple's release quality has suffered from their relentless pace of releases, but given the continual march of security updates and bugfixes I would still not suggest anyone lag behind the current version if they can reasonably help it. Honestly, the little inconveniences of a crap release are nothing compared to a remote code execution vulnerability.

December 05, 2017 @22:51

This morning the UPS guy greeted me with a new Ubiquiti UniFi access point destined for use at work. I have been using a Mikrotik RB951-2HnD as a router and access point, but I want to take advantage of 802.11ac for various reasons so I ordered a UAP-AC-IW to replace the built-in Mikrotik WiFi. I'm still going to use the Mikrotik as a router and switch.

New AP!

To prep for the new AP I set up a new site in UniFi, re-created the network profiles, and loaded in the IP range for the work segment of my network and the local RADIUS servers for WPA-Enterprise. I did this about a week ago so it was all ready for the big day. The other piece of business was making sure I had layer 3 adoption set up. I chose the DHCP option and set up the Mikrotik to hand out my UniFi server's IP as indicated.

[admin@bdr01] > /ip dhcp-server option export compact
# dec/05/2017 23:00:13 by RouterOS 6.40.5
#
# model = 951Ui-2HnD
/ip dhcp-server option
add code=252 name=proxy-pac value=\
    "'http://****.ub3rgeek.net/proxy.pac'"
add code=43 name=unifi-address value=0x0104c0a****

[admin@bdr01] > /ip dhcp-server network export compact
# dec/05/2017 23:01:46 by RouterOS 6.40.5
#
# model = 951Ui-2HnD
/ip dhcp-server network
add address=192.168.***.***/28 boot-file-name=pxelinux.0 dhcp-option=\
    proxy-pac,unifi-address dns-server=192.168.***.***,192.168.***.***\
    domain=work.ub3rgeek.net gateway=192.168.***.*** netmask=28\
    next-server=192.168.***.*** ntp-server=192.168.***.***
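For what it's worth, the hex blob in the unifi-address option is just a tiny TLV: suboption 1, a length byte of 4, then the four octets of the controller's IPv4 address. A sketch of the encoding (the IP here is made up, not my controller's):

```python
import ipaddress

def unifi_option43(controller_ip: str) -> str:
    """Encode a controller IP as the DHCP option 43 value UniFi expects:
    suboption 0x01, length 0x04, then the four IPv4 octets."""
    octets = ipaddress.IPv4Address(controller_ip).packed
    return "0x0104" + octets.hex()

print(unifi_option43("192.168.1.10"))  # 0x0104c0a8010a
```

The 0x0104c0a8... prefix in the export above matches this: c0a8 is just 192.168 in hex.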

New AP pending adoption

I was impressed with and a bit surprised by the whole process. The UniFi software was smart enough to realize that the new AP was located in the new site (I assume because I told it the network address for the new LAN range) and promptly dropped it in the list of devices for the right site. After pressing adopt and waiting for the firmware update and provisioning process I was greeted with an alert that confirmed that everything was working.

Old AP is now a rogue

The Mikrotik is now being seen as a rogue! A quick disable of the wlan interface in RouterOS and everything just jumped over to the new UAP-AC-IW! It's really nice when things just work like it says on the tin.

I really couldn't be happier with these things. I wrote a bunch about the setup at home before and I'm pretty happy to see the success continue.

Hopefully my luck will hold out...

Another new toy

👍 💯 🍺

November 28, 2017 @10:10

It shouldn't surprise anyone that the Internet is under attack, but if it does, or if you want to know what you can do about it read on.

Call Congress

  1. Demand Progress - They have a number of causes they are working on including Net Neutrality.
  2. The EFF - The OG defender of rights in the digital age.
  3. NY Times Video - What Is Net Neutrality
  4. NY Times Topics: Net Neutrality
  5. Wired - Here's How The End of Net Neutrality Will Change the Internet

The Internet can only succeed if it remains open and free. Billions of people across the globe rely on it and if we allow corporate profiteering to take over then we will stifle so many of the core values not only of the Internet (it really has no values, being a collection of interconnected, yet privately run networks and all...) but of society itself. Freedom of expression, peaceful assembly, the ability to protest and communicate, and to innovate are all things we've held dear as a people for much longer than we've had the technology to communicate.

I spent almost a decade working for an ISP back before the 2015 Title II classification of Internet providers. I watched the executives of said ISP and its brethren (it's a small world in the "string wires across the globe" business after all) work harder and harder to find ways to squeeze more revenue out of their customers. Data caps and pay-per-gigabyte plans were the envy of the American ISP, though customers were very vocally against them and the market isn't quite oligopoly enough to pull the trigger, so the tactics switched to quieter changes: things like search engine and browser hijacking, DNS query redirection, and general data collection.

I can tell you that the infrastructure was already largely in place back in 2010 to enact the horror stories of blocking, throttling and extortion of content providers that people are presently worrying about.

This isn't science fiction, or scaremongering. It's already sitting there. Waiting for marketing to put a nice spin on it and the lawyers to say it's not a liability to use it.

If we give this up... it may take us decades to undo the damage, not only to ourselves but to the world at large. While the Internet is a global network of networks, a lot of the infrastructure people use every day is located here in the US of A or is owned by US based companies, so what we do has deep effects globally.

If you live in the US, please look at the links above and put pressure on your elected representatives, or if you can afford it, why not buy an FCC board member... I hear Ajit Pai is already spoken for though.

Call Congress

Edited: November 14, 2017 @15:00

Screenshot from MacRumors

I feel like I should explain why this irks me so. Apple just made a change so drastic to the functionality of their user interface (remember that Control Center is supposed to provide you with quick access to common functions from anywhere within the operating system) that they felt the need to present the user with a modal pop-up dialog box explaining why the user's understanding of the effect of the action they just took is wrong.

This is antithetical to good design. The user interface shouldn't need a system native dialog box that pops up to apologize for itself. It should be self-explanatory. The user's intent is clearly to turn off the radio, but Apple has decided to redefine what they think the user wants and then drool all over themselves to try to be "transparent" about it.

Let's get back to the part where the UI did what the user actually wanted; that was nice.

Dear Apple, This shit is still just WRONG. Stop it. Whoever let this out the door in the first place is bad at their job. Whoever let this fix out the door is also bad at their job. Yours Truly, Matt

I think I'm mad, largely because I'm afraid this is indicating the direction that Apple is heading and that I'm going to have to get off the ride. I really don't want to do that. Linux on the desktop and phone is still a really terrible user experience.
