Broken OpenVPN IPv4 routing with iOS9 and IPv6

After finally taking the time to get tunnelled IPv6 into the homelab via Hurricane Electric, I thought it would be nice to extend that routing out to my VPN clients. After all, they connect in and appear like local devices to the rest of the network, so why not?

What I thought was a simple configuration change has been puzzling me for the last few days. What I didn’t realise is that after switching on IPv6 in the OpenVPN server, IPv4 traffic was no longer being routed via the VPN at all. It turns out a small issue in either the OpenVPN client, iOS, or something in between has broken the configuration, but thankfully it only requires a small fix.

The solution finally came from the OpenVPN bug tracker, ticket 614:

IPv4 routing on iOS 9 is broken if IPv6 is enabled inside the tunnel. The tests were done with tun-ipv6 and redirect-gateway activated and all the IPv4 traffic bypasses VPN gateway, while IPv6 works fine. Works as expected without tun-ipv6. Doesn’t work with tun-ipv6 but no IPv6 address.

Exactly what I was experiencing. Thankfully fkooman came across an entry in the FAQ mentioning an undocumented option, redirect-gateway ipv6. Injecting this option into the OpenVPN server configuration resolves the routing issue.

On pfSense you just need to add push "redirect-gateway ipv6" to the “Advanced Options” section of the OpenVPN server configuration.
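If you’re running a plain OpenVPN install rather than pfSense, the equivalent server.conf lines would look something like this (a sketch, assuming you already push the usual def1 IPv4 redirect):

# existing IPv4 redirection
push "redirect-gateway def1"
# the undocumented fix: also push the IPv6 default route to clients
push "redirect-gateway ipv6"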

Miniflux - Easy, self-hosted RSS

Since the demise of Google Reader a lot of new tools and sites have tried to take over the mantle as the de-facto RSS reader for the masses. The biggest (to my understanding) is Feedly, which used the shutdown to push their product. Unfortunately, over time the investment in the “free” Feedly seems to have slowly slipped away in favour of their Pro offering, which isn’t surprising for any company wanting to turn a profit. This issue seems to be replicated across all the hosted providers trying to make a profit out of a service Google had supplied for free, and old stalwarts like me still struggle with the idea of paying $3-$7 a month for RSS aggregation.

Aiming to take matters into my own hands, I decided to hunt around for an open source solution that I could self-host. I’m already paying for a dedicated server, so why not use that to host it?

Thankfully, it seems that a lot of other people had the same issue and a long list of open source solutions has popped up. The interesting ones support the “Fever” API, a simple method of exposing these feed readers to mobile and desktop clients without any quirky reader-dependent applications. My favourite RSS application, Reeder, supports this API, which really helped narrow down which solution I needed.

Miniflux seems to be the perfect balance between function and simplicity. It can be installed damn near anywhere as it only uses PHP and a few standard modules. In addition, it supports importing and exporting OPML files, and the Fever API lets my desktop and mobile clients keep in sync with no extra work.

Installation couldn’t be simpler: check out the repo, move it to a folder of your choice, and throw in an Nginx configuration.
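If you want the exact commands, something along these lines should do it (the repository URL, paths, and web server user are assumptions on my part, so check the project’s own docs):

# grab the code and put it where Nginx will serve it from
git clone https://github.com/fguillot/miniflux.git /home/user/www/rss.domain.com
# the data/ directory holds the SQLite database, so PHP must be able to write to it
chown -R www-data:www-data /home/user/www/rss.domain.com/data

And then the Nginx server block itself: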

server {
  listen 80;
  server_name rss.domain.com;
  root /home/user/www/rss.domain.com/;
  index index.php index.html index.htm;

  # the following line is responsible for clean URLs
  try_files $uri $uri/ /index.php?$args;

  # serve static files directly
  location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ {
    access_log        off;
    expires           max;
  }

  location ^~ /data/ {
    deny all;
  }

  location ~ \.php$ {
    # Security: must set cgi.fix_pathinfo to 0 in php.ini!
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass 127.0.0.1:8812;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    include /etc/nginx/fastcgi_params;
  }
}

Done. Your Fever API endpoint is available at /fever/, and the username and password can be configured in the application’s UI. Everything is stored in SQLite, so it’s easy to back up and move around.
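As a quick sanity check of the endpoint, assuming the standard Fever convention of POSTing an api_key that is the MD5 of "username:password" (the hostname and credentials below are obviously placeholders), you can poke it with curl:

# build the api_key the Fever way and ask for the list of groups
API_KEY=$(printf 'user:pass' | md5sum | cut -d' ' -f1)
curl -s -d "api_key=${API_KEY}" "http://rss.domain.com/fever/?api&groups"

A JSON response with "auth":1 means Reeder and friends should be happy.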

If you’re looking for something that’s simple and works, I’d recommend giving it a try!

Homelab Puppet

It might sound like using a nuclear weapon to swat a fly, but when you’re working with Puppet in your day job it can be really useful to have a test bench at home to fiddle with new ideas. After all, that’s what homelabs exist for, right?

Puppet Enterprise comes with a free 10-node licence as stock. For a small homelab it’s perfect for managing the configuration that applies to all systems: DNS, routing, SSH keys, you get the idea. Also, as my day job runs Puppet Open Source, it’s great to test out the commercial version and get to know it before the inevitable upgrade where a lot more is at stake.

For my installation I went with CentOS 7 and a single-node setup. I use Code Manager to automatically deploy my configuration from a git repository stored in Gogs, which I highly suggest checking out if you’ve not seen it already. Agents are mostly Debian 8 with a sprinkle of CentOS 7 and RHEL 7 for my learning needs.
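For anyone who hasn’t used Code Manager (or r10k) before, deployments are driven by a Puppetfile in the control repo; a rough sketch of mine looks something like this (module versions and the Gogs URL are purely illustrative):

forge 'https://forgeapi.puppetlabs.com'

# public modules pulled straight from the Forge
mod 'puppetlabs-stdlib', '4.11.0'
mod 'puppetlabs-ntp'

# site-specific code pulled from my Gogs instance
mod 'profile',
  :git => 'http://gogs.lab.local/user/puppet-profile.git'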

Here are some handy hints from my Puppet usage, both at work and at home:

Use Puppet Enterprise

10 free nodes! Take advantage of them if you can. While open source Puppet is great, the installer and the Console make Enterprise worth the $100/year/node just for the time saved fiddling with configuration.

Use Puppet Forge

It might seem obvious, but a lot of places suffer from NIH when it comes to Puppet and decide to rewrite from scratch instead of building on an existing open source module. While the vast majority of modules I use are public on the Forge, I have slipped into a habit of quickly hacking together a profile for an application rather than writing a full module to share. In general, using the Forge will save you time, so take advantage of it.
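Pulling an existing module down is a one-liner (the module here is just an example):

# grab a well-maintained module from the Forge rather than writing your own
puppet module install puppetlabs-apache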

Use distro packages

While you can grab some .tar.gz from a website, extract it, run it, and copy files around, save yourself the pain and use distribution packages whenever possible. Not only do they make for much easier installation and management, they save you a lot of time when it comes to upgrading.
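In Puppet terms that means leaning on the package resource rather than exec-ing curl and tar, something like:

# let the distro's package manager handle installation, dependencies and upgrades
package { 'nginx':
  ensure => installed,
}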

Don’t aim for 100% coverage

Trying to configure every part of a system with Puppet will burn you out quickly; cover the required elements and tick them off first. In my opinion Puppet shouldn’t be assigning IPs to devices or managing file systems, but setting DNS, firewall rules, and package repositories is right up its street.
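As a concrete example of the sort of thing I do let Puppet own (the resolver addresses here are made up):

# nail down DNS resolution everywhere without touching interface configuration
file { '/etc/resolv.conf':
  ensure  => file,
  owner   => 'root',
  mode    => '0644',
  content => "search lab.local\nnameserver 10.0.0.53\n",
}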

Things break, so check your config first

The --noop option is your friend. Make use of it to check that your shiny new config won’t blow a hole in the side of your system due to a dodgy Hiera YAML file. In Puppet Enterprise you can even run this from the Console.
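From the command line on an agent that’s simply:

# compile and apply the catalog in simulation mode, reporting what would change
puppet agent --test --noop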

If you have the infrastructure to spare, get a Jenkins system set up and lint/test that config before it hits a live system. If you want to get really fancy, have Jenkins automatically push to your production branch after testing, for that continuous deployment feeling.
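The checks don’t need to be anything clever; a Jenkins job running something like the following over the repository catches most silly mistakes (paths are illustrative):

# syntax check every manifest, then style check the lot
find manifests -name '*.pp' -exec puppet parser validate {} +
puppet-lint --fail-on-warnings manifests/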

Puppet Enterprise has Application Orchestration!

While it’s a recent development, I highly suggest you read the documentation and have a play with it. Hand-holding multi-system deployments is no longer needed!

I’m sure people have a hundred and one other things to say, but I’ll leave that for the experts…

The strange case of an OCZ Petrol SSD

A few years ago I took the risk and installed an SSD into my father’s PC. At the time, the 300GB Seagate drive in his stock Dell had failed just a touch outside the warranty period, and in an attempt to keep costs low I ended up picking a cheap SSD for him. The cheapest at the time was an OCZ Petrol 64GB. Only after a year or so did the horror stories about OCZ SSDs start appearing, with a lot of people experiencing failures after just a few weeks or months. My father’s SSD carried on chugging for a good few years and died just a few weeks ago, not bad for a cursed brand…

The strange part was how it failed. Usually these SSDs just stopped working in every way and wouldn’t even appear to the BIOS. In this instance it was still there, it still booted, and it got about half way through the Windows XP boot sequence before dying with an I/O error BSOD. At the time I wrote off the disk as a complete failure: plugging it into another PC didn’t work, a USB to SATA connector didn’t work, and even when I did manage to get it recognised on a system it reported around 95% of the blocks on the device as bad. A new SSD was purchased and this one was forgotten about on my desk until I picked up a new USB 3.0 to SATA cable from Amazon today.

On a whim I decided to plug the cable into the drive, then into my Mac. OS X by default doesn’t write to NTFS but can read it, and it turns out this identifies something very weird about this device. When operated in read-only mode, with no writes attempted, it works perfectly. This also confirms what I was seeing in the PC: the boot loader and initial stages of Windows XP worked fine, but as soon as something actually checked the disk and attempted a write, the device locked solid.

So, if you have an OCZ Petrol that you need to recover data from, try getting a device that supports write blocking and give it a go.

Workflow Frustrations

It happens to everyone: the day you sit down and try to get some items off your todo list, only to hit a long list of obstacles slowing you down. Suddenly you realise the tools you’ve depended on for a long time are no longer able to help and are causing more problems than they solve.

For me, today, this was Evernote.

When I first started using Evernote it was the holy grail of information storage: a simple method to take notes, photos, and documents and store them all in an easily searchable and available archive. Over the past two years or so Evernote has become my reference store and has slowly built up to 2,000 nicely curated notes covering everything from upcoming travel plans to household bills and account information.

Then the inevitable happened: change. XKCD #1172 sums it up nicely: every change breaks someone’s workflow, and this is exactly what happened to me with Evernote.

A recent change in the OS X client disabled the quick shortcut of using the Delete key for trashing notes. My workflow with Evernote is that everything new and slightly interesting goes into an @Inbox notebook; when I’ve got a period of free time I go through these notes and tag/move them to the required notebooks. During my “inbox processing” I find that a few notes are either mistakes, unneeded, or duplicates of information I already have in my system, and these notes are quickly cast out with a quick hit of the Delete key.

So, now it’s a block in my workflow. Evernote did change the shortcut to Cmd+Backspace, but when you’ve been running with the same process for a long time, trying to re-train yourself to use an alternative keystroke can be a tad frustrating. At the moment it’s slowing me down, but over time I’m sure it’ll just become another point in my process.

The crux of the problem here is how to handle change. It isn’t just limited to little fiddly shortcuts or icons being moved around; it could be as large as a new process being introduced into your working day. I’ve seen many posts saying that being adaptable to change is one of the key separators between successful and unsuccessful people, but in reality everyone is a creature of habit, and when that habit is interrupted people will generally be unhappy.

In my case, I post my frustrations to my blog, get an early night, and probably forget about it the next day…

Fixing CIFS/Samba Browse Speed on OSX

One thing that has always frustrated me on Mac OS X is the impossibly slow directory listing and browsing speed on CIFS/SMB shares. Apple’s devices, such as the Time Capsule, and OS X shares work perfectly, but anything running Samba has this amazingly slow response on any folder with more than 200 files.

Today I’ve finally been configuring my FreeNAS installation on my HP Gen8 MicroServer, and after a good twenty or so minutes researching the issue I found a small post on the FreeNAS forums suggesting the following settings:

ea support = no
store dos attributes = no
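For reference, on a plain Samba install these live in the [global] section of smb.conf (on FreeNAS the “Auxiliary parameters” box achieves the same thing):

[global]
   # skip the extended attribute and DOS attribute lookups that make
   # OS X directory listings crawl
   ea support = no
   store dos attributes = no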

Boom, quickly added to the configuration files and browsing now flies. Next is to try and improve the overall transfer speed beyond 25MiB/sec.

Introducing the Home Lab

Driven by some techie mental disability and the thirst to understand more, I’ve slowly expanded my home network into a “homelab”. A few months ago I picked up a cheap HP ProCurve 2824 from eBay; it’s a great gigabit switch with Layer 2 and basic Layer 3 capabilities, and after a quick retrofit of the fans with some nice quiet Sunon MagLevs it’s been ticking over nicely as the core switch of my network.

In addition to the new switch, I “acquired” Jo’s PC to use as a VM host. With some extra network cards and a bit more memory it’s now serving as a multifunction machine: pfSense, various lab VMs, and monitoring systems.

From time to time I’ll be posting about my latest experiments, what I’m learning, and how it’s all presenting an even larger drain on the electricity than before.

Resetting the TP-Link TL-SG3210

In the hunt to introduce VLANs across all segments of my home network, I managed to pick up an L2 managed switch for a lot cheaper than I expected. The TL-SG3210 only offers the bare basics, but it’s enough to get some control over the last remnant of the unmanaged network hidden behind a powerline Ethernet adapter. At £36 I couldn’t say no to it.

As you’d expect, it came pre-configured for the previous owner’s network, and this time I had no helpful IP sticker like the HP 2824 had (I still have no idea how I managed to reset that one). This switch, however, has an RJ45 console port, which luckily responded to a standard Cisco rollover cable. Once you’re plugged in, all you need to do is:

  • Set the serial port to 38400 baud, 8 data bits, no parity, 1 stop bit (8N1); see the example below
  • Reboot the device and, when prompted, hit CTRL+B
  • At the prompt, type reset, then reset again to confirm, and the device should reboot.
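If you’re doing this from a Linux or OS X box with a USB serial adapter, something as simple as this gets you the console (the device path is whatever your adapter shows up as):

# screen defaults to 8 data bits, no parity, 1 stop bit, so only the baud rate needs setting
screen /dev/ttyUSB0 38400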

Once it’s alive again, the switch should be available on 192.168.0.1 with admin as both the username and the password. I’ve yet to really delve into the CLI of this device, but I’d expect some nicer features to be tucked away in there; for the moment it’s doing a fine job of splitting the dirty Murdoch device (Sky TV box) and the snooping LG TV off the main network.

Office 2016 for Mac

First of all, this is in no way a product review or anything that should resemble any sort of insight into Office 2016; as the sidebar suggests, this is just my inane rambling on the subject, be it insightful or just utter crap.

For the longest time it seemed that the Office team and the Mac team at Microsoft never really communicated, except for throwing the odd document over regarding docx compatibility or some quirky VBA feature added to soothe some crazy corporate customer. The Mac version of Office has always been the stepchild that the family is pleasant about but doesn’t really want to talk about; from the Mac user’s perspective they’ve always come out with a workable product, but it had its odd quirks and never really fitted in with the OS X UI/UX style guidelines.

It seems this has changed. As many people have commented in various blog posts over the last year or two, Microsoft is changing: faced with several big companies slowly eating their lunch while they’re not looking, the new batch of products being put out are aimed to impress, and Office 2016 for Mac is one of them.

I mean, look at it.

So OK, it’s not all about the GUI fluff; what really counts is the UX and how everything works together. I feel confident saying you could put a heavy Office for Windows user in front of this version and they could use it without any major issues. What bugged me in the past is that Office for Mac had its own style and layout which people had to get used to; it seems the Office and Mac teams are actually working together now, and the result is a great product.

I’m sure that after a few weeks’ usage I’ll be back in my usual situation of wanting to pull my hair out trying to get something done, but for the moment I’m basking in the UI goodness…

DVB-T and SDR with the RTL2832

I spotted a YouTube video the other day that talked briefly about SDRs (Software Defined Radios) and how you can pick one up for $20, a massive difference from the £300-400 devices I spotted a few years ago. Of course, I decided there and then that I’d grab one to experiment with and searched Google for the mystical device. As it turns out, it’s the Realtek RTL2832U-based devices that allow SDR-type functionality, and while a lot of devices out there cost more than the magical $20 because they’re advertised as SDRs, it was quite easy to find one of these generic DVB-T tuners with the right chipset on Amazon for a grand total of £9. With the order being eligible for Amazon Prime, I ordered the item yesterday (Saturday) and it was delivered today (Sunday).

So, straight out of the box and into my Debian Jessie test system, everything worked. No tweaking or hassling: within seconds I had a working DVB adapter, and I used the standard DVB tools to scan and create a channels.conf within a minute or so. My last experience with the Linux DVB stack was around 2005-2006 with MythTV, when the drivers “sort of” worked and everything was a little rough round the edges; it seems the last 10 years have really cleaned up the stack. With that in mind it’s not really worth posting about getting the DVB-T tuner to work, because it just did…
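For anyone who hasn’t touched the DVB tools recently, the scan step is roughly this; I’ve used w_scan here as an example since it doesn’t need an initial tuning file, but treat the exact flags as an approximation and check the man page:

# scan the terrestrial bands (-ft) for the UK (-c GB) and write out a
# tzap/xine style channels.conf (-X)
w_scan -ft -c GB -X > ~/channels.conf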

SDR required a little extra work. I’ve not spent a large amount of time trying to get the full toolset working on Debian; the rtl-sdr toolset is available as a package in Jessie and is easily installed, and the biggest problem was that, because I was using my test system, I didn’t have an X session to run anything on. I got everything installed and spun up an rtl_tcp instance without much incident. The biggest roadblock was that you can’t have the DVB kernel module loaded at the same time as using the rtl-sdr tools, but a quick rmmod and blacklist sorted that out; the tools are very quick to point out exactly what needs to be done.
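For the record, the dance was roughly this (the module name is the one the RTL2832U sticks load on Jessie; double-check with lsmod on your own system):

# the kernel DVB driver claims the device, so kick it out...
sudo rmmod dvb_usb_rtl28xxu
# ...and stop it grabbing the stick again on the next boot or replug
echo 'blacklist dvb_usb_rtl28xxu' | sudo tee /etc/modprobe.d/rtl-sdr-blacklist.conf
# then expose the tuner over the network for an SDR client to connect to
rtl_tcp -a 0.0.0.0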

Instead of working on Linux, I got everything up and running on my MacBook Pro running OS X Yosemite. While OS X doesn’t have the full suite of tools available, a few good ones have been developed for the platform. I found that CubicSDR was by far the easiest to get rolling with: no messing with MacPorts or any other third-party packaging tools, just a DMG and a pre-packaged application. While it isn’t as feature-complete as some of the other packages out there, it does cover the basics for poking around. Their todo list does look interesting, especially the target of having digital demodulation built in.

Quick overview done, and I’m now looking for a better antenna. While it’s not being used as an SDR, the stick will happily serve as a DVB-T source for my Plex system using TVHeadend, and with a quick MCX-to-coax adapter you can have it plumbed into the household aerial without much issue.

[Update - 2015/08/31]

Regarding the links earlier in this post, it turns out that the two tuners are actually different: the NooElec one includes an improved tuner chip (the R820T2). It is worth investing the few pounds more for that version, as it’s more sensitive and also includes a more stable crystal that won’t require much adjustment.