Thursday, 2 June 2016

Netflix, or the pains of dealing with royalties and DMCA

A few days ago, after enjoying a pretty much static IP address from my ISP for a long while (it hadn't changed in easily a year), my IP changed. This took down my IPv6 tunnel, which I use a lot to access various services for work -- you know, dogfooding and all of that. My IPv6 address depends on a tunnel that needs to stay up (which in turn requires my IPv4 address not to change too often, but whatever).

Probably since then (though I did not really notice until yesterday or so), I've had multiple issues with Netflix streaming. As many know, Netflix is now enforcing some method of detecting VPN and proxy users in order to force local content upon its users. I think it's a stupid idea, but I see where they're coming from with that decision.

Netflix has to deal with royalties, copyright, and laws that vary depending on where a user might be. For instance, you may wish to watch NCIS -- this will likely depend on Netflix having paid CBS (or whoever the title belongs to) to be allowed to present it to clients. I have no idea how these costs are structured; they might well be a percentage based on the number of viewers or some such.

In the US, this is relatively easy: they can deal with local companies and handle things. It becomes more complicated when you factor in different copyright laws in other countries, exclusivity rights, etc. In the case of NCIS, Global appears to have (exclusive?) rights for NCIS, so they look to be the only legit place to stream episodes online. I suspect Netflix would possibly have to pay *them* to stream NCIS in Canada, or otherwise be subject to other byzantine rules. I don't pretend to understand the intricacies past the one class I took on Canadian copyright/patent/IP law over a year ago. Suffice it to say it's complicated, and there are probably good reasons to try to have users in country X watch country X's content, and not country Y's. It's likely a way to cut costs.

My issue stemmed from the fact that with the reset of my IPv4/IPv6 connection, or possibly just as a coincidence, Netflix started to care about my IPv6 addresses. It's possible that GeoIP data informed this, or that Netflix started doing more checking, or started doing proper IPv6, etc. I don't know.

I had an online chat with an awesome Netflix customer service rep, HecThor (the name is awesome too!), and received great service even if they could not help:

Netflix Customer Service
You are now chatting with: Hecthor
Netflix Hecthor
Hello!! My name is HecThor! How can I help?
Hi, I'm Matt, I keep getting error messages saying that I am behind a proxy or VPN when I am not
Would you be able to consult logs or whatever you might have to tell me why that has been detected so I can take the necessary steps?
Netflix Hecthor
Oh, let me check this out for you. Could you please tell me which device are you using?
Right now, my Chrome browser
probably listed as Chrome on linux, version 50.0.2661.94
I had the same issues on a different device too (another Chrome, version is most likely different as it is running on a Chromebook device)
Netflix Hecthor
Just a quick question, have you tried going to the extensions of Google Chrome and unchecked and tried Netflix one more time?
what's more, this one does not have any extensions
Netflix Hecthor
Alright, just to confirm, are you using Linux?
not exclusively, but yes
Netflix Hecthor
Oh got it, I'm seeing here that the signal is being redirected to the US and then to Canada several times in a day, in this case the best thing to do is to check with your Internet Service Provider to investigate why your connection appears to come from a VPN or a proxy service, as they are in charge of the signal.
this is to be expected, I get IPv6 connectivity from a US provider for work purposes
could it be that you guys started to allow ipv6?
Netflix Hecthor
Oh got it, we do support with IPv6, however having the setting set to the US instead of Canada may cause this conflict , so in this case what I recommend is to turn it off and you'll be able to stream without a problem. :)
it's not the kind of thing I can turn off
there aren't providers here who do IPv6
is there any way for you to set my account to only use IPv4?
Netflix Hecthor
Got it, you see we don't have a way to set an account to use IPv4 or Ipv6 as this has to do with the Internet service, so in this case I would recommend you to contact them and try to reset the signal or check if they're able to do that change on your settings, I'm sure that once they do you'll be able to stream Netflix without a problem.
There is no thing to reset, there is no Canada endpoint for this thing.
in fact, it only started to be an issue since the last reset, because my IPv4 address changed a few days ago as well
Netflix Hecthor
I understand, and do you have a way so you can return to IPv4? The thing is that Netflix is working fine, however the system is detecting that your IP is constantly changing from region to region, this is why the system is not letting you stream.
I can't do this change on the local systems, no. This is how my home network is set up -- like I said, I do need IPv6 for my work. I work from home.
Netflix Hecthor
Oh I definitely understand what you mean, however, to be completely honest, the process you use will not let you stream. Unless you change that wont be able to stream, because when the system detects that you're in a country and your network shows another one, this issue appears, it might work some times but I can't guaranty it will always work, if you like you can try Netflix on your mobile's network to verify this.
I don't especially want to verify anything, since we have a fair expectation of what the issue is
you've been quite helpful
Do you object to me using this chat log for documentation purposes?
I can remove your name if you prefer, but I thought it looked badass enough ;)
Netflix Hecthor
Sure, no problem, and it's been a pleasure being able to help! :) Is there anything else I can do for you?

I went on to ask how to file a complaint / provide feedback to the team, since Netflix should be aware of the complexity and inconvenience this poses for its customers. Still, I want to reiterate that I was quite happy with the service I received from customer support rep HecThor, who was helpful and understanding.

I'm technical enough to be able to deal with such issues in various ways. I did some searching, and it looks like you *cannot* disable IPv6 just for Chrome. It's also impractical to disable the IPv6 tunnel... I have it up for a good reason, and it had been working for a long while (that too, over a year) with no issues. Other people could also have special network setups that interfere with Netflix's streaming services. VPNs happen, and they are not all used to watch US content. They can also be set up at the router level rather than at the device level; and some ISPs even require PPTP VPN use to get any kind of connectivity at all (or did in the past).

The inability to disable IPv6 in Chrome is probably really a usability bug in its own right, but it shows how the average user might eventually run into issues with content "blocking" based on location. I'm not really expecting the average user to have a network setup like mine: I had to set up IPv6 myself here, since none of the providers in Canada do a satisfactory job of it. I also don't expect the average user to care about the IP family at all -- but we'll soon get to a point where blocking based on IP and location won't make sense. IPv6 is meant to improve mobility, and there are steps taken to ensure this (see RFC 3775). GeoIP data can be wrong, misleading, or simply nonexistent, so you really ought not to rely on it at all.

Netflix has been doing relatively well at leading some interesting infrastructure ideas, it seems, aside from not being very cooperative with Linux users for a long while (fortunately, Netflix now works on Linux, but only with the official Google Chrome, still not with free software browsers). It would be good to see that leadership continue, with restrictive policies giving way to cooperation, especially for a company priding itself on using Linux and open source technologies.

For now, I've opted to null-route Netflix's IPv6 range, which means I get a small delay but I can still watch Futurama. It's the least intrusive change I could think of that avoids tearing down my IPv6 tunnel while still letting me watch content.

If for some inexplicable reason you also have a Cisco router at home and use a US IPv6 tunnel provider for IPv6 connectivity, and want to make sure Netflix keeps working, this is the command I used:

#ipv6 route 2406:DA00:FF00::/48 Null0
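If your gateway is a Linux box rather than a Cisco router, iproute2 can do the equivalent with a blackhole route. This is a sketch only: the prefix below is the same one as in the Cisco command above, and you should confirm it matches the addresses your own Netflix traffic uses before blackholing anything.

```shell
# Blackhole Netflix's IPv6 prefix so clients fall back to IPv4.
# Run as root on the router/gateway.
ip -6 route add blackhole 2406:da00:ff00::/48

# To undo the change later:
# ip -6 route del blackhole 2406:da00:ff00::/48
```

With the route in place, connections to those addresses fail immediately and the client retries over IPv4, which is where the small delay comes from.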

Rather than using outdated, unreliable technology to enforce restrictive, ill-designed content rules, Netflix should lead an overhaul of the limitations imposed upon it by the original content providers. That, or use some of those uncountable piles of money to cover the potential costs of out-of-country content access.

Monday, 29 February 2016

Nominations wanted for the Developer Membership Board


The Ubuntu Developer Membership Board is in need of new blood.

Of the seven members of the board, the terms of five (5) will expire on March 9th. Members of the Developer Membership Board are elected by all Ubuntu developers for a term of 2 years, meeting in #ubuntu-meeting about once a fortnight. Candidates should be Ubuntu developers themselves, and should be well qualified to evaluate prospective Ubuntu developers.

The DMB is responsible for reviewing developer applications and deciding when to entrust applicants with developer privileges or grant them Ubuntu membership status.

Provided at least six valid nominations are received, the new members will be chosen using Condorcet voting. Members of the ubuntu-dev team in Launchpad will be eligible to vote, and will receive voting ballots by email (at the email address recorded in Launchpad). A Call for Nominations has already been sent to the ubuntu-devel-announce mailing list (and another call for nominations should follow soon).

Applications should be sent as GPG-signed emails to developer-membership-board at (which is a private mailing list accessible only by DMB members).

Of course, if you're nominating a developer other than yourself, please check with them beforehand to make sure they're okay with it.

Friday, 15 January 2016

In full tinfoil hat mode: Using GPG with smartcards

Breaking OPSEC for a bit to write a how-to on using GPG keys with smartcards...

I've thought about experimenting with smartcards for a while. It turns out that my ThinkPad has a built-in smartcard reader, but most of my other systems don't. Also, I'd like to use a smartcard to protect my SSH keys, some of which I may use on systems I do not fully control (i.e. at the university, to push code to GitHub or Bitbucket), or to get to my server. Smartcard readers are great, but they're not much fun to add to the list of stuff to carry everywhere.

There's an alternate option: the Yubikey. Yubico appears to have made a version 4 of the Yubikey which has CCID (smartcard magic), U2F (2-factor for GitHub and Google, on Chrome), and their usual OTP token, all on the same tiny USB key. What's more, it is documented as supporting 4096 bit RSA keys, and includes some ECC support (more on this later).

Setting up GPG keys for use with smartcards is simple. You have the choice of either creating your keys locally and moving them to the smartcard, or generating them on the smartcard right away. In order to have a backup of my full key available in a secure location, I've opted to generate the keys off the card and transfer them.

For this, you will need one (or two) Yubikey 4 (or Yubikey 4 Nano, or if you don't mind being limited to 2048 bit keys, the Yubikey NEO, which can also do NFC), some backup media of your choice, and apparently, at least the following packages:

gnupg2 gnupg-agent libpth20 libccid pcscd scdaemon libksba8 opensc

You should do all of this on a trusted system, not connected to any network.

First, set up gnupg2 to a reasonable level of security. Edit ~/.gnupg/gpg.conf to pick the options you want; I've based my config on Jeffrey Clement's blog entry on the subject:

#default-key AABBCC90DEADBEEF
keyserver hkp://
keyid-format 0xlong
personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
You'll want to replace default-key later with the key you've created, and uncomment the line.

The downside to all of this is that you'll need to use gpg2 in all cases rather than gpg, which is still the default on Ubuntu and Debian. So far, gpg2 seems to work just fine for every use I've had (including debsign, after setting DEBSIGN_PROGRAM=gpg2 in ~/.devscripts).

You can now generate your master key:
gpg2 --gen-key

Then edit the key to add new UIDs (identities) and subkeys, which will each have their own different capabilities:

gpg2 --expert --edit-key 0xAABBCC90DEADBEEF
The best approach is to follow jclement's blog entry for this; there is no point in reiterating all of it. There's also a pretty complete guide from The Linux Foundation IT, though it seems to include a lot of steps that do not appear to be required on my system, in xenial.

Add the subkeys. You should have one for encryption, one for signing, and one for authentication. This works out pretty well, since there are three slots on the Yubikey, one for each of these capabilities.

If you also want your master key on a smartcard, you'll probably need a second Yubikey (that's why I wrote two earlier), which would only get used to sign other people's keys, extend expiration dates, generate new subkeys, etc. That one should be left in a very secure location.

This is a great point to backup all the keys you've just created:

gpg2 -a --export-secret-keys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.master.key
gpg2 -a --export-secret-subkeys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.sub.key
gpg2 -a --export 0xAABBCC90DEADBEEF >

Next step is to configure the smartcard/Yubikey to add your name, a URL for the public key, set the PINs, etc. Use the following command for this:
gpg2 --card-edit

Finally, go back to editing your GPG key:
gpg2 --expert --edit-key 0xAABBCC90DEADBEEF

From this point you can use toggle to select each subkey (using key #), move it to the smartcard (keytocard), and deselect it again (key #). To move the master key to the card, "toggle" out of toggle mode then back in, then immediately run 'keytocard'. GPG will ask if you're certain. There is no way to get a key back out of the card; if you want a local copy, you need to have made a backup first.
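As a rough sketch of what that interactive session looks like (the key numbers and slot prompts will vary with your own key layout, and this is illustrative rather than a verbatim transcript):

```
gpg2 --expert --edit-key 0xAABBCC90DEADBEEF
gpg> toggle          # switch to the secret-key view
gpg> key 1           # select the first subkey (marked with an asterisk)
gpg> keytocard       # pick the matching slot (signature/encryption/auth)
gpg> key 1           # deselect it again
gpg> key 2
gpg> keytocard
gpg> key 2
gpg> key 3
gpg> keytocard
gpg> key 3
gpg> save            # the local secret parts now just point at the card
```

After save, the on-disk secret keyring only holds stubs for the moved keys, which is why the backup has to happen before this step.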

Now's probably a great time to copy your key to a keyserver, so that people may eventually start to use it to send you encrypted mail, etc.

After transferring the keys, you may want to make a "second backup", which would only contain the "clues" for GPG to know on which smartcard to find the private part of your keys. This will be useful if you need to use the keys on another system.

Another option is to use the public portion of your key (saved somewhere, like on a keyserver), then have gpg2 discover that it's on a smartcard using:

gpg2 --card-status

Unfortunately, it appears to pick up either only the master key or only the subkeys, if you use separate smartcards. This may be a blessing in disguise, in that you'd still only use the master key on an offline, very secure system, and only the subkeys in your typical daily-use scenario.

Don't forget to generate a revocation certificate. This is essential if you ever lose your key, if it's compromised, or you're ever in a situation where you want to let the world know quickly not to use your key anymore:

gpg2 --gen-revoke 0xAABBCC90DEADBEEF
Store that data in a safe place.

Finally, more on backing up the GPG keys. It could be argued that keeping your master key on a smartcard might be a bad idea. After all, if the smartcard is lost, while it would be difficult to get the key out of the smartcard, you would probably want to treat it as compromised and get the key revoked. The same applies to keys kept on USB drives or on CD. A strong passphrase will help, but you still lost control of your key and at that point, no longer know whether it is still safe.

What's more, USB drives and CDs tend to eventually fail. CDs rot after a number of years, and USB drives just seem to not want to work correctly when you really need them. Paper is another option for backing up your keys, since there are ways (paperkey, for instance) to represent the data in a way that it could either be retyped or scanned back into digital data to be retrieved. Further securing a backup key could involve using gfshare to split it into multiple bits, in the hope that while one of its locations could be compromised (lost), you'll still have some of the others sufficient to reconstruct the key.
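As a sketch of what those two ideas could look like in practice (paperkey and the gfshare tools both ship in the Ubuntu archive as paperkey and libgfshare-bin; the filenames and the 3-of-5 split here are just example values):

```shell
# Produce a paper-friendly dump of the secret key material.
# paperkey strips the public parts, which you can always re-fetch
# from a keyserver when reconstructing the key.
gpg2 --export-secret-keys 0xAABBCC90DEADBEEF | paperkey --output master-paper.txt

# Split the binary backup into shares so that a subset can rebuild it
# (double-check gfsplit(1) for the exact meaning of -n and -m).
gfsplit -n 3 -m 5 0xAABBCC90DEADBEEF.master.key

# Later, recombine enough of the generated share files:
# gfcombine 0xAABBCC90DEADBEEF.master.key.*
```

The shares can then be stored in separate physical locations, so losing one of them doesn't hand anyone the full key.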

With the subkeys on the Yubikey, and provided gpg2 --card-status reports your card as detected, if you have the gpg-agent running with SSH support enabled you should be able to just run:

ssh-add -l

And have it list your card serial number. You can then use ssh-add -L to get the public key to use to add to authorized_keys files to use your authentication GPG subkey as a SSH key. If it doesn't work, make sure the gpg-agent is running and that ssh-add uses the right socket, and make sure pcscd isn't interfering (it seemed to get stuck in a weird state, and not shutting down automatically as it should after dealing with a request).
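For reference, enabling the agent's SSH support looks roughly like this; a sketch, assuming a recent GnuPG 2.1 (on versions where gpgconf doesn't know about agent-ssh-socket, the socket is typically ~/.gnupg/S.gpg-agent.ssh):

```shell
# Tell gpg-agent to also act as an SSH agent.
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf

# Point SSH at the agent's socket (put this in your shell profile).
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

# Restart the agent so it picks up the new configuration.
gpgconf --kill gpg-agent

# With the card inserted, its authentication key should now be listed:
ssh-add -l
```

If ssh-add still reports no identities, running gpg2 --card-status once usually nudges the agent into learning about the card.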

Whenever you try to use one of the subkeys (or the master key), rather than being asked for the passphrase for the key (which you should have set as a very difficult, absolutely unguessable string that you and only you could remember and think of), you will be asked to enter the User PIN set for the smartcard.

You've achieved proper two-factor authentication.

A note on ECC on the Yubikey: according to the marketing documentation, the Yubikey knows about ECC p256 and ECC p384. Unfortunately, it looks like the SafeCurves project considers these unsafe, since they do not meet all of the SafeCurves requirements. I'm not especially versed in cryptography, so this means I'll read up more on the subject, and stay away from the ECC implementation on the Yubikey 4 for now. However, it doesn't seem, at first glance, that this ECC implementation is meant for GPG at all. The Yubikey also has PIV magic which would allow it to be used as a pure SSH smartcard (rather than using a GPG authentication subkey for SSH), with a private certificate being generated by the card. These certificates could be created using RSA or ECC. I tried to play with it a bit (using RSA), following the SSH with PIV and PKCS11 document, but I didn't manage to make it work. It looks like the GPG functions might interfere with PIV in some way, or I could just not handle the ssh-agent the right way. I'm happy to be shown how to use this correctly.

Thursday, 5 November 2015

One manpage a day...

I first heard of this in a Google Doc, which was linked to by a wiki page in Swedish I was shown by someone on IRC. Unfortunately, I can't find any of these links anymore...

Documentation in some areas of Ubuntu is sorely lacking. Have you ever run into a case where you tried to use a shiny new program, or make one-shot use of some obscure old thing, without managing to find any documentation for it?

One of the first things we're trained to do as Unix users is to look for the manpage for a command. Yet many packages are missing manpages. Any missing manpage can be considered a bug for a program, since it means we're missing documentation for it, and people who might use that software would have no idea how to use it, or at least no idea how to use it effectively and make the most of the available features.

Two such examples I found on my own system, looking at the contents of /usr/bin are ubuntu-drivers (pkg:ubuntu-drivers-common) and ubuntu-support-status (pkg:update-manager-core). I'm not trying to point fingers at anything (in fact, I've contributed to ubuntu-drivers before, too), just showing that examples of commands missing a manpage can be trivially found.

Let's all try to find one manpage a day to write, and we'll quickly improve the state of documentation in Ubuntu by a noticeable amount. Try to write the manpage; otherwise, at least file a bug about the fact that it's missing, against the package that contains the binary.

For convenience, here's a command to list the commands in /usr/bin that man couldn't find a manpage for; the missing ones are reported on stderr (there may well be a better way to do this, and it will list some false positives):

ls -1 --file-type /usr/bin | sed 's/@//' | LC_ALL=C xargs -n1 man -w >/dev/null

Then, to find out which package contains that binary, use dpkg -S with the name of the command.
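Putting the two steps together, here's a hedged sketch of a loop that prints the owning package next to each command in /usr/bin that man can't find a page for. It assumes a Debian/Ubuntu system, and shares the same false positives as the one-liner above:

```shell
#!/bin/sh
# For every entry in /usr/bin without a manpage, print "package: command".
for f in /usr/bin/*; do
    cmd=${f##*/}
    if ! man -w "$cmd" >/dev/null 2>&1; then
        # dpkg -S maps a path back to the package that ships it;
        # "unknown" covers locally installed or diverted files.
        pkg=$(dpkg -S "$f" 2>/dev/null | cut -d: -f1)
        printf '%s: %s\n' "${pkg:-unknown}" "$cmd"
    fi
done
```

Sorting that output by package gives a ready-made worklist of bugs to file.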

Thanks to Stefan Bader, Colin King and Louis Bouchard for a stimulating discussion on documentation this morning. :)

Monday, 4 May 2015

Installer session at UOS

If you're interested in how Ubuntu gets installed on systems, want to ask about specific features, or have already filed bugs that you'd like to bring to our attention, watch for my session on the calendar:

It's currently scheduled for Tuesday May 5th at 18:00 UTC (that's in a little bit more than 24 hours!); but just in case it changes time, make sure you're marked as attending and subscribed to the blueprint.

As stated in the blueprint summary, I can't guarantee we'll get to everything, but it will be the right place to see what has to be done, and for anyone to pitch in time if they're interested in helping out!

Sunday, 29 March 2015

Preseeding installations

In early February, I completed a move from Canonical's Phonedations team to the Foundations team. Part of this new work means debugging a lot of different failure cases in the installer, grub, and other early-boot or low-level software, some of which require careful reproduction steps and quite a few install runs in VMs or on hardware.

Given the number of installations I do, I've started to keep preseed files around: the text files used to configure automatic installations. I've made them available so that they can be reused as necessary. Most of these preseed files make heavy use of the network to get the installation data and packages from the web, so they will need to be tweaked for use in an isolated network. They are annotated well enough that it should be possible for anyone to adapt them to their own needs. I will add to these files as I run across things to test and automate. I hope we can soon use some of them in new automated QA tests where appropriate, to help catch regressions.

For those not familiar with preseeding: these files can be used and referred to on the installation command line when starting from a network PXE boot, a CDROM, or pretty much any other installation medium. They are useful to tell the installer how you want the installation done, without having to answer all of the individual questions one by one in the ubiquity or debian-installer forms. The installer reads the preseed file and uses those answers without showing the prompts. This also means some of the files I make available should not be used lightly, as they will happily wipe disks without asking. You've been warned :)

To use this, you'll want to specify "preseed/file=/path/to/file" (or just file=) for a file directly accessible as a file system or through TFTP, or "preseed/url=http://URI/to/file" (or just url=) if it's available using HTTP. On d-i installs, this means you may also need to add "auto=true priority=critical" to avoid having to fill in language settings and the like (since the preseeds are typically only read after language, country, and network have been configured); and on ubiquity installs (for example, using a CD), you'll want to add 'only-ubiquity automatic-ubiquity' to the kernel command-line, again to keep the automated, minimal look and feel.
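To give a feel for the format, a minimal d-i preseed fragment could look like the following. This is a sketch only: the mirror, user name, and password are placeholder values, and a real file (like the ones in my collection) would cover more questions.

```
# Locale and keyboard are normally asked before the preseed is even read,
# hence the auto=true priority=critical trick mentioned above.
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/layoutcode string us

# Network and mirror.
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string
d-i mirror/http/directory string /ubuntu

# A user account (example values -- change these!).
d-i passwd/user-fullname string Example User
d-i passwd/username string example
d-i passwd/user-password password insecure
d-i passwd/user-password-again password insecure

# Partitioning: wipe the first disk without asking. Dangerous!
d-i partman-auto/method string regular
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
```

Each line names the owner (d-i), the question, its type, and the answer; anything you leave out is simply asked interactively as usual.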

I plan on writing another entry soon on how to debug early boot issues in VMs or hardware using serial. Stay tuned.

Saturday, 16 August 2014

Focusing and manual skills

I don't think anyone can argue against the idea that to be an effective developer, you need to somehow attain sufficient focus to fully take in the task at hand, and be sufficiently in the zone that answers to the problem come naturally. In a way, programming is like that: a transcendent form of art.

At least, it is, to some degree, for me. And I do feel that given sufficient focus, calm and quiet (or perhaps background noise, depending on the mood I'm in), I can get "in the zone", and solutions to what I'm trying to do come somewhat naturally. Not to say that I'm necessarily writing good code, but at least it forms some sort of sense in my mind.

People have different ways to achieve focus. Some meditate; for some it comes more easily than for others. For some people, it works well to execute some kind of ritual to get in the right frame of mind: those can be as insignificant as getting out of bed in a certain way (for those fortunate enough to work from home), or as complicated as necessary. I believe many, if not most, integrate it into their routine, to the point where they perhaps forget what it is they do to attain focus.

For me, it now happens to be shaving, and the associated processes. It used to be kind of a chore, until I picked up wet shaving, and in particular, straight razor shaving.

There's nothing quite like putting a naked, extremely sharp blade against your skin to get you to only think about one thing at a time :)

I won't lie, the first shave with that relic was a scary experience. I wasn't at all sure of myself, with only a few tips and some videos on YouTube as training. I had bought a straight razor from Le Centre du Rasoir near my house after stumbling on articles about barbershops on the web; it had somehow caught my interest.

Since then, I've slowly taken up the different tasks that go with the actual act of shaving with a straight razor (honing the blade, stropping, shaving, etc.), and picked up the different tools required (blade, strop, honing stones, shaving creams or soaps, and so on). It was as I was slowly honing and restoring four straight razors that came to me from eBay and as a gift from my father that I thought of writing this post, during a short break from the honing. Getting back home and putting the finishing touches on the four razors got me thinking, and I noticed I had again become much more relaxed just by taking the time to do one thing well, taking care in what I was doing.

I think every developer... well, everyone, can benefit from acquiring some kind of ritual like this, using our hands rather than our brains to achieve something. It's at least a great experience to get away from technology for a short while, revisiting old skills from earlier times.

As for the wet shaving itself, I'd be happy to respond to comments here, or blog again about it if there's enough interest in the subject; I'd love to hear that I'm not the only one in the Ubuntu and Debian communities crazy enough to take a blade to my face.