Thursday, 20 September 2018

Help needed to improve proposed migration

Hi!

Every once in a while, in the Foundations team, we do a coding day. A year ago, Lukasz and I wrote a script, following an idea from Steve Langasek, to provide "hints" and help for the next steps necessary for a package to migrate from -proposed to -release.

"ubuntu-archive-assistant" was born. I just pushed this to lp:ubuntu-dev-tools, after it being on its own in a separate git tree for a long while. I'd love to get help for feedback, as well as more people contributing fixes, etc. ubuntu-archive-assistant is designed to let you look at a specific package in -proposed and try to tell you what to do next to ensure it migrates from -proposed.

This is a great project for new contributors looking for something to work on, say, on their way to getting upload privileges in Ubuntu.
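If you want to give it a spin right away, the rough steps look like this (a sketch only; I'm assuming the git URL for lp:ubuntu-dev-tools and that the script runs straight from the tree -- adjust as needed):

# Fetch ubuntu-dev-tools and run the assistant from the source tree
git clone https://git.launchpad.net/ubuntu-dev-tools
cd ubuntu-dev-tools
./ubuntu-archive-assistant --help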

Here's how it works (it uses subcommands right now).

Without any further options than "ubuntu-archive-assistant proposed", it will list packages in -proposed and let you pick:
$ ./ubuntu-archive-assistant proposed
No source package name was provided. The following packages are blocked in proposed:

(1) gnome-shell-extension-multi-monitors (Age: 338 days)
(2) node-is-glob (Age: 278 days)
(3) node-concat-with-sourcemaps (Age: 264 days)
(4) node-postcss (Age: 231 days)
(5) node-source-map (Age: 229 days)
(6) android-platform-system-core (Age: 226 days)
(7) libdigidocpp (Age: 226 days)
(8) qesteidutil (Age: 225 days)
(9) schleuder (Age: 218 days)
(10) ncbi-blast+ (Age: 216 days)
(11) node-postcss-filter-plugins (Age: 213 days)
(12) node-postcss-load-options (Age: 213 days)
(13) node-postcss-load-plugins (Age: 213 days)
(14) node-postcss-minify-font-values (Age: 213 days)
(15) node-postcss-load-config (Age: 209 days)
(16) live-config (Age: 207 days)
Page -1-. Press any key for next page or Q to select a package.
Which package do you want to look at? 9
Next steps for schleuder 3.2.2-1:
  Fix missing builds: amd64
     https://launchpad.net/ubuntu/+source/schleuder/3.2.2-1

If you specify which package you want to look at, it will give you the specifics for that package (examples here are for what is currently in -proposed):
$ ./ubuntu-archive-assistant proposed -s qesteidutil
Next steps for qesteidutil 0.3.1-0ubuntu4:
  Fix missing builds: amd64, arm64, armhf, i386, ppc64el, s390x
     https://launchpad.net/ubuntu/+source/qesteidutil/0.3.1-0ubuntu4
$ ./ubuntu-archive-assistant proposed -s android-platform-system-core
Next steps for android-platform-system-core 1:7.0.0+r33-2build1:
  Fix missing builds: amd64, arm64, armhf, i386
     https://launchpad.net/ubuntu/+source/android-platform-system-core/1:7.0.0+r33-2build1 

You can even get more information about the next steps for a package, by enabling --debug or --verbose:

$ ./ubuntu-archive-assistant proposed -s live-config
Next steps for live-config 5.20180224:
  Fix unsatisfiable dependencies in live-config:

$ ./ubuntu-archive-assistant proposed --verbose -s live-config
live-config is not considered ✘
Next steps for live-config 5.20180224:
  Fix unsatisfiable dependencies in live-config:
    sysvinit-core | sysvinit (<< 2.88dsf-44) can not be satisfied on amd64 ✘
      sysvinit-core only exists in Debian ✘
 
$ ./ubuntu-archive-assistant proposed --debug -s live-config
live-config is not considered ✘
Next steps for live-config 5.20180224:
DEBUG: reasons: ['depends'] 
  Fix unsatisfiable dependencies in live-config:
    sysvinit-core | sysvinit (<< 2.88dsf-44) can not be satisfied on amd64 ✘
      sysvinit-core only exists in Debian ✘
         DEBUG: Is this package blacklisted? Should it be synced?

We've covered some of the common reasons for a package to be stuck in proposed, but there are a ton of others. We'll need help to improve the tooling and make it useful for everyone wishing to work on proposed migration. There's a lot more that can be done, including spending time to parse update_output.txt (or better yet, a YAML representation of it) and testing package installation automatically to figure out what packages need no-change rebuilds, etc. A lot of it is integration of other tools that already exist.

That's where you come in.

This is a great way to learn a lot more about what happens to packages after they are uploaded, and what you can do to ensure your own uploads quickly become available to all Ubuntu users. Many of the improvements can be as simple as contributing a test for a single failure case for packages in -proposed.

More to come about ubuntu-archive-assistant. There are other subcommands in it than just "proposed". :)

Wednesday, 16 May 2018

Building a local testing lab with Ubuntu, MAAS and netplan

Overview

I'm presenting here the technical aspects of setting up a small-scale testing lab in my basement, using as little hardware as possible and keeping costs to a minimum. Systems needed to be mobile where possible, easy to replace, and as flexible as possible to support various testing scenarios. I may wish to bring part of this network with me on short trips to give a talk, for example.

One of the core aspects of this lab is its use of the network. I have prior experience with Cisco hardware, so I picked some relatively cheap devices off eBay: a decent layer 3 switch (Cisco C3750, 24 ports, with PoE support in case I'd want to start using that) and a small Cisco ASA 5505 to act as a router. The router's configuration is basic, just enough to make sure this lab can be isolated behind a firewall and have an IP on all networks. The switch's config is even simpler, and consists of setting up VLANs for each segment of the lab (different networks for different things). It connects infrastructure (the MAAS server, and other systems that just need to always be up) via 802.1q trunks; those servers are configured with IPs on each appropriate VLAN.

VLAN 1 is my "normal" home network, so that things will work correctly even without VLAN support (which means VLAN 1 is set as the native VLAN and untagged wherever appropriate). VLAN 10 is "staging", for use with my own custom boot server. VLAN 15 is "sandbox", for use with MAAS.

The switch is only powered on when necessary, to save on electricity costs and to avoid hearing its whine (since I work in the same room). This means it is usually powered off, as the ASA already provides many ethernet ports. The telco rack in use was salvaged, and so were most brackets, except for the specialized bracket for the ASA, which was bought separately. Total cost for this setup is estimated at about 500$, since everything came from cheap eBay listings or salvaged, reused equipment.

The Cisco hardware was specifically selected because I had prior experience with it, so I could make sure the features I wanted were supported: VLANs, basic routing, and logs I can make sense of. Any hardware would do -- VLANs aren't absolutely required, but given many network ports on a single switch, they tend to avoid the need for multiple switches.

My main DNS / DHCP / boot server is a Raspberry Pi 2. It serves both the home network and the staging network. DNS is set up so that the home network can resolve names on any of the networks, by appending home.example.com, staging.example.com, or even maas.example.com as a domain after the name of the system. Name resolution for the maas.example.com domain is forwarded to the MAAS server. More on all of this later.

The MAAS server has been set up on an old Thinkpad X230 (my former work laptop); I've been routinely using it (and reinstalling it) for various tests, but that meant reinstalling often, possibly conflicting with other projects if I tried to test more than one thing at a time. It was repurposed to just run Ubuntu 18.04, with a MAAS region and rack controller installed, along with libvirt (qemu) available over the network to remotely start virtual machines. It is connected to both VLAN 10 and VLAN 15.

Additional testing hardware can be attached to either VLAN 10 or VLAN 15 as appropriate -- the C3750 is configured so "top" ports are in VLAN 10, and "bottom" ports are in VLAN 15, for convenience. The first four ports are configured as trunk ports if necessary. I do use a Dell Vostro V130 and a generic Acer Aspire laptop for testing "on hardware". They are connected to the switch only when needed.

Finally, "clients" for the lab may be connected anywhere (but are likely to be on the "home" network). They are able to reach the MAAS web UI directly, or can use MAAS CLI or any other features to deploy systems from the MAAS servers' libvirt installation.

Setting up the network hardware

I will avoid going into the details of the Cisco hardware too much; the configuration is specific to this hardware. The ASA has a restrictive firewall that blocks off most things, and allows SSH and HTTP access. Things that need to access the internet go through the MAAS internal proxy.

For simplicity, the ASA is always .1 in any subnet, and the switch is .2 where required (it was also made accessible over a serial cable from the MAAS server). The raspberrypi is always .5, and the MAAS server is always .25. DHCP ranges reserve anything at .25 and below for static assignments on the staging and sandbox networks; since I use a /23 subnet for home, half of it is for static assignments and the other half is for DHCP.

MAAS server hardware setup

Netplan is used to configure the network on Ubuntu systems. The MAAS server's configuration looks like this:

network:
    ethernets:
        enp0s25:
            addresses: []
            dhcp4: true
            optional: true
    bridges:
        maasbr0:
            addresses: [ 10.3.99.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan15 ]
        staging:
            addresses: [ 10.3.98.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan10 ]
    vlans:
        vlan15:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 15
            link: enp0s25
        vlan10:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 10
            link: enp0s25
    version: 2
Both VLANs are behind bridges so as to allow placing virtual machines on any network. Additional configuration files were added to define these bridges for libvirt (/etc/libvirt/qemu/networks/maasbr0.xml):
<network>
<name>maasbr0</name>
<bridge name="maasbr0">
<forward mode="bridge">
</forward></bridge></network>
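If you prefer not to drop files into /etc/libvirt directly, the same definition can be loaded with virsh instead (a sketch, assuming the XML above is saved locally as maasbr0.xml; repeat for the staging bridge):

# Register, autostart and start the maasbr0 network from the XML above
virsh net-define maasbr0.xml
virsh net-autostart maasbr0
virsh net-start maasbr0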
Libvirt also needs to be accessible from the network, so that MAAS can drive it using the "pod" feature. Uncomment "listen_tcp = 1", and set authentication as you see fit, in /etc/libvirt/libvirtd.conf. Also set:

libvirtd_opts="-l"

in /etc/default/libvirtd, then restart the libvirtd service.
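Once libvirtd is restarted, it's worth checking from another machine that the daemon really is reachable over TCP before wiring it into MAAS (a minimal sketch; 10.3.99.25 is my MAAS server's address on the sandbox VLAN, so substitute your own):

# On the MAAS server: restart libvirt so the new listen options take effect
sudo systemctl restart libvirtd
# From a client on the same network: list all domains over TCP
virsh -c qemu+tcp://10.3.99.25/system list --all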


dnsmasq server

The raspberrypi has a similar netplan config, but sets up static addresses on all interfaces (since it is the DHCP server). Here, dnsmasq is used to provide DNS, DHCP, and TFTP. The configuration is split across multiple files; here are some of the important parts:
dhcp-leasefile=/depot/dnsmasq/dnsmasq.leases
dhcp-hostsdir=/depot/dnsmasq/reservations
dhcp-authoritative
dhcp-fqdn
# copied from maas, specify boot files per-arch.
dhcp-boot=tag:x86_64-efi,bootx64.efi
dhcp-boot=tag:i386-pc,pxelinux
dhcp-match=set:i386-pc, option:client-arch, 0 #x86-32
dhcp-match=set:x86_64-efi, option:client-arch, 7 #EFI x86-64
# pass search domains everywhere, it's easier to type short names
dhcp-option=119,home.example.com,staging.example.com,maas.example.com
domain=example.com
no-hosts
addn-hosts=/depot/dnsmasq/dns/
domain-needed
expand-hosts
no-resolv
# home network
domain=home.example.com,10.3.0.0/23
auth-zone=home.example.com,10.3.0.0/23
dhcp-range=set:home,10.3.1.50,10.3.1.250,255.255.254.0,8h
# specify the default gw / next router
dhcp-option=tag:home,3,10.3.0.1
# define the tftp server
dhcp-option=tag:home,66,10.3.0.5
# staging is configured as above, but on 10.3.98.0/24.
# maas.example.com: "isolated" maas network.
# send all DNS requests for X.maas.example.com to 10.3.99.25 (maas server)
server=/maas.example.com/10.3.99.25
# very basic tftp config
enable-tftp
tftp-root=/depot/tftp
tftp-no-fail
# set some "upstream" nameservers for general name resolution.
server=8.8.8.8
server=8.8.4.4


DHCP reservations (to avoid IPs changing across reboots for some systems I know I'll want to reach regularly) are kept in /depot/dnsmasq/reservations (as per the above), and look like this:

de:ad:be:ef:ca:fe,10.3.0.21

I put one reservation per file, with meaningful filenames. This helps with debugging and with making changes when network cards are swapped, etc. The names used for the files do not match DNS names, but are instead short descriptions of the device (such as "thinkpad-x230"), since I may want to rename things later.

Similarly, files in /depot/dnsmasq/dns have names describing the hardware, but then contain entries in hosts file form:

10.3.0.21 izanagi

Again, this is used so any rename of a device only requires changing the content of a single file in /depot/dnsmasq/dns, rather than also requiring renaming other files, or matching MAC addresses to make sure the right change is made.
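Note that dnsmasq does not always pick up edits to these files on its own; sending it a SIGHUP makes it re-read the hosts and reservation files and clear its cache (a small sketch, assuming dnsmasq runs as a regular systemd service on the raspberrypi):

# Ask dnsmasq to re-read host/reservation files and flush its cache
sudo pkill -HUP dnsmasq
# ...or simply restart the service outright
sudo systemctl restart dnsmasq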


Installing MAAS

At this point, the configuration for the networking should already be completed, and libvirt should be ready and accessible from the network.

The MAAS installation process is very straightforward. Simply install the maas package, which will pull in maas-rack-controller and maas-region-controller.

Once the configuration is complete, you can log in to the web interface. Use it to make sure, under Subnets, that only the MAAS-driven VLAN has DHCP enabled. To enable or disable DHCP, click the link in the VLAN column, and use the "Take action" menu to provide or disable DHCP.

This is necessary if you do not want MAAS to fully manage the whole network and provide DNS and DHCP for all systems. In my case, I am leaving MAAS in its own isolated network, since I keep the server offline when I do not need it (and the home network needs to keep working while I'm away).

Some extra modifications were made to the stock MAAS configuration to change the behavior of deployed systems. For example, I often test packages in -proposed, so it is convenient to have that pocket enabled by default, with the archive pinned to avoid accidentally installing those packages. Given that I also do netplan development and might try things that would break network connectivity, I also make sure there is a static password for the 'ubuntu' user, and that I have my own account created (again, with a static, known, and stupidly simple password) so I can connect to the deployed systems on their console. I have added the following to /etc/maas/preseed/curtin_userdata:


late_commands:
[...]
  pinning_00: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Package: *' >> /etc/apt/preferences.d/proposed"]
  pinning_01: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin: release a={{release}}-proposed' >> /etc/apt/preferences.d/proposed"]
  pinning_02: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/proposed"]
apt:
  sources:
    proposed.list:
      source: deb $MIRROR {{release}}-proposed main universe
write_files:
  userconfig:
    path: /etc/cloud/cloud.cfg.d/99-users.cfg
    content: |
      system_info:
        default_user:
          lock_passwd: False
          plain_text_passwd: [REDACTED]
      users:
        - default
        - name: mtrudel
          groups: sudo
          gecos: Matt
          shell: /bin/bash
          lock-passwd: False
          passwd: [REDACTED]


The pinning_ entries are simply added to the end of the "late_commands" section.

For the libvirt instance, you will need to add it to MAAS using the maas CLI tool. For this, you will need to get your MAAS API key from the web UI (click your username, then look under MAAS keys), and run the following commands:

maas login local   http://localhost:5240/MAAS/  [your MAAS API key]
maas local pods create type=virsh power_address="qemu+tcp://127.0.1.1/system"

The pod will be given a name automatically; you'll then be able to use the web interface to "compose" new machines and control them via MAAS. If you want to remotely use the systems' Spice graphical console, you may need to change settings for the VM to allow Spice connections on all interfaces, and power it off and on again.


Setting up the client

Deployed hosts are now reachable normally over SSH by using their fully-qualified name and the ubuntu user (or another user you already configured):

ssh ubuntu@vocal-toad.maas.example.com

There is an inconvenience with using MAAS to control virtual machines like this: since they are easy to reinstall, their host hashes will change frequently if you access them via SSH. There's a way around that, using a specially crafted ssh_config (~/.ssh/config). Here, I'm sharing the relevant parts of the configuration file I use:

CanonicalDomains home.example.com
CanonicalizeHostname yes
CanonicalizeFallbackLocal no
HashKnownHosts no
UseRoaming no
# canonicalize* options seem to break github for some reason
# I haven't spent much time looking into it, so let's make sure it will go through the
# DNS resolution logic in SSH correctly.
Host github.com
  Hostname github.com.
Host *.maas
  Hostname %h.example.com
Host *.staging
  Hostname %h.example.com
Host *.maas.example.com
  User ubuntu
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host *.staging.example.com
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
Host *.lxd
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(lxc list -c s4 $(basename %h .lxd) | grep RUNNING | cut -d' ' -f4) %p
Host *.libvirt
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(virsh domifaddr $(basename %h .libvirt) | grep ipv4 | sed 's/.* //; s,/.*,,') %p

As a bonus, I have included some code that makes it easy to SSH to local libvirt systems or lxd containers.

The net effect is that I can avoid having the warnings about changed hashes for MAAS-controlled systems and machines in the staging network, but keep getting them for all other systems.

Now, this means that to reach a host on the MAAS network, a client system only needs to use the short name with .maas tacked on:

vocal-toad.maas
And the system will be reachable, and you will not have any warning about known host hashes (but do note that this is specific to a sandbox environment; you definitely want to see such warnings in a production environment, as they can indicate that the system you are connecting to might not be the one you think).

It's not bad, but the goal would be to use just the short names. I am working around this using a tiny script:

#!/bin/sh
# Append .maas to the host name and pass any remaining arguments to ssh
host="$1"; shift
exec ssh "${host}.maas" "$@"

I saved this as "sandbox" in ~/bin and made it executable.

And with this, the lab is ready.

Usage

To connect to a deployed system, one can now do the following:


$ sandbox vocal-toad
Warning: Permanently added 'vocal-toad.maas.example.com,10.3.99.12' (ECDSA) to the list of known hosts.
Welcome to Ubuntu Cosmic Cuttlefish (development branch) (GNU/Linux 4.15.0-21-generic x86_64)
[...]
ubuntu@vocal-toad:~$
ubuntu@vocal-toad:~$ id mtrudel
uid=1000(mtrudel) gid=1000(mtrudel) groups=1000(mtrudel),27(sudo)

Mobility

One important point for me was the mobility of the lab. While some of the network infrastructure must remain in place, I am able to undock the Thinkpad X230 (the MAAS server), and connect it via wireless to an external network. It will continue to "manage" or otherwise control VLAN 15 on the wired interface. In these cases, I bring another small configurable switch: a Cisco Catalyst 2960 (8 ports + 1), which is set up with the VLANs. A client could then be connected directly on VLAN 15 behind the MAAS server, and is free to make use of the MAAS proxy service to reach the internet. This allows me to bring the MAAS server along with all its virtual machines, as well as to be able to deploy new systems by connecting them to the switch. Both systems fit easily in a standard laptop bag along with another laptop (a "client").

All the systems used in the "semi-permanent" form of this lab can easily run on a single home power outlet, so issues are unlikely to arise in mobile form. The smaller switch is rated for 0.5amp, and two laptops do not pull very much power.

Next steps

One of the issues that remains with this setup is that it is limited to either starting MAAS images or starting images that are custom built and hooked up to the raspberry pi, which leads to a high effort to integrate new images:
  • Custom (desktop?) images could be loaded into MAAS, to facilitate starting a desktop build.
  • Automate customizing installed packages based on tags applied to the machines.
    • juju would shine there; it can deploy workloads based on available machines in MAAS with the specified tags.
    • Also install a generic system with customized packages, not necessarily single workloads, and/or install extra packages after the initial system deployment.
      • This could be done using chef or puppet, but will require setting up the infrastructure for it.
    • Integrate automatic installation of snaps.
  • Load new images into the raspberry pi automatically for netboot / preseeded installs
    • I have scripts for this, but they will take time to adapt
    • Space on such a device is at a premium, there must be some culling of old images

Thursday, 15 March 2018

Call for testing: netplan.io in 18.04

Since 17.10, netplan has been the default network configuration tool in Ubuntu. Since then, it has grown in features, bug fixes, and even got its package renamed in the archive from "nplan" to netplan.io. We added better routing, improved handling for bridges, support for marking devices as "optional" for boot (so that the system doesn't wait for them to come up at boot time), lots of documentation updates... There's even been work to get it building for other distros.


We have a website for it, too: netplan.io


As we get closer to the release of Ubuntu 18.04, it is past due to involve everyone in testing netplan and making sure it is solid and as featureful as possible for a wide range of use cases.


This is where you get to participate.


Let us know about any feature gaps that remain in what netplan supports, so that we can add the features where possible, or so that these gaps can be properly documented if they can't be closed by release time.


Report any bugs you find in netplan on Launchpad.


If you are unsure whether something is a bug, it might well be, so it doesn't hurt to file a bug. At the very least, we do want to know if something feels really difficult to do, so we can look into improving the experience.


If you're unsure how to do something you can look up questions and answers, or add your own, on AskUbuntu here:
https://askubuntu.com/questions/tagged/netplan


Netplan is being actively developed and we can use your help; so if there's one feature you care deeply about, or a bug that bugs you and you want to have a hand in fixing it, you can also jump right into the code on GitHub: http://github.com/CanonicalLtd/netplan

Wednesday, 7 March 2018

Backing up GPG keys

Using PGP/GPG keys over a long period of time (whether letting keys expire or extending their expiration dates), combined with the potential for travel, hardware failure, or life's other events, means that eventually, rather than just potentially, you will end up in a situation where a key is lost or damaged, or where you otherwise need to proceed with some disaster recovery technique.

These techniques could be as simple as forgetting about the key altogether and letting it live forever on the Internet, unused. It could also be that you were clever and saved a revocation certificate somewhere other than where your private key is backed up; but what if you didn't?

What if you did not print the revocation certificate? Or you just really don't feel like re-typing half a gazillion characters?

I wouldn't wish it on anyone, but there will always be a risk of your "backup options" failing; so I'm sharing my personal backup methods here.

I back up my GPG keys, which I use both at and outside of work, on multiple different media:


  • "Daily use" happens using a Yubikey that holds securely the private part of the keys (it can't be extracted from the smartcard), as well as the public part. I've already written about this two years ago, on this blog.
  • The first layer of backup is a LUKS-encrypted USB key. The USB key must obviously be encrypted to block most attempts at accessing its contents; and it is a key that I usually carry on my person at all times, like the Yubikeys -- I also use it to back up other files I can't live without, such as a password vault, some other certificates, and copies of ID documents in case of loss when I travel.
  • The next layer is on paper. Well, cardstock actually, to avoid wanting to fold it. This is the process I want to dig into deeper here.

It turns out that backing up secret keys on paper is pretty straightforward, and a perfectly fine thing to do. You will obviously want to keep the paper copies in a secure location that only you have access to, as safe from fire as possible (or at least somewhere unlikely to burn down at the same time as you'd lose the other backups).

paperkey is a generally accepted way of saving the private part of your GPG key. It does a decent job of saving things in a printable form, from which point you would go ahead and re-type, or use OCR to recover, the text generated by paperkey:

paperkey --secret-key secret.gpg --output printme.txt

This retains the same protections as your original key. You should have added a passphrase to it anyway, so even if the paper copy were found and used to recover the key, you would still be protected by the complexity of your passphrase.

But this depends on OCR working correctly, especially on an aging medium such as paper, or on you spending many hours re-typing the contents and potentially tracking down typos. There's error correction, but that doesn't sound like fun at all. When you want to recover your key, presumably it is because you really do need it as soon as possible.

Back in 2015, when I generated my latest keys, I found a blog post that explained how to use QR codes to back up data. QR codes have the benefit of being very resilient to corruption, and above all, do not require typing. They are however limited in size, to 177x177 squares, or about 1200 characters of storage.

Along with that blog post, I also found out about DataMatrix codes, which are quite similar to QR codes but can store a bit more data per symbol (about 1500 bytes per image at the largest size). Pick the format you prefer; I picked DataMatrix. Simply adjust the chunk size you split to in the commands below.

One might wish to save the paperkey output or the private key directly (obviously, saving the private key might mean more chunks to print), and that can be done using the programs in dmtx-utils:
# Split the printable text into 1500-byte chunks named part-aa, part-ab, ...
cat printme.txt | split -b 1500 - part-
rm printme.txt
# Encode each chunk as a DataMatrix image that can be printed
for part in part-*; do
    dmtxwrite -e 8 ${part} > ${part}.png
done

You will be left with multiple parts of the file you originally split (without a file extension), as well as corresponding images in PNG format that can be printed, and later scanned, to recover the original.

Keep these in a safe location and your key should be recoverable years down the line. It's not a bad idea to "pretend" there's a catastrophe and attempt to recover your key every few months, just to be sure you can go through the steps easily and that the paper keys are in good shape.

Recovery is simple:

for file in *.png; do dmtxread $file >> printme.txt; done

If all went well, the original and recovered files should be identical, and you just avoided a couple of hours of typing.
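From there, paperkey can reconstruct a usable secret key by combining the recovered text with your public key, which it needs since it only stores the secret parts (a sketch; pubkey.gpg here stands for an export of your public key):

# Rebuild the secret key from the recovered paperkey text and the public key
paperkey --pubring pubkey.gpg --secrets printme.txt --output secret-recovered.gpg
gpg --import secret-recovered.gpg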

Stay safe!

Tuesday, 1 August 2017

How to sign things for Secure Boot

Secure Boot signing


The whole concept of Secure Boot requires that there exists a trust chain, from the very first thing loaded by the hardware (the firmware code), all the way through to the last things loaded by the operating system as part of the kernel: the modules. In other words, it's not just the firmware and bootloader that require signatures; the kernel and modules do too. People don't generally change their firmware or bootloader all that much, but what about rebuilding a kernel or adding extra modules provided by hardware manufacturers?

The Secure Boot story in Ubuntu includes the fact that you might want to build your own kernel (though we do hope you can just use the generic kernel we ship in the archive), and that you may install your own kernel modules. This means signing UEFI binaries and kernel modules, each of which can be done with its own set of tools.

But first, more on the trust chain used for Secure Boot.


Certificates in shim


To begin signing things for UEFI Secure Boot, you need to create an X509 certificate that can be imported in firmware; either directly through the manufacturer firmware, or, more easily, by way of shim.

Creating a certificate for use in UEFI Secure Boot is relatively simple; openssl can do it in a few commands. Now, we need to create an SSL certificate for module signing...

First, let's create some config to let openssl know what we want to create (let's call it 'openssl.cnf'):

# This definition stops the following lines choking if HOME isn't
# defined.
HOME                    = .
RANDFILE                = $ENV::HOME/.rnd 
[ req ]
distinguished_name      = req_distinguished_name
x509_extensions         = v3
string_mask             = utf8only
prompt                  = no

[ req_distinguished_name ]
countryName             = CA
stateOrProvinceName     = Quebec
localityName            = Montreal
0.organizationName      = cyphermox
commonName              = Secure Boot Signing
emailAddress            = example@example.com

[ v3 ]
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer
basicConstraints        = critical,CA:FALSE
extendedKeyUsage        = codeSigning,1.3.6.1.4.1.311.10.3.6,1.3.6.1.4.1.2312.16.1.2
nsComment               = "OpenSSL Generated Certificate"
Either update the values under "[ req_distinguished_name ]", or get rid of that section altogether (along with the "distinguished_name" field) and remove the "prompt" field. In the latter case, openssl will ask you for the values you want to set for the certificate identification.

The identification itself does not matter much, but some of the later values are important: for example, we do want to make sure "1.3.6.1.4.1.2312.16.1.2" is included in extendedKeyUsage, as it is that OID that will tell shim this is meant to be a module signing certificate.

Then, we can start the fun part: creating the private and public keys.

openssl req -config ./openssl.cnf \
        -new -x509 -newkey rsa:2048 \
        -nodes -days 36500 -outform DER \
        -keyout "MOK.priv" \
        -out "MOK.der"
This command will create both the private and public parts of the certificate used to sign things. You need both files to sign; only the public part (MOK.der) is needed to enroll the key in shim.
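Before enrolling anything, it doesn't hurt to inspect the certificate you just generated and confirm the identification fields and the extendedKeyUsage OIDs are what you expect (a quick sketch):

# Dump the DER certificate, including its extensions, in readable form
openssl x509 -in MOK.der -inform DER -noout -text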


Enrolling the key


Now, let's enroll that key we just created in shim. That makes it so it will be accepted as a valid signing key for any module the kernel wants to load, as well as a valid key should you want to build your own bootloader or kernels (provided that you don't include that '1.3.6.1.4.1.2312.16.1.2' OID discussed earlier).

To enroll a key, use the mokutil command:
sudo mokutil --import MOK.der
Follow the prompts to enter a password; you will be asked for it again shortly, to make sure you really do want to enroll the key.

Once this is done, reboot. Just before loading GRUB, shim will show a blue screen (which is actually another piece of the shim project called "MokManager"). Use that screen to select "Enroll MOK" and follow the menus to finish the enrolling process. You can also look at some of the properties of the key you're trying to add, just to make sure it's indeed the right one, using "View key". MokManager will ask you for the password we typed in earlier when running mokutil; it will then save the key, and the system will reboot again.


Let's sign things


Before we sign, let's make sure the key we added is really seen by the kernel. To do this, we can look at /proc/keys:

$ sudo cat /proc/keys
0020f22a I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
0022a089 I------     2 perm 1f0b0000     0     0 keyring   .builtin_trusted_keys: 1
003462c9 I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
00709f1c I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
00f488cc I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
[...]
1dcb85e2 I------     1 perm 1f030000     0     0 asymmetri Build time autogenerated kernel key: eae8fa5ee6c91603c031c81226b2df4b135df7d2: X509.rsa 135df7d2 []
[...]

Just make sure a key exists there with the attributes (commonName, etc.) you entered earlier.

To sign kernel modules, we can use the kmodsign command:
kmodsign sha512 MOK.priv MOK.der module.ko
module.ko should be the file name of the kernel module file you want to sign. The signature will be appended to it by kmodsign, but if you would rather keep the signature separate and concatenate it to the module yourself, you can do that too (see 'kmodsign --help').

You can validate that the module is signed by checking that it includes the string '~Module signature appended~':

$ hexdump -Cv module.ko | tail -n 5
00002c20  10 14 08 cd eb 67 a8 3d  ac 82 e1 1d 46 b5 5c 91  |.....g.=....F.\.|
00002c30  9c cb 47 f7 c9 77 00 00  02 00 00 00 00 00 00 00  |..G..w..........|
00002c40  02 9e 7e 4d 6f 64 75 6c  65 20 73 69 67 6e 61 74  |..~Module signat|
00002c50  75 72 65 20 61 70 70 65  6e 64 65 64 7e 0a        |ure appended~.|
00002c5e

You can also use hexdump this way to check that the signing key is the one you created.
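modinfo gives a friendlier view of the same thing: once the signature is appended, the module gains signer and signature fields you can inspect (a sketch; the exact field names can vary a little between kernel versions):

# List the signature-related fields of the signed module
modinfo module.ko | grep -i -E 'signer|sig_key|sig_hashalgo'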


What about kernels and bootloaders?


To sign a custom kernel or any other EFI binary you want to have loaded by shim, you'll need to use a different command: sbsign. Unfortunately, we'll need the certificate in a different format in this case.

Let's convert the certificate we created earlier into PEM:

openssl x509 -in MOK.der -inform DER -outform PEM -out MOK.pem

Now, we can use this to sign our EFI binary:

sbsign --key MOK.priv --cert MOK.pem my_binary.efi --output my_binary.efi.signed
As long as the signing key is enrolled in shim and does not contain the OID from earlier (since that limits the use of the key to kernel module signing), the binary should be loaded just fine by shim.
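You can double-check the result with sbverify, which ships alongside sbsign in the sbsigntool package (a sketch, verifying against the PEM certificate converted above):

# Verify the signed EFI binary against our certificate
sbverify --cert MOK.pem my_binary.efi.signed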


Doing signatures outside shim


If you don't want to use shim to handle keys (though I do recommend that you use it), you will need to create different certificates, one of which is the PK (Platform Key) for the system, which you can enroll in firmware directly via KeyTool or some firmware tool provided with your system. I will not elaborate on the steps to enroll the keys in firmware, as they tend to vary from system to system, but the main idea is to put the system in Secure Boot "Setup Mode", run KeyTool (which is its own EFI binary you can build yourself and run), and enroll the keys -- first installing the KEK and DB keys, and finishing with the PK. These files need to be available from some FAT partition.

I do have a script to generate the right certificates and files, which I can share (it is itself copied from somewhere I can't remember):

#!/bin/bash
echo -n "Enter a Common Name to embed in the keys: "
read NAME
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME PK/" -keyout PK.key \
        -out PK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME KEK/" -keyout KEK.key \
        -out KEK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME DB/" -keyout DB.key \
        -out DB.crt -days 3650 -nodes -sha256
openssl x509 -in PK.crt -out PK.cer -outform DER
openssl x509 -in KEK.crt -out KEK.cer -outform DER
openssl x509 -in DB.crt -out DB.cer -outform DER
GUID=`python3 -c 'import uuid; print(uuid.uuid1())'`
echo $GUID > myGUID.txt
cert-to-efi-sig-list -g $GUID PK.crt PK.esl
cert-to-efi-sig-list -g $GUID KEK.crt KEK.esl
cert-to-efi-sig-list -g $GUID DB.crt DB.esl
rm -f noPK.esl
touch noPK.esl
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK PK.esl PK.auth
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK noPK.esl noPK.auth
chmod 0600 *.key
echo ""
echo ""
echo "For use with KeyTool, copy the *.auth and *.esl files to a FAT USB"
echo "flash drive or to your EFI System Partition (ESP)."
echo "For use with most UEFIs' built-in key managers, copy the *.cer files."
echo ""
The same logic as earlier applies: sign things using sbsign or kmodsign as required (use the .crt files with sbsign, and .cer files with kmodsign); and as long as the keys are properly enrolled in the firmware or in shim, they will be successfully loaded.


What's coming up for Secure Boot in Ubuntu


Signing things is complex -- you need to create SSL certificates and enroll them in firmware or shim... You need a fair amount of prior knowledge of how Secure Boot works, and of what commands to use. It's rather obvious that this isn't within everybody's reach, and it makes for a somewhat bad experience in the first place. For that reason, we're working on making key creation, enrollment and signing easier when installing DKMS modules.

update-secureboot-policy should soon let you generate and enroll a key; and DKMS will be able to sign things by itself using that key.

Tuesday, 20 June 2017

Netplan by default in 17.10

Friday, I uploaded an updated nplan package (version 0.24) to change its Priority: field to important, as well as an update of ubuntu-meta (following a seeds update), to replace ifupdown with nplan in the minimal seed.

What this means concretely is that nplan should now be installed by default on all images as part of ubuntu-minimal, with ifupdown dropped at the same time.

For the time being, ifupdown is still installed by default, due to the way debootstrap generates the very minimal images used as a base for other images -- its base set of packages depends only on the Priority: field of packages. Thus, nplan was added, but ifupdown still needs a change (which I will do shortly) to disappear from all images.

The intent is that nplan would now be the standard way of configuring networks. I've also sent an email about this to ubuntu-devel-announce@.

I've already written a bit about what netplan is and does, and I have still more to write on the subject (discussing syntax and how to do common things). We especially like how using a purely declarative syntax makes things easier for everyone (and if you can't do what you want that way, then it's a bug you should report).

MAAS, cloud-init and others have already started to support writing netplan configuration.

The full specification (summary wiki page and a blueprint reachable from it) for the migration process is available here.

While I get to writing something comprehensive about how to use the netplan YAML to configure networks, if you want to know more there's always the manpage, which is the easiest documentation to use. It should always be up to date with the version of netplan available on your release (since we backported the latest version to Xenial, Yakkety, and Zesty), and is accessible via:

man 5 netplan

To make things "easy" however, you can also check out the netplan documentation directly from the source tree here:

https://git.launchpad.net/netplan/tree/doc/netplan.md

There's also a wiki page I started to get ready that links to the most useful things, such as an overview of the design of netplan, some discussion on the renderers we support and some of the commands that can be used.

We even have an IRC channel on Freenode: #netplan

I think you'll find that using netplan makes configuring networks easy and even enjoyable; but if you run into an issue, be sure to file a bug on Launchpad here:

Wednesday, 24 May 2017

An overview of UEFI Secure Boot on Ubuntu

Secure Boot is here

Ubuntu has now supported UEFI booting and Secure Boot for long enough that it is available, and reasonably up to date, on all supported releases. Here is how Secure Boot works.

An overview

I'm including a diagram here; I know it's a little complicated, so I will also explain how things happen (it can be clicked to get to the full size image).


In all cases, booting a system in UEFI mode loads the UEFI firmware, which typically contains pre-loaded keys (at least, on x86). These keys are usually those from Microsoft, so that Windows can load its own bootloader and verify it, as well as those from the computer manufacturer. The firmware doesn't, by itself, know anything special about how to boot the system -- that is informed by NVRAM (or some similar memory that survives a reboot) by way of a few variables: BootOrder, which specifies what order to boot things in, and BootEntry#### variables (hex numbers), each of which contains the path to an EFI image to load, a disk, or some other method of starting the computer (such as booting into the Setup tool for that firmware). If no BootEntry variable listed in BootOrder gets the system booting, then nothing happens. Systems will usually at least include a path to a disk as a permanent or default BootEntry; shim relies on that, or on an entry created by the distro, to be loaded in the first place.

Once we actually find shim to boot, it will try to validate the signature of the next piece in the puzzle: grub2, MokManager, or fallback, depending on the state of shim's own variables in NVRAM; more on this later.

In the usual scenario, shim will validate the grub2 image successfully, then grub2 itself will try to load the kernel or chainload another EFI binary, after attempting to validate the signatures on these images by way of asking shim to check the signature.

Shim

Shim is just a very simple layer that holds on to keys outside of those installed by default on the system (since those normally can't be changed outside of Setup Mode, and doing so requires a few steps). It knows how to load grub2 in the normal case; how to load MokManager if policy changes need to be applied (such as disabling signature validation or adding new keys); and how to load the fallback binary, which can re-create BootEntry variables in case the firmware isn't able to handle them. I will expand on MokManager and fallback in a future blog post.

Your diagram says shim is signed by Microsoft, what's up with that?

Indeed, shim is an EFI binary that is signed by Microsoft as we ship it in Ubuntu. Other distributions do the same. This is required because the firmware on most systems already contains Microsoft certificates (pre-loaded in the factory), and it would be impractical to have a different shim for each manufacturer of hardware. All EFI binaries can easily be re-signed anyway; we just do things like this to make it as easy as possible for the largest number of people.

One thing this means is that uploads of shim require a lot of effort and testing. Fortunately, since it is used by other distributions too, it is a well-tested piece of code. There is even now a community process to handle review of submissions for signature by Microsoft, in an effort to catch anything outlandish as quickly and as early as possible.

Why reboot once a policy change is made or boot entries are rebuilt?

All of this happens through changes in firmware variables. Rebooting makes sure we can properly take into account changes in the firmware variables, and possibly carry on with other "backlogged" actions that need to happen (for instance, rebuilding BootEntry variables first, and then loading MokManager to add a new signing key before we can load a new grub2 image you signed yourself).

Grub2

grub2 is not a new piece of the boot process in any way; it's been around for a long while. The difference from booting in BIOS mode is that in UEFI we install a UEFI binary version of grub2. The software is the same, just packaged slightly differently (I may outline the UEFI binary format at some point in the future). It also goes through some code paths that are specific to UEFI, such as checking whether we've booted through shim, and if so, asking shim to validate signatures. If not, we can still validate signatures, but we have to do so using the UEFI protocol itself, which only accepts signatures by keys included in the firmware, as explained earlier -- mostly just the Microsoft signatures.

grub2 in UEFI otherwise works just like it would elsewhere: it tries to find its grub.cfg configuration file, and follows its instructions to boot the kernel and load the initramfs.

When Secure Boot is enabled, loading the kernel normally requires that the kernel itself is signed. The kernels we install in Ubuntu are signed by Canonical, just like grub2 is, and shim knows about the signing key and can validate these signatures.

At the time of this writing, if the kernel isn't signed or is signed by a key that isn't known, grub2 will fall back to loading the kernel as a normal (unsigned) binary, outside of BootServices (a special mode we're in while booting the system; it is normally exited by the kernel early on as it loads). Exiting BootServices means some special features of the firmware are no longer available to anything that runs afterwards, so that while things may have been loaded in UEFI mode, they will not have access to everything in firmware. If the kernel is signed correctly, then grub2 leaves the ExitBootServices call to be done by the kernel.

Very soon, we will stop allowing unsigned kernels (or kernels signed by unknown keys) to be loaded in Ubuntu. This is work in progress. The change will not affect most users, only those who build their own kernels. They will still be able to load their kernels by making sure they are signed by some key (such as their own; I will cover signing things in my next blog entry), and by importing that key into shim (a step you only need to do once).

The kernel

With UEFI Secure Boot, the kernel enforces that loaded modules are properly signed. This means that those who need to build their own custom modules, or who use DKMS modules (virtualbox, r8168, bbswitch, etc.), need to take extra steps to let the modules load properly.

In order to make this as easy as possible for people, for now we've opted to let users disable Secure Boot validation in shim via a semi-automatic process. Shim is still being verified by the system firmware, but any piece following it that asks shim to validate something will get an affirmative response (ie. things are valid, even if not signed or signed by an unknown key). grub2 will happily load your kernel, and your kernel will be happy to load custom modules. This is obviously not a perfectly secure solution, more of a temporary measure to allow things to carry on as they did before. In the future, we'll replace this with a wizard-type tool to let users sign their own modules easily. For now, signature of binaries and modules is a manual process (as above, I will expand on it in a future blog entry).

Shim validation

To toggle shim validation, if you were using DKMS packages and feel you'd really prefer to have shim validate everything (but be aware that if your system requires these drivers, they will not load, and your system may be unusable, or at least whatever needs that driver will not work):
sudo update-secureboot-policy --enable
If nothing happens, it's because you already have shim validation enabled: nothing has required that it be disabled. If things aren't as they should be (for instance, Secure Boot is not enabled on the system), the command will tell you.
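If you're not sure where your system stands to begin with, mokutil can report whether the firmware booted with Secure Boot enabled (a quick sketch):

# Report the current Secure Boot state as seen by the firmware
mokutil --sb-state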

And although we certainly don't recommend it, you can disable shim validation yourself with much the same command (see --help). There is an example of use of update-secureboot-policy here.