
Wednesday, 16 May 2018

Building a local testing lab with Ubuntu, MAAS and netplan

Overview

I'm presenting here the technical aspects of setting up a small-scale testing lab in my basement, using as little hardware as possible and keeping costs to a minimum. Systems needed to be mobile where possible, easy to replace, and flexible enough to support various testing scenarios. I may wish to bring part of this network with me on short trips to give a talk, for example.

One of the core aspects of this lab is its use of the network. I have former experience with Cisco hardware, so I picked some relatively cheap devices off eBay: a decent layer 3 switch (Cisco C3750, 24 ports, with PoE support in case I'd want to start using that), and a small Cisco ASA 5505 to act as a router. The router's configuration is basic, just enough to make sure this lab can be isolated behind a firewall, and that the router has an IP on all networks. The switch's config is even simpler, and consists of setting up VLANs for each segment of the lab (different networks for different things). It connects infrastructure (the MAAS server, and other systems that just need to always be up) via 802.1q trunks; the servers are configured with IPs on each appropriate VLAN. VLAN 1 is my "normal" home network, so that things will work correctly even without VLAN support (which means VLAN 1 is set as the native VLAN and is untagged wherever appropriate). VLAN 10 is "staging", for use with my own custom boot server. VLAN 15 is "sandbox", for use with MAAS. The switch is only powered on when necessary, to save on electricity costs and to avoid hearing its whine (since I work in the same room); this means it is usually powered off, as the ASA already provides many ethernet ports. The telco rack in use was salvaged, and so were most brackets, except for the specialized bracket for the ASA, which was bought separately. The total cost for this setup is estimated at about $500, since everything came from cheap eBay listings or salvaged, reused equipment.

The Cisco hardware was specifically selected because I had prior experience with it, so I could make sure the features I wanted were supported: VLANs, basic routing, and logs I can make sense of. Any hardware would do -- VLANs aren't absolutely required, but given enough ports on a single switch, they avoid the need for multiple switches.

My main DNS / DHCP / boot server is a Raspberry Pi 2. It serves both the home network and the staging network. DNS is set up such that the home network can resolve any names on any of the networks, using home.example.com, staging.example.com, or even maas.example.com as a domain suffix appended to the name of the system. Name resolution for the maas.example.com domain is forwarded to the MAAS server. More on all of this later.

The MAAS server has been set up on an old Thinkpad X230 (my former work laptop); I've been routinely using it (and reinstalling it) for various tests, but that meant reinstalling often, possibly conflicting with other projects if I tried to test more than one thing at a time. It was repurposed to just run Ubuntu 18.04, with a MAAS region and rack controller installed, along with libvirt (qemu) available over the network to remotely start virtual machines. It is connected to both VLAN 10 and VLAN 15.

Additional testing hardware can be attached to either VLAN 10 or VLAN 15 as appropriate -- the C3750 is configured so "top" ports are in VLAN 10, and "bottom" ports are in VLAN 15, for convenience. The first four ports are configured as trunk ports if necessary. I do use a Dell Vostro V130 and a generic Acer Aspire laptop for testing "on hardware". They are connected to the switch only when needed.

Finally, "clients" for the lab may be connected anywhere (but are likely to be on the "home" network). They are able to reach the MAAS web UI directly, or can use MAAS CLI or any other features to deploy systems from the MAAS servers' libvirt installation.

Setting up the network hardware

I will avoid going into the details of the Cisco hardware too much; the configuration is specific to this hardware. The ASA has a restrictive firewall that blocks off most things, and allows SSH and HTTP access. Things that need to access the internet go through the MAAS internal proxy.

For simplicity, the ASA is always .1 in any subnet, and the switch is .2 where required (it was also made accessible over a serial cable from the MAAS server). The Raspberry Pi is always .5, and the MAAS server is always .25. DHCP ranges were designed to reserve anything .25 and below for static assignments on the staging and sandbox networks; since I use a /23 subnet for home, half of it is for static assignments and the other half is for DHCP.

MAAS server hardware setup

Netplan is used to configure the network on Ubuntu systems. The MAAS server's configuration looks like this:

network:
    ethernets:
        enp0s25:
            addresses: []
            dhcp4: true
            optional: true
    bridges:
        maasbr0:
            addresses: [ 10.3.99.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan15 ]
        staging:
            addresses: [ 10.3.98.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan10 ]
    vlans:
        vlan15:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 15
            link: enp0s25
        vlan10:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 10
            link: enp0s25
    version: 2
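Once that file is saved under /etc/netplan/, the configuration can be generated and applied in place; a minimal sketch:

sudo netplan generate
sudo netplan apply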
Both VLANs are behind bridges so as to allow placing virtual machines on any network. Additional configuration files were added to define these bridges for libvirt (/etc/libvirt/qemu/networks/maasbr0.xml):
<network>
  <name>maasbr0</name>
  <forward mode="bridge"/>
  <bridge name="maasbr0"/>
</network>
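If libvirt does not pick the file up on its own, the network can be defined and started explicitly with virsh; a minimal sketch, assuming the XML was saved at the path above:

virsh net-define /etc/libvirt/qemu/networks/maasbr0.xml
virsh net-autostart maasbr0
virsh net-start maasbr0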
Libvirt also needs to be accessible from the network, so that MAAS can drive it using the "pod" feature. Uncomment "listen_tcp = 1", and set authentication as you see fit, in /etc/libvirt/libvirtd.conf. Also set the following in /etc/default/libvirtd, then restart the libvirtd service:

libvirtd_opts="-l"
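Once it has restarted, a quick sanity check from another machine is to list domains over TCP; the address below assumes the MAAS server's maasbr0 IP from the netplan configuration above:

virsh -c qemu+tcp://10.3.99.25/system list --all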


dnsmasq server

The Raspberry Pi has a similar netplan config, but sets up static addresses on all interfaces (since it is the DHCP server). Here, dnsmasq is used to provide DNS, DHCP, and TFTP. The configuration is split across multiple files, but here are some of the important parts:
dhcp-leasefile=/depot/dnsmasq/dnsmasq.leases
dhcp-hostsdir=/depot/dnsmasq/reservations
dhcp-authoritative
dhcp-fqdn
# copied from maas, specify boot files per-arch.
dhcp-boot=tag:x86_64-efi,bootx64.efi
dhcp-boot=tag:i386-pc,pxelinux
dhcp-match=set:i386-pc, option:client-arch, 0 #x86-32
dhcp-match=set:x86_64-efi, option:client-arch, 7 #EFI x86-64
# pass search domains everywhere, it's easier to type short names
dhcp-option=119,home.example.com,staging.example.com,maas.example.com
domain=example.com
no-hosts
addn-hosts=/depot/dnsmasq/dns/
domain-needed
expand-hosts
no-resolv
# home network
domain=home.example.com,10.3.0.0/23
auth-zone=home.example.com,10.3.0.0/23
dhcp-range=set:home,10.3.1.50,10.3.1.250,255.255.254.0,8h
# specify the default gw / next router
dhcp-option=tag:home,3,10.3.0.1
# define the tftp server
dhcp-option=tag:home,66,10.3.0.5
# staging is configured as above, but on 10.3.98.0/24.
# maas.example.com: "isolated" maas network.
# send all DNS requests for X.maas.example.com to 10.3.99.25 (maas server)
server=/maas.example.com/10.3.99.25
# very basic tftp config
enable-tftp
tftp-root=/depot/tftp
tftp-no-fail
# set some "upstream" nameservers for general name resolution.
server=8.8.8.8
server=8.8.4.4
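After changing any of these files, dnsmasq can check the configuration before being restarted; a small sketch, assuming the files live in the locations dnsmasq reads at startup:

dnsmasq --test
sudo systemctl restart dnsmasq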


DHCP reservations (to avoid IPs changing across reboots for some systems I know I'll want to reach regularly) are kept in /depot/dnsmasq/reservations (as per the above), and look like this:

de:ad:be:ef:ca:fe,10.3.0.21

I put one reservation per file, with meaningful filenames. This helps with debugging and making changes when network cards are swapped, etc. The names used for the files do not match DNS names; instead they are a short description of the device (such as "thinkpad-x230"), since I may want to rename things later.

Similarly, files in /depot/dnsmasq/dns have names describing the hardware, but then contain entries in hosts file form:

10.3.0.21 izanagi

Again, this is used so any rename of a device only requires changing the content of a single file in /depot/dnsmasq/dns, rather than also requiring renaming other files, or matching MAC addresses to make sure the right change is made.


Installing MAAS

At this point, the configuration for the networking should already be completed, and libvirt should be ready and accessible from the network.

The MAAS installation process is very straightforward. Simply install the maas package, which will pull in maas-rack-controller and maas-region-controller.
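On Ubuntu 18.04 that boils down to something like the following; the admin user name and email are placeholders:

sudo apt install maas
sudo maas createadmin --username admin --email admin@example.com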

Once the configuration is complete, you can log in to the web interface. Use it to make sure, under Subnets, that only the MAAS-driven VLAN has DHCP enabled. To enable or disable DHCP, click the link in the VLAN column, and use the "Take action" menu to provide or disable DHCP.

This is necessary if you do not want MAAS to fully manage the network and provide DNS and DHCP for all systems. In my case, I am leaving MAAS in its own isolated network, since I keep the server offline when I do not need it (and the home network needs to keep working while I'm away).

Some extra modifications were made to the stock MAAS configuration to change the behavior of deployed systems. For example, I often test packages in -proposed, so it is convenient to have that pocket enabled by default, but pinned so that its packages are not installed accidentally. Given that I also do netplan development and might try things that would break network connectivity, I also make sure there is a static password for the 'ubuntu' user, and that I have my own account created (again, with a static, known, and stupidly simple password) so I can connect to the deployed systems on their console. I have added the following to /etc/maas/preseed/curtin_userdata:


late_commands:
[...]
  pinning_00: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Package: *' >> /etc/apt/preferences.d/proposed"]
  pinning_01: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin: release a={{release}}-proposed' >> /etc/apt/preferences.d/proposed"]
  pinning_02: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/proposed"]
apt:
  sources:
    proposed.list:
      source: deb $MIRROR {{release}}-proposed main universe
write_files:
  userconfig:
    path: /etc/cloud/cloud.cfg.d/99-users.cfg
    content: |
      system_info:
        default_user:
          lock_passwd: False
          plain_text_passwd: [REDACTED]
      users:
        - default
        - name: mtrudel
          groups: sudo
          gecos: Matt
          shell: /bin/bash
          lock-passwd: False
          passwd: [REDACTED]


The pinning_ entries are simply added to the end of the "late_commands" section.

For the libvirt instance, you will need to add it to MAAS using the maas CLI tool. For this, you will need to get your MAAS API key from the web UI (click your username, then look under MAAS keys), and run the following commands:

maas login local   http://localhost:5240/MAAS/  [your MAAS API key]
maas local pods create type=virsh power_address="qemu+tcp://127.0.1.1/system"

The pod will be given a name automatically; you'll then be able to use the web interface to "compose" new machines and control them via MAAS. If you want to remotely use the systems' Spice graphical console, you may need to change settings for the VM to allow Spice connections on all interfaces, and power it off and on again.


Setting up the client

Deployed hosts are now reachable normally over SSH by using their fully-qualified name, and specifying the ubuntu user (or another user you already configured):

ssh ubuntu@vocal-toad.maas.example.com

There is one inconvenience with using MAAS to control virtual machines like this: they are easy to reinstall, so their host keys will change frequently if you access them via SSH. There's a way around that, using a specially crafted ssh_config (~/.ssh/config). Here are the relevant parts of the configuration file I use:

CanonicalDomains home.example.com
CanonicalizeHostname yes
CanonicalizeFallbackLocal no
HashKnownHosts no
UseRoaming no
# canonicalize* options seem to break github for some reason
# I haven't spent much time looking into it, so let's make sure it will go through the
# DNS resolution logic in SSH correctly.
Host github.com
  Hostname github.com.
Host *.maas
  Hostname %h.example.com
Host *.staging
  Hostname %h.example.com
Host *.maas.example.com
  User ubuntu
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host *.staging.example.com
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
Host *.lxd
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(lxc list -c s4 $(basename %h .lxd) | grep RUNNING | cut -d' ' -f4) %p
Host *.libvirt
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(virsh domifaddr $(basename %h .libvirt) | grep ipv4 | sed 's/.* //; s,/.*,,') %p

As a bonus, I have included some code that makes it easy to SSH to local libvirt systems or lxd containers.

The net effect is that I avoid the warnings about changed host keys for MAAS-controlled systems and machines in the staging network, but keep getting them for all other systems.

Now, this means that to reach a host on the MAAS network, a client system only needs to use the short name with .maas tacked on:

vocal-toad.maas
And the system will be reachable, without any warning about known host keys (but do note that this is specific to a sandbox environment; you definitely want to see such warnings in a production environment, as they can indicate that the system you are connecting to might not be the one you think).

It's not bad, but the goal would be to use just the short names. I am working around this using a tiny script:

#!/bin/sh
exec ssh "$1.maas"

I saved this as "sandbox" in ~/bin and made it executable.
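In other words, nothing more than:

chmod +x ~/bin/sandbox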

And with this, the lab is ready.

Usage

To connect to a deployed system, one can now do the following:


$ sandbox vocal-toad
Warning: Permanently added 'vocal-toad.maas.example.com,10.3.99.12' (ECDSA) to the list of known hosts.
Welcome to Ubuntu Cosmic Cuttlefish (development branch) (GNU/Linux 4.15.0-21-generic x86_64)
[...]
ubuntu@vocal-toad:~$
ubuntu@vocal-toad:~$ id mtrudel
uid=1000(mtrudel) gid=1000(mtrudel) groups=1000(mtrudel),27(sudo)

Mobility

One important point for me was the mobility of the lab. While some of the network infrastructure must remain in place, I am able to undock the Thinkpad X230 (the MAAS server), and connect it via wireless to an external network. It will continue to "manage" or otherwise control VLAN 15 on the wired interface. In these cases, I bring another small configurable switch: a Cisco Catalyst 2960 (8 ports + 1), which is set up with the VLANs. A client could then be connected directly on VLAN 15 behind the MAAS server, and is free to make use of the MAAS proxy service to reach the internet. This allows me to bring the MAAS server along with all its virtual machines, as well as to be able to deploy new systems by connecting them to the switch. Both systems fit easily in a standard laptop bag along with another laptop (a "client").

All the systems used in the "semi-permanent" form of this lab can easily run on a single home power outlet, so issues are unlikely to arise in mobile form: the smaller switch is rated at 0.5 A, and two laptops do not draw much power.

Next steps

One of the issues that remains with this setup is that it is limited to either starting MAAS images, or starting custom-built images hooked up to the raspberry pi, which makes integrating new images relatively high-effort:
  • Custom (desktop?) images could be loaded into MAAS, to facilitate starting a desktop build.
  • Automate customizing installed packages based on tags applied to the machines.
    • juju would shine there; it can deploy workloads based on available machines in MAAS with the specified tags.
    • Also install a generic system with customized packages, not necessarily single workloads, and/or install extra packages after the initial system deployment.
      • This could be done using chef or puppet, but will require setting up the infrastructure for it.
    • Integrate automatic installation of snaps.
  • Load new images into the raspberry pi automatically for netboot / preseeded installs
    • I have scripts for this, but they will take time to adapt
    • Space on such a device is at a premium, so there must be some culling of old images

Tuesday, 1 August 2017

How to sign things for Secure Boot

Secure Boot signing


The whole concept of Secure Boot requires that there exists a trust chain, from the very first thing loaded by the hardware (the firmware code), all the way through to the last things loaded by the operating system as part of the kernel: the modules. In other words, not just the firmware and bootloader require signatures, the kernel and modules too. People don't generally change firmware or bootloader all that much, but what of rebuilding a kernel or adding extra modules provided by hardware manufacturers?

The Secure Boot story in Ubuntu includes the fact that you might want to build your own kernel (but we do hope you can just use the generic kernel we ship in the archive), and that you may install your own kernel modules. This means signing UEFI binaries and the kernel modules, which can be done with its own set of tools.

But first, more on the trust chain used for Secure Boot.


Certificates in shim


To begin signing things for UEFI Secure Boot, you need to create an X509 certificate that can be imported in firmware; either directly through the manufacturer firmware, or more easily, by way of shim.

Creating a certificate for use in UEFI Secure Boot is relatively simple; openssl can do it by running a few commands. We need to create an SSL certificate for module signing...

First, let's create some config to let openssl know what we want to create (let's call it 'openssl.cnf'):

# This definition stops the following lines choking if HOME isn't
# defined.
HOME                    = .
RANDFILE                = $ENV::HOME/.rnd 
[ req ]
distinguished_name      = req_distinguished_name
x509_extensions         = v3
string_mask             = utf8only
prompt                  = no

[ req_distinguished_name ]
countryName             = CA
stateOrProvinceName     = Quebec
localityName            = Montreal
0.organizationName      = cyphermox
commonName              = Secure Boot Signing
emailAddress            = example@example.com

[ v3 ]
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer
basicConstraints        = critical,CA:FALSE
extendedKeyUsage        = codeSigning,1.3.6.1.4.1.311.10.3.6,1.3.6.1.4.1.2312.16.1.2
nsComment               = "OpenSSL Generated Certificate"
Either update the values under "[ req_distinguished_name ]", or get rid of that section altogether (along with the "distinguished_name" field) and remove the "prompt" field; openssl will then ask you for the values you want to set for the certificate identification.

The identification itself does not matter much, but some of the later values are important: for example, we do want to make sure "1.3.6.1.4.1.2312.16.1.2" is included in extendedKeyUsage, and it is that OID that will tell shim this is meant to be a module signing certificate.

Then, we can start the fun part: creating the private and public keys.

openssl req -config ./openssl.cnf \
        -new -x509 -newkey rsa:2048 \
        -nodes -days 36500 -outform DER \
        -keyout "MOK.priv" \
        -out "MOK.der"
This command will create both the private and public part of the certificate to sign things. You need both files to sign; and just the public part (MOK.der) to enroll the key in shim.


Enrolling the key


Now, let's enroll in shim the key we just created. That makes it accepted as a valid signing key for any module the kernel wants to load, as well as a valid key should you want to build your own bootloader or kernels (provided the key doesn't include the '1.3.6.1.4.1.2312.16.1.2' OID discussed earlier, which restricts it to module signing).

To enroll a key, use the mokutil command:
sudo mokutil --import MOK.der
Follow the prompts to enter a password; it will be used in a minute to confirm that you really do want to enroll the key.

Once this is done, reboot. Just before loading GRUB, shim will show a blue screen (which is actually another piece of the shim project called "MokManager"). Use that screen to select "Enroll MOK" and follow the menus to finish the enrollment process. You can also look at some of the properties of the key you're trying to add, just to make sure it's indeed the right one, using "View key". MokManager will ask for the password we typed in earlier when running mokutil; it will then save the key and reboot again.
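Back in Ubuntu, you can double-check that the enrollment worked; mokutil can report both the full list of enrolled keys and the status of a specific certificate (a quick sketch using the file created above):

mokutil --list-enrolled
mokutil --test-key MOK.der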


Let's sign things


Before we sign, let's make sure the key we added really is seen by the kernel. To do this, we can go look at /proc/keys:

$ sudo cat /proc/keys
0020f22a I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
0022a089 I------     2 perm 1f0b0000     0     0 keyring   .builtin_trusted_keys: 1
003462c9 I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
00709f1c I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
00f488cc I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
[...]
1dcb85e2 I------     1 perm 1f030000     0     0 asymmetri Build time autogenerated kernel key: eae8fa5ee6c91603c031c81226b2df4b135df7d2: X509.rsa 135df7d2 []
[...]

Just make sure a key exists there with the attributes (commonName, etc.) you entered earlier.

To sign kernel modules, we can use the kmodsign command:
kmodsign sha512 MOK.priv MOK.der module.ko
module.ko should be the file name of a kernel module file you want to sign. The signature will be appended to it by kmodsign, but if you would rather keep the signature separate and concatenate it to the module yourself, you can do that too (see 'kmodsign --help').

You can validate that the module is signed by checking that it includes the string '~Module signature appended~':

$ hexdump -Cv module.ko | tail -n 5
00002c20  10 14 08 cd eb 67 a8 3d  ac 82 e1 1d 46 b5 5c 91  |.....g.=....F.\.|
00002c30  9c cb 47 f7 c9 77 00 00  02 00 00 00 00 00 00 00  |..G..w..........|
00002c40  02 9e 7e 4d 6f 64 75 6c  65 20 73 69 67 6e 61 74  |..~Module signat|
00002c50  75 72 65 20 61 70 70 65  6e 64 65 64 7e 0a        |ure appended~.|
00002c5e

You can also use hexdump this way to check that the signing key is the one you created.


What about kernels and bootloaders?


To sign a custom kernel or any other EFI binary you want to have loaded by shim, you'll need to use a different command: sbsign. Unfortunately, we'll need the certificate in a different format in this case.

Let's convert the certificate we created earlier into PEM:

openssl x509 -in MOK.der -inform DER -outform PEM -out MOK.pem

Now, we can use this to sign our EFI binary:

sbsign --key MOK.priv --cert MOK.pem my_binary.efi --output my_binary.efi.signed
As long as the signing key is enrolled in shim and does not contain the OID from earlier (since that limits the use of the key to kernel module signing), the binary should be loaded just fine by shim.
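If you want to check the signature without rebooting, sbverify (shipped alongside sbsign) can validate the binary against the certificate; a small sketch:

sbverify --cert MOK.pem my_binary.efi.signed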


Doing signatures outside shim


If you don't want to use shim to handle keys (but I do recommend that you use it), you will need to create different certificates; one of them being the PK (Platform Key) for the system, which you can enroll in firmware directly via KeyTool or some firmware tool provided with your system. I will not elaborate on the steps to enroll the keys in firmware, as they tend to vary from system to system, but the main idea is to put the system in Secure Boot "Setup Mode", run KeyTool (which is its own EFI binary you can build yourself and run), and enroll the keys -- first installing the KEK and DB keys, and finishing with the PK. These files need to be available from some FAT partition.

I do have this script to generate the right certificates and files, which I can share (it is itself copied from somewhere I can't remember):

#!/bin/bash
echo -n "Enter a Common Name to embed in the keys: "
read NAME
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME PK/" -keyout PK.key \
        -out PK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME KEK/" -keyout KEK.key \
        -out KEK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME DB/" -keyout DB.key \
        -out DB.crt -days 3650 -nodes -sha256
openssl x509 -in PK.crt -out PK.cer -outform DER
openssl x509 -in KEK.crt -out KEK.cer -outform DER
openssl x509 -in DB.crt -out DB.cer -outform DER
GUID=$(python3 -c 'import uuid; print(str(uuid.uuid1()))')
echo $GUID > myGUID.txt
cert-to-efi-sig-list -g $GUID PK.crt PK.esl
cert-to-efi-sig-list -g $GUID KEK.crt KEK.esl
cert-to-efi-sig-list -g $GUID DB.crt DB.esl
rm -f noPK.esl
touch noPK.esl
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK PK.esl PK.auth
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK noPK.esl noPK.auth
chmod 0600 *.key
echo ""
echo ""
echo "For use with KeyTool, copy the *.auth and *.esl files to a FAT USB"
echo "flash drive or to your EFI System Partition (ESP)."
echo "For use with most UEFIs' built-in key managers, copy the *.cer files."
echo ""
The same logic as earlier applies: sign things using sbsign or kmodsign as required (use the .crt files with sbsign, and .cer files with kmodsign); and as long as the keys are properly enrolled in the firmware or in shim, they will be successfully loaded.


What's coming up for Secure Boot in Ubuntu


Signing things is complex -- you need to create SSL certificates, enroll them in firmware or shim... You need a fair amount of prior knowledge of how Secure Boot works, and of which commands to use. It's rather obvious that this isn't within everybody's reach, and it makes for a somewhat poor experience. For that reason, we're working on making key creation, enrollment and signing easier when installing DKMS modules.

update-secureboot-policy should soon let you generate and enroll a key; and DKMS will be able to sign things by itself using that key.

Wednesday, 24 May 2017

An overview of UEFI Secure Boot on Ubuntu

Secure Boot is here

Ubuntu has now supported UEFI booting and Secure Boot for long enough that it is available, and reasonably up to date, on all supported releases. Here is how Secure Boot works.

An overview

I'm including a diagram here; I know it's a little complicated, so I will also explain how things happen.


In all cases, booting a system in UEFI mode loads UEFI firmware, which typically contains pre-loaded keys (at least, on x86). These keys are usually those from Microsoft, so that Windows can load its own bootloader and verify it, as well as those from the computer manufacturer. The firmware doesn't, by itself, know anything special about how to boot the system -- this is informed by NVRAM (or some similar memory that survives a reboot) by way of a few variables: BootOrder, which specifies what order to boot things in, and BootEntry#### (hex numbers), which contains the path to the EFI image to load, a disk, or some other method of starting the computer (such as booting into the Setup tool for that firmware). If no BootEntry variable listed in BootOrder gets the system booting, then nothing happens. Systems, however, will usually at least include a path to a disk as a permanent or default BootEntry. Shim relies on that, or on a distro, to load in the first place.

Once we actually find shim to boot, it will try to validate the signature of the next piece in the puzzle: grub2, MokManager, or fallback, depending on the state of shim's own variables in NVRAM; more on this later.

In the usual scenario, shim will validate the grub2 image successfully, then grub2 itself will try to load the kernel or chainload another EFI binary, after attempting to validate the signatures on these images by way of asking shim to check the signature.

Shim

Shim is just a very simple layer that holds keys outside of those installed by default on the system (since those normally can't be changed outside of Setup Mode, and require a few steps to do so). It knows how to load grub2 in the normal case, how to load MokManager if policy changes need to be applied (such as disabling signature validation or adding new keys), and how to load the fallback binary, which can re-create BootEntry variables in case the firmware isn't able to handle them. I will expand on MokManager and fallback in a future blog post.

Your diagram says shim is signed by Microsoft, what's up with that?

Indeed, shim is an EFI binary that is signed by Microsoft, as we ship it in Ubuntu. Other distributions do the same. This is required because the firmware on most systems already contains Microsoft certificates (pre-loaded in the factory), and it would be impractical to have a different shim for each hardware manufacturer. All EFI binaries can easily be re-signed anyway; we just do things this way to make it as easy as possible for the largest number of people.

One thing this means is that uploads of shim require a lot of effort and testing. Fortunately, since it is used by other distributions too, it is a well-tested piece of code. There is even now a community process to handle review of submissions for signature by Microsoft, in an effort to catch anything outlandish as quickly and as early as possible.

Why reboot once a policy change is made or boot entries are rebuilt?

All of this happens through changes in firmware variables. Rebooting makes sure we can properly take into account changes in the firmware variables, and possibly carry on with other "backlogged" actions that need to happen (for instance, rebuilding BootEntry variables first, and then loading MokManager to add a new signing key before we can load a new grub2 image you signed yourself).

Grub2

grub2 is not a new piece of the boot process in any way; it's been around for a long while. The difference between booting in BIOS mode and in UEFI is that we install a UEFI binary version of grub2. The software is the same, just packaged slightly differently (I may outline the UEFI binary format at some point in the future). It also goes through some code paths that are specific to UEFI, such as checking whether we booted through shim, and if so, asking it to validate signatures. If not, we can still validate signatures, but we would have to do so using the UEFI protocol itself, which only allows signatures by keys included in the firmware, as explained earlier -- mostly just the Microsoft signatures.

grub2 in UEFI otherwise works just like it would elsewhere: it tries to find its grub.cfg configuration file, and follows its instructions to boot the kernel and load the initramfs.

When Secure Boot is enabled, loading the kernel normally requires that the kernel itself is signed. The kernels we install in Ubuntu are signed by Canonical, just like grub2 is, and shim knows about the signing key and can validate these signatures.

At the time of this writing, if the kernel isn't signed or is signed by an unknown key, grub2 will fall back to loading the kernel as a normal, unsigned binary, outside of BootServices (a special mode we're in while booting the system; normally it's exited by the kernel early on as it loads). Exiting BootServices means some special features of the firmware are no longer available to anything that runs afterwards, so while things may have been loaded in UEFI mode, they will not have access to everything in firmware. If the kernel is signed correctly, then grub2 leaves the ExitBootServices call to be done by the kernel.

Very soon, we will stop allowing unsigned kernels (or kernels signed by unknown keys) to be loaded in Ubuntu. This work is in progress. The change will not affect most users, only those who build their own kernels. They will still be able to load their kernels by making sure they are signed by some key (such as their own; I will cover signing things in my next blog entry), and importing that key into shim (a step you only need to do once).

The kernel

In UEFI, the kernel enforces that modules loaded are properly signed. This means that for those who need to build their own custom modules, or use DKMS modules (virtualbox, r8168, bbswitch, etc.), you need to take more steps to let the modules load properly.

In order to make this as easy as possible for people, for now we've opted to let users disable Secure Boot validation in shim via a semi-automatic process. Shim is still verified by the system firmware, but anything that follows it and asks shim to validate something will get an affirmative response (i.e. things are considered valid, even if unsigned or signed by an unknown key). grub2 will happily load your kernel, and your kernel will be happy to load custom modules. This is obviously not a perfectly secure solution, more of a temporary measure to allow things to carry on as they did before. In the future, we'll replace this with a wizard-type tool to let users sign their own modules easily. For now, signing binaries and modules is a manual process (as above, I will expand on it in a future blog entry).

Shim validation

To toggle shim validation, if you were using DKMS packages and feel you'd really prefer to have shim validate everything (but be aware that if your system requires these drivers, they will not load and your system may be unusable, or at least whatever needs that driver will not work):
sudo update-secureboot-policy --enable
If nothing happens, it's because you already have shim validation enabled: nothing has required that it be disabled. If things aren't as they should be (for instance, Secure Boot is not enabled on the system), the command will tell you.

And although we certainly don't recommend it, you can disable shim validation yourself with much the same command (see --help). There is an example of use of update-secureboot-policy here.

Friday, 5 May 2017

Quick and easy network configuration with Netplan

Earlier this week I uploaded netplan 0.21 in artful, with SRUs in progress for the stable releases. There are still lots of features coming up, but it's also already quite useful. You can already use it to describe typical network configurations on desktop and servers, all the way to interesting, complicated setups like bond over a bridge over multiple VLANs...

Getting started

The simplest netplan configuration might look like this:

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
At boot, netplan will see this configuration (which happens to be installed already on all new systems since 16.10) and generate a single, empty file: /run/NetworkManager/conf.d/10-globally-managed-devices.conf. This tells the system that NetworkManager is the only renderer for network configuration, and will manage all devices by default.

Working from there: a simple server

Let's look at it on a hypothetical web server, such as for my favourite test site: www.perdu.com.

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
This incredibly simple configuration tells the system that the eth0 device is to be brought up using DHCPv4. Netplan also supports DHCPv6, as well as static IPs, setting routes, etc.
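For example, a static address on the same interface might look like this; a sketch with placeholder addresses:

network:
  version: 2
  ethernets:
    eth0:
      addresses: [ 192.0.2.10/24 ]
      gateway4: 192.0.2.1
      nameservers:
        addresses: [ 192.0.2.1 ]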


Building up to something more complex

Let's say I want a team of two NICs, and use them to reach VLAN 108 on my network:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
    eth1:
      mtu: 1280
      dhcp4: no
  bonds:
    bond0:
      interfaces:
      - eth1
      - eth0
      mtu: 9000
  vlans:
    bond0.108:
      link: bond0
      id: 108

I think you can see just how simple it is to configure even pretty complex networks, all in one file. The beauty in it is that you don't need to worry about what will actually set this up for you.

A choice of backends

Currently, netplan supports either NetworkManager or systemd-networkd as a backend. The default is to use systemd-networkd, but given that it does not support wireless networks, we still rely on NetworkManager to do just that.

This is why you don't need to care what supports your config in the end: netplan abstracts that for you. It generates the required config based on the "renderer" property, so that you don't need to know how to define the special device properties in each backend.

As I mentioned previously, we are still hard at work adding more features, but the core is there: netplan can set up bonds, bridges, vlans, standalone network interfaces, and do so for both static or DHCP addresses. It also supports many of the most common bridge and bond parameters used to tweak the precise behaviour of bonded or bridged devices.


Coming up...

I will be adding proper support for setting a "cloned" MAC on a device. I'm already reviewing the code to do this, and ironing out the last issues.

There are also plans for better handling of administrative states for devices, along with fixes for a few bugs that relate to supporting MAAS, where having a simple configuration style really shines.

I'm really excited for where netplan is going. It seems like it has a lot of potential to address some of the current shortcomings in other tools. I'm also really happy to hear of stories of how it is being used in the wild, so if you use it, don't hesitate to let me know about it!

Contributing

All of the work on netplan happens on Launchpad. Its source code is at https://code.launchpad.net/netplan; we always welcome new contributions.

Friday, 15 January 2016

In full tinfoil hat mode: Using GPG with smartcards

Breaking OPSEC for a bit to write a how-to on using GPG keys with smartcards...

I've thought about experimenting with smartcards for a while. Turns out that my Thinkpad has a built-in smartcard reader, but most of my other systems don't. Also, I'd like to use a smartcard to protect my SSH keys, some of which I may use on systems that I do not fully control (ie. at the university to push code to Github or Bitbucket), or to get to my server. Smartcard readers are great, but they're not much fun to add to a list of stuff to carry everywhere.

There's an alternate option: the Yubikey. Yubico appears to have made a version 4 of the Yubikey which has CCID (smartcard magic), U2F (2-factor for GitHub and Google, on Chrome), and their usual OTP token, all on the same tiny USB key. What's more, it is documented as supporting 4096 bit RSA keys, and includes some ECC support (more on this later).

Setting up GPG keys for use with smartcards is simple. You have the choice of either creating your keys locally and moving them to the smartcard, or generating them on the smartcard right away. In order to have a backup of my full key available in a secure location, I've opted to generate the keys off the card and transfer them.

For this, you will need one (or two) Yubikey 4 (or Yubikey 4 Nano, or if you don't mind being limited to 2048 bit keys, the Yubikey NEO, which can also do NFC), some backup media of your choice, and apparently, at least the following packages:

gnupg2 gnupg-agent libpth20 libccid pcscd scdaemon libksba8 opensc

You should do all of this on a trusted system, not connected to any network.

First, set up gnupg2 to a reasonable level of security. Edit ~/.gnupg/gpg.conf to pick the options you want; I've based my config on Jeffrey Clement's blog entry on the subject:

#default-key AABBCC90DEADBEEF
keyserver hkp://keyserver.ubuntu.com
no-emit-version
no-comments
keyid-format 0xlong
with-fingerprint
use-agent
personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
You'll want to replace default-key later with the key you've created, and uncomment the line.

The downside to all of this is that you'll need to use gpg2 in all cases rather than gpg, which is still the default on Ubuntu and Debian. gpg2 so far seems to work just fine for every use I've had (including debsign, after setting DEBSIGN_PROGRAM=gpg2 in ~/.devscripts).
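That last bit is just a one-liner:

echo 'DEBSIGN_PROGRAM=gpg2' >> ~/.devscripts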

You can now generate your master key:
gpg2 --gen-key

Then edit the key to add new UIDs (identities) and subkeys, which will each have their own different capabilities:

gpg2 --expert --edit-key 0xAABBCC90DEADBEEF
Best is to follow jclement's blog entry for this; there is no point in reiterating all of it. There's also a pretty complete guide from The Linux Foundation IT, though it seems to include a lot of steps that do not appear to be required on my system, on xenial.

Add the subkeys. You should have one for encryption, one for signing, and one for authentication. That works out pretty well, since there are three slots on the Yubikey, one for each of these capabilities.

If you also want your master key on a smartcard, you'll probably need a second Yubikey (that's why I wrote "two" earlier), which would only be used to sign other people's keys, extend expiration dates, generate new subkeys, etc. That one should be kept in a very secure location.

This is a great point to backup all the keys you've just created:

gpg2 -a --export-secret-keys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.master.key
gpg2 -a --export-secret-subkeys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.sub.key
gpg2 -a --export 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.pub

Next step is to configure the smartcard/Yubikey to add your name, a URL for the public key, set the PINs, etc. Use the following command for this:
gpg2 --card-edit

Finally, go back to editing your GPG key:
gpg2 --expert --edit-key 0xAABBCC90DEADBEEF

From this point you can use toggle to select each subkey (using key #), move them to the smartcard (keytocard), and deselect them (key #). To move the master key to the card, "toggle" out of toggle mode and then back in, then immediately run 'keytocard'. GPG will ask if you're certain. There is no way to get a key back out of the card; if you want a local copy, you needed to make a backup first.

Now's probably a great time to copy your key to a keyserver, so that people may eventually start to use it to send you encrypted mail, etc.

After transferring the keys, you may want to make a "second backup", which would only contain the "clues" for GPG to know on which smartcard to find the private part of your keys. This will be useful if you need to use the keys on another system.

Another option is to use the public portion of your key (saved somewhere, like on a keyserver), then have gpg2 discover that it's on a smartcard using:

gpg2 --card-status

Unfortunately, if you use separate smartcards, it appears to pick up either only the master key or only the subkeys, not both. This may be a blessing in disguise, in that you'd still only use the master key on an offline, very secure system, and only the subkeys in your typical daily use scenario.

Don't forget to generate a revocation certificate. This is essential if you ever lose your key, if it's compromised, or you're ever in a situation where you want to let the world know quickly not to use your key anymore:

gpg2 --gen-revoke 0xAABBCC90DEADBEEF
Store that data in a safe place.

Finally, more on backing up the GPG keys. It could be argued that keeping your master key on a smartcard might be a bad idea. After all, if the smartcard is lost, while it would be difficult to get the key out of the smartcard, you would probably want to treat it as compromised and get the key revoked. The same applies to keys kept on USB drives or on CD. A strong passphrase will help, but you still lost control of your key and at that point, no longer know whether it is still safe.

What's more, USB drives and CDs tend to eventually fail. CDs rot after a number of years, and USB drives just seem to not want to work correctly when you really need them. Paper is another option for backing up your keys, since there are ways (paperkey, for instance) to represent the data in a way that it could either be retyped or scanned back into digital data to be retrieved. Further securing a backup key could involve using gfshare to split it into multiple bits, in the hope that while one of its locations could be compromised (lost), you'll still have some of the others sufficient to reconstruct the key.

With the subkeys on the Yubikey, and provided gpg2 --card-status reports your card as detected, if you have the gpg-agent running with SSH support enabled you should be able to just run:

ssh-add -l

And have it list your card serial number. You can then use ssh-add -L to get the public key to use to add to authorized_keys files to use your authentication GPG subkey as a SSH key. If it doesn't work, make sure the gpg-agent is running and that ssh-add uses the right socket, and make sure pcscd isn't interfering (it seemed to get stuck in a weird state, and not shutting down automatically as it should after dealing with a request).
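For reference, enabling the SSH support and pointing ssh-add at the agent's socket looks roughly like this on recent GnuPG versions; treat the exact socket handling as an assumption, since it varies between versions:

echo 'enable-ssh-support' >> ~/.gnupg/gpg-agent.conf
gpgconf --kill gpg-agent
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
ssh-add -l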

Whenever you try to use one of the subkeys (or the master key), rather than being asked for the passphrase for the key (which you should have set as a very difficult, absolutely unguessable string that you and only you could remember and think of), you will be asked to enter the User PIN set for the smartcard.

You've achieved proper two-factor authentication.

A note on ECC on the Yubikey: according to the marketing documentation, the Yubikey knows about ECC p256 and ECC p384. Unfortunately, it looks like safecurves.cr.yp.to considers these unsafe, since they do not meet all the SafeCurves requirements. I'm not especially versed in cryptography, so this means I'll read up more on the subject, and stay away from the ECC implementation on the Yubikey 4 for now. However, it doesn't seem, at first glance, that this ECC implementation is meant for GPG at all. The Yubikey also has PIV magic which would allow it to be used as a pure SSH smartcard (rather than using a GPG authentication subkey for SSH), with a private certificate being generated by the card. These certificates can be created using RSA or ECC. I tried to play with it a bit (using RSA), following the SSH with PIV and PKCS11 document on developers.yubico.com, but I didn't manage to make it work. It looks like the GPG functions might interfere with PIV in some way, or I could just not be handling ssh-agent the right way. I'm happy to be shown how to use this correctly.




Sunday, 29 March 2015

Preseeding installations

In early February, I completed a move from Canonical's Phonedations team to the Foundations team. Part of this new work means debugging a lot of different failure cases in the installer, grub, and other early boot or low-level software, some of which require careful reproduction steps and probably quite a few install runs in VMs or on hardware.

Given the number of installations I do, I've started to keep preseed files around; these are the text files used to configure automatic installations. I've made them available at http://people.canonical.com/~mtrudel/preseed/ so that they can be reused as necessary. Most of these preseed files make heavy use of the network to get the installation data and packages from the web, so they will need to be tweaked for use in an isolated network. They are annotated enough that it should be possible for anyone to improve on them to suit their own needs. I will add to these files as I run across things to test and automate. I hope we can use some of them soon in new automated QA tests where appropriate, so that they can help catch regressions.

For those not familiar with preseeding, these files can be used and referred to in the installation command-line when starting from a network PXE boot or a CDROM or pretty much any other installation medium. They are useful to tell the installer how you want the installation to be done without having to answer all of the individual questions one by one in the forms in ubiquity or debian-installer. The installer will read the preseed file and use these answers without showing the prompts. This also means some of the files I make available should not be used lightly, as they will happily wipe disks without asking. You've been warned :)

To use this, you'll want to specify "preseed/file=/path/to/file" (or just file=) for a file directly accessible as a file system or through TFTP, or "preseed/url=http://URI/to/file" (or just url=) if it's available over HTTP. On d-i installs, this means you may also need to add "auto=true priority=critical" to avoid having to fill in language settings and the like (since the preseeds are typically only read after language, country, and network have been configured); on ubiquity installs (for example, using a CD), you'll want to add 'only-ubiquity automatic-ubiquity' to the kernel command-line instead, again to keep the automated, minimal look and feel.
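Putting it together, a d-i network install might use a kernel command line along these lines; the file name at the end is a placeholder for one of the preseed files above:

auto=true priority=critical url=http://people.canonical.com/~mtrudel/preseed/your-preseed-file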

I plan on writing another entry soon on how to debug early boot issues in VMs or hardware using serial. Stay tuned.

Tuesday, 21 January 2014

Call for testing: urfkill / Getting flight mode to rock on Ubuntu and Ubuntu Touch

Last month, I blogged about urfkill, and what it's meant to be used for.

The truth is, flight mode and proper killswitch (read: disabling radios on devices) handling is something that needs to happen on any device that deems itself "mobile". As such, it's one thing we need to look into for Ubuntu Touch.

I spent the last month or so working on improving urfkill. I've implemented better logging, a way to get debugging logs, flight mode (with patches from my friend Tony Espy), persistence, ...

At this point, urfkill seems to be in the proper state to make it, with the latest changes from the upstream git repository, into the distro. There is no formal release yet, but this is likely to happen very soon. So, I uploaded a git snapshot from the urfkill upstream repository into Trusty. It's now time to ask people to try it out, see how well it works on their systems, and just generally get to know how solid it is, and whether it's time to enable it on the desktop.

In time, it would be nice to replace the current killswitch persistence implementation (saving and restoring the state of the "soft" killswitches), currently handled by two upstart jobs — rfkill-store and rfkill-restore — with urfkill, as a first step, for the 14.04 release (and to handle flight mode on Touch, of course). In the end, my goal would be to achieve convergence on this particular aspect of the operating system sooner rather than later, since it's a relatively small part of the overall communications/networking picture.

So I call on everyone running Trusty to try to install the urfkill package, and file bugs or otherwise get me feedback on the software. :)

Friday, 13 December 2013

urfkill : a daemon to centrally control RF killswitches

Here's another project of the u-daemon variety. The latest addition to upower, udev, etc. Meet urfkill.

urfkill is meant to be a daemon that centralizes killswitch handling. Rather than having all kinds of different applications and daemons handle Wi-Fi, Bluetooth, WWAN and whatnot separately, and potentially fighting over them, you can have just one system that tracks the states and makes it possible to switch one type of killswitch all at once, or turn everything off should you so desire...

One reason I've taken an interest in urfkill in Ubuntu is that as we build a phone, we have to keep thinking about how users of these devices will be mobile. That's even more the case for a phone or tablet than for a laptop: on a laptop, you may have to think of WiFi and Bluetooth, but you're just about as likely to have your laptop off or not have it with you at all; whereas phones and tablets have become ubiquitous in our way of life.

Like anyone, when thinking mobile I first think of walking around, driving, or other methods of travel. Granted, nobody needs to turn off Wi-Fi when getting in their car, but what about on planes?

This is the first thing everyone brings up when talking about killswitches: planes. Alright, you really do need to turn the device off at take-off and landing, but some airlines now allow wifi to be on and offer in-flight service. They still require you to keep cellular and bluetooth off. Also, while I sometimes do take my laptop out of my bag on long flights, it's just cramped. Space is at a premium on a flight (hey, I fly economy...), you'll likely want to have a drink, people beside you may need to get up, spillage could occur if there is turbulence...

I don't really enjoy using my laptop on a flight, even though it's quite small. It's just so much trouble and not very comfortable.

However, I do love to watch saved movies, listen to music, and play games on a tablet. That tablet will most likely need to have its radios turned off. My phone will typically just stay off and stowed far enough away, since I don't change SIM cards until I can do so safely without risking losing the thing.

But then, one can also think of how you should avoid using transmitting equipment in a hospital. Hospitals have rules about radios similar to those on planes, to avoid interference with cardiac stimulators, MRI equipment, etc.

Having all kinds of different applications handle each type of killswitch separately is quite risky and complicated. How can you be certain that things have been turned off? How do you check in the UI whether that's the case? Can you see it quickly by scanning the screen?

What about the actual process of switching things off? Do you need to go through three different interfaces to toggle everything? What do you need to do if you don't have a physical switch to use?

What about persistence after a reboot?

urfkill is meant to, in time, address all such questions. At the moment, it still needs a lot of work though.

I've spent the last day fixing bugs I could find while testing urfkill on my laptop, as well as porting it to logind (still in progress). In time, we should be able to use it efficiently in Ubuntu to handle all the killswitches. With some more work, we will also be able to use it to manage the modem and other systems on Touch.

For the time being, urfkill is available in the archive, for those who want to start experimenting with it.

Tuesday, 29 October 2013

Hacking with a Samsung ARM Chromebook on Trusty

So I decided it was about time to update / reinstall my Samsung Chromebook (the ARM one...) to Trusty, or at least to use Saucy. Turns out it's not that simple.

First, you need to know where to get the right stuff. I installed straight on the device, so chrUbuntu was the obvious choice. It's a pretty nice script that allows you to do just about anything necessary.

1) Bring your Chromebook to developer mode.

I'm not going to give the details. It's findable on the Internet, and unsafe enough that you should only do this if you know what you're doing... That counts double for running Trusty on the Samsung Chromebook.

From there, get into crosh (Ctrl+Alt+T), in shell mode (type shell at the crosh prompt).

2) Download the script:

cd ~/Downloads
wget http://goo.gl/s9ryd

3) Run the script:

sudo bash s9ryd xubuntu-desktop dev

This will do the gory install step: partition your device, format the new partition, download the Ubuntu core tarball, and from there install the metapackage you've asked for as the first argument.

Be aware that if you have never repartitioned the device, you'll likely notice the system rebooting during the process -- if that happens, just re-run the same command to pick things back up where they left off. It's clear when the process is done and the system installed -- the script requires you to press Enter to reboot.

This was where things got fun.

Turns out my device booted fine into Trusty, but it would only show a black screen with the mouse cursor. If you moved the mouse, you could see the cursor changing but still nothing else. Switching to another VT (Ctrl-Alt-arrow (F1) or whatnot) would work to get you a text-mode login, but only if you switched early enough while X was getting ready to load... otherwise, you'd just get a pretty garbled display.

I hacked at the whole thing for a good while. I already knew xf86-video-armsoc was involved in ChromeOS at some point, so I tried to install that.

Still no love. Tried to copy the libs from ChromeOS to the device, in case it was some libmali or EGL/GLES issue... Still nothing better.

I even touched /etc/X11/xorg.conf with some black magic, looking up the details using w3m in a text console...

Turns out the problem was with xf86-video-armsoc itself. I initially clued in when I compared the upload dates of the X packages and of xf86-video-armsoc -- something didn't seem quite right: X was a bit newer. I knew the video ABI could be an issue in some cases; but after more careful investigation, that was probably fine too -- armsoc properly depends on -abi-14.
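
If you want to double-check that kind of thing yourself, apt can show both sides of the equation (package names here are assumptions based on the archive and the PPA mentioned below, so adjust as needed):

# which X video ABI does the driver depend on?
apt-cache depends xserver-xorg-video-armsoc | grep xorg-video-abi
# which ABIs does the installed X server provide?
apt-cache show xserver-xorg-core | grep ^Provides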

After much more trial and error, I updated xf86-video-armsoc to 0.6.0 from the Linaro git tree, reverted one commit changing flags, and it's now mostly working. X runs, I get lightdm, I can run apps -- "compositing" in Xubuntu works too, giving transparency and gradients... all with only minimal display corruption of the window decorations.

So the bottom line is: if you want to run Trusty on your Chromebook, run into similar black-screen issues, and feel daring, feel free to try my newly-built xf86-video-armsoc package from my PPA:

https://launchpad.net/~mathieu-tl/+archive/ppa/+sourcepub/3627079/+listing-archive-extra

It's simple; once you're in a text console on the machine (login as user/user):

nmcli dev wifi connect <your wifi network> password <your wifi password>
sudo add-apt-repository ppa:mathieu-tl/ppa
sudo apt-get update
sudo apt-get install xserver-xorg-video-armsoc-exynos
sudo reboot

These updated packages, or at least some kind of permanent fix, should make it into Trusty soon. Stay tuned :)

Saturday, 4 May 2013

Protocol stacks on Ubuntu Touch

We're working really hard to get the Ubuntu Touch images to a state where the UI is polished, the experience rocks, and the whole thing is a truly amazing product that reflects the core principles of Ubuntu.

One of the aspects of this work is to get to a really good story with protocol stacks in general -- that is to say, bluetooth, WiFi, and the fun things behind connectivity on a mobile device. How can I get my files on the device? How can I copy the pictures that I've just taken back to my computer?

On the way back home from Oakland I had quite a lot of time to reflect on what we've done so far. I've seen really cool demos of things you can get done on Touch and of how things are going to look in the near future. It makes me very proud to be part of bringing Ubuntu to a large number of people, through a solid desktop system but also stellar mobile device support.

So, we did get Bluetooth working pretty well on Touch so far. It seems the biggest remaining issue is really just UI, but fortunately people are already hard at work fixing that, too. Keyboards can be discovered, and so can mice (I've uploaded a video to YouTube about that before). Bluetooth headsets should follow soon, but when I last tried I ran into issues with PulseAudio on the Nexus 7... If you're interested in Bluetooth and know a little about BlueZ and the command line, by all means let me know on IRC, and let's get this to be really awesome.
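
To poke at it from a terminal, the BlueZ 4 era command-line tools are a reasonable starting point (a quick sketch, assuming the bluez test scripts are installed on the image; the device address is just an example):

# scan for nearby devices
hcitool scan
# pair with one of the discovered devices
bluez-simple-agent hci0 00:11:22:33:44:55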

For WiFi, we also have indicator-network in the archive: completely rewritten, having received tons of love, and soon ready to shine on the desktop as well as on mobile phones and tablets. Don't get me wrong, there's still a long way to go, but it feels to me like one thing we can pretty quickly ramp up to converge on -- essentially the same experience no matter what form factor it runs on.

That covers WiFi -- but what about mobile data (3G / 4G, though I'd rather speak of it in the most general terms)? Well, that's being actively worked on as well. We're not too far off from having working 3G data on the "officially supported" devices (Galaxy Nexus, Nexus 4, Nexus 7); and from there it seems like it may not be too much trouble for people to ramp up that support for other devices. Sure, it's complicated work because of how technical it is, but I think it's still approachable.

For mobile data, I've lately been working on teaching NetworkManager to speak to oFono -- the two stacks we decided, at UDS, to use for networking and telephony. The code isn't too pretty yet, but I'll add a bit more meat to it and provide it as a test bzr branch for people to experiment with, until it's stable enough to make it into the archive.

All this to mention that I'm really excited about the current progress for Touch, and although my progress on my own work items wasn't exactly stellar, I'm thrilled about what's to come. So thrilled I'll contact my cell provider to get the right SIM card size to start using Touch on a Nexus 4 as my main phone next week.

See you all at virtual UDS, looking forward to lots of constructive discussions about networking and connectivity!

Saturday, 18 June 2011

Hacking on usb-modeswitch, part 1

Lately I've been spending time porting the usb_modeswitch_dispatcher tcl script from the usb-modeswitch package to C.

While it's been a great exercise for both my knowledge of Tcl (almost non-existent) and my knowledge of C, it's also been very interesting so far to look at how things are done to "switch" USB devices from storage mode into modem mode.

One of the problems I'm hitting now is the balance between performance and disk space usage. In an attempt to cut down on installed size, the usb_modeswitch data has all been compressed into one tarball comprising 162 small text files, each with the vendor and product IDs expected before and after switching, plus the magic message that does the actual switch (see Debian bug 578024 for the rationale for compressing the files). A compressed tarball is great for saving space, but tends to cause delays at boot when the file needs to be uncompressed (perhaps multiple times). On the other hand, separate files take more space, which is especially a problem for those who don't need usb-modeswitch on their systems.

I'm now working on quantifying the performance difference between compressed and uncompressed, as well as the actual size impact of each option on the Live CD. Theoretically, there should be little difference, or even higher space usage, with the compressed tarball on the Live CD (because you can't really compress something that is already compressed). I'll find out and post results here. As for the performance impact, looking through the compressed tarball and extracting one file from it may not cost much, but every little bit of gain can help.
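
For the curious, the kind of quick-and-dirty comparison I have in mind looks roughly like this (a sketch assuming the data tarball lives at /usr/share/usb_modeswitch/configPack.tar.gz and unpacked configs under /etc/usb_modeswitch.d, as in the Debian packaging; the device ID is just an example):

# extract a single config straight from the compressed tarball
time tar -xzf /usr/share/usb_modeswitch/configPack.tar.gz -O '12d1:1446' > /dev/null
# versus reading the same config shipped as a plain file
time cat /etc/usb_modeswitch.d/12d1:1446 > /dev/null
# and compare the on-disk footprint of both approaches
du -sh /usr/share/usb_modeswitch/configPack.tar.gz /etc/usb_modeswitch.d/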

Thursday, 24 March 2011

Idea #27250: Auto eth0 isn't very user friendly. Many people wont know what it is.

The Problem

In case you didn't already figure it out, the title refers to a quite popular idea on Ubuntu Brainstorm. It also refers to a bug report against NetworkManager in Launchpad: bug #386900.

This is one issue I'd particularly like to solve soon. Although it most likely won't change for Natty (given that we're in Feature Freeze, with UI Freeze imminent), I believe the question can truly be brought to a consensus and fixed early in the Oneiric cycle (and actually be made available upstream for everyone's benefit).

I think many of the issues raised in this bug report stem from the diverging expectations of people who just want their wired connection to work and likely don't need to change it all that often, and people who actively use NetworkManager's connection profiles to achieve various things.

First, some background:

Why "Auto eth0" ?

The name "Auto eth0" comes from... well, the fact that it's a connection that was created automatically by NetworkManager with the simplest default settings (that is, just use DHCP to get an IP address), and the fact that it was created for the interface eth0. As such, people with multiple wired network cards get one "Auto XXXX" profile for each wired card. This profile should take care of 90% of all use cases, since most people just want to plug their system in, have their home router hand over an IP address, and get online.

What's this with profiles ?

I just mentioned that the connection names shown are profiles. This is actually very important to me and to quite a lot of people, because there are often cases where one wants specific network settings at work and different ones at home. In other words, one could use "Auto eth0" at home with a simple setup, and benefit from an "At work" profile that sets a static IP address or different DNS search domains (which would let your computer access "planet" instead of "planet.ubuntu.com" in Firefox, for instance).

Why so many issues ?

I guess this all falls apart when you consider that most people probably won't use alternate profiles for wired connections. DHCP tends to get most things right from the start, which makes profiles not very useful unless you want to do very specific things with your connection.

Add to this the fact that not everyone knows that eth0 is what Linux calls your first wired network card (instead of, say, "Local Area Connection", as I believe it is called on Windows), and you have a nice little mess to untangle.

Fixing all of this

I can't say I have all the answers. It's still unclear to me how much information is absolutely required, and I'm well aware that we can't really please everyone.

However, I've added a proposal to the brainstorm idea (Solution #7). It goes like this:
I'm suggesting the profile be named something like "Default", and that it not be tied to any particular adapter.
This way, any new connection would use that profile, which would carry the same default settings "Auto eth0" has today (DHCP and the usual). All adapters could share it, so adding a new interface to a computer would still just "work".
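
To make the proposal a bit more concrete, here's a minimal sketch of what such a shared profile could look like with NetworkManager's keyfile plugin (file name and UUID are made up; the point is that nothing ties it to a specific interface):

# /etc/NetworkManager/system-connections/Default
[connection]
id=Default
uuid=1a2b3c4d-0000-0000-0000-000000000000
type=802-3-ethernet
autoconnect=true

[802-3-ethernet]
# no mac-address or interface binding: any wired adapter can use it

[ipv4]
method=auto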

For notifications, I suggest the following changes:

- The title should mention "Wired network", and probably the name of the interface (eth0 in most cases).
- The text of the notification should say:

Connection established, using profile "Default"

or whatever profile is in use.
Furthermore, perhaps items in the network menu shouldn't list the full details of the network card (its full name from udev, as they do now). Instead, the interface name would be sufficient. I expect people who use multiple cards to know what eth0 and eth1 mean and refer to.

Lastly, drop the "Auto" from user-created wireless network profiles too. Since they are created by the user and carry the name of the wireless network, "Auto" is both unnecessary, and possibly incorrect (since people can change the settings after creating it).

I'd very much like anyone with an opinion on this to vote on Ubuntu Brainstorm for the idea they prefer, and comment with why my suggestion breaks things for them if it does. I certainly could have forgotten things. Comments on this blog are welcome too ;)

Tuesday, 12 October 2010

Booting to ISO images from a USB key

I'm pretty sure this has been done before (I recall seeing a blog post about it... if you're the one who posted on Planet Ubuntu with a similar howto, let me know as I want to talk to you, at least to say thanks), but here it is anyway:

A couple of days ago I wrote a quick script to generate a roughly correct grub.cfg from the contents of a directory filled with .iso files. The goal: a bootable USB key that runs GRUB and lets you choose which ISO to boot. It could be Ubuntu desktop, netbook, etc. -- it doesn't matter, as long as you have enough room on the key.

This is done by leveraging GRUB's loopback command and the iso-scan/filename kernel parameter. But first, setting it up...

You'll need to:

  • grab a free USB key, formatted to vfat (mkfs.vfat -F 32 /dev/sdX1) -- the more space the better
  • create a directory called "iso" on it
  • install GRUB on it:
    grub-install --root-directory=/media/MOUNTPOINT /dev/sdX
    • this creates the boot/grub directories and installs everything GRUB needs
  • copy ISO files to the iso/ directory
  • run update-grub.py (available in the bzr branch lp:~mathieu-tl/+junk/bootable-iso-usb) from the key's mountpoint
    • Careful: it's only quickly tested, and it overwrites boot/grub/grub.cfg relative to the current working directory.

Sorry for the unimaginative naming of the script.

In short, the script attempts to guess what kind of distribution is in each ISO file. I've tested desktop and alternate images with success; both seem to boot and properly get you to an install (or, for desktop, "maybe-ubiquity", which means you'll get a prompt asking whether you want to run the live session or just ubiquity to install). All this needed was to make use of the ISO naming scheme and, more importantly, of the contents of the ISO files as read by isoinfo (from the genisoimage package).
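
For reference, the entries it writes boil down to a loopback stanza along these lines (a sketch for an Ubuntu 10.10 desktop ISO; kernel and initrd paths differ between images, which is exactly what the script tries to figure out):

menuentry "ubuntu-10.10-desktop-i386.iso" {
    set isofile="/iso/ubuntu-10.10-desktop-i386.iso"
    loopback loop $isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash --
    initrd (loop)/casper/initrd.lz
}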

Even if it never ends up being of any use, it was at least a fun little thing to write.

Tuesday, 28 September 2010

Videotron Huawei E1381 USB 3G dongle

I was fortunate enough today to be lent a Huawei E1381 3G USB modem. I'm happy to report that all I had to do was plug it into my Aspire One running Maverick UNE: it was immediately recognized and available in NetworkManager.

What's more, Videotron's new 3G offering is already available from the mobile-broadband-provider-info package, so you can already plug in such a key, create a new 3G connection in NM and select Videotron as the ISP to quickly get online!
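
If you want to double-check that such a dongle has switched over correctly before blaming NetworkManager, a couple of generic commands go a long way (12d1 is Huawei's USB vendor ID; the exact product ID will vary per device):

# the device should show up with its modem product ID once switched
lsusb | grep -i 12d1
# watch usb_modeswitch and the modem stack claim the device
grep -iE 'usb_modeswitch|modem' /var/log/syslog | tail -n 20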

Needless to say, this is pretty good news given the current developments in 3G offerings in the province of Quebec; so thanks to all those involved in making this available :)

As a side note, I'll be going to UDS-N in October, and I'm very interested in hearing your success stories, or even failure stories, about 3G modems. I can't promise anything, but I *do* think it would be great to get a good grasp of which 3G devices are supported and which aren't. So if you own a 3G USB dongle that still doesn't work in Maverick and you are going to UDS, don't hesitate to ping me on IRC or stop me IRL so we can look into it.

I may write up a wiki page with devices which are known to work (or not to work), so stay tuned for an update on this subject.

Friday, 10 September 2010

Broadcom driver now open source

This was brought to my attention today. I see it as a huge win for Ubuntu since, let's face it, Broadcom devices are a large part of the WiFi ecosystem. I'm happy to see Broadcom finally publishing some of their source code, and through this helping Linux users of all distributions get better working WiFi. It will always make me happier to see people having fewer issues with wireless devices. It also translates to less wheel-spinning on broken drivers we could not possibly fix (since they were closed source), and more productive work on other aspects of Ubuntu... or just a better chance of fixing driver issues :)

Of course, this is a work in progress and not all Broadcom devices are currently supported by this driver, but I believe that in due time, Broadcom could become one of the golden examples we can give to hardware manufacturers on how to provide the best kind of support they can to the Linux community.

Read more about this on Slashdot, OSNews, or read the announcement on the Linux Wireless mailing list.

Thanks Broadcom!

Tuesday, 15 June 2010

HP Mini netbooks mic/headphones adapter

Engineers at HP have been having these wonderful ideas in the hope of saving space (I guess). On some of their netbooks, in particular the Mini 1000, there is only one 3.5mm port, used for both the microphone and headphones.

I needed to test this system in order to confirm the status of bug #532958. If you're like me and didn't get an adapter with the system, here is how you can make one.


Requirements:

(photos of the parts were shown here: the RCA-to-3.5mm cable and the 3.5mm plugs described below)
Process:

Note the type of cable I'm using: it is somewhat important. You could of course use any type of cable with at least 3 wires, but you would then have to find a 4-conductor 3.5mm plug, which is a little harder to find. This cable already has one, so just cut off the RCA plugs and strip those ends to reveal the signal wire and ground.

You'll then want to attach the wires to the 3.5mm jacks as shown in the pictures below.

  • Headphone end: use the yellow and red RCA wires. Attach the colored wires to the ring and tip (the names for the metal tabs at the bottom of the jack that carry the actual signal when everything is connected), then attach both ground wires to the ground tab (the tab sticking out to the side).
  • Microphone end: use the white RCA wire, attach to either the ring or tip, then ground it.



Result:

(a photo of the finished adapter was shown here)
If the jacks aren't properly grounded, or the RCA cable you used has a different pinout, you will get no sound, mono sound, or very low volume. If so, detach everything and start over, maybe trying different color combinations.

Once done correctly, you will retain stereo sound for the headphones, and will be able to use an external microphone, though the internal mic is disabled when this adapter is connected and the setting in Sound Input is at "Microphone". To use the internal microphone instead, you would want to use "Line Input"... but then you probably don't need this adapter and would be better off using standard headphones.

Note: after more research, I found out that an iPhone headset might be the easiest way to go. However, I do not own an iPhone, and I only found this out after finishing the adapter.

Thursday, 12 November 2009

Server FAIL

Wow. BIG Fail for the IBM xSeries X306. Its hard drive setup is the worst I've seen in a long, long time.

We were trying to kickstart an old X306 we had, to keep using good-ish hardware for stuff that can make use of it. At first, it doesn't detect the disks... Michel and I are all "what the hell could be wrong with this system?".

Looking through the ventilation holes, I see a little sticker that essentially says "do not pull without disconnecting inside" in some weird (and pretty bad, actually) translation by one of our coworkers.

Obviously, that's what Michel had done a little earlier, without expecting the kind of SNAFU we were seeing to really be possible. Of course, I'd probably have done the same. Work long enough with good HP servers and you come to expect solid, well-engineered connectors inside a server to plug the disks into. You kind of expect it from IBM servers too, seeing as other, bigger systems don't pull that kind of crap...

Nope. It seems hard connectors were too hard.

Instead, the IBM xSeries X306 has some kind of IDE-like ribbons and connectors to the disks -- cables that are short as hell, attached to the front-accessible (hot-swap? probably not) disks. When you pull on the disks, the ribbons eventually detach (or rip something out, YMMV there...), but as you re-insert the disks you run into major problems: the ribbon ends up at the back, and you need to open the top of the server to wiggle the ribbon and power cord into submission and insert them into the back of the disk, with less than an inch of room for the cable, the disk, and your fingers.

IBM, you really could have done better on that one. On lots of things on many server systems in the xSeries actually, including the RSA remote access stuff...

Thursday, 18 June 2009

concordance now in Debian NEW queue!

Hi!

For those interested, my efforts to package concordance and congruity, the CLI and GUI tools to configure a Logitech Harmony remote, are finally starting to pay off!

concordance has been accepted from the Debian-mentors repo and uploaded to the Debian NEW queue! Big thanks to Patrick Matthäi for his guidance.

This means that concordance will hopefully soon also be in Ubuntu. (It was also in REVU -- thanks to all those who reviewed it and posted comments; they were all very helpful.)

Now, I'm still having some issues packaging the newest version of congruity -- or more precisely, getting an error from the package after it's installed. It's something I believe is related to python-wxgtk2.8-0: 'AttributeError: 'BoxSizer' object has no attribute 'AddStretchSpacer''. I just haven't had much luck finding out which dependency I got wrong or am missing...

Fortunately, the packages are also readily available in case you want to try them: check out my PPA at https://edge.launchpad.net/~mathieu-tl/+archive/ppa.
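
If you'd like to give them a spin, something along these lines should do (assuming the packages in the PPA are simply named concordance and congruity, and that add-apt-repository is available on your release -- otherwise add the PPA's deb line to your sources manually):

sudo add-apt-repository ppa:mathieu-tl/ppa
sudo apt-get update
sudo apt-get install concordance congruity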

Sunday, 14 September 2008

Planet Ubuntu Meme (aka I am so unoriginal)

My computers are named after Greek/Roman deities, and more specifically some I've also seen in the songs of my favorite band, The Cruxshadows.

So far, I have:

  • eurydice : my openwrt router
  • orpheus: my main desktop computer (currently offline because of some issues though)
  • icarus: my work laptop. It also obviously has a name for the office.
  • daedelus: used to be my palm pilot.
  • cerberus: my EEEPC.
  • persephone: my jailbroken iPod Touch.
Hopefully there will be more to come.