Tuesday 1 August 2017

How to sign things for Secure Boot

Secure Boot signing


The whole concept of Secure Boot requires that there exists a trust chain, from the very first thing loaded by the hardware (the firmware code), all the way through to the last things loaded by the operating system as part of the kernel: the modules. In other words, not just the firmware and bootloader require signatures, the kernel and modules too. People don't generally change firmware or bootloader all that much, but what of rebuilding a kernel or adding extra modules provided by hardware manufacturers?

The Secure Boot story in Ubuntu includes the fact that you might want to build your own kernel (but we do hope you can just use the generic kernel we ship in the archive), and that you may install your own kernel modules. This means signing UEFI binaries and kernel modules, each of which is done with its own set of tools.

But first, more on the trust chain used for Secure Boot.


Certificates in shim


To begin signing things for UEFI Secure Boot, you need to create an X.509 certificate that can be imported in firmware: either directly through the manufacturer's firmware interface, or more easily, by way of shim.

Creating a certificate for use in UEFI Secure Boot is relatively simple; openssl can do it with a few commands. Now, we need to create an SSL certificate for module signing...

First, let's create some config to let openssl know what we want to create (let's call it 'openssl.cnf'):

# This definition stops the following lines choking if HOME isn't
# defined.
HOME                    = .
RANDFILE                = $ENV::HOME/.rnd 
[ req ]
distinguished_name      = req_distinguished_name
x509_extensions         = v3
string_mask             = utf8only
prompt                  = no

[ req_distinguished_name ]
countryName             = CA
stateOrProvinceName     = Quebec
localityName            = Montreal
0.organizationName      = cyphermox
commonName              = Secure Boot Signing
emailAddress            = example@example.com

[ v3 ]
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer
basicConstraints        = critical,CA:FALSE
extendedKeyUsage        = codeSigning,1.3.6.1.4.1.311.10.3.6,1.3.6.1.4.1.2312.16.1.2
nsComment               = "OpenSSL Generated Certificate"
Either update the values under "[ req_distinguished_name ]", or get rid of that section altogether (along with the "distinguished_name" field) and remove the "prompt" field; openssl will then ask you for the values you want to set for the certificate identification.

The identification itself does not matter much, but some of the later values are important: for example, we do want to make sure "1.3.6.1.4.1.2312.16.1.2" is included in extendedKeyUsage, and it is that OID that will tell shim this is meant to be a module signing certificate.

Then, we can start the fun part: creating the private and public keys.

openssl req -config ./openssl.cnf \
        -new -x509 -newkey rsa:2048 \
        -nodes -days 36500 -outform DER \
        -keyout "MOK.priv" \
        -out "MOK.der"
This command will create both the private and public part of the certificate to sign things. You need both files to sign; and just the public part (MOK.der) to enroll the key in shim.
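
If you want to double-check what was just generated, openssl can display the certificate details; the subject and the extendedKeyUsage values should match what you set in openssl.cnf:

openssl x509 -in MOK.der -inform DER -noout -text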


Enrolling the key


Now, let's enroll that key we just created in shim. That makes it so it will be accepted as a valid signing key for any module the kernel wants to load, as well as a valid key should you want to build your own bootloader or kernels (provided that you don't include that '1.3.6.1.4.1.2312.16.1.2' OID discussed earlier).

To enroll a key, use the mokutil command:
sudo mokutil --import MOK.der
Follow the prompts to enter a password that will be used to make sure you really do want to enroll the key in a minute.
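
Before rebooting, you can confirm that Secure Boot is enabled on the system and that the key is queued for enrollment:

mokutil --sb-state
mokutil --list-new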

Once this is done, reboot. Just before loading GRUB, shim will show a blue screen (which is actually another piece of the shim project called "MokManager"). Use that screen to select "Enroll MOK" and follow the menus to finish the enrollment process. You can also look at some of the properties of the key you're trying to add using "View key", just to make sure it's indeed the right one. MokManager will ask you for the password we typed in earlier when running mokutil, save the key, and the system will reboot once more.


Let's sign things


Before we sign, let's make sure the key we added really is seen by the kernel. To do this, we can go look at /proc/keys:

$ sudo cat /proc/keys
0020f22a I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
0022a089 I------     2 perm 1f0b0000     0     0 keyring   .builtin_trusted_keys: 1
003462c9 I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
00709f1c I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
00f488cc I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
[...]
1dcb85e2 I------     1 perm 1f030000     0     0 asymmetri Build time autogenerated kernel key: eae8fa5ee6c91603c031c81226b2df4b135df7d2: X509.rsa 135df7d2 []
[...]

Just make sure a key exists there with the attributes (commonName, etc.) you entered earlier.
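
For example, with the commonName used in the example configuration above, something like this should show your enrolled key:

sudo grep "Secure Boot Signing" /proc/keys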

To sign kernel modules, we can use the kmodsign command:
kmodsign sha512 MOK.priv MOK.der module.ko
module.ko should be the file name of a kernel module file you want to sign. The signature will be appended to it by kmodsign, but if you would rather keep the signature separate and concatenate it to the module yourself, you can do that too (see 'kmodsign --help').
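
If you have several modules to sign (for example, everything produced by an out-of-tree driver build), a small shell loop does the job; the build directory here is just a hypothetical example:

find ~/build/my-driver -name '*.ko' -exec kmodsign sha512 MOK.priv MOK.der {} \;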

You can validate that the module is signed by checking that it includes the string '~Module signature appended~':

$ hexdump -Cv module.ko | tail -n 5
00002c20  10 14 08 cd eb 67 a8 3d  ac 82 e1 1d 46 b5 5c 91  |.....g.=....F.\.|
00002c30  9c cb 47 f7 c9 77 00 00  02 00 00 00 00 00 00 00  |..G..w..........|
00002c40  02 9e 7e 4d 6f 64 75 6c  65 20 73 69 67 6e 61 74  |..~Module signat|
00002c50  75 72 65 20 61 70 70 65  6e 64 65 64 7e 0a        |ure appended~.|
00002c5e

You can also use hexdump this way to check that the signing key is the one you created.


What about kernels and bootloaders?


To sign a custom kernel or any other EFI binary you want to have loaded by shim, you'll need to use a different command: sbsign. Unfortunately, we'll need the certificate in a different format in this case.

Let's convert the certificate we created earlier into PEM:

openssl x509 -in MOK.der -inform DER -outform PEM -out MOK.pem

Now, we can use this to sign our EFI binary:

sbsign --key MOK.priv --cert MOK.pem my_binary.efi --output my_binary.efi.signed
As long as the signing key is enrolled in shim and does not contain the OID from earlier (since that limits the use of the key to kernel module signing), the binary should be loaded just fine by shim.
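
You can verify the result with sbverify (shipped in the same sbsigntool package as sbsign), using the PEM certificate we just created:

sbverify --cert MOK.pem my_binary.efi.signed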


Doing signatures outside shim


If you don't want to use shim to handle keys (but I do recommend that you do use it), you will need to create different certificates; one of them being the PK (Platform Key) for the system, which you can enroll in firmware directly via KeyTool or some firmware tool provided with your system. I will not elaborate on the steps to enroll the keys in firmware, as they tend to vary from system to system, but the main idea is to put the system in Secure Boot "Setup Mode", run KeyTool (which is its own EFI binary you can build yourself and run), and enroll the keys -- first installing the KEK and DB keys, and finishing with the PK. These files need to be available from some FAT partition.

Here is a script to generate the right certificates and files, which I can share (it is itself adapted from somewhere, I can't remember where):

#!/bin/bash
echo -n "Enter a Common Name to embed in the keys: "
read NAME
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME PK/" -keyout PK.key \
        -out PK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME KEK/" -keyout KEK.key \
        -out KEK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME DB/" -keyout DB.key \
        -out DB.crt -days 3650 -nodes -sha256
openssl x509 -in PK.crt -out PK.cer -outform DER
openssl x509 -in KEK.crt -out KEK.cer -outform DER
openssl x509 -in DB.crt -out DB.cer -outform DER
GUID=`python3 -c 'import uuid; print(uuid.uuid1())'`
echo $GUID > myGUID.txt
cert-to-efi-sig-list -g $GUID PK.crt PK.esl
cert-to-efi-sig-list -g $GUID KEK.crt KEK.esl
cert-to-efi-sig-list -g $GUID DB.crt DB.esl
rm -f noPK.esl
touch noPK.esl
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK PK.esl PK.auth
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK noPK.esl noPK.auth
chmod 0600 *.key
echo ""
echo ""
echo "For use with KeyTool, copy the *.auth and *.esl files to a FAT USB"
echo "flash drive or to your EFI System Partition (ESP)."
echo "For use with most UEFIs' built-in key managers, copy the *.cer files."
echo ""
The same logic as earlier applies: sign things using sbsign or kmodsign as required (use the .crt files with sbsign, and the .cer files with kmodsign); and as long as the keys are properly enrolled in the firmware or in shim, the things you signed will load successfully.
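
For example, assuming the files generated by the script above, signing a custom bootloader or kernel image and a module with the DB key would look something like this:

sbsign --key DB.key --cert DB.crt my_binary.efi --output my_binary.efi.signed
kmodsign sha512 DB.key DB.cer module.ko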


What's coming up for Secure Boot in Ubuntu


Signing things is complex -- you need to create SSL certificates, enroll them in firmware or shim... You need to have a fair amount of prior knowledge of how Secure Boot works, and of what commands to use. It's rather obvious that this isn't within the reach of everybody, and it makes for a somewhat bad experience. For that reason, we're working on making key creation, enrollment and signing easier when installing DKMS modules.

update-secureboot-policy should soon let you generate and enroll a key; and DKMS will be able to sign things by itself using that key.

Tuesday 20 June 2017

Netplan by default in 17.10

Friday, I uploaded an updated nplan package (version 0.24) to change its Priority: field to important, as well as an update of ubuntu-meta (following a seeds update), to replace ifupdown with nplan in the minimal seed.

What this means concretely is that nplan should now be installed by default on all images as part of ubuntu-minimal, with ifupdown dropped at the same time.

For the time being, ifupdown is still installed by default due to the way debootstrap generates the very minimal images used as a base for other images -- its base set of packages depends only on the Priority: field of packages. Thus, nplan was added, but ifupdown still needs a similar change (which I will do shortly) to disappear from all images.

The intent is that nplan would now be the standard way of configuring networks. I've also sent an email about this to ubuntu-devel-announce@.

I've already written a bit about what netplan is and does, and I have still more to write on the subject (discussing syntax and how to do common things). We especially like how using a purely declarative syntax makes things easier for everyone (and if you can't do what you want that way, then it's a bug you should report).

MaaS, cloud-init and others have already started to support writing netplan configuration.

The full specification (summary wiki page and a blueprint reachable from it) for the migration process is available here.

While I get to writing something comprehensive about how to use the netplan YAML to configure networks, if you want to know more there's always the manpage, which is the easiest to use documentation. It should always be up to date with the current version of netplan available on your release (since we backported the last version to Xenial, Yakkety, and Zesty), and accessible via:

man 5 netplan

To make things "easy" however, you can also check out the netplan documentation directly from the source tree here:

https://git.launchpad.net/netplan/tree/doc/netplan.md

There's also a wiki page I started to get ready that links to the most useful things, such as an overview of the design of netplan, some discussion on the renderers we support and some of the commands that can be used.

We even have an IRC channel on Freenode: #netplan

I think you'll find that using netplan makes configuring networks easy and even enjoyable; but if you run into an issue, be sure to file a bug on Launchpad here:

Wednesday 24 May 2017

An overview of UEFI Secure Boot on Ubuntu

Secure Boot is here

Ubuntu has now supported UEFI booting and Secure Boot for long enough that it is available, and reasonably up to date, on all supported releases. Here is how Secure Boot works.

An overview

I'm including a diagram here; I know it's a little complicated, so I will also explain how things happen.


In all cases, booting a system in UEFI mode loads the UEFI firmware, which typically contains pre-loaded keys (at least, on x86). These keys are usually those from Microsoft, so that Windows can load its own bootloader and verify it, as well as those from the computer manufacturer. The firmware doesn't, by itself, know anything special about how to boot the system -- this is informed by NVRAM (or some similar memory that survives a reboot) by way of a few variables: BootOrder, which specifies what order to try boot entries in, and BootEntry#### (hex numbers), each of which contains the path to an EFI image to load, a disk, or some other method of starting the computer (such as booting into the Setup tool for that firmware). If no BootEntry variable listed in BootOrder gets the system booting, then nothing happens. Systems, however, will usually at least include a path to a disk as a permanent or default BootEntry. Shim relies on that default entry, or on a distro-created BootEntry, to be loaded in the first place.
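
From a running Linux system, you can inspect these boot variables with the efibootmgr tool; the -v flag shows the full details of each entry:

sudo efibootmgr -v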

Once shim is actually found and booted, it will try to validate the signature of the next piece of the puzzle: grub2, MokManager, or fallback, depending on the state of shim's own variables in NVRAM; more on this later.

In the usual scenario, shim will validate the grub2 image successfully, then grub2 itself will try to load the kernel or chainload another EFI binary, after attempting to validate the signatures on these images by way of asking shim to check the signature.

Shim

Shim is just a very simple layer that holds on to keys outside of those installed by default on the system (since those normally can't be changed outside of Setup Mode, and changing them requires a few steps). It knows how to load grub2 in the normal case, how to load MokManager if policy changes need to be applied (such as disabling signature validation or adding new keys), and how to load the fallback binary, which can re-create BootEntry variables in case the firmware isn't able to handle them. I will expand on MokManager and fallback in a future blog post.

Your diagram says shim is signed by Microsoft, what's up with that?

Indeed, shim is an EFI binary that is signed by Microsoft as we ship it in Ubuntu. Other distributions do the same. This is required because the firmware on most systems already contains Microsoft certificates (pre-loaded in the factory), and it would be impractical to have different shims for each manufacturer of hardware. All EFI binaries can be easily re-signed anyway, we just do things like this to make it as easy as possible for the largest number of people.

One thing this means is that uploads of shim require a lot of effort and testing. Fortunately, since it is used by other distributions too, it is a well-tested piece of code. There is even now a community process to handle review of submissions for signature by Microsoft, in an effort to catch anything outlandish as quickly and as early as possible.

Why reboot once a policy change is made or boot entries are rebuilt?

All of this happens through changes in firmware variables. Rebooting makes sure we can properly take into account changes in the firmware variables, and possibly carry on with other "backlogged" actions that need to happen (for instance, rebuilding BootEntry variables first, and then loading MokManager to add a new signing key before we can load a new grub2 image you signed yourself).

Grub2

grub2 is not a new piece of the boot process in any way. It's been around for a long while. The difference between booting in BIOS mode and in UEFI is that we install a UEFI binary version of grub2. The software is the same, just packaged slightly differently (I may outline the UEFI binary format at some point in the future). It also goes through some code paths that are specific to UEFI, such as checking whether we've booted through shim, and if so, asking it to validate signatures. If not, grub2 can still validate signatures, but it has to do so using the UEFI protocol itself, which only allows signatures by keys included in the firmware -- as mentioned earlier, mostly just the Microsoft keys.

grub2 in UEFI otherwise works just like it would elsewhere: it tries to find its grub.cfg configuration file, and follows its instructions to boot the kernel and load the initramfs.

When Secure Boot is enabled, loading the kernel normally requires that the kernel itself is signed. The kernels we install in Ubuntu are signed by Canonical, just like grub2 is, and shim knows about the signing key and can validate these signatures.

At the time of this writing, if the kernel isn't signed or is signed by a key that isn't known, grub2 will fall back to loading the kernel as a normal binary (as in, not signed), outside of BootServices (a special mode we're in while booting the system; it is normally exited early on by the kernel as it loads). Exiting BootServices means some special features of the firmware are not available to anything that runs afterwards, so that while things may have been loaded in UEFI mode, they will not have access to everything in firmware. If the kernel is signed correctly, then grub2 leaves the ExitBootServices call to be done by the kernel.

Very soon, we will stop allowing unsigned kernels (or kernels signed by unknown keys) to be loaded in Ubuntu. This is work in progress. This change will not affect most users, only those who build their own kernels. In this case, they will still be able to load kernels by making sure they are signed by some key (such as their own, and I will cover signing things in my next blog entry), and importing that key in shim (which is a step you only need to do once).
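
If you are curious whether a given kernel image carries a signature, and by whom, the sbsigntool package can list the signatures attached to it; for example, for the currently running kernel:

sudo sbverify --list /boot/vmlinuz-$(uname -r)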

The kernel

When booted in UEFI Secure Boot mode, the kernel enforces that the modules it loads are properly signed. This means that those who need to build their own custom modules, or use DKMS modules (virtualbox, r8168, bbswitch, etc.), need to take extra steps to let the modules load properly.

In order to make this as easy as possible for people, for now we've opted to let users disable Secure Boot validation in shim via a semi-automatic process. Shim is still being verified by the system firmware, but any piece following it that asks shim to validate something will get an affirmative response (ie. things are valid, even if not signed or signed by an unknown key). grub2 will happily load your kernel, and your kernel will be happy to load custom modules. This is obviously not a perfectly secure solution, more of a temporary measure to allow things to carry on as they did before. In the future, we'll replace this with a wizard-type tool to let users sign their own modules easily. For now, signature of binaries and modules is a manual process (as above, I will expand on it in a future blog entry).

Shim validation

If you were using DKMS packages and feel you'd really prefer to have shim validate everything, you can re-enable validation (but be aware that if your system requires these drivers, they will not load, and your system may be unusable -- or at least whatever needs that driver will not work):
sudo update-secureboot-policy --enable
If nothing happens, it's because you already have shim validation enabled: nothing has required that it be disabled. If things aren't as they should be (for instance, Secure Boot is not enabled on the system), the command will tell you.

And although we certainly don't recommend it, you can disable shim validation yourself with much the same command (see --help). There is an example of use of update-secureboot-policy here.
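
To check where you stand, mokutil can report whether Secure Boot is enabled and which keys you have enrolled in shim:

mokutil --sb-state
mokutil --list-enrolled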

Tuesday 23 May 2017

ss: another way to get socket statistics

In my last blog post I mentioned ss, another tool that comes with the iproute2 package and allows you to query statistics about sockets. It does much the same thing as netstat, with the added benefit that it is typically a little bit faster, and shorter to type.

Running just ss will by default display much the same thing as netstat, and it can similarly be passed options to limit the output to just what you want. For instance:

$ ss -t
State       Recv-Q Send-Q       Local Address:Port                        Peer Address:Port              
ESTAB       0      0                127.0.0.1:postgresql                     127.0.0.1:48154              
ESTAB       0      0            192.168.0.136:35296                      192.168.0.120:8009                
ESTAB       0      0            192.168.0.136:47574                     173.194.74.189:https
[...]

ss -t shows just TCP connections. ss -u can be used to show UDP connections, -l will show only listening ports, and things can be further filtered to just the information you want.
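
For example, a common combination is listing the listening TCP sockets with numeric ports and the owning process (the latter generally requires root):

sudo ss -tlnp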

I have not tested all the possible options, but you can even forcibly close sockets with -K.

One place where ss really shines though is in its filtering capabilities. Let's list all connections with a source port of 22 (ssh):

$ ss state all sport = :ssh
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   LISTEN     0      128                    *:ssh                                  *:*                  
tcp   ESTAB      0      0          192.168.0.136:ssh                      192.168.0.102:46540              
tcp   LISTEN     0      128                   :::ssh                                 :::* 
And if I want to show only connected sockets (everything but listening or closed):

$ ss state connected sport = :ssh
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   ESTAB      0      0          192.168.0.136:ssh                      192.168.0.102:46540 

Similarly, you can have it list all connections to a specific host or range; in this case, using the 74.125.0.0/16 subnet, which apparently belongs to Google:

$ ss state all dst 74.125.0.0/16
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   ESTAB      0      0          192.168.0.136:33616                   74.125.142.189:https              
tcp   ESTAB      0      0          192.168.0.136:42034                    74.125.70.189:https              
tcp   ESTAB      0      0          192.168.0.136:57408                   74.125.202.189:https
        
This is very much the same syntax as for iptables, so if you're familiar with that already, it will be quite easy to pick up. You can also install the iproute2-doc package, and look in /usr/share/doc/iproute2-doc/ss.html for the full documentation.
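
Filters can also be combined with boolean operators. For example, to show established connections that involve HTTPS on either end (note the quoting, so the shell doesn't interpret the parentheses):

ss -o state established '( dport = :https or sport = :https )'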

Try it for yourself! You'll see how well it works. If anything, I'm glad for the fewer characters this makes me type.

Tuesday 9 May 2017

If you're still using ifconfig, you're living in the past

The world evolves

I regularly see "recommendations" to use ifconfig to get interface information in mailing list posts or bug reports and other places. I might even be guilty of it myself. Still, the world of networking has evolved quite a lot since ifconfig was the de-facto standard to bring up a device, check its IP or set an IP.

Following some improvements in the kernel and the gradual move to driving network things via netlink, ifconfig has been largely replaced by the ip command.

Running just ip yields the following:

Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec |
                    -f[amily] { inet | inet6 | ipx | dnet | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -a[ll] | -c[olor]}

I understand this may look complicated to some people, but the gist of it is that with ip, you interact with objects, and apply some kind of command to them. For example:

ip address show

This is the main command that would be used in place of ifconfig. It will just display the IP addresses assigned to all interfaces. To be precise, it will show you the layer 3 details of each interface: the IPv4 and IPv6 addresses, whether it is up, and the different properties related to the addresses...
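
If you find the full output too verbose, newer versions of iproute2 also have a brief mode that shows one line per interface:

ip -br address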

Another command will give you details about the layer 2 properties of the interface: its MAC address (ethernet address), and so on -- even though some of this is also shown by ip address:

ip link show

Furthermore, you can set devices up or down (similar to ifconfig eth0 up or ifconfig eth0 down) simply by using:

ip link set DEVICE up or ip link set DEVICE down
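
For reference, here are rough equivalents for a few common ifconfig and route invocations (the device name and addresses are just examples):

ifconfig eth0                                        ->  ip address show dev eth0
ifconfig eth0 192.168.0.5 netmask 255.255.255.0 up   ->  ip address add 192.168.0.5/24 dev eth0 && ip link set eth0 up
route -n                                             ->  ip route show
arp -n                                               ->  ip neigh show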

As shown above, there are lots of other objects that can be interacted with using the ip command. I'll cover another: ip route, in another post.

Why is this important?

As time passes, more and more features are becoming easier to use with the ip command instead of with ifconfig. We've already stopped installing ifconfig on desktops (it still gets installed on servers for now), and people have been discussing dropping net-tools (the package that ships ifconfig and a few other old commands that are replaced) for a while now. It may be time to revisit not installing net-tools by default anywhere.

I want to know about your world

Are you still using one of the following tools?

/bin/netstat    (replaced by ss, for which I'll dedicate another blog post entirely)
/sbin/ifconfig
/sbin/ipmaddr   (replaced by ip maddress)
/sbin/iptunnel
/sbin/mii-tool    (ethtool should appropriately replace it)
/sbin/nameif
/sbin/plipconfig
/sbin/rarp
/sbin/route
/sbin/slattach

If so, and there is just no alternative from iproute2 (that is, the ip or ss commands) that you can use to do the same thing, I want to know how you are using them. We're always watching for things that might be broken by changes; we want to avoid breaking things when possible.

Friday 5 May 2017

Quick and easy network configuration with Netplan

Earlier this week I uploaded netplan 0.21 in artful, with SRUs in progress for the stable releases. There are still lots of features coming up, but it's also already quite useful. You can already use it to describe typical network configurations on desktops and servers, all the way to interesting, complicated setups like a bond over a bridge over multiple VLANs...

Getting started

The simplest netplan configuration might look like this:

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
At boot, netplan will see this configuration (which happens to be installed already on all new systems since 16.10) and generate a single, empty file: /run/NetworkManager/conf.d/10-globally-managed-devices.conf. This tells the system that NetworkManager is the only renderer for network configuration on the system, and will manage all devices by default.
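
You can check that this file is indeed there (and empty) on any recent installation:

ls -l /run/NetworkManager/conf.d/10-globally-managed-devices.conf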

Working from there: a simple server

Let's look at it on a hypothetical web server, such as for my favourite test site: www.perdu.com.

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
This incredibly simple configuration tells the system that the eth0 device is to be brought up using DHCPv4. Netplan also supports DHCPv6, as well as static IPs, setting routes, etc.
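
To try something like this, drop the YAML into a file under /etc/netplan/ (the file name below is just an example); it is read at boot, and recent versions of netplan can also regenerate the backend configuration on demand:

sudo tee /etc/netplan/01-simple-server.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
EOF
# On recent versions of netplan, regenerate the backend configuration without rebooting:
sudo netplan generate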


Building up to something more complex

Let's say I want a team of two NICs, and use them to reach VLAN 108 on my network:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
    eth1:
      mtu: 1280
      dhcp4: no
  bonds:
    bond0:
      interfaces:
      - eth1
      - eth0
      mtu: 9000
  vlans:
    bond0.108:
      link: bond0
      id: 108
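
Once this configuration is active, you can check that the bond and the VLAN devices were created with the expected parameters, using ip with the -d (details) flag:

ip -d link show bond0
ip -d link show bond0.108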

I think you can see just how simple it is to configure even pretty complex networks, all in one file. The beauty in it is that you don't need to worry about what will actually set this up for you.

A choice of backends

Currently, netplan supports either NetworkManager or systemd-networkd as a backend. The default is to use systemd-networkd, but given that it does not support wireless networks, we still rely on NetworkManager to do just that.

This is why you don't need to care what supports your config in the end: netplan abstracts that for you. It generates the required config based on the "renderer" property, so that you don't need to know how to define the special device properties in each backend.

As I mentioned previously, we are still hard at work adding more features, but the core is there: netplan can set up bonds, bridges, vlans, standalone network interfaces, and do so for both static or DHCP addresses. It also supports many of the most common bridge and bond parameters used to tweak the precise behaviour of bonded or bridged devices.


Coming up...

I will be adding proper support for setting a "cloned" MAC on a device. I'm reviewing the code already to do this, and ironing out the last issues.

There are also plans for better handling of administrative states for devices, along with fixes for a few bugs that relate to supporting MaaS, where having a simple configuration style really shines.

I'm really excited for where netplan is going. It seems like it has a lot of potential to address some of the current shortcomings in other tools. I'm also really happy to hear of stories of how it is being used in the wild, so if you use it, don't hesitate to let me know about it!

Contributing

All of the work on netplan happens on Launchpad. Its source code is at https://code.launchpad.net/netplan; we always welcome new contributions.