<h1>Help needed to improve proposed migration</h1>
Hi!<br />
<br />
Every once in a while, in the Foundations team, we do a coding day.
A year ago, Lukasz and I wrote a script, following an idea from Steve Langasek, to provide "hints" and help for the next steps necessary for a package to migrate from -proposed to -release.<br />
<br />
"ubuntu-archive-assistant" was born.
I just pushed it to <b>lp:ubuntu-dev-tools</b>, after it lived on its own in a separate git tree for a long while. I'd love to get feedback, as well as more people contributing fixes, etc.
ubuntu-archive-assistant is designed to let you look at a specific package in -proposed and tell you what to do next to help it migrate to the release pocket.<br />
<br />
This is a great task for new contributors looking for something to work on, say, on the way to getting upload privileges in Ubuntu.<br />
<br />
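If you want to try it out right away, you can grab the ubuntu-dev-tools source and run the script from the tree. A sketch, assuming the lp:ubuntu-dev-tools branch is still where the script lives, and that you have bzr installed:<br />
<br />
<blockquote class="tr_bq">
bzr branch lp:ubuntu-dev-tools<br />
cd ubuntu-dev-tools<br />
./ubuntu-archive-assistant --help</blockquote>
<br />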
Here's how it works (it uses subcommands right now).<br />
<br />
With no options beyond the "proposed" subcommand, it will list packages stuck in -proposed and let you pick one:
<br />
<blockquote>
$ ./ubuntu-archive-assistant proposed<br />
No source package name was provided. The following packages are blocked in proposed:<br />
<br />
(1) gnome-shell-extension-multi-monitors (Age: 338 days)<br />
(2) node-is-glob (Age: 278 days)<br />
(3) node-concat-with-sourcemaps (Age: 264 days)<br />
(4) node-postcss (Age: 231 days)<br />
(5) node-source-map (Age: 229 days)<br />
(6) android-platform-system-core (Age: 226 days)<br />
(7) libdigidocpp (Age: 226 days)<br />
(8) qesteidutil (Age: 225 days)<br />
(9) schleuder (Age: 218 days)<br />
(10) ncbi-blast+ (Age: 216 days)<br />
(11) node-postcss-filter-plugins (Age: 213 days)<br />
(12) node-postcss-load-options (Age: 213 days)<br />
(13) node-postcss-load-plugins (Age: 213 days)<br />
(14) node-postcss-minify-font-values (Age: 213 days)<br />
(15) node-postcss-load-config (Age: 209 days)<br />
(16) live-config (Age: 207 days)<br />
Page -1-. Press any key for next page or Q to select a package.<br />
Which package do you want to look at? 9<br />
Next steps for schleuder 3.2.2-1:<br />
Fix missing builds: amd64<br />
https://launchpad.net/ubuntu/+source/schleuder/3.2.2-1</blockquote>
<br />
If you specify which package you want to look at, it will give you the specifics for that package (examples here are for what is currently in -proposed):
<br />
<blockquote>
$ ./ubuntu-archive-assistant proposed -s qesteidutil<br />
Next steps for qesteidutil 0.3.1-0ubuntu4:<br />
Fix missing builds: amd64, arm64, armhf, i386, ppc64el, s390x<br />
https://launchpad.net/ubuntu/+source/qesteidutil/0.3.1-0ubuntu4</blockquote>
<blockquote class="tr_bq">
$ ./ubuntu-archive-assistant proposed -s android-platform-system-core<br />Next steps for android-platform-system-core 1:7.0.0+r33-2build1:<br /> Fix missing builds: amd64, arm64, armhf, i386<br /> https://launchpad.net/ubuntu/+source/android-platform-system-core/1:7.0.0+r33-2build1 </blockquote>
<br />
You can get even more information about the next steps for a package by enabling <i>--debug</i> or <i>--verbose</i>:
<br />
<blockquote>
<br />
$ ./ubuntu-archive-assistant proposed -s live-config<br />
Next steps for live-config 5.20180224:<br />
Fix unsatisfiable dependencies in live-config:<br />
<br /></blockquote>
<blockquote class="tr_bq">
$ ./ubuntu-archive-assistant proposed --verbose -s live-config<br />live-config is not considered ✘<br />Next steps for live-config 5.20180224:<br /> Fix unsatisfiable dependencies in live-config:<br /> sysvinit-core | sysvinit (<< 2.88dsf-44) can not be satisfied on amd64 ✘<br /> sysvinit-core only exists in Debian ✘<br /> </blockquote>
<blockquote>
$ ./ubuntu-archive-assistant proposed --debug -s live-config<br />
live-config is not considered ✘<br />
Next steps for live-config 5.20180224:<br />
DEBUG&lt;review.proposed.live-config&gt;: reasons: ['depends']<br />
Fix unsatisfiable dependencies in live-config:<br />
sysvinit-core | sysvinit (<< 2.88dsf-44) can not be satisfied on amd64 ✘<br />
sysvinit-core only exists in Debian ✘<br />
DEBUG&lt;review.proposed.live-config.unsatisfiable&gt;: Is this package blacklisted? Should it be synced?</blockquote>
<br />
We've covered some of the common reasons for a package to be stuck in proposed, but there are a ton of others. We'll need help to improve the tooling and make it useful for everyone wishing to work on proposed migration. There's a lot more that can be done, including spending time to parse <b>update_output.txt</b> (or better yet, a YAML representation of it) and testing package installation automatically to figure out what packages need no-change rebuilds, etc. A lot of it is integration of other tools that already exist.<br />
<br />
That's where you come in.<br />
<br />
This is a great way to learn a lot more about what happens to packages after they are uploaded, and about what you can do to ensure your own uploads migrate quickly and become accessible to all Ubuntu users. Many of the improvements can be as simple as contributing a test for one failure case for packages in -proposed.<br />
<br />
More to come about <b>ubuntu-archive-assistant</b>: there are other subcommands in it besides just "proposed". :)<br />
<h1>Building a local testing lab with Ubuntu, MAAS and netplan</h1>
<h2>
Overview</h2>
I'm presenting here the technical aspects of setting up a small-scale testing lab in my basement, using as little hardware as possible, and keeping costs to a minimum. For one thing, systems needed to be mobile if possible, easy to replace, and as flexible as possible to support various testing scenarios. I may wish to bring part of this network with me on short trips to give a talk, for example.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjua-lGuBFm3RFZ0vTa60GHvo-k45jnEZ1ePKLa2knYnXO43E8bglIbZ0c548ZVrjf7uuKxwg8iYt7BnuhQVpSLiGiSOGs3aUREWjyDWEETDc3HcTpsd6XmznWOxCbcBLSqgp8cQF_FZt0/s1600/IMG_20180516_180157.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjua-lGuBFm3RFZ0vTa60GHvo-k45jnEZ1ePKLa2knYnXO43E8bglIbZ0c548ZVrjf7uuKxwg8iYt7BnuhQVpSLiGiSOGs3aUREWjyDWEETDc3HcTpsd6XmznWOxCbcBLSqgp8cQF_FZt0/s320/IMG_20180516_180157.jpg" width="240" /></a></div>
One of the core aspects of this lab is its use of the network. I have former experience with Cisco hardware, so I picked some relatively cheap devices off eBay: a decent layer 3 switch (<i>Cisco C3750</i>, 24 ports, with PoE support in case I'd want to start using that), and a small <i>Cisco ASA 5505</i> to act as a router. The router's configuration is basic, just enough to make sure this lab can be isolated behind a firewall and have an IP on all networks. The switch's config is even simpler, and consists of setting up VLANs for each segment of the lab (different networks for different things). It connects infrastructure (the MAAS server, other systems that just need to always be up) via 802.1q trunks; the servers are configured with IPs on each appropriate VLAN. VLAN 1 is my "normal" home network, so that things will work correctly even when not supporting VLANs (which means VLAN 1 is set to be the native VLAN and to be untagged wherever appropriate). VLAN 10 is "staging", for use with my own custom boot server. VLAN 15 is "sandbox", for use with MAAS. The switch is only powered on when necessary, to save on electricity costs and to avoid hearing its whine (since I work in the same room); this means it is usually powered off, as the ASA already provides many ethernet ports. The telco rack in use was salvaged, and so were most brackets, except for the specialized bracket for the ASA, which was bought separately. The total cost for this setup is estimated at about $500, since everything came from cheap eBay listings or salvaged, reused equipment.<br />
<br />
The Cisco hardware was specifically selected because I had prior experience with it, so I could make sure the features I wanted were supported: VLANs, basic routing, and logs I can make sense of. Any hardware would do; VLANs aren't absolutely required, but a switch with many ports and VLAN support avoids needing multiple physical switches.<br />
<br />
My main DNS / DHCP / boot server is a Raspberry Pi 2. It serves both the home network and the staging network. DNS is set up such that the home network can resolve any names on any of the networks, using <i>home.example.com</i>, <i>staging.example.com</i>, or even <i>maas.example.com</i> as the domain name following the name of the system. Name resolution for the <i>maas.example.com</i> domain is forwarded to the MAAS server. More on all of this later.<br />
<br />
The MAAS server has been set up on an old Thinkpad X230 (my former work laptop); I've been routinely using it (and reinstalling it) for various tests, but that meant reinstalling often, possibly conflicting with other projects if I tried to test more than one thing at a time. It was repurposed to just run Ubuntu 18.04, with a MAAS region and rack controller installed, along with libvirt (qemu) available over the network to remotely start virtual machines. It is connected to both VLAN 10 and VLAN 15.<br />
<br />
Additional testing hardware can be attached to either VLAN 10 or VLAN 15 as appropriate -- the C3750 is configured so "top" ports are in VLAN 10, and "bottom" ports are in VLAN 15, for convenience. The first four ports are configured as trunk ports if necessary. I do use a Dell Vostro V130 and a generic Acer Aspire laptop for testing "on hardware". They are connected to the switch only when needed.<br />
<br />
Finally, "clients" for the lab may be connected anywhere (but are likely to be on the "home" network). They are able to reach the MAAS web UI directly, or can use MAAS CLI or any other features to deploy systems from the MAAS servers' libvirt installation.<br />
<br />
<h2>
Setting up the network hardware</h2>
I will avoid going into the details of the Cisco hardware too much; configuration is specific to this hardware. The ASA has a restrictive firewall that blocks off most things, and allows SSH and HTTP access. Things that need to access the internet go through the MAAS internal proxy.<br />
<br />
For simplicity, the ASA is always <b>.1</b> in any subnet, and the switch is <b>.2</b> when it is required (it was also made accessible over a serial cable from the MAAS server). The Raspberry Pi is always <b>.5</b>, and the MAAS server is always <b>.25</b>. DHCP ranges were designed to reserve anything <i>.25 and below</i> for static assignments on the staging and sandbox networks; since I use a <b>/23</b> subnet for home, half of it is for static assignments, and the other half is for DHCP.<br />
<br />
<h3>
MAAS server hardware setup</h3>
Netplan is used to configure the network on Ubuntu systems. The MAAS server's configuration looks like this:<br />
<br />
<blockquote class="tr_bq">
<pre>network:
  ethernets:
    enp0s25:
      addresses: []
      dhcp4: true
      optional: true
  bridges:
    maasbr0:
      addresses: [ 10.3.99.25/24 ]
      dhcp4: no
      dhcp6: no
      interfaces: [ vlan15 ]
    staging:
      addresses: [ 10.3.98.25/24 ]
      dhcp4: no
      dhcp6: no
      interfaces: [ vlan10 ]
  vlans:
    vlan15:
      dhcp4: no
      dhcp6: no
      accept-ra: no
      id: 15
      link: enp0s25
    vlan10:
      dhcp4: no
      dhcp6: no
      accept-ra: no
      id: 10
      link: enp0s25
  version: 2</pre></blockquote>
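<div>
Once the file is in place under <i>/etc/netplan/</i>, the configuration can be applied without a reboot:</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">sudo netplan apply</span></blockquote>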
<div>
<div>
Both VLANs are behind bridges as to allow setting virtual machines on any network. Additional configuration files were added to define these bridges for libvirt (<i>/etc/libvirt/qemu/networks/maasbr0.xml</i>):</div>
<blockquote class="tr_bq">
<pre>&lt;network&gt;
  &lt;name&gt;maasbr0&lt;/name&gt;
  &lt;bridge name="maasbr0"/&gt;
  &lt;forward mode="bridge"/&gt;
&lt;/network&gt;</pre>
</blockquote>
</div>
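<div>
If you would rather have libvirt manage the network definitions itself, the same XML can be loaded and made persistent with virsh; a sketch, assuming the XML above was saved as <i>maasbr0.xml</i>:</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">virsh net-define maasbr0.xml<br />virsh net-start maasbr0<br />virsh net-autostart maasbr0</span></blockquote>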
<div>
Libvirt also needs to be accessible from the network, so that MAAS can drive it using the "pod" feature. Uncomment "<b>listen_tcp = 1</b>", and set authentication as you see fit, in <i>/etc/libvirt/libvirtd.conf</i>. Also set:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">libvirtd_opts="-l"</span></blockquote>
<div>
<br /></div>
<div>
in <i>/etc/default/libvirtd</i>; then restart the <b>libvirtd</b> service.</div>
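<div>
For example:</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">sudo systemctl restart libvirtd</span></blockquote>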
<div>
<br /></div>
<div>
<br /></div>
<h3>
dnsmasq server</h3>
<div>
The Raspberry Pi has a similar netplan config, but sets up static addresses on all interfaces (since it is the DHCP server). Here, dnsmasq is used to provide DNS, DHCP, and TFTP. The configuration is in multiple files; here are some of the important parts:</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">dhcp-leasefile=/depot/dnsmasq/dnsmasq.leases<br />dhcp-hostsdir=/depot/dnsmasq/reservations<br />dhcp-authoritative<br />dhcp-fqdn<br /># copied from maas, specify boot files per-arch.<br />dhcp-boot=tag:x86_64-efi,bootx64.efi<br />dhcp-boot=tag:i386-pc,pxelinux<br />dhcp-match=set:i386-pc, option:client-arch, 0 #x86-32<br />dhcp-match=set:x86_64-efi, option:client-arch, 7 #EFI x86-64<br /># pass search domains everywhere, it's easier to type short names<br />dhcp-option=119,home.example.com,staging.example.com,maas.example.com<br />domain=example.com<br />no-hosts<br />addn-hosts=/depot/dnsmasq/dns/<br />domain-needed<br />expand-hosts<br />no-resolv<br /># home network<br />domain=home.example.com,10.3.0.0/23<br />auth-zone=home.example.com,10.3.0.0/23<br />dhcp-range=set:home,10.3.1.50,10.3.1.250,255.255.254.0,8h<br /># specify the default gw / next router<br />dhcp-option=tag:home,3,10.3.0.1<br /># define the tftp server<br />dhcp-option=tag:home,66,10.3.0.5<br /># staging is configured as above, but on 10.3.98.0/24.<br /># maas.example.com: "isolated" maas network.<br /># send all DNS requests for X.maas.example.com to 10.3.99.25 (maas server)<br />server=/maas.example.com/10.3.99.25<br /># very basic tftp config<br />enable-tftp<br />tftp-root=/depot/tftp<br />tftp-no-fail<br /># set some "upstream" nameservers for general name resolution.<br />server=8.8.8.8<br />server=8.8.4.4</span></blockquote>
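<div>
Before restarting dnsmasq after a change, it can check the whole configuration for syntax errors (adjust the -C argument if your main config file lives elsewhere):</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">dnsmasq --test -C /etc/dnsmasq.conf</span></blockquote>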
<div>
<br /></div>
<div>
<br /></div>
<div>
DHCP reservations (to avoid IPs changing across reboots for some systems I know I'll want to reach regularly) are kept in <i>/depot/dnsmasq/reservations</i> (as per the above), and look like this:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">de:ad:be:ef:ca:fe,10.3.0.21</span></blockquote>
<div>
<br /></div>
<div>
I put one reservation per file, with meaningful filenames. This helps with debugging and making changes when network cards are swapped, etc. The names used for the files do not match DNS names; instead, they are a short description of the device (such as "thinkpad-x230"), since I may want to rename things later.</div>
<div>
<br /></div>
<div>
Similarly, files in <i>/depot/dnsmasq/dns</i> have names describing the hardware, but then contain entries in hosts file form:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">10.3.0.21<span style="white-space: pre;"> </span>izanagi</span></blockquote>
<div>
<br /></div>
<div>
Again, this is used so any rename of a device only requires changing the content of a single file in <i>/depot/dnsmasq/dns</i>, rather than also requiring renaming other files, or matching MAC addresses to make sure the right change is made.</div>
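<div>
After editing these hosts files, dnsmasq can be told to re-read them without a full restart; a SIGHUP makes it reload /etc/hosts and the addn-hosts files (changes to dhcp-range and the like still need a restart):</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">sudo pkill -HUP dnsmasq</span></blockquote>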
<div>
<br /></div>
<div>
<br /></div>
<h2>
Installing MAAS</h2>
<div>
At this point, the configuration for the networking should already be completed, and libvirt should be ready and accessible from the network.</div>
<div>
<br /></div>
<div>
The MAAS installation process is very straightforward. Simply install the <b>maas</b> package, which will pull in <b>maas-rack-controller</b> and <b>maas-region-controller</b>.</div>
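<div>
In other words (maas createadmin then prompts for the details of the first admin user):</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">sudo apt install maas<br />sudo maas createadmin</span></blockquote>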
<div>
<br /></div>
<div>
Once the configuration is complete, you can log in to the web interface. Use it to make sure, under Subnets, that only the MAAS-driven VLAN has DHCP enabled. To enable or disable DHCP, click the link in the VLAN column, and use the "Take action" menu to provide or disable DHCP.</div>
<div>
<br /></div>
<div>
This is necessary if you do not want MAAS to fully manage all of the network and provide DNS and DHCP for all systems. In my case, I am leaving MAAS in its own isolated network, since I keep the server offline when I do not need it (and the home network needs to keep working while I'm away).</div>
<div>
<br /></div>
<div>
Some extra modifications were made to the stock MAAS configuration to change the behavior of deployed systems. For example, I often test packages in -proposed, so it is convenient to have that archive enabled by default, but pinned to avoid accidentally installing packages from it. Given that I also do netplan development and might try things that would break network connectivity, I also make sure there is a static password for the 'ubuntu' user, and that I have my own account created (again, with a static, known, and stupidly simple password) so I can connect to the deployed systems on their console. I have added the following to <i>/etc/maas/preseed/curtin_userdata</i>:</div>
<div>
<br /></div>
<div>
<br /></div>
<blockquote class="tr_bq">
<pre>late_commands:
[...]
  pinning_00: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Package: *' >> /etc/apt/preferences.d/proposed"]
  pinning_01: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin: release a={{release}}-proposed' >> /etc/apt/preferences.d/proposed"]
  pinning_02: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/proposed"]
apt:
  sources:
    proposed.list:
      source: deb $MIRROR {{release}}-proposed main universe
write_files:
  userconfig:
    path: /etc/cloud/cloud.cfg.d/99-users.cfg
    content: |
      system_info:
        default_user:
          lock_passwd: False
          plain_text_passwd: [REDACTED]
      users:
        - default
        - name: mtrudel
          groups: sudo
          gecos: Matt
          shell: /bin/bash
          lock-passwd: False
          passwd: [REDACTED]</pre></blockquote>
<div>
<br /></div>
<div>
<br /></div>
<div>
The <i>pinning_</i> entries are simply added to the end of the "late_commands" section.</div>
<div>
<br /></div>
<div>
For the libvirt instance, you will need to add it to MAAS using the maas CLI tool. For this, you will need to get your MAAS API key from the web UI (click your username, then look under MAAS keys), and run the following commands:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">maas login local http://localhost:5240/MAAS/ [your MAAS API key]<br />maas local pods create type=virsh power_address="qemu+tcp://127.0.1.1/system"</span></blockquote>
<div>
<br /></div>
<div>
The pod will be given a name automatically; you'll then be able to use the web interface to "compose" new machines and control them via MAAS. If you want to remotely use the systems' Spice graphical console, you may need to change settings for the VM to allow Spice connections on all interfaces, and power it off and on again.</div>
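<div>
Composing can also be done from the CLI. A sketch, assuming the pod was given id 1 (you can check with "maas local pods read") and that your MAAS version supports these parameters:</div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">maas local pod compose 1 cores=2 memory=4096</span></blockquote>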
<div>
<br /></div>
<div>
<br /></div>
<h2>
Setting up the client</h2>
<div>
<div>
Deployed hosts are now reachable normally over SSH by using their fully-qualified name, and specifying to use the <b>ubuntu</b> user (or another user you already configured):</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">ssh ubuntu@vocal-toad.maas.example.com</span></blockquote>
</div>
<div>
<br /></div>
<div>
There is an inconvenience with using MAAS to control virtual machines like this: since they are easy to reinstall, their host keys will change frequently if you access them via SSH. There's a way around that, using a specially crafted ssh_config (<i>~/.ssh/config</i>). Here, I'm sharing the relevant parts of the configuration file I use:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">CanonicalDomains home.example.com<br />CanonicalizeHostname yes<br />CanonicalizeFallbackLocal no<br />HashKnownHosts no<br />UseRoaming no<br /># canonicalize* options seem to break github for some reason<br /># I haven't spent much time looking into it, so let's make sure it will go through the<br /># DNS resolution logic in SSH correctly.<br />Host github.com<br /> Hostname github.com.<br />Host *.maas<br /> Hostname %h.example.com<br />Host *.staging<br /> Hostname %h.example.com<br />Host *.maas.example.com<br /> User ubuntu<br /> StrictHostKeyChecking no<br /> UserKnownHostsFile /dev/null<br /><br />Host *.staging.example.com<br /> StrictHostKeyChecking no<br /> UserKnownHostsFile /dev/null<br />Host *.lxd<br /> StrictHostKeyChecking no<br /> UserKnownHostsFile /dev/null<br /> ProxyCommand nc $(lxc list -c s4 $(basename %h .lxd) | grep RUNNING | cut -d' ' -f4) %p<br />Host *.libvirt<br /> StrictHostKeyChecking no<br /> UserKnownHostsFile /dev/null<br /> ProxyCommand nc $(virsh domifaddr $(basename %h .libvirt) | grep ipv4 | sed 's/.* //; s,/.*,,') %p</span></blockquote>
<div>
<br /></div>
<div>
As a bonus, I have included some code that makes it easy to SSH to local libvirt systems or lxd containers.</div>
<div>
<br /></div>
<div>
The net effect is that I can avoid having the warnings about changed hashes for MAAS-controlled systems and machines in the staging network, but keep getting them for all other systems.</div>
<div>
<br /></div>
<div>
<div>
Now, this means that to reach a host on the MAAS network, a client system only needs to use the short name with .maas tacked on:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">vocal-toad.maas</span></blockquote>
</div>
<div>
And the system will be reachable, without any warning about known host hashes. Do note that this is specific to a sandbox environment: you <i>definitely</i> want to see such warnings in a production environment, as they can indicate that the system you are connecting to might not be the one you think.</div>
<div>
<br /></div>
<div>
<div>
It's not bad, but the goal would be to use just the short names. I am working around this using a tiny script:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">#!/bin/sh<br /># usage: sandbox SHORTNAME<br />exec ssh "$1.maas"</span></blockquote>
<div>
<br /></div>
<div>
I saved this as "<i>sandbox</i>" in ~/bin and made it executable.
</div>
<div>
<br /></div>
<div>
And with this, the lab is ready.</div>
<div>
<br /></div>
<h2>
Usage</h2>
<div>
To connect to a deployed system, one can now do the following:</div>
<div>
<br /></div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">$ sandbox vocal-toad<br />Warning: Permanently added 'vocal-toad.maas.example.com,10.3.99.12' (ECDSA) to the list of known hosts.<br />Welcome to Ubuntu Cosmic Cuttlefish (development branch) (GNU/Linux 4.15.0-21-generic x86_64)<br />[...]<br />ubuntu@vocal-toad:~$<br />ubuntu@vocal-toad:~$ id mtrudel<br />uid=1000(mtrudel) gid=1000(mtrudel) groups=1000(mtrudel),27(sudo)</span></blockquote>
<div>
<br /></div>
<h2>
Mobility</h2>
<div>
One important point for me was the mobility of the lab. While some of the network infrastructure must remain in place, I am able to undock the Thinkpad X230 (the MAAS server), and connect it via wireless to an external network. It will continue to "manage" or otherwise control VLAN 15 on the wired interface. In these cases, I bring another small configurable switch: a Cisco Catalyst 2960 (8 ports + 1), which is set up with the VLANs. A client could then be connected directly on VLAN 15 behind the MAAS server, and is free to make use of the MAAS proxy service to reach the internet. This allows me to bring the MAAS server along with all its virtual machines, as well as to be able to deploy new systems by connecting them to the switch. Both systems fit easily in a standard laptop bag along with another laptop (a "client").</div>
<div>
<br /></div>
<div>
All the systems used in the "semi-permanent" form of this lab can easily run on a single home power outlet, so power issues are unlikely to arise in mobile form. The smaller switch is rated for 0.5 A, and two laptops do not pull very much power.</div>
<div>
<br /></div>
<h2>
Next steps</h2>
<div>
One of the issues that remains with this setup is that it is limited to either starting MAAS images, or starting custom-built images hooked up to the Raspberry Pi, which makes integrating new images a high-effort task:</div>
<div>
<ul>
<li>Custom (desktop?) images could be loaded into MAAS, to facilitate starting a desktop build.</li>
<li>Automate customizing installed packages based on tags applied to the machines.</li>
<ul>
<li>juju would shine there; it can deploy workloads based on available machines in MAAS with the specified tags.</li>
<li>Also install a generic system with customized packages, not necessarily single workloads, and/or install extra packages after the initial system deployment.</li>
<ul>
<li>This could be done using chef or puppet, but will require setting up the infrastructure for it.</li>
</ul>
<li>Integrate automatic installation of snaps.</li>
</ul>
<li>Load new images into the Raspberry Pi automatically for netboot / preseeded installs</li>
<ul>
<li>I have scripts for this, but they will take time to adapt</li>
<li>Space on such a device is at a premium, so old images must be culled regularly</li>
</ul>
</ul>
</div>
<h1>Call for testing: netplan.io in 18.04</h1>
Since 17.10, netplan has been the default network configuration tool in Ubuntu. Since then, it has gained features and bug fixes, and even got its package renamed in the archive from "nplan" to netplan.io. We added better routing, improved handling for bridges, support for marking devices as "optional" for boot (so that the system doesn't wait for them to come up at boot time), lots of documentation updates... There's even been work to get it building for other distros.<br />
<br />
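As a quick taste of the syntax, marking a device as "optional" is a one-line addition to its stanza (a minimal sketch; the interface name is just an example):<br />
<blockquote class="tr_bq">
<pre>network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: true
      optional: true</pre>
</blockquote>
<br />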
<br />
We have a website for it, too: <a href="http://netplan.io/">netplan.io</a><br />
<br />
<br />
As we get closer to the release of Ubuntu 18.04, it is past due to involve everyone in testing netplan and making sure it is solid and as featureful as possible for a wide range of use cases.<br />
<br />
<br />
This is where you get to participate. <br />
<br />
<br />
Let us know about any feature gaps that remain in what netplan supports, so that we can add the missing features where possible, or properly document the gaps if they can't be closed by release time.<br />
<br />
<br />
Report any bugs you find in <a href="https://bugs.launchpad.net/netplan/+filebug">netplan on Launchpad</a>.<br />
<br />
<br />
If you are unsure whether something is a bug, it might well be, so it doesn't hurt to file a bug. At the very least, we do want to know if something feels really difficult to do, so we can look into improving the experience.<br />
<br />
<br />
If you're unsure how to do something, you can look up questions and answers, or add your own, on AskUbuntu:<br />
<a href="https://askubuntu.com/questions/tagged/netplan">https://askubuntu.com/questions/tagged/netplan</a><br />
<br />
<br />
Netplan is being actively developed and we can use your help; so if there's one feature you care deeply about, or a bug that bugs you and you want to have a hand in fixing it, you can also jump right in to the code on GitHub: <a href="http://github.com/CanonicalLtd/netplan">http://github.com/CanonicalLtd/netplan</a>
<h1>Backing up GPG keys</h1>
Using PGP/GPG keys for a long period of time (with keys expiring, or expiration dates being extended), combined with the potential for travel, hardware failure, or life's other events, means that <i>eventually</i> rather than <i>potentially</i>, you will end up in a situation where a key is lost or damaged, or where you otherwise need to proceed with <b>some</b> disaster recovery technique.<br />
<br />
These techniques could be as simple as forgetting about the key altogether and letting it live forever on the Internet, unused. It could also be that you were clever and saved a revocation certificate somewhere different from where your private key is backed up; but what if you didn't?<br />
<br />
What if you did not print the revocation certificate? Or you just really don't feel very much like re-typing half a gazillion characters?<br />
<br />
I wouldn't wish it on anyone, but there will always be a risk that your "backup options" fail; so I'm sharing here my personal backup methods.<br />
<br />
I back up my GPG keys, which I use both at and outside of work, on multiple different media:<br />
<br />
<br />
<ul>
<li>"Daily use" happens using a Yubikey that securely holds the private part of the keys (it can't be extracted from the smartcard), as well as the public part. I've already written about this two years ago, <a href="http://blog.cyphermox.net/2016/01/in-full-tinfoil-hat-mode-using-gpg-with.html">on this blog</a>.</li>
<li>The first layer of backup is on a LUKS-encrypted USB key. The USB key must obviously be encrypted to block out most attempts at accessing its contents; and it is a key that I usually carry on my person at all times, like the Yubikeys. I also use it to back up other files I can't live without, such as a password vault, some other certificates, copies of ID documents in case of loss when I travel, etc.</li>
<li>The next layer is on <b>paper</b>. Well, cardstock actually, to discourage folding it. This is the process I want to dig deeper into here.</li>
</ul>
<div>
<br /></div>
<div>
It turns out that backing up secret keys on paper is pretty straightforward, and a perfectly fine thing to do. You will obviously want to keep the paper copies in a secure location that only you have access to, as safe from fire as possible (or at least somewhere unlikely to burn down at the same time as you'd lose the other backups).</div>
<div>
<br /></div>
<div>
<b>paperkey</b> is a generally accepted way of saving the private part of your GPG key. It does a decent job at saving things in a printable form, from which point you would go ahead and re-type, or use OCR to recover the text generated by <i>paperkey:</i></div>
<div>
</div>
<br />
<blockquote class="tr_bq">
paperkey --secret-key secret.gpg --output printme.txt</blockquote>
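<br />
The <i>secret.gpg</i> input here is just an export of your secret key, which you can produce with gpg (0xDEADBEEF being a placeholder for your own key ID):<br />
<blockquote class="tr_bq">
gpg --export-secret-key 0xDEADBEEF > secret.gpg</blockquote>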
<br />
This retains the same security properties as your original key. You should have added a passphrase to it anyway, so even if the paper copy were found and used to recover the key, you would be protected by the complexity of your passphrase.<br />
<br />
But this depends on OCR working correctly, especially on an aging medium such as paper, or on you spending many hours re-typing the contents, and potentially tracking down typos. There's error correction, but that sounds like no fun at all. When you want to recover your key, presumably it is because you really do need it as soon as possible.<br />
<br />
Back in 2015 when I generated my latest keys, I found a blog post that explained how to use <a href="https://en.wikipedia.org/wiki/QR_code">QR codes</a> to back up data. QR codes have the benefit of being very resilient to corruption, and above all, do not require typing. They are, however, limited in size: at most 177x177 squares, for about 1200 characters of storage.<br />
<br />
Along with that blog post, I also found out about <a href="https://en.wikipedia.org/wiki/Data_Matrix">DataMatrix</a> codes (which are quite similar to QR codes), where each symbol can hold a bit more data (about 1500 bytes per image at the biggest size). Pick the format you prefer; I picked DataMatrix. Simply adjust the size you split to in the commands below.<br />
<br />
One might wish to save the paperkey or the private key directly (obviously, saving the private key might mean more chunks to print), and that can be done using the programs in <b>dmtx-utils</b>:<br />
<blockquote class="tr_bq">
cat printme.txt | split -b 1500 - part-
<br />
rm printme.txt
<br />
for part in part-*; do<br />
dmtxwrite -e 8 ${part} > ${part}.png<br />
done </blockquote>
<br />
You will be left with multiple parts of the file you originally split (without a file extension), as well as a corresponding image in PNG format that can be printed, and later scanned, to recover the original.<br />
<br />
Keep these in a safe location and your key should be recoverable years down the line. It's not a bad idea to "pretend" there's a catastrophe and attempt to recover your key every few months, just to be sure you can go through the steps easily and that the paper keys are in good shape.<br />
<br />
Recovery is simple:<br />
<br />
<blockquote class="tr_bq">
for file in *.png; do dmtxread $file >> printme.txt; done</blockquote>
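<br />
To turn the recovered text back into a usable secret key, feed it to paperkey along with your public key (assuming the file names used above, and your public key exported to <i>pubring.gpg</i>):<br />
<blockquote class="tr_bq">
paperkey --pubring pubring.gpg --secrets printme.txt --output secret.gpg<br />
gpg --import secret.gpg</blockquote>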
<br />
If all went well, the original and recovered files should be identical, and you just avoided a couple of hours of typing.<br />
<br />
Stay safe!
<h1>How to sign things for Secure Boot</h1>
<h2 class="tr_bq">
Secure Boot signing</h2>
<div>
<br /></div>
<div>
The whole concept of Secure Boot requires that there exists a trust chain, from the very first thing loaded by the hardware (the firmware code), all the way through to the last things loaded by the operating system as part of the kernel: the modules. In other words, it's not just the firmware and bootloader that require signatures; the kernel and modules do too. People don't generally change firmware or bootloader all that much, but what about rebuilding a kernel, or adding extra modules provided by hardware manufacturers?</div>
<div>
<br /></div>
<div>
The Secure Boot story in Ubuntu includes the fact that you might want to build your own kernel (but we do hope you can just use the generic kernel we ship in the archive), and that you may install your own kernel modules. This means signing UEFI binaries and the kernel modules, which can be done with its own set of tools.</div>
<div>
<br /></div>
<div>
But first, more on the trust chain used for Secure Boot.</div>
<div>
<br />
<br /></div>
<h2>
Certificates in shim</h2>
<div>
<br /></div>
<div>
To begin signing things for UEFI Secure Boot, you need to create an X.509 certificate that can be imported in firmware; either directly through the manufacturer firmware, or more easily, by way of <i>shim</i>.</div>
<div>
<br /></div>
<div>
Creating a certificate for use in UEFI Secure Boot is relatively simple: <i>openssl</i> can do it with a few commands. We need to create an SSL certificate for module signing...</div>
<div>
<br /></div>
<div>
First, let's create some config to let openssl know what we want to create (let's call it 'openssl.cnf'):</div>
<blockquote>
<br />
# This definition stops the following lines choking if HOME isn't<br />
# defined.<br />
HOME = .<br />
RANDFILE = $ENV::HOME/.rnd </blockquote>
<blockquote>
[ req ]<br />
distinguished_name = req_distinguished_name<br />
x509_extensions = v3<br />
string_mask = utf8only<br />
prompt = no<br />
<br />
[ req_distinguished_name ]<br />
countryName = CA<br />
stateOrProvinceName = Quebec<br />
localityName = Montreal<br />
0.organizationName = cyphermox<br />
commonName = Secure Boot Signing<br />
emailAddress = example@example.com<br />
<br />
[ v3 ]<br />
subjectKeyIdentifier = hash<br />
authorityKeyIdentifier = keyid:always,issuer<br />
basicConstraints = critical,CA:FALSE<br />
extendedKeyUsage = codeSigning,1.3.6.1.4.1.311.10.3.6,1.3.6.1.4.1.2312.16.1.2<br />
nsComment = "OpenSSL Generated Certificate"</blockquote>
Either update the values under "[ req_distinguished_name ]", or get rid of that section altogether (along with the "distinguished_name" field) and remove the "prompt" field; openssl will then ask you for the values you want to set for the certificate identification.<br />
<br />
The identification itself does not matter much, but some of the later values are important: for example, we do want to make sure "1.3.6.1.4.1.2312.16.1.2" is included in extendedKeyUsage, and it is that OID that will tell shim this is meant to be a module signing certificate.<br />
<br />
Then, we can start the fun part: creating the private and public keys.<br />
<br />
<blockquote class="tr_bq">
openssl req -config ./openssl.cnf \<br />
-new -x509 -newkey rsa:2048 \<br />
-nodes -days 36500 -outform DER \<br />
-keyout "MOK.priv" \<br />
-out "MOK.der"</blockquote>
This command will create both the private and public part of the certificate to sign things. You need both files to sign; and just the public part (<i>MOK.der</i>) to enroll the key in shim.<br />
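<br />
You can double-check the generated certificate, including the extendedKeyUsage OID discussed above, with openssl:<br />
<blockquote class="tr_bq">
openssl x509 -in MOK.der -inform DER -text -noout</blockquote>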
<br />
<br />
<h2>
Enrolling the key</h2>
<div>
<br /></div>
<div>
Now, let's enroll that key we just created in shim. That makes it so it will be accepted as a valid signing key for any module the kernel wants to load, as well as a valid key should you want to build your own bootloader or kernels (provided that you don't include that '1.3.6.1.4.1.2312.16.1.2' OID discussed earlier).</div>
<div>
<br /></div>
<div>
To enroll a key, use the <i>mokutil</i> command:</div>
<blockquote class="tr_bq">
sudo mokutil --import MOK.der</blockquote>
Follow the prompts to enter a password that will be used to make sure you really do want to enroll the key in a minute.<br />
<br />
Once this is done, reboot. Just before loading GRUB, shim will show a blue screen (which is actually another piece of the shim project called "MokManager"). Use that screen to select "Enroll MOK" and follow the menus to finish the enrolling process. You can also look at some of the properties of the key you're trying to add, just to make sure it's indeed the right one, using "View key". MokManager will ask you for the password we typed in earlier when running <i>mokutil</i>; it will then save the key, and we'll reboot again.<br />
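<br />
Before that reboot, you can confirm the enrollment request is queued with mokutil; and once the system is back up, the same tool can check that the key is now enrolled:<br />
<blockquote class="tr_bq">
mokutil --list-new<br />
mokutil --test-key MOK.der</blockquote>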
<br />
<br />
<h2>
Let's sign things</h2>
<div>
<br /></div>
<div>
Before we sign, let's make sure the key we added really is seen by the kernel. To do this, we can go look at <b>/proc/keys</b>:</div>
<div>
<br /></div>
<div>
<blockquote class="tr_bq">
$ sudo cat /proc/keys<br />
0020f22a I--Q--- 1 perm 0b0b0000 0 0 user invocation_id: 16<br />
0022a089 I------ 2 perm 1f0b0000 0 0 keyring .builtin_trusted_keys: 1<br />
003462c9 I--Q--- 2 perm 3f030000 0 0 keyring _ses: 1<br />
00709f1c I--Q--- 1 perm 0b0b0000 0 0 user invocation_id: 16<br />
00f488cc I--Q--- 2 perm 3f030000 0 0 keyring _ses: 1<br />
[...]<br />
1dcb85e2 I------ 1 perm 1f030000 0 0 asymmetri Build time autogenerated kernel key: eae8fa5ee6c91603c031c81226b2df4b135df7d2: X509.rsa 135df7d2 []<br />
[...]</blockquote>
</div>
<br />
Just make sure a key exists there with the attributes (commonName, etc.) you entered earlier.<br />
<br />
<div>
To sign kernel modules, we can use the <i>kmodsign</i> command:</div>
<blockquote class="tr_bq">
kmodsign sha512 MOK.priv MOK.der module.ko</blockquote>
<b>module.ko</b> should be the file name of the kernel module you want to sign. The signature will be appended to it by <i>kmodsign</i>, but if you would rather keep the signature separate and concatenate it to the module yourself, you can do that too (see '<i>kmodsign --help</i>').<br />
<br />
You can validate that the module is signed by checking that it includes the string '~Module signature appended~':<br />
<br />
<blockquote class="tr_bq">
$ hexdump -Cv module.ko | tail -n 5<br />
00002c20 10 14 08 cd eb 67 a8 3d ac 82 e1 1d 46 b5 5c 91 |.....g.=....F.\.|<br />
00002c30 9c cb 47 f7 c9 77 00 00 02 00 00 00 00 00 00 00 |..G..w..........|<br />
00002c40 02 9e 7e 4d 6f 64 75 6c 65 20 73 69 67 6e 61 74 |..~Module signat|<br />
00002c50 75 72 65 20 61 70 70 65 6e 64 65 64 7e 0a |ure appended~.|<br />
00002c5e</blockquote>
<div>
<br /></div>
<div>
You can also use <i>hexdump</i> this way to check that the signing key is the one you created.</div>
<div>
<br /></div>
<div>
<br /></div>
<h2>
What about kernels and bootloaders?</h2>
<div>
<br /></div>
<div>
To sign a custom kernel or any other EFI binary you want to have loaded by shim, you'll need to use a different command: <i>sbsign</i>. Unfortunately, we'll need the certificate in a different format in this case.</div>
<div>
<br /></div>
<div>
Let's convert the certificate we created earlier into PEM:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
openssl x509 -in MOK.der -inform DER -outform PEM -out MOK.pem</blockquote>
<div>
<br /></div>
<div>
Now, we can use this to sign our EFI binary:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
sbsign --key MOK.priv --cert MOK.pem my_binary.efi --output my_binary.efi.signed</blockquote>
<div>
As long as the signing key is enrolled in shim and does not contain the OID from earlier (since that limits the use of the key to kernel module signing), the binary should be loaded just fine by shim.</div>
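<div>
<br /></div>
<div>
You can verify the signature on the result with <i>sbverify</i>, which ships in the same sbsigntool package as <i>sbsign</i>:</div>
<blockquote class="tr_bq">
sbverify --cert MOK.pem my_binary.efi.signed</blockquote>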
<div>
<br /></div>
<div>
<br /></div>
<h2>
Doing signatures outside shim</h2>
<div>
<br /></div>
<div>
If you don't want to use shim to handle keys (but I do recommend that you use it), you will need to create different certificates; one of them being the PK (Platform Key) for the system, which you can enroll in firmware directly via KeyTool or some firmware tool provided with your system. I will not elaborate on the steps to enroll the keys in firmware, as they tend to vary from system to system, but the main idea is to put the system in Secure Boot "Setup Mode"; run KeyTool (which is its own EFI binary you can build yourself and run), and enroll the keys: first the KEK and DB keys, finishing with the PK. These files need to be available from some FAT partition.</div>
<div>
<br /></div>
<div>
I do have a script to generate the right certificates and files, which I can share (it is itself copied from somewhere I can't remember):</div>
<div>
<br /></div>
<div>
<blockquote>
#!/bin/bash<br />
echo -n "Enter a Common Name to embed in the keys: "<br />
read NAME<br />
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME PK/" -keyout PK.key \<br />
-out PK.crt -days 3650 -nodes -sha256<br />
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME KEK/" -keyout KEK.key \<br />
-out KEK.crt -days 3650 -nodes -sha256<br />
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME DB/" -keyout DB.key \<br />
-out DB.crt -days 3650 -nodes -sha256<br />
openssl x509 -in PK.crt -out PK.cer -outform DER<br />
openssl x509 -in KEK.crt -out KEK.cer -outform DER<br />
openssl x509 -in DB.crt -out DB.cer -outform DER<br />
GUID=`python -c 'import uuid; print str(uuid.uuid1())'`<br />
echo $GUID > myGUID.txt<br />
cert-to-efi-sig-list -g $GUID PK.crt PK.esl<br />
cert-to-efi-sig-list -g $GUID KEK.crt KEK.esl<br />
cert-to-efi-sig-list -g $GUID DB.crt DB.esl<br />
rm -f noPK.esl<br />
touch noPK.esl<br />
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \<br />
-k PK.key -c PK.crt PK PK.esl PK.auth<br />
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \<br />
-k PK.key -c PK.crt PK noPK.esl noPK.auth<br />
chmod 0600 *.key<br />
echo ""<br />
echo ""<br />
echo "For use with KeyTool, copy the *.auth and *.esl files to a FAT USB"<br />
echo "flash drive or to your EFI System Partition (ESP)."<br />
echo "For use with most UEFIs' built-in key managers, copy the *.cer files."<br />
echo ""</blockquote>
</div>
<div>
The same logic as earlier applies: sign things using <i>sbsign</i> or <i>kmodsign</i> as required (use the .crt files with <i>sbsign</i>, and .cer files with <i>kmodsign</i>); and as long as the keys are properly enrolled in the firmware or in shim, they will be successfully loaded.</div>
<div>
<br /></div>
<div>
<br /></div>
<h2>
What's coming up for Secure Boot in Ubuntu</h2>
<div>
<br /></div>
<div>
Signing things is complex: you need to create SSL certificates, and enroll them in firmware or shim... You need a fair amount of prior knowledge of how Secure Boot works, and of what commands to use. It's rather obvious that this isn't within reach of everybody, and it makes for a somewhat poor experience in the first place. For that reason, we're working on making key creation, enrollment and signing easier when installing DKMS modules.</div>
<div>
<br /></div>
<div>
<b>update-secureboot-policy</b> should soon let you generate and enroll a key; and DKMS will be able to sign things by itself using that key.</div>
<h1>Netplan by default in 17.10</h1>
Friday, I uploaded an updated <b>nplan</b> package (version 0.24) to change its Priority: field to <i>important</i>, as well as an update of <b>ubuntu-meta</b> (following a seeds update) to replace ifupdown with nplan in the <i>minimal</i> seed.<br /><br />What this means concretely is that nplan should now be installed by default on all images, as part of <b>ubuntu-minimal</b>, with ifupdown dropped at the same time.<br /><br />For the time being, ifupdown is still installed by default due to the way <b>debootstrap</b> generates the very minimal images used as a base for other images: its base set of packages depends only on the Priority: field of packages. Thus, nplan was added, but ifupdown still needs to be changed (which I will do shortly) to disappear from all images.<br /><br />The intent is that nplan is now the standard way of configuring networks. I've also sent an email about this to <i>ubuntu-devel-announce@</i>.<br /><br />I've already written a bit about what netplan is and does, and I have still more to write on the subject (discussing syntax and how to do common things). We especially like how using a purely declarative syntax makes things easier for everyone (and if you can't do what you want that way, then it's a bug you should report).<br /><br /><a href="http://bazaar.launchpad.net/~maas-committers/maas/trunk/view/head:/src/maasserver/preseed_network.py">MaaS</a>, <a href="https://git.launchpad.net/cloud-init/tree/cloudinit/net/netplan.py">cloud-init</a> and <a href="https://github.com/CanonicalLtd/subiquity/blob/master/subiquitycore/models/network.py">others</a> have already started to support writing netplan configuration.<br /><br />The full specification (summary wiki page and a blueprint reachable from it) for the migration process is available <a href="https://wiki.ubuntu.com/MigratingToNetplan">here</a>.<br /><br />While I get to writing something comprehensive about how to use the netplan YAML to configure networks, if you want to know more there's always the manpage, which is the easiest documentation to use. It should always be up to date with the current version of netplan available on your release (since we <a href="https://launchpad.net/ubuntu/+source/nplan">backported</a> the latest version to Xenial, Yakkety, and Zesty), and accessible via:<br /><br /><blockquote class="tr_bq">
<i><b>man 5 netplan</b></i></blockquote>
<br />To make things "easy" however, you can also check out the netplan documentation directly from the source tree here:<div>
<br /><a href="https://git.launchpad.net/netplan/tree/doc/netplan.md">https://git.launchpad.net/netplan/tree/doc/netplan.md</a><br /><br />There's also a <a href="https://wiki.ubuntu.com/Netplan">wiki page</a> I started preparing, which links to the most useful things, such as an overview of the design of netplan, some discussion of the renderers we support, and some of the commands that can be used.<br /><br />We even have an IRC channel on Freenode: <b>#netplan</b><br /><br />I think you'll find that using netplan makes configuring networks easy and even enjoyable; but if you run into an issue, be sure to file a bug on Launchpad here:<div>
<br /></div>
<div>
<a href="http://launchpad.net/ubuntu/+source/nplan">https://bugs.launchpad.net/ubuntu/+source/nplan/+filebug</a></div>
</div>
<h1>An overview of UEFI Secure Boot on Ubuntu</h1>
<h2>
Secure Boot is here</h2>
Ubuntu has now supported UEFI booting and Secure Boot for long enough that it is available, and reasonably up to date, on all supported releases. Here is how Secure Boot works.<br />
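<br />
If you want to check whether your own system booted with Secure Boot enabled, mokutil can tell you:<br />
<blockquote class="tr_bq">
mokutil --sb-state</blockquote>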
<br />
<h2>
An overview</h2>
I'm including a diagram here; I know it's a little complicated, so I will also explain how things happen (it can be clicked to get to the full size image).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://people.canonical.com/~mtrudel/secureboot/secureboot.png"><img border="0" data-original-height="968" data-original-width="1197" height="322" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvaqDjtbykmG3wQxXeCC5RDXnDsbBLcF3Kh4S38_5iYkUm8e4ltDEZDi0VyWbNuaa-U_mGeSSe5CxbW7kx1Oz43Bb9-aKYizV8I73oQkczrCNNM2BVTAWBl6IZ5nSsTDxZOxTc5tO7P7c/s400/secureboot.png" width="400" /></a></div>
<br />
In all cases, booting a system in UEFI mode loads UEFI firmware, which typically contains pre-loaded keys (at least, on x86). These keys are usually those from Microsoft, so that Windows can load its own bootloader and verify it, as well as those from the computer manufacturer. The firmware doesn't, by itself, know anything special about how to boot the system; this is something that is informed by NVRAM (or some similar memory that survives a reboot) by way of a few variables: <i>BootOrder</i>, which specifies what <b>order</b> to boot things in, as well as <i>BootEntry####</i> (hex numbers), which contain the path to the EFI image to load, a disk, or some other method of starting the computer (such as booting into the Setup tool for that firmware). If no <i>BootEntry</i> variable listed in <i>BootOrder</i> gets the system booting, then nothing happens. Systems however will usually at least include a path to a disk as a permanent or default <i>BootEntry</i>. Shim relies on that, or on a distro, to load in the first place.<br />
<br />
Once we actually find <i>shim</i> to boot, it will try to validate signatures of the next piece in the puzzle: <i>grub2</i>, <i>MokManager</i>, or <i>fallback</i>, depending on the state of <i>shim</i>'s own variables in NVRAM; more on this later.<br />
<br />
In the usual scenario, <i>shim</i> will validate the <i>grub2</i> image successfully, then <i>grub2</i> itself will try to load the kernel or chainload another EFI binary, after attempting to validate the signatures on these images by way of asking <i>shim</i> to check the signature.<br />
<br />
<h2>
Shim</h2>
<div>
Shim is just a very simple layer that holds on to keys outside of those installed by default on the system (since those normally can't be changed outside of Setup Mode, and require a few steps to do so). It knows how to load <i>grub2</i> in the normal case, how to load <i>MokManager</i> if policy changes need to be applied (such as disabling signature validation or adding new keys), and how to load the <i>fallback</i> binary, which can re-create <i>BootEntry</i> variables in case the firmware isn't able to handle them. I will expand on <i>MokManager</i> and <i>fallback</i> in a future blog post.</div>
<div>
<br /></div>
<h3>
Your diagram says shim is signed by Microsoft, what's up with that?</h3>
<div>
Indeed, <i>shim</i> is an EFI binary that is signed by Microsoft, as we ship it in Ubuntu. Other distributions do the same. This is required because the firmware on most systems already contains Microsoft certificates (pre-loaded in the factory), and it would be impractical to have a different <i>shim</i> for each manufacturer of hardware. All EFI binaries can be easily re-signed anyway; we just do things like this to make it as easy as possible for the largest number of people.</div>
<div>
<br /></div>
<div>
One thing this means is that uploads of shim require a lot of effort and testing. Fortunately, since it is used by other distributions too, it is a well-tested piece of code. There is even now a community process to handle review of submissions for signature by Microsoft, in an effort to catch anything outlandish as quickly and as early as possible.</div>
<div>
<br /></div>
<h3>
Why reboot once a policy change is made or boot entries are rebuilt?</h3>
<div>
All of this happens through changes in firmware variables. Rebooting makes sure we can properly take into account changes in the firmware variables, and possibly carry on with other "backlogged" actions that need to happen (for instance, rebuilding <i>BootEntry</i> variables first, and then loading <i>MokManager</i> to add a new signing key before we can load a new <i>grub2</i> image you signed yourself).</div>
<div>
<br /></div>
<h2>
Grub2</h2>
<div>
<i>grub2</i> is not a new piece of the boot process in any way; it's been around for a long while. The difference between booting in BIOS mode and in UEFI is that we install a UEFI binary version of <i>grub2</i>. The software is the same, just packaged slightly differently (I may outline the UEFI binary format at some point in the future). It also goes through some code paths that are specific to UEFI, such as checking if we've booted through <i>shim</i>, and if so, asking it to validate signatures. If not, we can still validate signatures, but we would have to do so using the UEFI protocol itself, which is limited to allowing signatures by keys that are included <i>in the firmware</i>, as explained earlier: mostly just the Microsoft signatures.</div>
<div>
<br /></div>
<div>
<i>grub2</i> in UEFI otherwise works just like it would elsewhere: it tries to find its <b>grub.cfg</b> configuration file, and follows its instructions to boot the kernel and load the initramfs.</div>
<div>
<br /></div>
<div>
When Secure Boot is enabled, loading the kernel normally requires that the kernel itself is <b>signed</b>. The kernels we install in Ubuntu are signed by Canonical, just like <i>grub2</i> is, and <i>shim</i> knows about the signing key and can validate these signatures.</div>
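<div>
<br /></div>
<div>
Since the kernel image is itself an EFI binary, you can list the signatures attached to it with <i>sbverify</i> (from the sbsigntool package); adjust the path for your installed kernel:</div>
<blockquote class="tr_bq">
sbverify --list /boot/vmlinuz-$(uname -r)</blockquote>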
<div>
<br /></div>
<div>
At the time of this writing, if the kernel isn't signed or is signed by an unknown key, <i>grub2</i> will fall back to loading the kernel as a normal binary (as in, <b>not signed</b>), outside of <i>BootServices</i> (a special mode we're in while booting the system; normally it's exited early on as the kernel loads). Exiting <i>BootServices</i> means some special features of the firmware are not available to anything that runs afterwards, so that while things may have been loaded in UEFI mode, they will not have access to everything in firmware. If the kernel is signed correctly, then <i>grub2</i> leaves the <i>ExitBootServices</i> call to be done by the kernel.</div>
<div>
<br /></div>
<div>
Very soon, we will stop allowing unsigned kernels (or kernels signed by unknown keys) to be loaded in Ubuntu. This is work in progress. The change will not affect most users, only those who build their own kernels. They will still be able to load their kernels by making sure they are signed by some key (such as their own; I will cover signing things in my next blog entry), and importing that key into shim (a step you only need to do once).</div>
<div>
<br /></div>
<h2>
The kernel</h2>
<div>
In UEFI, the kernel enforces that the modules it loads are properly signed. This means that if you need to build your own custom modules, or use DKMS modules (virtualbox, r8168, bbswitch, etc.), you will need to take a few more steps to let the modules load properly.</div>
<div>
<br /></div>
<div>
In order to make this as easy as possible for people, for now we've opted to let users disable <b>Secure Boot validation</b> in <i>shim</i> via a semi-automatic process. <i>Shim</i> is still verified by the system firmware, but anything following it that asks <i>shim</i> to validate something will get an affirmative response (i.e. things are considered valid, even if not signed or signed by an unknown key). <i>grub2</i> will happily load your kernel, and your kernel will be happy to load custom modules. This is obviously not a perfectly secure solution; it's more of a temporary measure to allow things to carry on as they did before. In the future, we'll replace this with a wizard-type tool to let users sign their own modules easily. For now, signing binaries and modules is a manual process (as above, I will expand on it in a future blog entry).</div>
<div>
<br /></div>
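<div>
To give a rough idea of what that manual process can look like today (a sketch only: the exact steps may change, and the key file names here are just examples), you would create a signing key, queue its public half for enrollment through MokManager, and sign your module with kmodsign:</div>
<blockquote class="tr_bq">
<code># Create a key pair (MOK.priv/MOK.der are example names):<br />
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 -subj "/CN=My module signing key/" -keyout MOK.priv -outform DER -out MOK.der<br />
# Queue the public half for enrollment; MokManager will prompt at next boot:<br />
sudo mokutil --import MOK.der<br />
# After rebooting and enrolling the key, sign your module:<br />
kmodsign sha512 MOK.priv MOK.der module.ko</code></blockquote>
<div>
<br /></div>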
<h3>
Shim validation</h3>
<div>
To re-enable <i>shim validation</i>: if you were using DKMS packages but feel you'd really prefer to have <i>shim</i> validate everything (be aware that if your system requires these drivers, <b>they will not load and your system may be unusable</b>, or at least whatever needs that driver will not work), run:</div>
<blockquote class="tr_bq">
sudo update-secureboot-policy --enable</blockquote>
If nothing happens, it's because you already have <i>shim validation</i> enabled: nothing has required that it be disabled. If things aren't as they should be (for instance, <i>Secure Boot</i> is not enabled on the system), the command will tell you.<br />
<br />
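Incidentally, if you want to check whether Secure Boot is currently active in the firmware before toggling anything, <b>mokutil</b> can tell you (assuming the mokutil package is installed):<br />
<blockquote class="tr_bq">
<code>$ mokutil --sb-state<br />
SecureBoot enabled</code></blockquote>
<br />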
And although we certainly don't recommend it, you can disable shim validation yourself with much the same command (see <b>--help</b>). There is an example of use of <b>update-secureboot-policy</b> <a href="https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS">here</a>.Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com2tag:blogger.com,1999:blog-1400765799263079238.post-4171812035421505072017-05-23T20:54:00.000-04:002017-05-23T20:54:21.978-04:00ss: another way to get socket statisticsIn my last blog post I mentioned <b>ss</b>, another tool that comes with the <i>iproute2</i> package and allows you to query statistics about sockets. It does much the same thing as <b>netstat</b>, with the added benefit that it is typically a little bit faster, and shorter to type.<br />
<br />
Running just <b>ss</b> by default will display much the same thing as netstat, and it can similarly be passed options to limit the output to just what you want. For instance:<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">$ ss -t</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">State Recv-Q Send-Q Local Address:Port Peer Address:Port </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">ESTAB 0 0 127.0.0.1:postgresql 127.0.0.1:48154 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">ESTAB 0 0 192.168.0.136:35296 192.168.0.120:8009 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">ESTAB 0 0 192.168.0.136:47574 173.194.74.189:https</span></blockquote>
[...]<br />
<br />
<b>ss -t </b>shows just TCP connections. <b>ss -u </b>can be used to show UDP connections, <b>-l </b>will show only listening ports, and things can be further filtered to just the information you want.<br />
<br />
I have not tested all the possible options, but you can even forcibly close sockets with <b>-K</b>.<br />
<br />
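For example, forcibly closing every connection to a given address might look like this (a sketch only -- I have not verified it myself, and it requires a recent enough kernel built with support for destroying sockets):<br />
<blockquote class="tr_bq">
<code>$ sudo ss -K dst 192.168.0.102</code></blockquote>
<br />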
One place where <b>ss</b> really shines though is in its filtering capabilities. Let's list all connections with a source port of 22 (ssh):<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">$ ss state all sport = :ssh</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">tcp LISTEN 0 128 *:ssh *:* </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">tcp ESTAB 0 0 192.168.0.136:ssh 192.168.0.102:46540 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">tcp LISTEN 0 128 :::ssh :::* </span></blockquote>
And if I want to show only connected sockets (everything but <i>listening</i> or <i>closed</i>):<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">$ ss state connected sport = :ssh</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">tcp ESTAB 0 0 192.168.0.136:ssh 192.168.0.102:46540 </span></blockquote>
<br />
Similarly, you can have it list all connections to a specific host or range; in this case, using the 74.125.0.0/16 subnet, which apparently belongs to Google:<br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: x-small;"><br /></span></span>
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">$ ss state all dst 74.125.0.0/16<br />Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port <br />tcp ESTAB 0 0 192.168.0.136:33616 74.125.142.189:https <br />tcp ESTAB 0 0 192.168.0.136:42034 74.125.70.189:https <br />tcp ESTAB 0 0 192.168.0.136:57408 74.125.202.189:https</span><span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: x-small;"> </span> </span><span style="font-family: "Courier New", Courier, monospace;"> </span></blockquote>
This is very much the same syntax as for <i>iptables</i>, so if you're familiar with that already, it will be quite easy to pick up. You can also install the <i>iproute2-doc</i> package, and look in <i>/usr/share/doc/iproute2-doc/ss.html</i> for the full documentation.<br />
<br />
Try it for yourself! You'll see how well it works. If anything, I'm glad for the fewer characters this makes me type.Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com0tag:blogger.com,1999:blog-1400765799263079238.post-36828760128161858482017-05-09T12:31:00.000-04:002017-05-12T16:05:48.368-04:00If you're still using ifconfig, you're living in the past<h2>
The world evolves</h2>
I regularly see "recommendations" to use ifconfig to get interface information in mailing list posts or bug reports and other places. I might even be guilty of it myself. Still, the world of networking has evolved quite a lot since <b>ifconfig</b> was the de-facto standard to bring up a device, check its IP or set an IP.<br />
<br />
Following some improvements in the kernel and the gradual move to driving network configuration via netlink, <b>ifconfig</b> has been largely replaced by the <b>ip</b> command.<br />
<br />
Running just <b>ip </b>yields the following:<br />
<br />
<blockquote class="tr_bq">
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }<br />
ip [ -force ] -batch filename<br />
where OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |<br />
tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |<br />
netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila }<br />
OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |<br />
-h[uman-readable] | -iec |<br />
-f[amily] { inet | inet6 | ipx | dnet | mpls | bridge | link } |<br />
-4 | -6 | -I | -D | -B | -0 |<br />
-l[oops] { maximum-addr-flush-attempts } | -br[ief] |<br />
-o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |<br />
-rc[vbuf] [size] | -n[etns] name | -a[ll] | -c[olor]}</blockquote>
<br />
I understand this may look complicated to some people, but the gist of it is that with <b>ip</b>, you interact with <i>objects</i>, and apply some kind of <i>function</i> to them. For example:<br />
<br />
<b>ip address show</b><br />
<br />
This is the main command that would be used in place of <b>ifconfig</b>. It will just display the IP addresses assigned to all interfaces. To be precise, it will show you the layer 3 details of each interface: the IPv4 and IPv6 addresses, whether it is up, and the different properties related to the addresses...<br />
<br />
Another command will give you details about the <i>layer 2</i> properties of the interface: its MAC address (ethernet address), etc. -- even if some of that <i>is</i> also shown by <b>ip address</b>:<br />
<b><br /></b>
<b>ip link show</b><br />
<b><br /></b>
Furthermore, you can set devices up or down (similar to <b>ifconfig eth0 up </b>or <b>ifconfig eth0 down</b>) simply by using:<br />
<br />
<b>ip link set <i>DEVICE</i> up </b>or <b>ip link set <i>DEVICE</i> down</b><br />
<b><br /></b>
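You can also add or remove an address, which is probably what people most often still reach for ifconfig to do (the address here is just an example):<br />
<br />
<b>sudo ip address add 192.168.0.10/24 dev eth0</b><br />
<b>sudo ip address del 192.168.0.10/24 dev eth0</b><br />
<br />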
As shown above, there are lots of other objects that can be interacted with using the <b>ip</b> command. I'll cover another: <b>ip route</b>, in another post.<br />
<br />
<h2>
Why is this important?</h2>
<div>
As time passes, more and more features are becoming easier to use with the <b>ip</b> command instead of with ifconfig. We've already stopped installing ifconfig on desktops (it still gets installed on servers for now), and people have been discussing dropping net-tools (the package that ships ifconfig and a few other old commands that are replaced) for a while now. It may be time to revisit not installing net-tools by default anywhere.</div>
<div>
<br /></div>
<h2>
I want to know about your world</h2>
<div>
Are you still using one of the following tools?</div>
<div>
<br /></div>
<div>
<div>
/bin/netstat (replaced by <b>ss</b>, for which I'll dedicate another blog post entirely)</div>
<div>
/sbin/ifconfig</div>
<div>
/sbin/ipmaddr (replaced by <b>ip maddress</b>)</div>
<div>
/sbin/iptunnel</div>
<div>
/sbin/mii-tool (<b>ethtool</b> should appropriately replace it)</div>
<div>
/sbin/nameif</div>
<div>
/sbin/plipconfig</div>
<div>
/sbin/rarp</div>
<div>
/sbin/route</div>
<div>
/sbin/slattach</div>
</div>
<div>
<br /></div>
<div>
If so, and there is just no alternative from iproute2 (well, the <b>ip</b> or <b>ss</b> commands) that you can use to do the same, I want to know how you are using them. We're always watching for things that might be broken by changes; we want to avoid breaking things when possible.</div>
Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com8tag:blogger.com,1999:blog-1400765799263079238.post-35621071499814845362017-05-05T16:25:00.000-04:002017-05-05T16:25:12.248-04:00Quick and easy network configuration with NetplanEarlier this week I uploaded <a href="https://launchpad.net/ubuntu/+source/nplan/0.21">netplan 0.21</a> in artful, with SRUs in progress for the stable releases. There are still lots of features coming up, but it's also already quite useful. You can already use it to describe typical network configurations on desktop and servers, all the way to interesting, complicated setups like bond over a bridge over multiple VLANs...<br />
<br />
<h2>
Getting started</h2>
<div>
The simplest netplan configuration might look like this:</div>
<div>
<br /></div>
<div>
<blockquote class="tr_bq">
# Let NetworkManager manage all devices on this system<br />network:<br /> version: 2<br /> renderer: NetworkManager</blockquote>
</div>
<div>
At boot, netplan will see this configuration (which happens to be installed already on all new systems since 16.10) and generate a single, empty file: /run/NetworkManager/conf.d/10-globally-managed-devices.conf. This tells the system that NetworkManager is the only <i>renderer</i> for network configuration on the system, and will manage all devices by default.</div>
<div>
<br /></div>
<h2>
Working from there: a simple server</h2>
<div>
Let's look at it on a hypothetical web server; such as for my favourite test: www.perdu.com.</div>
<blockquote class="tr_bq">
<br />network:<br /> version: 2<br /> ethernets:<br /> eth0:<br /> dhcp4: true</blockquote>
This incredibly simple configuration tells the system that the <b>eth0</b> device is to be brought up using DHCPv4. Netplan also supports DHCPv6, as well as static IPs, setting routes, etc.<br />
<br />
<br />
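For comparison, a static setup for the same device might look something like this (a sketch; the addresses are made-up examples, and as always the YAML indentation matters):<br />
<br />
<blockquote class="tr_bq">
network:<br />
  version: 2<br />
  ethernets:<br />
    eth0:<br />
      addresses:<br />
        - 192.168.0.10/24<br />
      gateway4: 192.168.0.1<br />
      nameservers:<br />
        addresses: [192.168.0.1]</blockquote>
<br />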
<h2>
Building up to something more complex</h2>
<div>
Let's say I want a team of two NICs, and use them to reach VLAN 108 on my network:</div>
<div>
<br /></div>
<div>
<blockquote class="tr_bq">
network:<br />
  version: 2<br />
  ethernets:<br />
    eth0:<br />
      dhcp4: no<br />
    eth1:<br />
      mtu: 1280<br />
      dhcp4: no<br />
  bonds:<br />
    bond0:<br />
      interfaces:<br />
        - eth1<br />
        - eth0<br />
      mtu: 9000<br />
  vlans:<br />
    bond0.108:<br />
      link: bond0<br />
      id: 108</blockquote>
</div>
<div>
<br /></div>
<div>
I think you can see just how simple it is to configure even pretty complex networks, all in one file. The beauty in it is that you don't need to worry about what will actually set this up for you.</div>
<div>
<br /></div>
<h2>
A choice of backends</h2>
<div>
Currently, netplan supports either NetworkManager or systemd-networkd as a backend. The default is to use systemd-networkd, but given that it does not support wireless networks, we still rely on NetworkManager to do just that.</div>
<div>
<br /></div>
<div>
This is why you don't need to care what supports your config in the end: netplan abstracts that for you. It generates the required config based on the "renderer" property, so that you don't need to know how to define the special device properties in each backend.</div>
<div>
<br /></div>
<div>
As I mentioned previously, we are still hard at work adding more features, but the core is there: netplan can set up bonds, bridges, vlans, standalone network interfaces, and do so for both static or DHCP addresses. It also supports many of the most common bridge and bond parameters used to tweak the precise behaviour of bonded or bridged devices.</div>
<div>
<br /></div>
<div>
<br /></div>
<h2>
Coming up...</h2>
<div>
I will be adding proper support for setting a "cloned" MAC on a device. I'm already reviewing the code to do this, and ironing out the last issues.</div>
<div>
<br /></div>
<div>
There are also plans for better handling of administrative states for devices, along with fixes for a few bugs that relate to supporting MAAS, where having a simple configuration style really shines.</div>
<div>
<br /></div>
<div>
I'm really excited for where netplan is going. It seems like it has a lot of potential to address some of the current shortcomings in other tools. I'm also really happy to hear of stories of how it is being used in the wild, so if you use it, don't hesitate to let me know about it!</div>
<div>
<br /></div>
<h2>
Contributing</h2>
<div>
All of the work on netplan happens on Launchpad. Its source code is at <a href="https://code.launchpad.net/netplan">https://code.launchpad.net/netplan</a>; we always welcome new contributions.</div>
Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com7tag:blogger.com,1999:blog-1400765799263079238.post-20896707030807899882016-06-02T20:15:00.001-04:002016-06-02T20:15:23.846-04:00Netflix, or the pains of dealing with royalties and DMCAA few days ago, after enjoying the use of a pretty much static IP address for a long while from my ISP (it hadn't changed in easily a year), my IP changed. This took down my IPv6 tunnel, which I tend to use a lot to access various services for work -- you know, dogfooding and all of that. My IPv6 address depends on a tunnel that needs to stay up (and for that, it requires my IPv4 address to not change too much, but whatever).<br />
<br />
Probably since then (but I did not really notice until yesterday or so), I've had multiple issues with Netflix streaming. As many know, Netflix is now enforcing some method of detecting VPN and proxy users, to force local content upon its users. I think it's a stupid idea, but I see where they're coming from with that decision.<br />
<br />
Netflix has to deal with royalties, copyright, and varying laws depending on where a user might be. For instance, you may wish to watch NCIS -- this will likely depend on Netflix having paid CBS (or whoever the title belongs to) to be allowed to present it to clients. I have no idea how these costs are structured; they might well be a percentage based on the number of viewers or some such.<br />
<br />
In the US, this is relatively easy: they can deal with local companies and handle things. This becomes more complicated when you factor in different copyright laws in other countries, exclusivity rights, etc. In the case of NCIS, Global appears to have (exclusive?) rights to it, so they look to be the only legit place to stream episodes online. I suspect Netflix would possibly have to pay *them* to stream NCIS in Canada, or otherwise be subject to random other byzantine rules. I don't pretend to understand the intricacies past the one class I took on Canadian copyright/patent/IP law over a year ago. Suffice it to say it's complicated, and there are probably good reasons to try and have users in country X watch country X's content, and not country Y's. It's likely to cut costs.<br />
<br />
My issue stemmed from the fact that with the reset of my IPv4/IPv6 connection, or possibly just as a coincidence, Netflix started to care about my IPv6 addresses. It's possible that geoip data informed this, or that Netflix started to do more checking, or started to do proper IPv6, etc. I don't know.<br />
<br />
I had an online chat with an awesome Netflix Customer Service rep, HecThor (the name is awesome too!), and received great service even if they could not help:<br />
<br />
<br />
<blockquote class="tr_bq">
Netflix Customer Service<br />
You are now chatting with: Hecthor<br />
Netflix Hecthor<br />
Hello!! My name is HecThor! How can I help?<br />
You<br />
Hi, I'm Matt, I keep getting error messages saying that I am behind a proxy or VPN when I am not<br />
You<br />
Would you be able to consult logs or whatever you might have to tell me why that has been detected so I can take the necessary steps?<br />
Netflix Hecthor<br />
Oh, let me check this out for you. Could you please tell me which device are you using?<br />
You<br />
Right now, my Chrome browser<br />
You<br />
probably listed as Chrome on linux, version 50.0.2661.94<br />
You<br />
I had the same issues on a different device too (another Chrome, version is most likely different as it is running on a Chromebook device)<br />
Netflix Hecthor<br />
Just a quick question, have you tried going to the extensions of Google Chrome and unchecked and tried Netflix one more time?<br />
You<br />
yes<br />
You<br />
what's more, this one does not have any extensions<br />
Netflix Hecthor<br />
Alright, just to confirm, are you using Linux?<br />
You<br />
not exclusively, but yes<br />
Netflix Hecthor<br />
Oh got it, I'm seeing here that the signal is being redirected to the US and then to Canada several times in a day, in this case the best thing to do is to check with your Internet Service Provider to investigate why your connection appears to come from a VPN or a proxy service, as they are in charge of the signal.<br />
You<br />
this is to be expected, I get IPv6 connectivity from a US provider for work purposes<br />
You<br />
could it be that you guys started to allow ipv6?<br />
Netflix Hecthor<br />
Oh got it, we do support with IPv6, however having the setting set to the US instead of Canada may cause this conflict , so in this case what I recommend is to turn it off and you'll be able to stream without a problem. :)<br />
You<br />
it's not the kind of thing I can turn off<br />
You<br />
there aren't providers here who do IPv6<br />
You<br />
is there any way for you to set my account to only use IPv4?<br />
Netflix Hecthor<br />
Got it, you see we don't have a way to set an account to use IPv4 or Ipv6 as this has to do with the Internet service, so in this case I would recommend you to contact them and try to reset the signal or check if they're able to do that change on your settings, I'm sure that once they do you'll be able to stream Netflix without a problem.<br />
You<br />
There is no thing to reset, there is no Canada endpoint for this thing.<br />
You<br />
in fact, it only started to be an issue since the last reset, because my IPv4 address changed a few days ago as well<br />
Netflix Hecthor<br />
I understand, and do you have a way so you can return to IPv4? The thing is that Netflix is working fine, however the system is detecting that your IP is constantly changing from region to region, this is why the system is not letting you stream.<br />
You<br />
I can't do this change on the local systems, no. This is how my home network is set up -- like I said, I do need IPv6 for my work. I work from home.<br />
Netflix Hecthor<br />
Oh I definitely understand what you mean, however, to be completely honest, the process you use will not let you stream. Unless you change that wont be able to stream, because when the system detects that you're in a country and your network shows another one, this issue appears, it might work some times but I can't guaranty it will always work, if you like you can try Netflix on your mobile's network to verify this.<br />
You<br />
I don't especially want to verify anything, since we have a fair expectation of what the issue is<br />
You<br />
you've been quite helpful<br />
You<br />
Do you object to me using this chat log for documentation purposes?<br />
You<br />
I can remove your name if you prefer, but I thought it looked badass enough ;)<br />
Netflix Hecthor<br />
Sure, no problem, and it's been a pleasure being able to help! :) Is there anything else I can do for you?</blockquote>
<div>
<br />
I went on to ask to file a complaint / provide feedback to the team, since Netflix should be aware of the complexity and inconvenience this poses for its customers. Still, I want to reiterate that I was quite happy with the service I've had from Customer Support rep HecThor, who was helpful and understanding.<br />
<br />
I'm technical enough to be able to deal with such issues in various ways. I did some searching, and it looks like you *can not* disable IPv6 just for Chrome. It's also impractical to disable the IPv6 tunnel... I have it up for a good reason, and it had been working for a long while (that too, over a year) with no issues. Other people could also have special network setups that could interfere with Netflix streaming services. VPNs happen, and they are not all used to watch US content. They can also be set up at the router level rather than at the device level; and some ISPs even require PPTP VPN use to get any kind of connectivity at all (or did in the past).<br />
<br />
The inability to disable IPv6 in Chrome is probably really a usability bug in it, but it shows how the average user might eventually run into issues dealing with content "blocking" based on location. I'm not really expecting the average user to have a network setup like mine: I had to set up IPv6 myself here, as none of the providers in Canada do a satisfactory job of it. I also don't expect the average user to care about the IP family at all -- but we'll soon get to a point where blocking based on IP and location won't make sense. IPv6 is meant to improve <a href="https://en.wikipedia.org/wiki/Mobile_IP">mobility</a>, and there are some steps taken to ensure this (see <a href="https://tools.ietf.org/html/rfc3775">RFC 3775</a>). GeoIP data can be wrong, misleading, or simply nonexistent too, so you really ought not to rely on that at all.<br />
<br />
It seems Netflix has been doing relatively well in leading some interesting infrastructure ideas, aside from not being very cooperative with Linux users for a long while (fortunately, Netflix now works on Linux, but only with the official Google Chrome, still not with free software browsers). It would be good to see that leadership continue and avoid restrictive policies in favor of cooperation, especially for a company <a href="http://techblog.netflix.com/">priding itself</a> on <a href="http://fr.slideshare.net/adrianco/netflix-and-open-source">using Linux and open source technologies</a>.<br />
<br />
For now, I've opted to null-route Netflix over IPv6, which means I get a small delay but I can still watch Futurama. It's the least intrusive change I could think of that avoids tearing down my IPv6 tunnel while still letting me watch content.<br />
<br />
If for some inexplicable reason you also have a Cisco router at home and use an IPv6 provider from the US to get IPv6 connectivity and want to make sure Netflix keeps working; this is the command I used:<br />
<br />
<blockquote class="tr_bq">
#ipv6 route 2406:DA00:FF00::/48 Null0</blockquote>
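<br />
If your home router is a Linux box rather than a Cisco, the equivalent null-route would look something like this (the same caveat applies -- the prefix belongs to Netflix today, but could change at any time):<br />
<blockquote class="tr_bq">
<code>sudo ip -6 route add blackhole 2406:da00:ff00::/48</code></blockquote>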
<br />
Rather than using outdated, unreliable technology to enforce restrictive, ill-designed content rules, Netflix should lead an overhaul of the limitations imposed upon it by the original content providers. That, or use some of those uncountable piles of money to cover the potential costs of out-of-country content access.</div>
Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com0tag:blogger.com,1999:blog-1400765799263079238.post-49125118832773304962016-02-29T16:24:00.000-05:002016-02-29T16:24:14.469-05:00Nominations wanted for the Developer Membership BoardHi!<br /><br />The Ubuntu Developer Membership Board is in need of new blood.<br /><br />Of the seven members of the board, the terms of five (5) will expire on March 9th. Members of the Developer Membership Board are elected by all Ubuntu Developers for a term of 2 years, meeting in #ubuntu-meeting about once a fortnight. Candidates should be Ubuntu developers themselves, and should be well qualified to evaluate prospective Ubuntu developers.<br /><br />The DMB is responsible for reviewing developer applicants and deciding when to entrust them with developer privileges or grant them Ubuntu membership status.<br /><br />Provided at least six valid nominations are received, the new members will be chosen using Condorcet voting. Members of the ubuntu-dev team in Launchpad will be eligible to vote, and will receive voting ballots by email (to their email address recorded in Launchpad). A Call for Nominations has already been sent by email to the ubuntu-devel-announce mailing list (but another call for nominations should follow soon): <a href="https://lists.ubuntu.com/archives/ubuntu-devel-announce/2016-February/001167.html">https://lists.ubuntu.com/archives/ubuntu-devel-announce/2016-February/001167.html</a>.<br /><br />Applications should be sent as GPG-signed emails to developer-membership-board at lists.ubuntu.com (which is a private mailing list accessible only by DMB members).<br />
<br />
Of course, if you're nominating a developer other than yourself, please make sure to ask who you're about to nominate beforehand, to make sure they're okay with it.Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com0tag:blogger.com,1999:blog-1400765799263079238.post-37103996733632590602016-01-15T23:28:00.000-05:002016-01-15T23:48:48.668-05:00In full tinfoil hat mode: Using GPG with smartcardsBreaking OPSEC for a bit to write a how-to on using GPG keys with smartcards...<br />
<br />
I've thought about experimenting with smartcards for a while. Turns out that my Thinkpad has a built-in smartcard reader, but most of my other systems don't. Also, I'd like to use a smartcard to protect my SSH keys, some of which I may use on systems that I do not fully control (ie. at the university to push code to Github or Bitbucket), or to get to my server. Smartcard readers are great, but they're not much fun to add to a list of stuff to carry everywhere.<br />
<br />
There's an alternate option: the Yubikey. Yubico appears to have made a version 4 of the Yubikey which has CCID (smartcard magic), U2F (2-factor for GitHub and Google, on Chrome), and their usual OTP token, all on the same tiny USB key. What's more, it is documented as supporting 4096 bit RSA keys, and includes some ECC support (more on this later).<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCU_rx32Bv1CX1-uPkWX3cyBvvdOaX-E7mha5HoDiNG8LYr90FkZ0RJ2ALhvfw6fnIDA_CxmceU2v_V3QCX28CuxtFc6oq5D5HneDxhsJoXvrdj7600qdfUByi7MpbRem3jusOjZCCJ3g/s1600/IMG_20160115_172841.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCU_rx32Bv1CX1-uPkWX3cyBvvdOaX-E7mha5HoDiNG8LYr90FkZ0RJ2ALhvfw6fnIDA_CxmceU2v_V3QCX28CuxtFc6oq5D5HneDxhsJoXvrdj7600qdfUByi7MpbRem3jusOjZCCJ3g/s320/IMG_20160115_172841.jpg" width="320" /></a>Setting up GPG keys for use with smartcards is simple. You have the choice of either creating your keys locally and moving them onto the smartcard, or generating them on the smartcard right away. In order to have a backup of my full key available in a secure location, I've opted to generate the keys off the card, and transfer them.<br />
<br />
For this, you will need one (or two) Yubikey 4 (or Yubikey 4 Nano, or if you don't mind being limited to 2048 bit keys, the Yubikey NEO, which can also do NFC), some backup media of your choice, and apparently, at least the following packages:<br />
<br />
<blockquote class="tr_bq">
<code>gnupg2 gnupg-agent libpth20 libccid pcscd scdaemon libksba8 opensc</code>
</blockquote>
<br />
You should do all of this on a trusted system, not connected to any network.<br />
<br />
First, set up gnupg2 to a reasonable level of security. Edit ~/.gnupg/gpg.conf to pick the options you want; I've based my config on <a href="https://jclement.ca/articles/2015/gpg-smartcard/">Jeffrey Clement's blog entry on the subject</a>:<br />
<br />
<blockquote class="tr_bq">
<code>#default-key AABBCC90DEADBEEF <br />
keyserver hkp://keyserver.ubuntu.com<br />
no-emit-version<br />
no-comments<br />
keyid-format 0xlong<br />
with-fingerprint<br />
use-agent<br />
personal-cipher-preferences AES256 AES192 AES CAST5<br />
personal-digest-preferences SHA512 SHA384 SHA256 SHA224<br />
cert-digest-algo SHA512<br />
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed</code></blockquote>
You'll want to replace default-key later with the key you've created, and uncomment the line.<br />
<br />
The downside to all of this is that you'll need to use gpg2 in all cases rather than gpg, which is still the default on Ubuntu and Debian. gpg2 so far seems to work just fine for every use I've had (including debsign, after setting <i>DEBSIGN_PROGRAM=gpg2</i> in ~/.devscripts).<br />
<br />
You can now generate your master key:<br />
<blockquote class="tr_bq">
<code>gpg2 --gen-key</code></blockquote>
<br />
Then edit the key to add new UIDs (identities) and subkeys, which will each have their own different capabilities:<br />
<br />
<blockquote class="tr_bq">
<code>gpg2 --expert --edit-key 0xAABBCC90DEADBEEF</code></blockquote>
It's best to follow jclement's blog entry for this; there is no point in reiterating all of it. There's also a pretty complete guide from The Linux Foundation IT <a href="https://github.com/lfit/ssh-gpg-smartcard-config/blob/master/YubiKey_NEO.rst">here</a>, though it seems to include a lot of stuff that does not appear to be required on my system, in xenial.<br />
<br />
Add the subkeys. You should have one for encryption, one for signing, and one for authentication. That works out pretty well, since there are three slots on the Yubikey, one for each of these capabilities.<br />
<br />
If you also want your master key on a smartcard, you'll probably need a second Yubikey (that's why I wrote two earlier), which would only get used to sign other people's keys, extend expiration dates, generate new subkeys, etc. That one should be left in a very secure location.<br />
<br />
This is a great point to backup all the keys you've just created:<br />
<br />
<blockquote class="tr_bq">
<code>
gpg2 -a --export-secret-keys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.master.key<br />
gpg2 -a --export-secret-subkeys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.sub.key<br />
gpg2 -a --export 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.pub</code></blockquote>
<br />
Next step is to configure the smartcard/Yubikey to add your name, a URL for the public key, set the PINs, etc. Use the following command for this:<br />
<blockquote class="tr_bq">
<code>gpg2 --card-edit</code>
</blockquote>
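Inside the interactive prompt, you'll first want to enable admin commands; a session looks roughly like this (illustrative only -- the exact prompts depend on your GnuPG version):<br />
<blockquote class="tr_bq">
<code>gpg/card> admin<br />
Admin commands are allowed<br />
gpg/card> name<br />
gpg/card> url<br />
gpg/card> passwd<br />
gpg/card> quit</code></blockquote>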
<br />
Finally, go back to editing your GPG key:<br />
<blockquote class="tr_bq">
<code>gpg2 --expert --edit-key 0xAABBCC90DEADBEEF</code>
</blockquote>
<br />
From this point you can use toggle to select each subkey (using key #), move them to the smartcard (keytocard), and deselect them (key #). To move the master key to the card, "toggle" out of toggle mode then back in, then immediately run 'keytocard'. GPG will ask if you're certain. There is no way to get a key back out of the card; if you want a local copy, you need to have made a backup first.<br />
<br />
Now's probably a great time to copy your key to a keyserver, so that people may eventually start to use it to send you encrypted mail, etc.<br />
<br />
After transferring the keys, you may want to make a "second backup", which would only contain the "clues" for GPG to know on which smartcard to find the private part of your keys. This will be useful if you need to use the keys on another system.<br />
<br />
Another option is to use the public portion of your key (saved somewhere, like on a keyserver), then have gpg2 discover that it's on a smartcard using:<br />
<br />
<blockquote class="tr_bq">
<code>gpg2 --card-status</code></blockquote>
<br />
Unfortunately, if you use separate smartcards, it appears to only manage to pick up either the master key or the subkeys, not both. This may be a blessing in disguise, in that you'd still only use the master key on an offline, very secure system, and only the subkeys in your typical daily use scenario.<br />
<br />
Don't forget to generate a revocation certificate. This is essential if you ever lose your key, if it's compromised, or you're ever in a situation where you want to let the world know quickly not to use your key anymore:<br />
<br />
<blockquote class="tr_bq">
<code>gpg2 --gen-revoke 0xAABBCC90DEADBEEF</code></blockquote>
Store that data in a safe place.<br />
<br />
Finally, more on backing up the GPG keys. It could be argued that keeping your master key on a smartcard might be a bad idea. After all, if the smartcard is lost, while it would be difficult to get the key out of the smartcard, you would probably want to treat it as compromised and get the key revoked. The same applies to keys kept on USB drives or on CD. A strong passphrase will help, but you still lost control of your key and at that point, no longer know whether it is still safe.<br />
<br />
What's more, USB drives and CDs tend to eventually fail. CDs rot after a number of years, and USB drives just seem to not want to work correctly when you really need them. Paper is another option for backing up your keys, since there are ways (paperkey, for instance) to represent the data such that it can either be retyped or scanned back into digital form to be retrieved. Further securing a backup key could involve using gfshare to split it into multiple pieces, in the hope that even if one of its locations is compromised (or lost), you'll still have enough of the others to reconstruct the key.<br />
<br />
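As an example of the gfshare approach, splitting and reassembling a key backup looks something like this (a sketch using gfsplit and gfcombine from the libgfshare-bin package; the share file names here are made up, since the tools pick random share numbers -- do double-check the man pages before trusting a real key to this):<br />
<blockquote class="tr_bq">
<code># Split into 5 shares, any 3 of which are enough to rebuild the key:<br />
gfsplit -n 3 -m 5 0xAABBCC90DEADBEEF.master.key<br />
# Later, recombine any 3 of the generated shares:<br />
gfcombine 0xAABBCC90DEADBEEF.master.key.059 0xAABBCC90DEADBEEF.master.key.118 0xAABBCC90DEADBEEF.master.key.222</code></blockquote>
<br />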
With the subkeys on the Yubikey, and provided <i>gpg2 --card-status</i> reports your card as detected, if you have the gpg-agent running with SSH support enabled you should be able to just run:<br />
<br />
<blockquote class="tr_bq">
<code>ssh-add -l</code></blockquote>
<br />
And have it list your card serial number. You can then use <i>ssh-add -L</i> to get the public key to use to add to authorized_keys files to use your authentication GPG subkey as a SSH key. If it doesn't work, make sure the gpg-agent is running and that ssh-add uses the right socket, and make sure pcscd isn't interfering (it seemed to get stuck in a weird state, and not shutting down automatically as it should after dealing with a request).<br />
<br />
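For reference, enabling the agent's SSH support amounts to something like the following (a sketch; the socket path varies between GnuPG versions, so check where your gpg-agent actually creates it):<br />
<blockquote class="tr_bq">
<code>echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf<br />
# Point SSH at the agent's socket, e.g. in your shell profile:<br />
export SSH_AUTH_SOCK="$HOME/.gnupg/S.gpg-agent.ssh"</code></blockquote>
<br />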
Whenever you try to use one of the subkeys (or the master key), rather than being asked for the passphrase for the key (which you should have set as a very difficult, absolutely unguessable string that you and only you could remember and think of), you will be asked to enter the User PIN set for the smartcard.<br />
<br />
You've achieved proper two-factor authentication.<br />
<br />
<i>A note on ECC on the Yubikey</i>: according to the marketing documentation, the Yubikey knows about ECC p256 and ECC p384. Unfortunately, it looks like <a href="http://safecurves.cr.yp.to/">safecurves.cr.yp.to</a> considers these unsafe, since they do not meet all the SafeCurves requirements. I'm not especially versed in cryptography, but this means I'll read up more on the subject, and stay away from the ECC implementation on the Yubikey 4 for now. However, it doesn't seem, at first glance, that this ECC implementation is meant for GPG at all. The Yubikey also has PIV magic which would allow it to be used as a pure SSH smartcard (rather than using a GPG authentication subkey for SSH), with a private certificate being generated by the card. These certificates could be created using RSA or ECC. I tried to play a bit with it (using RSA), following the <a href="https://developers.yubico.com/yubico-piv-tool/SSH_with_PIV_and_PKCS11.html">SSH with PIV and PKCS11 document on developers.yubico.com</a>, but I didn't manage to make it work. It looks like the GPG functions might interfere with PIV in some way, or I could just not be handling the ssh-agent the right way. I'm happy to be shown how to use this correctly.<br />
<br />
<br />
<br />
<br />Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com1tag:blogger.com,1999:blog-1400765799263079238.post-25495198610708393262015-11-05T10:16:00.001-05:002015-11-05T10:16:07.671-05:00One manpage a day...I first heard of this in a Google Doc, which was linked to by a wiki page in Swedish I was shown by someone on IRC. Unfortunately, I can't find any of these links anymore...<br />
<br />
Documentation in some areas of Ubuntu is sorely lacking. Have you ever run into a case where you tried to use some shiny new program, or make one-shot use of some obscure old thing, without managing to find any documentation for it?<br />
<br />
One of the first things we're trained to do as Unix users is to look for the manpage for a command. Many packages are missing manpages. Any missing manpage can be considered a bug for a program, since it means we're missing documentation for it, and people who might use that software would have no idea how to use it, or at least no idea how to use it effectively and make the most of the available features.<br />
<br />
Two such examples I found on my own system, looking at the contents of /usr/bin, are ubuntu-drivers (pkg:ubuntu-drivers-common) and ubuntu-support-status (pkg:update-manager-core). I'm not trying to point fingers at anything (in fact, I've contributed to ubuntu-drivers before, too), just showing that examples of commands missing a manpage can be trivially found.<br />
<br />
Let's all try to find one manpage to write a day, and we'll quickly improve the state of documentation in Ubuntu by a noticeable amount. Try to write the manpage, but otherwise at least file a bug for the fact that it's missing, against the package that contains that binary.<br />
<br />
For convenience, here's a command to get a list of the commands in /usr/bin that man couldn't find a manpage for; the misses show up as errors on stderr (there may well be a better way to do this, and it will list some false positives):<br />
<br />
<i>ls -1 --file-type /usr/bin | sed 's/@//' | LC_ALL=C xargs -n1 man -w >/dev/null</i><br />
<br />
Then, to find out which package contains that binary, use <i>dpkg -S</i> with the name of the command.<br />
<br />
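For instance, with one of the examples above (illustrative output):<br />
<blockquote class="tr_bq">
<code>$ dpkg -S $(which ubuntu-support-status)<br />
update-manager-core: /usr/bin/ubuntu-support-status</code></blockquote>
<br />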
Thanks to Stefan Bader, Colin King and Louis Bouchard for a stimulating discussion on documentation this morning. :)Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com1tag:blogger.com,1999:blog-1400765799263079238.post-19855856927422523322015-05-04T11:52:00.000-04:002015-05-04T11:52:25.268-04:00Installer session at UOSIf you're interested in how Ubuntu gets installed on systems, want to ask about specific features, or have already filed bugs that you'd like to bring to our attention, watch for my session on the calendar:<br />
<br />
<a href="http://summit.ubuntu.com/uos-1505/meeting/22512/core-1505-installer-healthcheck/">http://summit.ubuntu.com/uos-1505/meeting/22512/core-1505-installer-healthcheck/</a><br />
<br />
It's currently scheduled for Tuesday May 5th at 18:00 UTC (that's in a little bit more than 24 hours!); but just in case it changes time, make sure you're marked as attending and subscribed to the blueprint.<br />
<br />
As stated in the blueprint summary, I can't guarantee we'll get to everything, but it will be the right place to see what has to be done, and for anyone to pitch in time if they're interested in helping out!Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com0tag:blogger.com,1999:blog-1400765799263079238.post-58875803623219144082015-03-29T21:29:00.000-04:002015-03-29T21:29:07.272-04:00Preseeding installationsIn early February, I completed a move from Canonical's Phonedations team to the Foundations team. Part of this new work means debugging a lot of different failure cases in the installer, grub, and other early boot or low-level software, some of which require careful reproduction steps and probably quite a few install runs in VMs or on hardware.<br />
<br />
Given the number of installations I do, I've started to keep preseed files around: the text files used to configure automatic installations. I've made them available at <a href="http://people.canonical.com/~mtrudel/preseed/">http://people.canonical.com/~mtrudel/preseed/</a> so that they can be reused as necessary. Most of these preseed files make heavy use of the network to get the installation data and packages from the web, so they will need to be tweaked for use in an isolated network. They are annotated enough that it should be possible for anyone to improve on them to suit their own needs. I will add to these files as I run across things to test and automate. I hope we can use some of them soon in new automated QA tests where appropriate, so that they can help catch regressions.<br /><br />
For those not familiar with preseeding, these files can be used and referred to in the installation command-line when starting from a network PXE boot or a CDROM or pretty much any other installation medium. They are useful to tell the installer how you want the installation to be done without having to answer all of the individual questions one by one in the forms in ubiquity or debian-installer. The installer will read the preseed file and use these answers without showing the prompts. This also means some of the files I make available should not be used lightly, as they will happily wipe disks without asking. You've been warned :)<br />
<br />
To use this, you'll want to specify "<i>preseed/file=/path/to/file</i>" (or just <i>file=</i>) for a file directly accessible as a file system or through TFTP, or "<i>preseed/url=http://URI/to/file</i>" (or just <i>url=</i>) if it's available using HTTP. On d-i installs, this means you may also need to add "<i>auto=true priority=critical</i>" to avoid having to fill in language settings and the like (since the preseeds are typically only read after language, country, and network have been configured); and on ubiquity installs (for example, using a CD), you'll want to add '<i>only-ubiquity automatic-ubiquity</i>' to the kernel command-line, again to keep the automated, minimal look and feel.<br />
<br />
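Putting it together, the extra kernel parameters for an automated d-i install might look like this (the URL is hypothetical -- point it at wherever your preseed file actually lives):<br />
<blockquote class="tr_bq">
<code>auto=true priority=critical url=http://example.com/preseed/mine.seed</code></blockquote>
<br />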
I plan on writing
another entry soon on how to debug early boot issues in VMs or hardware using serial. Stay tuned.Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com2tag:blogger.com,1999:blog-1400765799263079238.post-9635895792865398932014-08-16T21:09:00.000-04:002014-08-16T21:09:06.417-04:00Focusing and manual skillsI don't think anyone can argue against the fact that to be an effective developer, you need to somehow attain sufficient focus to fully take in the task at hand, and be sufficiently in the zone that answers to the problem at hand come naturally. In a way, programming is like that, a transcendent form of art.<br />
<br />
At least, it is, to some degree, for me. And I do feel that given sufficient focus, calm and quiet (or perhaps background noise, depending on the mood I'm in), I can get "in the zone", and solutions to what I'm trying to do come somewhat naturally. Not to say that I'm necessarily writing good code, but at least it forms some sort of sense in my mind.<br />
<br />
People have different ways to achieve focus. Some meditate, some have it come to them more easily than others. It does happen that for some people, it works well to execute some kind of ritual to get in the right frame of mind: those can be as insignificant as getting out of bed in a certain way (for those fortunate enough to work from home), or a complicated as necessary. I believe many, if not most, integrate it in their routine, to the point they perhaps forget what it is that they do to attain focus.<br />
<br />
For me, it now happens to be shaving, and the associated processes. It used to be kind of a chore, until I picked up wet shaving, and in particular, straight razor shaving.<br />
<br />
There's nothing quite like putting a naked, extremely sharp blade against your skin to get you to only think about one thing at a time :)<br />
<br />
I won't lie, the first shave with that relic was a scary experience. I wasn't at all sure of myself, with only a few tips and some videos on Youtube as training. I had bought a straight razor from Le Centre du Rasoir near my house after stumbling on articles about barbershops on the web, and it somehow interested me.<br />
<br />
Since then, I've slowly taken up the different tasks that go with the actual act of shaving with a straight razor: honing the blade, stropping, shaving, etc.; picking up the different tools required (blade, strop, honing stones, shaving creams or soaps, etc.). It's as I was slowly honing and restoring four straight razors that came to me from eBay and as a gift from my father that I thought of writing this post, in a short break I took from the honing. Getting back home and putting the finishing touches on the four razors got me thinking, and I noticed I had again become much more relaxed just by taking the time to do one thing well, taking care in what I was doing.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVoc5pD134W2ihVjTVMyjJ27YBj-glji8h478Z2s8-oi3TLKeLfCNWpCzylrZ1v53oe67f1WhLwvKzW5T2brlbqB-gbjWz2BKmpo9OUk4k1XYoj7tTXmtoNG0JmBxYLFT7TtCmkJ-wyWc/s1600/IMG_20140816_201416.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVoc5pD134W2ihVjTVMyjJ27YBj-glji8h478Z2s8-oi3TLKeLfCNWpCzylrZ1v53oe67f1WhLwvKzW5T2brlbqB-gbjWz2BKmpo9OUk4k1XYoj7tTXmtoNG0JmBxYLFT7TtCmkJ-wyWc/s1600/IMG_20140816_201416.jpg" height="240" width="320" /></a></div>
<br />
<br />
I think every developer... well, everyone can benefit from acquiring some kind of ritual like this, using our hands rather than our brains to achieve something. It's at least a great experience to get a little bit away from technology for a short while, revisiting old skills of earlier times.<br />
<br />
As for the wet shaving itself, I'd be happy to respond to comments here, or blog again about it if there's enough interest in the subject; I'd love to hear that I'm not the only one in the Ubuntu and Debian communities crazy enough to take a blade to my face.Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com0tag:blogger.com,1999:blog-1400765799263079238.post-71744472146163408062014-01-21T19:17:00.001-05:002014-01-21T19:17:37.682-05:00Call for testing: urfkill / Getting flight mode to rock on Ubuntu and Ubuntu TouchLast month, I blogged about urfkill, and what it's meant to be used for.<br />
<br />
The truth is, flight mode and proper killswitch (read: disabling radios on devices) handling is something that needs to happen on any device that deems itself "mobile". As such, it's one thing we need to look into for Ubuntu Touch.<br />
<br />
I spent the last month or so working on improving urfkill. I've implemented better logging, a way to get debugging logs, flight mode (with patches from my friend Tony Espy), persistence, ...<br />
<br />
At this point, urfkill, with the latest changes from the upstream git repository, seems to be in the proper state to make it into the distro. There is no formal release yet, but this is likely to happen very soon. So, I uploaded a git snapshot from the urfkill upstream repository into Trusty. It's now time to ask people to try it out, see how well it works on their systems, and just generally get to know how solid it is, and whether it's time to enable it on the desktop.<br />
<br />
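Trying it out should be as simple as:<br />
<blockquote class="tr_bq">
<code>sudo apt-get install urfkill</code></blockquote>
<br />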
In time, it would be nice to replace our current implementation of killswitch persistence (saving and restoring the state of the "soft" killswitches), which lives in two upstart jobs — rfkill-store and rfkill-restore — with urfkill as a first step, for the 14.04 release (and to handle flight mode on Touch, of course). In the end, my goal would be to achieve convergence on this particular aspect of the operating system sooner rather than later, since it's a relatively small part of the overall communications/networking picture.<br />
<br />
So I call on everyone running Trusty to try to install the <b>urfkill</b> package, and file bugs or otherwise get me feedback on the software. :)Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com4Montreal, QC, Canada45.5086699 -73.55399249999999345.1529059 -74.1994395 45.8644339 -72.908545499999988tag:blogger.com,1999:blog-1400765799263079238.post-19675275546734873122013-12-13T13:30:00.000-05:002013-12-13T13:30:31.293-05:00urfkill : a daemon to centrally control RF killswitchesHere's another project of the u-daemon variety. The latest addition to upower, udev, etc. Meet urfkill.<br />
<br />
urfkill is meant to be a daemon that centralizes killswitch handling. Rather than having all kinds of different applications and daemons handle Wi-Fi, Bluetooth, WWAN and whatnot separately, and potentially fighting over them, you can have just one system that tracks the states and makes it possible to switch just one type of killswitch all at once, or turn everything off should you so desire...<br />
<br />
One reason I've taken an interest in urfkill in Ubuntu is that as we build a phone, we have to keep thinking about how users of the devices will be mobile. That's even more the case when you think about a phone or tablet than a laptop: on a laptop, you may have to think of WiFi and Bluetooth, but you're just about as likely to have your laptop off or not have it at all; whereas phones and tablets have become ubiquitous in our way of life.<br />
<br />
Like anyone, when thinking mobile, I'd first think of walking around, driving, or other methods of travel. Granted, nobody needs to turn off Wi-Fi when getting in their car, but what about on planes?<br />
<br />
This is the first thing everyone brings up when talking about killswitches. Planes. Alright, you really do need to turn the device off on takeoff and landing, but some airlines now allow wifi to be on and offer in-flight service. They still require you to keep cellular and bluetooth off. Also, while I sometimes do take my laptop out of my bag on long flights, it's just cramped. Space is at a premium on a flight (hey, I fly economy...), you'll likely want to have a drink, people beside you may need to get up, spillage could occur if there is turbulence...<br />
<br />
I don't really enjoy using my laptop on a flight, even though it's quite small. It's just so much trouble and not very comfortable.<br />
<br />
However, I do love to watch saved movies, listen to music, and play games on a tablet. That tablet will most likely need to have radios turned off. My phone will typically just stay off and stowed far enough, since I don't really change SIM cards until I can do so safely without risking losing the thing.<br />
<br />
But then, one can also think of how you should avoid using transmitting equipment in a hospital. Hospitals have radio rules similar to those on planes, to avoid interference with pacemakers, MRI equipment, etc.<br />
<br />
Having all kinds of different applications handle each type of killswitch separately is quite risky and complicated. How can you be certain that things have been turned off? How do you check in the UI whether that's the case? Can you see it quickly by scanning the screen?<br />
<br />
What about the actual process of switching things off? Do you need to go through three different interfaces to toggle everything? What do you need to do if you don't have a physical switch to use?<br />
<br />
What about persistence after a reboot?<br />
<br />
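For a taste of how scattered things are today, the low-level <b>rfkill</b> tool already lets you poke at individual killswitches by hand (assuming the rfkill package is installed; the output is illustrative):<br />
<blockquote class="tr_bq">
<code>$ rfkill list<br />
0: phy0: Wireless LAN<br />
        Soft blocked: no<br />
        Hard blocked: no<br />
$ sudo rfkill block bluetooth</code></blockquote>
<br />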
urfkill is meant to, in time, address all such questions. At the moment, it still needs a lot of work though.<br />
<br />
I've spent the last day fixing bugs I could find while testing urfkill on my laptop, as well as porting it to logind (still in progress). In time, we should be able to use it efficiently in Ubuntu to handle all the killswitches. With some more work, we will also be able to use it to manage the modem and other systems on Touch.<br />
<br />
For the time being, urfkill is available in the archive, for those who want to start experimenting with it.Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com0Montreal, QC, Canada45.5086699 -73.55399249999999345.1529059 -74.1994395 45.8644339 -72.908545499999988tag:blogger.com,1999:blog-1400765799263079238.post-54369654665659211112013-10-29T20:42:00.001-04:002013-10-29T20:42:37.092-04:00Hacking with a Samsung ARM Chromebook on TrustySo I decided it was about time to update / reinstall my Samsung Chromebook (the ARM one...) to Trusty, or at least to use Saucy. Turns out it's not that simple.<br />
<br />First, you need to know where to get the right stuff. I installed straight on the device, so chrUbuntu was the obvious choice. It's a pretty nice script that allows you to do just about anything necessary.<br />
<br />
1) Bring your Chromebook to developer mode.<br />
<br />
I'm not going to give the details. It's findable on the Internet, and unsafe enough that you should only do this if you know what you're doing... That counts double for running Trusty on the Samsung Chromebook.<br />
<br />From there, get into crosh (Ctrl+Alt+T), in shell mode (type shell at the crosh prompt). <br />
<br />
2) Download the script:<br />
<br />
<blockquote class="tr_bq">
cd ~/Downloads<br />
wget http://goo.gl/s9ryd</blockquote>
<br />
3) Run the script:<br />
<br />
<blockquote class="tr_bq">
sudo bash s9ryd xubuntu-desktop dev</blockquote>
<br />
This will do the gory install step, partition your device and format the new partition, download the ubuntu core tarball, and from there install the metapackage you've asked for as a first argument.<br />
<br />
Be aware that if you have never repartitioned the device, you'll likely notice the system rebooting during the process -- if that happens, just re-run the same command and it will pick back up where it left off. It's clear when the process is done and the system is installed: the script requires you to press Enter to reboot.<br />
<br />
This was where things got fun.<br />
<br />
Turns out my device booted into Trusty, but it would only show a black screen with the mouse cursor. If you moved the mouse, you could see the cursor changing, but still nothing else. Switching to another VT (Ctrl-Alt-arrow (F1) or whatnot) would get you a text-mode login, but only if you switched early enough, while X was still getting ready to load... otherwise, you'd just get a pretty garbled display.<br />
<br />
I hacked at the whole thing for a good while. I already knew xf86-video-armsoc was involved in ChromeOS at some point, so I tried to install that.<br />
<br />
Still no love. I tried copying the libs from ChromeOS to the device, in case it was some libmali or EGL/GLES issue... Still nothing better.<br />
<br />
I even touched /etc/X11/xorg.conf with some black magic, looking up the details using w3m in a text console...<br />
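<br />
For the curious, the "black magic" amounted to variations on a minimal device section forcing the armsoc driver -- shown purely to illustrate the kind of thing I tried, not as a working fix (the identifier is arbitrary):<br />
<br />
<pre># /etc/X11/xorg.conf -- hypothetical snippet
Section "Device"
        Identifier "Exynos display"
        Driver "armsoc"
EndSection</pre>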
<br />
Turns out the problem was with xf86-video-armsoc itself. I initially clued in when I looked at the upload dates for the X packages and for xf86-video-armsoc -- something didn't seem quite right: X was newer by a bit. I knew there could be ABI issues in some cases, but after more careful investigation that was probably fine too -- armsoc properly depends on -abi-14.<br />
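<br />
If you want to check that kind of ABI match yourself, both sides are visible with apt-cache (substitute the armsoc package name from your release; I'm omitting the output since it varies):<br />
<br />
<pre># what the driver package was built against
$ apt-cache show xserver-xorg-video-armsoc | grep -i depends
# which video ABIs the running X server provides
$ apt-cache show xserver-xorg-core | grep -i provides</pre>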
<br />
After much more work and trial and error, I updated xf86-video-armsoc to 0.6.0 from the Linaro git tree, reverted one commit changing flags, and it's now mostly working. X runs, I get lightdm, I can run apps -- "compositing" in Xubuntu works too, with transparency and gradients... all with only minimal display corruption of the window decorations.<br />
<br />
So the bottom line is: if you want to run Trusty on your Chromebook, run into similar black screen issues, and feel daring, feel free to try my newly-built <i>xf86-video-armsoc</i> package in my <a href="http://launchpad.net/~mathieu-tl/+archive/ppa">PPA</a>:<br />
<br />
<a href="https://launchpad.net/~mathieu-tl/+archive/ppa/+sourcepub/3627079/+listing-archive-extra">https://launchpad.net/~mathieu-tl/+archive/ppa/+sourcepub/3627079/+listing-archive-extra</a><br />
<br />
It's simple; once you're in a text console on the machine (log in as <i>user/user</i>):<br />
<br />
<blockquote class="tr_bq">
nmcli dev wifi connect <i>&lt;your wifi network&gt;</i> password <i>&lt;your wifi password&gt;</i><br />
sudo add-apt-repository ppa:mathieu-tl/ppa<br />
sudo apt-get update<br />
sudo apt-get install xserver-xorg-video-armsoc-exynos<br />
sudo reboot</blockquote>
<br />
These updated packages, or at least some kind of permanent fix, should make it into Trusty soon. Stay tuned :) Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com5tag:blogger.com,1999:blog-1400765799263079238.post-47374379379474145882013-09-06T17:49:00.000-04:002013-09-06T17:50:23.631-04:00Implementing MTP on Ubuntu TouchOne of the fun parts of working at Canonical is how you get to work on so many different things.<br />
<br />
I've spent the last few days working on an implementation of the MTP protocol for Ubuntu Touch, based on some amazing initial work from Thomas Voß to port the Android MTP libraries to C++.<br />
<br />
I hadn't touched C++ in just about ten years -- the last time was for school -- and things seem to have changed a fair amount since. So I first had to get reacquainted with the quirks of C++, learn Qt (which I used for my initial work), and learn Boost; in particular Boost::Filesystem, plus some of the most fun pieces of Boost like BOOST_FOREACH, range adaptors, etc.<br />
<br />
But MTP has been progressing very nicely, with code that should be ready to land in the Touch images pretty soon. It initially only exposed image files in /home/phablet/Pictures, but the latest code (still at lp:~mathieu-tl/mtp/images) now exposes all files in /home/phablet.<br />
<br />
Not everything is implemented yet: for instance, you can't yet see or change permissions, copy or move files around the exposed filesystem, or delete files. But you can already browse and retrieve files, just as you can use the current MTP code to push files to an Ubuntu Touch device.<br />
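<br />
If you want to poke at this from the host side, the stock mtp-tools (from libmtp) are enough to browse and retrieve; the file ID and name below are made up for the example:<br />
<br />
<pre># check that the device is detected at all
$ mtp-detect
# list the exposed files along with their IDs
$ mtp-files
# fetch a file by its ID
$ mtp-getfile 42 picture.jpg</pre>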
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmKYESHQNi66V6REz0DhjlTg1hmpgVmi72q8YjiWLXtIV7bKw1l2csFXu4qWt79zplED-yrEpzVmj8cJ-oOHwNpK5BGLlen806uqFqVOCC3MjpduSY3w185unAIFv4WMoyATQEnD4M8Y8/s1600/CIMG7070.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmKYESHQNi66V6REz0DhjlTg1hmpgVmi72q8YjiWLXtIV7bKw1l2csFXu4qWt79zplED-yrEpzVmj8cJ-oOHwNpK5BGLlen806uqFqVOCC3MjpduSY3w185unAIFv4WMoyATQEnD4M8Y8/s320/CIMG7070.JPG" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
I'm thrilled at what's to come on that front; this is going to be a lot of fun.</div>
If you're interested in testing this, feel free to take a copy of <a href="https://code.launchpad.net/~mathieu-tl/mtp/images">my bzr branch</a> and experiment. Building mtp is simple:<br />
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
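If you haven't branched the code yet, that's a single command (using the lp: URL linked above):<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">bzr branch lp:~mathieu-tl/mtp/images mtp</span></blockquote>
<br />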
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">cd mtp/<br />
sudo apt-get install bzr bzr-builddeb debhelper build-essential cmake libboost-dev libboost-filesystem-dev<br />
bzr bd</span></blockquote>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Let's keep working on making Ubuntu and the Touch images rock!</div>
Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com4Montréal, QC, Canada45.5086699 -73.55399249999999345.1529059 -74.1994395 45.8644339 -72.908545499999988tag:blogger.com,1999:blog-1400765799263079238.post-44722703408114716752013-05-04T18:39:00.000-04:002013-05-04T18:39:38.163-04:00Protocol stacks on Ubuntu TouchWe're working really hard to get the Ubuntu Touch images into a state where the UI is really polished and the experience rocks -- a truly amazing product that reflects the core principles of Ubuntu.<br />
<br />
One of the aspects of this work is to get to a really good story with protocol stacks in general -- that is to say, bluetooth, WiFi, and the fun things behind connectivity on a mobile device. How can I get my files on the device? How can I copy the pictures that I've just taken back to my computer?<br />
<br />
On the way back home from Oakland I had quite a lot of time to reflect on what we've done so far. I've seen really cool demos of things you can get done on Touch and of how things are going to look in the near future. It makes me very proud to be part of getting Ubuntu to a large number of people through a solid Desktop system, but also through stellar mobile device support.<br />
<br />
So, we did get bluetooth working pretty okay so far on Touch. It seems like the biggest remaining issue is really just UI, but fortunately people are already hard at work fixing that, too. Keyboards can be discovered, and so can mice (I've <a href="http://www.youtube.com/watch?v=lJI7mpAFdLI">uploaded a video</a> to YouTube about that before). Bluetooth headsets should follow soon, but when I last tried I was running into issues with pulseaudio on the Nexus 7... If you're interested in bluetooth and know a little about BlueZ and the command line, by all means, let me know on IRC and let's make this really awesome.<br />
<br />
For WiFi, we also have <i style="font-weight: bold;">indicator-network</i> in the archive: completely rewritten, having received tons of love, and soon ready to shine on both the desktop and mobile phones or tablets. Don't get me wrong, there's still a long way to go, but it feels to me like one thing we can pretty quickly ramp up to <i>converge</i> -- essentially the same experience no matter what form factor it is running on.<br />
<br />
That covers WiFi -- but what about mobile data? (3G / 4G, though I'd rather speak of it in the most general terms.) Well, that's being actively worked on as well. We're not too far off from having working 3G data on the "officially supported" devices (Galaxy Nexus, Nexus 4, Nexus 7); and from there it seems like it may not be too much trouble for people to ramp up that support for other devices. Sure, it's complicated work because of how technical it is, but I think it's still approachable.<br />
<br />
For mobile data, I've lately been working on teaching NetworkManager to speak to oFono -- the two stacks we decided, at UDS, to use to handle networking and telephony. The code itself isn't too pretty yet, but I'll add just a bit more meat to it and provide it as a test <b>bzr</b> branch for people to experiment with, until it's stable enough to make it into the archive altogether.<br />
<br />
All this to say that I'm really excited about the current progress on Touch, and although my progress on my own work items wasn't exactly stellar, I'm thrilled about what's to come. So thrilled I'll contact my cell provider to get the right SIM card size and start using Touch on a Nexus 4 as my main phone next week.<br />
<br />
See you all at virtual UDS, looking forward to lots of constructive discussions about networking and connectivity!<br />
<br />Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com2tag:blogger.com,1999:blog-1400765799263079238.post-40348233524541671852012-08-06T17:22:00.000-04:002012-08-06T17:53:37.522-04:00Bug 1010724: Why doesn't dnsmasq listen on both IPv4 and IPv6?Dnsmasq currently only listens on 127.0.0.1; that's done on purpose. If the only nameserver you have is 127.0.0.1, both IPv4 and IPv6 queries will go through it. It doesn't listen on an IPv6 address. We'll likely change the actual address to '127.0.1.1' as soon as this becomes possible with dnsmasq; there are changes coming upstream that should support it.<br />
<br />
Letting dnsmasq listen on IPv6 is definitely something I wouldn't mind seeing work; but it's unfortunately not as simple as adding '--listen-address=::1' to the parameters passed to dnsmasq by NetworkManager. (Actually, it could be -- see below.)<br />
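<br />
If you want to see the behavior for yourself, it's easy to start a throwaway instance by hand. This is a manual test invocation (not NetworkManager's exact command line), with an upstream resolver picked arbitrarily:<br />
<br />
<pre># foreground test instance, listening only on the IPv6 loopback address
$ sudo dnsmasq --no-daemon --no-resolv --bind-interfaces \
      --listen-address=::1 --server=8.8.8.8</pre>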
<br />
I understand some may want to disable all IPv4 on their systems, but that's not advisable for the time being, at least for the loopback interface and dnsmasq specifically. You absolutely can have an IPv6-only system with no IPv4 addresses on any of the physical interfaces, yet retain the use of 127.0.0.1 on the loopback interface for dnsmasq and others -- DNS resolution will still work for both IPv4 and IPv6 without issues; you simply won't be able to reach IPv4 addresses (since the physical interfaces are IPv6-only).<br />
<br />
The reason '--listen-address' alone can't be used is that we've already had reports of dnsmasq listening on 127.0.0.1 being an issue -- one we want to address. When installed from the 'dnsmasq' package on Ubuntu/Debian, dnsmasq ships an init script that listens on that loopback IPv4 address as well, causing conflicts for those who genuinely want to run a system-wide instance of dnsmasq that can be interrogated via loopback (thus serving the local machine), or for users who haven't changed any of dnsmasq's default configuration.<br />
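<br />
That kind of conflict is easy to spot: the following lists everything bound to port 53 (the dnsmasq line shown is illustrative output):<br />
<br />
<pre>$ sudo netstat -lnptu | grep ':53 '
udp   0   0 127.0.0.1:53   0.0.0.0:*   1234/dnsmasq</pre>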
<br />
In the case of 127.0.0.1, the fix is relatively simple because we can switch to using 127.0.0.2 or 127.0.1.1; but for IPv6, there doesn't seem to be anything other than ::1 specifically meant to be used as a loopback address. In IPv4, a whole subnet (127.0.0.0/8) is available on the loopback interface, while in IPv6 you only have the one address, ::1/128 (see <a href="http://tools.ietf.org/html/draft-smith-v6ops-larger-ipv6-loopback-prefix-00">http://tools.ietf.org/html/draft-smith-v6ops-larger-ipv6-loopback-prefix-00</a>).<br />
<br />
I'm very open to suggestions; at this point I'm looking for good ideas on how best to fix this while avoiding concurrency issues with other applications. But given the rather minimal return of enabling it versus the impact on other software running on the machine -- and because we've already run into precisely this kind of issue (multiple applications listening on the same address on port 53) -- I'd want a really solid alternative before changing things.<br />
<br />
Consider the following two strace outputs for 'ping6 www.google.com'. The first one was run with dnsmasq started (manually, for testing purposes, but with the same parameters as NetworkManager uses) to listen on IPv4:<br />
<br />
<code>
read(3, "# Dynamic resolv.conf(5) file fo"..., 4096) = 183<br />
read(3, "", 4096) = 0<br />
close(3) = 0<br />
munmap(0x7f45cba80000, 4096) = 0<br />
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3<br />
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("127.0.0.1")}, 16) = 0<br />
poll([{fd=3, events=POLLOUT}], 1, 0) = 1 ([{fd=3, revents=POLLOUT}])<br />
sendto(3, "\r\347\1\0\0\1\0\0\0\0\0\0\3www\6google\3com\0\0\34\0\1", 32, MSG_NOSIGNAL, NULL, 0) = 32<br />
poll([{fd=3, events=POLLIN}], 1, 5000) = 1 ([{fd=3, revents=POLLIN}])<br />
ioctl(3, FIONREAD, [90]) = 0<br />
recvfrom(3, "\r\347\201\200\0\1\0\2\0\0\0\0\3www\6google\3com\0\0\34\0\1"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("127.0.0.1")}, [16]) = 90<br />
close(3) = 0<br />
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3<br />
connect(3, {sa_family=AF_INET6, sin6_port=htons(1025), inet_pton(AF_INET6, "2001:4860:800a::93", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)</code>
<br />
<br />
The network is unreachable only because I didn't have IPv6 access at the time. You can see that the request was sent and the address was properly resolved as "2001:4860:800a::93". The most important part is the first connect() using AF_INET as the family and "127.0.0.1" as the address -- that was libc trying to reach the nameserver defined in /etc/resolv.conf.<br />
<br />
Now consider the following strace output, for the same request sent while dnsmasq was configured to listen only on ::1, with ::1 defined as the nameserver in /etc/resolv.conf:<br />
<br />
<code>
socket(PF_INET6, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3<br />
connect(3, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0<br />
poll([{fd=3, events=POLLOUT}], 1, 0) = 1 ([{fd=3, revents=POLLOUT}])<br />
sendto(3, "\220]\1\0\0\1\0\0\0\0\0\0\3www\6google\3com\0\0\34\0\1", 32, MSG_NOSIGNAL, NULL, 0) = 32<br />
poll([{fd=3, events=POLLIN}], 1, 5000) = 1 ([{fd=3, revents=POLLIN}])<br />
ioctl(3, FIONREAD, [90]) = 0<br />
recvfrom(3, "\220]\201\200\0\1\0\2\0\0\0\0\3www\6google\3com\0\0\34\0\1"..., 1024, 0, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 90<br />
close(3) = 0<br />
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3<br />
connect(3, {sa_family=AF_INET6, sin6_port=htons(1025), inet_pton(AF_INET6, "2001:4860:800a::93", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
</code>
<br />
<br />
Very much the same behavior as above. This time, the entry in /etc/resolv.conf was ::1, so that's what was used for the first connect(); and because that's an IPv6 address, AF_INET6 was used as sa_family.<br />
<br />
In both cases, the DNS query was the first thing to run, and it returned pretty much instantly -- whether it went over IPv4 or IPv6.<br />
<br />
One alternative to allow dnsmasq to listen on both IPv4 and IPv6 could be adding a loopback interface (or a tap interface) and using a limited-scope IPv6 address, but some gotchas remain with this particular course of action -- for instance, dnsmasq currently appears to bind to *both* the specified link-local address added to lo as well as the "primary" IPv6 address defined for lo (::1/128).<br />
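<br />
For the adventurous, here's a rough sketch of that experiment; the interface name and address are arbitrary, and I've used a unique-local address rather than a link-local one to sidestep scope IDs (the gotcha above still applies):<br />
<br />
<pre># create a dummy interface and give it a private IPv6 address
$ sudo ip link add dns0 type dummy
$ sudo ip link set dns0 up
$ sudo ip -6 addr add fd00::53/128 dev dns0
# then point a foreground test dnsmasq instance at it
$ sudo dnsmasq --no-daemon --bind-interfaces --listen-address=fd00::53</pre>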
<br />
Furthermore, it seems rather clumsy to me to include both an IPv4 and an IPv6 address in /etc/resolv.conf when they refer to the same software instance. It's not going to bring much benefit.<br />
<br />
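Before the closing note below about Quantal's drop-in configuration support, here's the sort of snippet such a file could contain -- the file name and the values are hypothetical:<br />
<br />
<pre># /etc/NetworkManager/dnsmasq.d/tweaks.conf -- hypothetical example
# enable some local caching, and log queries while debugging
cache-size=400
log-queries</pre>
<br />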
If you don't care about any of these details or whether ::1 shows up in /etc/resolv.conf automatically, don't run other instances of dnsmasq, and want to experiment with custom configurations: in Quantal you'll be able to add configuration settings to files in /etc/NetworkManager/dnsmasq.d (like the example above) and tweak the settings as required.Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com0tag:blogger.com,1999:blog-1400765799263079238.post-27619797340637058152012-05-22T22:46:00.001-04:002012-05-22T22:46:21.187-04:00Here's a quarter, go buy yourself a clue<blockquote class="tr_bq">
"<i>Don't hate the media, become the media</i>" <i>~Jello Biafra</i></blockquote>
<br />
TL;DR: Get out of my way. I'm in favour of the tuition hike. No reason to protest, for now.<br />
<br />
I don't want to hear anything about the protests, Bill 78, the other bylaw in Montreal, the tuition hikes, or whether you're for or against them. I couldn't care less. There are so many other fish to fry. I'm going "back" to school for the fall session at UQAM, in computer science (evenings, part-time). Just make sure you're not pissing me off or getting under my feet by then.<br /><br />For those wondering, my opinions are simple:<br />
<ul>
<li>I can pay for my courses without a problem, so the hike doesn't affect me, as long as it stays reasonable. In my opinion, it is for now.</li>
<li>Education must be accessible to anyone who wants it, but that doesn't mean a hike will truly change things dramatically.</li>
<li>The government owes it to everyone to improve loans and bursaries anyway, to help people with talent or potential make the most of that potential and of their right to education.</li>
<li>Education is a right, sure, but it's not indispensable. If all we have is PhD candidates, who's going to be a plumber, pick up the garbage, and so on? We need people of all levels, all cultures, and all levels of education; there's no such thing as a lesser job.</li>
<li>Following that line of thought, working is likely to help a lot with paying tuition. You're not obliged to go straight to university after CEGEP, nor to go full-time. I'll be the guinea pig for my own claim, but I'd be quite surprised to learn I'm wrong... (others around me have already done the same, successfully).</li>
<li>On the same train of thought: /there is no such thing as a free lunch/. Everything has a cost; there are jobs at stake and salaries to pay. It has to come out of someone's pockets.</li>
<li>What's more: if it doesn't come directly from the user, it necessarily comes out of someone else's pockets. And if that "someone else" is the government, then implicitly it comes, overall, out of the pockets of the entire population.</li>
<li>Finally, still along the same lines: I shouldn't have to pay to improve anyone's earning potential or to open doors for anyone but myself (and my children, but then see above: they'd pay for themselves, with help and potentially bursaries, etc.)</li>
<li>I'm entirely in favour of giving everyone an equal chance, and a decent education for the whole population, so that everyone starts on an equal footing. In that sense, "free schooling" makes complete sense at the primary and secondary levels, and even at the college level. It's a matter of a proper understanding of math, French, English, and so on, as well as learning how to learn and developing critical thinking. We don't want sheep who follow whatever they're told, but sensible people with their own ideas. That doesn't mean, however, that university education should be part of that "common base" -- it's where you learn more advanced concepts, where you specialize, etc.</li>
<li>The point above implies a radical change in corporate mentality. Do we really need to make a signed piece of paper indispensable as proof of understanding a subject (and mind you, there are exceptions: medicine, law, teaching, etc.)? In computing, I'm not at all convinced it's necessary. I would rather hire applicants who prove their technical and *active* understanding, *in the real world*, based on experience or on clear potential (aptitude tests, in-depth interviews), than an applicant fresh out of university with a diploma.</li>
<li>I don't consider Bill 78 antidemocratic. Banning protests, restricting them, and so on, would be. Asking people to kindly announce the when, how, and why is simply giving the police a tool to ensure the safety of the protesters and of the rest of the population. Imagine if it turned into a confrontation between different groups of citizens? (oh wait...) At the very least, it gives a hint that something may not be working right if those details (see above) were to change.</li>
<li>In the same vein: that bylaw about masks, covering your face, etc. -- I think it's a good thing. You don't need a mask to carry out a successful protest. Otherwise, if you have something to hide, people are entitled to wonder what you're doing there, and entitled to think they should intervene. I won't use this post as a soapbox for other situations where covering one's face gets in the way of public life.</li>
</ul>
Yes, I'm going to UQAM to do my certificate, for my own personal edification. Because subjects like agile development and IT law genuinely interest me (to give just two examples). Maybe I'm crazy that way, but I believe it's exactly that kind of questioning, that thirst for knowledge, and that kind of person I'd like to see Quebec made of. Not grumblers ready to tear everything down at the first glimmer of a little complexity in their cushioned lives.<br />
<br />
Moving on to the protests: I like having PEACE. Being able to walk around Montreal, "my city" (erm... you know what I mean...), without having to wonder whether I'll be blocked by protesters, by the police, or by a shutdown of the metro or the bridges on my way home. Not having to wonder whether I'll get to spend a quiet evening on a terrace, or see my car burned by a rioter, or get pepper-sprayed by a police officer...<br />
<br />
Of course, some officers can have a bad attitude, shove people around and all that -- but those officers will have to answer for their actions, and they are not the majority. The police are there to keep everyone safe; that comes with a lot of stress and responsibility, so let's give them a bit of a chance. No need to get in the way of their work either. I know police officers. They're not all bad people...<br />
<br />
And the protesters. Seriously? Can we still call it a protest when it turns into a riot (or nearly one) every other time? The moment acts of violence appear, it seems to me that plain common sense is to dissociate yourself from the group, as radically and as permanently as possible. After thirty consecutive days of protests, one can understand that there is grumbling and discontent, but common sense tells me to get the hell out and have nothing more to do with these people -- that things will turn bad, and that it's only a matter of time. Besides, I'm certain there are other ways to apply pressure and express discontent. Much better methods that, on top of everything, won't prevent others from finally being able to finish their last session -- and to finally work (see above, exceptions to diploma requirements).<br />
<br />
Finally, the government. Bah. At this point it's just sad. What do you really think they'll do? Do you seriously think we're on the verge of a police state? I'd be the first to rise up. You, generally speaking, voted for the government that was put in place. Possibly as the "least worst" alternative, but still. They're there to find solutions and try to reach a compromise for the good of the population in general. The fact is, when you make decisions, sooner or later you displease someone. I sense I'm attracting plenty of comments on this blog myself... I'd still rather make those who study pay for their studies than the population at large, which doesn't need more taxes, or money diverted from other areas (*cough* healthcare *cough*) that badly need it.<br />
<br />
What insults and revolts me the most in all of this is the misinformation. If I've written nonsense here, reply and I'll edit. Or I'll explain my reasoning. But the fact that journalists construct news stories and alter them (giving their opinion, which has bothered me for a long time already... generally, on TV, your job is journalist, not editorialist!!! Unless your name is Mongrain, Martineau, Lévesque, or even Poirier to some extent, we don't want to know your opinion, just the facts (and that way, we can avoid watching the opinion shows)) is totally unacceptable. Then again, that's still no reason to take to the streets and flip cars over. On the other hand, I've seen a lot of speculation on the subject. Let's not forget that journalists have to piece together a great number of disparate details to report the facts of an event like a riot: all sorts of information coming from everywhere -- social networks, the police, their own experience. We all make mistakes. I prefer to give the benefit of the doubt.<br />
<br />
To wrap up, though, I'm simply amused by everything I see, in the media, in discussions with colleagues, students or not, and so on. Too many opinions (and here I am adding more!), too few hard facts, too much needless violence and badly channeled anger. An impossible number of ill-informed opinionated people... I may be one of them, even though I dare believe that everything I've written here comes from my own analysis, coloured by a lifetime of my own experiences, and based on the facts I was able to find and collect. With a bit of luck, it's a not-too-crazy synthesis of a situation that very much is.<br />
<br />
The most ridiculous part now is the Guy Fawkes masks. Anonymous. Some people have watched V for Vendetta a few too many times... But it does remind me of a quote from that very film:<br />
<blockquote class="tr_bq">
People should not be afraid of their governments. Governments should be afraid of their people.</blockquote>
<br />
Still, we're not at 1984 yet, nor at Alan Moore's London. When we are, you'll see me in the street too. In the meantime, get out of my way and let me live in peace.<br />
<br />
<i>Thoughtful discussion and criticism are welcome, but please send all flames to /dev/null.</i><br />Matt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com6tag:blogger.com,1999:blog-1400765799263079238.post-73905611972660078522012-03-07T12:46:00.000-05:002012-03-09T17:11:29.645-05:00Call for testing: BlueZ A2DP and HSP/HFP profilesWe just landed <a href="https://launchpad.net/ubuntu/+source/bluez/4.98-2ubuntu2">bluez 4.98-2ubuntu2</a> in Precise. The key change was to enable the Source and Gateway profiles by default, which now allows people to use their system as an audio output device. It works great with Android, but any help in testing this on a bunch of different devices (especially iPhones) would be much appreciated!<br />
<br />
The Source profile is what enables the A2DP bluetooth profile. With it, you can use your system as a stereo audio output device: in other words, you can send audio from an external device such as a phone or tablet to your computer (though this will require a bit of work on the PulseAudio side to get the audio stream routed through the right devices).<br />
<br />
There's the AskUbuntu question "<span style="font-size: small;"><a class="question-hyperlink" href="http://askubuntu.com/questions/2573/can-i-use-my-computer-as-an-a2dp-receiver">Can I use my computer as an A2DP receiver?</a></span>" that describes the steps to use this <i><b>right now</b></i>. Thanks to Steve Langasek for figuring out the details. I've also written a draft script that implements the suggested steps and makes it simple:<br />
<br />
<pre>#!/bin/sh
# Loop a bluetooth A2DP source into the local ALSA sink via PulseAudio.
# Usage: ./this_script enable|disable
BTSOURCE=$(pactl list short sources | grep bluez_source | awk '{print $2;}')
SINK=$(pactl list short sinks | grep -v Monitor | grep alsa_output.pci | awk '{print $2;}')
[ -n "$BTSOURCE" ] || { echo "No bluez source found; is the device connected?" >&2; exit 1; }
case "$1" in
  enable)
    # route audio from the bluetooth source to the local sink
    pactl load-module module-loopback source="$BTSOURCE" sink="$SINK"
    ;;
  disable)
    # tear down the loopback module loaded above
    pactl unload-module $(pactl list short modules | grep loopback | grep "$BTSOURCE" | cut -f 1) || true
    ;;
  *)
    echo "Usage: $0 enable|disable" >&2
    ;;
esac</pre>
<br />
That script is meant to be called as '"whatever_you_named_it" enable'. Don't forget the "enable" argument -- or "disable" to turn off the stream -- otherwise it won't do anything.<br />
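<br />
For instance, if you saved it as bt-loopback.sh (a name I just made up), you'd run:<br />
<blockquote class="tr_bq">
sh bt-loopback.sh enable<br />
# ...stream some music from the phone...<br />
sh bt-loopback.sh disable</blockquote>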
<br />
The Gateway profile enables the HSP/HFP bluetooth profiles. This means we're getting closer to supporting phone calls from a bluetooth-connected phone on an Ubuntu computer. There's already some amount of support for this via oFono and the telepathy-ring project, although some extra work is still needed -- hopefully we can get this fixed by the Precise release. :DMatt Trudelhttp://www.blogger.com/profile/10138570513134453565noreply@blogger.com7