Saturday 29 August 2009

Mentoring + Ubuntu Developer Week


Needless to say, I'm going to Ubuntu Developer Week. Sadly, this time my IRC participation will be limited given that I'll be at work, and thus unable to connect. I'm still very eager to take a look at the chat histories that will be posted after the sessions. It all sounds really interesting.

Also, timing couldn't be better for more interesting learning stuff to come up. I recently sent an e-mail to Ubuntu Mentors, asking for help in looking at different things, such as NetworkManager. It turns out that I will be working with Alexander Sack.

So far, I've been looking at two things: building the network-manager-openconnect package, and looking at network-manager bugs.

The bugs are bugging me somewhat. There's quite a lot to look at, and some of the stuff just seems so obscure. Still, I think I'm getting better at sorting bugs.

Then there's building the network-manager-openconnect package. It seems to build fine-ish, but I get some weird errors when I try to enable a VPN connection -- I think I'm close to figuring it out, though.
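
In case you want to try the same thing at home, this is roughly the standard way to rebuild an Ubuntu source package (just a rough sketch -- depending on where the current packaging lives, you might pull the source from a Bazaar branch instead):

sudo apt-get build-dep network-manager-openconnect
apt-get source network-manager-openconnect
cd network-manager-openconnect-*/
debuild -us -uc

debuild comes from the devscripts package; the -us -uc flags just skip signing the resulting packages.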

Monday 17 August 2009

Ubuntu Developer Week coming up!

As Daniel Holbach is announcing in his blog, an all-new Ubuntu Developer Week is coming up soon. It will be held from Monday, 31st August 2009 to Friday, 4th September 2009.

Ubuntu Developer Week is a great way to get to know more of the inner workings of Ubuntu. Plus, it's always packed with incredible talks about various subjects: check out the timetable.

You may also want to take a look at the brochure if you want to know how or where to start developing for Ubuntu.

Subjects announced obviously follow the latest developments in the community, so we'll get to know about fun stuff like Quickly, Papercuts, Mago, as well as lots about Launchpad.

Friday 7 August 2009

Process Accounting data

I've been working since yesterday on a very interesting project: dealing with process accounting data, in the context of PCI-DSS compliance.

Process accounting is quite interesting because of how it's done. Here's a gross over-simplification, so sorry if I'm explaining it all wrong: the kernel, if it has "BSD Process accounting" enabled (and/or "version 3", as it is in Ubuntu, or at least on my Karmic system), waits for a special signal from userland telling it to turn on process accounting, and then starts writing binary data to a file on the filesystem.
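
If you're curious whether your own kernel has these options built in, and your distribution ships the kernel configuration in /boot (Ubuntu does), you can simply grep for them:

grep BSD_PROCESS_ACCT /boot/config-$(uname -r)

If the feature is available you should see CONFIG_BSD_PROCESS_ACCT=y (and, for the newer format, CONFIG_BSD_PROCESS_ACCT_V3=y) in the output.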

But let's back up a little. What is this process accounting stuff?

Process accounting is a feature that allows you to track every command ever run on a system by any user. As the name says, it helps to "account" for all actions on a system: you can potentially find out about actions that should not have occurred (say, on a system that was compromised), or track down a command that caused an outage to whoever was responsible for an evil evil intervention on a production system without advising anybody... It requires both a kernel piece and a userland piece.

By default, on most systems, the kernel will already have everything enabled to support process accounting. You will only need to install the userland software.

On Ubuntu, you can achieve this with the command:

sudo apt-get install acct

On a Fedora system, you can get the same result with this command (as root):

yum install psacct

Once that is done, run 'sudo /etc/init.d/acct start' on Ubuntu, or 'service psacct start' on Fedora, to start process accounting (that is, give the kernel the go-ahead to start writing to the data file).
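
Once accounting is running, the same packages ship a couple of tools to read the data back. On my Ubuntu system the binary file ends up at /var/log/account/pacct (Fedora keeps it under a slightly different path), and you can poke at it with, for example:

sudo lastcomm        # list the commands recorded so far
sudo sa              # summarize the accounting data per command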

This is where things get complicated. I've searched a little for newer, more versatile tools to handle the binary data spewed out by my systems. It turns out that while acct (or psacct), in other words the GNU Accounting Utilities, are great, they tend to be the only option for dealing with the kernel's data, and they pretty much force you to connect to the system itself in order to search for the information you want. Needless to say, this can be problematic (say the system is now completely broken... what can you do?). It also brings up the issue that in the event of a compromise, you can no longer trust the system, or the files it contains -- not even the accounting data. Backups are an option, so is rsync, but we were looking for something better: something that would send the data to syslog, for example, to be integrated into Splunk or some alerting utilities, both to centralize it and to make it as close to read-only as it can get.
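
Just to illustrate the idea (this is emphatically not what my scripts do, only the most naive version of it), you could already shovel the human-readable output into syslog with something like:

sudo lastcomm | logger -t psacct -p local0.info

That sort of works as a one-off, but it re-reads and re-sends the whole file every time, and it loses some of the information in the binary records.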

That's where my scripts come in.

I've been busy writing a replacement for the lastcomm command as well as something to grab the data from the binary file and feed it into syslog. I've called it acct2syslog.pl.

This is still very early and experimental work, but I already have a Bazaar branch for it. Check out the Launchpad page! If you're interested in helping out, feel free to send me an email or a message on IRC.

Obviously, I can't account for performance at this time. I just don't know how well it will deal with a system if there are thousands of commands running really fast. :)