Recent blog entries for ctrlsoft

The Samba Buildfarm

Portability has always been very important to Samba. Nowadays Samba is mostly used on top of Linux, but Tridge developed the early versions of his SMB implementation on a Sun workstation.

A few years later, when the project was being picked up, it was ported to Linux and eventually to a large number of other free and non-free Unix-like operating systems.

Initially, regression testing on different platforms was done manually and ad hoc.

Once Samba had support for a larger number of platforms, including numerous variations and optional dependencies, making sure that it would still build and run on all of these became a non-trivial process.

To make it easier to find platform-specific regressions in the Samba codebase, Tridge put together a system to automatically build Samba regularly on as many platforms as possible. So, in Spring 2001, the build farm was born - this was a couple of years before other tools like buildbot came around.

The Build Farm

The build farm is a collection of machines around the world that are connected to the internet, with as wide a variety of platforms as possible. In 2001, it wasn't feasible to just have a single beefy machine or a cloud account on which we could run virtual machines with AIX, HPUX, Tru64, Solaris and Linux so we needed access to physical hardware.

The build farm runs as a single non-privileged user, which has a cron job set up that runs the build farm worker script regularly. Originally the frequency was every couple of hours, but soon we asked machine owners to run it as often as possible. The worker script is as short as it is simple: it retrieves a shell script from the main build farm repository with instructions to run, and after it has done so, it uploads a log file of the terminal output to samba.org using rsync and a secret per-machine password.
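
Conceptually, the worker's job boils down to something like the following Python sketch. The real worker is a shell script, and the URL, rsync module and paths below are made up for illustration:

# Illustrative sketch of the build farm worker's job; not the actual script.
import subprocess
import urllib.request

HOSTNAME = "myhost"                       # hypothetical machine name
PASSWORD_FILE = "/home/build/.password"   # hypothetical per-machine secret

# 1. Fetch the instructions to run from the build farm master.
instructions = urllib.request.urlopen("https://build.samba.org/instructions.sh").read()
with open("instructions.sh", "wb") as f:
    f.write(instructions)

# 2. Run the instructions, capturing all terminal output in a log file.
with open("build.log", "wb") as log:
    subprocess.call(["sh", "instructions.sh"], stdout=log, stderr=subprocess.STDOUT)

# 3. Upload the log to samba.org over rsync, authenticated with the secret.
subprocess.call([
    "rsync", "--password-file", PASSWORD_FILE,
    "build.log", "rsync://%s@build.samba.org/build_farm_data/" % HOSTNAME,
])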

Some build farm machines are dedicated, but over the years a large number have simply run as a separate user account on a machine that was tasked with something else. Most build farm machines are hosted by Samba developers (or their employers), but we've also had a number of community volunteers who were happy to add an extra user with an extra cron job on their machine, and for a while companies like SourceForge and HP provided dedicated porter boxes that ran the build farm.

Of course, there are some security issues with this way of running things. Arbitrary shell code is downloaded from a host claiming to be samba.org and run. If the machine is shared with other (sensitive) processes, some of the information about those processes might leak into logs.

Our web page has a section about adding machines for new volunteers, with a long list of warnings.

Since then, various other people have been involved in the build farm. Andrew Bartlett started contributing to the build farm in July 2001, working on adding tests. He gradually took over as the maintainer in 2002, and various others (Vance, Martin, Mathieu) have contributed patches and helped out with general admin.

In 2005, Tridge added a script to automatically send out an e-mail to the committer of the last revision before a failed build. This meant it was no longer necessary to bisect through build farm logs on the web to find out who had broken a specific platform and when; you'd just be notified as soon as it happened.
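
The idea is simple: when a build breaks, ask the version control system who made the most recent commit in that tree and mail them the tail of the log. A rough Python sketch of that logic (not the actual script; the sender address, subject wording and use of git are purely illustrative):

# Rough sketch of "mail the last committer when a build breaks"; not the real script.
import smtplib
import subprocess
from email.mime.text import MIMEText

def last_committer(tree):
    # Ask the version control system who made the most recent commit.
    return subprocess.check_output(
        ["git", "-C", tree, "log", "-1", "--format=%ce"]).decode().strip()

def notify_breakage(tree, host, compiler, log_tail):
    msg = MIMEText(
        "The build of %s with %s on %s failed.\n\nLast part of the log:\n\n%s"
        % (tree, compiler, host, log_tail))
    msg["Subject"] = "BUILD of %s BROKEN on %s" % (tree, host)   # wording is invented
    msg["From"] = "build@samba.org"                              # hypothetical sender
    msg["To"] = last_committer(tree)
    smtplib.SMTP("localhost").send_message(msg)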

The web site

Once the logs are generated and uploaded to samba.org using rsync, the web site at http://build.samba.org/ is responsible for making them accessible to the world. Initially there was a single Perl file that took care of listing and displaying log files, but over the years the functionality has been extended to do much more than that.

Initial extensions to the build farm added support for viewing per-compiler and per-host builds, to allow spotting trends. Another addition was searching logs for common indicators of running out of disk space.
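
Checks like the disk-space one are just pattern matches over the uploaded logs, along these lines (the exact patterns the build farm looks for are not reproduced here):

# Sketch: flag logs that look like the machine ran out of disk space.
import re

DISK_FULL_PATTERNS = [
    re.compile(b"No space left on device"),   # the usual ENOSPC message
    re.compile(b"[Dd]isk full"),
]

def looks_like_disk_full(log_path):
    with open(log_path, "rb") as f:
        data = f.read()
    return any(p.search(data) for p in DISK_FULL_PATTERNS)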

Over time, we also added more samba.org projects to the build farm. At the moment there are about a dozen projects.

In a sprint in 2009, Andrew Bartlett and I changed the build farm to store machine and build metadata in a SQLite database rather than parsing all recent build log files every time their results were needed.
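
Conceptually the cache boils down to a couple of small tables and a query per page, roughly like the sketch below; the table and column names are invented and the real schema differs:

# Conceptual sketch of the metadata cache; table and column names are invented.
import sqlite3

db = sqlite3.connect("buildfarm.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS host (
    name TEXT PRIMARY KEY,
    platform TEXT,
    owner_email TEXT
);
CREATE TABLE IF NOT EXISTS build (
    id INTEGER PRIMARY KEY,
    tree TEXT,               -- project being built
    host TEXT REFERENCES host(name),
    compiler TEXT,
    revision TEXT,
    status TEXT,             -- e.g. ok / compile failed / test failed
    upload_time INTEGER
);
""")

# Rendering a page is now a query rather than a pass over all recent log files:
recent = db.execute(
    "SELECT tree, host, compiler, status FROM build "
    "ORDER BY upload_time DESC LIMIT 100").fetchall()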

In a follow-up sprint a year later, we converted most of the code to Python. We also added a number of extensions; most notably, linking the build result information with version control information so we could automatically email the exact people that had caused the build breakage, and automatically notifying build farm owners when their machines were not functioning.

autobuild

Sometime in 2011 all committers started using the autobuild script to push changes to the master Samba branch. This script enforces a full build and testsuite run for each commit that is pushed. If the build or any part of the testsuite fails, the push is aborted. This alone massively reduced the number of problematic changes that were pushed, making it less necessary to rely on the build farm to make us aware of issues.
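
Conceptually, autobuild does something like the following sketch: build and test a pristine copy of the branch, and only push if everything passed. The real script does considerably more, and the push URL below is a placeholder:

# Simplified sketch of the autobuild idea: gate the push on a clean build and test run.
import os
import subprocess
import sys
import tempfile

def run(args, cwd=None):
    print("+", " ".join(args))
    subprocess.check_call(args, cwd=cwd)

def autobuild(local_repo, push_url, branch="master"):
    local_repo = os.path.abspath(local_repo)
    workdir = tempfile.mkdtemp()
    # Build and test a pristine copy of what we are about to push.
    run(["git", "clone", local_repo, workdir])
    run(["./configure"], cwd=workdir)
    run(["make"], cwd=workdir)
    run(["make", "test"], cwd=workdir)
    # Only reached if every step above succeeded.
    run(["git", "push", push_url, branch], cwd=local_repo)

if __name__ == "__main__":
    try:
        autobuild(".", "ssh://git.example.org/samba.git")   # placeholder push URL
    except subprocess.CalledProcessError:
        sys.exit("build or testsuite failed; not pushing")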

The rewrite also introduced some time bombs into the code. The way we called out to our ORM caused the code to fetch all build summary data from the database every time the summary page was generated. Initially this was not a problem, but as the table grew to 100,000 rows, the build farm became so slow that it was frustrating to use.
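
The underlying problem is easy to illustrate without any ORM at all: pulling every row into memory to render a page, versus letting the database hand back only what the page needs. A sketch, reusing the invented schema from earlier:

# Illustrative only: why the summary page got slow as the build table grew.
import sqlite3

db = sqlite3.connect("buildfarm.db")

# What the old code effectively did: fetch every build row into memory and
# summarise in Python. Fine with a few thousand rows, painful at 100,000.
rows = db.execute(
    "SELECT tree, host, compiler, status FROM build ORDER BY upload_time").fetchall()
latest = {}
for tree, host, compiler, status in rows:
    latest[(tree, host, compiler)] = status    # keeps the most recent status

# What the page actually needs: one row per (tree, host, compiler) combination,
# computed by the database itself.
summary = db.execute("""
    SELECT tree, host, compiler, status FROM build
    WHERE id IN (SELECT MAX(id) FROM build GROUP BY tree, host, compiler)
""").fetchall()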

Analysis tools

Over the years, various special build farm machines have also been used to run extra code analysis tools, like static code analysis, lcov, valgrind or various code quality scanners.

Summer of Code

Over the last couple of years the build farm has been running happily and hasn't changed much.

This summer one of our summer of code students, Krishna Teja Perannagari, worked on improving the look of the build farm - updating it to the current Samba house style - as well as various performance improvements in the Python code.

Jenkins?

The build farm still works reasonably well, though it is clear that various other tools that have had more developer attention have caught up with it. If we had to reinvent the build farm today, we would probably end up using an off-the-shelf tool like Jenkins, which wasn't around 14 years ago. We would also be able to get away with using virtual machines for most of our workers.

Non-Linux platforms have become less relevant in the last couple of years, though we still care about them.

The build farm in its current form works well enough for us, and I think porting to Jenkins - with the same level of platform coverage - would take quite a lot of work and have only limited benefits.

(Thanks to Andrew Bartlett for proofreading the draft of this post.)

Syndicated 2015-02-08 00:06:23 from Stationary Traveller

Autonomous Shard Distributed Databases

Distributed databases are hard. Distributed databases where you don't have full control over what shards run which version of your software are even harder, because it becomes near impossible to deal with fallout when things go wrong. For lack of a better term (is there one?), I'll refer to these databases as Autonomous Shard Distributed Databases.

Distributed version control systems are an excellent example of such databases. They store file revisions and commit metadata in shards ("repositories") controlled by different people.

Because of the nature of these systems, it is hard to weed out corrupt data if all shards ignorantly propagate broken data. There will be different people on different platforms running the database software that manages the individual shards.

This makes it hard - if not impossible - to deploy software updates to all shards of a database in a reasonable amount of time (though a Chrome-like update mechanism might help here, if that was acceptable). This has consequences for the way in which you have to deal with every change to the database format and model.

(e.g. imagine introducing a modification to the Linux kernel Git repository that required everybody to install a new version of Git).

Defensive programming and a good format design from the start are essential.

Git and its database format do really well in all of these regards. As I wrote in my retrospective, Bazaar has made a number of mistakes in this area, and that was a major source of user frustration.

I propose that every autonomous shard distributed database should aim for the following:

  • For the "base" format, keep it as simple as you possibly can. (KISS)

    The simpler the format, the smaller the chance of mistakes in the design that have to be corrected later. Similarly, it reduces the chances of mistakes in any implementation(s).

    In particular, there is no need for every piece of metadata to be a part of the core database format.

    (in the case of Git, I would argue that e.g. "author" might as well be a "meta-header")

  • Corruption should be detected early and not propagated. This means there should be good tools to sanity check a database, and ideally some of these checks should be run automatically during everyday operations - e.g. when pushing changes to others or receiving them.

  • If corruption does occur, there should be a way for as much of the database as possible to be recovered.

    A couple of corrupted objects should not render the entire database unusable.

    There should be tools for low-level access to the database, and the format and structure should be documented well enough for power users to understand it, examine files and extract data.

  • No "hard" format changes (where clients /have/ to upgrade to access the new format). Not all users will instantly update to the latest and greatest version of the software. The lifecycle of enterprise Linux distributions is long enough that it might take three or four years for the majority of users to upgrade.

  • Keep performance data like indexes in separate files. This makes it possible for older software to still read the data, albeit at a slower pace, and/or generate older format index files.

  • New shards of the database should replicate the entire database if at all possible; having more copies of the data can't hurt if other shards go away or get corrupted.

    Having the data locally available also means users get quicker access to more data.

  • Extensions to the database format that require hard format changes (think e.g. submodules) should only impact databases that actually use those extensions.

  • Leave some room for structured arbitrary metadata, which gets propagated but which not all clients need to understand and can safely ignore.

    (think of fields like "Signed-Off-By", "Reviewed-By" or "Fixes-Bug" in commit metadata headers; a small sketch of this idea follows after the list)
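
To make that last point concrete, here is a small sketch of a client that parses commit-style headers, understands only a few of them, and carries the rest along untouched. The field names and format are invented for illustration and are not Git's actual object format:

# Sketch: propagate metadata headers you do not understand instead of rejecting them.
KNOWN_HEADERS = {"author", "committer", "parent"}   # fields this client understands

def parse_headers(text):
    """Split "Key: value" lines into (known, unknown) dictionaries."""
    known, unknown = {}, {}
    for line in text.splitlines():
        key, _, value = line.partition(": ")
        (known if key.lower() in KNOWN_HEADERS else unknown)[key] = value
    return known, unknown

def serialize_headers(known, unknown):
    # Unknown headers are written back out untouched, so they still propagate
    # to other shards even though this client cannot interpret them.
    lines = ["%s: %s" % item for item in list(known.items()) + list(unknown.items())]
    return "\n".join(lines) + "\n"

known, unknown = parse_headers(
    "author: Jelmer <jelmer@example.com>\n"
    "Reviewed-By: Someone Else <someone@example.com>\n")
assert "Reviewed-By" in unknown   # preserved, even though this client ignores it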

Syndicated 2014-08-22 18:00:00 from Stationary Traveller

Using Propellor for configuration management

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation it is not big, but large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.

Propellor

Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works much like e.g. xmonad: you write a bit of Haskell configuration code that uses the upstream library, and when you run the application it builds a binary from your code and the upstream libraries.

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.

Setup

Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pis. Compilation takes a while, and the Haskell compiler and libraries it needs amount to about 500MB of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler starts spitting out error messages, I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network, and I don't have a highly available local git server set up.

Syndicated 2014-08-18 21:15:00 from Stationary Traveller

The state of distributed bug trackers

A whopping 5 years ago, LWN ran a story about distributed bug trackers. This was during the early waves of distributed version control adoption, and so everybody was looking for other things that could benefit from decentralization.

TL;DR: Not much has changed since.

The potential benefits of a distributed bug tracker are similar to those of a distributed version control system: ability to fork any arbitrary project, easier collaboration between related projects and offline access to full project data.

The article discussed a number of systems, including Bugs Everywhere, ScmBug, DisTract, DITrack, ticgit and ditz. The conclusion of our favorite grumpy editor at the time was that all of the available distributed bug trackers were still in their infancy.

All of these piggyback on a version control system somehow - either by reusing the VCS database, by storing their data along with the source code in the tree, or by adding custom hooks that communicate with a central server.

Only ScmBug had been somewhat widely deployed at the time, but its homepage gives me a blank page now. Of the trackers reviewed by LWN, Bugs Everywhere is the only one that is still around and somewhat active today.

In the years since the article, a handful of new trackers have come along. Two new version control systems - Veracity and Fossil - come with the kitchen sink included and so feature a built-in bug tracker and wiki.

There is an extension for Mercurial called Artemis that stores issues in an .issues directory that is colocated with the Mercurial repository.

The other new tracker that I could find (though it has also not changed since 2009) is SD. It uses its own distributed database technology for storing bug data, called Prophet, and doesn't rely on a VCS. One of its nice features is that it supports importing bugs from foreign trackers.

Some of these provide the benefits you would expect of a distributed bug tracker. Unfortunately, all those I've looked at fail to provide even the basic functionality I would want in a bug tracker. More so than with a version control system, regular users interact with a bug tracker: they report bugs, provide comments and feedback on fixes. All of the systems I tried make these actions a lot harder than your average Bugzilla or Mantis instance does - they provide a limited web UI or no web interface at all.

Syndicated 2013-11-10 18:23:00 from Stationary Traveller

Quantified Self

Dear lazyweb,

I've been reading about what the rest of the world seems to be calling "quantified self". In essence, it is tracking of personal data like activity, usually with the goal of data-driven decision making. Or to take a less abstract common example: counting the number of steps you take each day to motivate yourself to take more. I wish it'd been given a less annoying woolly name but this one seems to have stuck.

There are a couple of interesting devices available that track sleep, activity and overall health. Probably best known are the FitBit and the jazzed-up armband pedometers like the Jawbone UP and the Nike Fuelband. Unfortunately all existing devices seem to integrate with cloud services somehow, rather than giving the user direct access to their data. Apart from the usual privacy concerns, this means that it is hard to do your own data crunching or create a dashboard that contains data from multiple sources.

Has anybody found any devices that don't integrate with the cloud and just provide raw data access?

Syndicated 2013-11-10 01:05:00 from Stationary Traveller

Porcelain in Dulwich

"porcelain" is the term that is usually used in the Git world to refer to the user-facing parts. This is opposed to the lower layers: the plumbing.

For a long time, I have resisted the idea of including a porcelain layer in Dulwich. The main reason for this is that I don't consider Dulwich a full reimplementation of Git in Python. Rather, it's a library that Python tools can use to interact with local or remote Git repositories, without any extra dependencies.

dulwich has always shipped a 'dulwich' binary, but that's never been more than a basic test tool - never a proper tool for end users. It was a mistake to install it by default.

I don't think there's a point in providing a dulwich command-line tool that has the same behaviour as the C Git binary. It would just be slower and less mature. I haven't come across any situation where it didn't make sense to just directly use the plumbing.

However, Python programmers using Dulwich seem to think of Git operations in terms of porcelain rather than plumbing. Several convenience wrappers for Dulwich have sprung up, but none of them is very complete. So rather than relying on external modules, I've added a "porcelain" module to Dulwich in the porcelain branch, which provides a porcelain-like Python API for Git.

At the moment, it just implements a handful of commands but that should improve over the next few releases:

from dulwich import porcelain

r = porcelain.init("/path/to/repo")
porcelain.commit(r, "Create a commit")
porcelain.log(r)

Syndicated 2013-10-03 22:00:00 from Stationary Traveller

Book Review: Bazaar Version Control

Packt recently published a book on Version Control using Bazaar written by Janos Gyerik. I was curious what the book was like, and they kindly provided me with a digital copy.

The book is split into roughly five sections: an introduction to version control using Bazaar's main commands, an overview of the available workflows, some chapters on the available extensions and integration, some more advanced topics and finally, a quick introduction to programming using bzrlib.

It is assumed the reader has no pre-existing knowledge about version control systems. The first chapters introduce the reader to the concept of revision history, branching and merging and finally collaboration. All concepts are first discussed in theory, and then demonstrated using the Bazaar command-line UI and the bzr-explorer tool. The book follows roughly the same track as the official documentation, but it is more extensive and has more fancy drawings of revision graphs.

The middle section of the book discusses the modes in which Bazaar can be used - centralized or decentralized - as well as the various ways in which code can be landed in the main branch ("workflows"). The selection of workflows in the book is roughly the same as those in the official Bazaar documentation. The author briefly touches on a number of other software engineering topics such as code reviews, code formatting and automated testing, though not sufficiently to make it useful for people who are unfamiliar with these techniques. Both the official documentation and the book complicate things unnecessarily by listing every possible option.

The next chapter is a basic howto on the use of Bazaar with various hosting solutions, such as Launchpad, Redmine and Trac.

The Advanced Features chapter covers a wide range of obscure and less obscure features in Bazaar: uncommit, shelves, re-using working trees, lightweight checkouts, stacked branches, signing revisions and using e-mail hooks.

The chapter on foreign version control system integration is a more extensive version of the public docs. It has some factual inaccuracies; in particular, it recommends the installation of a two-year-old, buggy version of bzr-git.

The last chapter provides quite a good introduction to the Bazaar APIs and plugin writing. It is a fair bit better than what is available publicly.

Overall, it's not a bad book but also not a huge step forward from the official documentation. I might recommend it to people who are interested in learning Bazaar and who do not have any experience with version control yet. Those who are already familiar with Bazaar or another version control system will not find much new.

The book misses an opportunity by following the official documentation so closely. It has the same omissions and the same overemphasis on describing every possible feature. I had hoped to read more about Bazaar's data model, its file format and some of the common problems, such as parallel imports, format hell and slowness.

Syndicated 2013-09-28 17:33:00 from Stationary Traveller

Migrating packaging from Bazaar to Git

A while ago I migrated most of my packages from Bazaar to Git. The rest of the world has decided to use Git for version control, and I don't have enough reason to stubbornly stick with Bazaar and make it harder for myself to collaborate with others.

So I'm moving away from a workflow I know and have polished over the last few years - including the various bzr plugins and other tools involved. Trying to do the same thing using git is frustrating and time-consuming - and I'm sure that'll improve with time. In particular, I haven't found a good way to merge in a new upstream release (from a tarball) while referencing the relevant upstream commits, like bzr merge-upstream can. Is there a good way to do this? What helper tools can you recommend for maintaining a Debian package in git?

Having been upstream for bzr-git earlier, I used its git-remote-bzr implementation to do the conversions of the commits and tags:

% git clone bzr::/path/to/bzr/foo.bzr /path/to/git/foo.git

One of my last changes to bzr-git was to add a bzr git-push-pristine-tar-deltas subcommand, which will export all bzr-builddeb-style pristine-tar metadata to a pristine-tar branch in a Git repository that can be used by pristine-tar directly or through something like git-buildpackage.

Once you have created a git clone of your bzr branch, it should be a matter of running bzr git-push-pristine-tar-deltas with the target git repository and the Debian package name:

% cd /path/to/bzr/foo.bzr
% bzr git-push-pristine-tar-deltas /path/to/git/foo.git foo
% cd /path/to/git/foo.git
% git branch
* master
  pristine-tar

Syndicated 2013-06-02 00:01:00 from Stationary Traveller

OpenChange 2.0 released

Apparently 'tis the season for major software releases.

Julien has just announced the release of OpenChange 2.0, codenamed quadrant. This release fixes a number of important bugs and enables integration with SOGo.

With the SOGo backend, it is now possible to set up an Exchange-compatible groupware server that can be accessed from Outlook without the need to install any connectors.

See the release notes for more details.

Syndicated 2013-02-08 19:00:00 from Stationary Traveller

Bazaar: A retrospective

For the last 7 years I've been involved in the Bazaar project. Since I am slowly stepping down, I recently wrote a retrospective on the project as I experienced it during that time.

Thanks to a few kind people for proofreading earlier drafts; if you spot any errors, please let me know in the comments.

Syndicated 2012-12-19 21:36:43 from Stationary Traveller
