Older blog entries for Stevey (starting at number 546)

Slaughter is at the cross-roads

There are many system administration and configuration management tools available; I've mentioned them in the past, and we're probably all familiar with our pet favourites.

The "biggies" include CFEngine, Puppet, Chef, and Bcfg2. The "minis" are largely in-house tools, or abuses of existing software such as fabric.

My own personal solution manages my home network, and three dedicated servers I pay for in various ways.

Recently I've been setting up some configuration "stuff" for a friend, and I've elected to manage some of the setup with this system of my own - so I guess I need to decide what I'm going to do going forward.

slaughter is well maintained, largely by virtue of not doing too much. The things it does are genuinely useful, and entirely sufficient to handle a lot of the common tasks - and because the only server-side requirement is an HTTP server, and the only client-side requirement is cron, it is trivial to deploy.
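
For illustration, the client side can be as small as a single cron entry; a minimal sketch, where the flag and the paths are my assumptions rather than slaughter's documented interface:

# /etc/cron.d/slaughter - hypothetical example entry:
# fetch and apply the policies from the HTTP server once an hour.
0 * * * *   root    /usr/bin/slaughter --server=policies.example.com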

In the past I've thought of three alternatives that would make it more complex:

  • Stop using HTTP and have a mini-daemon to both serve and schedule.
  • Stop using HTTP and use rsync instead.
  • Rewrite it in JavaScript. (Yes, really.)

Each approach has its appeal. I like the idea of only executing GPG-signed policies, and that would be trivial if there were a real server in place. It could also use SSL, because that's all you need for security (ha!).

On the other hand using rsync would let me trivially implement the one primitive I actually miss at times - the ability to recursively download and install a remote directory tree. (I currently solve this by downloading a .tar file and unpacking it. Not good: it doesn't cope with template expansion, and it is fiddlier than I like.)
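
That primitive could be little more than a thin wrapper around rsync; a minimal sketch, with hypothetical module and path names:

# Hypothetical "FetchDirectory" primitive: mirror a remote tree into
# place, preserving permissions and pruning files we no longer ship.
rsync -az --delete rsync://policies.example.com/trees/etc-skel/ /etc/skel/

Template expansion would still need a post-processing pass, but the transfer itself becomes a one-liner.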

In the lifetime of the project I think I've had 20-50 feature requests or comments, which suggests it might actually be used by 50-100 people. (Ha! Optimism)

In the meantime I'll keep a careful eye on the number of people who download the tarball & the binary packages...

ObQuote: "I have vermin to kill." - Kill Bill

Syndicated 2011-11-01 21:21:26 from Steve Kemp's Blog

So I've been creating more things

I realise it has been nearly two months since I last posted anything here. The good news is I'm still alive!

Mostly the past couple of months have been full of cute victims to take pictures of, which has helped me set up a simplified portfolio site.

I still continue to prefer images of people and I was recently pleased with the delivery of my first "photobook". Over the past couple of years I've slowly decorated my flat with prints (4"x6" - A2) of my pictures, but seeing the pictures in a nicely bound book makes them feel so much more real.

I've also been doing a little more software development, mostly relating to the archiving of images and the workflow of taking RAW images, converting them, and finally uploading them via rsync. I suspect the tools I've put together are Steve-specific, but I did have some fun with duplicate image detection and elimination - something I've written about in the past.

ObQuote: "Better to write for yourself and have no public than to write for the public and have no self" - Cyril Connolly.

Syndicated 2011-10-24 15:25:37 from Steve Kemp's Blog

Scriptable email clients

This is just a quick post to remind myself in the morning; as soon as I've made it I intend to turn my computer off, and leave it off until I can re-organize my office.

I've been using mutt for my email for the past few years. Nothing compares to the flexibility of procmail/sieve for organizing mail server-side, and mutt is ideal for reading the result.

With the addition of the mutt-patched sidebar mode you can even go for a few days before realizing you're not in a graphical environment. But one thing I do long for is the ability to execute scripts at various times.

Thus far I've not actually planned what I'd like to do, but as a starting point imagine being able to execute a hook when new mail arrives, or when you send a message matching some pattern.

There are some things out there, such as the various hacks designed to abort sending a message if you mention "See attachment" in the body but fail to add one before sending. These hacks generally abuse the sendmail configuration, such that they're extremely ad-hoc and hard to chain/nest.

I've mellowed out over the years, and I have no interest in attempting to write a mail-client. (Though at the same time, how hard can it be? Restrict yourself to using inotify on ~/Maildir, offload delivery to exim, and you're almost done. I guess the hard part is the UI - though I do like the mutt + sidebar layout. Write the whole thing in some scripty language?)
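
As a proof of concept the event side looks almost trivial; a sketch using inotifywait from the inotify-tools package, where ~/bin/on-new-mail is a hypothetical hook script:

#!/bin/sh
# Watch the Maildir tree; maildir delivery renames messages into a
# new/ directory, so watch for moves and creations alike.
inotifywait -m -r -e moved_to -e create --format '%w%f' ~/Maildir |
while read path; do
    ~/bin/on-new-mail "$path"
done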

I'll re-examine notmuch and gnus over the next week or two, but I suspect both will continue to disappoint in various ways.

Anyway, for the moment I'm just pondering. But threading is an obvious concern. Most current mutt hooks relate to the local folder, or the local message. If I were viewing a message in one directory when a new-mail notification fired for deliveries to both ~/Maildir and ~/Maildir/.people.foo, I'd need to either serialise the hooks or thread them.
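
The cheap answer to serialisation is an exclusive lock around every hook invocation, so concurrent notifications queue rather than race; a sketch using flock(1) inside the watch-loop above:

# Hooks run strictly one at a time, queueing behind the lock.
flock "$HOME/.mailhook.lock" "$HOME/bin/on-new-mail" "$path"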

Ponder ponder.

In other news I've been doing more photography recently. Nothing cohesive except for my recent experiment with shooting a "street-girl" outdoors in falling light, but that was an interesting challenge and the results were sufficient to make me want to try shooting outdoors in an organized fashion again. (Some random images have been linked to from my wee twitter page.)

ObFilm: "She doesn't get eaten by the eels at this time." - The Princess Bride

Syndicated 2011-09-05 18:40:38 from Steve Kemp's Blog

16 Jul 2011 (updated 16 Jul 2011 at 23:06 UTC)

There is a reason why I test sites

Recently I was at a pub and there was an advert for pub tokens displayed on the window. Seemed like a cute idea:

  • Buy & donate tokens which can be spent (only) on beer.

Perfect for friends, family, remote hackers/developers & similar.

When I got home I checked out their site. Seemed simple and nice enough, with good coverage (in terms of local drinking establishments that would accept their tokens).

I decided to sign up, with the intention of gifting my sister some delicious beer. Unfortunately that's where it all went wrong.

I tend to act the same on all new sites: partly to amuse myself, and partly to get a feel for how safe/secure/good the site is, I'll try to log in with a few different values.
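
The probes themselves are nothing clever; something like the following, where the endpoint and field names are invented for illustration:

# Submit a bare single-quote as the email address; a raw SQL or PHP
# error in the response means input is being interpolated directly.
curl -s -d "email='" -d "password=x" http://www.example.com/login.php |
    grep -i -e 'SQL' -e 'Fatal error'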

You know you're in trouble when you see responses like this:

SELECT * FROM cms_module_pubtokens_users where email = '"'' AND
   password =''"' LIMIT 1

Fatal error: Call to a member function FetchRow() on a non-object in
  /home/pubtokens/U79P18WQ/htdocs/includes/functions.php on line 291

Suffice it to say I sent them an email, then poked them on twitter, but to no avail.

In conclusion they don't get my money, and I couldn't recommend them to anybody else at this point either. As I'm not a customer at least I can rest easy knowing my details haven't been compromised at any point over the past few months.

ObQuote: "It can't rain all the time" - The Crow

Syndicated 2011-07-16 10:29:47 (Updated 2011-07-16 23:06:17) from Steve Kemp's Blog

Steve, in brief

In brief:

Finally, having recently bought the Canon 70-200mm f/2.8 lens for a king's ransom, I've agreed to buy the 24-105mm f/4.0 lens from a friend - that will be my new portrait lens of choice, and I'll sell my existing 85mm f/1.8.

ObQuote: "I could help you cross your yard." - Up

Syndicated 2011-06-26 11:05:50 from Steve Kemp's Blog

So you want to install the most recent firefox?

If you've been following new releases you'll see there is a new Firefox browser out, version 5.0.

This will almost certainly make its way into Debian's experimental tree soon, but that doesn't help users of the Debian Stable release. The only sane option for those users (such as myself), without a backport, is to install locally.

So I did the obvious thing: I made /opt/firefox, then installed the binary release into it. And I found that it was good, lovely, and fast.

Unfortunately the system firefox and the local firefox are not really compatible. Run the local one, then click on a link in the gnome terminal and it wants to open the system one. Ho hum.

The solution:

  • Remove the packaged firefox & iceweasel versions.
  • Create the shell scripts /usr/bin/firefox & /usr/bin/iceweasel to exec the one stored beneath /opt (a sketch follows).
  • Rejoice.
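
Each wrapper need only be a couple of lines; a minimal sketch, assuming the binary lives beneath /opt/firefox (the FIREFOX_DIR variable is my invention, not necessarily the packaged configuration format):

#!/bin/sh
# /usr/bin/firefox - hand off to the locally-installed build.
FIREFOX_DIR=/opt/firefox
[ -r /etc/firefox/firefox.conf ] && . /etc/firefox/firefox.conf
exec "$FIREFOX_DIR/firefox" "$@"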

Of course, this being Debian, we don't want to create those files by hand. So here is a package that will do it for you:

Download. Build. Install. If you installed your local firefox to a location other than /opt/firefox, update the configuration file /etc/firefox/firefox.conf to point to it.

Possibly useful?

ObQuote: "I could help you cross your yard." - Up

Syndicated 2011-06-23 22:39:05 from Steve Kemp's Blog

12 Jun 2011 (updated 7 Mar 2012 at 01:08 UTC)

Continuous integration that uses chroots?

I'd like to set up some auto-builders for some projects - and these projects must be built upon Lenny, Squeeze, Lucid, and multiple other distros. (i386 and amd64, obviously.)

Looking around, I figure it should be simple. There are a lot of continuous integration tools out there - but on closer inspection they all seem to work in temporary directories, and behave a little differently to how I'd expect.

Ultimately I want to point a tool at a repository (mercurial), and receive a status report and a bunch of .deb packages for a number of distributions.

The alternative seems to be to write a simple queue submission system, then for each job popped from the queue run a script which (see the sketch after this list):

  • Creates a new debootstrap-based chroot.
  • Installs build-essential, mercurial, etc.
  • Fetches the source.
  • Runs make.
  • Copies the files produced in ./binary-out/ to a safe location.
  • Cleans up.
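
A hedged sketch of such a worker, where $DIST, $ARCH, and $REPO come from the popped queue entry, and the mirror and paths are illustrative (bind-mounting /proc and copying in /etc/resolv.conf are elided):

#!/bin/sh
set -e

# Create a pristine chroot for the requested distribution.
root=$(mktemp -d /srv/builds/$DIST-$ARCH.XXXXXX)
debootstrap --arch=$ARCH $DIST $root http://ftp.debian.org/debian

# Install the build prerequisites, fetch the source, and build.
chroot $root apt-get update
chroot $root apt-get install -y build-essential mercurial
chroot $root hg clone $REPO /build
chroot $root make -C /build dependencies test build

# Preserve the results, then throw the chroot away.
mkdir -p /srv/results/$DIST-$ARCH
cp $root/build/binary-out/* /srv/results/$DIST-$ARCH/
rm -rf $root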

Surely this wheel must already exist? I guess it's a given that we have to find build-dependencies, and that we cannot just run "pbuilder *.dsc" - as the .dsc doesn't exist in advance. We really need to run "make dependencies test build", or similar.

Hudson looked promising, but it builds things into /var/lib/hudson, and doesn't seem to support the use of either chroots or schroots.

ObQuote: "I feel like I should get you another sweater." - "Friends"

Syndicated 2011-06-12 15:01:34 (Updated 2012-03-07 01:08:53) from Steve Kemp's Blog

So I chose fabric and reported a bug...

When soliciting opinions recently, I discovered that the python-based fabric tool was not dead, and was in fact perfect for my needs.

During the process of getting acquainted with it I looked over the source code; it was mostly neat, but there was a trivial (low-risk) symlink attack present.
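
The class of bug is worth spelling out, because it is so easy to write; a hypothetical illustration (not fabric's actual code):

# Unsafe: the name is predictable, so a hostile local user can
# pre-create /tmp/myapp.1234 as a symlink to a file the victim owns.
echo "data" > /tmp/myapp.$$

# Safer: mktemp atomically creates a fresh file with a random name.
tmp=$(mktemp /tmp/myapp.XXXXXX) || exit 1
echo "data" > "$tmp"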

I reported that as #629003 & it is now identified more globally as CVE-2011-2185.

I guess this goes to show that getting into the habit of looking over source code when you install a new package is a worthwhile thing to do; and probably easier than organising a distribution-wide security audit </irony>.

In other news I'm struggling to diagnose a perl segfault when running a search using the swish-e perl modules. Could it be security-worthy? Possibly. Right now I just don't want my scripts to die when I attempt to search 20GB of syslog data. Meh.

ObQuote: "You're scared of mice and spiders, but oh-so-much greater is your fear that one day the two species will cross-breed to form an all-powerful race of mice-spiders who will immobilize human beings in giant webs in order to steal cheese." - Spaced.

Syndicated 2011-06-06 18:57:55 from Steve Kemp's Blog

How do you deploy applications?

I've got a few projects which are hosted in mercurial repositories. To deploy them I manually check out the repository, create symlinks by hand, then update apache/thttpd to make them work.

When I want to update my applications I manually become the correct user, find the repository and run "hg pull --update".

I think it is about time that I sat down and started doing things neatly. I've made a start by writing a shell script for each site, called .deploy, which I drive like so:

#!/bin/sh
#
# ~/bin/deploy - walk up the directory tree, and execute the .deploy
# file associated with this project.
#
while true; do

    # If we've reached the root directory we're done - and have failed.
    if [ "$PWD" = "/" ]; then
        echo "Reached / without finding a .deploy file"
        exit 1
    fi

    # Found our file?  Run it and stop.
    if [ -x ".deploy" ]; then
        ./.deploy
        exit 0
    fi

    cd ..
done

The main candidate seems to be capistrano, which was previously very Ruby-on-Rails-centric, but these days seems to be divorced from it.

Alternatively there is the python-based fabric project, which has been stalled for two years; vlad the deployer (great name!), which is another Rake-based and thus Ruby-loving system; and finally whiskey disk, which is limited to Git-based projects as far as I can tell.

In short each of these projects is very similar, and each relies upon being able to do two things:

  • SSH to remote machine(s) and run a command.
  • Copy files to the remote machine(s) / pull a repository from a known location.

I've automated SSH before, and I've automated SCP/rsync. The hard part is doing both "copy" and "command" over one SSH channel - such that you don't get prompted for passwords multiple times - and handling the case of running sudo where appropriate. Still, most of the initial stages are trivial.
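
OpenSSH's connection multiplexing handles the single-channel part; a sketch, with host and paths invented (and passwordless sudo assumed on the remote side):

# Open a master connection once - authenticating a single time -
# then both the copy and the remote command ride over it.
CP="$HOME/.ssh/cm-%r@%h:%p"
ssh -M -f -N -o ControlPath="$CP" deploy@www.example.com
scp -o ControlPath="$CP" site.tar.gz deploy@www.example.com:/tmp/
ssh -o ControlPath="$CP" deploy@www.example.com \
    "sudo tar -C /srv/site -xzf /tmp/site.tar.gz"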

I wonder what project I should be using:

  • I like perl. Perl is good.
  • I use mercurial. Mercurial is good.
  • Rake is perhaps permissible, but too ruby-centric == not for me.

Anything I've missed? Or pointers to good documentation?

ObQuote: "We need to be a little more constructive here, okay?" - Terminator 2

Syndicated 2011-05-30 13:43:11 from Steve Kemp's Blog

Images transitioned, and MySQL solicitations.

Images Moved

So I've retired my old picture hosting sub-domain, and moved all the files which were hosted by the dynamic system into a large web-root.

This means no more uploads are possible, but each existing link continues to work.

Happily the system generated "random" links, and it was just a matter of moving each uploaded file into that static location, then removing the CGI application.

The code for the new site has now been made public, although I suspect there will need to be some email-pong if anybody wishes to use it. Comments welcome.

MySQL replacements?

Let's pretend I work for a company which has dealings with many MySQL users.

Let's pretend that even though it is true, so that I don't have to get into specifics.

Let us pretend that we have many many hundreds of users who are very happy with MySQL, but that we have a few users who have "issues". That might be:

  • mysqld segfaulting every few months, with no real idea why.
    • Transactions are involved. So are stored procedures.
    • MySQL paid support might have been perfect, or it might have led to "yup, it's a bug. good luck rebuilding with this patch. let us know how it turns out kthxbai."
    • Alternatively it might not have been reproducible.
  • Master-Master and Master-Slave setups being "unreliable" such that data inconsistencies arise despite MySQL regarding them as being in sync.
    • Good luck resolving that when you have two almost-identical "mysqldump" outputs which are 6GB each, and which cause "diff" to exit with "out of memory" even on a 64GB host. (See the sketch after this list.)
    • Is it possible to view differences in table-data, via the binary records? That'd be a fun project... for a masochist.
  • Poor usage of resources.
  • Heavy concurrency caused by poorly developed applications in a load-balanced environment, leading to stalling queries. (Wordpress + poor wordpress plugins, I'm looking at you; you're next on my killfile.)
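
On the diff-the-dumps point above, a hedged workaround: dump each table separately in a deterministic order and compare per-table checksums, rather than diffing 6GB files (the database name is invented):

# Emit "table checksum" pairs; run on both masters, then diff the
# tiny outputs to see which tables actually disagree.
for t in $(mysql -N -B -e 'SHOW TABLES' mydb); do
    sum=$(mysqldump --no-create-info --skip-comments --order-by-primary mydb "$t" | md5sum)
    echo "$t ${sum%% *}"
done

(Maatkit's mk-table-checksum automates much the same idea.)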

To compound this problem some of these installations may or may not be running Etch. Let us pretend they are not, just to simplify things. (They mostly aren't these days, but I'm sure I could think of one or two if I tried.)

So, in this hypothetical situation what would you recommend?

I know there are new forks aplenty of MySQL - Drizzle et al. I suspect most of the forks will be short-lived; lots of this stuff is hard and non-sexy. I suspect the long-lived forks are probably concentrating on edge-cases we've not hit (yet), or on sexy, exciting things like new storage engines and going NoSQL like all the cool kids.

Realistically going down the PostgreSQL road is liable to lead to a wholly different set of problems, and a significant re-engineering of several sites, applications, and tools, with no proof of stability.

Without wanting to jump ship entirely, what, if any, are our options?

PS. MySQL, I still mostly love you, but my two most recent applications were written to use redis instead. Just a coincidence... I swear. No, put down that axe. Please, can't we just talk about it?

ObQuote: "I've calculated your chance of survival, but I don't think you'll like it." - The Hitchhiker's Guide to the Galaxy (film).

Syndicated 2011-05-16 23:38:01 from Steve Kemp's Blog
