Older blog entries for Stevey (starting at number 592)

Debian-Administration.org almost migrated

The new version of the Debian Administration website is almost ready now. I'm just waiting on some back-end changes to happen on the excellent BigV hosting product.

I was hoping that the migration would be a fun "Christmas Project", but I had to wait for outside help once or twice, and that pushed things back a little. Still, it is hard to be anything other than grateful to folk who volunteer time, energy, and enthusiasm.

Otherwise this week has largely consisted of sleeping, planting baby spider-plants, shuffling other plants around (Aloe Vera, Cacti, etc), and enjoying my new moving plant (video isn't my specific plant).

I've spent too long reworking templer; it is now written in a modular fashion and supports plugins, and the documentation has been overhauled.

The only feedback I received was that it should support inline Perl - so I added that this morning via a new formatter plugin:

Title: This is my page title
Format: perl
Name: Steve
----
This is my page.  It has inline perl:

   The sum of 1 + 5 is { 1 + 5 }

This page was written by { $name }
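
When the page is rendered the bracketed expressions are evaluated, and values from the page header are exposed as variables - so, assuming the Name: header really does become $name as the example implies, the output should read something like:

This is my page.  It has inline perl:

   The sum of 1 + 5 is 6

This page was written by Steve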

ObQuote: "She even attacked a mime. Just found out about it. Seems the mime had been reluctant to talk. " - Hexed

Syndicated 2013-01-06 10:32:42 from Steve Kemp's Blog

Template systems redux.

People seemed interested in my mini-reviews of static-site generators.

I promised to review more in the future, and so to shame myself into doing so I present:

As you can see I've listed my requirements, and I've included a project for each of the tools I've tested.

I will continue to update it as I go through more testing. As previously mentioned, symlink-handling is the thing that kills a lot of tools.

Syndicated 2012-12-30 10:58:19 from Steve Kemp's Blog

More polish for slaughter

A couple of days ago I made a new release of slaughter, to add a new primitive I was sorely missing:

if ( 1 != IdenticalContents( File1 => "/etc/foo" ,
                             File2 => "/etc/bar" ) )
{
   # do something because the file contents differ
}

This allows me to stop blindly overwriting files when their contents are already identical.
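
For example, a policy can now fetch a candidate file to a staging location and only install it when the contents genuinely differ. This is just a sketch - the paths are invented, and the plain system() call stands in for whatever action a real policy would take:

# Fetch the candidate version of the file to a staging area.
FetchFile( Source => "/etc/foo.new",
           Dest   => "/tmp/foo.staged" );

# Install it only if it differs from what is already present.
if ( 1 != IdenticalContents( File1 => "/tmp/foo.staged",
                             File2 => "/etc/foo" ) )
{
    system( "mv", "/tmp/foo.staged", "/etc/foo" );
}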

As part of that work I figured I should be more "visible", so on that basis I've done two things:

After sanity-checking my policies I'm confident I'm not leaking anything I wish to keep private - but there is some news being disclosed ;)

Now that is done I think there shouldn't be any major slaughter-changes for the foreseeable future; I'm managing about ten hosts with it now, and being Perl it suits my needs. The transport system is flexible enough to suit most folk, and there are adequate facilities for making local additions without touching the core - so if people do want to do new things they won't need me to make changes. Hopefully.

ObQuote: "Yippee-ki-yay" - Die Hard, the ultimate Christmas film.

Syndicated 2012-12-29 11:08:53 from Steve Kemp's Blog

Is there an ACL system for "all" revision control systems?

Once upon a time a company started using distributed version control, and set up several project repositories using darcs.

Over time people became more sane and new projects were created in mercurial.

Later still Git became available, and was used by a few of the brave.

Sadly each of these projects is hosted on the same host, and in the home directory of the same user. This means these two commands work:

hg clone ssh://projects@dev.host/foo

git clone ssh://projects@dev.host/bar

I'm now wanting to setup per-repository ACLs and have hit a problem...

There are several git-wrappers such as gitolite and gitosis. There is also the excellent hg-gateway and mercurial-server for dealing with mercurial.

However I've yet to find a wrapper which will handle both git & mercurial repositories, under the same UID. (+ Darcs too, of course).

So my question - is there such a beast out there, or do we need to write it? I expect such a thing would be useful for many people, so I'm surprised I've not yet found it.
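
To make the shape of the thing concrete, this is roughly what I imagine: a single forced-command wrapper which inspects SSH_ORIGINAL_COMMAND and consults an ACL table before dispatching to the correct tool. This is only a sketch - the script name, paths, and ACL format are all invented:

#!/usr/bin/perl
#
#  Installed against each key in ~projects/.ssh/authorized_keys:
#
#    command="/usr/local/bin/vcs-shell steve" ssh-rsa AAAA...
#
use strict;
use warnings;

my $user = shift || die "Usage: vcs-shell username\n";
my $cmd  = $ENV{SSH_ORIGINAL_COMMAND} || die "No interactive logins\n";

#  Invented ACL table: repository -> type + permitted users.
my %repo = ( foo => { vcs => "hg",  users => { steve => 1 } },
             bar => { vcs => "git", users => { steve => 1 } } );

sub allowed
{
    my ( $name, $vcs ) = @_;
    my $r = $repo{$name};
    return ( $r && $r->{vcs} eq $vcs && $r->{users}{$user} );
}

#  git clients run: git-upload-pack 'repo' (or git-receive-pack).
if ( $cmd =~ m{^(git-(?:upload|receive)-pack) '([^']+)'$} )
{
    exec( $1, $2 ) if ( allowed( $2, "git" ) );
}
#  mercurial clients run: hg -R repo serve --stdio
elsif ( $cmd =~ m{^hg -R (\S+) serve --stdio$} )
{
    exec( "hg", "-R", $1, "serve", "--stdio" ) if ( allowed( $1, "hg" ) );
}
die "Access denied: $cmd\n";

Darcs would need a third branch for the commands it runs over SSH, but the principle is identical.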

Syndicated 2012-12-16 10:38:57 from Steve Kemp's Blog

December 2012 Software Updates

Some brief software updates:

Custodian

This is the monitoring tool I wrote for Bytemark. It still rocks, and has run over 10 million tests without failure.

I'd love more outside feedback, even if just to say "documentation needs work".

Slaughter

This is my sysadmin tool for multiple hosts - consider it cfengine-lite or, more likely, cfengine-trivial.

The 2.x release is finally out, and features pluggable transports. That means your central server can be running HTTP, RSYNC, FTP, or anything you like.

90% of the changes came from or were inspired by Csillag Tamas, to whom I owe a debt of thanks.

Templer

A static-site generator, written in Perl.

I use this to generate blogspam.net and other sites from simple layouts. A tutorial is available online.

redis-document-store

A trivial hack which allows using Redis as a schema-less document storage system.

Assuming you never delete documents it is simple, transparent, and already in live use for Debian Administration.
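
The whole idea fits in a few lines. A sketch of the approach - the key names are illustrative, and this is the general idea rather than the module's actual interface:

#!/usr/bin/perl
use strict;
use warnings;
use Redis;

my $redis = Redis->new();

#  Store a document: allocate the next ID, save each field in a hash.
sub set_document
{
    my (%fields) = @_;
    my $id = $redis->incr( "documents:count" );
    $redis->hset( "document:$id", $_, $fields{$_} ) for keys %fields;
    return $id;
}

#  Fetch a document back as a hash-reference.
sub get_document
{
    my ($id) = @_;
    my %fields = $redis->hgetall( "document:$id" );
    return \%fields;
}

my $id = set_document( title => "Test", body => "Hello, world." );
print get_document( $id )->{title}, "\n";

Since IDs only ever increase, and documents are never deleted, iterating over every document is just a loop from 1 to the current value of documents:count.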

Random Comment on Templer:

Although I've made extensive notes on common static site generators, and they will be discussed at length in the near future, I do want to highlight one problem common to 90% of them: Symbolic links.

For example webgen fails my simple test:

~/hg/websites$ webgen create test.example.com
~/hg/websites$ cd test.example.com/src/

~/hg/websites/test.example.com/src$ mkdir jquery-1.2.3
~/hg/websites/test.example.com/src$ touch jquery-1.2.3/jquery.js
~/hg/websites/test.example.com/src$ ln -s jquery-1.2.3 jquery

~/hg/websites/test.example.com$ webgen
Starting webgen...
...
Finished
~/hg/websites/test.example.com$ ls out/  | grep jq
jquery-1.2.3

Here we see that creating a symlink to a directory has not produced a matching symlink in the output - something I use frequently.

Some tools mangle symlinked directories or files; some ignore them completely. Neither is acceptable.
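
Handling them doesn't seem hard, either. A sketch of what I'd expect a generator to do when walking the source tree - directory names invented, and a real tool would render page-sources rather than cp them:

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my ( $src, $out ) = ( "src", "out" );

#  By default File::Find doesn't follow symlinks, so each link is
#  seen exactly once and can be recreated verbatim.
find( { wanted => \&copy_entry, no_chdir => 1 }, $src );

sub copy_entry
{
    ( my $dest = $File::Find::name ) =~ s/^\Q$src\E/$out/;

    if ( -l $File::Find::name )
    {
        #  Recreate the symlink, pointing at the same target.
        symlink( readlink( $File::Find::name ), $dest ) or
          warn "symlink failed: $!";
    }
    elsif ( -d $File::Find::name )
    {
        mkdir( $dest );
    }
    else
    {
        #  A real generator would process page-sources here.
        system( "cp", $File::Find::name, $dest );
    }
}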

Syndicated 2012-12-06 14:42:04 from Steve Kemp's Blog

Reassessing what I want from a simple website creation tool

Thanks very much to everybody who commented, both publicly and privately, on my previous entry.

To recap: I have three sites that were each generated by slightly different templating software I'd built and tweaked over the years. I was frustrated that the three copies of the generation tool had all drifted and diverged from each other, and was looking to set up a new (static) site.

The obvious conclusion was either:

  • Unify the creation tool I used, such that all four sites could be generated by a single tool.
  • Avoid the pain of doing that, and suffer through the process of switching to a well-maintained tool written by somebody else. ("Suffer" because tags would be different, and the layout/template syntax would change.)

I'd cheerfully decided to go down the second route, because life is short. But after some quick reads, and then several hours spent investigating likely contenders, I kept finding reasons why they weren't suitable.

Today I reworked my tool to successfully generate each of the three sites. That was less annoying than expected, after I'd decided "I'll have to change my templates anyway, when I switch to a real tool".

So in the interests of sharing I placed my tool online, and wrote documentation:

This is not a U-turn, and it is not a decision to stop investigating real replacements. This is just something I had to do as a cleanup, and to make sure I fully understood exactly what my requirements were.

In conclusion: My requirements are now absolutely known, fixed, and understood. I still firmly intend and expect to migrate to something by the end of the year. Ideally something that will make tag pages, RSS feeds, and other clever things easy.

(Getting rid of literal shell usage in my templates, and unifying the way I auto-generate galleries via file globs, was a useful change in its own right - I always felt slightly dirty before.)

Look out for more summaries and reviews of specific tools once I've had the chance to relax and start looking again.

Syndicated 2012-12-01 16:03:03 from Steve Kemp's Blog

Simple website creation tools

I host a number of websites which are mostly static in nature. These are often hand-crafted, but three of them use a slightly hacked-up creation of my own.

Given a master "template" the file foo.skx gets massaged into foo.html.
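
The idea is no more complicated than this sketch (the ##BODY## token is invented - the real tool grew fancier over time):

#!/usr/bin/perl
use strict;
use warnings;

#  Slurp the master template.
my $template = do { local $/; open( my $t, "<", "template.html" ) or die $!; <$t> };

foreach my $src ( glob( "*.skx" ) )
{
    #  Slurp the page source.
    my $body = do { local $/; open( my $s, "<", $src ) or die $!; <$s> };

    #  Substitute the page body into the master template.
    ( my $html = $template ) =~ s/##BODY##/$body/;

    #  Write foo.skx -> foo.html
    ( my $dest = $src ) =~ s/\.skx$/.html/;
    open( my $out, ">", $dest ) or die $!;
    print $out $html;
}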

Sadly I added features randomly, and now I have three template-driven sites which are handled slightly differently. This put me in the position of having to choose between two options:

  • Unify my template-handling.
  • Use something else.

Simplifying my life is good. So I examined a list of static site generators, and a few more found by searching GitHub.

Other than doing clever things by knowing which page is "current" I needed to do only minimal magic:

  • Conditionally include files.
  • Setup per-page CSS files.
    • e.g. This page and this page differ by the stylesheet used. (And the text content too, clearly!)
  • Setup per-page templates.

Webgen looked like a good fit, but I couldn't get per-page templates to work reliably. Sometimes they would work; other times I would get weird errors about blocks not being known.

Webby worked, but I didn't like it.

Poole was the next one that I got far down the road with, but it allowed only a single site-wide template. A shame, because otherwise I loved its flexibility enough to tolerate writing "macros" in Python.

I've still got more tools to test, but it is a fun process. I fully intend to adopt an existing tool, and not keep working on my own.

Tonight I'm going to look at a few more.

Syndicated 2012-11-29 18:24:07 from Steve Kemp's Blog

A busy few weeks - bah humbug

The following companies are amongst those showing Christmas Adverts on television before the start of December:

  • Tesco
  • Homebase
  • M&S
  • Waitrose
  • John Lewis

I will boycott these companies until next year.

In happier news I've spent the past week or two replacing the monitoring system that we use at work.

Our previous monitoring system had been struggling to keep up with the sheer number of tests it was being asked to process. This was partly because we carry out many ping-tests, ssh-tests, http-tests, dns-tests, etc. The other reason was that our monitoring system was a behemoth of threaded Ruby which ran entirely upon a single host - which made adding another monitoring host a complex undertaking.

The new solution uses a work-queue (sketched below):

  • Tests to apply are parsed and inserted into a single, global, beanstalkd queue.
  • Workers continuously poll the queue for tests, execute them, and alert on failures as appropriate.
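
The pattern itself is tiny; a sketch of the same shape in Perl, using the CPAN Beanstalk::Client module (the tube name and test format are invented for illustration):

#!/usr/bin/perl
use strict;
use warnings;
use Beanstalk::Client;
use JSON;

my $queue = Beanstalk::Client->new( { server       => "localhost",
                                      default_tube => "tests" } );

#  The parser side: insert a test into the single global queue.
$queue->put( { data => encode_json( { type   => "ping",
                                      target => "www.example.com" } ) } );

#  The worker side: poll forever, executing each test we reserve.
while ( my $job = $queue->reserve() )
{
    my $test = decode_json( $job->data() );

    run_test( $test );    # stands in for the real dispatch + alerting.

    $job->delete();
}

#  Dummy test-runner.
sub run_test
{
    my ($test) = @_;
    print "Running $test->{type} test against $test->{target}\n";
}

Because the queue is the only shared state, adding another monitoring host is simply a matter of starting more workers pointed at the same beanstalkd server.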

The code is open-source, written in Ruby, and available here:

I've completed the process of tidying up the code to the extent I'm happy with it, and I believe I've also abstracted away the work-specific pieces of the code.

That said, I'd not be surprised if it needs a few minor tweaks before it is useful for other people.

Syndicated 2012-11-25 19:56:01 from Steve Kemp's Blog

slaughter 2.x is getting closer

Work on slaughter 2.x is going rather well.

The scripting hasn't changed, and no primitives have been altered to break backward compatibility, but it is probably best to release this as "slaughter2" - because the way to specify the source from which to pull scripts has changed.

Previously we'd specify two arguments (or have them in a configuration file):

  • --server=example.com
  • --prefix=/slaughter/

That would result in policies being downloaded from:

  http://example.com/slaughter/

Now that the rework is complete we use "transports" and "prefixes". The new way to specify the old default is to run with:

--transport=http --prefix=http://example.com/slaughter/

I've implemented four transports thus far (example invocations below):

  • git
  • http
  • mercurial
  • rsync
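
For illustration, pulling over the other transports should look much the same as the HTTP example above - though the exact transport names here are my guess at the final spelling:

--transport=rsync     --prefix=rsync.example.com::module/
--transport=git       --prefix=git://github.com/user/repo.git
--transport=mercurial --prefix=http://repo.example.com/path/to/repo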

The code has been made considerably neater, the test-cases are complete, and the POD/inline documentation is almost finished.

Adding additional revision-controlled transports would be trivial at this point - but I suspect I'd be wasting my time if I were to add CVS support!

Life is good. Though I've still got a fair bit more documentation, prettification and updates to make before I'm ready to release it.

Play along at home if you wish: via the repository.

Syndicated 2012-10-26 19:35:25 from Steve Kemp's Blog

So slaughter is definitely getting overhauled

There have been a few interesting discussions going on in parallel about my slaughter sysadmin tool.

I've now decided there will be a 2.0 release, and that will change things for the better. At the moment there are two main parts to the system:

Downloading policies

These are instructions/perl code that are applied to the local host.

Downloading files

Policies are allowed to download files, e.g. /etc/ssh/sshd_config templates.

Both of these occur via HTTP fetches (SSL may be used), and there is a different root for the two trees. For example you can see the two public examples I have here:

A fetch of the policy "foo.policy" uses the first prefix, and a fetch of the file "bar" uses the latter prefix. (In actual live usage I use a restricted location because I figured I might end up storing sensitive things, though I suspect I don't.)

The plan is to update the configuration file to read something like this:

transport = http

#
# Valid options will be
#    rsync | http | git | mercurial | ftp
#

#
# each transport will have a different prefix
#
prefix = http://static.steve.org.uk/private

# for rsync:
#   prefix=rsync.example.com::module/
#
# for ftp:
#   prefix=ftp://ftp.example.com/pub/
#
# for git:
#   prefix=git://github.com/user/repo.git
#
# for mercurial:
#   prefix=http://repo.example.com/path/to/repo
#

I anticipate that the HTTP transport will continue to work the way it currently does. The other transports will clone/fetch the appropriate resource recursively to a local directory - say /var/cache/slaughter. So the complete archive of files/policies will be available locally.

The HTTP transport will continue to work the same way with regard to file fetching, i.e. fetching them remotely on-demand. For all other transports the "remote" file being copied will be pulled from the local cache.

So assuming this:

transport = rsync
prefix    = rsync.company.com::module/

Then the following policy will result in the expected action:

if ( UserExists( User => "skx" ) )
{
    # Copy the shared keys file into place.
    FetchFile(
            Source => "/global-keys",
              Dest => "/home/skx/.ssh/authorized_keys2",
             Owner => "skx",
             Group => "skx",
              Mode => "600" );
}

The file "/global-keys" will refer to /var/cache/slaughter/global-keys which will have been already downloaded.

I see zero downside to this approach; it allows HTTP stuff to continue to work as it did before, and it allows more flexibility. We can benefit from knowing that the remote policies are untampered with, via the integrity checking built into git/mercurial, and from the speed gains of rsync.

There will also be an optional verification stage. So the code will roughly go like this:

  1. Fetch the policy using the specified transport.
  2. (Optionally) run some local command to verify the local policies - sketched below.
  3. Execute policies.
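
For example, the configuration file might grow one extra option - the name here is hypothetical, nothing is final - naming a command which must exit zero before any policies are executed:

# Hypothetical, purely illustrative:
verify = gpg --verify /var/cache/slaughter/policies.sig

If the command fails the run would abort, giving a hook for GPG signatures, checksums, or whatever else people need.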

I'm not anticipating additional changes, but I'm open to persuasion.

Syndicated 2012-10-24 07:28:56 from Steve Kemp's Blog

583 older entries...
