Older blog entries for Stevey (starting at number 587)

Reassessing what I want from a simple website creation tool

Thanks very much to everybody who commented, both publicly and privately, on my previous entry.

To recap: I have three sites that were each generated by slightly different templating software I'd built and tweaked over the years. I was frustrated that the three copies of the generation tool had all drifted and diverged from each other, and was looking to set up a new (static) site.

The obvious conclusion was either:

  • Unify the creation tool I used, such that all four sites could be generated by a single tool.
  • Avoid the pain of doing that, and suffer through the process of switching to a well-maintained tool written by somebody else. ("Suffer" because tags would be different, and the layout/template syntax would change.)

I'd cheerfully decided to go down the second route, because life is short. But after some quick reads, then several hours spent investigating likely contenders, I kept finding reasons why they weren't suitable.

Today I reworked my tool to successfully generate each of the three sites. That was less annoying than expected, after I'd decided "I'll have to change my templates anyway, when I switch to a real tool".

So in the interests of sharing I placed my tool online, and wrote documentation:

This is not a U-turn, nor a decision to stop investigating real replacements. This is just something I had to do as a cleanup, and to make sure I fully understood exactly what my requirements were.

In conclusion: My requirements are now absolutely known, fixed, and understood. I still firmly intend and expect to migrate to something by the end of the year. Ideally something that will make tag pages, RSS feeds, and other clever things easy.

(Getting rid of literal shell usage in my templates, and unifying the way I auto-generate galleries via file globs, was a useful change in its own right - I always felt slightly dirty.)

Look out for more summaries and reviews of specific tools when I've had the chance to relax and start looking again.

Syndicated 2012-12-01 16:03:03 from Steve Kemp's Blog

Simple website creation tools

I host a number of websites which are mostly static in nature. These are often hand-crafted, but three of them use a slightly hacked-up creation of my own.

Given a master "template" the file foo.skx gets massaged into foo.html.
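
The tool itself isn't shown here, but the core of the idea is easy to sketch. Something roughly like the following toy example, where the {{content}} placeholder and file names are purely illustrative rather than the real .skx syntax:

#!/usr/bin/perl
#
# Toy sketch of the "massaging" step: wrap each page body in a master
# template by substituting a placeholder.  The {{content}} marker and
# the file names are illustrative only, not the real .skx syntax.
#
use strict;
use warnings;

my $layout = slurp("layout.tmpl");

foreach my $src ( glob("*.skx") )
{
    ( my $dst = $src ) =~ s/\.skx$/.html/;

    my $body = slurp($src);
    ( my $page = $layout ) =~ s/\Q{{content}}\E/$body/;

    open( my $out, ">", $dst ) or die "Failed to write $dst: $!";
    print $out $page;
    close($out);
}

sub slurp
{
    my ($file) = @_;
    open( my $fh, "<", $file ) or die "Failed to read $file: $!";
    local $/ = undef;
    return <$fh>;
}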

Sadly I added features randomly, and now I have three template-driven sites which are handled slightly differently. This put me in the position of having to choose between two options:

  • Unify my template-handling.
  • Use something else.

Simplifying my life is good. So I examined a list of static site generators, and a few more found by searching GitHub.

Other than doing clever things based on knowing which page is "current", I needed only minimal magic:

  • Conditionally include files.
  • Set up per-page CSS files.
    • e.g. This page and this page differ by the stylesheet used. (And the text content too, clearly!)
  • Set up per-page templates.

Webgen looked like a good fit, but I couldn't get per-page templates to work reliably. Sometimes they would work; other times I would get weird errors about blocks not being known.

Webby worked, but I didn't like it.

Poole was the next one that I got far down the road with, but it allowed only a single site-wide template. A shame, because otherwise I loved its flexibility enough to tolerate writing "macros" in Python.

I've still got to test more out, but it is a fun process. I fully intend to adopt an existing tool, and not keep working on my own.

Tonight I'm going to look at a few more.

Syndicated 2012-11-29 18:24:07 from Steve Kemp's Blog

A busy few weeks - bah humbug

The following companies are amongst those showing Christmas Adverts on television before the start of December:

  • Tesco
  • Homebase
  • M&S
  • Waitrose
  • John Lewis

I will boycott these companies until next year.

In happier news I've spent the past week or two replacing the monitoring system that we use at work.

Our previous monitoring system had been struggling to keep up with the sheer number of tests it was being asked to process. This was partly because we carry out many ping-tests, ssh-tests, http-tests, dns-tests, etc. The other reason was that our monitoring system was a behemoth of threaded Ruby, which all ran upon a single host. This made adding another monitoring host a complex undertaking.

The new solution uses a work-queue:

  • Tests to apply are parsed and inserted into a single, global, beanstalkd queue.
  • Workers continuously poll the queue for tests to execute. They then execute them, and alert on failures as appropriate. (A rough sketch of such a worker follows.)
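
The real implementation is Ruby, but the worker loop is simple enough to sketch in a few lines. Here it is in Perl, assuming the CPAN Beanstalk::Client module and a JSON job payload; run_test() and alert() are hypothetical stand-ins, not part of the actual project:

#!/usr/bin/perl
#
# Rough sketch of a queue-worker: reserve a job, run the test it
# describes, alert on failure, then delete the job.  Not the real code.
#
use strict;
use warnings;

use Beanstalk::Client;
use JSON;

my $queue = Beanstalk::Client->new(
    { server => "localhost", default_tube => "tests" } );

while (1)
{
    my $job = $queue->reserve() or next;

    #  e.g. { "type" : "ping", "target" : "host.example.com" }
    my $test = decode_json( $job->data );

    my $ok = run_test($test);
    alert($test) unless ($ok);

    $job->delete();
}

#  Hypothetical stand-ins for the real test-runner and alerting code.
sub run_test { warn "would run $_[0]->{type} against $_[0]->{target}\n"; 1; }
sub alert    { warn "ALERT: $_[0]->{type} failed for $_[0]->{target}\n"; }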

The code is open-source, written in Ruby, and available here:

I've completed the process of tidying up the code to the extent I'm happy with it, and I believe I've also abstracted away the work-specific pieces of the code.

That said, I'd not be surprised if it needs a few minor tweaks before it is useful for other people.

Syndicated 2012-11-25 19:56:01 from Steve Kemp's Blog

slaughter 2.x is getting closer

Work on slaughter 2.x is going rather well.

The scripting hasn't changed, and no primitives have been altered to break backward compatibility, but it is probably best to release this as "slaughter2" - because the way to specify the source from which to pull scripts has changed.

Previously we'd specify two arguments (or have them in a configuration file):

  • --server=example.com
  • --prefix=/slaughter/

That would result in policies being downloaded from:

  http://example.com/slaughter/

Now that the rework is complete we use "transports" and "prefixes". The new way to specify the old default is to run with:

--transport=http --prefix=http://example.com/slaughter/

I've implemented four transports thus far:

  • git
  • http
  • mercurial
  • rsync

The code has been made considerably neater, the test-cases are complete, and the POD/inline documentation is almost 100% complete.

Adding additional revision-controlled transports would be trivial at this point - but I suspect I'd be wasting my time if I were to add CVS support!

Life is good. Though I've still got a fair bit more documentation, prettification and updates to make before I'm ready to release it.

Play along at home if you wish: via the repository.

Syndicated 2012-10-26 19:35:25 from Steve Kemp's Blog

So slaughter is definitely getting overhauled

There have been a few interesting discussions going on in parallel about my slaughter sysadmin tool.

I've now decided there will be a 2.0 release, and that will change things for the better. At the moment there are two main parts to the system:

Downloading policies

These are instructions/Perl code that are applied to the local host.

Downloading files

Policies are allowed to download files, e.g. /etc/ssh/sshd_config templates, etc.

Both of these occur via HTTP fetches (SSL may be used), and there is a different root for the two trees. For example, you can see the two public examples I have here:

A fetch of the policy "foo.policy" uses the first prefix, and a fetch of the file "bar" uses the second. (In actual live usage I use a restricted location because I figured I might end up storing sensitive things, though I suspect I don't.)

The plan is to update the configuration file to read something like this:

transport = http

#
# Valid options will be
#    rsync | http | git | mercurial | ftp
#

#
# each transport will have a different prefix
#
prefix = http://static.steve.org.uk/private

# for rsync:
#  prefix=rsync.example.com::module/
#
# for ftp:
#  prefix=ftp://ftp.example.com/pub/
#
# for git:
#  prefix=git://github.com/user/repo.git
#
# for mercurial:
#  prefix=http://repo.example.com/path/to/repo
#

I anticipate that the HTTP transport will continue to work the way it currently does. The other transports will clone/fetch the appropriate resource recursively to a local directory - say /var/cache/slaughter. So the complete archive of files/policies will be available locally.

The HTTP transport will continue to work the same way with regard to file fetching, i.e. fetching them remotely on-demand. For all other transports the "remote" file being copied will be pulled from the local cache.
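
Nothing is final yet, but I imagine the non-HTTP transport handling will boil down to something like the following sketch. The command table and cache handling here are illustrative, not the actual slaughter code:

#!/usr/bin/perl
#
# Illustrative sketch only: map each non-HTTP transport to a command
# which clones or copies the remote prefix into the local cache.
#
use strict;
use warnings;

my $cache = "/var/cache/slaughter";

my %fetch = (
    rsync     => sub { system( "rsync", "-az", "--delete", $_[0], $cache ) },
    git       => sub { system( "git", "clone", "--depth", "1", $_[0], $cache ) },
    mercurial => sub { system( "hg", "clone", $_[0], $cache ) },
);

my ( $transport, $prefix ) = ( "rsync", "rsync.company.com::module/" );

die "Unknown transport: $transport\n" unless ( exists $fetch{$transport} );
$fetch{$transport}->($prefix);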

So assuming this:

transport = rsync
prefix    = rsync.company.com::module/

Then the following policy will result in the expected action:

if ( UserExists( User => "skx" ) )
{
    # copy
    FetchFile(
            Source => "/global-keys",
              Dest => "/home/skx/.ssh/authorized_keys2",
             Owner => "skx",
             Group => "skx",
              Mode => "600" );
}

The file "/global-keys" will refer to /var/cache/slaughter/global-keys which will have been already downloaded.

I see zero downside to this approach: it allows the HTTP transport to continue to work as it did before, and it allows more flexibility. For example we can benefit from the integrity checking built into git/mercurial, which tells us the remote policies haven't been tampered with, and from the speed gains of rsync.

There will also be an optional verification stage. So the code will roughly go like this:

  • 1. Fetch the policy using the specified transport.
  • 2. (Optionally) run some local command to verify the local policies (see the sketch after this list).
  • 3. Execute policies.
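
A rough sketch of that flow - the verification hook and the helper names here are hypothetical, not settled:

#
# Hypothetical sketch of the fetch/verify/execute flow; the helpers
# are illustrative stubs, not real slaughter internals.
#
use strict;
use warnings;

my $cache  = "/var/cache/slaughter";
my $verify = "/usr/local/bin/verify-policies";    # optional, user-supplied

# 1.  Fetch the policies via the configured transport.
fetch_policies($cache);

# 2.  Optionally run a local command to verify what was fetched.
if ( -x $verify )
{
    system( $verify, $cache ) == 0
      or die "Policy verification failed - refusing to continue\n";
}

# 3.  Execute the policies.
execute_policies($cache);

sub fetch_policies   { warn "would fetch policies into $_[0]\n"; }
sub execute_policies { warn "would execute policies from $_[0]\n"; }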

I'm not anticipating additional changes, but I'm open to persuasion.

Syndicated 2012-10-24 07:28:56 from Steve Kemp's Blog

Software and hardware..

Software

I've been using redis for a while now. It is a fast in-memory storage system which offers persistence (unlike memcached), as well as several primitive data-types such as lists & hashes.

Anyway it crossed my mind that I don't have a backup of the data it contains, so I knocked up a simple script to dump the contents in plain-text:
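
As a minimal sketch of the idea, rather than the script itself, something like this works - assuming the CPAN Redis module, and handling only a few of the data-types:

#!/usr/bin/perl
#
# Minimal sketch of a plain-text redis dump, using the CPAN Redis
# module.  Only a handful of data-types are handled.
#
use strict;
use warnings;
use Redis;

my $redis = Redis->new( server => "127.0.0.1:6379" );

foreach my $key ( $redis->keys("*") )
{
    my $type = $redis->type($key);

    if ( $type eq "string" )
    {
        print "$key = ", $redis->get($key), "\n";
    }
    elsif ( $type eq "list" )
    {
        print "$key = [", join( ",", $redis->lrange( $key, 0, -1 ) ), "]\n";
    }
    elsif ( $type eq "hash" )
    {
        my %h = $redis->hgetall($key);
        print "$key = {", join( ",", map { "$_=$h{$_}" } sort keys %h ), "}\n";
    }
    else
    {
        print "# $key has unhandled type: $type\n";
    }
}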

In other software news I've had some interesting and useful feedback, and made two new releases of my slaughter sysadmin tool - it now contains a wee test suite and is more robust.

Hardware

I received an email last night to say that my Raspberry Pi has shipped. Ordered 24/05/2012, and dispatched 12/10/2012 - I'd almost forgotten about it.

My plan is to make it a media-serving machine, SNES emulator, or similar. Not 100% decided yet.

Finally I've taken the time to repaint my office. When I last wrote about working from home I didn't include pictures - I just described the process of using a "work computer" and a "personal computer".

So this is what my office used to look like. As you can see there are two machines and a huge desk.

With a few changes I now have an office which looks like this - the two machines are glued together with a KVM, and I have much more room behind it for another desk, more books, and similar toys. Additionally my dedication is now enforced - I simply cannot play with both computers at the same time.

The chair was used to mount the picture - usually I sit on a kneeling chair, which is almost visible.

What inspired the painting? Partly the need for more space, but mostly water damage. I had a leaking ceiling. (Local people will know all about my horrible leaking roof situation).

The end?

Syndicated 2012-10-13 07:55:57 from Steve Kemp's Blog

Artificial inflation of Facebook likes

Facebook Like-Inflation

If you have a website, with a "Facebook Like" box on it, it probably shows something like this:

  • 400 People Like this

Did you know that number is not just the total number of people who clicked "Like" on your page? Did you know you can artificially inflate that number?

Interesting stuff.

Send a message to yourself with the URL in the body, such that it becomes an "attachment". Watch as the like-counter increases by 1 or even 2. Lather. Rinse. Repeat.

Sending messages to other people probably does the same thing. But sending to yourself is sufficient.

Syndicated 2012-10-04 21:06:32 from Steve Kemp's Blog

I should bite my tongue.

Too often requests of the form "I'm looking for an open source solution to ..." mean "I'm looking to spend zero money, contribute nothing, and probably not even read your excellent documentation".

Syndicated 2012-09-29 22:08:51 from Steve Kemp's Blog

So about that off-site encrypted backup idea ..

I'm just back from having spent a week in Helsinki. Despite some minor irritations (the light-switches were always too damn low) it was a lovely trip.

There is a lot to be said for a place, and a culture, where shrugging and grunting counts as communication.

Now I'm back, catching up on things, and mostly plotting and planning how to handle my backups going forward.

Filesystem backups I generally take using backup2l, creating local incremental backup archives then shipping them offsite using rsync. For my personal stuff I have a bunch of space on a number of hosts and I just use rsync to literally copy my ~/Images, ~/Videos, etc.

In the near future I'm going to have access to a backup server which will run rsync, and pretty much nothing else. I want to decide how to archive my content to that - securely.

The biggest issue is that my images (.CR2 + .JPG) will want to be encrypted remotely, but not locally. So I guess if I re-encrypt transient copies and rsync them I'll end up having to send "full" changes each time I rsync. Clearly that will waste bandwidth.

So the alternative is to use incrementals, as I do elsewhere, then GPG-encrypt the tar files that are produced - simple to do with backup2l - and use rsync. That seems like the best plan, but requires that I have more space available locally since:

  • I need the local .tar files.
  • I then need the .tar.gz.asc/.tar.gz.gpg files too. (A rough sketch of the glue follows this list.)
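
If I do go that way the glue is simple enough; something like this rough sketch, where the GPG key id, paths, and remote host are all hypothetical:

#!/usr/bin/perl
#
# Rough sketch of "encrypt the backup2l archives, then rsync them".
# The GPG key id, paths, and remote host are hypothetical.
#
use strict;
use warnings;

my $archive = "/var/backups/localhost";      # where backup2l writes
my $staging = "/var/backups/encrypted";      # the duplicated local copies
my $remote  = "backup\@rsync.example.com:backups/";
my $key     = "0xDEADBEEF";

mkdir($staging) unless ( -d $staging );

foreach my $tar ( glob("$archive/*.tar.gz") )
{
    my ($name) = ( $tar =~ m{([^/]+)$} );
    my $out    = "$staging/$name.gpg";

    next if ( -e $out );                     # already encrypted

    system( "gpg", "--encrypt", "--recipient", $key,
            "--output", $out, $tar ) == 0
      or die "gpg failed for $tar\n";
}

system( "rsync", "-av", "$staging/", $remote ) == 0
  or die "rsync failed\n";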

I guess I will ponder. It isn't horrific to require local duplication, but it strikes me as something I'd rather avoid - especially given that we're talking about rsync over a home-broadband connection, which will take weeks at best for the initial copy.

Syndicated 2012-09-28 16:31:03 from Steve Kemp's Blog

Security changes have unintended effects.

A couple of months ago I was experimenting with adding no-new-privileges to various systems I run. Unfortunately I was surprised a few weeks later by unintended breakage.

My personal server has several "real users", and several "webserver users". Each webserver user runs a single copy of thttpd under its own UID, listening on 127.0.0.1:xxxx, where xxxx is the userid:

steve@steve:~$ id -u s-steve
1019

steve@steve:~$ sudo lsof -i :1019
COMMAND  PID    USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
thttpd  9993 s-steve    0u  IPv4 7183548      0t0  TCP localhost:1019 (LISTEN)

Facing the world I have an IPv4 & IPv6 proxy server that routes incoming connections to these local thttpd instances.

Wouldn't it be wonderful to restrict these instances, and prevent them from acquiring new privileges? Yes, I thought. Unfortunately I stumbled across a downside: some of the servers send email, and they do that by shelling out to /usr/sbin/sendmail, which is setuid - and with no-new-privileges in effect the setuid bit is ignored, so sendmail fails. D'oh!

The end result was choosing between:

  • Leaving "no-new-privileges" in place, and rewriting all my mail-sending CGI scripts (e.g. to speak SMTP directly, as sketched below).
  • Removing the protection such that setuid files can be executed.
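
For what it's worth the rewrite wouldn't be hard; speaking SMTP to the local MTA directly avoids the setuid helper entirely. A minimal sketch using the core Net::SMTP module, with placeholder addresses:

#!/usr/bin/perl
#
# Minimal sketch: send mail by talking SMTP to the local MTA rather
# than shelling out to the setuid /usr/sbin/sendmail binary.
# Addresses and the message body are placeholders.
#
use strict;
use warnings;
use Net::SMTP;

my $smtp = Net::SMTP->new("localhost")
  or die "Failed to connect to the local MTA\n";

$smtp->mail('www@example.com');
$smtp->to('steve@example.com');

$smtp->data();
$smtp->datasend("To: steve\@example.com\n");
$smtp->datasend("Subject: Contact form submission\n");
$smtp->datasend("\n");
$smtp->datasend("Hello from the CGI script.\n");
$smtp->dataend();

$smtp->quit;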

I went with the latter for now, but will probably revisit this in the future.

In more interesting news, I recently tried to recreate the feel of a painting as an image, which was successful. I think.

I've been doing a lot more shooting recently, even outdoors, which has been fun.

ObQuote: "You know, all the cheerleaders in the world wouldn't help our football team." - Bring it On

Syndicated 2012-09-07 14:35:52 from Steve Kemp's Blog
