Older blog entries for dan (starting at number 135)

Backing up a workgroup server with rsnapshot

Lately I have been changing the way that $WORK backs up their office server. It should be pretty simple, right? Removable USB drive they can take home at night, rsnapshot, win.

Almost. The problem is user-proofing.

The disk should not be mounted when not in use

Really, I don't want end-users unplugging a mounted drive, so automounting when the drive is plugged in is a non-starter.

The disk has to be mounted before doing the backup

You'd think you could use cmd_preexec in rsnapshot.conf for this. No. If cmd_preexec fails (e.g. because the disk is not plugged in), rsnapshot prints a warning (which nobody will read) and then goes on and does the backup anyway.

There is a neat option no_create_root which you'd think was designed for this: "rsnapshot will not automatically create the snapshot_root directory. This is particularly useful if you are backing up to removable media, such as a FireWire or USB drive". However, it does this test before running cmd_preexec, which is entirely the wrong order of events for our purposes.
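So the mount has to happen outside rsnapshot altogether - call a wrapper script from cron instead. A minimal sketch of such a wrapper (the mount point, the fstab assumption and the "daily" interval are all assumptions, not our exact setup):

    #!/bin/sh
    # run rsnapshot only if the backup disk is actually plugged in and mountable;
    # assumes an fstab entry like: UUID=...  /mnt/backup  ext3  noauto  0 0
    MOUNTPOINT=/mnt/backup
    if ! mount "$MOUNTPOINT"; then
        echo "backup disk not plugged in; not backing up" >&2
        exit 1
    fi
    rsnapshot daily
    umount "$MOUNTPOINT"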

The disk must be idle before being unplugged

Well, there's not really any physical way to stop people unplugging it during a backup, but we could at least give ourselves a sporting chance by telling them when the backup's happening.

There are several parts to this: first we spend an inordinate amount of time fighting with zenity to make it do something vaguely sane, before giving up and creating a workaround that lets it do what it wants instead. The result looks a lot like rsync-wait.sh; if your external HDD is not called Fred, feel free to amend the text messages. If you're one of the vanishingly small number of people who has to use GNOME but isn't on a Linux kernel, or for some other reason doesn't have inotify, you'll have to replace inotifywait with e.g. sleep 10.

We want this to be run for every user, because they all have physical access to the server and any of them might be charged with taking the disk home. So we need a way of launching this script on login and stopping it on logout - surprisingly, the latter is harder. If you write an X client program in a real language it will probably eventually notice (or just die with SIGPIPE) when its X server socket closes, but a shell script doesn't necessarily have a persistent X connection open - so if we start the script from Xsession.d or similar, the old process won't die when you log out and in again, and you end up with two copies running at once. Double bubble trouble.

To save you the afternoon that this took me to figure out, the simplest answer is "start it from xterm". So, create a .desktop file and drop it in the /etc/xdg/autostart directory:

dan@carnaby:~$ cat /etc/xdg/autostart/rsync-wait.desktop 
[Desktop Entry]
Encoding=UTF-8
Name=Backup Disk notifier
Comment=System tray icon for notifying rsnapshot running
Exec=/usr/bin/xterm -iconic -e /usr/local/bin/rsync-wait.sh
Terminal=false
Type=Application
NotShowIn=KDE;
StartupNotify=false
Categories=GTK;Monitor;System;

So far, this is working. I think.

Syndicated 2011-02-10 21:52:00 from diary at telent netowrks

How to create a diskless elastichosts node

Elastichosts is a PAYG (or monthly contract) "cloud" virtual server provider based on the Linux kvm technology. At $WORK we use it to provide a horizontally scalable app service, and we need to be able to add new app servers in less time than it takes to copy a complete working Debian system. Also we want to be running the same version of the same software on every server (think "security updates") and we don't want to be paying for another 3GB of Debian that we don't really need on each box. So, we need that stuff to be shared.

Elastichosts don't directly support kvm snapshots (or they didn't when I asked them about it) which leaves us looking for alternative ways to do the same thing. This blog entry describes one such approach: we use a read-only CD image for the root filesystem and then mount /usr and /home over NFS and a ramdisk (populated at boot) on /var. It's all done using standard Debian tools and Debian setup as of the "squeeze" 6.0 release.

The finished thing is on github at https://github.com/telent/squeeze-cd-nfsroot/ . To use, basically you clone the repo into /usr/local/client, edit the files, and run make. Slightly less basically, you almost certainly need to know what edits to make to which files, and you may also want to know how it works anyway. So read on ...

(Yes, you should be able to clone it elsewhere because I shouldn't have hardcoded that directory name into the Makefile. This may be fixed in a future version if I ever find the need to install it somewhere else myself. Or see the 'conclusion' section if you want to fix it yourself)

How the client boots

  1. the client boots off a CD (ISO9660) image created by initramfs-tools which is configured to look for an nfsroot directory. This directory is created on the server by a Makefile rule that copies the server's root dir and then replaces, renames and changes a bunch of stuff in /etc

  2. it then mounts a ramdisk on /tmp and another on /var. There is an initscript populate_var (sketched just after this list) which creates all the empty directories that daemons will expect when they start up. Note that these directories are entirely ephemeral, which means for example that syslog must be configured to log remotely

  3. it mounts /usr and /home (readonly) directly from the server. This means that most of the packages on the server are available immediately on the clients - unless they include config files in /etc, in which case they aren't until you rerun the Makefile that creates the nfsroot (after, possibly, adjusting the config appropriately for the client)
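For illustration, the general shape of populate_var - a sketch only, since the directory list depends entirely on which packages you have installed:

    #!/bin/sh
    # /etc/init.d/populate_var - recreate the ephemeral directory tree under /var
    # (the directory list here is illustrative, not exhaustive)
    case "$1" in
      start)
        for d in log run lock tmp cache spool lib; do
            mkdir -p /var/$d
        done
        chmod 1777 /var/tmp
        ;;
    esac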

A short guide to customising the system

These files are copied to the client - you may want to review their contents

  • template/etc/fstab needs to have the right hostname for your NFS server (see the example after this list)
  • template/etc/initramfs-tools/initramfs.conf - check DEVICE and NFSROOT settings
  • template/etc/network/interfaces may need tweaking
  • template/etc/resolv.conf is set up for our network, not yours
  • template/etc/init.d/populate_var might need directories added or chown invocations removed, depending on what packages you have installed
  • template/etc/rsyslog.conf needs editing for the syslog server's IP address
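As an indication, the finished client fstab might look something like this - "server" and the export paths are invented for the example, so match them to your own layout:

    server:/usr/local/client/nfsroot  /      nfs    ro        0 0
    server:/usr                       /usr   nfs    ro        0 0
    server:/home                      /home  nfs    ro        0 0
    tmpfs                             /var   tmpfs  defaults  0 0
    tmpfs                             /tmp   tmpfs  defaults  0 0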

And also

  • insserv calls in Makefile may need adjusting if you have other services on the server that you don't want to also run on the client

And on the server

  • you'll need to be exporting the nfsroot/ directory over NFS, ditto /home and /usr. My /etc/exports looks something like this
    /usr/local/client/nfsroot 10.0.0.0/24(ro,no_root_squash,no_subtree_check)
    /home 10.0.0.0/24(ro,no_root_squash,no_subtree_check)
    /usr 10.0.0.0/24(ro,no_root_squash,no_subtree_check)
    

  • you need to run a dhcp server (I use "dnsmasq", which also provides DNS service; see the sketch after this list)

  • If you want the clients to be able to syslog, you need to configure the syslog server to accept syslog messages from them
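Minimal sketches of both of those, assuming the 10.0.0.0/24 network from the exports above (the address range is an example; adjust to your vlan):

    # /etc/dnsmasq.conf on the server - hand out addresses to the clients
    dhcp-range=10.0.0.50,10.0.0.150,12h

    # /etc/rsyslog.conf on the syslog server - listen for UDP syslog from clients
    $ModLoad imudp
    $UDPServerRun 514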

How we build the files

The nfsroot

Creating the nfsroot is done by the Makefile rootfs target.

It starts by rsyncing the real root into nfsroot/ with a whole bunch of exclusions, then copies files from template/ over the copied files to cater for the bits that need a different configuration on the client than they do on the server, then does some other fiddling around. Most notably:

  • we have to copy libcrypto and libz out of /usr and into /lib, because the dhcp client needs those libraries and it runs before /usr is mounted (see http://bugs.debian.org/592361 - though according to that page this bug is now fixed)

  • we blat the generated files etc/udev/rules.d/*persistent* which are correct for the server but not for the client.

  • Debian will run better with no /etc/hostname than it will with the wrong one

  • Debian squeeze uses a slightly exciting parallelising dependency-based system for running init scripts, so we can't just copy files into init.d; we need to run insserv to make it see them. (As a long-time Unix user who doesn't pay enough attention when these kinds of changes are made, I took ages to work this out.) Similarly, to disable daemons that should run only on the server, we use insserv -r. Both invocations are sketched at the end of this list.

  • a couple of files need to be writable, so we replace them with symlinks
    • /etc/network/run is pointed to /lib/init/rw
    • /etc/mtab is pointed to /proc/mounts

  • We create our own etc/resolv.conf. Our elastichosts clients generally have a public (dynamically allocated) IP address assigned to eth0 and a vlan attached to eth1. DHCP gets exciting here: the client boots off eth1 and gets the address of that interface using boot-time kernel code, then runs the user-space dhclient tool to get an eth0 address, and we'd rather not rely on the conjunction of all that to get /etc/resolv.conf right

  • populate_var pretty much does what it says on the tin but might need more directories adding/removing depending on what you have installed
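The insserv invocations mentioned above are one-liners (apache2 here is only an example of a daemon you might want on the server but not the clients):

    insserv populate_var    # register the new initscript and its dependencies
    insserv -r apache2      # remove a server-only daemon from the client's boot sequence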

The initramfs

The Makefile ramfs.img target makes an initramfs image which knows how to mount root on nfs. This particular magic is built into Debian, and the only point of note here is that we use nfsroot/etc/initramfs-tools as the config directory, so we know we're generating a config for the client without treading on the server's usual initramfs config (which it might need when it boots itself). In our setup the only file that's actually changed is template/etc/initramfs-tools/initramfs.conf, which has settings for BOOT, DEVICE and NFSROOT that probably differ from what the server wants for itself.
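For concreteness, the settings in question come out something like this - example values for the 10.0.0.0/24 network above, not gospel:

    BOOT=nfs
    DEVICE=eth1
    NFSROOT=10.0.0.1:/usr/local/client/nfsroot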

Creating the cd image

This is pretty straightforward too. The Makefile boot_cd.iso target runs mkisofs to generate a CD image using the initramfs image and other files taken from isolinux.
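The incantation is along these lines - a sketch, and the filenames depend on where your isolinux files actually live:

    mkisofs -o boot_cd.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table cd_root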

Uploading it

We had to slightly patch the elastichost-upload script to add the ability to create shared images as well as exclusive ones. This is controlled by the claim:type key, which the elastichosts API docs describe as follows: "either 'exclusive' (the default) or 'shared' to allow multiple servers to access a drive simultaneously"

The patched version is in the git repo, accompanied by the patch.

Once you've uploaded the first one you can uncomment the DRIVE_UUID param at the top of Makefile so that subsequent attempts update the same drive instead of creating a new one every time.

Conclusion

There you have it. It's certainly a bit rough and ready right now and requires editing a few too many files to be completely turnkey, but hopefully it will save someone somewhere some time. If you have bug fixes, send me patches (or fork it on github and send me pull requests); if you have suggestions, my inbox is open; if you know you need something like this but can't understand what I'm writing about, my consulting rates are reasonable ;-)

Syndicated 2011-01-22 22:35:28 from diary at telent netowrks

Hating on HATEOAS

In 2011 I will not start blog posts with the word "so".

Lately I've been thinking about RESTfulness again. An observation that has been widely made is that Roy T. Fielding's definition of REST differs wildly from what most of the rest (sorry) of the world thinks it is - while all people with taste and discrimination must surely agree that the trend from evil-tasting SOAPy stuff back to simple HTTP-based APIs is a Good Thing, the "discoverability" and "hypertext" aspects of Canonical REST are apparently not so widely considered as important for practical use.

My own small contribution to this debate is that the reason people are not trying to do HATEOAS is that they've been told that the web at large - the large part of the WWW that's mediated through ordinary web browsers under the direction of human brains - is an example of how it works. And the more I think about it the more I think that the example is rubbish and unhelpful.

It's a rubbish example because the browsers through which we're viewing these resources have very limited support for most of the HTTP verbs and HTTP response codes that REST requires, hence silly workarounds like tunnelling PUT inside POST using _method=put. In a way that's a trivial complaint because the workarounds do exist, but it's still a mess. (Note for purists: I write "REST requires" when what I really mean is "HTTP defines", but you don't qualify for the "RESTful" badge if you're misusing HTTP, and there's little social cachet in describing an API as consensual HTTP)

It's also a rubbish example because humans tend to expect a multi-stage "workflow" or "wizard" interaction, but HTML has lousy support for updating state and indicating a transition at the same time. A representation of a resource might include a form to update the state of that resource, but it says nothing about what you can do when it's been updated. Alternatively (or additionally) it might include a navigation link to another resource, but that will be fetched with a GET and won't change anything server-side. Let's take a typical shopping cart as an example: a form with two buttons for "update quantity" and "go to checkout" - whichever button you press, the resource that gets POSTed to is the same in either case, and any application state transition that might happen after you click is driven by the server sending a redirect (or not) - in effect, the data sent by the client smooshes together both the updated resource state and the navigation, which doesn't smell to me like hypertext. And as a side note, we may yet decide to ignore the client's indication of where it wants to go next if the data supplied is not valid for the current state of the resource, and instead send another copy of the shopping cart page prefixed with a pretty red box that says "sorry, you can't have 3.2j widgets" - and in all probability send it with a "200 OK" response code because there's no point sending any fancy kind of 40x when you don't know whether the browser will display it or will substitute with its own error page.

And thirdly it's a rubbish example because of the browser history stack and the defensive server-side programming that becomes necessary when your users start to treat your story as a Choose Your Own Adventure game. The set of state transitions available to the user is in practice not just the ones in the document you're showing him, but also all the other ones you've shown him in any of n previous documents: some of them may still be allowed, but others (changing the order details after you've charged his card) may not. Sending him "409 conflict" in these situations is probably not going to make him any wiser - you're going to have to think about the intention behind his navigational meander and do something that makes sense for the mental model you think he has. Once the user has hit the Back button and desynced the application state from the server-side resource state, you're running to catch up.

To summarise, a web application designed for humans needs to support human-friendly navigation and validation in ways which current browsers can't while keeping true to the intended uses of HTML and HTTP and RESTful style in general. This doesn't mean I think HATEOAS is bad as a concept - I just think we should be looking elsewhere than the human-driven web for an example of where it's good (and I haven't really found a compelling one yet).

I have a nasty feeling that the comments on this site are presently broken, but responses by email (dan @ telent.net) are welcome - please say if you want your email published or not.

Syndicated 2011-01-05 14:39:09 from diary at telent netowrks

Some pre-Christmas cheer - shameless plugs

At around the end of October we were going to have a whole series of posts about all the new stuff we're doing at $WORK, including

  • how to build an nfsroot web server farm based on Debian and using the fine services of Elastichosts, and
  • the great people at Loadstorm and how they make web app load testing cheap and easy (TBH, not much of an article in this one even, it really /is/ easy)

but due to a marginally missed deadline and then the knock-on effect of coping with all the new stuff they put in at Sagepay, I've not had time to write much. But in the meantime, suffice to say that the services linked from this post have amazed me not only by working properly (ok, in itself not that amazing) but by their swift, helpful and clueful email tech support to my queries. I'd like to add Bytemark to that list, and let it be generally known that I feel bad about the stuff there that we'll be decommissioning shortly - but hopefully we can put some business their way again in future.

Anyway, the new systems are now all up (although not actually running all the new code as yet) - maybe in the New Year I can describe in more detail the installation process. It needs documenting somewhere, anyway...

Syndicated 2010-12-21 16:01:36 from diary at telent netowrks

Streaming media with Sinatra for lightweights

I started looking at all the UPNP/DLNA stuff once for a "copious spare time" project, but I couldn't help thinking that for most common uses it was surely way over-engineered. What is actually needed, as far as I can see, is

  1. a way to find hosts that may contain music
  2. a way to search the collections of music therein for stuff you might want to listen to
  3. a way to get the bits that encode that music across the network
  4. a way to decode them, and push them to a DAC and some speakers

And it seems to me that DNS Service Discovery ought to cover the first requirement quite adequately, HTTP is perfectly suited to pushing bits across a network, and once you've got the bits to the client then everything else is trivial. So this only leaves "search the collection" as an unsolved problem, and it surely can't be too hard to do this by e.g. sending an XQuery search to the collection server and having it return an XSPF playlist of matching files.
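For the discovery leg, for instance, something as small as this would do - service name, port and TXT record all invented for the example:

    avahi-publish-service "Music on $(hostname)" _http._tcp 8080 "path=/songs"

and a client can find it again with avahi-browse -r _http._tcp.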

Probably the only reason I haven't done this yet is that I don't know the first thing about XQuery, and I can't see a RESTful way to send XQuery to a server without misusing POST, because from the examples I have seen it looks too big to fit in a GET query string. So I'm letting it all mull in my mind in the hope of coming across a truly succinct search syntax that does like being URL-encoded. In the meantime, though - because even though I don't need to discover my music, I still want to play it in the living room - here's my one hour hack:

Syndicated 2010-12-06 22:01:00 from diary at telent netowrks

Let me (ac)count the ways: Sagepay Admin API vs Ruby Enumerable

At $WORK we accept credit card payments through Sagepay Server - a semi-hosted service that enables us to take cards on a service that looks like our web site but without actually having to handle the card numbers. Which is nice, because the auditing and procedure requirements (google PCIDSS for the details) for people who do take card numbers are requirementful and we have better things to do.

Anyway, for reasons too grisly to go into, I found myself yesterday writing some code, in Ruby, that would talk to the snappily named "Reporting and Admin API". (It used to be called "Access", but, just like Mastercard once upon a time, apparently got renamed). It's not particularly difficult, just a bit random. You create a bunch of XML elements (note, no root node) indicating the information you want, plus the vendorname/username/password triple that you'd use to sign in to their admin interface; then you concatenate them, being sure not to introduce interelement whitespace; then you take an md5 hash of the result; then you delete everything inside the <password> tags and substitute <signature>md5 hash goes here</signature>. Then you surround it all with <vspaccess> and </vspaccess>.

If that sounds like doing XML by string munging, that's pretty much exactly what it is, but you don't want to do it using an XML library like anyone sane would do, because that might introduce whitespace or newlines or something which will upset the MD5 hash. Why didn't they use something standard like HTTP Digest authentication (or even Basic, since it's going out over HTTPS anyway)? No, I don't know either. At the least they could have specified that the hash goes somewhere other than in the body of the message it's supposed to be hashing.
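In Ruby, the dance comes out roughly like this - a sketch only: the element names other than password and signature are abbreviated stand-ins, so check the R&A docs for the real vocabulary:

    require 'digest/md5'

    def vspaccess_request(command, vendor, user, password)
      # element order matters: the signature is an md5 of the literal string
      inner = "<command>#{command}</command>" +
              "<vendor>#{vendor}</vendor>" +
              "<user>#{user}</user>" +
              "<password>#{password}</password>"
      sig = Digest::MD5.hexdigest(inner)  # hash taken with the password in place...
      inner = inner.sub(%r{<password>.*</password>},
                        "<signature>#{sig}</signature>")  # ...then the password comes out
      "<vspaccess>#{inner}</vspaccess>"
    end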

Anyway, more Ruby content. The sagepay R&A call for getTransactionList takes optional startrow and endrow arguments but doesn't say what the defaults are if they're not supplied: inspection says that unless you ask for something else you'll get the first fifty, and it's not completely unreasonable to suppose this is because you'll get timeouts or ballooning memory or some other ugliness if you ask for 15000 transactions all at once. So we probably want to stick with fifty (or whatever they've decided is a good number) and do further queries as necessary when we've dealt with each block. But if we have to handle this in the client it's going to be kind of ugly.

Fortunately (did I say we were getting to some Ruby content? here it is) we don't have to, because of the lovely new support in 1.9 for external Enumerators. An Enumerator is an object which is a proxy for a sequence of some kind. You create it with a block as argument, and every time some code somewhere wants an element from the sequence it executes the block a bit more until it knows what value to give you next. This sounds trivial, but it makes control flow so much simpler it's actually pretty gorgeous, because the control flow in the block is whatever you need it to be and the interpreter just jumps in and out as it needs to. Just call yielder.yield value whenever there's another element ready for consumption, and what you do between those calls is up to you.

This is kinda pseudocodey ...

Enumerator.new do |yielder| # this arg name is convention
  offset = 0  # local to the block, so each fresh enumeration starts at row 0
  loop do
    doc = get_fifty_requests_starting_at(offset)
    doc.elements.each do |element|
      yielder.yield element # control goes back to the caller here
    end
    if doc.elements.size > 0  # there are probably more elements to get
      offset += 50
    else
      break # end of the results
    end
  end
end
and this is kinda too long to illustrate the point quite as effectively, but does have the benefit of actually doing something useful: https://gist.github.com/662821
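The payoff is that the caller gets to treat the whole transaction list as one lazy sequence. A sketch, assuming the enumerator above is returned by a method called transactions, and that process is a stand-in for whatever you do with each row:

    txns = transactions
    txns.take(3)                 # runs the block only far enough for one page
    txns.each {|t| process(t) }  # pages through everything, fifty rows at a time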

If you find it useful, I am making it available under the terms of the two-clause BSD licence. If you want to extend it, send patches. If I need more of the API methods I'll be extending it too. If either of the two preceding things happen and cause it to grow up I'll move it into a proper github project and make it play nice with gem/bundler/all that goodness

Syndicated 2010-11-04 20:51:27 from diary at telent netowrks

Anhedonic Android

I'm having another go at Android development: this happens every so often when the memories of Java verbosity are sufficiently dulled by distance that I start thinking "it can't have been that bad, can it?". And of course, it turns out every time that yes, it can.

Anyway, should you be doing Android programming and getting a runtime error of the form Your content must have a TabHost whose id attribute is 'android.R.id.tabhost' when your latest changes shouldn't have involved a tabhost anyway, my advice is not to spend too long looking for the cause until after you have run ant clean and reinstalled on the target device or emulator. Because my experience is that the error then usually goes away with no code changes required.

Another tip to lessen the monkey-clicking: use the am command to launch your app automatically after a successful build/install. I don't know ant nor do I want to learn it right now (include XML rant by reference here) so I've added to the creaking edifice with a small Makefile

PATH:=/usr/local/lib/android/tools:$(PATH)

bin/onelouder-debug.apk: $(shell find src -name \*.java) $(shell find res -name \*.xml )
	ant debug

go: bin/onelouder-debug.apk
	adb -e install -r bin/onelouder-debug.apk
	adb -e shell "am start -a android.intent.action.MAIN -n fm.onelouder.player/.Onelouder"

clean:
	rm -rf bin gen
	ant clean

Syndicated 2010-09-16 10:39:36 from diary at telent netowrks

RESTless spirits

From stackoverflow.com

I have read up many articles on Rest, and coded up several rails apps that makes use of Restful resources. However, I never really felt like I fully understood what it is, and what is the difference between Restful and not-restful. I also have a hard time explaining to people why/when they should use it.

If there is someone who have found a very clear explanation for REST and circumstances on when/why/where to use it, (and when not to) it would benefit the world if you could put it up, thanks! =)

Content-Type: text/x-flamebait

I've been asking the same question lately, and my supposition is that half the problem with explaining why full-on REST is a good thing when defining an interface for machine-consumed data is that much of the time it isn't. OK, you'd need a really good reason to ignore the commonsense bits (URLs define resources, HTTP verbs define actions, etc etc) - I'm in no way suggesting we go back to the abomination that was SOAP. But doing HATEOAS in a way that is both Fielding-approved (no non-standard media types) and machine-friendly seems to offer diminishing returns: it's all very well using a standard media type to describe the valid transitions (if such a media type exists) but where the application is at all complicated your consumer's agent still needs to know which are the right transitions to make to achieve the desired goal (a ticket purchase, or whatever), and it can't do that unless your consumer (a human) tells it. And if he's required to build into his program the out-of-band knowledge that the path with linkrels create_order => add_line => add_payment_info => confirm is the correct one, and reset_order is not the right path, then I don't see that it's so much more grievous a sin to make him teach his XML parser what to do with application/x.vnd.yourname.order.

I mean, obviously yes it's less work all round if there's a suitable standard format with libraries and whatnot that can be reused, but in the (probably more common) case that there isn't, your options according to Fielding-REST are (a) to create a standard, or (b) to augment the client by downloading code to it. If you're merely looking to get the job done and not to change the world, option (c) "just make something up" probably looks quite tempting, and I for one wouldn't blame you for taking it.

Syndicated 2010-08-23 20:04:19 from diary at telent netowrks

Do not meddle in the affairs of Wizards

From github

A less resource-heavy way to do realistic regression tests (and eventually load tests) than controlling an actual web browser a la watir.

  • Interact with your web site using Firefox.
  • Capture the requests sent with Tamper Data, and export as XML
  • Replay them from the command line
    • with realistic timing
    • with SSL support
    • with a 'rewrite' step that lets you programmatically change the request data before sending it (e.g. to switch hostnames from production to test, or vice versa)
    • using the single-threaded low-overhead goodness of EventMachine

For an event this autumn that I'm probably not allowed to tell you about, $WORK needs a web site that deals with 50x as many transactions as the current box. Current plan is to move it into the cloud and add memcached for everything that might conceivably benefit, but step one in performance tuning is, of course, to get a baseline.

And it's an excuse to learn EventMachine

Syndicated 2010-08-07 16:50:20 from diary at telent netowrks

Using a public routed network on a Vigor 2700

If you have a tech-friendly ISP (like mine) your DSL service might have not just a static IP address (one that doesn't change each time you reconnect) but several of them. In my case, 8 (five usable).

If you have a Draytek Vigor router, you can configure it to know about these using the 2nd subnet support - this is what I did before I moved, with the 2600 I had at the time.

If you have the specific Draytek Vigor 2700 model that I have (and I don't know how wide a problem this is) you may attempt to follow these instructions but find that the configuration options for the second subnet are missing. The option for the DHCP relay agent is missing too. I tried a bunch of stuff including factory reset, firmware upgrade, and "phone a friend" to resolve this before eventually grasping the nettle and fiddling with firebug and HTML "view source".

The situation seems to be that the router is entirely capable of doing both these things (if you're reading this, it must be) but javascript variables govern whether the HTML configuration interface actually lets you, and for no reason I can think of these variables (called HIDE_LAN_GEN_2NDSUBNET and HIDE_LAN_GEN_DHCPRELAY) are, on my router, set to true. So, log in to the router, pull up the firebug console, enter

parent.HIDE_LAN_GEN_2NDSUBNET=false
parent.HIDE_LAN_GEN_DHCPRELAY=false
and then choose "This frame", "Reload" from the right-click menu in the main frame, and you should find they magically reappear and you can configure them appropriately. This will almost certainly take you less time to do than it did me to work out.

Syndicated 2010-05-05 14:57:31 from diary at telent netowrks
