Older blog entries for dan (starting at number 139)

What I miss most about Lisp

It’s been three months since I wrote anything longer than one line in Lisp, and over a year since I wrote more than a screenful of the stuff.

What I miss most is not CLOS or the REPL or even macros (per se, anyway). It’s

  • the distinction between READ and EVAL: a sane syntax for constructing complex data structures that look like code, but without actually having the data structures in question interpreted
  • and backtraces with the values of function parameters in them. When you’re doing the same thing 1000 times to different database rows or objects in a collection, and one of them has a nil in it somewhere, it would be really nice to know which one.

And maybe the REPL (although irb does most of that). And kinda sorta Defsystem, but I seem to be in the process of reimplementing that.

But I hope soon to get Rubinius installed, just because I still have the irrational opinion that a grown-up programming language ought to be able to implement itself (and I have a thing for native code), so project 1 there is to see if I can hack the backtrace thingy into it at least.

Syndicated 2011-04-30 22:23:33 from diary at Telent Netowrks

TDD, BDD, executable specification

The new system at $WORK finally went live about a week ago, hurrah.

The upgrade itself took a few hours longer than I'd have liked, and (short shameful confession time), some (but probably not all) of this could have been caught by better test coverage. Which set me on a path towards SimpleCov (I'm using Ruby 1.9, rcov doesn't work), which led me to start looking at the uncovered parts, which set me to thinking. Which, as we all know, is dangerous.

TDD advocates (and pro-testing people in general) say "Don't test accessors". There are two reasons to say this that seem to my mind like good reasons: Ron says it because he wants you to write tests that do something else (something useful) that happens to involve calling those accessors. J B Rainsberger says it because "get/set methods just can't break, and if they can't break, then why test them?"

The problem comes when you adopt the mindset typified by BDD that "the test examples are actually your executable specification", because in that case how do you specify that the object has an accessor? This is not an unreasonable demand. Suppose we have objects whose purpose is to store structured data that will be used by client code - for example, User has an age property. Jbrains - which must surely be the Best Nickname Ever for a Java guy - says there's no useful test you can write for this (or not unless you don't trust your platform or something, but that way madness lies). But even if we are going to write one: a test that stores one or a few example values can easily be faked by the bloody-minded implementor ("the setter is called with the argument 42, then the getter is called and should return 42? I know! def age; 42; end") and a test that stores all possible values and tests they can be retrieved will take forever to write/read/run. Really the best notation in which to specify the behaviour of that property is the same notation which, when run by the interpreter, will implement the said behavior -

    attr_accessor :age
It's not just accessors either. Everything on the continuum between declarations of constants (SECONDS_PER_DAY=86400, are you really going to write a test for that?) and simple mathematical formulae
    class Triangle
      def area
        self.base * self.height / 2.0
      end
    end
are most readably expressed to humans as, well, the continuous functions that they are, not the three or four example data points that we might write examples to test for. For any finite number of test cases, you can write a giant case...when statement that passes all of them and still doesn't work in the general case.
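To labour the point with a deliberately daft sketch (the example values are plucked out of the air): here's a Triangle that satisfies any handful of example-based tests you care to write and computes precisely nothing in general.

    class Triangle
      def initialize(base, height)
        @base, @height = base, height
      end
      attr_reader :base, :height

      # passes example-based tests for (3,4), (10,10) and (1,2) triangles,
      # and is hopelessly wrong for every other triangle
      def area
        case [base, height]
        when [3, 4]   then 6.0
        when [10, 10] then 50.0
        when [1, 2]   then 1.0
        end
      end
    end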

Yes, we could and often should write a couple of tests just to make sure we haven't done anything boneheaded in implementing the function, but they're not spec. They're just examples.

But here's the rub: where or how do we put that code to make it obvious that it's specification that happens also to be a valid implementation - and not just implementation that may or may not meet a spec expressed in some other place/form? If we're laying out our app in conventional Ruby style, it can't go in the spec/ directory because that doesn't actually get run as part of the application, and it shouldn't go in lib/ or models/ or wherever else unless we are prepared to make our clients rootle through all that code looking for whatever "this is specification not just an implementation detail" flag we decide to adopt when they want to use our interface.

I'm going to make a suggestion which is either radical or bone-headed: we should smush the rspec-stuff together with the app code: embed examples (which may in some cases be specification and in other cases be "smoke tests") in the same files as the implementation (which may itself sometimes be specification and other times be the result of our fallible human attempts to derive implementation from spec), and then we can have some kind of annotations to say which is which, and then we can have some kind of rdoc-on-'roids literate programming tool (To Be Implemented) go through the whole lot and produce two separate documents. One for people who want to use our code and want to know what it should do, and the other for the people who have to hack on it and need to know how it does. Or doesn't. And then maybe we can have code coverage metrics that actually mean something.
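For what it's worth, here's roughly the kind of thing I have in mind. The @spec/@example markers are plucked from thin air, and the tool that would read them is still firmly in the To Be Implemented column:

    # user.rb -- implementation and examples living in the same file
    class User
      # @spec: a User has an age property
      attr_accessor :age

      # @spec: this constant really is just this number
      SECONDS_PER_DAY = 86400
    end

    # @example (smoke test, not spec): guard against boneheaded mistakes
    if __FILE__ == $0
      u = User.new
      u.age = 42
      raise "boneheaded" unless u.age == 42
    end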

Syndicated 2011-04-07 14:26:57 from diary at telent netowrks

Corner cases

As you see in the image, right, my notebook recently took a dive onto a laminate floor and ended up a trifle dog-eared. Amazingly, the hardware crash didn't provoke a software crash, but not wishing to take any chances with running it while bending the motherboard, I shut it down myself. Then I dismantled it and dismantled its predecessor, and swapped parts about between them to create one combined functional machine. In the interests of extending battery life I left the fingerprint reader disconnected and removed the original HDD, leaving only the second (and faster) 2.5" disk in the optical media bay.

The new old laptop (henceforth to be known as Igor) worked for a couple of days until I left it running on battery overnight, and when I reapplied power in the morning I got a rather nasty-looking "Non-system disk or disk error" message. The usual tedious mucking about with rescue images on USB keys revealed that the disk was still there and fully working, but the BIOS was determined not to see it until I remembered: PATA has two disks on the same channel. Sure enough, after removing the 'slave' jumper on the disk drive I suddenly had hda where previously I had a hole where hdb used to be.

Getting back to a state where it would actually boot, though, that was the trick. Something in Debian's initramfs generation does some kind of magic to detect that the root fs is stored in a logical volume of a volume group of an LVM physical volume on a LUKS device-mapper layer over the physical device, but something in the Debian installer's rescue mode wouldn't let me run anything grub-related (it tried: it failed with unhelpful error messages) after mounting the disk on /target, and nothing in the Debian installer would let me reinstall / without also clobbering /home, which I wasn't really interested in restoring from two-day-old backup. The reason for this rant, though, is not to vent (I did that on Twitter last week) but to document somewhere that if you're booting off a disk that has changed its name or address since this magic stuff happened, you can override it with appropriate kernel command line parameters: in my case it was

cryptopts=source=/dev/hda2,target=eamcs,lvm=eamcs-root
where source is the raw disk, target is the name of the mapping that the DM encryption code will set up for the decrypted PV (if you don't understand that clause then take heart because I'm not sure I do either, but if you set things up the Debian way it's probably your hostname), and lvm is the appropriate LV name inside that PV.

Then it booted! There were some interesting warnings and it asked me for my passphrase an extra few times for luck, and then I edited /etc/crypttab and /etc/fstab and purged and reinstalled the grub-pc and initramfs-tools packages. Which may or may not have been strictly necessary but by that stage seemed prudent.

Syndicated 2011-03-07 14:25:12 from diary at telent netowrks

The Neighbour-Net Proxy Protocol

Borrowing a riff from Charlie Stross' "books I will not write" meme, I present the first in a series of indeterminate length entitled "Software I will not implement".

This is the result of a couple of days' thinking about how to do a distributed Facebook (or at least, the interesting bits thereof), originally inspired by Eben Moglen's Freedom in the Cloud talk last year, and my subsequent disappointment to see that our most publicly well-known hope Diaspora were all gung-ho about their implementation but publicly completely silent about the protocols. In my opinion a monoculture is not the way to a robust ecology.

So (per the opening para of this post) why aren't I implementing it? Purely and simply, a severe deficiency of Copious Free Time. I am posting what I've got publicly in the hope that it triggers some good ideas in others. In the (perhaps unlikely) event that anyone reading this thinks it's an awesomely good idea and does have the time to drive it forwards, take it or fork it and let me know, and I will of course be deliriously happy to flip-flop on this position and shelve something else instead if I can contribute.

https://github.com/telent/nnpp # READ ME; READ ME NOT; READ ME; READ ME NOT;

Syndicated 2011-03-02 14:01:46 from diary at telent netowrks

Backing up a workgroup server with rsnapshot

Lately I have been changing the way that $WORK backs up their office server. It should be pretty simple, right? Removable USB drive they can take home at night, rsnapshot, win.

Almost. The problem is user-proofing.

The disk should not be mounted when not in use

Really, I don't want end-users unplugging a mounted drive. So automounting when the drive is plugged in is a non-starter.

The disk has to be mounted before doing the backup

You'd think you could use cmd_preexec in rsnapshot.conf for this. No. If cmd_preexec fails (e.g. because the disk is not plugged in), rsnapshot prints a warning (which nobody will read) and then goes on and does the backup anyway.

There is a neat option, no_create_root, which you'd think was designed for this: "rsnapshot will not automatically create the snapshot_root directory. This is particularly useful if you are backing up to removable media, such as a FireWire or USB drive". However, it does this test before running cmd_preexec, which is entirely the wrong order of events for our purposes.

The disk must be idle before being unplugged

Well, there's not really any physical way to stop people unplugging it during a backup, but we could at least give ourselves a sporting chance by telling them when the backup's happening.

There are several parts to this: first we spend an inordinate amount of time fighting with zenity to make it do something vaguely sane before giving up and creating a workaround that lets it do what it wants instead. The result looks a lot like rsync-wait.sh; if your external HDD is not called Fred, feel free to amend the text messages. If you're one of the vanishingly small number of people who has to use GNOME but isn't on a Linux kernel, or for some other reason doesn't have inotify, you'll have to replace inotifywait with e.g. sleep 10.

We want this to be run for every user, because they all have physical access to the server and any of them might be charged with taking the disk home. So we need a way of launching this script on login and stopping it on logout - surprisingly, the latter is harder. If you write an X client program in a real language it will probably eventually notice (or just die with SIGPIPE) when its X server socket closes, but a shell script doesn't necessarily have a persistent X connection open - so if we start the script from Xsession.d or similar and you log out and in again, the process won't die, and you end up with two copies running at once. Double bubble trouble.

To save you the afternoon that this took me to figure out, the simplest answer is "start it from xterm". So, create a .desktop file and drop it in the /etc/xdg/autostart directory

dan@carnaby:~$ cat /etc/xdg/autostart/rsync-wait.desktop 
[Desktop Entry]
Encoding=UTF-8
Name=Backup Disk notifier
Comment=System tray icon for notifying rsnapshot running
Exec=/usr/bin/xterm -iconic -e /usr/local/bin/rsync-wait.sh
Terminal=false
Type=Application
NotShowIn=KDE;
StartupNotify=false
Categories=GTK;Monitor;System;

So far, this is working. I think.

Syndicated 2011-02-10 21:52:00 from diary at telent netowrks

How to create a diskless elastichosts node

Elastichosts is a PAYG (or monthly contract) "cloud" virtual server provider based on the Linux kvm technology. At $WORK we use it to provide a horizontally scalable app service, and we need to be able to add new app servers in less time than it takes to copy a complete working Debian system. Also we want to be running the same version of the same software on every server (think "security updates") and we don't want to be paying for another 3GB of Debian that we don't really need on each box. So, we need that stuff to be shared.

Elastichosts don't directly support kvm snapshots (or they didn't when I asked them about it) which leaves us looking for alternative ways to do the same thing. This blog entry describes one such approach: we use a read-only CD image for the root filesystem and then mount /usr and /home over NFS and a ramdisk (populated at boot) on /var. It's all done using standard Debian tools and Debian setup as of the "squeeze" 6.0 release.

The finished thing is on github at https://github.com/telent/squeeze-cd-nfsroot/ . To use, basically you clone the repo into /usr/local/client, edit the files, and run make. Slightly less basically, you almost certainly need to know what edits to make to which files, and you may also want to know how it works anyway. So read on ...

(Yes, you should be able to clone it elsewhere because I shouldn't have hardcoded that directory name into the Makefile. This may be fixed in a future version if I ever find the need to install it somewhere else myself. Or see the 'conclusion' section if you want to fix it yourself)

How the client boots

  1. the client boots off a CD (ISO9660) image created by initramfs-tools which is configured to look for an nfsroot directory. This directory is created on the server by a Makefile rule that copies the server's root dir and then replaces, renames and changes a bunch of stuff in /etc

  2. it then mounts a ramdisk on /tmp and another on /var. There is an initscript populate_var which creates all the empty directories that daemons will expect when they start up. Note that these directories are entirely ephemeral, which means for example that syslog must be configured to log remotely

  3. it mounts /usr and /home (readonly) directly from the server. This means that most of the packages on the server are available immediately on the clients - unless they include config files in /etc, in which case they aren't until you rerun the Makefile that creates the nfsroot (after, possibly, adjusting the config appropriately for the client)

A short guide to customising the system

These files are copied to the client - you may want to review their contents

  • template/etc/fstab needs to have the right hostname for your NFS server
  • template/etc/initramfs-tools/initramfs.conf - check DEVICE and NFSROOT settings
  • template/etc/network/interfaces may need tweaking
  • template/etc/resolv.conf is set up for our network, not yours
  • template/etc/init.d/populate_var might need directories added or chown invocations removed, depending on what packages you have installed
  • template/etc/rsyslog.conf needs editing for the syslog server's IP address

And also

  • insserv calls in Makefile may need adjusting if you have other services on the server that you don't want to also run on the client

And on the server

  • you'll need to be exporting the nfsroot/ directory as NFS, ditto /home and /usr. My /etc/exports looks something like this
    /usr/local/client/nfsroot 10.0.0.0/24(ro,no_root_squash,no_subtree_check)
    /home 10.0.0.0/24(ro,no_root_squash,no_subtree_check)
    /usr 10.0.0.0/24(ro,no_root_squash,no_subtree_check)
    

  • you need to run a dhcp server (I use "dnsmasq", which also provides DNS service)

  • If you want the clients to be able to syslog, you need to configure the syslog server to accept syslog messages from them

How we build the files

The nfsroot

Creating the nfsroot is done by the Makefile rootfs target

It starts by rsyncing the real root into nfsroot/ with a whole bunch of exclusions, then copies files from template/ over the copied files to cater for the bits that need a different configuration on the client than they do on the server, then does some other fiddling around. Most notably:

  • we have to copy libcrypto and libz into /usr because the dhcp client needs those libraries and it runs before /usr is mounted (see http://bugs.debian.org/592361 - though according to that page this bug is now fixed)

  • we blat the generated files etc/udev/rules.d/*persistent* which are correct for the server but not for the client.

  • Debian will run better with no /etc/hostname than it will with the wrong one

  • Debian squeeze uses a slightly exciting parallelising dependency-based system for running init scripts, so we can't just copy files into init.d, we need to run insserv to make it see them. (As a long-time Unix user who doesn't pay enough attention when these kinds of changes are made, this took ages to work out.) Similarly, to disable daemons that run only on the server, we use insserv -r.

  • a couple of files need to be writable, so we replace them with symlinks
    • /etc/network/run is pointed to /lib/init/rw
    • /etc/mtab is pointed to /proc/mounts

  • We create our own etc/resolv.conf. Our elastichosts clients generally have a public (dynamically allocated) IP address assigned to eth0 and a vlan attached to eth1. DHCP gets exciting here: the client boots off eth1 and gets the address of that interface using boot-time kernel code, then runs the user-space dhclient tool to get an eth0 address, and we'd rather not rely on the conjunction of all that to get /etc/resolv.conf right

  • populate_var pretty much does what it says on the tin but might need more directories adding/removing depending on what you have installed

The initramfs

The Makefile ramfs.img target makes an initramfs image which knows how to mount root on NFS. This particular magic is built into Debian, and the only point of note here is that we use nfsroot/etc/initramfs-tools as the config directory so we know we're generating a config for the client without treading on the server's usual initramfs config (which it might need when it boots itself). In our setup the only file that's actually changed is template/etc/initramfs-tools/initramfs.conf, which has settings for BOOT, DEVICE and NFSROOT that probably differ from what the server wants for itself.

Creating the cd image

This is pretty straightforward too. The Makefile boot_cd.iso target runs mkisofs to generate a CD image using the initramfs image and other files taken from isolinux.

Uploading it

We had to slightly patch the elastichost-upload script to add the ability to create shared images as well as exclusive ones. This is controlled by the API key claim:type, which the elastichosts API docs describe as follows: "either 'exclusive' (the default) or 'shared' to allow multiple servers to access a drive simultaneously"

The patched version is in the git repo, accompanied by the patch

Once you've uploaded the first one you can uncomment the DRIVE_UUID param at the top of Makefile so that subsequent attempts update the same drive instead of creating a new one every time.

Conclusion

There you have it. It's certainly a bit rough and ready right now and requires editing a few too many files to be completely turnkey, but hopefully it will save someone somewhere some time. If you have bug fixes, send me patches (or fork it on github and send me pull requests); if you have suggestions, my inbox is open; if you know you need something like this but can't understand what I'm writing about, my consulting rates are reasonable ;-)

Syndicated 2011-01-22 22:35:28 from diary at telent netowrks

Hating on HATEOAS

In 2011 I will not start blog posts with the word "so".

Lately I've been thinking about RESTfulness again. An observation that has been widely made is that Roy T. Fielding's definition of REST differs wildly from what most of the rest (sorry) of the world thinks it is - while all people with taste and discrimination must surely agree that the trend from evil-tasting SOAPy stuff back to simple HTTP-based APIs is a Good Thing, the "discoverability" and "hypertext" aspects of Canonical REST are apparently not so widely considered as important for practical use.

My own small contribution to this debate is that the reason people are not trying to do HATEOAS is that they've been told that the web at large - the large part of the WWW that's mediated through ordinary web browsers under the direction of human brains - is an example of how it works. And the more I think about it the more I think that the example is rubbish and unhelpful.

It's a rubbish example because the browsers through which we're viewing these resources have very limited support for most of the HTTP verbs and HTTP response codes that REST requires, hence silly workarounds like tunnelling PUT inside POST using _method=put. In a way that's a trivial complaint because the workarounds do exist, but it's still a mess. (Note for purists: I write "REST requires" when what I really mean is "HTTP defines", but you don't qualify for the "RESTful" badge if you're misusing HTTP, and there's little social cachet in describing an API as consensual HTTP)
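(For what it's worth, the Ruby end of that workaround is a stock piece of Rack middleware; a minimal sketch, with a made-up route and app name:)

    # config.ru -- Rack::MethodOverride rewrites REQUEST_METHOD from the
    # _method form field, so a browser's POST carrying _method=put ends up
    # at the put route below
    require 'sinatra/base'

    class Widgets < Sinatra::Base
      use Rack::MethodOverride

      put '/widgets/:id' do
        "updated widget #{params[:id]}"
      end
    end

    run Widgets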

It's also a rubbish example because humans tend to expect a multi-stage "workflow" or "wizard" interaction, but HTML has lousy support for updating state and indicating a transition at the same time. A representation of a resource might include a Form to update the state of that resource, but it says nothing about what you can do when it's been updated. Alternatively (or additionally) it might include a navigation link to another resource, but that will be fetched with a GET and won't change anything server-side. Let's take a typical shopping cart as example: a form with two buttons for "update quantity" and "go to checkout" - whichever button you press, the resource that gets POSTed to is the same in either case, and any application state transition that might happen after you click is driven by the server sending a redirect (or not) - in effect, the data sent by the client smooshes together both the updated resource state and the navigation, which doesn't smell to me like hypertext. And as a side note, we may yet decide to ignore the client's indication of where it wants to go next if the data supplied is not valid for the current state of the resource, and instead send another copy of the shopping cart page prefixed with a pretty red box that says "sorry, you can't have 3.2j widgets" - and in all probability send it with a "200 OK" response code because there's no point sending any fancy kind of 40x when you don't know whether the browser will display it or will substitute with its own error page.

And thirdly it's a rubbish example because of the browser history stack and the defensive server-side programming that becomes necessary when your users start to treat your story as a Choose Your Own Adventure game. The set of state transitions available to the user is in practice not just the ones in the document you're showing him, but also all the other ones you've shown him in any of n previous documents: some of them may still be allowed, but others (changing the order details after you've charged his card) may not. Sending him "409 conflict" in these situations is probably not going to make him any wiser - you're going to have to think about the intention behind his navigational meander and do something that makes sense for the mental model you think he has. Once the user has hit the Back button and desynced the application state from the server-side resource state, you're running to catch up.

To summarise, a web application designed for humans needs to support human-friendly navigation and validation in ways which current browsers can't while keeping true to the intended uses of HTML and HTTP and RESTful style in general. This doesn't mean I think HATEOAS is bad as a concept - I just think we should be looking elsewhere than the human-driven web for an example of where it's good (and I haven't really found a compelling one yet).

I have a nasty feeling that the comments on this site are presently broken, but responses by email (dan @ telent.net) are welcome - please say if you want your email published or not.

Syndicated 2011-01-05 14:39:09 from diary at telent netowrks

Some pre-Christmas cheer - shameless plugs

At around the end of October we were going to have a whole series of posts about all the new stuff we're doing at $WORK, including

  • how to build an nfsroot web server farm based on Debian and using the fine services of Elastichosts, and
  • the great people at Loadstorm and how they make web app load testing cheap and easy (TBH, not much of an article in this one even, it really /is/ easy)

but due to a marginally missed deadline and then the knock-on effect of coping with all the new stuff they put in at Sagepay, I've not had time to write much. But in the meantime, suffice to say that the services linked from this post have amazed me not only by working properly (ok, in itself not that amazing) but by their swift, helpful and clueful email tech support in response to my queries. I'd like to add Bytemark to that list, and let it be generally known that I feel bad about the stuff there that we'll be decommissioning shortly - but hopefully we can put some business their way again in future.

Anyway, the new systems are now all up (although not actually running all the new code as yet) - maybe in the New Year I can describe in more detail the installation process. It needs documenting somewhere, anyway...

Syndicated 2010-12-21 16:01:36 from diary at telent netowrks

Streaming media with Sinatra for lightweights

I started looking at all the UPNP/DLNA stuff once for a "copious spare time" project, but I couldn't help thinking that for most common uses it was surely way over-engineered. What is actually needed, as far as I can see, is

  1. a way to find hosts that may contain music
  2. a way to search the collections of music therein for stuff you might want to listen to
  3. a way to get the bits that encode that music across the network
  4. a way to decode them, and push them to a DAC and some speakers

And it seems to me that DNS Service Discovery ought to cover the first requirement quite adequately, HTTP is perfectly suited to pushing bits across a network, and once you've got the bits to the client then everything else is trivial. So this only leaves "search the collection" as an unsolved problem, and it surely can't be too hard to do this by e.g. sending an XQuery search to the collection server and having it return an XSPF playlist of matching files.

Probably the only reasons I haven't done this yet are that I don't know the first thing about XQuery, and I can't see a RESTful way to send XQuery to a server without misusing POST, because from the examples I have seen it looks too big to fit in a GET query string. So I'm letting it all mull in my mind in the hope of coming across a truly succinct search syntax that does like being URL-encoded. In the meantime, though, because even though I don't need to discover my music I still want to play it in the living room anyway, here's my one hour hack:
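Something along these lines (a minimal sketch of the idea rather than the hack verbatim; the music directory path and file extensions are made up, and Sinatra's send_file does the heavy lifting):

    require 'sinatra'
    require 'uri'

    MUSIC_ROOT = '/srv/music'   # made up; point this at the real collection

    # an M3U playlist of everything under MUSIC_ROOT, one URL per line
    get '/playlist.m3u' do
      content_type 'audio/x-mpegurl'
      Dir.glob(File.join(MUSIC_ROOT, '**', '*.{mp3,ogg,flac}')).map { |f|
        rel = f.sub("#{MUSIC_ROOT}/", '')
        "http://#{request.host}:#{request.port}/files/#{URI.escape(rel)}"
      }.join("\n")
    end

    # the bits themselves: anything that can fetch a URL can play them
    get '/files/*' do
      path = File.expand_path(File.join(MUSIC_ROOT, params[:splat].first))
      halt 404 unless path.start_with?(MUSIC_ROOT) && File.file?(path)
      send_file path   # Content-Type comes from the file extension
    end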

Syndicated 2010-12-06 22:01:00 from diary at telent netowrks

Let me (ac)count the ways: Sagepay Admin API vs Ruby Enumerable

At $WORK we accept credit card payments through Sagepay Server - a semi-hosted service that enables us to take cards on pages that look like our web site but without actually having to handle the card numbers. Which is nice, because the auditing and procedure requirements (google PCIDSS for the details) for people who do take card numbers are requirementful and we have better things to do.

Anyway, for reasons too grisly to go into, I found myself yesterday writing some code, in Ruby, that would talk to the snappily named "Reporting and Admin API". (It used to be called "Access", but, just like Mastercard once upon a time, apparently got renamed). It's not particularly difficult, just a bit random. You create a bunch of XML elements (note, no root node) indicating the information you want plus the vendorname/username/password triple that you'd use to sign in to their admin interface, then you concatenate them, being sure not to introduce inter-element whitespace, then you take an MD5 hash of the result, then you delete everything inside the <password> tags and substitute <signature>md5 hash goes here</signature>. Then you surround it all with <vspaccess> and </vspaccess>.
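In Ruby the whole dance comes out looking something like this - a sketch only, and the element names, their ordering and the endpoint URL below are placeholders to be checked against the real documentation:

    require 'digest/md5'
    require 'net/https'
    require 'uri'

    def vspaccess_xml(command, vendor, user, password, extra='')
      inner = "<command>#{command}</command>" \
              "<vendor>#{vendor}</vendor>" \
              "<user>#{user}</user>" \
              "#{extra}" \
              "<password>#{password}</password>"
      signature = Digest::MD5.hexdigest(inner)        # the hash includes the password...
      signed = inner.sub(%r{<password>.*</password>}, # ...which then gets swapped out
                         "<signature>#{signature}</signature>")
      "<vspaccess>#{signed}</vspaccess>"
    end

    xml = vspaccess_xml('getTransactionList', 'vendorname', 'username', 'password',
                        '<startrow>1</startrow><endrow>50</endrow>')
    uri = URI.parse('https://live.sagepay.com/access/access.htm') # placeholder URL
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    response = http.post(uri.path, URI.encode_www_form('XML' => xml))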

If that sounds like doing XML by string munging, that's pretty much exactly what it is, but you don't want to do it using an XML library like anyone sane would do, because that might introduce whitespace or newlines or something which will upset the MD5 hash. Why didn't they use something standard like HTTP Digest authentication (or even Basic, since it's going out over HTTPS anyway)? No, I don't know either. At the least they could have specified that the hash goes somewhere other than in the body of the message it's supposed to be hashing.

Anyway, some Ruby content. The Sagepay R&A call for getTransactionList takes optional startrow and endrow arguments but doesn't say what the defaults are if they're not supplied: inspection says that unless you ask for something else you'll get the first fifty, and it's not completely unreasonable to suppose this is because you'll get timeouts or ballooning memory or some other ugliness if you ask for 15000 transactions all at once. So, we probably want to stick with fifty or whatever they've decided is a good number and do further queries as necessary when we've dealt with each block. But if we have to handle this in the client it's going to be kind of ugly.

Fortunately (did I say we were getting to some Ruby content? here it is) we don't have to, because of the lovely new support in 1.9 for external Enumerators. An Enumerator is an object which is a proxy for a sequence of some kind. You create it with a block as argument, and every time some code somewhere wants an element from the sequence it executes the block a bit more until it knows what value to give you next. This sounds trivial, but it makes control flow so much simpler it's actually pretty gorgeous, because the control flow in the block is whatever you need it to be and the interpreter just jumps in and out as it needs to. Just call yielder.yield value whenever there's another element ready for consumption and what you do between those calls is up to you.

This is kinda pseudocodey ...

offset=0
Enumerator.new do |yielder| # this arg name is convention
  loop do
    doc=get_fifty_requests_starting_at(offset)
    doc.elements.each do |element|
      yielder.yield element # control goes back to the caller here
    end
    if doc.elements.size > 0 then  # there are probably more elements to get
      offset+=50
    else
      break # end of the results
    end
  end
end
and this is kinda too long to illustrate the point quite as effectively, but does have the benefit of actually doing something useful: https://gist.github.com/662821
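Client code then just treats the result as any other Enumerable - for instance (the method name wrapping the Enumerator above is made up):

    transactions = sagepay_transaction_list   # the Enumerator built above
    transactions.take(5).each { |t| puts t }  # only runs the block far enough to fetch the first page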

If you find it useful, I am making it available under the terms of the two-clause BSD licence. If you want to extend it, send patches. If I need more of the API methods I'll be extending it too. If either of the two preceding things happens and causes it to grow up, I'll move it into a proper github project and make it play nice with gem/bundler/all that goodness.

Syndicated 2010-11-04 20:51:27 from diary at telent netowrks
