Older blog entries for Bram (starting at number 91)

Quasi-review: Test Driven Development

The introduction to Test Driven Development by Kent Beck starts with the following:

Early one Friday, the boss came to Ward Cunningham to introduce him to Peter, a prospective customer for WyCash, the bond portfolio management system the company was selling. Peter said, "I'm very impressed with the functionality I see. However, I notice you only handle U.S. dollar denominated bonds. I'm starting a new bond fund, and my strategy requires that I handle bonds in different currencies." The boss turned to Ward, "Well, can we do it?"

Anyone interested in domain knowledge would at this point have done some research on exchange rate volatility and hedging, critically important subjects for anyone competently managing a bond portfolio which dabbles in multiple denominations. But extreme programming dictates that you jump right in and start coding. Sure enough, half the book is dedicated to the writing of a Value class which completely obfuscates that exchange rates vary over time and that a commission has to be paid every time a currency exchange is done.

Trust Metrics

My claim in my last diary entry that the algorithm will always find all violating subsets isn't true. For example, it's possible that there are two disjoint islands of nodes, the smaller of which has proportionately slightly fewer spam markings and hence drifts slowly upwards. If there's a set of spammers within that island which drifts downwards relative to it more slowly than the island as a whole drifts upwards, they won't be detected. This isn't a big problem in practice, since it only lets a few extra spams through. Any group which goes significantly over the limit will quickly get detected.

The last metric I gave suffers from the following bad artifact: If A certs B, and B isn't certed by anyone else and B spams, then A gets knocked out completely as a result as well as B. This can be fixed by differentiating between removed nodes which are directly certed by a non-removed node and removed nodes which are only certed indirectly, by other removed nodes. Indirectly certed nodes are removed, while directly certed nodes are only removed if they themselves spam more than the threshold. This makes it so that if you don't spam yourself but cert a bunch of dubious people, the worst that can happen is that your certs stop working. It also makes it possible for spammers to send twice as much spam, first indirectly and then directly, but that's no big deal, since we have several orders of magnitude to play with before spamming becomes worthwhile.
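
The fix can be sketched in code; the one-pass treatment and all the names here are my own assumptions rather than part of the original scheme:

```python
def prune_removals(certs, flagged, spam_counts, threshold):
    """One-pass sketch of the fix above: a flagged node certed directly
    by a surviving node stays unless its own spam count exceeds the
    threshold; nodes reachable only through removed nodes go too."""
    survivors = set(certs) - set(flagged)
    removed = set()
    for node in flagged:
        directly_certed = any(node in certs[s] for s in survivors)
        if directly_certed and spam_counts.get(node, 0) <= threshold:
            continue  # certed by a good node and under the limit: keep
        removed.add(node)
    return removed
```

So if A certs B, B certs spammer C, and B itself is clean, only C is removed and B merely loses the effect of its certs.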

The technique I gave allows a certain amount of spam per cert. Changing it to an amount per node rather than per cert may have significant advantages, both in terms of behavior and computation time, but I have to think about it more.

BeOS's GUI

Matt Brubeck informs me that the GUI API I described is essentially what BeOS did.

So BeOS had a good GUI system in addition to real time features. I don't know what criteria Apple used when it decided to purchase Next instead of Be, but technical merit clearly was not the overriding one.

<p>

The automatic insertion of <p> tags on advogato is too smart for its own good. The only coherent way to do it is to add <br> before every carriage return (assuming one isn't already there) and skip <p> tags completely.

Of course, advogato was just trying to do the 'right thing'. Supposedly the semantics of <p> are deeply meaningful and important. Some of us can even remember back in the day when anyone who put <br><br><br> in their html would get flamed for it. These days everyone can see that such flamage was stupid. The truth is that as soon as mosaic (the first good GUI web browser) came out it was completely obvious that html was all about layout. <p>'s semantics have been nothing short of a disaster. The many differences between browsers caused by implicit insertions of <p> in different places have caused no end of headaches.

The w3c, for its part, continues to claim that html is all about markup, and refuses to admit that not making <p> a simple shorthand for <br><br> is anything other than asinine.

Trust Metric Calculations

In a previous entry, the following problem came up: Is it possible to find in polynomial time all subsets of nodes in a graph containing less than half the nodes such that the number of certs into that subset is less than a tenth the number of nodes marked 'bad' in that subset? (I'm going to assume the ratio is ten for simplicity.)

It turns out that a simulation of a process akin to gravity can work. First, imagine all nodes are height zero, and there is a force pulling downwards. For each tick of the clock, first pull each node down one unit for every time it was marked as a spammer. Next, for each node A and B where A certs B, if B is lower than A then raise B by ten units and lower A by ten units (this can cause a lot of overshooting, but that averages out in the end). Finally, find where the median height node is and move all nodes upwards the same amount to leave the median node at height zero. Repeat this process many times (a polynomial function with a low exponent on the number of nodes will suffice, although the exact exponent isn't immediately obvious). Eventually all nodes which are part of spam groups plummet downwards, while all good nodes stay part of the central pack, reasonably close to height zero.
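
A toy implementation of the process, assuming the cert graph is a dict mapping each node to the nodes it certifies (the tick count and push size here are illustrative, not tuned):

```python
def settle(certs, spam_marks, ticks=100, push=10):
    """Simulate the gravity process described above: each spam mark
    pulls its node down one unit per tick, each cert edge pushes ten
    units from the higher node to the lower one, and everything is
    renormalized so the median node sits at height zero."""
    heights = dict.fromkeys(certs, 0)
    for _ in range(ticks):
        # gravity: spam marks pull nodes down
        for n in heights:
            heights[n] -= spam_marks.get(n, 0)
        # certs equalize: if A certs B and B is lower, swap ten units
        for a, targets in certs.items():
            for b in targets:
                if heights[b] < heights[a]:
                    heights[b] += push
                    heights[a] -= push
        # renormalize around the median node
        median = sorted(heights.values())[len(heights) // 2]
        for n in heights:
            heights[n] -= median
    return heights
```

On a small graph with a pair of mutually-certing spammers, the good nodes hold at zero while the spam island plummets.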

To use this for spam stoppage, apply this algorithm to all nodes, then remove all nodes which plummet downwards. If A sent mail to B, then if there's a path from B to A along cert edges which only covers nodes not removed then B accepts the mail, otherwise it gets rejected.

Interestingly, this algorithm not only approximates the solution well in a hand-wavy sort of way, but actually solves it rigorously. A very strange result for a simulation algorithm.

Book Reviews: Poker Nation, Positively Fifth Street, Word Freak

Several more highly recommended books in the scientific games genre, like Moneyball, which I reviewed earlier.

Poker Nation and Positively Fifth Street are both books about poker, specifically centering around what has become the standard game among serious players, no limit texas hold 'em. I recommend you first read Poker Nation. It's a quick read, gives a good overview of the culture around the game and the psychology of playing it, and covers basic strategy. Positively Fifth Street is much longer and has a much stronger narrative structure. It covers in parallel the murder of Ted Binion, owner of the Horseshoe casino in Las Vegas, and the World Series Of Poker, held in that same casino. The author decided to spend his budget for writing an article on entering satellite tournaments. You should avoid reading the back of this book before you're done with it; it gives away a lot of the story.

Word Freak is a book in the same genre but about scrabble. It's very comprehensive, covering the culture around the game, the trademark issues around the name, the controversy over dictionaries, tournaments, computer play, history of the game, basic strategy, and just about anything else you might think to wonder about it.

The unusual thing these books have in common, and why I love them so much, is that they cover the issues of luck and chance unflinchingly, pointing out just how much chance there is in who wins the big tournaments and how little meaning the patterns we read into winning history really have.

That exhausts all the books I know of this genre. I'd be happy to hear recommendations of others.

GUI Library APIs

GUI libraries, it seems, are designed poorly by tradition. Sure you could design one which isn't a big steaming pile of manure, but why break with tradition?

In case anyone reading this might happen to write a GUI library and want to break with tradition, I will now give a very high-level overview of the Right Way to do threading in a GUI library.

There is a separate thread which does nothing but graphical display. This isolates all the potentially CPU-intensive graphical tasks from the main application, and also prevents the application from making the GUI thread block. In addition, it enables lots of performance enhancements like having the GUI thread notice that a window was closed halfway through doing some CPU-intensive graphical effect on that window and not bothering to finish.

The only time the application developer writes code which will be executed in the GUI thread is when they're writing their own layout or widget code. All the calls to make things happen in the GUI (such as changing text or resizing windows) are threadsafe. This puts an end to all that clumsy invokeLater() crap which all the GUI library authors seem to think we love doing.

When events happen in the GUI they come back as callbacks on a special-purpose callback thread. If there are multiple processes then there's a separate callback thread per process, so that they can't lock each other's interfaces (note that Java got this horrifyingly wrong). If you as the GUI library author are feeling ambitious you can make callbacks be events pulled off a queue in a way which can be integrated into the application's main event loop. Of course, doing that without polling requires making a decent event notification system, but that's a whole other rant.
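
To make the callback-thread idea concrete, here is a minimal sketch in Python (the class and method names are illustrative, not any real toolkit's API): the GUI thread posts events onto a queue, and a dedicated callback thread drains it and runs the application's handlers.

```python
import queue
import threading

class EventDispatcher:
    """One callback thread per process: GUI code calls post() and never
    runs application handlers itself, so a slow handler can't block
    rendering and two processes can't lock each other's interfaces."""
    def __init__(self):
        self.events = queue.Queue()
        self.handlers = {}
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def on(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def post(self, event_type, data):
        # called from the GUI thread; never blocks on application code
        self.events.put((event_type, data))

    def _run(self):
        # the callback thread: pull events off the queue and dispatch
        while True:
            event_type, data = self.events.get()
            for handler in self.handlers.get(event_type, []):
                handler(data)
```

Exposing the queue itself rather than the thread would allow the integration into the application's main event loop mentioned above.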

There are a few small details which should go without mentioning but in practice don't so I'll cover them now.

Good double buffering should be supported in 1.0. Actually, it should be supported in the first alpha version.

It should be easy, nay trivial, to make windows which resize nicely. Graphical widget layout tools are a symptom of execrable libraries.

It should be possible to make a modal window display without blocking. In particular, anyone who writes code which automatically fires off a new thread while the previous one is blocking needs a good beating with a big stick.

Yes I've been haggling with GUI stuff today. How could you tell?

P vs. NP

Scott Aaronson has an interesting paper on whether P vs. NP is formally decidable. He draws a distinction between P vs. NP and the continuum hypothesis. We can imagine a very specific physical process which corresponds exactly to the truth of P vs. NP, but no equivalent exists for the continuum hypothesis; thus P vs. NP is something which we philosophically view as strictly true or false, regardless of whether we can ever prove it, while the continuum hypothesis might have a much vaguer independent state.

I'd like to point out that we can almost make a physical process which corresponds to the continuum hypothesis. Specifically, we can make a machine which assumes the continuum hypothesis and systematically proves all possible statements and halts if it ever finds a contradiction. However, this physical process doesn't correspond to the continuum hypothesis per se, but to whether the continuum hypothesis is consistent with our axiomatic basis, which is a subtly but importantly different question.

12 Sep 2003 (updated 12 Aug 2004 at 05:08 UTC)
Sender Permitted From

Several people pointed out that in my last post I reinvented Sender Permitted From.

It's great to see this happening. It will cause a dramatic reduction in the amount of spam that gets through, and has enough engineering simplicity and political force behind it to make success likely.

Free software people take note, SPF can use contributors.

Minesweeper 3d

I'm now in second place on the Minesweeper 3D world records chart.

Email Workflow

As mentioned previously, I've been working on a new email reader. Right now I'm doing design work. Here are a few notes -

All email activities are centralized around folders, typically corresponding to mailing lists. There is a 'main' folder, which is basically miscellaneous, and a 'junk' folder where all the spam/viruses/otherwise useless things go.

Each piece of mail has one of the following states - unread, read, to be dealt with, reminder, or junk.

Whenever you're looking at a piece of mail, you can set a reminder on that mail, which will make it go into reminder state at a time you specify.

The three tasks I do are read unread mail, browse to be dealt with, and read pending reminders. There will be a different mode for each of them. In all modes it should be possible to do the standard task just by hitting the same key repeatedly.

Threading will, of course, be supported.

Searching in common headers and message bodies, in either direction, will also be supported.

Filtering a mailing list to a folder will be very easy to set up when you're viewing a message to that list.

This already describes a much nicer mail reader than any I've used so far, so my feature set will stop there. My plan in terms of storage is to make it use maildir format, and have a separate file of its own containing classification info. It will always modify that file by appending to the end.

Rethinking Email Infrastructure

The bulk of spam email has bogus From: lines. Some simplification and tweaking of how email is delivered could block most of that.

Relays are obsolete, an old dinosaur from the days of UUCP. These days, only one architecture makes sense. For each domain, there are several IPs which mail originating at that domain might come from - a primary and backups in case the primary goes down. Likewise, on the receiving end there are several IPs which the mail can be sent to, a primary and backups. This is very basic fail-over. There may be another machine behind the recipients which speaks POP or IMAP, but that's outside of the SMTP portion of the protocol.

The way this could be made secure is very simple. The IPs for the recipients are already in DNS, the IPs of the senders could be put there as well. If the IP a piece of mail was sent from didn't match the one listed in the DNS of the From: line, it would be tossed out. Obviously this isn't a perfect form of security, but it would increase the difficulty of sending spam (and viruses) quite a lot.
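
A sketch of the receiving side's check, with an injected lookup function standing in for the real DNS query (the record format and the names here are assumptions for illustration):

```python
def sender_allowed(from_domain, sending_ip, dns_lookup):
    """Accept mail only if its source IP is among the IPs the From:
    domain publishes as legitimate senders.  dns_lookup maps a domain
    to its list of permitted sender IPs, or None if it publishes none."""
    permitted = dns_lookup(from_domain)
    if permitted is None:
        return True  # no sender records published; nothing to check
    return sending_ip in permitted
```

A receiver that can't verify still accepts here; tightening that default is a policy question rather than a protocol one.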

Sender information in DNS would have other benefits as well. For example, if I send mail as foo@bar.com, my mailer could do a DNS request to find out where bar.com sends its mail from and authenticate with that machine as foo, rather than forcing me to configure where my mail server is.

Undoubtedly there are some issues which would have to be worked out to make this universally deployable; for example, a huge site like hotmail.com may not be able to send everything from just a handful of IPs, but those aren't insurmountable.

Mail Programs

I've finally grown so sick of mail sent to me falling through the cracks that I've decided to write my own mail client. This is pathetic. My mail needs are extremely simple, following a very common work flow, and yet no available mail client handles them acceptably. Bram's law strikes again.

Related to that, does anybody know of a good Python library which will parse dates written in english, such as 'in 2 weeks', 'august 14th' and 'next month'?

Livejournal Valentine System

Stevey: You may wish to base a future version of the valentine system on the stable marriage problem. (Or, to simplify the interface a bit, the stable roommate problem).

Trust Metrics

Having everyone use a single central site for their messaging has tremendous benefits in terms of stopping spam. For one thing, there's no need to propagate data across the system. For another, it's reasonable to assume that the central site could make it sufficiently difficult to generate new ids that less than half the nodes in the system are spammers, which as we will see makes anti-spamming vastly easier.

My big problem with anti-certs up until now is that they enable censorship. I'm now convinced that censorship can be limited using the very simple mechanism of only allowing someone to anti-cert the sender of a message they actually received. It's probably a good idea to require that the anti-certing be done within a short period of time after the message is first read.

Now, for my improved algorithm for removing spammers. We would like for a spammer to be able to send out a fixed amount of spam for each node they compromise, and not get any benefit by making fake identities. Since we now have a concept of where the big island of nodes is, we can do exactly that. For a subset of nodes, we count the number of them who are certed by nodes in the big island. If that subset has 100 times as many spam marks as the externally certed nodes (or whatever the permitted amount of spam is) then none of the nodes in that subset are allowed to send out mail.
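
For a single candidate subset, the counting rule is easy to state in code (a sketch; `island_certs` here is assumed to map each big-island node to the nodes it certifies):

```python
def subset_blocked(subset, island_certs, spam_marks, ratio=100):
    """Block the whole subset when its spam marks exceed `ratio` times
    the number of its members certed from the big island."""
    externally_certed = {n for targets in island_certs.values()
                         for n in targets if n in subset}
    total_spam = sum(spam_marks.get(n, 0) for n in subset)
    return total_spam > ratio * len(externally_certed)
```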

I'm not sure of a good algorithm for finding such subsets, but intuitively it doesn't seem like an intractable problem. Also, a reasonable case could be made for counting edges instead of nodes for the amount of spam allowed.

I'm quite happy with how workable this approach seems. It looks downright practical.

Voting Machines

chalst: Certainly electronic voting machines could in theory both have a paper audit trail and be much more reliable than any past vote counting mechanism. Rebecca Mercuri, whose ejection from a conference got us talking about this in the first place, has some very good thoughts on the topic. Unfortunately the current batch of voting machines are a big step back in terms of reliability, even ignoring their lack of paper trail and likely rigging of elections.

Minefield

chalst: In the case of two infinites which don't have a clear ordering neither player will successfully challenge the other and they'll have to play another round. The point of it being a game is that the players have the opportunity to try to challenge each other without it ever being necessary to prove that no such challenge exists, since proving the consistency of large infinites is by nature impossible to do without assuming the existence of yet ever larger infinites.

Trust Metrics

I've implemented my latest trust metric. This implementation requires Python 2.3, and includes some 'clever' algorithmic things which should make it reasonably fast in practice. I've tested this code, but not as thoroughly as I'd like.

from __future__ import division
from bisect import insort
class TM:
    def __init__(self, seed, get_peers):
        self.seed = seed
        self.get_peers = get_peers
        self.point_to = {'phantom': [seed]}
        self.point_from = {seed: ['phantom'], 'phantom': []}
        self.targets = {seed: 1}
        # {peer: [(time, increase)]}
        # stays sorted by time
        self.dispensation_schedule = {'phantom': [(0, 1)]}
        # {peer: {peer: proportion}}
        self.dispense_to = {'phantom': {seed: 1}}
    def get_closest_peers(self, numaccept):
        result = []
        while True:
            if not result:
                times = [(1, self.seed)]
            else:
                times = []
                for i in self.targets.keys():
                    try:
                        times.append((self.time_to_finish(i), i))
                    except ZeroDivisionError:
                        pass
                times.sort()
            if not times:
                return result
            t, peer = min(times)
            result.append(peer)
            del self.targets[peer]
            if len(result) == numaccept:
                return result
            # change the dispensation schedules of everything which contributed to this new 
            # peer to reflect that their output is earmarked until the new peer is full
            for i in self.point_from[peer]:
                total = 0
                for j, (t2, inc) in enumerate(self.dispensation_schedule[i]):
                    if t2 > t:
                        self.dispensation_schedule[i] = [(t, total)] + self.dispensation_schedule[i][j:]
                        break
                    total += inc
                else:
                    self.dispensation_schedule[i] = [(t, total)]
            # for all complete peers which newly have all their friends full, 
            # redistribute their outflow to their friends
            completes = []
            for i in self.point_from[peer]:
                for j in self.point_to[i]:
                    if j not in result:
                        break
                else:
                    completes.append(i)
            for c in completes:
                self.redistribute(c, t)
    def time_to_finish(self, peer):
        incs = {}
        for i in self.point_from[peer]:
            for (t, inc) in self.dispensation_schedule.get(i, []):
                incs[t] = incs.get(t, 0) + inc
        inclist = incs.items()
        inclist.sort()
        current_rate = 0
        current_time = 0
        current_amount = 0
        for t, inc in inclist:
            new_amount = current_amount + (t - current_time) * current_rate
            if new_amount >= 1:
                break
            current_amount = new_amount
            current_time = t
            current_rate += inc
        return current_time + (1 - current_amount) / current_rate
    def redistribute(self, peer, t):
        # pull in friend information of peer's friends
        for p in self.point_to[peer]:
            if not self.point_to.has_key(p):
                self.point_to[p] = self.get_peers(p)
                for x in self.point_to[p]:
                    if self.point_from.has_key(x):
                        self.point_from[x].append(p)
                    else:
                        self.point_from[x] = [p]
                        self.targets[x] = 1
        # calculate the distribution of peer's outflow
        donefriends = []
        notdonefriends = []
        bounceback = 0
        for friend in self.point_to[peer]:
            if self.point_to[friend]:
                if self.dispense_to.has_key(friend):
                    donefriends.append(friend)
                    bounceback += self.dispense_to[friend].get(peer, 0)
                else:
                    notdonefriends.append(friend)
        try:
            mult = 1 / (len(donefriends) + len(notdonefriends) - bounceback)
        except ZeroDivisionError:
            mult = 0
        newdist = {}
        for friend in notdonefriends:
            newdist[friend] = mult
        for friend in donefriends:
            m = mult * (1 - self.dispense_to[friend].get(peer, 0))
            for p2, amount in self.dispense_to[friend].items():
                if p2 is not peer:
                    newdist[p2] = newdist.get(p2, 0) + amount * m
        # optimize peer out of other peers's distribution lists
        for i in self.point_from[peer]:
            if self.dispense_to.has_key(i):
                d = self.dispense_to[i]
                for friend, amount in newdist.items():
                    if friend is not i:
                        d[friend] = d.get(friend, 0) + amount * d[peer]
                del d[peer]
        self.dispense_to[peer] = newdist
        # add new scheduled distribution
        m = self.dispensation_schedule[peer][0][1]
        for friend, amount in newdist.items():
            insort(self.dispensation_schedule.setdefault(friend, []), (t, amount * m))
        del self.dispensation_schedule[peer]
Trust Metrics

I think I've figured out how to fix the gameability problems I've talked about in the last few entries.

Water pours in at a fixed rate from the seed. At any given time, each node is either trying to collect water, done collecting water and now dispensing it, or done dispensing water and simply passing it through. When a node is dispensing water it will do so at a rate depending on the amount of water flowing into it.

We will construct a schedule of what water goes where when. For each node which is trying to collect water, we ask 'if all dispensing nodes were to allocate their full amount to this node, when would it be full?' The node which fills fastest gets added first, and all the dispensation of the nodes which certify it gets allocated up until that time. When deciding on later nodes to add, it is assumed that dispensers don't start contributing until after their previously allocated contributions are done.

Note that when a node is first added its rate of dispensation is zero. When all of the nodes which a dispenser certifies have been added, then that node's rate of dispensation is distributed evenly across all the nodes it certifies. Dead ends are skipped over, to avoid water wastage. More sophisticated forms of backlogging indirect dead ends could be made, but the simplest one should work fine in practice.

I'm very enamored of this technique. It does accurate proportional representation and is very hard to game outside of the gameability inherent in having proportional representation at all. My one current gripe is that while the technique for selecting which node gets added next is linear in the number of nodes, the technique for redistributing the dispensation of full peers is quadratic. Harumph.

To see the gameability inherent in proportional representation, consider this case: There are three voters, who will elect three candidates. Two voters prefer A, B, and C, while the third one prefers A, B, and D. Clearly the winners should be A, B, and C, but the third voter can list only D and get exactly the candidates he wants elected.

Election Methods

On a related subject, I've for a long time been thinking about improvements to single transferable vote. There is a form of gameability of STV which I think can be completely overcome. Specifically, a voter can make their vote much stronger by taking a candidate who has no hope of winning and ranking them first. This causes very bad distortions, including the occasional candidate with no hope who actually wins!

My modification is surprisingly simple. The first winner is selected by the condorcet method. Later candidates are elected one at a time using essentially the condorcet method but with the pairwise results modified based on previous winners. Specifically, if we're comparing candidates A and B, then if X was already elected we take all candidates who ranked X above both A and B and evenly distribute a reduction in their vote strength by a total of the droop quota.

I did some testing of this and found (assuming I didn't botch up my implementation) that when using minimax there are circumstances in which the weight of a vote becomes negative. I suspect this stops happening if you use ranked pairs instead.

This technique completely obliterates the rank-a-weak-candidate-higher form of gaming which plagues STV. It also has the nice feature of not requiring all votes to be kept around. All you need to do is collate the vote counts of each two- and three-way race and the winners can be calculated from that.
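
The Condorcet step that picks the first winner can be sketched as follows; ballots are assumed to be complete rankings, and the droop-quota reduction used for later seats is left out:

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other candidate head to
    head, or None if a cycle leaves no Condorcet winner.  Each ballot
    is a full ranking, most preferred candidate first."""
    def beats(a, b):
        a_wins = sum(1 for ballot in ballots
                     if ballot.index(a) < ballot.index(b))
        return a_wins > len(ballots) - a_wins
    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None
```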

Voting Machines

chalst: unfortunately the electronic voting systems are far less accurate than the paper-based ones they're replacing. Not only do they not have a paper trail, making any sort of double-checking impossible, but they run on un-audited proprietary software. The software has been known to crash in deployment. The solution? Reboot it.

There is one thing which could be done to increase the accuracy of vote counting and the difficulty of vote fraud. Instead of creating a single paper ballot, create two, which go into two different bins, and are sent to two different vote counting agencies, which are run by two different political organisations.

Sadly, the current trend is in the exact opposite direction, with paper trails getting eliminated entirely. They're replaced by unpublished electronic protocols which we're simply supposed to trust. There is a very real possibility that in the near future every major election in the United States will be rigged.

Trust Metrics

I've been thinking about how to apply ciphergoth's new trust metric ideas, discussed in my last few entries, with negative certs for spam resistance. The two match up very well. When a bucket is full, in addition to dribbling out more positive water, it also dribbles out some anti-water to all of its negative certs. Anti-water can filter backwards until it's negated an entire bucket then flow backwards from there. This approach needs more polish, but I like its game-resistance properties.

I figured out a way of doing simple peer ordering which is in some ways better. For each new inclusion, figure out which peer has the greatest total weight pointing at it, then include that peer and move the weight of all the peers which certify it onto that one, setting the weight to 1/numcerts * peerweight for each certifier. Unfortunately this removes the disincentive for certifying peers by replacing it with a strong incentive for certifying peers, since the more peers you certify the less your strength gets diluted.

More thought is required. I've figured out one nice straightforward criterion though. If the seed certifies two peers, and the first one certs x peers and the second one x+y peers, then they should get to alternately appoint includes until they've both selected x, then the y extras should get added at the end.

Math Puzzles

It's impossible to position three solid tori in 3-space in such a way that they're Borromean linked without deforming them. Is it possible to position a larger number of solid tori such that the entire set of them is linked but no pair is?

How can you find the midpoint of two points using a compass only, no straightedge?

6 Aug 2003 (updated 8 Aug 2003 at 23:00 UTC)
Ciphergoth's trust metric

Ciphergoth made a nifty web toy based on his new trust metric. It's gotten so popular in just two days that it can't handle the load. From this and Friendster, it seems that it's extremely easy to get lots of users if you let them study their friends' neighborhood, but it's hard to handle all the load.

Thinking about the first few people added to the metric, it's clear that ciphergoth's approach has some definite advantages over the one I gave in my last entry. Specifically, once you're done with the immediate friends list, the person who is a friend of a friend in the most ways should be selected next. My approach most definitely does not do that.

When selecting the first friend of a friend, what we have is essentially a voting problem. Ciphergoth's approach tries to find the best next person by assuming honesty, while my approach tries to reduce the amount of dishonesty by selecting the next person in as non-gameable a way as possible. Of these two tradeoffs, ciphergoth's seems to be the more appealing. This analysis indicates that maybe a condorcet style approach to picking the next person will achieve the best of both approaches. I'll have to think about that.

Minefield

Minefield is a game involving infinite numbers for two players.

Both players write down an infinite number (assuming the axiomatic basis ZF). They then both reveal what number they wrote down.

Both players now have a set amount of time to come up with a challenge to the other person's number. This can be done either by showing that the statement of the existence of the other person's infinite causes a contradiction, or by showing that your infinite is at least as large as the other person's.

If exactly one player shows that the other person's number causes an inconsistency, then that person wins. Otherwise, if exactly one person shows that their infinite is at least as big as the other person's, then that person wins. Otherwise the game is a do-over. In the case where someone should win by being larger, the opponent has an opportunity once they see the proof to challenge the consistency of the other player's number. This is to prevent the cheap trick of saying 'my number is inconsistent, therefore all statements are true, so it's at least as large as yours.'

This game can be played, although it's a bit odd. Mostly I've thought of it as a way of getting a visceral sense of how very large infinites work.

Triumvirate

The 'degenerate' case of my game second best is actually the most interesting. Three players each write down either a 0 or 1, then all reveal what they wrote down. If they all wrote down the same number, then they all get one point. Otherwise, the player who wrote down the number which is in the minority gets three points. I'm calling this game triumvirate.
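
Since the rules fit in a few lines, here's a sketch of one round's scoring (the function name is mine):

```python
def triumvirate(a, b, c):
    """Score one round: each argument is the 0 or 1 a player wrote
    down.  Unanimity gives everyone a point; otherwise the lone
    minority player gets three."""
    choices = (a, b, c)
    if a == b == c:
        return (1, 1, 1)
    return tuple(3 if choices.count(x) == 1 else 0 for x in choices)
```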

In triumvirate, two players can gang up on the third one by putting down different numbers, but then the third player can pick which one of them will win, and thus easily bully around either of them. I believe that triumvirate gets to the heart of what's difficult about multi-player, zero-sum games in much the same way that prisoner's dilemma gets to the heart of what's difficult about two-player, non-zero-sum games.
