Older blog entries for Bram (starting at number 53)

The CodeCon 2003 program is now announced, and registration is open.

CodeCon 2003 will be February 22-24, noon-6pm at Club NV in San Francisco, California.

All presentations will be given by one of the active developers and accompanied by a functional demo.

CodeCon presentations will include:

  • Advogato - Good metadata, even when under attack, based on a trust metric

  • Alluvium - p2p media streaming for low-bandwidth broadcasters

  • Bayonne - Telephony application services for freely licensed operating systems

  • Cryptopy - pure Python crypto

  • DeepGreen - Agent Oriented investment analysis designed to be self-funding

  • GNU radio - Hacking the RF Spectrum with Free Software and Hardware

  • HOTorNOT - People submit their picture for others to rate from 1 to 10

  • Hydan - Steganographically conceal a message into an executable application

  • Khashmir - A distributed hash table library upon which applications can be built

  • Mixminion - A next-generation anonymous remailer

  • Neurogrid - Decentralized Fuzzy Meta-Data Search

  • OpenRatings - An open source professor ratings engine

  • Paketto Keiretsu - Interesting and Useful Techniques for TCP/IP Networking

  • YouServ - A communal web-hosting system for the masses

  • A panel on version control

Lance Fortnow links to some interesting commentary on P vs. NP.

I found out today the standard terminology for my favorite conjecture -

Conjecture: The circuit complexity of the k-sum problem is at least n ** (k/2).

This directly implies that the 4SUM problem is quadratic. I believe that the 3SUM problem is also quadratic (as does everybody), but that the reasons for that are much deeper and more complex.
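
For reference, here's the textbook quadratic upper bound for 3SUM - sort once, then sweep two pointers for each fixed element (a standard sketch, not anything new here); believing 3SUM is quadratic means believing nothing asymptotically better than this exists.

    def three_sum_exists(nums, target=0):
        # Return True if some three elements (distinct indices) sum to target.
        # Runs in O(n^2) after an O(n log n) sort - the standard upper bound.
        nums = sorted(nums)
        n = len(nums)
        for i in range(n - 2):
            lo, hi = i + 1, n - 1
            while lo < hi:
                s = nums[i] + nums[lo] + nums[hi]
                if s == target:
                    return True
                elif s < target:
                    lo += 1
                else:
                    hi -= 1
        return False

    print(three_sum_exists([7, -3, 1, 2, -8, 5]))   # True: 7 + 1 - 8 == 0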

The conjecture also implies P != NP, so we can rest assured that no one has proven it yet.

Given the obviousness of this conjecture, I'm sure circuit complexity people have already spent considerable time on it and just haven't gotten anywhere, although curiously, googling for '3sum' and 'circuit complexity' doesn't turn up anyone else musing on the subject.

Here is a good list of open problems to spend time on.

I haven't posted much lately because I've been spending my blogging time on my new secret project, which will be unveiled when I get a proof of concept working.

XML people don't seem to realize that when I say I'm on the de facto standards committee I'm not kidding.

ladypine: Neither of the examples you cite shows a failure of pareto efficiency; in fact, quite the opposite. I'll explain.

In the case of an auction, any price below the second price wouldn't be pareto efficient, because the seller could sell the item to someone else at a higher price and both the seller and the new buyer would be happier. Pareto efficiency isn't unique in this case, because the high bidder purchasing at any price between the first and second bids is pareto efficient.

(Yes it's possible to make more money selling on ebay by setting your minimum bid increment to a humongous value. At some point I'll get around to writing about ebay at length.)

The reason to go with the second price instead of some amount more (which is how ebay does things, irritatingly enough) is to make it less gameable, but some gameability remains. Specifically, the seller could inflate the minimum price to anywhere between the first and second price and be better off. This is impractical under many circumstances, but is very important with only two (or one!) bidders. Note that gameability is only a problem when selecting between different pareto efficient solutions, so it's not a weakness of the technique here.
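
To make the second-price rule concrete, here's a minimal sketch (names and numbers are made up for illustration) of how the winner and clearing price are determined:

    def second_price_outcome(bids):
        # bids: dict mapping bidder -> bid amount.
        # The highest bidder wins but pays only the second-highest bid.
        ranked = sorted(bids, key=bids.get, reverse=True)
        winner, runner_up = ranked[0], ranked[1]
        return winner, bids[runner_up]

    print(second_price_outcome({'alice': 120, 'bob': 95, 'carol': 80}))
    # -> ('alice', 95).  Selling to alice below 95 wouldn't be pareto
    # efficient: the seller and bob could instead trade at a price
    # they'd both prefer.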

In the case of stable marriage, there is a straightforward algorithm for finding all stable solutions based on pareto efficiency. For each person A, if A's first place choice is B, then for every C which appears below A on B's list of preferences, scratch B off C's list and C off B's list (this is justified because if B were paired with C then we could change the pairings to have B paired with A, and both A and B would be happier). Repeat until you can't simplify any more. At this point, participants will be in cycles, in which B is first on A's list, C is first on B's list, D is first on C's list, etc, until we get to A being first on somebody's list. Because of the gender difference, these cycles will always be of even length, so for each cycle we have to decide whether the males get their first choice or the females get their first choice. Note that this is a choice between different pareto efficient configurations; the appropriateness of pareto efficiency as a criterion is uncontroversial.
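
A rough sketch of that scratching-off step (an illustration of the procedure just described; the data representation is my own choice, with each preference list given most-preferred first and lists mutually consistent):

    def trim_preferences(prefs):
        # prefs: dict mapping each person to their preference list.
        # Trims in place until no simplification is left; what remains
        # are the cycles described above.
        changed = True
        while changed:
            changed = False
            for a, a_list in prefs.items():
                if not a_list:
                    continue
                b = a_list[0]                      # A's first-place choice
                b_list = prefs[b]
                cutoff = b_list.index(a) + 1
                worse = b_list[cutoff:]            # everyone B likes less than A
                for c in worse:
                    prefs[c].remove(b)             # scratch B off C's list
                del b_list[cutoff:]                # and C off B's list
                if worse:
                    changed = True
        return prefs

After trimming, reading off everyone's first choices traces out the even-length cycles the paragraph above describes, and each cycle leaves only the male-choice/female-choice decision.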

In practice, even on random data, this technique does such a good job of pairing people up that there's hardly any arbitrariness in the final pairings. In the medical residency example, a study found that only a minuscule number of students would be assigned differently under the students'-choice version, so the algorithm was switched to that for good PR.

A bit off-topic, I'd like to point out that it's utterly stupid that 'the match' was so controversial for so long. I at one point solved stable marriage on my own because it's an interesting problem, came up with the above algorithm, and after a little bit of testing realized how little difference male versus female choice makes. That was only a few days' worth of work. Let this be a lesson to everyone: if there's a big controversy which a simple study can shed a lot of light on, do the damn study.

The stable roommates problem doesn't always have a pareto efficient solution, for example there might be a loser who's put last on everyone else's list of priorities, but the above technique can be used to make an algorithm which works very well. Note that stable marriage is just a special case of stable roommates where all the males rank all females above all males and all females rank all males above all females.

8 Dec 2002 (updated 8 Dec 2002 at 06:23 UTC) »
ladypine: Viewing cost and benefit between individuals as somehow comparable is dubious. For one thing, it's completely gameable - individuals who exaggerate costs and benefits to themselves, or are just plain histrionic, are rewarded unduly. For another, social policies based on such theories tend to be downright racist, since they slant heavily towards whatever the dominant gender/race/whatever most cares about.

A much simpler and more justifiable approach is to try to reach pareto efficiency, which is a situation in which no two individuals can make an exchange and both be happier. This cleverly avoids making subjective comparisons of different people's worth, and also yields a straightforward algorithm for maximization. The success of capitalism can be viewed as a testament to the robustness of greedy algorithms.
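
As a toy illustration of that definition (assumptions mine: each person holds one item and has numeric valuations), checking pareto efficiency with respect to pairwise swaps is just a search for a trade that helps both sides:

    from itertools import combinations

    def beneficial_swap(holdings, values):
        # holdings: person -> item currently held
        # values:   person -> {item: how much they value it}
        # Returns a pair who would both be happier swapping, or None if
        # the allocation is pareto efficient w.r.t. pairwise swaps.
        for a, b in combinations(holdings, 2):
            if (values[a][holdings[b]] > values[a][holdings[a]] and
                    values[b][holdings[a]] > values[b][holdings[b]]):
                return a, b
        return None

    holdings = {'x': 'tea', 'y': 'coffee'}
    values = {'x': {'tea': 1, 'coffee': 5}, 'y': {'tea': 4, 'coffee': 2}}
    print(beneficial_swap(holdings, values))   # ('x', 'y') - both prefer to trade

The greedy maximization is simply: keep making beneficial swaps until none remain.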

robocoder: I suggest setting up CVS, a mailing list, syncmail, and a todo list. Picking a bug tracking tool to begin with is like starting the construction of a bridge by digging a mass grave for everyone who will die in the building process.

There's a very difficult puzzle in the latest Scientific American; it goes as follows:

Three of the nine members of the president's cabinet are leaks. If the president gives a tip to some subset of his advisors and all three interlopers are in that subset, then that tip will get leaked to the press. The president has decided to determine who the leaks are by selectively giving tips to advisors and seeing which ones leak. How can this be done by giving each tip to three or four people, having no more than two leaked tips, and using no more than 25 tips total?

There's a solution on the web site, although the one I figured out is very different.

Here's my solution. First, note that if a tip to four people leaks, the leaks can be found using only three more tips, giving each of them to a different three of the four.

Arrange eight of the nine advisors on the vertices of a 2x2x2 cube. Test each of the 12 coplanar subsets of four (the 6 faces and the 6 diagonal planes), plus the 2 subsets which are all the same color if the vertices are colored like a checkerboard. If any of those leak, we're done (by the observation above, three more tips finish the job). If not, those 14 tips have ruled out every trio among the eight, so the odd one out must be one of the leaks.
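
A quick brute-force sanity check (mine, not part of the puzzle or its printed solution) that those 14 tips really cover every possible trio among the eight cube-corner advisors:

    from itertools import combinations

    # Label the eight advisors by cube coordinates (x, y, z), each 0 or 1.
    corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

    tips = []
    for axis in range(3):                       # 6 faces
        for value in (0, 1):
            tips.append({c for c in corners if c[axis] == value})
    for a, b in combinations(range(3), 2):      # 6 diagonal planes
        tips.append({c for c in corners if c[a] == c[b]})
        tips.append({c for c in corners if c[a] != c[b]})
    for parity in (0, 1):                       # 2 checkerboard tetrahedra
        tips.append({c for c in corners if sum(c) % 2 == parity})

    assert len(tips) == 14 and all(len(t) == 4 for t in tips)
    # Every one of the C(8,3) = 56 possible trios lies inside some tip, so
    # if none of the 14 tips leak, no trio of the eight can be the leaks.
    assert all(any(set(trio) <= tip for tip in tips)
               for trio in combinations(corners, 3))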

Next, arrange the eight advisors who were on the cube in a line numbered 0 through 7, and test all 8 tips of the form: the odd one out, x, x+1, and x+3 (indices modulo 8).

If none of those hit, then the two other leaks must be of the form x and x+4 - the tips above cover every pair at circular distance 1, 2, or 3, leaving only the four distance-4 pairs. Three of them can be tried directly, and if none leak then the last one can be inferred.
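
And a similar check (again my own) that the eight line-phase tips catch every remaining pair except the four "opposite" pairs x, x+4:

    from itertools import combinations

    # Each phase-two tip is {odd one out, x, x+1, x+3}; since the odd one out
    # is already known to be a leak, only the {x, x+1, x+3} part matters.
    tests = [{x % 8, (x + 1) % 8, (x + 3) % 8} for x in range(8)]

    for pair in combinations(range(8), 2):
        covered = any(set(pair) <= t for t in tests)
        opposite = (pair[1] - pair[0]) % 8 == 4
        assert covered != opposite   # missed exactly when the pair is x, x+4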

This leads to a total of 14 + 8 + 3 = 25 tips. It's a very hard problem; it took me about 45 minutes to figure out a solution.

An additional question given is whether increasing the number of advisors included in a tip can reduce the number of trials necessary. The answer is yes. For example, to modify the technique I gave, the first tip could go to six of the corners of the cube, leaving out two adjacent ones. This effectively does three tests at once, so the total number of tests needed drops to 23. If the first test turns up positive, all but one of the 20 subsets of 3 of the 6 can be tried individually, and if none of them leak then the last one can be inferred, for a total of one initial tip to six people plus 19 others, or 20 total, so leaking on the first tip isn't the limiting case.

The CodeCon submissions deadline is December 15th. Get those submissions in.

The time and place have been set. It will be February 22-24 at Club NV in San Francisco.

I just finished my first game of go on a 19x19 board. I played against a 15 kyu player, and lost by about 100 points (I should have lost by about 80, but blundered stupidly near the end). I literally feel ready to throw up. I'm very kinaesthetic, and apparently my body confuses playing a position I don't understand at all with being dizzy. Hopefully this will stop happening in the future.

19x19 go is huge. Probably equivalent to about 30x30 hex.

Kiseido is a nice go server. A lot of interesting stuff has been done with CGoban, and Java Web Start is actually working okay now. Web Start's proclivity to make applications look like they're subordinate to it as an architecture would make me not even consider using it though - that's so obnoxious as to be offensive.

Compare Joel's current time giving site design with the real thing.

Sylvester's problem is cool.

Surprisingly, the rules of go are somewhat controversial.

This game is rather interesting. I figured out at least one position in which O has a simple forced win. A more interesting question is whether there are any positions in which X has a forced win. I doubt it.

I tried out Bohemia with my original rules on a 4x4 board. It's a close game, but the first player can always win. I didn't try any possibilities where the bohemian moves first and doesn't pass. Here is an interesting question: if we take a generalization of this game, in which the game is defined by some collection of subsets and the square wins if any of them becomes monochromatic, and the bohemian can pass, and the bohemian can go first, is there any such game in which the bohemian can always win, but loses by passing on the first move? Note that my new rules aren't of this form.

