Older blog entries for Bram (starting at number 75)

Someone told me that Y suffers from the corners hardly ever getting used, making the effective board size very small. A similar game which fixes this problem is to play on a hexagonal board, where the first player to complete a group which connects either two opposite sides or three pairwise non-adjacent sides (i.e. sides (1, 3, 5) or (2, 4, 6)) is the winner. This game also has no draws, and makes the whole board important while maintaining the subtleties of Y. I wonder if anyone has thought of this game before.

The sourceforge stats for PHP-Nuke for this week look extremely fishy.

Board Games

My new favorite place to play games online is Little Golem.

I met up with John Tromp and we tried several different board games. The big winner among the ones we played was Y. Y feels conceptually simpler even than hex. The board is a triangle, rather than a rhombus, and there's no board orientation to keep track of. The strategy of Y also feels considerably more subtle, with no single central point of conflict.

I've been trying to design a board game which has a gradual build-up of positional strength until the very end when everything comes to a head. Unfortunately games where the goal is to create a pattern tend to force tactics to play out early, since one side or the other will benefit from the tactics finishing and hence force the play early. Misere games, in which the goal is to avoid making a pattern, avoid this problem, but tend to turn into two separate solitaire games which don't interact until the very end.

I've come up with a game which seems to fix that solitaire problem fairly well. Here are the rules:

The game is played on a grid. White and Black alternate placing stones. The first player to cause all four corners of an orthogonal (axis-aligned) square to be filled with stones, with two diagonally opposite stones of one color and the other two of the other color, loses.
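A referee for these rules is easy to state precisely. This is a minimal sketch (the function name and grid representation are my own choices, not part of the rules):

```python
# grid[r][c] is 'W', 'B', or None. A player loses on completing an
# axis-aligned square whose diagonal corner pairs are opposite colors.

def has_losing_square(grid):
    n = len(grid)
    for r in range(n):
        for c in range(n):
            for s in range(1, n - max(r, c)):
                corners = (grid[r][c], grid[r][c + s],
                           grid[r + s][c], grid[r + s][c + s])
                if None in corners:
                    continue
                # two diagonally opposite stones of one color,
                # the other two of the other color
                if (corners[0] == corners[3] and corners[1] == corners[2]
                        and corners[0] != corners[1]):
                    return True
    return False
```

A move generator would just try each empty point and reject those for which this returns True.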

The obvious drawing strategy of always moving on the point rotated 180 degrees from the last move can be prevented by playing on an odd-sized board.

Whether this game is drawish, needs a swap rule, and is generally interesting to play needs to be determined by experimentation.

Pentagon Tilings

Pentagon tilings are cool.

Election Methods

The 'proportional representation' system I referred to in my previous entry was specifically the one where each voter votes for a political party, and the parties get to appoint candidates based on their percentage of the vote.

My objection to this very specific system is that it greatly increases the strength of political parties over individual politicians. It does have the advantage of making 'third parties' stronger, but that only looks good if you're using the horrendous two-party system as the basis for comparison. It can in fact have an ill effect of making the smallest of three major parties disproportionately powerful, by making them be the swing vote. Ideally political parties should be loose associations of politicians who share organising resources. Any system which results in an oligopoly of political parties running government is inherently flawed.

'Proportional representation' generally refers to systems in which multiple candidates are elected and a block of voters is capable of voting in a number of candidates proportional to their size. There are also proportional representation systems which use ranked ballots. In fact, I've spent a considerable amount of time working on proportional representation winner selection algorithms which are considerably less gameable than the standard instant runoff. At some point I'll do another round of experimentation and post my results.

Puzzles

While attempting to solve a very difficult solid metal parts puzzle, I came up with a very interesting question. The sort of puzzle I'm referring to is one in which there are several solid pieces which are entangled, and the puzzle is to extricate them from each other without bending them. My question is: can you come up with a schema for solid parts puzzles which contains a fixed number of pieces and whose computational complexity of finding a solution is superlinear in the amount of data necessary to describe the pieces? All parts must come apart, not just one.

I've come up with an interesting solution. Here's a hint: use 3sum. (I'm assuming 3sum to be quadratic.)

Most Inefficient Sort

I got wind of an interesting informal competition to find the most computationally inefficient, but not completely contrived, sort algorithm. My solution is as follows: Assume all the values to be sorted are 32-bit integers. Have a candidate solution which starts off with all values set to zero. Repeatedly compare the candidate solution to see if it's a permutation of the input, and if not increase the last number by one, with overflow carried to earlier numbers. How do you check if two lists are permutations of each other? By trying all possible permutations and comparing them individually, of course.
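The algorithm above can be written down directly. This sketch shrinks the word size to a parameter so it actually terminates in a test; with real 32-bit values it is astronomically slow, which is of course the point:

```python
from itertools import permutations

def is_permutation(a, b):
    # deliberately terrible: try every reordering of a and compare
    return any(list(p) == b for p in permutations(a))

def worst_sort(values, bits=2):
    # counting candidates upward in lexicographic order means the first
    # permutation of the input we hit is the sorted one
    base = 1 << bits
    candidate = [0] * len(values)
    while not is_permutation(candidate, values):
        # increment the candidate as a base-2^bits counter,
        # carrying overflow to earlier positions
        i = len(candidate) - 1
        candidate[i] += 1
        while candidate[i] == base:
            candidate[i] = 0
            i -= 1
            candidate[i] += 1
    return candidate
```

Note the pleasant accident that no sortedness check is needed: the smallest tuple in counting order that is a permutation of the input is the ascending order.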

Election Methods

apenwarr: Getting better election methods actually adopted is indeed the problem. The United States has had unranked ballots for so long that Duverger's Law has kicked in and we now have two completely entrenched political parties which have a vested interest in keeping things as they are.

Perhaps a clue can be gleaned from the defeat of proportional representation. Proportional representation as a method of electing congresspeople has been banned, which seems rather extreme given the usual bent towards states' rights. The reason this happened was actually a good one. Elected officials like having a considerable amount of personal power, and simple proportional representation would lead to them being completely subservient to the party they represent, easily replaced by party bureaucrats.

One of the nice things about the way elections work in the US is that ballots always list individuals, not parties. In the above case, the interests of those individuals helped improve the election methods used (or at least keep them from getting worse). Perhaps incumbents could be convinced to back ranked ballots.

Do ranked ballots favor incumbents? I'm honestly not sure. There are two forces at work. One is that ranked ballots greatly reduce the influence of political parties, so an incumbent doesn't need to worry so much about party backing going away. The other is that the will of the voters is more accurately reflected, possibly removing incumbents whose support relied on electoral distortions.

I suspect that incumbents could simply shift their positions to reflect the opinions of voters more than political parties, and wind up in a better position than they were before. However, real studies of the likelihood of an incumbent getting reelected when the voting system becomes ranked versus when it remains unranked are required.

Election Methods

robla links to Debian election analysis. The full results show that Condorcet yields a well-defined full ordering of all the candidates, making what should happen in this election completely non-controversial.

The fundamental brokenness of unranked ballots gets astoundingly little media attention. Even after it caused an apparently artifactual result in France's presidential election, which was later shown to be clearly artifactual by the runoff results, most media coverage portrayed it as a political struggle.

Schedule

I'm going to be at the emerging technologies conference on the 24th and 25th, and at the emerging man party on the night of the 24th. After the 26th I'm going to be in Amsterdam for a while, then New York City starting on the 15th of May. Anyone who would like to meet up with me, mail me.

Certs against Spam

I previously suggested an anti-censorship multiplier of around 1/5. This is way too high - a multiplier of 1/100 or even 1/1000 would be able to make spam not worth doing, since spammers rely on being able to reach millions of users. Not eliminating peers which were eliminated by less than 100 or 1000 anti-certs could also help reduce false positives.

The extreme looseness of these values indicates that stopping spam with certs, once the necessary infrastructure is in place and the right approach is taken, is actually very easy.

Tickets against Spam

Penny Black has a ticketing system. There are a few obvious improvements to this system, though, which I'll explain now.

Rather than sending back simple tickets, the server should send back pairs of one time passwords and their identifiers. When sending mail, rather than including the one time password, the mailer includes the identifier and a MAC of the hash of the message based on the one time password. When the recipient gets the mail, they query the token server with the id, hash, and MAC, and the token server responds with whether it's valid or not.
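A minimal sketch of that exchange follows. The class and function names are my own invention, and a real server would obviously need persistence, expiry, and transport security:

```python
import hashlib
import hmac
import os

class TicketServer:
    def __init__(self):
        self.tickets = {}  # identifier -> one-time password
        self.used = {}     # identifier -> message hash it was spent on

    def issue(self):
        # hand out a (identifier, one-time password) pair
        tid, otp = os.urandom(8).hex(), os.urandom(16)
        self.tickets[tid] = otp
        return tid, otp

    def validate(self, tid, msg_hash, mac):
        # idempotent: re-asking about the same (id, hash) still says valid,
        # so a dropped connection can't lose mail
        if tid in self.used:
            return self.used[tid] == msg_hash
        otp = self.tickets.get(tid)
        if otp is None:
            return False
        expected = hmac.new(otp, msg_hash, hashlib.sha256).digest()
        if hmac.compare_digest(expected, mac):
            self.used[tid] = msg_hash
            del self.tickets[tid]
            return True
        return False

def stamp(tid, otp, message: bytes):
    # sender side: include the identifier and a MAC of the message hash,
    # keyed by the one-time password (which itself never goes on the wire)
    h = hashlib.sha256(message).digest()
    return tid, h, hmac.new(otp, h, hashlib.sha256).digest()
```

The key property is that intercepting a stamped message reveals only the identifier and MAC, never the one-time password itself, so the ticket can't be reused on a different message.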

With this approach, there is no need to worry about a spammer intercepting a whole bunch of other people's mail and using their tickets. There are also no POP-style problems with dropped connections losing mail, since the ticket server can remember which hash was used with which ticket and respond idempotently that it's still valid later on.

The proper encoding of tickets is in a header line, and the hash should be of the Subject, From, and To lines, and the message body. This way it's possible for more than one kind of ticket to be put in the headers, if more become widespread in the future.

I think this technique could go a long way to stopping spam, and make Microsoft quite a bit of money in the process. They're one of the few companies positioned to be able to make such a system ubiquitous. I think $100 for 10,000 tickets is the wrong price point though. I'd be willing to pay $10 for 1000 tickets, if that really made sure my mail would get through any spam filter.

RSS bandwidth saving

garym asks about saving bandwidth used by RSS. This can be done much more simply by adding an optional GET parameter to RSS feeds which specifies the latest entry the downloader already has. This parameter is strictly advisory, but could save a lot of bandwidth.

Another useful parameter would be an advisory first entry number to return. That would make RSS feeds useful as archives.
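The server side of both parameters amounts to slicing the entry list. Here's a sketch with placeholder names (`have` and `first` are my own, since the entry deliberately avoids proposing names):

```python
def select_entries(entries, have=None, first=None):
    """entries: newest-first list of (entry_id, body) pairs.
    have:  advisory id of the newest entry the client already has.
    first: advisory id at which to start, for archive-style fetches."""
    out = []
    for eid, body in entries:
        if first is not None and eid > first:
            continue  # newer than the requested archive window
        if have is not None and eid <= have:
            break  # client already has everything from here on
        out.append((eid, body))
    return out
```

Since both parameters are advisory, a server that ignores them and returns the whole feed is still correct, just wasteful.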

I'm not going to suggest names for these parameters, for fear of adding yet another to a multiplicity of standard names for them which I don't know about yet.

The one subtlety has to do with when an existing entry is changed. To make those get overwritten, the client should send an If-Modified-Since header like it does now, and the server sends entries older than the most recent one the client has if any of them have been modified.

Trust Metrics

A trust metric evaluation function I gave before has the problem that it's very hard to compute. It can be approximated well by setting the weight of each node to the probability that there's a path from that node to an attester, then doing a simple random walk.

It turns out the latter problem is essentially the same as what's called the all-terminal network reliability problem, for which there's an interesting neural network approach. I haven't read over this paper carefully enough to say whether the technique is practical for a site the size of Advogato, but from a quick skimming it appears so.

Neural networks also add buzzword compliance.
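For what it's worth, the path-probability weights can also be estimated by plain Monte Carlo. This sketch assumes each certification edge independently survives with probability p; that failure model, and all the names, are my own assumptions rather than anything from the reliability paper:

```python
import random

def path_weights(edges, attesters, p=0.9, trials=2000, rng=None):
    # edges: directed (certifier, certified) pairs. A node's weight is the
    # estimated probability that some surviving path from it reaches an
    # attester when each edge independently survives with probability p.
    rng = rng or random.Random(0)
    nodes = {n for e in edges for n in e} | set(attesters)
    hits = {n: 0 for n in nodes}
    for _ in range(trials):
        # sample which edges survive this trial, stored reversed
        radj = {}
        for a, b in edges:
            if rng.random() < p:
                radj.setdefault(b, []).append(a)
        # walk the surviving edges backwards from the attesters;
        # everything reached can reach an attester forwards
        seen, stack = set(attesters), list(attesters)
        while stack:
            n = stack.pop()
            for m in radj.get(n, ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        for n in seen:
            hits[n] += 1
    return {n: hits[n] / trials for n in nodes}
```

Each trial is linear in the number of edges, so this trades accuracy for time in a very predictable way.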

Bram's Law

I realized this the other day, and dub it Bram's Law:

The easier a piece of software is to write, the worse it's implemented in practice.

Why? Easy software projects can be done by almost any random person, so they are. It's possible to try to nudge your way into being the standard for an easy thing based on technical merit, but that's rather like trying to become a Hollywood star based on talent and hard work. You're much better off trading it all in for a good dose of luck.

This is why HTTP is a mess while transaction engines are rock solid. Almost any programmer can do a mediocre but workable job of extending HTTP, (and boy, have they,) but most people can't write a transaction engine which even functions. The result is that very few transaction engines are written, almost all of them by very good programmers, and the few which aren't up to par tend to be really bad and hardly get used. HTTP, on the other hand, has all kinds of random people hacking on it, as a result of which Python has a 'fully http 1.1 compliant' http library which raises assertion failures during normal operation.

Remember this next time you're cursing some ubiquitous but awful third party library and thinking of writing a replacement. With enough coal, even a large diamond is unlikely to be the first thing picked up. Save your efforts for more difficult problems where you can make a difference. The simple problems will continue to be dealt with incompetently. It sucks, but we'll waste a lot less time if we learn to accept this fact.

Certs against Spam

The last technique I gave for back propagation has a serious flaw, but I think the criteria and general approach are good. Here's a much better algorithm:

  1. Compute a weight for each node. This is done by answering the question 'If I walk randomly along certifications, with probability x of stopping at each hop, what is the probability that I will stop at this node?' The stopping probability is parameterizable.

  2. Compute a weight of spammerness for each node. To do this, for each node subdivide its weight among everyone it claims sent spam, and add up the claims of spammerness for each node. Then multiply by a fraction, say around 1/5. This fraction is parameterizable, and reflects a tradeoff between judiciousness of stopping spam versus difficulty of fraudulent censorship.

  3. Use spam values to neutralize weights. If a node has weight x and spam value x + y, then the node's weight becomes 0 and the remaining value y is used to neutralize the nodes which certified that one, possibly completely neutralizing them and requiring further back-propagation.

Mail is accepted from any node to which there's a path along certification edges over nodes which have weight greater than zero.
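The three steps might be sketched as follows, under my own modeling assumptions: the random walk starts at the evaluating node, and is approximated by a fixed number of rounds of power iteration (all names are mine):

```python
def node_weights(certs, seed, stop=0.15, rounds=50):
    # Step 1. certs: dict node -> list of nodes it certifies.
    # Walk mass starts at the seed; at each hop a `stop` fraction is
    # banked as weight and the rest splits over outgoing certifications.
    # Mass at nodes with no certifications simply drains away here.
    mass, weight = {seed: 1.0}, {}
    for _ in range(rounds):
        nxt = {}
        for node, m in mass.items():
            weight[node] = weight.get(node, 0.0) + stop * m
            outs = certs.get(node, [])
            if outs:
                share = (1 - stop) * m / len(outs)
                for o in outs:
                    nxt[o] = nxt.get(o, 0.0) + share
        mass = nxt
    return weight

def apply_spam_claims(weight, certs, claims, fraction=0.2):
    # Steps 2 and 3. claims: dict node -> list of nodes it says sent spam.
    # Each node splits fraction * its weight among its accusations, and
    # spam value neutralizes weight, propagating backwards along
    # certifications whenever a node is driven to zero.
    reverse = {}
    for a, outs in certs.items():
        for b in outs:
            reverse.setdefault(b, []).append(a)
    pending = []
    for node, accused in claims.items():
        if accused:
            share = weight.get(node, 0.0) * fraction / len(accused)
            pending.extend((b, share) for b in accused)
    while pending:
        node, s = pending.pop()
        w = weight.get(node, 0.0)
        if s <= w:
            weight[node] = w - s
            continue
        weight[node] = 0.0
        backers = [b for b in reverse.get(node, []) if weight.get(b, 0) > 0]
        if backers:
            for b in backers:
                pending.append((b, (s - w) / len(backers)))
    return weight
```

This glosses over the same details the description does (tie-breaking among backers, repeated claims), but each pass is linear in the number of certification edges, which is what makes the global-scale claim plausible.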

I've glossed over some details, but that covers all the important points. I'm very enamored of this approach. It's extremely intuitively compelling, and all the steps in it can be done, or at least very well approximated, in essentially linear time. It looks like we have an algorithm which really can expand to a global scale now.

Twisty Puzzles

Some posts by me to the twisty forum.

HTML

Web pages which make their hyperlinks non-underlined drive me absolutely batshit. I wish there was a browser option to always underline even when the page says not to.

Today, the W3C announced a new initiative to emphasize ease of implementation in specification design. 'Encouraging adoption is the most important part of making protocols useful for the future internet' said Tim Berners-Lee.

In other good news, the IETF announced a new policy in RFCs against using MUST to describe behavior which is widely violated in practice, especially when that violation won't change for the foreseeable future.
