Older blog entries for Bram (starting at number 124)

8 Nov 2004 (updated 8 Nov 2004 at 23:15 UTC)
Car Emissions

There are two kinds of car pollution which people generally mix up. Some exhaust gases are simply stinky and noxious, most notably particulate carbon and carbon monoxide. Those do direct damage to the humans near them and to crops which grow nearby, and are clearly bad. There also isn't much direct economic disincentive for any one person to make their car produce less of them.

The other troublesome kind of exhaust is greenhouse gases, mostly carbon dioxide. The amount of damage caused by these is much less clear, and there's a straightforward economic disincentive to produce them, because they correspond pretty much directly to the amount of gas your car consumes. Carbon dioxide also happens to be produced in mass quantities by respiration.

If you really want to know how clean a car is, look it up on the EPA web site. There are some surprises, for example the honda civic hybrid with a manual transmission has mediocre pollution ratings.

Erasure Codes

People keep asking me about using erasure/rateless/error correcting codes in BitTorrent. It isn't done because, quite simply, it wouldn't help.

One possible benefit of erasure codes is that when sending data to a peer there are so many potential pieces that you can send any random one you have and it won't be a duplicate. The problem is that the peer may already have gotten that same piece from another peer, so that benefit is destroyed, and on top of that the overhead of communicating and remembering which peer has what is increased tremendously.

Possible benefit number two is that erasure codes increase the chances that your peers won't already have the pieces which you've downloaded. But simply downloading pieces which fewer of your peers have first handles that problem quite nicely, so a vastly more complicated solution is unwarranted.

Possible benefit number three is that if there's no seed left erasure codes increase the chances that the entire file will be recoverable. In practice, when a file becomes unrecoverable it's because there was only one seed and several downloaders started from scratch, then the seed disappeared after uploading less than the total length of the file. Erasure codes obviously would not help out in that case.

There are other possible benefits and corresponding rebuttals, but they get more complicated. The short of it all is that the possible benefits of erasure codes can be had with much more straightforward and already implemented techniques, and the implementation difficulties of such codes are quite onerous.

While I'm pissing on everyone's parade, I should probably mention another scenario in which everyone wants to use erasure codes and it's a bad idea: off-site backup. If you store a full copy on each of three backup sites, and each site has two nines (99%) uptime (if it doesn't you shouldn't be using it for backup) then the overall reliability will be six nines (99.9999%). Engineering for more than six nines is nothing but intellectual masturbation, because unforeseeable problems completely dominate failure at that point. Therefore one-of-three gets great reliability with unreliable backup sites in exchange for having to store three times the amount of data you're backing up.

With erasure codes, you could make it so that each backup site only had to store half as much stuff, but that two of them would still need to be up to recover data. If you then have four backup sites, there's a savings of 1/3 of the storage versus the much more straightforward approach. This is a pretty small reduction given that the price of mass storage is very small and plummeting rapidly. It also comes at great expense: you have to deal with four backup sites instead of three, and the software is much more complicated. In systems like this, the recovery software not working is a significant part of the overall chance of the system failing. Also, any economic benefit of savings on disk space must be weighed against the cost of the software system which runs it. Given the ludicrous prices of backup systems these days, a much simpler albeit slightly less efficient one would probably be a great value.
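
To sanity-check the arithmetic, here's a minimal sketch in Python (the 99% per-site uptime and the independence of site failures are the assumptions from above):

p = 0.99  # per-site uptime, the two-nines assumption
q = 1 - p
# one-of-three replication fails only if all three sites are down:
print 1 - q ** 3                     # 0.999999, i.e. six nines
# two-of-four erasure coding fails if zero or one site is up:
print 1 - (q ** 4 + 4 * p * q ** 3)  # about 0.999996
# storage: replication stores 3x the data; two-of-four stores 4 * 0.5 = 2x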

ECC of course has some great uses, for example data transmission over noisy media and storing data on media which can get physically corrupted, and recent developments in it are very exciting, but it's very important to only use sophisticated tools when clearly warranted.

Electoral College Effects

Here is an interesting question: given the electoral college system, does it favor the large or the small states? The very short answer is that it ridiculously disproportionately favors the small states, because they're flat-out given disproportionately large representation. And if that weren't the case then switching to a simple nationwide majority would be uncontroversial, following the trend of a simple majority deciding within each state.

But if we pretend that this is an issue of real interest, what are the effects? Well, that depends. If we had a country of two states, one larger than the other, then the chances of one's vote mattering in the smaller state would be just about nil. In a country of three states, two of which were just slightly larger than half the size of the largest, the small states would be far disproportionately represented.

In practice there are enough states that statistical effects overwhelm the weirdnesses of specific enumerable outcomes. We can adopt a much more simplistic model of there being many small states, and compare two of them, one roughly double the size of the other. If we assume that the race is a dead heat across the entire country (a completely unrealistic assumption, as I'll get to in a minute), the chances of a voter swinging the half-size state are approximately 1.4 times those of the double-size state (because a standard deviation is proportional to the square root of the number of voters), and the chances of the half-size state swinging the overall election are about half, so the chances of a single vote from the smaller state swinging the election are about a third less.
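
Here's that back-of-the-envelope calculation as a minimal sketch (the million-voter state size is just an illustrative number):

from math import sqrt

n = 1000000.0  # voters in the larger state; the smaller has half as many
# in a dead heat, the chance of one vote swinging a state scales as
# 1 / sqrt(number of voters)
swing_ratio = sqrt(n) / sqrt(n / 2)
print swing_ratio        # about 1.414: the small state is easier to swing
# but the small state carries about half the electoral weight, so the net
# leverage of one vote there is about a third less:
print swing_ratio * 0.5  # about 0.707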

But we don't have a homogeneous voting population, and very few races are dead heats across the entire country. In practice state lines gerrymander quite heavily against New York and California, whose votes in recent elections have been such a foregone conclusion that nobody bothers campaigning in them. With the two coasts being very populous and the economic centers of the country, and getting more so, this effect is likely to become even more pronounced in the future.

And then there's the question of, in a close race, which states do you give out more candy to? The ones which are close races, obviously, and the ones which you can more likely affect the outcomes of. Small states are much easier to win by buying off votes, because a much smaller number of votes can change their outcome. The likelihood of their swinging the overall election is negated here because we don't go into elections blind - campaigns poll to find out what states are up in the air, and ignore the ones which aren't.

Which states are close races varies from election to election, so there's a random crapshoot which decides who gets the most resources each time. The result is inevitable arbitrary disparities, with the only consistent thing being their arbitrariness, and a strong incentive for local officials to make their states be close, or at least appear to be close.

If this all sounds stupid and unpleasant, it's because it is. The only clear effect of a truly voter-weighted electoral college would be that New York and California would still be gerrymandered against. All the other effects are random and generally bad for everybody.

Unfortunately the chances of the electoral college getting fixed via an orderly political process are just about nil. Fixing it would require a constitutional amendment, which would have to be ratified by 3/4 of the states, and most of them are, unsurprisingly, small. The smallest states get several times their proportional say in the electoral college, and many times their proportional say in the senate, with a flat two senators from every state, so any constitutional amendment cleaning up the mess would be dead in the water.

The rules favoring small states, by the way, were set up at the time of the formation of the United States to get the south to join. Back then, the rules were even worse because slaves counted towards representation (at three-fifths each), even though they couldn't vote. It took the civil war, which was caused by the political imbalances favoring the less economically productive parts of the country and the separation falling on neat geographic lines, for the mess to get cleaned up. Kind of like the situation today, except that it hasn't gotten to the point of internal warfare, at least not yet.

On that note, I feel obligated, this being election day, to encourage everyone to vote. Unless of course you'll be voting with a diebold machine, or your registration got mysteriously lost, as mine did. [Update - I showed up to vote and they did manage to find my registration in some obscure place, but my wife, whose registration was sent in at the same time, had to cast a provisional ballot.]

Trikke

I had some interesting ideas for a human-powered vehicle in which the rider stood upright and propelled themselves forward by side-to-side motion, using the same general principle as pumping on a skateboard or streetboard. Then I read about Trikke and realized that it's been invented already, so I could simply buy one. After an hour of practice I can do laps with a Trikke 8 without having to push at all. It's great fun.

On the subject of interesting vehicles, I'd be remiss not to mention the handcycle.

Graph Isomorphism

After much cogitation, I think I've figured out some examples of graph isomorphism problems which are almost tricky.

Take a graph in which for every pair of nodes (X, Y) the entire graph can be mapped onto itself in an isomorphism such that X maps onto Y. There are many examples of these (they're known as vertex-transitive graphs), such as hypercubes. Then 'split' each node to make a new graph, such that for each node X in the old graph, there are two nodes in the new graph, Y and Y', with Y connected to Y'. For each pair of nodes X and Z connected in the original graph and corresponding to Y, Y', W, and W' in the new graph, either connect Y to W and Y' to W', or 'cross' it by connecting Y to W' and Y' to W.

The potentially tricky problem is to determine whether a graph created with one set of crossings is isomorphic to one with a different set of crossings.

One way to differentiate nodes in graphs of this form is to, for each node, make a list of how many other nodes have a minimum distance of one hop away, then two, then three, etc. If there is a loop in the original graph then the numbers will be affected by whether that loop contains an odd or even number of crossings. Proper selection of the graph and crossings can make every short loop have an even number of crossings, thus foiling this approach.
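
Here's the whole construction as a minimal Python sketch, using a hypercube as the base graph as suggested above (the function names are mine, and this is for illustration, not a serious isomorphism tester):

def hypercube(d):
    # edge list of the d-dimensional hypercube on nodes 0 .. 2**d - 1
    edges = []
    for v in xrange(2 ** d):
        for b in xrange(d):
            w = v ^ (1 << b)
            if v < w:
                edges.append((v, w))
    return edges

def split(edges, crossings):
    # each node v becomes (v, 0) and (v, 1), connected to each other; each
    # original edge becomes a parallel pair of edges, or a crossed pair if
    # it appears in the crossings set
    nodes = set()
    for (a, b) in edges:
        nodes.add(a)
        nodes.add(b)
    result = [((v, 0), (v, 1)) for v in nodes]
    for (a, b) in edges:
        if (a, b) in crossings:
            result.append(((a, 0), (b, 1)))
            result.append(((a, 1), (b, 0)))
        else:
            result.append(((a, 0), (b, 0)))
            result.append(((a, 1), (b, 1)))
    return result

def distance_profile(edges, start):
    # how many nodes sit at each hop distance from start, by breadth-first
    # search over an adjacency list
    adj = {}
    for (a, b) in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    dist = {start: 0}
    frontier = [start]
    while frontier:
        nextf = []
        for v in frontier:
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    nextf.append(w)
        frontier = nextf
    counts = {}
    for d in dist.values():
        counts[d] = counts.get(d, 0) + 1
    return [counts[d] for d in sorted(counts) if d > 0]

# compare the per-distance counts for an uncrossed split of the 3-cube
# against a split with a single crossed edge
g = hypercube(3)
print distance_profile(split(g, set()), (0, 0))
print distance_profile(split(g, set([(0, 1)])), (0, 0))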

CodeCon 2005 Call For Papers

The CodeCon 2005 Call for Papers is now up. Please forward to anyone who might be interested.

CodeCon 4.0
February 2005
San Francisco CA, USA
www.codecon.org

Call For Papers

CodeCon is the premier showcase of cutting edge software development. It is an excellent opportunity for programmers to demonstrate their work and keep abreast of what's going on in their community.

All presentations must include working demonstrations, ideally accompanied by source code. Presentations must be done by one of the active developers of the code in question. We emphasize that demonstrations be of *working* code.

We hereby solicit papers and demonstrations.

* Papers and proposals due: December 15, 2004
* Authors notified: January 1, 2005

Possible topics include, but are by no means restricted to:

* community-based web sites - forums, weblogs, personals
* development tools - languages, debuggers, version control
* file sharing systems - swarming distribution, distributed search
* security products - mail encryption, intrusion detection, firewalls

Presentations will be 45 minutes long, with 15 minutes allocated for Q&A. Overruns will be truncated.

Submission details:

Submissions are being accepted immediately. Acceptance dates are November 15 and December 15. After the first acceptance date, submissions will be either accepted, rejected, or deferred to the second acceptance date.

The conference language is English.

Ideally, demonstrations should be usable by attendees with 802.11b connected devices either via a web interface, or locally on Windows, UNIX-like, or MacOS platforms. Cross-platform applications are most desirable.

Our venue will be 21+.

To submit, send mail to submissions2005@codecon.org including the following information:

* Project name
* url of project home page
* tagline - one sentence or less summing up what the project does
* names of presenter(s) and urls of their home pages, if they have any
* one-paragraph bios of presenters, optional, under 100 words each
* project history, under 150 words
* what will be done in the project demo, under 200 words
* slides to be shown during the presentation, if applicable
* future plans

General Chairs: Jonathan Moore, Len Sassaman
Program Chair: Bram Cohen

Program Committee:

* Jeremy Bornstein, AtomShockwave Corp., USA
* Bram Cohen, BitTorrent, USA
* Jered Floyd, Permabit, USA
* Ian Goldberg, Zero-Knowledge Systems
* Dan Kaminsky, Avaya, USA
* Klaus Kursawe, Katholieke Universiteit Leuven, BE
* Ben Laurie, A.L. Digital Ltd., UK
* Jonathan Moore, Mosuki, USA
* Len Sassaman, Nomen Abditum Services, USA

Sponsorship:

If your organization is interested in sponsoring CodeCon, we would love to hear from you. In particular, we are looking for sponsors for social meals and parties on any of the three days of the conference, as well as sponsors of the conference as a whole and donors of door prizes. If you might be interested in sponsoring any of these aspects, please contact the conference organizers at codecon-admin@codecon.org.

Press policy:

CodeCon provides a limited number of passes to bona fide press. Complimentary press passes will be evaluated on request. Everyone is welcome to pay the low registration fee to attend without an official press credential.

Questions:

If you have questions about CodeCon, or would like to contact the organizers, please mail codecon-admin@codecon.org. Please note this address is only for questions and administrative requests, and not for workshop presentation submissions.

Motorized vehicles

I recently had the opportunity to try out Trevor Blackwell's segway and eunicycle. They're both a lot of fun, although the eunicycle requires actual practice to ride, unlike the segway, which you basically just step on and ride around.

I think that appropriate modifications to the eunicycle could make it even easier to ride than the segway. While this is extremely counterintuitive, it makes a lot of sense from a physics standpoint. A car is completely based on static balance, meaning that if it's turned off it doesn't fall over. A bicycle has some dynamic balance, because it would fall to the right or left if the rider stopped balancing. A unicycle is based completely on dynamic balance, which makes it much more difficult to ride, but also makes it far more controllable and maneuverable once you've learned to ride it. The way one shifts one's weight on a unicycle, and by extension a eunicycle, is ironically more intuitive for humans than the way it works on a bicycle, because it's the same as the way we do it with the form of locomotion we inexplicably use all the time, which is bipedalism. Bipedalism is completely based on dynamic balance, and requires constant corrections even when just standing still.

In order to make a eunicycle easier to ride, it must be made self-balancing. On a unicycle, and the eunicycle as it is today, you balance by waving your hands around, which is both difficult to do and very limited in the amount of force it can exert, which makes going at very high speed inherently dangerous since you can't maneuver. A more stable vehicle can be constructed as follows: On the bottom, there's a wheel like in the existing eunicycle. Above that, there's a horizontal flywheel attached to a high-torque motor which is used for balancing. Above that, and attached to the wheel via structural components which go around the flywheel, is a platform which the rider stands on, and in front of the rider there are handlebars of the same design as the segway for the rider to hold onto.

If the rider is moving forwards and leans right, the eunicycle turns the flywheel to the left, thereby turning the rider and wheel to the right and keeping the rider from falling over (although generally this will result in the vehicle being angled slightly more forward, so it will then accelerate to keep from falling forwards). Likewise, if the rider is going forward and leans left the flywheel is used to turn the rider and wheel to the left. If the rider is going backwards then the wheel is turned left to compensate for leaning right and right to compensate for leaning left.
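
The steering logic boils down to a sign rule. A toy sketch of just that decision (the function name is mine; a real vehicle would need a proper feedback controller with sensors and gains):

def flywheel_spin(lean, velocity):
    # lean: positive means leaning right; velocity: positive means forward
    # returns which way to spin the flywheel: +1 = right, -1 = left
    def sign(x):
        return (x > 0) - (x < 0)
    # going forward, steer into the lean; going backward, away from it
    desired_turn = sign(lean) * sign(velocity)
    # spinning the flywheel one way yaws the rider the other way
    return -desired_turn

print flywheel_spin(1, 1)   # forward, leaning right: spin flywheel left (-1)
print flywheel_spin(1, -1)  # backward, leaning right: spin flywheel right (+1)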

A weird problem is that if you keep turning in the same direction for a while the flywheel might build up considerable angular momentum, eventually getting to the point where it can't turn any faster and hence the steering bottoms out. I'm not sure if this would be a real problem in practice; there are several ways the effect could be dampened or avoided if it is.

In principle this sort of vehicle should be able to go faster than a motorcycle, since it has only one wheel and hence half the friction, although in practice the weight of the flywheel and stability issues might limit its speed.

If you really wanted the ultimate high-speed vehicle, it would probably be a glycerine jet-propelled unicycle with an electronic stabilization system, although that would be incapable of idling and in some ways be closer to a jet pack than a land vehicle.

As an experiment I'm dual-posting both here and on livejournal, so that people can leave comments. If you'd like to leave comments on this entry, go here.

Airport 'Security'

I'm apparently on the always-harass list. After getting searched before going onto airplanes so many times, I've learned a bit about the procedures involved. When flying on Alaska/Horizon, if you're marked to be searched your boarding pass says SSSS in the lower-right corner. Conveniently, you get your boarding pass before going through security. This is presumably so that any would-be hijacker can see that they're going to be searched thoroughly and drop off all their weapons before trying to get on the plane, to avoid all that trouble that catching and detaining them would cause.

My last flight I happened to be sitting next to a pilot deadheading back. He confirmed that security searches pilots too. Whoever designed current airport security procedures might have a reasonable excuse for this ineptitude though. For example, they might still be in kindergarten.

The SSSS mark has a 2D bar code above it. I wonder if anyone has ever collected a bunch of those and decoded them.

SHA-1

There's a rumor of a sha-1 break, related to a confirmed break of sha-0. Fortunately the breaks are unlikely to lead to the construction of pre-images, so there's no need to panic just yet, even if the rumors prove true.

Which leads to the question: what should we do? First of all, there's no need to change algorithms just yet, although anyone designing a protocol might want to hold off a few months unless time is of the essence (and in software everything is always late, so it rarely is). With a hash length of 160 bits, and thus birthday attacks costing about 2^80 operations, sha-1 is due to expire in roughly two decades no matter what, so we should seriously consider using a hash function with a longer output. Compare AES-128, which with a key and block size of 128 bits could easily survive past when moore's law runs out of gas.

Whatever happens, it would be a disaster for a multiplicity of hash functions to win out, since that would result in profound incompatibility. So the choice of successor to sha-1 shouldn't be taken lightly.

The clear political favorite is sha-256, which is different enough from sha-1 in structure not to be susceptible to the most recent attacks, and also has a 256 bit output to match aes's 128 bit key length. My only real complaint about sha-256 is that it's a different (and much less studied) cryptographic primitive than aes, thus providing two points of attack to our cryptographic systems rather than one. I would much prefer a hash function which is based on aes, such as whirlpool. Unfortunately whirlpool has a 512 bit output and about the performance of sha-512, at least on 32-bit systems. If there were a standardized aes-based hash which had an output of 256 bits and about the performance of sha-256 I'd recommend it unhesitatingly, but with the way things are now I'm torn.

If I had to put even money on it I'd bet on sha-256 winning, since that one already has considerable political oomph behind it, no one has complained about potential weaknesses in it, and everybody agrees on the importance of a single standard.

Python Exception Assertions

I would like to be able to state in my Python code 'assert that exception spam would get caught somewhere in the current call stack by something other than a catch of general exception'. There are many bugs which this would catch easily which are very difficult to search for comprehensively using test code, and which have caused me great pain in the past and probably will continue to do so in the future.

I mentioned this to Guido and he pointed out that the expressions stating what is to be caught aren't evaluated until an exception is actually thrown. This interesting piece of trivia indicates that the expressions would have to be evaluated at the time the assertion is made, which could be a little slow, but that doesn't invalidate the utility of the functionality I want.
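
That behavior is easy to demonstrate, since the expression after except only runs when an exception actually propagates to it (a minimal demonstration; the function names are made up):

def exc_type():
    print 'only evaluated when something is actually raised'
    return ValueError

def f(raise_it):
    try:
        if raise_it:
            raise ValueError
    except exc_type():  # this expression runs only if the raise happens
        print 'caught'

f(False)  # prints nothing: the except expression is never evaluated
f(True)   # prints both lines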

Guido indicated that, although he doesn't disagree with the functionality I want, it would require a significant change to the Python VM which he doesn't expect anyone to be interested in doing. So if you want a Python core project which is technically challenging and, in my humble opinion, clearly very beneficial, I suggest adding this functionality. If you do, I will be eternally grateful.

Hex

After playing around with the hex code I gave in my last entry, I think I can finally explain some opening theory.

The first question is, if your first piece is in the center, why is it weak to put your second piece near it? The answer is somewhat indirect. If there were other pieces randomly scattered around the board, a piece close to the central one would be very powerful, but at the beginning of the game there are very few pieces around, so you have to think specifically about the pieces in play. When your opponent responds to your strong central move, he will do so in a place away from where you moved, and since your board coverage is poor from being so focused on one point, his move will be relatively strong. By spreading out your moves you prevent that.

So the second centralized move isn't weak, it's just that the responses it allows are strong. A very surprising thing about this observation is that it's made straightforwardly by increasing the ply of look-ahead; in fact 2 ply sees it just fine. I always assumed that the beginning of a hex game is very strategic and that increasing the ply of look-ahead wouldn't improve play any, since that's reserved for 'tactical' situations. Apparently that guess was completely wrong, but I still think that increasing ply doesn't improve play unless your board evaluation function is better than a piece of garbage, which I didn't have until now.

The second question is, why do the best hex players now play pieces so far away from the center for their opening move? The answer is related to the answer to the first question - if you place your first piece in the center, then any strong place to put your second piece will probably be close to it, which will be weak for the reason I gave above. I believe this would be seen by a four-ply look-ahead. Again, my guess that increasing the ply of look-ahead wouldn't improve play was completely off base.

So now at least I have an explanation of why early hex moves are the way they are, although I suspect I'll have a lot of difficulty applying this knowledge to make my play better.

Hex is the only game I know of where hypermodern theory has become the dominant one.

A good place to play hex is kurnik.

Boom-Boom

I came up with a new and improved board shape for playing boom-boom on. Start with a checkerboard of odd side length, with the corners colored black and alternating squares colored white. Make the white squares part of the 'real' board, and connect each of them to the four (or in the case of edges, two) other white squares they share a corner with. Next connect each of the edge pieces with the two edge pieces closest to it, so for example (6, 1) connects to (8, 1) and (4, 1). (My terminology counts the corner as (1, 1) not (0, 0).) Then remove (2, 1) and (1, 2) and replace them with a single piece at (1, 1) which is connected to (4, 1), (3, 2), (2, 3) and (1, 4).

This board is probably most interesting to play go on, since it makes each node have exactly four neighbors, thus having the 'borderless' property of circular go boards without being so difficult to print.

Boom-Boom

One of the big problems with boom-boom is that after each explosion new pieces wind up being placed in fairly 'random' places, which makes the play extremely tactical, with not much advance planning possible. This can be eliminated by making it so that when a piece explodes a piece remains on the original exploding spot. Although that results in more pieces for the side which caused the explosion, it doesn't really incentivize pointless exploding because the new group tends to get captured as a unit. But you do wind up with the question of where excess pieces propagate to, so that a single explosion doesn't propagate forever. This also leads to the funky-shaped board becoming pointless, since it was designed to make explosions happen more quickly.

A few more logical steps along those lines results in the following game, which can be easily played using a standard othello set.

Both players alternate placing pieces of their color. On a player's turn they have the option, instead of placing a new piece, of capturing a piece of the opponent's which borders on one of their own (for the purposes of this game 'bordering on' means sharing an edge or corner). When a piece is captured all of the pieces of the same color which border it are captured recursively, thus a very large group can be captured at once. The game ends when one player occupies the entire board.

Note that if white has pieces on (3, 3) and (3, 5) and black has pieces on (3, 2), (3, 4) and (3, 6), then white capturing (3, 4) does not result in either (3, 2) or (3, 6) getting captured, although it would if there were black pieces on (4, 3) and (4, 5).
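
The capture rule is just a flood fill over 8-connected same-colored pieces. A minimal sketch, representing the board as a dictionary from coordinates to colors (my choice of representation):

def capture(board, x, y):
    # remove the piece at (x, y) and, recursively, every same-colored
    # piece bordering one already captured, where 'bordering' means
    # sharing an edge or a corner
    color = board[(x, y)]
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if board.get((cx, cy)) != color:
            continue
        del board[(cx, cy)]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    stack.append((cx + dx, cy + dy))

board = {(3, 3): 'W', (3, 5): 'W', (3, 2): 'B', (3, 4): 'B', (3, 6): 'B'}
capture(board, 3, 4)
print sorted(board.keys())  # (3, 2) and (3, 6) survive, matching the note above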

I'm not sure off the top of my head how to approach even very simple endgames of this game. Further experimentation is required.

Poker

It turns out that my simplified poker was a bit too simplified. Chris Chang pointed out that since all-in bluffs are guaranteed to fail half the time, raising one is strictly superior to going all in. He suggests adding more cards to get rid of that problem. With the way people play these days that game would be just about as interesting as no limit hold 'em. (See that link for a good explanation of why no limit isn't really 'the cadillac of poker' and pot limit is better.)

Here's an interesting question: if you have a positive expectation bet, how much should you bet each successive round to make your money grow as quickly as possible? If you bet nothing you obviously get nothing, but if you bet everything you wind up with zero and lose out on the ability to make any money in later rounds. It turns out this is known as the Kelly Criterion.
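
For even-money bets the Kelly bet is 2p - 1 of your bankroll, and more generally (p * (b + 1) - 1) / b when a win pays b times the stake. A minimal sketch with a simulation showing why betting everything is ruinous (the parameter values are just for illustration):

from random import random

def kelly_fraction(p, b):
    # optimal fraction of bankroll to bet when a win pays b times the
    # stake with probability p; zero if the bet has negative expectation
    return max(0.0, (p * (b + 1) - 1) / b)

def simulate(p, b, fraction, rounds):
    bankroll = 1.0
    for i in xrange(rounds):
        bet = bankroll * fraction
        if random() < p:
            bankroll += bet * b
        else:
            bankroll -= bet
    return bankroll

print kelly_fraction(0.6, 1.0)       # even money, 60% to win: bet 20%
print simulate(0.6, 1.0, 0.2, 1000)  # typically grows enormously
print simulate(0.6, 1.0, 1.0, 1000)  # betting everything: almost surely 0.0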

There's computer security snake oil even in poker. This 'audit' just about made my head explode. An audit is when you go over the internals of a system and verify that it works correctly. Running statistical tests on the output isn't an audit, it's just bullshit.

While trying to grok the rules to Razz I came up with the following game, which I think has most of the strategic elements of Razz but has a lot of the weirdness removed. I say 'I think' because Razz has a whole lot of weirdness and not having played it I have a hard time figuring out what that weirdness causes in practice.

Each player gets two pocket cards to start and community cards are dealt out as in hold 'em. The difference is in how hands are scored. Unlike in other forms of poker, the hands really are seven-carded. The hand with the highest card wins, with a tie-break of the second-highest card, then third, etc. Duplicate cards are completely ignored. For example, if player X has AQ and player Y has K3 and on the board is A5538 then X's full hand is AQ853 and Y's full hand is AK853, so Y wins. If two hands are identical except that one has an extra low card, then the one with more cards wins. For example, if player X has AQ and Y has A2 and on the board is Q9884 then X's hand is AQ984 and Y's hand is AQ9842, and since Y has the extra 2 Y wins. Having two extra cards of course also wins, and having a triplicate of a card causes both extra copies to be thrown out. As in Razz, flushes and straights don't count for anything.

I think there are a few things which make this an interesting game. A hand which contains two lower cards than another one pre-flop has only a low chance, I think about 7%, of winning, so trying to intimidate people by playing every hand will rapidly get you soaked. Also, the knowledge gained after each face-up card is dealt is directly related to the previous knowledge gained, increasing the chances of reasoning out what cards a player has based on their behavior. The chances of any particular card turning up on the board are reasonably high, so even if you start with a pocket ace there's almost a one in three chance that you'll find yourself playing garbage and having to figure out whether to bluff or fold.
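
The scoring rule maps neatly onto Python's list comparison, since comparing lists is lexicographic and a longer list beats a shorter one that it extends. A minimal sketch, with ranks as numbers and aces as 14:

def hand_value(pocket, board):
    # collapse duplicate ranks to a single copy and sort high-to-low
    return sorted(set(pocket + board), reverse=True)

# the first example above: X holds AQ, Y holds K3, the board is A5538
x = hand_value([14, 12], [14, 5, 5, 3, 8])  # [14, 12, 8, 5, 3], i.e. AQ853
y = hand_value([13, 3], [14, 5, 5, 3, 8])   # [14, 13, 8, 5, 3], i.e. AK853
print y > x  # True: Y wins
# the second example: Y's extra 2 makes a longer, and therefore better, hand
print hand_value([14, 2], [12, 9, 8, 8, 4]) > hand_value([14, 12], [12, 9, 8, 8, 4])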

Alarm Clock Display

I have a digital alarm clock with a seven-segment display which I've long noticed 'warbles' in funny ways when I brush my teeth with an electric toothbrush, due to the vibration it imparts to my head. I always attributed the weird ways different segments warble to odd cognitive phenomena, but the other night I noticed that it's always the segments on the top and right edges which warble as one group and the others warble as another, so now I wonder if there's something funny about the timing of it, for example maybe those three segments are offset by 1/120 of a second from the other four as they cycle on and off.

Monitors also warble when you bite into crunchy potato chips. It's highly amusing.

Hex

Based on the hex board evaluation ideas I gave earlier, I've written a hex playing program. I found giving it a two-ply look-ahead got rid of the tendency to play new moves right next to old moves.

This program plays positionally surprisingly well, but still has some very obvious weaknesses. For example, it doesn't realize that bridges are fully connected.

Clearly some sort of hexy/six-style feedback from tactical calculations into positional evaluation is necessary to make this program play better. I think my approach is quite a bit more inherently flexible about ways of doing that than the circuit resistance approach which hexy and six use.

To use this code, create a hex board of the size you want, then use x() and o() to place pieces, and to have the computer place a piece use moveo(). The computer only plays O, which is trying to connect the top and bottom. Runs of length 1000 work reasonably well.

Note that although a two-ply look-ahead is substantially better than a one-ply look-ahead, a chess-style three-ply look-ahead would be of dubious value. A better approach would be to make the results of the move/counter-move pairs influence which positions are generated in the fill-in process.

from random import randrange
from copy import deepcopy
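
# Print the board, offsetting each row to suggest the hex grid's skew.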
def hexprint(myarray):
    for i in xrange(len(myarray)):
        print (' ' * (len(myarray) - 1 - i)) + ' '.join([str(myarray[x][i]) for x in xrange(len(myarray))])
    print
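
# Monte Carlo hex player. The computer plays O, which tries to connect the
# top and bottom. Public methods take 1-based coordinates; board cells
# hold '.', 'X' or 'O'.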
class hexplayer:
    def __init__(self, size):
        self.size = size
        self.states = [['.' for i in xrange(self.size)] for j in xrange(self.size)]
    def o(self, x, y):
        assert self.states[x-1][y-1] == '.'
        self.states[x-1][y-1] = 'O'
        hexprint(self.states)
    def x(self, x, y):
        assert self.states[x-1][y-1] == '.'
        self.states[x-1][y-1] = 'X'
        hexprint(self.states)
    def findbest(self, values):
        best = -1
        bestx = None
        besty = None
        for i in xrange(self.size):
            for j in xrange(self.size):
                if values[i][j] > best and (self.states[i][j] == '.'):
                    best = values[i][j]
                    bestx = i
                    besty = j
        return (bestx, besty)
    def moveo(self, runs):
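        # Two-ply search: try O on each empty cell, find X's best reply
        # according to evaluate(), and keep the O move which leaves X
        # losing the most random fill-ins even after that reply.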
        values2 = [[-1 for i in xrange(self.size)] for j in xrange(self.size)]
        responses = [[None for i in xrange(self.size)] for j in xrange(self.size)]
        for i in xrange(self.size):
            for j in xrange(self.size):
                if self.states[i][j] == '.':
                    self.states[i][j] = 'O'
                    values, wonx = self.evaluate(runs)
                    bestx, besty = self.findbest(values)
                    assert self.states[bestx][besty] == '.'
                    responses[i][j] = (bestx, besty)
                    values2[i][j] = runs - wonx - values[bestx][besty]
                    self.states[i][j] = '.'
        bestx, besty = self.findbest(values2)
        self.states[bestx][besty] = 'O'
        hexprint(values2)
        print float(values2[bestx][besty]) / runs
        r1, r2 = responses[bestx][besty]
        print r1+1, r2+1
        hexprint(self.states)
    def evaluate(self, runs):
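        # Fill the empty cells in randomly 'runs' times. Count the fill-ins
        # where X connects its two sides (wonx), and in the ones where X
        # loses, count for each empty cell how often taking it would have
        # bridged X's side-connected groups.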
        wonx = 0
        top = self.size - 1
        values = [[0 for i in xrange(self.size)] for j in xrange(self.size)]
        for run in xrange(runs):
            myvalues = deepcopy(self.states)
            for i in xrange(self.size):
                for j in xrange(self.size):
                    if myvalues[i][j] == '.':
                        myvalues[i][j] = ('X', 'O')[randrange(2)]
            visited = [[0 for i in xrange(self.size)] for j in xrange(self.size)]
            def neighbors(x, y):
                r = []
                if y > 0:
                    r.append((x, y-1))
                    if x > 0:
                        r.append((x-1,y-1))
                if x > 0:
                    r.append((x-1, y))
                if x < top:
                    r.append((x+1, y))
                if y < top:
                    r.append((x, y+1))
                    if x < top:
                        r.append((x+1, y+1))
                return r
            remaining = [(0, i) for i in xrange(self.size)]
            while remaining:
                x, y = remaining.pop()
                if visited[x][y] != 0:
                    continue
                if myvalues[x][y] != 'X':
                    continue
                if x == top:
                    wonx += 1
                    break
                visited[x][y] = 1
                remaining.extend(neighbors(x, y))
            else:
                remaining = [(top, i) for i in xrange(self.size)]
                while remaining:
                    x, y = remaining.pop()
                    if visited[x][y] != 0:
                        continue
                    if myvalues[x][y] != 'X':
                        continue
                    visited[x][y] = 2
                    remaining.extend(neighbors(x, y))
                for i in xrange(self.size):
                    for j in xrange(self.size):
                        m = neighbors(i, j)
                        n = [visited[x][y] for (x, y) in m]
                        if (1 in n or i == 0) and (2 in n or i == top) and self.states[i][j] == '.':
                            values[i][j] += 1
        return values, wonx

I still have my domains

Thanks to everybody who mailed me translating the mail about my domains. A whole bunch of people translated it for me, and David Turner went so far as to call ovh and tell them about the apparent hijack attempt. And Joey DeVilla got in contact with James Woods at Tucows.

All the help turned out to be way more than needed. I probably could have ignored the original mail and not gotten my domains taken, since the transfer required affirmative action on the part of one of the contacts listed, and for those two domains I was the only contact.

Right before posting here about my domains I mailed tech support at my registrar, domainmonger. A few minutes later I noticed the domain locking feature, and turned it on. Somewhere about that time I got mail from tech support at domainmonger saying that they'd locked all of my domains. I'm not sure whether they or I did it first.

Thanks everybody for your help, that was confidence-inspiring.

In answer to the obvious question, 'Why didn't you just hit the site and say no?': First of all, the site was in French, which I only speak a few words of, and second of all, when I went to it, it said that I needed to enter some code taken from my registrar. While that message was probably legit and there for security purposes, it set off an alarm bell in my head that the attempted transfer mail I got might have been a fake sent by a spammer in an attempt to get me to give them a confirmation code to steal my domain. So I didn't do it.

Cool Mechanical Stuff

Here, here and here you can find lots of explanations of cool mechanical things.

My own crackpot engine ideas

I've designed a piston replacement I call a 'steel lung'. It basically acts like a piston in that it contains a chamber which expands and contracts, but unlike a piston the shape of the chamber remains mostly spherical for the entire stroke, for a much better surface-area-to-volume ratio than found in conventional pistons. Of course this creates extra friction, although not an insane amount - the amount of 'seam' is only two or three times what you have in a piston. It also increases sealing problems, but again not as much as you'd expect. It can be configured so that the expansive forces push the moving pieces into each other rather than away from each other, and the sealed edges move outwards during combustion, so there isn't all that much force on them.

The big question, of course, is whether heat loss during combustion is significant enough to be worth reducing. I'd appreciate it if anyone has any knowledgeable opinion about this.

Another idea of mine is based on how the heat of the piston head is frequently a problem in car engines. How about if, rather than a conventional piston, we have a piston which is composed of several sliding pieces in such a way that the piston head is actually a sliding piece whose exposed part is different with each ignition? That would cause the effective surface area of the piston head to be a lot larger, thus making it easier to keep cool. Again, I'd be interested in any knowledgeable opinions about whether this is a worthwhile goal even in principle. I have a design for this, but it lacks elegance.

Finally, I have a question about the way in which motion is translated from reciprocating to rotational. It seems a little odd that real engines do this with a simple crankshaft, since that has a very particular rate of movement at each point, and I have a hard time believing that it's optimal. There are plenty of only slightly more complicated mechanisms, many given in the links above, which can modify this with a lot of precision. Does the relative rate of movement of the piston at each point in the cycle not matter all that much, or is it simply not worth the extra complexity?

Buckminster Fuller was a crackpot

Buckminster Fuller's intended great invention was the dymaxion car, which was basically a stretch limo with a single rear wheel which did the steering. Its funding was pulled after a fatal crash.

You can read Fuller's excuse-making about the crash here. For those of you unfamiliar with the physics involved, I'll explain. The car turned left and, as is inevitable with a 20-foot-long vehicle with rear-wheel steering, its butt stuck out into the right lane. There happened to be a car there (Fuller claims it was 'rubbernecking', which is completely beside the point - this is a very common kind of accident which shouldn't cause a fatality). When the rear side of the dymaxion hit the other car, its single rear wheel got shoved hard into the ground, making it precess and forcing the turn to be even greater, quickly causing the car to get to such an angle that it rolled over.

The investors wisely decided to pull out after this, presumably because they wanted nothing to do with a company with such callous disregard for passenger safety, and that was the end of that. Fuller did spend a huge amount of his own money attempting to build more cars, but they never went anywhere, probably because the quality of his engineering ideas was about what you'd expect, which is to say, very poor.

When you look at Fuller's accomplishments, philosophy, and claims, they bear a notable resemblance to many internet pundits today.

Moller Skycar

Speaking of which, those of you who have heard of the Moller Skycar may wish to read this.

If you're interested in flying recreationally, I recommend powered paragliding.

Simplified Poker

When musing about poker, I came up with a simplified version, which is close to as simple as you can get. Two players, three cards. Each player is dealt a single card. The higher card wins. After anteing up, each player can either fold, call, raise one, or go all-in. I suspect that this variant would actually be fairly interesting to play, since it involves going all-in (which I think makes complete mathematical analysis beyond the capabilities of current techniques) and the small number of cards makes the contents of the other player's hand strongly related to the contents of your own.

An improved trust metric

I think I've figured out how to improve on the last batch of trust metric ideas I posted. In those, I had problems with everything certified by the 'root' looking like spam, and suggested some very kludgy ways of dealing with that problem.

A much better approach is to view spam guilt as 'leaking' out of the system, an extremely small amount from everywhere. I'll present the technique as an iterative process, in which spam guilt gets back-propagated with each round. After many rounds have been done, it will be clear that some nodes have their guilt score increasing without bound, while the rest will have their guilt scores stabilized. All nodes whose guilt surpasses some threshold are then removed (the number of rounds and the threshold are obviously a judgement call, but generally more is better). The distinctions between directly vs. indirectly attacking nodes I posted about before are then made, and the decision of which nodes can mail is based on the result.

(At some point I'm going to post a consolidated form of my best trust metric to date, so people don't have to read a long series of posts and cull the non-obsolete snippets out of them to figure out what's going on. In the mean time, I apologize for the vague references to stuff I've posted about before, and please bear with me.)

First, we have some value for how much spam each node is allowed to send before getting cut off, which we set by judgment call. A hundred pieces sounds reasonable, so I'll use that for the sake of argument. Next, we calculate how much 'leak' each node gets, which we can reasonably set to 3 * (total amount of spam ever sent) / (total number of nodes). The number 3 can obviously be varied, but that's a good value.

On each round, we add spam score to direct infringers, back-propagate spam scores, and then leak spam scores. We add spam scores for each direct infringer simply by adding the amount they've spammed to their score. To back-propagate, we iterate over each node, and for each of them we find the minimum spam score among it and the nodes which certified it, and transfer 100 to that node. If the node in question's spam score is less than 100, we transfer that amount instead. To leak, we reduce the spam score of each node by the leakage amount, rounding up to zero.

I'm quite happy with this algorithm. It's essentially linear time, and I can say with a straight face that it's both secure and practical.
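
Here's one round in code form (a sketch of my reading of the description above; in particular, sending the guilt to the least-guilty certifier is my interpretation of 'transfer 100 to that node'):

def spam_round(certifiers, spammed, scores, leak):
    # certifiers maps each node to the list of nodes which certified it;
    # spammed maps direct infringers to the amount spammed this round;
    # scores holds accumulated guilt and is updated in place
    for node, amount in spammed.items():
        scores[node] = scores.get(node, 0) + amount
    # back-propagate: each node hands up to 100 guilt to its least
    # guilty certifier, if that certifier is less guilty than itself
    for node in certifiers:
        certs = certifiers[node]
        if not certs:
            continue
        target = min(certs, key=lambda c: scores.get(c, 0))
        if scores.get(target, 0) < scores.get(node, 0):
            amount = min(100, scores.get(node, 0))
            scores[node] = scores[node] - amount
            scores[target] = scores.get(target, 0) + amount
    # leak: every node sheds the leakage amount, floored at zero
    for node in list(scores):
        scores[node] = max(0, scores[node] - leak)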

Chess

bgeiger: If one were to select a game to be the single canonical abstract strategy game which people the world over knew how to play, having draws would be a complete disqualification. Go has no draws if you use a super-ko rule and non-integer komi, but there are many modern board games which meet that criterion as well, for example Amazons.

Phew

Enough blogging for one sitting. More later.
