Older blog entries for Bram (starting at number 10)

Roshambo

Today I'll be writing about computer strategies for the classic game Roshambo, also known as paper scissors stone.

But ... but ... There's no strategy in Roshambo!

While it is true that playing randomly reliably comes out dead average, in any tournament everyone will be trying to win, hence everyone will have bias, and strategies which do better than average against a wide range of opponents will rise to the top.

I've spent far more time than is reasonable reading about the computer Roshambo tournaments, and will present a distilled explanation of the strategies here.

Henny

Henny exemplifies the simplest strategy which works well in practice. It picks a random move which the opponent played previously, and plays the response which beats it. If the opponent has any bias toward playing the same move over time, Henny will win. Henny is also very defensive - optimal play against it will only get an edge very slowly over time, and even that might get swallowed by noise.
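
A rough sketch of a Henny-style bot in Python (not the actual entry; the move names and the BEATS table are just for illustration):

    import random

    BEATS = {'stone': 'paper', 'paper': 'scissors', 'scissors': 'stone'}

    def henny_move(opponent_history):
        # With no history yet there's nothing to exploit, so play randomly.
        if not opponent_history:
            return random.choice(list(BEATS))
        # Pick one of the opponent's past moves uniformly at random
        # and play whatever beats it.
        return BEATS[random.choice(opponent_history)]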

Markov Chains

Another strategy which wins in practice is to use Markov chains - look at the last n moves your opponent made, and predict that they'll continue playing as they have most frequently in the past. Considerable variation is possible in the length of chains, which brings us to...

Iocaine Powder

The competitor Iocaine Powder magically combines the best qualities of Henny and Markov chaining by looking for the longest match between the moves your opponent just played and any sequence they've played in the past, and assuming they'll play the same next move again. If the opponent has a bias based on the last n moves for some n, this strategy will essentially pick a random time they played the same last n moves they just did, which is the defensive strategy used by Henny.

Ironically, this algorithm is completely deterministic.

Note that when there are equal-length matches it's slightly better to go with the first matching sequence rather than the last, so as to perform well when an opponent plays a string of paper, then a scissors, then a string of paper again.

It's possible to implement this technique very efficiently using trees, which strangely Iocaine Powder doesn't do (MemBot does, though).
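
Here is a brute-force sketch of the longest-match idea (no trees, and not Iocaine Powder's actual code), reusing the BEATS table from the Henny sketch above. It breaks ties toward the earliest match, as suggested above:

    def longest_match_move(opponent_history):
        # Find the longest suffix of the history that has occurred earlier,
        # preferring the first occurrence when several matches tie in length.
        predicted = None
        for length in range(len(opponent_history) - 1, 0, -1):
            suffix = opponent_history[-length:]
            for start in range(len(opponent_history) - length):
                if opponent_history[start:start + length] == suffix:
                    # Predict that they'll repeat what followed that match.
                    predicted = opponent_history[start + length]
                    break
            if predicted is not None:
                break
        if predicted is None:
            return random.choice(list(BEATS))
        return BEATS[predicted]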

But this strategy still has a major weakness, and Iocaine Powder has another trick up its sleeve...

Sicilian Reasoning

A braindead, fixed-sequence entry named DeBruijnize nearly cleaned up in the first tournament. DeBruijnize sequences are biased against repeating patterns they've produced in the past, rather than in favor of them. A meta-strategy called sicilian reasoning beats it nicely. Compute how well you would have done to date predicting the opponent's move straightforwardly, then always shifting it up one (paper->scissors, scissors->rock, rock->paper), then down one, then the same three predicting instead your own side. Play whichever one would have the highest score so far.

Sicilian reasoning cleanly defeats not only DeBruijnize, but all manner of cheap variants on it.
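
A rough sketch of the meta-strategy (again reusing BEATS from above; predict() stands in for whatever base predictor you like, and is assumed to take a move history and return a guess at its next element):

    class Sicilian:
        def __init__(self, predict):
            self.predict = predict
            self.scores = [0] * 6    # cumulative score of each of the six variants
            self.last = None         # what each variant wanted to play last turn

        def move(self, my_history, opp_history):
            # Credit or debit each variant for how it would have done last turn.
            if self.last and opp_history:
                them = opp_history[-1]
                for i, m in enumerate(self.last):
                    self.scores[i] += (BEATS[them] == m) - (BEATS[m] == them)
            variants = []
            for history in (opp_history, my_history):
                guess = self.predict(history) if history else 'stone'
                play = BEATS[guess]          # straightforward response to the guess
                for _ in range(3):
                    variants.append(play)
                    play = BEATS[play]       # shift up one for the next variant
            self.last = variants
            # Play whichever variant would have the highest score so far.
            return variants[max(range(6), key=lambda i: self.scores[i])]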

Sword and Shield

This is my own idea, which is untested, but seems reasonable.

Generate predictions via whatever method (pre-sicilian reasoning) for both your move and the opponent's move, then play a move based on the following table. The top row is your predicted move, the left column is the opponent's predicted move (t = stone) -

        p  s  t
      ---------
   p |  s  p  s
   s |  t  t  s
   t |  t  p  p

This algorithm plays defensively by predicting both sides simultaneously, and hence may beat any opponent which bases each specific move on a prediction for only one side. It has nine sicilian reasoning variants, found by offsetting the prediction for either side in one of the three possible ways.
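
The table above, written out as a lookup (p/s/t as in the table; predict_mine and predict_theirs stand in for whatever pre-sicilian predictors are being used):

    # Keyed by (your predicted move, opponent's predicted move).
    SWORD_AND_SHIELD = {
        ('p', 'p'): 's', ('s', 'p'): 'p', ('t', 'p'): 's',
        ('p', 's'): 't', ('s', 's'): 't', ('t', 's'): 's',
        ('p', 't'): 't', ('s', 't'): 'p', ('t', 't'): 'p',
    }

    def sword_and_shield_move(predict_mine, predict_theirs, my_history, opp_history):
        return SWORD_AND_SHIELD[(predict_mine(my_history), predict_theirs(opp_history))]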

Wimping Out

A tempting strategy is to make your bot 'wimp out' and start playing randomly if it isn't doing well. Tournaments play two programs against each other many times with no persistent information between runs to keep this strategy from being effective.

Optimizing for Score

The winning entry, Greenberg, evaluates how it would have performed with history buffers of various sizes, applying an exponential falloff so that recent performance counts for more. This makes it score very well against weaker opponents, but makes it more vulnerable to better opponents. Score, match wins, and worst-case lossage are all subtly different things to optimize for.
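
A tiny sketch of what exponentially decayed scoring might look like when choosing among candidate predictors (the decay constant is made up, not Greenberg's):

    DECAY = 0.95   # hypothetical falloff constant; recent turns count for more

    def update_scores(scores, outcomes):
        # scores: decayed running score per candidate strategy.
        # outcomes: this turn's +1 / 0 / -1 result per candidate.
        for name, outcome in outcomes.items():
            scores[name] = scores.get(name, 0.0) * DECAY + outcome
        return scores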

The Future

Sadly, there appear to be no plans for future computer Roshambo tournaments. It would be interesting to see how strategies continue to evolve.

12 Aug 2002 (updated 12 Aug 2002 at 21:29 UTC) »
Formal Methods

The utility of formal methods for practical code writing is an interesting debate. As you might expect, I have some thoughts on the matter.

Code winds up much better if it is rewritten pragmatically with experience. I think there isn't a single line of BitTorrent which hasn't been rewritten at least twice. If I had been using formal methods, the whole program would essentially have had to be proven once for each rewrite. Much easier is to write the whole program once using more direct methods and then audit it.

The much-vaunted slow programming of the space shuttle is mostly a triumph of bureaucracy. Switching from a B&D to a write-then-audit methodology would drastically reduce costs and result in better code.

While I'm still not convinced of the practical utility of formal methods, I'm completely sold on unit tests. A unit test is a test which exercises some code without requiring the rest of the program to be linked in. The approach to writing them is to make them all pass/fail, and have a script which runs all unit tests and reports failures. Whenever a unit test is failing, fixing it becomes the top priority. Anyone who doesn't fix their unit test breakage, with the excuse that getting all the tests to pass is impractical anyway, should be fired (or at least have their commit access revoked).
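
A minimal sketch of the pass/fail style, using Python's standard unittest module (the function under test is a made-up example, not BitTorrent code):

    import unittest

    def encode_message(payload):
        # Toy function under test: 4-byte big-endian length prefix plus payload.
        return len(payload).to_bytes(4, 'big') + payload

    class TestEncodeMessage(unittest.TestCase):
        def test_empty_payload(self):
            self.assertEqual(encode_message(b''), b'\x00\x00\x00\x00')

        def test_length_prefix(self):
            self.assertEqual(int.from_bytes(encode_message(b'abc')[:4], 'big'), 3)

    if __name__ == '__main__':
        unittest.main()   # a top-level script can run every such file and report failures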

While formal methods clearly slow down development, unit testing actually speeds it up by reducing debugging time, making it not only practical but easily justifiable.

Writing good unit tests is more art than science. The general idea is to try to make the unit test code simpler than the code it's testing, and also to make the test code assert statements from a fundamentally different perspective than the code itself. Formal methods, by contrast, wind up repeating the code a lot, and thus aren't good at catching specification bugs.

After two years or so of writing unit tests for most of my code, I find the thought of debugging something without them absolutely horrifying, and can't bring myself to even toy with newly written code without writing some tests first (I generally don't test bootstrap and UI code though, since those aren't straightforwardly encapsulatable).

The above approach to unit tests is taken from extreme programming. However, extreme programming people seem to think that unit == class, which is wrong, and their encapsulated APIs seem to not be nearly as hermetically sealed as mine. Units can span several classes, and if they do, those classes should internally have complete access to each other's data, while the outside world should only have access to data and methods it is explicitly granted.

The bugs I've run across in BitTorrent generally fall into the following categories, each of which occurs about an order of magnitude less often than the one before -

  1. Bugs found in proofreading. My skill at this depends a lot on how tired I am.

  2. Bugs found writing unit tests. This is what really causes my debug time to be so small.

  3. Bugs which manage to slip past unit tests but cause a stack trace when invoked and are fixed within a few minutes of being found.

  4. Bugs which involve subtle misunderstandings of third-party APIs. There are far too many of these, since basically all third-party APIs are written horrendously (wx, select(), open()), but I generally get them fixed eventually, once I understand what the actual semantics are.

  5. There was one (count 'em, one) bug which managed to slip past all the testing and remain in the program for months, resulting in disastrous performance. The wrong integer was getting passed in by bootstrap code, and the generally robust nature of the underlying APIs caused the problem to be manifested as extremely poor performance rather than an outright bug. Eventually I observed some behavior which unambiguously indicated a bug, and managed to fix the problem after a few hours (and a lot of whining on my part).

    Notably, this problem could just as easily have slipped past formal methods, but would have been caught very easily had I been religious about passing parameters by name rather than position (see the sketch after this list).

    (The specific bug, in case you're curious, was that the number of pieces to pipeline from any one peer was set at 32768 instead of 5, often resulting in the entire file being requested from one peer. Since BitTorrent usually has a policy of not requesting the same piece from more than one peer at once, there were other connections which were going unutilized since they had nothing else to request. This problem would have been even further masked had endgame mode been written at the time. Robust APIs can eliminate the possibility of a huge number of bugs, but the bugs which remain can sometimes be very strongly masked.)
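
A sketch of the pass-by-name point (the function and parameter names are illustrative, not BitTorrent's actual code):

    def make_downloader(pipeline_depth, piece_count):
        ...

    # Positionally, transposed arguments slip straight through:
    make_downloader(32768, 5)

    # Passed by name, the transposition is much harder to make and easier to spot:
    make_downloader(pipeline_depth=5, piece_count=32768)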

Gollum Gollum

Someone posted an advogato diary entry (I'm not sure who, it's scrolled off recentlog) linking to a paper on exhaustive search for Golomb rulers. Unfortunately, the site was down and archive.org didn't have the paper; I'd like to read it.

Conjecture Proven

Robert M. Solovay sent me a proof of my conjecture about prescient busy beaver numbers. Here it is (slightly rearranged) -

I can prove your conjecture [and much more]. For example, there is a d such that

pbb(x) > bb(bb(x)) for all x > d.

This is stronger since bb(x) > h(x) for x large for *any* recursive function h.

All Turing machines considered in this letter have one two-way infinite tape and two symbols, 1 and "blank".

Let me state a series of results and then prove them:

Definition: K is the set of Godel numbers of TM's which, started on a blank tape, eventually halt. (A Godel number is a natural number encoding of a turing machine).

bbk is the Busy Beaver function for oracle TM's that use the oracle K.

It follows that for any function h which is recursive in K, we have bbk(n) >= h(n) for all but finitely many n.

In particular, bbk(n) >= bb(bb(2^n)) for all but finitely many n.

Assertion: There is a positive constant C such that:

  1. bbk(Cn * log n) >= pbb(n) for all but finitely many n.

  2. pbb(Cn * log n) >= bbk(n) for all but finitely many n.

This is the assertion whose proof I will sketch. But a little more work could improve this by replacing Cn * log n by just Cn.

Let's start with the proof of 2). Let M be a TM with at most n states [which uses an oracle for K] that halts in precisely bbk(n) steps. We will design an ordinary TM, N, with O(n log n) states that visits state 1 a finite number of times but more than bbk(n) times.

It clearly takes O(n log n) bits to specify the table of the TM M.

N will first use O(n log n) states to write this description of M on its tape.

Comments: Using a trick of Chaitin, one could get by with just O(n) states here. It is at this point that the O(n log n) comes in. The remainder of the proof uses only O(1) states.

Officially, our TM has only two symbols and one tape. But it can simulate a k-tape, n-symbol TM with r states using Cr states of its own. [The C depends on k and n.] I find this remark useful in thinking through the details of programming N. I will, however, suppress these details in this sketch.

N will construct a series of approximations to the set K. The first approximation is that K is the empty set. It then simulates M using this approximation to K till one of two things happens:

(a) the simulation halts;

(b) for some number that the simulation queried the oracle about, N discovers that its current approximation to the oracle gave the wrong answer.

If (a) happens, the program continues to look for an event of type (b). If the program ever finds an event of type (b), it redoes the simulation of M's action using the newly updated version of K.

Eventually, the program will find the correct computation of the M program from the true K.

We arrange for N to pass through state 1 on every step of every simulation.

Because we eventually converge on the true computation of M, state 1 is only entered finitely many times. Because the true computation of M is eventually simulated, and it runs for bbk(n) steps, state 1 is entered more than bbk(n) times.

The proof of claim 1 is similar. We now let M be a TM of at most n states that enters state 1 precisely pbb(n) times.

Here is an oracle TM, N, [using an oracle for K] with O(n log n) states and that eventually halts but runs at least pbb(n) steps before halting.

N first uses O(n log n) states to write down a description of M.

It now simulates M. But before doing any step of M it asks "Will M ever again enter state 1?" [It can settle this question using its oracle for the halting problem.] If N ever gets the answer NO to this question it immediately halts.

What I find interesting about this proof isn't so much that it shows that bb grows more slowly than pbb, which seemed fairly obvious, but that it shows that pbb and bbk grow at essentially the same rate, which contradicted my intuition that pbb grows much more slowly than bbk. I always find unexpected results more interesting than expected ones.

Bad Code

Reading Dijkstra's obituary, I realized that I've always viscerally thought of structured programming as a concept which has always existed, and always been obvious. In fact it was a great triumph of engineering, and one created within my grandparents' lifetime.

My own method of programming is such that I cannot fathom writing code without having a proof of correctness in mind. Prove first, then type. Unfortunately programmers seem to fall clearly into one of two categories - those who can't program without thinking about proofs of correctness, and those who can't think about correctness proofs at all. Sadly, the second type seems to be dominant, with disastrous consequences for code quality.

Interestingly, Dijkstra mentioned at the end of note 1308 that the phrase 'considered harmful' was coined by Niklaus Wirth.

Continue Not Harmful

Many people incorrectly take the problems with gotos to mean they should avoid other advanced control flow structures. These include -

  • break
  • continue
  • return (from the middle of a function)
  • while true (implicitly using one of the above)
  • for/else (a pythonism, else is executed if no break happened)
  • finally

All of the above except finally are completely sound control-flow techniques, and you should use them whenever it results in more concise code.
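
A couple of small Python illustrations (made-up examples) of the constructs above - an early return and the for/else idiom:

    def first_config(paths):
        # Return from the middle of the function: stop at the first match.
        for path in paths:
            if path.endswith('.conf'):
                return path
        return None

    def report(items, target):
        for item in items:
            if item == target:
                print('found it')
                break
        else:
            # for/else: this branch runs only if the loop finished without a break.
            print('not found')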

The unifying abstraction behind all of them is that each execution point in the program has an expected state of the call stack, so to go there you pop your own state off the stack, push the expected state (an exception or return values) back on, and then jump.

Exceptions are interesting because they decide how far back to pop the stack dynamically. The way most languages conflate exception identification with the class hierarchy is kind of a hack, but works acceptably.

'Finally', on the other hand, is a conceptual mess. I pity anyone stuck debugging memory leaks in a non-garbage-collected language which are triggered by a finally clause raising an exception of its own.

Golomb Rulers

The exhaustive search algorithm which ogr uses can probably be improved a lot. In essence, it recursively starts with a ruler with some positions marked and some positions consequently ruled out, and branches on whether or not a given position is marked.

The technique it uses for picking which position to branch on is to pick the smallest available one, which is awful. In general, we wish to branch on a spot which is likely to be proven to not lead to a solution as quickly as possible. Always using the least available position causes positions towards the beginning to be ruled out redundantly and positions towards the end not to be ruled out until very late. In particular, none of the positions in the last third will be ruled out until a position is marked outside of the first third.

Better techniques include picking the position which rules out the most available spots, and picking one (pseudo)randomly. These techniques carry significant overhead, so it probably makes sense to branch on the last few marks by picking least available, but they are clearly warranted for the first few marks.
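
A rough sketch of the 'rules out the most' heuristic (not ogr's code; marks is assumed to be the set of marked positions, diffs_used the set of pairwise differences already present, and available the positions still undecided):

    def conflicts(marks, diffs_used, p):
        # True if marking position p would repeat a pairwise difference.
        new = set()
        for m in marks:
            d = abs(p - m)
            if d in diffs_used or d in new:
                return True
            new.add(d)
        return False

    def rules_out(marks, diffs_used, candidate, available):
        # How many still-legal positions would become illegal if candidate were marked.
        marks2 = marks | {candidate}
        diffs2 = diffs_used | {abs(candidate - m) for m in marks}
        return sum(1 for p in available
                   if p != candidate
                   and not conflicts(marks, diffs_used, p)
                   and conflicts(marks2, diffs2, p))

    def pick_branch(marks, diffs_used, available):
        # Branch on the position that rules out the most alternatives,
        # so contradictions surface as early as possible.
        return max(available, key=lambda c: rules_out(marks, diffs_used, c, available))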

Unfortunately, the last few exhaustive searches for optimal golomb rulers have just verified the previously known best solutions. Ah well.

Ceci n'est pas une Pipe

fxn: I am using the word interpretation for what it means in English, a meaning it has had since long before it was ever used in mathematics. The value 1 + 1 + 1 in PA, viewed completely abstractly, has no more to do with our concept of 'three' than it does with the planet Mars. Only with interpretation does the statement 1 + 1 + 1 = three have meaning.

I'm of the opinion that the natural numbers are the main impetus for trying to make mathematics formal in the first place. Every time a new basis for mathematics is devised, the very first thing which is done with it is to build the natural numbers. Mathematics is a slave to our concept of the naturals, not the other way around. On the other hand, there is no generally agreed upon interpretation of the statement 'the real numbers are compact', and that disturbing problem lies in the dark corners of even such mundane subjects as high school geometry.

I have a confession to make. I don't really believe in anything other than PA. My interest in foundations of mathematics is mostly a morbid fascination with the possibility that important new results will strongly depend on large cardinal axioms or worse. I don't really even believe in Goodstein's theorem. Sure, I'll happily talk about it as if it's true while sitting in the safety of my own home, but if I had the opportunity to take a joyride in a spaceship whose engineering soundness I had full confidence in except that it depended on Goodstein's theorem to not go spinning out of control, my feet would remain firmly planted on the ground, thankyouverymuch.

I do, however, believe in the planet Mars. I would even go so far as to say that it exists.

I said a boom-chika-boom

Chicago: The order in which explosions are done does not, in fact, matter. Proving the following is an interesting exercise -

  • A never-ending explosion must take control of the entire board.
  • It is impossible for the entire board to be filled with pieces of one side and it be the other side's turn (this would result in there being no legal move).

I could add logins and rating and other games and such to my Boom Boom implementation reasonably easily, but have decided not to. That could easily become a time pit, and there are already underutilized games servers on the net. For example, there's the excellent Newton Games, which includes my favorite game, hex.

With regards to angel vs. devil, I haven't come up with any strategies for the devil which go much beyond the ones given in Winning Ways, which show that, for example, the devil can force the angel to make a single turn which involves moving downwards.

UI snit

Whenever I'm editing an already posted diary entry I always blanch at hitting the 'Post' button because I'm afraid it will double-post. It would be better for it to say 'Update' in that case.

Boom Boom

I've implemented a new game called Boom Boom; you can read about it and play it online here.

I like making web toys. They don't require a several month commitment to get anything working.

BitTorrent Usage

A BitTorrent deployment is up here. It's great to see BitTorrent deployments not motivated by its cool toy aspect starting to happen.

Interpretation

chalst: Mathematics is, of course, just a bunch of symbols, but certain theories have an interpretation which has meaning. The consensus reality interpretation of PA is inconsistent with the axiom of inconsistency of PA. I believe that there is an interpretation of large cardinal axioms which both has meaning and implies their existence.

It is conceivable that there is also an interpretation of an axiom which contradicts the large cardinal axioms, such as the axiom of constructibility, which also has meaning. That would not be as philosophically bothersome as it sounds - the interpretations would be an axiomatic pun, unrelated to each other in terms of real meaning.

6 Aug 2002 (updated 6 Aug 2002 at 19:18 UTC) »

This diary entry is all mathematical musings

Concreteness

I've been thinking about how to differentiate between amorphous conjectures like the axiom of choice, whose truth values are ambiguous, and the Goldbach conjecture, which is clearly either true or false. If it's true but unprovable, one can add an axiom saying that there is a counterexample, and no logical contradiction will result, but this is strictly a bizarre creature of axiom-land. It clearly is not something we wish to accept. I believe that the Goldbach conjecture is concrete while the axiom of choice is not.

Any conjecture of the form 'turing machine X halts' is concrete, as is any conjecture which can be shown to be equivalent to such a statement. Some statements I consider to be concrete can't be expressed this way - for example, the twin primes conjecture - so another class of statements must be included. Specifically, all statements of the form 'turing machine X hits state s an infinite number of times'. Whether this encompasses all statements which I'd like to consider concrete I don't know.

Ludicrously Large Numbers

The second category of concreteness seems fundamentally more powerful than the first one, which leads to an interesting conjecture. Let us define a 'prescient busy beaver' as a turing machine with no halting condition but having a specific blessed state, and it halts after the last time it will ever visit the blessed state. We define pbb(x) for any integer x as the largest number of steps any prescient busy beaver with x states will take before halting (excluding non-halting ones, of course.) I conjecture that for any arbitrarily large integer c, there exists a d such that for all x > d, pbb(x) > bb(x * c).

Less Ludicrously Large Numbers

While on the subject of ridiculously large numbers, I came up with an interesting variant of Goodstein's theorem. Goodstein's theorem really operates on ordered pairs (x, y): every turn x undergoes a transformation involving expressing it in terms of y and changing all the y's to y+1, then subtracting 1, and the second number is incremented by 1. The theorem still applies if we instead replace y with x, which results in the number of steps necessary to inevitably hit 0 going from unimaginably huge to ... yet even more unimaginably huge. This has gotten quite ludicrous, but I still enjoy the game of naming the largest number you can by specifying a turing machine which halts after that number of steps.

I wonder if there are any reasonably expressible processes which can be shown to halt based on transfinite induction using a larger cardinal. Intuitively, I would guess there is, and such a creature would completely toast the example I just gave in terms of time to halting.

Concreteness of Large Cardinals

I believe that the statement of the existence of large cardinals is, in fact, concrete. Clearly they aren't concrete in the strict sense, but their consistency can be shown to be equivalent to statements which I do very strongly feel to be not only concrete, but also true.

For any axiomatic system, we inevitably wish to add an axiom stating that all the other axioms are consistent. We then wish to add an axiom stating that that axiom is consistent, then another, and another, etc., until we generalize to aleph null. This can be further generalized (via statements beyond my own mathematical ability to express properly) to larger infinite numbers, and suddenly the whole cardinal hierarchy appears.

Although this intuitive justification is rarely stated, I believe it's the reason so many mathematicians believe strongly in large cardinal axioms, despite their otherwise very loose justification. Hopefully some day we'll figure out a way of stating within mathematics that the mathematics itself is consistent without falling into the trap laid by Godel, and then the cardinal hierarchy will be derivable in a coherent manner, instead of on an ad hoc intuitive basis.

31 Jul 2002 (updated 31 Jul 2002 at 22:35 UTC) »
A Species of Gamblers

I finished reading Fooled by Randomness. What a great book. Taleb is a true man of science, and this book contains none of the usual financial bluster.

There are many interesting points made in this book. Highlights include -

  • If you look at stock markets globally historically, they aren't nearly as stable as the US stock market has been, and even in the US stock market the feature of never having fallen over a 20 year period is probably an accident of history. The risks of investing in the stock market are seriously underplayed.

  • Most successful traders are probably just plain lucky. Our view is distorted by forgetting about the ones who lost and were forced out of the industry.

There is a striking similarity between trading and programming in that both are so emotionally demanding that many, if not most, people can never do them well. Good trading involves admitting that most of your performance is random. Of course, most successful investors will never accept that fact, and get very angry if you suggest it. Thankfully, the market has now crashed, so I no longer routinely offend people when the subject comes up in conversation.

Good programming involves admitting your fallibility, with bugs and initially poor design being a fact of life. With only a little bit of cynicism, most programming Methodologies can be seen as very transparent attempts to not accept those facts. Their self-assuredness that such a thing is possible despite a total lack of evidence is actually quite childish.

All mathematical concepts in this book are explained very clearly in plain English, which is great for a populist book, but left me wanting more rigor. I may read Taleb's other book.

Reading this book made me very much want to get into finance. Every time my life starts to get in order I figure out some way to make it chaotic again, so I might as well have work which has chaos built in and pays commensurately. The software and mathematical problems involved are quite interesting, and there's even some social benefit justification - bad investment helps turn the markets into a gamble, while good investment helps keep people's savings safe.

Investing in Angels

I have an interesting strategy for the angel in the angel problem.

The angel imagines herself doing a random walk of legal moves on non-eliminated squares. She keeps a record of what the path is, but truncates it down to nothing every time her walk returns to the original square. Over time, one of the available first moves will spend the largest fraction of time as the first step in her walk, and she makes that move.

This strategy does a good job of avoiding traps no matter how deceptively and contortedly they're shaped. How one might prove that it always works, however, I have no idea.
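
A Monte-Carlo sketch of the idea (untested, like the idea itself; legal_moves(square) is an assumed helper yielding the angel's legal destination squares, and eliminated is the set of squares the devil has eaten):

    import random
    from collections import Counter

    def choose_angel_move(start, legal_moves, eliminated, steps=100000):
        time_as_first = Counter()
        first = None              # first step of the currently recorded walk
        pos = start
        for _ in range(steps):
            options = [m for m in legal_moves(pos) if m not in eliminated]
            if not options:
                break
            pos = random.choice(options)
            if pos == start:
                first = None      # walk returned home: truncate the record
            elif first is None:
                first = pos       # this move now heads the recorded walk
            if first is not None:
                time_as_first[first] += 1
        # Make the move that spent the largest fraction of time heading the walk.
        return time_as_first.most_common(1)[0][0] if time_as_first else None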

Stick a fork in it

BitTorrent's first mature release is up. There is also up-to-date documentation of the finalized protocol.

Defcon

I'm going to be at Defcon this year. In addition to pumping BitTorrent, I'm going to have the CodeCon 2003 Call For Papers out.

Nilsimsa

Nilsimsa has an interesting problem: given a bunch of bit vectors, figure out which ones have a 'very close' neighbor under the Hamming distance (the number of bits in which they differ).

I figured out the following very odd randomized algorithm for this -

First, we divide the nodes into n ** (1/5) buckets of equal size. We will find the nearest neighbor of each node in its own bucket and every other bucket, and the minimum of those is its nearest neighbor globally. (This algorithm often misses a node's true nearest neighbor when none of its neighbors happen to be particularly close, but if a node has a very close neighbor it will generally be found.)

For each bucket, we build a data structure in which each node has a 'neighborhood' of nodes it's bilaterally connected to. We define 'walking' the data structure as the process of finding a likely nearest neighbor to node x by starting at an arbitrary node, calculating the distance from all of that node's neighbors and their neighbors to x, hopping to the nearest of those, and repeating until we can't get any closer. In practice, walks are never more than a few hops. This data structure can be built inductively by, for each new node, walking from random different existing nodes to the new one and adding that node to the neighbor list repeatedly until there probably aren't any more neighbors of the new node to be found. This will result in a neighborhood of size about the fourth root of the number of nodes in the bucket (determined empirically).

Note that walking is somewhat nondeterministic, and doesn't always return the best value. It's a good idea to walk two or three times, to make it unlikely that a very close neighbor will be overlooked.
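
A sketch of the walk, with bit vectors represented as Python ints and neighbors mapping each node to its neighborhood list (an assumption about the data layout, not MemBot's or Nilsimsa's actual code):

    def hamming(a, b):
        # Number of bit positions in which the two vectors differ.
        return bin(a ^ b).count('1')

    def walk(neighbors, start, target):
        # Greedy walk: look at the current node's neighbors and their neighbors,
        # hop to whichever is closest to target, and stop when no hop gets closer.
        current, best = start, hamming(start, target)
        improved = True
        while improved:
            improved = False
            candidates = set(neighbors[current])
            for n in neighbors[current]:
                candidates.update(neighbors[n])
            for c in candidates:
                d = hamming(c, target)
                if d < best:
                    current, best, improved = c, d, True
        return current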

I believe this algorithm uses memory n ** (6/5) and runtime n ** (8/5), which seems reasonably practical.

Memory used is (number of buckets) * sizeof(data structure per bucket) = n ** (1/5) * (n ** (4/5)) ** (5/4) == n ** (6/5)

Time used for building data structures is (number of buckets) * (time used per bucket) == n ** (1/5) * (n ** (4/5)) ** (7/4) == n ** (8/5)

Time for building is computed by x ** (1/2) per walk, times x ** (1/4) for neighborhood size, times x for the number of nodes, hence the 7/4 exponent.

Time for finding neighbors is (number of nodes) * (number of buckets) * (walk time) == n * n ** (1/5) * n ** (2/5) == n ** (8/5)

Load Testing

BitTorrent continues to shrug off every load test we throw at it. I thought a 700 meg file would get more concurrent downloaders, but instead it's just resulted in more people cancelling before the download completes. The 140 meg file had an 80% completion rate; the new one is more like 35%.

I'd really like to get a load test with 200+ concurrent downloaders since that makes the connectivity graph sparse rather than densely connected, but it's looking very difficult to make that happen. Earlier versions got that kind of load and didn't have artifacts related to large networks, so I may just give up and declare it final without another big deployment. The network is manually constructed to be random, rather than following a power law, so it's unlikely for anything weird to happen. Still, another test would be nice.

In any case, recent tests have gone unbelievably smoothly. The only artifact of note is that when the load suddenly jumps two orders of magnitude it takes a few minutes to get warmed up, which is hardly alarming or even surprising.

Diary Ratings

Current diary ratings seem to conflate two concepts - your estimated quality of a diary, and your confidence in that estimation. That said, I'm impressed with how well the numbers reflect my own opinions.

Diaries which talk about the author's life seem to rate worse (at least for me) than ones which emphasize technical topics. I wonder if this is a stylistic difference of opinion, and whether other people's estimated values show the same phenomenon or possibly even the opposite one.

Ugly Hacks

wardv just got me to throw up on my keyboard. That's more a possible new source of security problems than a useful technique. I wonder if parseargs has buffer overflows.

Picking on Wolfram

Wolfram presents a turing machine which he claims is universal because it emulates rule 110. That assertion is just plain wrong. The turing machine in question doesn't halt in response to a computation ending; it merely stabilizes - a clear violation of the ground rules for how turing machine encodings work. It also requires an infinite pattern on the starting tape, rather than a finite number of cells representing the problem encoding and the rest in a uniform 'blank' state. This is an equally clear, although less egregious, violation.

Dillo

Dillo is a lightning fast, well written browser with a small, maintainable codebase. Merit, sadly, is not the primary determiner of funding.

Free Manure

Mozilla and FreeS/WAN are both projects which have by their existence discouraged people from working on alternatives but barely shipped themselves. Criticism can't be brushed off with the excuse 'I'm not getting paid for this, buzz off'. Often we'd be better off without free contributions.

People are too politic about starting or contributing to new free software projects which compete with an existing one. Leadership incompetence and unpleasantness are valid reasons for starting an alternative, and we shouldn't be afraid of giving the real reasons for such splits. Pretending problems don't exist doesn't make them better, and the nice part of not getting paid is the emperor can't fire you for criticising his wardrobe.

Load Testing

BitTorrent shrugged off another load test. We need to try distributing a gigabyte-size file.

Only four lines of code had to be changed after this deployment. It looks like 3.0 is really going to happen.

18 Jul 2002 (updated 7 Jul 2003 at 16:27 UTC) »
A New Kind Of Plagiarism

Wolfram sued to keep the proof, written by someone else, of the keystone result in his book from being published before the book came out!

I think Wolfram's understanding of universality is weak. He completely glosses over the issues of encoding, which would be excusable for the purposes of exposition, but he commits a horrible technical gaffe created by that confusion. Specifically, he claims that because rule 110 is universal, there must be a straightforward encoding in it to compute a single generation of any of the other cellular automata. This is simply untrue, given the very non-straightforward encoding used in the proof of universality. Whether there is a proof of universality which uses a straightforward encoding is unclear.

Wolfram is clearly violating the spirit, if not the letter, of the rules of attribution, and his innovative use of the legal system to do so has chilling consequences.

New Axioms

I think this posting says there is now a known theorem which can be expressed using basic arithmetic but requires an axiom even stronger than basic set theory to prove. Fascinating.

Pipelining

Jeff Darcy correctly reconstructs our earlier conversation about pipelining. Pipelining is an important concept in networking, so I'll explain it here.

Let's say you have some data being transferred over a TCP connection in pieces. The simplest way is for the sending side to spew everything, but that leaves no control on the receiving end other than to close the connection completely. In many protocols, the receiving side must explicitly ask for a piece before it is sent. The most straightforward implementation of this is for the receiving side to request a single piece, and thereafter request a new piece every time the last requested one arrives.

Unfortunately, that approach causes the sending side to pause after every piece sent, thus reducing throughput. Turning off Nagle will keep the congestion control damage from being too bad, but it still won't get rid of the pause. The Right Way is for the receiving side to send out requests for several things, then send out an additional request every time a piece comes in. This technique is called pipelining, and it ensures that the sending side always has something in queue.
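
A schematic sketch of a pipelined receiver (the names and the depth of five are illustrative, not BitTorrent's actual protocol code):

    PIPELINE_DEPTH = 5   # how many requests to keep outstanding at once

    class PipelinedRequester:
        def __init__(self, send_request, wanted_pieces):
            self.send_request = send_request     # callback that asks the peer for a piece
            self.pending = list(wanted_pieces)   # pieces not yet requested
            self.outstanding = 0

        def start(self):
            # Fill the pipeline so the sender always has something queued.
            while self.outstanding < PIPELINE_DEPTH and self.pending:
                self.send_request(self.pending.pop(0))
                self.outstanding += 1

        def piece_arrived(self, piece):
            # Each arrival frees a slot; immediately top the pipeline back up.
            self.outstanding -= 1
            self.start()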

Distributed Proofs

Some notes in my ongoing discussion of automated proof checkers with Raph -

Math papers generally contain coherent self-contained proofs, which result in one or more theorems. The self-contained part of the proof is straightforward, the subtlety comes in with what the proof imports and exports.

Exported theorems should be separately referenceable, since there is often more than one proof of a theorem.

I'm not sure whether an axiomatic system, such as Peano's axioms, needs to be handled any differently from a theorem. Certainly, Peano arithmetic can be constructed in a multitude of ways using ZFC. Whether to believe a theorem on its face, or require a proof based on something else, should be left up to the client proof checker, which is free to believe whatever it wants.

Namespacing of imported theorems is a subtle issue. Clearly clients must notice when a proof tries to use theorems about commutative groups on axioms which don't state commutativity. Also, it is important to be able to combine theorems which are both based on the axioms of arithmetic, but refer to cosmetically different statements of them. Requiring everyone to use the same encodings would likely prove unworkable in practice.

