Older blog entries for Bram (starting at number 34)

I'm interested in getting into hedge funds, so I'm reading up on finance. Currently I'm reading 'Options, Futures, and Other Derivatives' by John Hull. I haven't finished it yet, but I'm going to post a review anyway.

This is the classic book for beginning traders to learn the basics from. Its structure is what you would expect - very simple, clear prose, and comprehensive coverage of all the basics.

My problem with reading this book is that it keeps making me fall asleep. After a few pages I have to take a nap. After enough naps I can't sleep any more and go into a trance-like dissociated state with the words passing through my brain with no comprehension. All the formula derivations in this book are spelled out in painstaking detail, with whole sections devoted to formulas anyone with a decent mathematical background could re-derive in a few minutes. At the same time, key bits of math are dropped out, presumably because they're too difficult. According to the reviews on Amazon those derivations are actually misrepresented, and the formulas given are approximate even assuming that the underlying model is perfect. I think all readers would be better served by a treatment which simply gave the formulas, without burying them at the end of tedious derivations which are either too hard or too boring for any given reader.

I'll post a review of another, much more promising book I'm reading in the future.

movement: I suggest you view the recentlog with a threshold greater than the default of 3, and give diaries which are better than trolls but not what you want to read ratings somewhere between 3 and whatever threshold you set.

Circuit Complexity

Here's a page on circuit complexity. If only it used programming terminology instead of math notation I'd probably understand all of it.

I have a conjecture that given 2*k lists of n strings, each of them a small multiple of lg(n) bits long, the question of whether it's possible to select exactly one string from each list and have their xor be all zeros has O(n^(2*k)) circuit complexity and O(n^k) real complexity, give or take a few log(n)s. The faster algorithm is to make a set of all xors over the first k lists, and do look-ups in that set for all xors over the second k lists. This is a very well-behaved tradeoff between time and memory - it exactly matches the lower bound implied by the circuit complexity.
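
Here's a rough Python sketch of the fast direction, treating each string as an int and the input as a plain list of lists - all illustrative choices, not claims about constants:

    from itertools import product

    def xor_selection_exists(lists):
        # lists: 2*k lists of n equal-length bit strings, encoded as ints
        k = len(lists) // 2
        # all xors of one pick from each of the first k lists: ~n^k memory
        first_half = {0}
        for lst in lists[:k]:
            first_half = {x ^ s for s in lst for x in first_half}
        # one set look-up per pick from the second k lists: ~n^k time
        for picks in product(*lists[k:]):
            x = 0
            for s in picks:
                x ^= s
            if x in first_half:
                return True  # the two halves cancel, giving an all-zeros xor
        return False

Building the set and scanning the second half are each about n^k operations, which is exactly the time/memory tradeoff described above.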

If proven, this conjecture would imply that P != NP, which suggests that it's very hard to prove, but it does have a very approachable feel. I think the people searching for proofs that P = NP is undecidable are being far too pessimistic.

Computers

My new computer is now working with the pre-installed Windows XP. XP detects all the peripherals properly, and has a usable command line, so I'll probably leave it this way. The user interface differences are interesting. XP now has lots of mouseover highlighting, which is nice, but still comes with far too little pre-installed, and the pre-installed Windows Media Player and MusicMatch are garbage, so I'm using Winamp.

Sadly, most Windows application UIs have gotten worse since I last used them. mIRC's new server dialog should be a huge improvement but instead makes logging into multiple servers a UI disaster, and it still doesn't have a decent custom widget for displaying chat. X-Chat is way ahead of it in many regards, but mIRC does have the very nice feature of grouping the channels you're on by server and alphabetizing them; the as-opened ordering in X-Chat is ridiculous. Mozilla should open new tabs in the position after the current tab instead of at the end.

Paint Shop Pro's usability has gone retrograde in a major way. They seem to be trying to compete with Photoshop, instead of being happy in the quick and easy but serviceable niche for people like me who have no aspirations of becoming graphic artists.

All the ftp clients have become almost unusable. The only one left which lets you just say 'put this file here' is, ironically, WinSCP, presumably because they haven't spent enough time destroying its UI yet.

Command line CVS, surprise surprise, didn't seem to work at all, and there aren't even any decent help pages left for it. TortoiseCVS is quite good though; I'm happy with it.

But all of those pale in importance next to stability problems. Under Windows, peripherals generally just work, which is wonderful. But CD auto-play is flaky and refuses to work even when configured properly. And there's some bug in Windows Explorer which occasionally turns the whole screen gray, which I have to control-alt-delete to get out of. Qwerty/Dvorak conversion continues to be a complete disaster. The converter is even harder to find than before, and still changes each process individually instead of system-wide, a behavior I can't even imagine wanting as a special feature, much less as the default.

Thankfully I'm behind a firewall and never going to use outlook on this machine, so the biggest security concerns aren't a big deal, but I do worry about the multiple chat clients I've got running. Security's a big problem on everything though.

Conclusion - I still hate computers. OS wars continue to be a competition of who has fewer unacceptable deficiencies.

I can't complain about what the hardware people have done though - the new machine redraws graphics almost instantly, even in Mozilla, and that's by far the most practical difference for me.

1 Oct 2002 (updated 1 Oct 2002 at 03:26 UTC)
mwh: Done. I also put in __rsub__ and added some asserts. I've separated out the rational class and put it on my home page.
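
For anyone wondering what __rsub__ is for: it's the hook Python calls for int - rational, where the int on the left doesn't know how to do the subtraction. A toy sketch, not the actual class (which is on my home page):

    class Rational:
        def __init__(self, num, den):
            assert den != 0
            self.num, self.den = num, den

        def __sub__(self, other):
            # Rational - int
            return Rational(self.num - other * self.den, self.den)

        def __rsub__(self, other):
            # int - Rational, called when the int gives up
            return Rational(other * self.den - self.num, self.den)

        def __repr__(self):
            return '%d/%d' % (self.num, self.den)

    print(3 - Rational(1, 2))  # prints 5/2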

ciphergoth: The argument you put forth applies to some types of trust metrics, but not the one I gave.

Specifically, I think your argument applies to trust metrics which first compute a weight for each attester, then combine their judgements by simple averaging. The second step of the algorithm I gave is considerably more complicated: you imagine doing a random walk around the graph, with each hop going to one of the nodes certified by the current node, weighting the chances of going to each one proportionally to its confidence. The histogram of what this process results in can be averaged in any way you choose; median is probably best.

If an attacker has compromised some nodes, their most effective attack will be to make all those nodes attest to what they want. Adding lots of nodes and certs can only asymptotically approach that amount of influence.

After thinking about it a bit, I realized that sampling works quite well as a method of computing confidences. Randomized algorithms show up in the most surprising places.
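
To make that concrete, here's a sampling sketch of the random walk described above. The graph format (certs maps each node to the nodes it certifies, ratings maps attesters to their ratings, confidences holds the per-node weights) and the walk-length cap are my own illustrative choices:

    import random

    def weighted_choice(pairs):
        # pairs: list of (node, weight) with positive weights
        r = random.random() * sum(w for _, w in pairs)
        for node, w in pairs:
            r -= w
            if r <= 0:
                return node
        return pairs[-1][0]

    def rating_histogram(certs, confidences, ratings, me, samples=10000):
        histogram = {}
        for _ in range(samples):
            node = me
            for _hop in range(100):  # cap walk length so the sketch terminates
                if node in ratings:
                    histogram[ratings[node]] = histogram.get(ratings[node], 0) + 1
                    break
                pairs = [(n, confidences[n]) for n in certs[node] if confidences[n] > 0]
                if not pairs:
                    break  # dead end
                node = weighted_choice(pairs)
        return histogram

The median over the resulting histogram is then the answer.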

lukeg's post prompted me to spend several hours yesterday doing a Python implementation of the nine nines problem. My solution contains a much better rational number class than any of the ones I found doing a quick web search.

With some help from demoncrat I came up with a much better weightings algorithm than the one I gave yesterday.

Simply set the weight of each node to the probability that there will be an uncompromised path from that node to an attester. This technique obviously meets the increasing confidence criterion I gave, and is much simpler and easier to analyze. I also find it much more intuitively compelling.

Unfortunately it might not be possible to efficiently compute, or even approximate, said probability. I hope it is.

A probability of compromise of 1/2 seems to square with my own judgements of graphs when I imagine the nodes as people I know personally.
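
Estimating the probability by sampling is at least easy, which fits the remark above about randomized algorithms. A sketch, with the 1/2 compromise probability baked in and the same illustrative graph format as the earlier sketches; I'm assuming the attester itself also has to survive:

    import random

    def weight_estimate(certs, attesters, start, samples=10000, p=0.5):
        # estimate the probability that a path of uncompromised nodes
        # runs from 'start' to an attester, every node other than
        # 'start' independently compromised with probability p
        hits = 0
        for _ in range(samples):
            alive = {n for n in certs if n == start or random.random() > p}
            frontier, seen = [start], {start}
            while frontier:
                cur = frontier.pop()  # depth-first search over surviving nodes
                if cur in attesters:
                    hits += 1
                    break
                for n in certs[cur]:
                    if n in alive and n not in seen:
                        seen.add(n)
                        frontier.append(n)
        return hits / samples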

I spoke with raph for a while yesterday about trust metric criteria. It turns out all the non-definitional criteria I gave yesterday can be consolidated into a single one.

Say we're comparing graphs P and Q. For both of them, we make a list of all paths from me to an attester which don't visit the same node twice and don't pass through an attester in the middle. If the resulting lists are isomorphic, then their confidences must be the same. If a subset of P's list is isomorphic to Q's list with possibly some hops added to some entries, then P's confidence must be greater than Q's.

For example, if I cert A, A certs X, and X attests, then that must have a higher confidence than if I cert A, A certs B, B certs X, and X attests, since (A, B, X) can be transformed into (A, X) by removing the B.

This criterion is clearly reasonable - in such circumstances for every attack on A there is a strictly easier attack on B, hence A's confidence must be higher than B's. Unfortunately this criterion isn't very restrictive - both shortest path and max independent paths meet it - hence it is unlikely to directly lead to a formula.

Failing this criterion leads to an interesting type of attack. If an attacker can induce good nodes to cert other good nodes, then he can reduce their influence. This is a potentially insidious kind of attack. There is nothing obviously wrong after the attack has taken place, and even if there is, what are you going to do about it, remove the certs?

Another major problem with violating this criterion is that it strongly discourages nodes from certing others under some circumstances, since doing so could potentially reduce the rating strength of members of one's own clique quite substantially.

A somewhat related problem occurring in the random walk model is that it tends to look a lot like shortest path. My rating confidences are so low that if just about anyone I've directly certified gives a rating, that will drown out all the others. This is arguably reasonable behavior, but not what should be happening when the chance of giving up at each hop is as ridiculously low as .05.

I've devised an algorithm which appears to get this problem under control. Consider the advogato diary ratings system as it exists today. Unlike the simplified model I gave yesterday, this yields not just a confidence value but a rating as well.

We will give each node a weight, in the range [0, 1]. When doing a random walk, we will pick the next hop with probability proportional to its weight. If a node certifies two others with weights a and b, the probability that the next hop goes to the first one is a / (a + b) and to the second one is b / (a + b). A normal random walk corresponds to all nodes having a weight of 1.

We will do the algorithm in two parts. In the first part, we reduce the weight of each node to the probability that a random walk starting from there won't give up. This is unfortunately nonlinear, since the probability of giving up is based on the weights, but now the weights are based on the probability of giving up. Fortunately a very good approximation algorithm exists - we simply calculate the probabilities given the current weights, then recalculate again with the new weights, and repeat several times. I think this will consistently give very good values after only four iterations.

Note that all dead ends will now have probability zero, and all near dead ends will have very low probability. Also, longer paths will have lower weight than shorter paths.

Next we compute the diary rating with the random walk method, using the computed weights and no probability of giving up at each hop. The justification for having no chance of giving up is that longer paths were already reduced in weight by the first step. A special case is when all weights are zero, in which case we have no rating.
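
A Python sketch of the first part, using the four iterations suggested above and the same illustrative graph format as before (attesters is the set of nodes that attest, and .05 is the current give-up chance):

    GIVE_UP = 0.05

    def compute_weights(certs, attesters, iterations=4):
        weights = dict.fromkeys(certs, 1.0)
        for _ in range(iterations):
            new = {}
            for node in certs:
                total = sum(weights[n] for n in certs[node])
                if node in attesters:
                    new[node] = 1.0  # the walk halts successfully here
                elif total == 0:
                    new[node] = 0.0  # dead end: the walk always gives up
                else:
                    # survive the give-up coin, hop to n with chance
                    # weights[n] / total, then don't give up from n, which
                    # this iteration approximates by n's current weight
                    new[node] = (1 - GIVE_UP) * sum(
                        weights[n] ** 2 for n in certs[node]) / total
            weights = new
        return weights

The second part is then the sampling walk from the earlier histogram sketch, run with these weights and the give-up chance removed; if every node I cert comes out with weight zero, there's no rating.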

Although it gets closer, this technique still doesn't meet my earlier criterion. For example, if I cert A, A certs B, B certs C, and C attests, then adding a B cert of A reduces A's weight. I think this phenomenon isn't all that big a deal in practice. Fixing it would probably have to involve making paths not visit the same node twice, and anything involving that is computationally problematic - for example, simply calculating the number of paths between two nodes which don't visit the same node twice appears very difficult.

Trust Metric Criteria

Confidence values in diary ratings are currently surprisingly low. It turns out that this is due to a serious deficiency in the way diary ratings are calculated, which I will now explain.

Consider the case of a single statement. We have a set of nodes, one of which is 'me', and a directed graph of their certifications. Some nodes 'attest to' the statement in question. We will calculate a confidence level of the statement in the range [0, 1] for each node, with special interest in our own confidence. Diary rankings are more complicated, but this simplified model is sufficient for a full exposition of the current problems.

The current advogato way of calculating confidence is as follows. Start with yourself, and repeatedly hop to a new position by randomly selecting one of the nodes the current position certifies. If you get to a node which attests to the statement, halt. If you get to a dead end, give up. Before each hop, have a 5% chance of giving up. Your confidence is the fraction of the time this procedure doesn't give up.
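
In code, the current calculation amounts to something like this (a Monte Carlo version, with the same illustrative graph format as the other sketches):

    import random

    def confidence(certs, attesters, me, samples=10000):
        successes = 0
        for _ in range(samples):
            node = me
            while node not in attesters:
                # dead end, or the 5% per-hop chance of giving up
                if not certs[node] or random.random() < 0.05:
                    break
                node = random.choice(certs[node])
            else:
                successes += 1
        return successes / samples

On a graph where I cert a single attester this comes out near .95, which sets up the failure below.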

While this technique seems reasonable on its surface, it has a serious deficiency.

the adding certifications criterion - Adding a certification to any graph must not reduce your confidence.

Advogato fails this criterion in even the simplest case. Consider the case where we only cert one other node, and that node attests to the statement. Our confidence will be .95. If we add a cert to a node which doesn't attest to the statement and doesn't cert anyone else, our confidence falls by a factor of 2, to .475, since half the walks now hop to the dead end. This is clearly undesirable behavior.

Here are some more criteria, some of which advogato already passes, but all of which are clearly desirable and none of which (I think) imply each other -

adding nodes - If nodes and certs are added in such a way that you have a path along new nodes which ends in a new node which attests to the statement, your confidence must go up. (This eliminates another technique.)

total strangers - If a node you have no path to certifies someone, that must not change your confidence.

extending chain - If you only have one cert and do not personally attest to the statement, your confidence must be less than that of the person you certed.

skepticism - If you do not personally attest to the statement your confidence must be less than 1.

terminators - If a node attests to the statement, additional certs from it to other nodes must not raise your confidence level.

redundant backlinks - If all paths from you to node B must go through node A, then B certifying A must not increase your confidence.

unreachability - If you have no path to a node which attests to the statement, your confidence must be zero.

full attestation - All nodes which attest have confidence one.

side show - If all paths through node B which arrive at a node which attests must pass through some node A both before and after B, then B's certifications must have no effect on your confidence.

That's all I can think of for now, there may be others. Ideally, we would like a set of criteria such that for any two graphs we could unambiguously prove which of them should yield a higher confidence.

Notify Lists

Raph wonders about notify lists. I've spent a lot of time thinking about algorithms for this, but haven't talked about it much, since it turns out that the simple solutions are near optimal.

Unrelated to the technical messaging aspects, I'd like to point out that the most logical form of addressing for instant messaging is person@machine. Reusing email addresses makes the system both easy to use and viral, and doesn't introduce yet another identifier for users to remember and manage.

There are two simple approaches. The first is completely centralized. All users report to a single central machine when they log in, and get data back when their friends come online. This system is very efficient, but it has a single point of failure, although in practice a single site can be made quite reliable. It also doesn't scale very well, although the number of messages involved in online status reports is so small it might hardly matter.

The second basic approach is for each user to have a 'home base' they log into, and for home bases to report interest and online status to each other. This approach is very reliable and scales very nicely, but is inefficient under some circumstances. If a typical user has thousands of friends but only a dozen are online at once, lots of unnecessary interest messages get sent. For notify lists this isn't a big deal, since a large fraction of users are online at all times, and the total number of messages is pretty small anyway, but there might be similar applications where it's a real issue.

Before explaining the hybrid solution, I should explain the inter-home protocol.

There is some reliable ordered transport layer, not necessarily TCP. On top of that, there are four messages, all of them idempotent -

  • I'm interested in user X.

  • I'm not interested in user X.

  • User X is online.

  • User X is not online.

The one subtlety is that if one side changes from not interested to interested, the other side must report online status (not necessarily instantly though, as we'll see below).
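
Here's a toy Python sketch of a home base speaking this protocol. The Home class, the send callback, and the message names are all made up for illustration; the transport is assumed reliable and ordered, per the above:

    INTERESTED, NOT_INTERESTED, ONLINE, OFFLINE = range(4)

    class Home:
        def __init__(self, send):
            self.send = send      # send(peer, message, user)
            self.online = set()   # local users currently online
            self.interest = {}    # user -> set of peers interested in them

        def receive(self, peer, message, user):
            peers = self.interest.setdefault(user, set())
            if message == INTERESTED:
                if peer not in peers:
                    peers.add(peer)
                    # the subtlety: a not-interested -> interested
                    # transition obligates us to report current status
                    status = ONLINE if user in self.online else OFFLINE
                    self.send(peer, status, user)
            elif message == NOT_INTERESTED:
                peers.discard(peer)  # idempotent, like all four messages

        def local_status(self, user, is_online):
            # called when a local user logs in or out
            if is_online:
                self.online.add(user)
            else:
                self.online.discard(user)
            status = ONLINE if is_online else OFFLINE
            for peer in self.interest.get(user, ()):
                self.send(peer, status, user)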

Let's look at what happens if Alice is in Bob's notify list and Alice comes online first. Messages are sent as follows -

  1. Alice tells her home she's online.

  2. Bob tells his home he's online, and sends it an 'interested in Alice'.

  3. Bob's home sends an 'interested in Alice' to Alice's home.

  4. Alice's home sends an 'Alice is online' to Bob's home.

  5. Bob's home sends an 'Alice is online' to Bob.

Now let's look at what happens if Bob comes online first -

  1. Bob comes online, tells his home he's online and sends it an 'interested in Alice'.

  2. Bob's home sends an 'interested in Alice' to Alice's home.

  3. Alice's home sends an 'Alice is not online' to Bob's home.

  4. Bob's home sends an 'Alice is not online' to Bob.

  5. Alice tells her home she's online.

  6. Alice's home sends Bob's home an 'Alice is online'.

  7. Bob's home sends Bob an 'Alice is online'.

Of course, someone else using Bob's home might also be interested in Alice, in which case several of the messages in the above dialogue would be unnecessary. Also, as an optimization Bob's home may remember his notify list so it doesn't have to be re-uploaded every time he comes online.

The hybrid solution, which has some of the benefits of both of the above methods, is to introduce 'collators' which handle the information for a group of home machines. Collators speak the inter-home protocol to each other, and get some of the message reduction benefit of doing everything with a single centralized server.

Let's look at how the message flow works if Alice is in Bob's notify list and is already online when Bob comes online -

  1. Bob tells his home base he's online, and sends an 'interested in Alice'. If Bob's home already has a user who's online and has Alice in their notify list (unlikely), then Bob's home responds immediately and the protocol stops here.

  2. Bob's home sends an 'interested in Alice' to Bob's home's collator. If another user whose home is using this collator is interested in Alice (likely), then this collator responds immediately, otherwise the protocol continues as follows.

  3. Bob's home's collator sends an 'interested in Alice' to Alice's home's collator. If a user on another collator is interested in Alice (very likely), then this collator responds immediately, otherwise the next two steps are necessary.

  4. Alice's home's collator sends an 'interested in Alice' to Alice's home.

  5. Alice's home sends an 'Alice is online' to Alice's home's collator.

  6. Alice's home's collator sends an 'Alice is online' to Bob's home's collator.

  7. Bob's home's collator sends an 'Alice is online' to Bob's home.

  8. Bob's home sends an 'Alice is online' to Bob.

I've glossed over the details of how collators are found, how homes set and change what their collators are, and how failures on the messaging level are reported, but all those problems can be solved in reasonably straightforward ways.

Leave it to advogato to have a very sophisticated attack launched for testing purposes.

There are two issues at play here. First of all, all arbitrary text should be escaped before being displayed in HTML. Nothing particularly deep or difficult about that, although advogato text processing is borked in several ways - repeatedly editing old diary entries results in lots of <p> <p> <p>.

The other issue is that for many web sites it's fairly simple to trick a user into clicking on a link which causes something nasty to happen to their account. Iframes just exacerbate this problem; they don't create it. A straightforward solution is that for every page which has links with side effects, a one-time use string is randomly generated and put into the links. When the action is performed, the system checks that the string is valid and (very important!) that it was generated for the current user and action. This also stops accidental double-posting, since a second usage of the same string can be treated as a modification instead of a new post.
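
A minimal sketch of the one-time string scheme in Python - the storage, the names, and the use of the secrets module are my own assumptions:

    import secrets

    pending = {}  # token -> (user, action) it was generated for

    def make_token(user, action):
        token = secrets.token_hex(16)  # unguessable one-time string
        pending[token] = (user, action)
        return token  # embed this in the link on the generated page

    def check_token(token, user, action):
        # valid only once, and only for the user and action it was
        # minted for (the "very important!" check)
        return pending.pop(token, None) == (user, action)

A second use of the same token comes back false, which is the hook for treating it as a modification instead of a new post.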

Unfortunately, this isn't just an issue on advogato, it's an issue on almost all web sites. Advogato's second problem of not escaping makes the attack worse by making it possible for it to be viral, but the attack is there nonetheless.

Props to whoever came up with and implemented the attack. That was very clever.

17 Sep 2002 (updated 17 Sep 2002 at 06:28 UTC)
dyork: Most of the furniture I've put together recently has been assembled with Allen wrenches - the screws have a hexagonal socket. I'm guessing they have less chance of shearing when tightening, since they have six contact points instead of four, and each of them has more material behind it. It also might have some advantage related to glide planes, but I'm very hazy on that.

Torx heads look like they have even less chance of shearing, and fewer pointy bits to wear down. Amusingly, Torx is a trademarked name, and everything puts (tm) after it, but there doesn't appear to be a generic term.

I just spent more time than I ever should have googling for screw information.

