Older blog entries for raph (starting at number 294)


The 7.32 beta release is now official, with Windows and Macintosh builds as well. Hopefully, this means that 8.0 is imminent.

Thanks to everyone who's been helping with this, especially my colleagues at Artifex Software. It feels really good to be part of a team.


If you're in the US, please vote today. Especially if you were inclined not to vote, please consider voting for your Green candidates. The California governor's race is particularly ugly. From the major parties, we have an incumbent who's proved excellent at finding the line between merely aggressive fundraising and outright bribery; and a challenger whose inept campaign is hopefully not a predictor of his performance in office should he win, not to mention his closeness to fraudulent practices in his family firm.

By contrast, Pete Camejo is somebody you can vote for without having to hold your nose. A strong showing for him will send an important message that many people are not satisfied with the status quo and want something better. In addition, he has a lot of good experience with finance and ethical issues in particular, which suggests he would make a good governor in the extraordinarily unlikely event he wins.

Davis is sufficiently ahead in the polls that you should feel comfortable voting for Camejo even if the thought of a Republican in the governor's office disturbs you. Even so, I'm more inclined to vote for the best candidate than the lesser of two evils in this election.

Please consider your Green candidates in local races as well. Green candidates can and do win offices at the local level, and these are often positions where they can do some good.

But no matter what your political beliefs, vote. It's how our democracy, however flawed, works.

Proofs and Types

chalst: Hrm. Unfortunately, Girard's book is just a little over my current threshold for accessibility. At this point, I really prefer Web-available resources, as well, largely because I am doing this as a weblog and want to encourage other people to follow along.

I know I've just labeled myself a lazy student, but I'm wondering if you could give me some hints nonetheless. You mention Z_2-style and Coq-style consequence relations. What is the big difference between them? I'm not used to thinking about logical systems in terms of the consequence relation, but do know what the term means. Of course, I know one difference is that Coq is intuitionistic, but I think that's not the main thing you're talking about.

I'm also having a little trouble grasping the idea of a lambda calculus with the same strength as Z_2. Is the trick to write functions as finite strings, analogous to a computer program to compute the function? This way, you quantify over computable functions rather than the imaginable universe.

PS: chalst: Michael Norrish has some notes he'd like to send you, but doesn't have a working email address.

I don't feel like I have much extra time to write, but I've been thinking of doing a longer essay on some of the controversies in mathematics, especially as they're relevant to proof systems. The most important ones are logical strength (from primitive recursion to ZFC, with at least Z_2 and HOL as way-points, and specialized needs for systems beyond both ends), types vs. no types, and partial functions vs. total functions. In the "moderately important" class I'd put classical vs. intuitionistic logic. The philosophical issues don't do much for me, and it seems like it's almost always possible to work around the intuitionism when you need to. Finally, in the least important category, I'd put the choice of primitive operators and axioms for the propositional calculus. They're all quite equivalent, but that doesn't stop people from arguing for their favorites.

Hmm, I wonder whether this is a reasonably complete list, or if there's another big one I'm overlooking.

Trust metrics for spam-free blog comments

Dave Winer talks about a large-scale single sign-on system for posting comments and replies in blogs. In this entry, he mostly talks about the convenience factor of not having to type in passwords and other information all the time.

At the same time, there's a lot more awareness of the vulnerability of comment and reply mechanisms to spam, trolls, and other forms of abuse. So far, the blog world has been quite civilized, but as spammers become tuned into the prestige and popularity of blog placement (not to mention their role in boosting Googlejuice), this will undoubtedly begin to change. Heather likens it to the Internet pre- and post-AOL.

I just realized today that both problems can be solved quite easily using the trust metric technology that powers the diary rating systems in Advogato. Blog servers, acting as clients to this new service, would get a cookie from the user's browser (probably using Passport-style techniques), and get an auth token back, also with a one-to-ten ranking, if so desired. This way, anyone who's hooked into the blog world can post comments and replies, but abuse would be limited.

In a dream world, we would have a decentralized peer-to-peer identity service based on a cryptographically sound protocol, but in the meantime there are a lot of people implementing blogs with very crude authentication. I think it will be possible to hack something together quickly.

Once the infrastructure is in place, I can see it being useful for a bunch of other things, including authenticating backlinks, which are also spam-vulnerable. And, once the social net of bloggers starts becoming reified into a net-accessible graph, the stage is set for evolution to a more decentralized system. It seems like a good way to get rapid adoption of the trust metric ideas.

I discussed the idea with Bram and others on #p2p-hackers, and he had some good ideas about how to make a simple challenge-response protocol that would be both easy to implement over the existing infrastructure and reasonably secure against the most common attacks such as password sniffing. He'll probably want to post about that himself.
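For the curious, here's roughly the shape I have in mind, as a quick Python sketch (all names here are hypothetical, and this is my guess at the idea, not Bram's actual design): the server hands out a fresh nonce, and the client answers with an HMAC over it, so the shared secret never crosses the wire and a sniffed response can't be replayed against a later challenge.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Server issues a fresh random nonce for each authentication attempt."""
    return secrets.token_hex(16)

def respond(secret: str, challenge: str) -> str:
    """Client proves knowledge of the secret without revealing it."""
    return hmac.new(secret.encode(), challenge.encode(),
                    hashlib.sha256).hexdigest()

def verify(secret: str, challenge: str, response: str) -> bool:
    """Server recomputes the MAC. A sniffer who captured one response
    gains nothing, since the next challenge will be different."""
    expected = respond(secret, challenge)
    return hmac.compare_digest(expected, response)
```

The nice property is that this layers over the existing crude password infrastructure: the blog server just relays the challenge and response between the browser and the identity service.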


A couple of days ago, Alan was trying to gross us out at the dinner table, which is of course a normal six-year-old thing. But, being Alan, he tried to convince us that he had "bee larvae" on his plate.

Max really loves his "puter games" and is quite good at running the computer. In particular, he can usually quit out of a game by himself, which is even more impressive considering the atrocious inconsistency of keybindings in the downloadable Mac OS X games. He seems especially good at reading icons and other visual elements. For example, just today, he got some stickers trying to convey "no sweat" by the international "no" circle overlaid on some water drops. He didn't get it exactly, but we were pretty impressed by his interpretation as "stop raining".

We were talking about blogging yesterday, and together singing fragments of the Kipper theme song, with the words "Kipper the blog".


chalst: Thanks for your highly relevant comments. Let me see if I can answer.

"You don't need to encode first-order terms as singleton second-order variables..." Right. Actually, thanks for reminding me about the need to quantify over things like total functions. I was pretty much only thinking about building them up as terms, for which the proof that the function is total follows directly from the structure. For quantifiers, probably the easiest technique is to put this predicate into an assumption.

"...you may not be respecting the distinction between the set of total functions and the set of functions that SOA can prove total." This is entirely possible. Presumably, the important difference is that you quantify over the total functions, but when you build up from terms, you're restricted to the provably total. I'm not sure yet how this might bite me.

"Finally, you need to check your translation actually captures the properties you are after." Actually, I may take the heretical position that this is not necessary. You may want to allow the translation to have essentially arbitrary properties. Of course, in many cases there will be a meaningful translation back from target language to source language, and it may be worth capturing that formally.

Your point about a rich set of datatypes is well taken. Basically, what I'm trying to figure out is whether such a set of datatypes can be mapped to a simple Z_2 axiomatization using a straightforward definitional scheme. I'm not even sure that growth in the size of propositions matters much: as I said, in most cases, you won't be doing the translation explicitly.

What's the best source to read about these things, especially the polymorphic lambda calculus systems with the same provability strength as Z_2?


I poked around at Google Answers a bit. It seems like a useful service, but I don't see any reason to get excited about it. There are two features I might have hoped for.

First, I don't see that the Answers provided help, or even particularly influence, the results of ordinary searches. This is disappointing, because it seems to me that Google's trust metric has the best chance of working when there's high quality manual input, especially regarding the trusted "seed". It's entirely possible that Google has two completely disjoint sets of people manually reviewing Internet sites (one set for Answers, another for deciding the seed), but that they don't talk to each other. I'm not sure.

The second missing feature is a bit harder to define, but it has to do with whether Answers is a true community. There are glimmerings of this. For example, having all the questions and Answers posted, along with comment posting open to all users, is cool. Many of the questions are interesting, and I can see people hanging out there. Another pro-community sign is that ordinary users and Researchers share the same namespace in the account/login system.

Most importantly, a lot of the Researchers seem to know each other, so there is something of a community there. But users don't really get to take part in it, even when (or perhaps because) they buy in monetarily. Part of the problem may be the way identities are concealed.

If Google (or somebody else) figured out a way to build a real knowledge-seeking community, with money lubricating the system, I think they'd have something big. As it is, they have a reasonable service with a solid-gold brand behind it.

I feel like about 150% of my usable time is spoken for these days. But I am getting through the work. Hopefully, the load will lighten a bit soon.


I just sent a patch to code-review adding 0 and 45 degree angle cases to Well Tempered Screening. This is the last of the 8.0 release-critical features, so hopefully we'll be able to do a release soon. I expect to do a 7.32 beta tomorrow to let people experiment with WTS.

A nice bug image resulted from tonight's debugging. I don't really have an explanation for it. It's supposed to look like this.


I think I can answer the question I posed last time about how to transform second order arithmetic with function applications into a "pure" axiomatization. I'll try to sketch it here.

Every production in the source language grammar has a translation into the target language. In particular, every (numeric) term in the source becomes a singleton set in the target. Constants and variables just become {x} or {c}. The interesting case is function application, which looks something like this ([term] means the translation of source-language term into the target language):

[f(a)] = {y | \exists x. x \in [a] ^ pair(x, y) \in [f]}

We can define pair(x, y) as (x + y) * (x + y) + x; all that matters is that it's injective. There are other choices as well, but that's probably the simplest.
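A quick brute-force sanity check (my own throwaway code, not part of any proof library) that this pairing really is injective, along with its inverse. Note that the superficially similar (x + y) * (x + y) + x * x would not work: it sends both (3, 1) and (0, 5) to 25.

```python
from math import isqrt

def pair(x: int, y: int) -> int:
    """pair(x, y) = (x + y)^2 + x. With s = x + y, the value n
    satisfies s^2 <= n <= s^2 + s < (s + 1)^2, so s and x are
    recoverable and the map is injective."""
    s = x + y
    return s * s + x

def unpair(n: int) -> tuple[int, int]:
    """Invert pair on its range: s = isqrt(n), then x = n - s^2."""
    s = isqrt(n)
    x = n - s * s
    return (x, s - x)

# Brute-force check of injectivity on an initial segment of N x N.
assert len({pair(x, y) for x in range(100) for y in range(100)}) == 100 * 100
assert all(unpair(pair(x, y)) == (x, y)
           for x in range(100) for y in range(100))
```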

In order to translate proofs, you need to establish that every (translated) term really is a singleton, and that every function is total. I think these proofs can be generated straightforwardly, because their structure follows the parse of the source language expressions.

The above assumes we have a definitional framework for set comprehensions. I see two approaches. First, instead of having a comprehension axiom, you could just allow comprehensions to be written, as above, for set-valued terms. The fact that all terms are assumed to exist implies the usual comprehension axiom.

Another approach, more consistent with the usual theory, is to transform set-valued terms into "predicates with a hole" (here, [x := v] foo means the substitution of v for x in foo; I'll gloss over the technicalities of substitution here, which can be quite tricky).

[{x | phi}] = [x := hole]phi
[a \in S] = \exists x. x \in [a] ^ [hole := x] [S]
[S0 = S1] = \forall x. [hole := x] [S0] <-> [hole := x] [S1]

I expect you rarely actually do the transformation into the target language; you'll usually do all the steps in the source language. But having the relationship formalized, I feel, is important. Among other things, it should let you freely intermix theorems over the same fundamental axioms, but with different definitional frameworks. This does not seem to be a feature of most proof systems, which in general treat definitions as new axioms, usually with some meta-level justification that they're conservative.

Why different definitional frameworks? Partial functions may be one good reason. In this world, terms are translated into sets which may be either a singleton or empty. Some things change; for example, \exists x. term is no longer a theorem, as it is in the total world. Even so, when translated into the target language, exactly the same things are provable. So, you should be able to do stuff like build a function in the partial world, prove that it's total, and then use it in the total world with little hassle.
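To make the singleton-or-empty idea concrete, here's a toy Python model (mine alone, not taken from any real formal development): a term denotes a set with at most one element, and application follows the [f(a)] translation sketched above, so applying a partial function outside its domain just yields the empty set rather than being an error.

```python
# A term denotes a frozenset of naturals: a singleton if the term
# exists, empty if it doesn't.

def const(c: int) -> frozenset:
    """A constant term denotes a singleton."""
    return frozenset({c})

UNDEFINED = frozenset()

def apply_fn(f: set, a: frozenset) -> frozenset:
    """f is a set of (input, output) pairs; the result is the set
    {y | exists x. x in a and (x, y) in f}, singleton or empty."""
    return frozenset({y for (x, y) in f if x in a})

# pred is partial: undefined at 0. Terms like pred(pred(1)) come out
# empty instead of being outlawed syntactically.
pred = {(n, n - 1) for n in range(1, 100)}
```

For example, apply_fn(pred, const(5)) is const(4), while apply_fn(pred, const(0)) is UNDEFINED, and undefinedness propagates through nested applications.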


We went trick-or-treating tonight, and it was fun - this year had a more social flavor than previous. We knew a bunch of the people we met, and really visited at two houses. Plus, none of the decorations or costumes scared either of the kids, which I think is a first.

Alan went as a lion-headed dragon, and Max as a "dino-store" (his "s" is still soft). Their cuteness was appreciated by all, and the candy haul was substantial.


Raph Levien is generally quite pleased with his googlism results.

Thinkpad 600 battery

I sent a fairly strongly worded letter to IBM through their feedback mechanism. Here's my favorite excerpt:

Finally, I find this marketing statement galling: "Thanks to IBM's advanced power management systems, the ThinkPad's battery life has earned a powerhouse of a reputation." Perhaps this is true in the sense that Firestone tires on the Ford Pinto have earned a reputation with respect to safety, but I believe it is misleading and should be withdrawn.

blog, Advogato, mod_virgule

I'm really getting a lot out of writing this diary. It's becoming an absolutely integral part of my life. Here's an amusing anecdote: Heather learned that I had fixed the battery in her computer from reading this. My mom also follows my doings by reading this diary, which is important because I do a bad job keeping in touch through phone, mail, etc. (Hi Mom!)

I like the idea of slowly, gradually improving the mod_virgule code. There are a few things I consider important and want to add soon. First on the list is RSS export for diaries, because it seems like this is an increasingly important tool.

A lot of people keep both an Advogato diary and another, presumably for topics other than free software. This is fine (in fact, many bloggers seem to maintain multiple blogs), but I also want to make Advogato more useful for people who want to move in entirely.

So next is a <topic> tag for marking individual topics within an entry. This more-structural markup will be rendered in several ways: in RSS (and maybe OPML) export of diaries, and as outlines when diary entries are rendered as summaries. Finally, I'll put in a scoring mechanism for custom views, so individual topics can be expanded, condensed, or ignored entirely. I'm also toying with ideas for the ontology of such topics: maybe you'll type, say, Books/Science Fiction/Stanislaw Lem/Solaris, and it will render as just Solaris, but have an effect for all the topic scoring filters. Not sure yet.

To people discussing an "ignore" patch: why not just set the diary ranking below 3? This feature really was intended to subsume filtering at the grain of individual accounts.

The "friends" view a la LiveJournal is quite a bit more interesting, especially as that's another aspect of the social network. Two-way links for references to other diary entries would, I think, be another important step in the right direction.

djm: you're quite welcome.

fxn: I haven't looked into it in detail, but it sounds like it's probably a bug in mod_virgule's xmlrpc code. See also this post by Rogers Cadenhead.


One of the topics that I haven't been writing about much is my Quaker faith. When the <topic> mechanism gets implemented, I will feel more free to write about it without worrying that it will throw off the main focus of my diary.

At quarterly meeting, I participated in an interest group on electronic communications. Most of the discussion was about Meeting-sponsored web pages, which are of course an important tool for reaching out to people. The College Park one (that I just linked) is nicely done.

Are blogs also a useful tool for expressing Quaker values and faith? Probably so. A quick Google search doesn't turn up any from the unprogrammed, liberal tradition of my monthly meeting.

I stood up today and gave a message about what I'm finding when reading in the wider blog world about the Iraq conflict. I've found some insightful commentary, but very little of the writing from the liberal viewpoint shows deep critical thinking, and I find some of the more conservative writing deeply repellent. My quoting of Eric Raymond to the effect that our enemies are "to be dealt with as rabid dogs are" elicited an audible gasp, as well it should.


I'm sure Heather and my mom will want to assign a negative score to <topic>Proofs</> :)

chalst: You say "moving between the two representations for Z_2 is pretty trivial (with the right definitional framework)." I take it this is the usual mathematician's meaning for "pretty trivial", which is roughly equivalent to "a small matter of programming".

In any case, arriving at the "right definitional framework" is exactly what I'm after. As you point out, pairing and representing functions as relations are easy. One of the trickier bits is showing that the axioms of reflexivity and transitivity for = work for terms including function application in addition to the primitive terms consisting of variables, 0, 1, +, and *. I'm not sure, but I think this needs to be proved as a metatheorem, which means effectively that it's hardwired into the proof system encoding Z_2.

Is there any place where the transformation between these two representations is written down explicitly? I get the impression that most mathematicians are content to show that you could do the transformation.

I'll reiterate my biggest open question. We agree that a Z_2 with functions as part of the definitional framework is a lot more usable than one expressed purely in the "predicates over naturals" axiomatization. Does there exist an even more usable definitional framework? How would you characterize this? Usability is a pretty slippery concept.

I don't have answers, but I'm starting to get a little bit of a feel for the terrain. I think a very important quantitative test for the expressive power of a proof system is the number of proof steps required to crank through an instruction in a simple CPU-like virtual machine. Over a pure axiomatization such as Z_2 or ZFC, my sense is that three orders of magnitude is about right.

If this expansion factor is suitably small, then a reasonable approach for some things is to run a program. Let's say we've already proved a theorem of the form, "for all runs of program P over input x, the output y satisfies such-and-such predicate(x, y)". Then, for specific input x0, mechanically prove that the output of program P is y0, apply modus ponens, and you've proved that such-and-such predicate(x0, y0) holds.

As I've mentioned before, I expect this kind of "proof strategy" to be useful for things like numeric calculation, parsing, etc.

I have more thoughts on this, but they're still a little unformed. A teaser: theorems of the form calc("2+2") = atoi("4").
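Just to make the teaser concrete, here's a toy Python version of the two functions involved (nothing like a real proof object, of course; the theorem would be about their agreement on particular strings).

```python
# calc evaluates "+"-expressions over decimal literals; atoi reads one
# decimal literal. The teaser theorem calc("2+2") = atoi("4") is then
# a checkable statement about two independently defined functions.

def atoi(s: str) -> int:
    n = 0
    for ch in s:
        assert '0' <= ch <= '9'
        n = 10 * n + (ord(ch) - ord('0'))
    return n

def calc(expr: str) -> int:
    return sum(atoi(part) for part in expr.split('+'))
```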

Random thought

The raw capacity and power of personal computers has been growing at an amazing pace, over an equally impressive span of time. Even so, they're probably still considerably smaller than human brains. Mainstream estimates for the latter are 100 billion neurons, and an average of somewhere around 10,000 inedges for each neuron.

But you hook these personal computers up through the Internet, and all of a sudden the gap seems a lot smaller. If the Internet really wanted to become sentient, it probably could.


bytesplit: I mean you no ill will. However, by now it has become painfully clear that you and Advogato are not a good match for each other. I ask of you, please find another place for your writings.

Thinkpad 600 battery

Heather's got the 600 now. Yet another battery was tanking - the useful life was down to about 10 minutes. So I tried a technique suggested by Javier Valero, and so far it seems to work.

I've put up a page summarizing the ThinkPad 600 battery problem and possible solutions.


I have lots more feedback than I can digest right now. But I will point out Robert Solovay's interesting reference to two flavors of second-order arithmetic. One is the "pure" Z_2 axiomatization with naturals, predicates over naturals (or functions from naturals to bool, if you prefer), and quantifiers over both. The other, with equivalent strength, has functions from naturals to naturals. The latter is probably a lot more directly useful for what I'm trying to do, because new functions can be defined as plain old lambda terms, and application is primitive, i.e. requires no hoops to be jumped through.

I'm more than a bit intrigued, though. Since these systems are indeed equivalent in logical strength, but differ widely in the convenience of their definitional systems, it's clear that choice of definitional framework is very important. How carefully has this space been mapped out, though? Is it possible that there are other definitional tricks that could make the systems even more usable, or is macro expansion plus function definition and application somehow complete in that all other definitional frameworks can be reduced to it tightly?

I'm not sure what such things might look like, but I see glimmerings. For one, I'd imagine that statements of the form "abstract tree T is a parse of string S by grammar G" might be easier to express and prove than in the usual logics. In any case, these statements are exactly the kind of thing I'd expect to be primitive in a human-friendly "source language", and then compiled automatically to a proof "object code", so maybe in the end it doesn't much matter.

Paul Snively has been blogging about proofs as well. I've skimmed his references, but so far am not sure how relevant they are.

I haven't felt much like interacting with anybody the past few days. It's probably the days getting shorter, and it will certainly pass.


The 7.31 beta candidate tarballs are up. Please give them a shot.

I was hoping to get subpixel text antialiasing in this release, but didn't quite manage. Hopefully next time.


I got some very nice email from Robert Solovay, answering many of my questions about second-order arithmetic.

I'm starting to get a handle on definitions. All proof systems have a mechanism for them, but mathematically they tend to be not very interesting, so mathematicians don't talk about them much. Even so, they're clearly important.

In general, definitions create a mapping from a source language to a target language, mostly by expanding out rewrites of the form <symbol> = <expansion>. The target language has just the axioms of the underlying system, like ZFC set theory or Z_2 arithmetic. So the concept of truth or provability is basically the same between the two languages: a statement in the source language is provable iff its expansion is provable in the target language.
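Here's a tiny Python sketch of that expansion process (my own toy, not any particular prover's mechanism): statements are nested tuples, the definitions "le" and "even" are hypothetical examples over arithmetic primitives, and expand() rewrites defined symbols until only primitives remain.

```python
# Each definition maps a defined head symbol to a function that builds
# its expansion from the (already-expanded) argument subterms.
DEFS = {
    "le":   lambda x, y: ("exists", "z", ("eq", ("plus", x, "z"), y)),
    "even": lambda n:    ("exists", "m", ("eq", n, ("plus", "m", "m"))),
}

def expand(t):
    """Rewrite a source-language statement into the target language,
    which contains only primitive symbols (plus, eq, exists, ...)."""
    if not isinstance(t, tuple):
        return t                      # a variable or constant
    head, *args = t
    args = [expand(a) for a in args]  # expand subterms first
    if head in DEFS:
        return expand(DEFS[head](*args))
    return (head, *args)
```

So expand(("even", "x")) yields the pure-arithmetic statement ("exists", "m", ("eq", "x", ("plus", "m", "m"))), and provability of the source statement just means provability of its expansion.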

So now the important thing is to make sure that there are usable inference rules in the source language. Ideally, all the inference rules that work in the target language are also sound in the source. But this doesn't necessarily have to be the case. For example, if you define the expansion of, say, y = 1/x to be x * y = 1, then you have 1/0 not equal to anything. This is cool in a way, but you have to use somewhat different axioms for equality.

Partiality is of course a big issue here. Joe Hurd recommended this paper on how to get partial effects in a total logic such as HOL. A different approach is Jan Kuper's quite readable thesis, which gave me the courage to consider tweaking the equality axioms in the source language so that you can write terms that don't exist. Also see Freek Wiedijk's First order logic with domain conditions.

There is one way in which definitions may be interesting: as a means of making proofs portable to different axiomatizations. The idea is that a module of definitions has an interface that exports proved theorems, but might keep certain details private. A good example is pairing. Your theorems basically say that you get out what you put in, and are the same in Z_2, HOL, and ZFC, but the actual construction (hidden) is quite different. When you plug in a different definition module with the same interface, you still expect your proofs to go through.
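A Python sketch of the module idea (the API here is hypothetical, just my way of illustrating it): two quite different hidden constructions of pairing export the same interface, and the exported "theorem", that you get out what you put in, holds for both, so a client can't tell which construction was plugged in.

```python
class CantorPair:
    """The classic diagonal construction."""
    @staticmethod
    def pair(x, y):
        s = x + y
        return s * (s + 1) // 2 + x

    @staticmethod
    def unpair(n):
        s = 0                      # find the diagonal containing n
        while (s + 1) * (s + 2) // 2 <= n:
            s += 1
        x = n - s * (s + 1) // 2
        return (x, s - x)

class BitPair:
    """x's bits, then a 1 bit, then y zero bits: (2x + 1) * 2^y."""
    @staticmethod
    def pair(x, y):
        return (2 * x + 1) << y

    @staticmethod
    def unpair(n):
        y = 0                      # strip trailing zero bits
        while n % 2 == 0:
            n //= 2
            y += 1
        return ((n - 1) // 2, y)

def satisfies_interface(mod, bound=30):
    """The interface 'theorem', checked by brute force."""
    return all(mod.unpair(mod.pair(x, y)) == (x, y)
               for x in range(bound) for y in range(bound))
```

Any client written against pair/unpair and the round-trip property works identically with either module, which is exactly the substitutability I'd want from hidden constructions.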

I'm not sure about the mathematical consequences of this approach. I get a feeling that it won't work in all cases, but if it works in all non-contrived ones, that's good enough. It's also entirely possible I'm reinventing category theory, in which case I better break down and get a book on it. I got a nice email from Sarah Mount recommending a few, as well as this paper updating Reynolds' result that polymorphism is not set-theoretic.

The feedback and intellectual excitement I'm getting from my exploration into proof systems is turning out to be quite addictive!

Second-order arithmetic

Just a quick note: many, if not most, of the questions I posed yesterday about second-order arithmetic are answered in the books referenced. I've skimmed them and am still grokking the details.

These books also give nicer pairing functions than the one I came up with on my own, but it's nice to note that they're recognizably similar.

Voda's book in particular builds up a great deal of definitional machinery, much of it geared towards computer science applications.

12 Oct 2002 (updated 12 Oct 2002 at 09:00 UTC)
Aaron Swartz on the Eldred case

AaronSw has written a fine first-person account of the Eldred vs. Ashcroft case seeking to overturn the latest 20-year extension to copyrights. Also see Aaron's excellent visualization of the evolution of copyright lifetime since 1790.

I have an idea for a poster or t-shirt: Mickey Mouse is crushing diamonds underfoot. The caption reads, "Diamonds last a pretty long time, but Copyright is Forever(R)".

More words on proofs

There's a lot of material, through email and my own thinking. I'll try to get through it. This entry might be long.

Second-order arithmetic

chalst recommends second-order arithmetic as a nice "small" system. I poked around a bit and am definitely intrigued. It's been well-studied for a long time, which of course means that the main results are in dusty books rather than Web pages. Here's what I did find, though: a book (with a first chapter and content-rich reviews online), a concise definition (with axioms). There are also some drafts of a book by Paul Voda on computability theory. These are interesting, but it will take some slogging to figure out how to make it practical.

None of the "fifteen provers" in Freek's list use second-order arithmetic. Why not? Voda's CL seems to be based on it, but I don't see other people talking about it. Why is it not on Freek's list?

Can you use HOL to prove the consistency of second-order arithmetic (as ZFC can do with HOL, but see below)? If so, HOL would seem to be a strictly "larger" system. What other important things are in HOL but not second-order arithmetic? My intuition tells me that integration is one of them, but of course it could be wrong (I'm not an intuitionist).

Conversely, can second-order arithmetic prove the consistency of primitive recursive functions? If so, I have a pretty good idea of examples you can do in second-order arithmetic but not primitive recursion: Ackermann's function is the most famous.

Definition schemes

If you look at the axioms for second order arithmetic, it's clear that they'd be painful to work with directly. You can't even write a tuple of integers (but see below). Instead, you have to construct all these basics from the primitives.

It's clear that any usable formal system must have a system for introducing definitions for things like tupling. At minimum, such a scheme must be safe, in that you can't introduce inconsistency through definitions. Metamath's mechanism for definitions lacks enforcement of this property, but almost all the definitions introduced in set.mm are of a restricted form for which safety is obvious (there's lots more discussion in the Metamath book). Fortunately, ZFC is powerful enough that these restrictions are not too painful. In particular, you can express concepts of extraordinary richness and complexity as functions, then apply and compose them quite straightforwardly.

HOL's definition scheme is a lot more complex. Basically, you provide a proof that implies that the new definition won't let in any inconsistency, then the system lets you define your new constant or type. For types, you do this by giving a construction in terms of already existing types. However, once you sneak the definition past the prover, it doesn't tie you to that specific construction. In fact, you could replace the construction with another, and all proofs would still go through. I consider this an important feature, well worth the extra complexity it entails.

Metamath's set.mm doesn't have this feature; rather, it nails down specific set-theoretical constructions of all the basics (fortunately, these are for the most part the standard ones used by mathematicians). In practice, all the proofs in set.mm are carefully designed so that the construction can be replaced, but again there's nothing to check this short of plugging in different constructions and trying it. It would be possible to rework set.mm so that reals are a tuple of 0, 1, +, -, *, /, etc., and all theorems about reals are explicitly quantified over this tuple. It sounds pretty painful and tedious, though. (Incidentally, Norm Megill sent me a nice email sketching how to do it).

I get the feeling that for second-order arithmetic, the situation is even more urgent. My intuition is that you can't just add another layer of quantification as you can in ZFC (and probably HOL-like type systems), because that would "use up" one order.

Bertrand Russell famously said that definitions had many advantages, namely those of "theft over honest toil". It looks to me like a search for a system of "honest theft" will be rewarding.

A good definition scheme can help with some other concerns, as well. Primarily, it should be possible to construct the basics compatibly in a number of logical systems. Sequences of integers, for example, are straightforward in second-order arithmetic, HOL, and ZFC, although in each of these cases the most natural construction is quite different.

I'm also intrigued by the possibility of instantiating an axiom set with a concrete type, perhaps multiple times within the same universe. The example that comes most readily to mind is integer arithmetic as a subset of the reals. If you look at set.mm, naturals appear (at least) twice; once in the usual set-theoretic construction, and a second time as a subset of reals (which, in turn, are a subset of complex numbers). The Peano axioms are provable in both. Thus, a proof expressed in terms of Peano axioms only should prove things in both constructions. It could be a powerful technique to stitch together otherwise incompatible modules, although of course it's equally probably I'm missing something here.

A little puzzle

One of the things I did not find was a library of useful constructions in second-order arithmetic, so I thought about them myself. One of the most basic constructions, pairs of naturals, makes a nice puzzle. This should be doable for most readers.

I came across one such construction in the datatype package for HOL. It is: pair(x, y) = (2 * x + 1) * 2^y (where ^ is exponentiation). This is most easily understandable as a bit sequence representation: the bit sequence of x, followed by a 1 bit, followed by y 0 bits.
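A quick sketch of that construction, with an inverse that makes the bit-sequence reading concrete (the code is mine, not HOL's):

```python
# HOL datatype-package pairing: pair(x, y) = (2*x + 1) * 2**y.
# Bit view: the bits of x, then a 1 bit, then y zero bits.

def pair(x, y):
    return (2 * x + 1) * 2 ** y

def unpair(n):
    # Strip trailing zero bits to recover y, then read off x.
    assert n >= 1
    y = 0
    while n % 2 == 0:
        n //= 2
        y += 1
    return ((n - 1) // 2, y)

# Every small pair round-trips, and no two pairs collide.
seen = set()
for x in range(50):
    for y in range(50):
        n = pair(x, y)
        assert unpair(n) == (x, y)
        assert n not in seen
        seen.add(n)
```

Note that the range is the positive naturals (pair(0, 0) = 1), which is harmless for the encoding.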

But second-order arithmetic doesn't have exponentiation as a primitive (although you can define it). Can you write a closed-form expression as above using only +, *, >, and an if-then-else operator ("pred ? a : b" in C)? Obviously, such an expression has to meet a uniqueness property: no two pairs of naturals can map to the same result.
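If you want to try the puzzle, here's a brute-force harness (my own, and no spoiler): it exhaustively checks the uniqueness property on an initial segment of N x N. A finite check can't prove injectivity, but it catches bad candidates quickly.

```python
# Check a candidate pairing expression for collisions on [0, bound) x [0, bound).
def injective_on(f, bound):
    seen = {}
    for x in range(bound):
        for y in range(bound):
            v = f(x, y)
            if v in seen:
                return False, (x, y), seen[v]
            seen[v] = (x, y)
    return True, None, None

# The exponentiation-based pairing above passes...
ok, _, _ = injective_on(lambda x, y: (2 * x + 1) * 2 ** y, 50)
assert ok

# ...while a bad candidate like plain addition fails immediately.
ok, clash, earlier = injective_on(lambda x, y: x + y, 5)
assert not ok
```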

I've got a pretty nice, simple construction, but I have the feeling an even simpler one (perhaps without the if-then-else) is possible. In any case, I'll keep posting hints if nobody comes up with an answer.

The Web as a home for proofs

In any case, the choice of logical systems and the conversions between them, while interesting and important, is but one aspect of the evil plan. The other is to make the proof system live comfortably on the Web.

I think "hypertext" is a good analogy here. We've had plain vanilla text for a very long time. Closed-world hypertexts resident on a single computer system (such as HyperCard) also predated the Web by many years. A lot of people dreamed about a global hypertext for quite some time, but it took about thirty years for the Web in its present form to happen.

Likewise, proof systems have come a long way. In the early days, even proofs of trivial theorems were quite painful. Over the years, technology has improved so that truly impressive proofs are now practical. Even so, I can't just download my math from the web, do some proofs, and upload the results. I'm convinced that this is exactly what's needed in order to build a library of correct programs and their associated proofs.

How did the Web succeed where so many bright people failed before? It's certainly not through doing anything interesting with the text part of the hypertext problem. In fact, HTML as a document format is, at best, a reasonable compromise between conflicting approaches, and, at worst, a steaming pile of turds. What won the day was a combination of simplicity and attention to the real barriers to successful use of hypertext.

In particular, the Web made it easy for people to read documents published to the system, and it made it easy for people to write documents and get them published. In the early days of the Web, touches like the choice between ftp:// and http:// protocols (it was sometimes easier to get write access to an FTP repository than to set up a new server), port :8080 (for the relatively common case of being able to run user processes on a machine without having root access), and of course the use of DNS rather than some more centrally managed namespace significantly lowered the barriers to publishing.

The analogous situation with proofs is clear to me. It should be easy to use a proof once it's already been published, and it should be easy to publish a proof once you've made one. A great deal of the story of theorem provers is about the cost of proving a theorem. This cost is dramatically reduced when it is amortized over a great many users.

I'll talk a bit about ways to achieve these goals, but first let's look at how present systems fail to meet them. Keep in mind, they weren't designed as Web-based systems, so the criticism isn't entirely fair. It would be like complaining that ASCII doesn't have all the features you need in a hypertext format.

Metamath
In many ways, Metamath is a nice starting point for a Web-based proof system. In particular, there's a clean conceptual separation between prover and verifier, enforced by the simple .mm file format. The fact that set.mm has been nicely exported to the Web for browsing is also a good sign.

A big problem with Metamath is the management of the namespaces for definitions and for theorems. There are two issues. First, the names of existing objects might change. Second, there's nothing to prevent two people from trying to use the same name. The namespace issue is of course analogous to programming, and there's lots of experience with good solutions there (including Modula's module system, which serves as the basis for Python's).

Another big problem is the lack of a safe mechanism for definitions, as discussed above. Of course, on the Web, you have to assume malicious input, while in the local case you only have to worry about incompetence.

HOL
HOL solves the definition problem. In fact, the definition scheme is one of the best things about HOL.

However, the distinction between prover and verifier is pretty muddy. HOL proofs are expressed as ML code. This isn't a good exchange format over the Web, for a variety of reasons.

Fortunately, people have started exploring solutions to this problem, specifically file formats for proofs. See Wong, and Necula and Lee. I haven't seen much in the way of published proof databases in these formats, but I also wouldn't be surprised to see it start to happen. Interestingly, Ewen Denney uses these results (motivated by proof-carrying code) as the basis of his HOL to Coq translator prototype.

I'm unaware of any attempt to manage namespaces in HOL (or any other proof framework), but again this could just be my ignorance.

How to fix namespaces

Whew, this entry is long enough without me having to go into lots of detail, but the namespace problem seems relatively straightforward to solve. In short, you use cryptographic hashes to make the top-level namespace immutable and collision-resistant. Then, you have namespaces under that so that almost all names other than a few "import" statements are nicely human-readable.
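A minimal sketch of the idea, with all the names and the toy module format mine:

```python
import hashlib

def module_id(source: bytes) -> str:
    # Top-level name: the cryptographic hash of the module's content,
    # which is immutable and collision-resistant by construction.
    return hashlib.sha256(source).hexdigest()

# A stand-in for a published proof module.
peano_src = b"theorem add_comm: x + y = y + x ..."
peano_id = module_id(peano_src)

# An importing module pins exactly the content it was checked against;
# everything below the hash stays human-readable.
imports = {"peano": peano_id}
reference = ("peano", "add_comm")

# A verifier re-hashes whatever it fetched, so any tampering or silent
# renaming of the upstream module is detected.
fetched = peano_src
assert module_id(fetched) == imports[reference[0]]
```

Collisions between independently published modules are then a non-issue: two different contents get two different top-level names, no registry required.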

There's more detail to the idea, of course, but I think it follows fairly straightforwardly. Ask, though, if you want to see it.

Category theory

Yes, I've come across category theory in my travels. I have an idea what it's about, but so far I don't think I need it. My suspicion is that it might be helpful for proving metatheorems about the system once it's designed, but right now I'm mostly exploring. Ideally, the system will be simple enough that it's obviously sound, but of course it's always nice to have a rigorous proof.

Even so, if I get a strong enough recommendation for an accessible intro to the subject, I'll probably break down and read it.

Translating HOL to ZFC

chalst also warns that the translation from HOL to ZFC is not trivial, and cites a classic paper by Reynolds. Of course, being Web-bound I can't quickly put my hands on a copy, but I think I understand the issue. Here's a quick summary:

In untyped lambda calculus, representing the identity function (id = \x.x) in ZFC set theory is problematic. In particular, applying id to itself is perfectly reasonable in the lambda calculus, but a function in ZFC theory can't take itself as an argument, because that would create a "membership loop", the moral equivalent of a set containing itself.

For any given type, the (monomorphic) identity function over that type is straightforward. In Metamath notation, given a type T, it's I restricted to T.

So the trick is to represent the polymorphic identity function (\forall A. \x:A. x) as a function from a type to the identity function over that type. Before you apply a function, you have to instantiate its type. You never apply a polymorphic function directly, and you never have polymorphic functions as arguments or results of other functions (or as elements of types). So no membership loops, and no problems in ZFC.
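Here's a toy model of that translation in Python, with finite frozensets standing in loosely for ZFC sets (the modeling choices are all mine): the monomorphic identity over a type T is an ordinary set of pairs, and the polymorphic identity is a function from the type to that set, so it is never applied to itself and no membership loop can arise.

```python
def mono_id(T):
    # Identity over T as a set of ordered pairs: { (t, t) : t in T }.
    return frozenset((t, t) for t in T)

def poly_id(T):
    # The polymorphic identity: instantiate the type first.
    return mono_id(T)

def apply_fn(f, x):
    # Application for set-of-pairs functions.
    [y] = [b for (a, b) in f if a == x]
    return y

# Instantiate at a concrete type, then apply -- never the other way around.
nat5 = frozenset(range(5))
id_nat5 = poly_id(nat5)
assert apply_fn(id_nat5, 3) == 3
```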

Is this what you meant, or am I missing something?

Well, that's probably enough for today. And I didn't even get into Joe Hurd's email :)

Proof feedback

I've been getting some really good feedback from my series of diary entries on formal proof systems. The latest batch are from Norm Megill (of Metamath fame) and Joe Hurd, who's done quite a bit of work with HOL.

I've used up almost all of my diary time responding to these threads (privately), but the juicy bits will find their way back here.
