12 Oct 2002 raph   » (Master)

Aaron Swartz on the Eldred case

AaronSw has written a fine first-person account of the Eldred v. Ashcroft case seeking to overturn the latest 20-year extension to copyrights. Also see Aaron's excellent visualization of the evolution of copyright lifetime since 1790.

I have an idea for a poster or t-shirt: Mickey Mouse is crushing diamonds underfoot. The caption reads, "Diamonds last a pretty long time, but Copyright is Forever(R)".

More words on proofs

There's a lot of material, through email and my own thinking. I'll try to get through it. This entry might be long.

Second-order arithmetic

chalst recommends second-order arithmetic as a nice "small" system. I poked around a bit and am definitely intrigued. It's been well-studied for a long time, which of course means that the main results are in dusty books rather than Web pages. Here's what I did find, though: a book (with a first chapter and content-rich reviews online) and a concise definition (with axioms). There are also some drafts of a book by Paul Voda on computability theory. These are interesting, but it will take some slogging to figure out how to make it practical.

None of the "fifteen provers" in Freek's list use second-order arithmetic. Why not? Voda's CL seems to be based on it, but I don't see other people talking about it. Why is it not on Freek's list?

Can you use HOL to prove the consistency of second-order arithmetic (as ZFC can do with HOL, but see below)? If so, HOL would seem to be a strictly "larger" system. What other important things are in HOL but not second-order arithmetic? My intuition tells me that integration is one of them, but of course it could be wrong (I'm not an intuitionist).

Conversely, can second-order arithmetic prove the consistency of primitive recursive arithmetic? If so, I have a pretty good idea of the kind of example you can handle in second-order arithmetic but not with primitive recursion alone: Ackermann's function is the most famous.
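
For concreteness, here's a quick sketch of Ackermann's function in Python (my own illustration, not anything from a prover). It is plainly total, but it grows faster than any primitive recursive function:

    # Ackermann's function: total, but not primitive recursive.
    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m - 1, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61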

Definition schemes

If you look at the axioms for second order arithmetic, it's clear that they'd be painful to work with directly. You can't even write a tuple of integers (but see below). Instead, you have to construct all these basics from the primitives.

It's clear that any usable formal system must have a system for introducing definitions for things like tupling. At minimum, such a scheme must be safe, in that you can't introduce inconsistency through definitions. Metamath's mechanism for definitions lacks enforcement of this property, but almost all the definitions introduced in set.mm are of a restricted form for which safety is obvious (there's lots more discussion in the Metamath book). Fortunately, ZFC is powerful enough that these restrictions are not too painful. In particular, you can express concepts of extraordinary richness and complexity as functions, then apply and compose them quite straightforwardly.

HOL's definition scheme is a lot more complex. Basically, you provide a proof that implies that the new definition won't let in any inconsistency, then the system lets you define your new constant or type. For types, you do this by giving a construction in terms of already existing types. However, once you sneak the definition past the prover, it doesn't tie you to that specific construction. In fact, you could replace the construction with another, and all proofs would still go through. I consider this an important feature, well worth the extra complexity it entails.
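
To make that concrete, here's a toy sketch in Python (my own analogy, not HOL syntax): a "kernel" that only admits a new type when handed evidence that its defining predicate is satisfiable, and then forgets the construction.

    # Toy "kernel": a definition is admitted only with a witness showing
    # the defining predicate is satisfiable (the analogue of HOL's
    # non-emptiness proof); the witness itself is then discarded.
    class Kernel:
        def __init__(self):
            self.types = {}

        def new_type(self, name, predicate, witness):
            if not predicate(witness):
                raise ValueError("definition rejected: no valid witness")
            # Only the name and predicate survive; later work can't
            # depend on which witness (construction) was used.
            self.types[name] = predicate

    kernel = Kernel()
    kernel.new_type("even", lambda n: n % 2 == 0, witness=0)

The point of the analogy is the last comment: swap in a different witness (say 2) and nothing downstream changes.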

Metamath's set.mm doesn't have this feature; rather, it nails down specific set-theoretical constructions of all the basics (fortunately, these are for the most part the standard ones used by mathematicians). In practice, all the proofs in set.mm are carefully designed so that the construction can be replaced, but again there's nothing to check this short of plugging in different constructions and trying it. It would be possible to rework set.mm so that reals are a tuple of 0, 1, +, -, *, /, etc., and all theorems about reals are explicitly quantified over this tuple. It sounds pretty painful and tedious, though. (Incidentally, Norm Megill sent me a nice email sketching how to do it).

I get the feeling that for second-order arithmetic, the situation is even more urgent. My intuition is that you can't just add another layer of quantification as you can in ZFC (and probably HOL-like type systems), because that would "use up" one order.

Bertrand Russell famously remarked that postulating what we want has many advantages, namely those of "theft over honest toil". It looks to me like a search for a system of "honest theft" will be rewarding.

A good definition scheme can help with some other concerns, as well. Primarily, it should be possible to construct the basics compatibly in a number of logical systems. Sequences of integers, for example, are straightforward in second-order arithmetic, HOL, and ZFC, although in each of these cases the most natural construction is quite different.

I'm also intrigued by the possibility of instantiating an axiom set with a concrete type, perhaps multiple times within the same universe. The example that comes most readily to mind is integer arithmetic as a subset of the reals. If you look at set.mm, the naturals appear (at least) twice: once in the usual set-theoretic construction, and a second time as a subset of the reals (which, in turn, are a subset of the complex numbers). The Peano axioms are provable in both. Thus, a proof expressed only in terms of the Peano axioms should prove things about both constructions. It could be a powerful technique for stitching together otherwise incompatible modules, although of course it's equally probable I'm missing something here.
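
Here's a small Python sketch of what I mean (my own illustration, with the obvious caveat that Python functions are not proofs): a fact stated only against the "Peano interface" of a zero and a successor applies equally to two quite different constructions of the naturals.

    # Build the numeral for n using only a zero and a successor.
    def numeral(zero, succ, n):
        x = zero
        for _ in range(n):
            x = succ(x)
        return x

    # Construction 1: von Neumann naturals, each number as the set of
    # its predecessors.
    vn_zero = frozenset()
    vn_succ = lambda s: s | frozenset([s])

    # Construction 2: naturals as a subset of the (floating-point) reals.
    real_zero = 0.0
    real_succ = lambda r: r + 1.0

    print(len(numeral(vn_zero, vn_succ, 3)))  # 3 -- the set {0, 1, 2}
    print(numeral(real_zero, real_succ, 3))   # 3.0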

A little puzzle

One of the things I did not find was a library of useful constructions in second-order arithmetic, so I thought about them myself. One of the most basic constructions, pairs of naturals, makes a nice puzzle. This should be doable for most readers.

I came across one such construction in the datatype package for HOL. It is: pair(x, y) = (2 * x + 1) * 2^y (where ^ is exponentiation). This is most easily understandable as a bit sequence representation: the bit sequence of x, followed by a 1 bit, followed by y 0 bits.
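
A quick sanity check of that reading, in Python (my own sketch, not from the HOL sources), with an inverse that recovers x and y by stripping the trailing zero bits and the marker 1 bit:

    def pair(x, y):
        return (2 * x + 1) * 2 ** y

    def unpair(p):
        y = 0
        while p % 2 == 0:       # strip the trailing zero bits to recover y
            p //= 2
            y += 1
        return (p - 1) // 2, y  # drop the marker 1 bit to recover x

    assert all(unpair(pair(x, y)) == (x, y)
               for x in range(100) for y in range(100))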

But second-order arithmetic doesn't have exponentiation as a primitive (although you can define it). Can you write a closed-form expression like the one above using only +, *, >, and an if-then-else operator ("pred ? a : b" in C)? Obviously, such an expression has to meet a uniqueness property: no two pairs of naturals can map to the same result.

I've got a pretty nice, simple construction, but I have the feeling an even simpler one (perhaps without the if-then-else) is possible. In any case, I'll keep posting hints if nobody comes up with an answer.

The Web as a home for proofs

In any case, the choice of logical systems and the conversions between them, while interesting and important, is but one aspect of the evil plan. The other is to make the proof system live comfortably on the Web.

I think "hypertext" is a good analogy here. We've had plain vanilla text for a very long time. Closed-world hypertexts resident on a single computer system (such as HyperCard), also predated the Web by many years. A lot of people dreamed about a global hypertext for quite some time, but it took about thirty years for the Web in its present form to happen.

Likewise, proof systems have come a long way. In the early days, even proofs of trivial theorems were quite painful. Over the years, technology has improved so that truly impressive proofs are now practical. Even so, I can't just download my math from the web, do some proofs, and upload the results. I'm convinced that this is exactly what's needed in order to build a library of correct programs and their associated proofs.

How did the Web succeed where so many bright people failed before? It's certainly not through doing anything interesting with the text part of the hypertext problem. In fact, HTML as a document format is, at best, a reasonable compromise between conflicting approaches, and, at worst, a steaming pile of turds. What won the day was a combination of simplicity and attention to the real barriers to successful use of hypertext.

In particular, the Web made it easy for people to read documents published to the system, and it made it easy for people to write documents and get them published. In the early days of the Web, touches like the choice between ftp:// and http:// protocols (it was sometimes easier to get write access to an FTP repository than to set up a new server), port :8080 (for the relatively common case of being able to run user processes on a machine but not having root access), and of course the use of DNS rather than some more centrally managed namespace significantly lowered the barriers to publishing.

The analogous situation with proofs is clear to me. It should be easy to use a proof once it's already been published, and it should be easy to publish a proof once you've made one. A great deal of the story of theorem provers is about the cost of proving a theorem. That cost is, in effect, reduced dramatically when it is amortized over a great many users.

I'll talk a bit about ways to achieve these goals, but first let's look at how present systems fail to meet them. Keep in mind, they weren't designed as Web-based systems, so it's not fair criticism. It would be like complaining that ASCII doesn't have all the features you need in a hypertext format.

Metamath

In many ways, Metamath is a nice starting point for a Web-based proof system. In particular, there's a clean conceptual separation between prover and verifier, enforced by the simple .mm file format. The fact that set.mm has been nicely exported to the Web for browsing is also a good sign.

A big problem with Metamath is the management of the namespaces for definitions and for theorems. There are two issues. First, the names of existing objects might change. Second, there's nothing to prevent two people from trying to use the same name. The namespace issue is of course analogous to programming, and there's lots of experience with good solutions there (including Modula's module system, which serves as the basis for Python's).

Another big problem is the lack of a safe mechanism for definitions, as discussed above. Of course, on the Web, you have to assume malicious input, while in the local case you only have to worry about incompetence.

HOL

HOL solves the definition problem. In fact, the definition scheme is one of the best things about HOL.

However, the distinction between prover and verifier is pretty muddy. HOL proofs are expressed as ML code. This isn't a good exchange format over the Web, for a variety of reasons.

Fortunately, people have started exploring solutions to this problem, specifically file formats for proofs. See Wong and Necula and Lee. I haven't seen much in the way of published proof databases in these formats, but I also wouldn't be surprised to see it start to happen. Interestingly, Ewen Denney uses these results (motivated by proof-carrying code) as the basis of his HOL to Coq translator prototype.

I'm unaware of any attempt to manage namespaces in HOL (or any other proof framework), but again this could just be my ignorance.

How to fix namespaces

Whew, this entry is long enough without me having to go into lots of detail, but the namespace problem seems relatively straightforward to solve. In short, you use cryptographic hashes to make the top-level namespace immutable and collision-resistant. Then, you have namespaces under that so that almost all names other than a few "import" statements are nicely human-readable.

There's more detail to the idea, of course, but I think it follows fairly straightforwardly. Ask, though, if you want to see it.
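
Just to gesture at it, here's a minimal sketch in Python, with made-up module contents: the hash of a module's contents serves as its immutable top-level name, and the human-readable names live underneath it.

    import hashlib

    def module_id(contents: bytes) -> str:
        # Collision resistance makes the top-level name effectively
        # immutable: change the module and you get a different name.
        return hashlib.sha256(contents).hexdigest()

    # A hypothetical proof module, just for illustration.
    arith = b"theorem add_comm: ...\ntheorem mul_comm: ...\n"
    top = module_id(arith)

    # An importing module refers to a theorem under that hash:
    print(top + "/add_comm")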

Category theory

Yes, I've come across category theory in my travels. I have an idea what it's about, but so far I don't think I need it. My suspicion is that it might be helpful for proving metatheorems about the system once it's designed, but right now I'm mostly exploring. Ideally, the system will be simple enough that it's obviously sound, but of course it's always nice to have a rigorous proof.

Even so, if I get a strong enough recommendation for an accessible intro to the subject, I'll probably break down and read it.

Translating HOL to ZFC

chalst also warns that the translation from HOL to ZFC is not trivial, and cites a classic paper by Reynolds. Of course, being Web-bound I can't quickly put my hands on a copy, but I think I understand the issue. Here's a quick summary:

In the untyped lambda calculus, representing the identity function (id = \x.x) in ZFC set theory is problematic. In particular, applying id to itself is perfectly reasonable in the lambda calculus, but a function in ZFC can't take itself as an argument, because that would create a "membership loop", the moral equivalent of a set containing itself.

For any given type, the (monomorphic) identity function over that type is straightforward. In Metamath notation, given type T it's I restricted to T.

So the trick is to represent the polymorphic identity function (\forall A. \x:A. x) as a function from a type to the identity function over that type. Before you apply a function, you have to instantiate its type. You never apply a polymorphic function directly, and you never have polymorphic functions as arguments or results of other functions (or as elements of types). So no membership loops, and no problems in ZFC.
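
In Python-flavored pseudocode (again my own sketch, with small finite "types" standing in for sets), the polymorphic identity is a map from a type to the monomorphic identity on that type, so it is always instantiated before it is applied:

    def identity_at(T):
        # the set-theoretic function { <x, x> : x in T }, here as a dict
        return {x: x for x in T}

    poly_id = identity_at          # "\forall A. \x:A. x"

    nat_id = poly_id(range(5))     # instantiate at a (small) type first...
    print(nat_id[3])               # ...then apply: prints 3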

Is this what you meant, or am I missing something?

Well, that's probably enough for today. And I didn't even get into Joe Hurd's email :)
