9 Aug 2002 raph   » (Master)

Why formal methods?

Formal methods have earned a bad reputation. At one time, I think it was widely held that we'd eventually figure out how to prove large-scale programs correct, and that the benefits would be compelling. But it hasn't worked out that way. People program as carelessly as ever, with reams of bugs and security holes to show for it.

Even so, I'm stubbornly hopeful. I think we simply haven't worked out good ways to do formal proofs yet. Mathematical proof style isn't really all that formal, and doesn't seem to adapt well to computer problems. Dijkstra's work provides glimpses of the future, but those techniques won't become popular until we can teach ordinary people to use them.

Another problem is that mathematical logic is fairly nasty. Especially when dealing with infinities, you have to be careful to avoid pitfalls like the Russell set "paradox". This becomes a real problem in rigorously formal logic because you really want "metatheorems" to work: essentially, creating new deduction rules along with proofs that they're sound. The trouble is that no formal system powerful enough to express arithmetic can be both complete and consistent. So you have to place limits on metatheorems, and getting work done often has the flavor of working around those limits.
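
For readers who haven't seen it, the Russell construction fits on one line (written here in LaTeX notation):

    R = \{\, x \mid x \notin x \,\}, \qquad R \in R \iff R \notin R

Unrestricted set comprehension lets you build R, and then either answer to "is R a member of itself?" contradicts the other. Formal systems escape this only by restricting which comprehensions you may write, much as they have to restrict which metatheorems you may use.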

What's the answer? Well, one way is to bite the bullet and adopt a reasonably powerful axiom set as the basis for all other work. A problem here is that you can't really get people to agree on which axiom set is the right one. In fact, controversy still rages over whether to accept the law of the excluded middle or the axiom of choice.
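
For concreteness, these are the two axioms usually at the center of that controversy, stated in LaTeX notation:

    \text{Excluded middle:}\quad \forall P,\ P \lor \lnot P

    \text{Choice:}\quad \forall \mathcal{F}\,\Bigl(\emptyset \notin \mathcal{F} \;\Longrightarrow\; \exists f\colon \mathcal{F}\to\bigcup\mathcal{F},\ \forall S\in\mathcal{F},\ f(S)\in S\Bigr)

Constructivists object to the first because it lets you prove something exists without exhibiting it; the second is even less constructive, asserting a choice function that no explicit rule need describe.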

But I find these debates deeply unsatisfying. What bearing do they have on whether a program runs according to its specification? My gut feeling is that infinity-prone concepts such as integer, real, and set are a bit too seductive. Computers deal with finite objects. In many ways, a 64-bit word is a simpler concept than an integer, as evidenced by the much larger set of open problems surrounding the latter. A 64-bit addressable array of bytes is a bit larger, but still finite. You can do a lot in that space, and a lot of the classically tough problems become conceptually simple or trivial. Solvability of Diophantine equations? Undecidable over the integers, but if you confine yourself to solutions that fit in memory, just enumerate all possible arrays and see if any of them fits. You wouldn't do it this way in practice, of course, but I find it comforting to know that it presents no conceptual difficulty.
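
Here's a toy sketch of that brute-force argument, with an invented equation and an artificially tiny bound so it actually finishes:

    from itertools import product

    def has_bounded_solution(f, nvars, bound):
        """Enumerate every assignment of nvars values in [1, bound) and test f == 0."""
        return any(f(*xs) == 0 for xs in product(range(1, bound), repeat=nvars))

    # Illustrative equation (not from the post): x^2 + y^2 - z^2 = 0.
    # The bound stands in for "fits in memory"; a real 64-bit bound would be
    # hopelessly slow, which is exactly the "not in practice" caveat above.
    print(has_bounded_solution(lambda x, y, z: x*x + y*y - z*z, 3, 8))  # True, e.g. (3, 4, 5)

The point is only that decidability is trivial once the search space is finite, not that enumeration is a sensible algorithm.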

It looks as if the field of formal methods may be heading in this direction anyway. Recently, the subfield of model checking has been racking up some success stories. The models in question have much more of a finite flavor than most mathematical approaches to computation. It may well be that the technology for reasoning about finite formal systems evolves out of the model checking community into something widely applicable to programs. That would be cool.
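
To make "reasoning about finite formal systems" concrete, here's a minimal explicit-state checker in the model-checking spirit: exhaustively explore a finite state space and report the first state that violates an invariant. The two-counter toy model is invented purely for illustration.

    from collections import deque

    def check_invariant(initial, successors, invariant):
        """Breadth-first search over all reachable states; return (ok, counterexample)."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            if not invariant(state):
                return False, state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return True, None

    # Toy model: two processes each cycling through states 0 (idle), 1 (waiting),
    # 2 (critical); the invariant says they are never both critical at once.
    def successors(state):
        a, b = state
        yield ((a + 1) % 3, b)
        yield (a, (b + 1) % 3)

    ok, bad = check_invariant((0, 0), successors, lambda s: s != (2, 2))
    print(ok, bad)  # this naive model does reach (2, 2), so it prints: False (2, 2)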

I still haven't answered the title question: why formal methods? My answer is that formal techniques are our best hope for producing software of adequate quality. I'm under no illusion that they're a magic bullet. No matter how good the proof technology becomes, I'm sure it will always be at least one or two orders of magnitude more work to produce a provably correct program than to hack one out. And even then, programs will only be "correct" with respect to a specification. In some cases, a spec will be relatively simple and straightforward. Lossless data compression algorithms are probably my favorite example: essentially, you want to prove that the composition of compression and decompression is the identity function. But how can you prove a GUI correct?
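
That compression spec is simple enough to state directly, and even to check on sample inputs. Here's the round-trip property written against zlib; this is of course testing rather than proving:

    import os, zlib

    def round_trip_ok(data: bytes) -> bool:
        # The spec from above: decompressing the compressed data gives back the original.
        return zlib.decompress(zlib.compress(data)) == data

    # A handful of random inputs; a proof would have to cover all inputs, not a sample.
    print(all(round_trip_ok(os.urandom(n)) for n in (0, 1, 17, 4096)))  # True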

You can't, but the use of formal methods will put intense pressure on spec authors to remove needless complexity and ambiguity. When formal methods catch on, I think we'll start seeing specs that truly capture the essence of an idea, rather than the sprawling messes common today.

I also believe that security will be a major driving force in these developments. Our processes for writing software today (free and proprietary alike) are simply incapable of producing secure systems. But I think it's possible to formalize general security properties and apply them to a wide class of systems. Buffer overflows (still a rich source of vulnerabilities) are the obvious case, but I think it's also possible to nail down higher-level interactions. It would be cool, for example, to prove that a word processing format with rich scripting capabilities is incapable of propagating viruses.
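
One hedged sketch of how such a proof might be set up: give the document format a scripting language whose only primitives are pure, so that by construction a script has no channel through which to write files, send mail, or modify other documents. The tiny language below is invented solely for illustration.

    # Every primitive is a pure function; there is deliberately no file, network,
    # or shell primitive, so the "can't propagate anything" argument is structural.
    PURE_OPS = {
        "add": lambda a, b: a + b,
        "mul": lambda a, b: a * b,
        "concat": lambda a, b: str(a) + str(b),
    }

    def evaluate(expr):
        # expr is a literal (int or str) or a tuple (op_name, left_expr, right_expr).
        if isinstance(expr, (int, str)):
            return expr
        op, lhs, rhs = expr
        return PURE_OPS[op](evaluate(lhs), evaluate(rhs))

    print(evaluate(("concat", "total: ", ("add", 2, ("mul", 3, 4)))))  # total: 14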

Even so, getting the right spec is a hard problem. Timing attacks, for example, took a lot of people by surprise. If your spec doesn't take timing into account, you might have a system that's provably impossible to break (I'll assume that P != NP gets proven somewhere along the way), but that falls anyway. This brings me to another point: security assertions are often very low-level, while the natural tendency of computer science theorists is to lift the abstraction level as high as possible.
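
As a concrete (and invented-for-illustration) example of the kind of low-level detail a spec would have to pin down: both comparison functions below satisfy the same functional spec, but the first leaks how many leading bytes match through its running time, which is exactly what timing attacks exploit.

    import hmac

    def leaky_equal(secret: bytes, guess: bytes) -> bool:
        if len(secret) != len(guess):
            return False
        for x, y in zip(secret, guess):
            if x != y:
                return False   # early exit: running time depends on the secret
        return True

    def constant_time_equal(secret: bytes, guess: bytes) -> bool:
        return hmac.compare_digest(secret, guess)   # stdlib constant-time comparison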

This, I am convinced, is how we'll program in fifty years. A lot of work will go into writing good specifications; more than goes into writing code now. Then, when people actually write programs, they'll do correctness proofs as they go along. It might take a thousand times as much work to crank out a line, but I think we can easily get by on a thousandth as much code.

And I think it's a pretty good bet that this will come out of the free software world rather than proprietary companies. We'll just have to see.
