Older blog entries for graydon (starting at number 119)

17 Mar 2009 (updated 17 Mar 2009 at 19:40 UTC)

Chalst: Certainly he could target Clojure at LLVM; he'd just have to cook up a big, elaborate runtime to replace all the runtime services the JVM is providing for him now. LLVM gives you pretty much nothing runtime-y. At best it gives you, say, GC hooks, profiler hooks, or stack-management hooks into an unwinder library; in general its runtime library is totally minimal. This is not a criticism: LLVM is great, it's just not a runtime system. It's a code generator / compiler backend.

What he wrote was this:

I’d like to pick my VM for its security, footprint, handling of parallelism and messaging, and run-time appropriateness. This would let me choose Lisp, Haskell, Python or C++, depending on the skillset of engineers available to me; and the JVM, .NET platform, or LLVM, depending on how I meant the code to be used.

To me this shows a pretty broad misunderstanding of the "VM" suffix shared by JVM and LLVM. They're different layers in the language implementation stack. There is no run-time component to LLVM to speak of; nothing on the scale of the services offered by a JVM. No "parallelism and messaging" system, no verifier, no security system, no reflection services, no dynamic loading services beyond the OS loader, no adaptive inlining or specializing by the JIT as the program runs, no complete GC, etc. etc. I'm not particularly keen on the JVM's flavours of all these services, but they're nontrivial. If you're writing a language that wants any of that stuff, and you want to "target LLVM", you're going to be writing a lot more of your own runtime services. Even getting GC working in an LLVM-targeted language involves nontrivial user-written parts.

About your example: GCJ does not compile Java "to the GCC runtime". The GCC runtime is roughly "libgcc and libc". GCJ compiles using GCC's infrastructure, sure, but its runtime library is quite substantial on its own.

(Appropriately enough, a moment of searching turns up the fact that there is also an LLVM sub-project to provide the JVM and .NET runtime services on top of LLVM. Heh.)

Chalst: as far as I know that is one of the objections many people have to working in Haskell, or any language with a particularly "high level" semantic model sufficiently divorced from machine-parts. A correct and performant implementation of the language requires a large and complex runtime, often with a heavy set of automatic services, auxiliary data structures, and nontrivial compiler activity. This forces the programmer to give up a degree of control and predictability, and sets up a general performance tax / performance ceiling for the whole program.

It's rather the same objection asm programmers make when choosing against C or C++. The comparison extends, in fact, to the counter-arguments made by the high-level system defenders: that the C compiler (or JVM runtime as the case may be) is capable of automatic optimizations far beyond the "intention and control" of the lower-level hacker.

Strangely, media codecs and arithmetic libraries still get some of their cores written in asm, and OS kernels, graphics libraries, network stacks, servers, games and desktop applications still get written in C. I think the "automatic optimization beats any human" story is a bit overreaching: it doesn't happen as often as the defenders wish, nor often enough to make up for the systemic taxes.

The OP's notion that he'll someday be able to "choose" between LLVM and a JVM as a backend is, alas, an apples-to-oranges comparison. LLVM is a lower-level component (a compiler backend); you could implement a JVM using LLVM, but the complexity of a JVM comes from the abstract semantics required by the Java language spec (which includes a VM spec), not from any particular implementation substrate.

jedit's main text pane now seems to work, and all the gui-branch work is merged back to the gcc trunk in time for the 4.0 branch. if you download trunk, configure it with the cairo 0.3.0 snapshot, and run

gij -Dgnu.java.awt.peer.gtk.Graphics=Graphics2D -jar jedit.jar

you should get something like this.

free swing

today jedit started working on free swing. it's a bit ugly and slow, but it's by far the largest free swing GUI we've constructed yet. that's rendering on cairo, which seems to be maturing nicely. I also taught the imageio system to use gdk-pixbuf, so now we can load and save most major image formats.

monotone

we've upgraded to sqlite 3.0, which does away with most real size restrictions. I put some of my ogg files and digital camera images in it. seems to work. also the current head supports "single file" diffs, commits, reverts, etc. many active development branches now; people are adding features faster than I can keep track. that's quite satisfying.

free runtimes summit

Red Hat had a little summit which I attended last week, showing off the excellent work our free java hackers have been up to lately. But it was not all show and tell; an important theme to this meeting was getting various disagreeing people to talk face to face, with civility, rather than fighting through email.

Personally I don't like fighting much anymore. I'm particularly uninterested in the java and C# fight. So I wrote up a little exploration of the differences, to see if we can't just learn to live with them as minor dialects of the same basic language.

statistics and information theory

I got a couple nice books recently:

  1. Probability Theory: The Logic of Science (E. T. Jaynes)
  2. Information Theory, Inference, and Learning Algorithms (David MacKay)

Both these books are important to me, because the little statistics I tried to learn in university didn't make any sense. It wasn't for fear of math; I studied math. The stats I learned made vague sense when discussing uniform and discrete problems, but seemed increasingly mysterious as continuous non-uniform distributions were introduced: the justification for assigning a particular process to a particular distribution never seemed very clear, and the flow of information between knowns and unknowns, data and hypotheses, and the meaning of "randomness", became increasingly muddled. It resisted my attempts to understand it.

These books -- especially the former -- seem to place all that muddle in the context of a titanic struggle between Bayesian and Frequentist philosophical perspectives. Which is good. It's actually very important to me to see that there has been meaningful inquiry into the deeper epistemology of probability, because most statistics textbooks just pressure philosophical questions about the reasoning framework into humiliation and silence. These books come out plainly in favour of the Bayesian (knowledge-representation) view of probability, and give a pleasant contextualization of classical information theory in these terms. But they also spend a good deal of time discussing how a probabilistic reasoning process can be thought to make sense -- to be well-motivated and executed with confidence -- given the pragmatic needs of a creature that must reason under uncertainty.
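
To make the knowledge-representation reading concrete, here is the smallest worked update I can think of (the numbers are mine, invented for illustration; they are not from either book). Suppose a condition has a prior probability of 1%, and a test for it has a 90% true-positive rate and a 10% false-positive rate. On seeing a positive result, Bayes' rule gives:

  $$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \lnot H)\,P(\lnot H)} = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.083$$

The Bayesian reading of that 8.3% is as a state of knowledge about this particular case given the evidence, not a claim about the long-run frequency of some imagined ensemble of repeated trials.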

I've heard people describe Bayesian inference as a cult. I'd be curious to hear that side of the argument distilled; so far it just seems like refreshingly clear thinking (similar to the clarity of thinking in Exploring Randomness, another one I've recently enjoyed).

cool language of the week

IBAL is a nice language for playing with inference in a way which is easy for programmers. Perhaps the future will see more such languages.
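
To give a flavour of what inference-for-programmers looks like, here is a toy sketch in Rust (this is not IBAL syntax, and the model is invented): posterior inference over a tiny discrete model, by brute-force enumeration.

  // two coins, one fair and one biased; we observe heads once and ask
  // which coin we are probably holding.
  fn main() {
      // (hypothesis, prior, p(heads | hypothesis))
      let model = [("fair", 0.5_f64, 0.5_f64), ("biased", 0.5, 0.9)];

      // weight each hypothesis by prior * likelihood of the observation
      let joint: Vec<f64> = model.iter().map(|&(_, prior, ph)| prior * ph).collect();
      let evidence: f64 = joint.iter().sum();

      for ((name, _, _), j) in model.iter().zip(&joint) {
          println!("P({} | heads) = {:.3}", name, j / evidence);
      }
  }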

hashes

depending on how you view the state of cryptographic research, the results from this week are either very good or very bad. in the short term it probably means not much; in the slightly longer term it probably means we have a lot of replacing and upgrading to do.

this incident points out two facts:

  • cryptography is an arms race and you need to keep spending money on it as long as your opponents are
  • the ability to extend, augment, or replace algorithms in the field is an important feature for a security system

there will inevitably be an increase in pointers to henson's paper. beyond the preceding two points, the paper makes a valid argument that input or algorithm randomization can help turn permanent failure cases into transient ones. however, it extends these points, I think unfairly, into an attack on the whole concept of cryptographic hash functions (CHFs). that's a mistake, and it rests on glossing over what a CHF is and why we need them:

  • difference detection is the principal task of data integrity
  • humans can see big differences but not small differences
  • the meaning of "big" and "small" can be changed, depending on the type of lens you use
  • a CHF is a lens which enlarges some differences and shrinks others
  • integrity systems should always use as many lenses as they can afford to
  • working with "no lenses" is a meaningless concept: computers produce derived images of data all the time. even loading and storing bytes from memory is a copying operation, and there is always -- even with no attackers -- a certain probability that any bit in a string will flip.
  • CHFs produce good value for the money: you spend a little bit of money on R&D, a little bit of CPU time calculating, and a little bit of disk space for storing; you get a lot of integrity. that's the equation.

I agree with the point about hash randomization, but tossing out CHFs as a concept is a serious mistake. coding theory, along with say binary search, is one of the exceedingly few sources of computers' Real Ultimate Power.
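
to make the lens metaphor concrete, here is a toy sketch in Rust. the standard library hasher used below is *not* a CHF -- it is neither collision-resistant nor keyed against adversaries -- it just shows the shape of the operation: a one-bit difference in the input, too small for a human to see, is enlarged into a difference spread across the whole digest.

  use std::collections::hash_map::DefaultHasher;
  use std::hash::Hasher;

  // the "lens": compress an arbitrary string into a small, fixed-size
  // image in which small input differences become large and visible.
  fn digest(bytes: &[u8]) -> u64 {
      let mut h = DefaultHasher::new();
      h.write(bytes);
      h.finish()
  }

  fn main() {
      let a = b"the quick brown fox".to_vec();
      let mut b = a.clone();
      b[0] ^= 0x01; // flip a single bit

      println!("{:016x}", digest(&a)); // the two digests differ wildly
      println!("{:016x}", digest(&b));
  }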

LSP and subtyping

I don't usually care about subtyping. I read A Theory of Objects (Abadi and Cardelli) and found it pleasantly formal, but I probably missed a lot. the issue just doesn't usually grab me.

then I read oleg's old page about many OO languages failing to satisfy the LSP (the Liskov Substitution Principle), and realized how important and overlooked this critique is. the basic result is that subtyping in most OO languages these days is behaviorally wrong, and when it works, it works only by accident.

I find this remarkable!

there appear, from a further evening of digging in the literature, to be only two known ways to produce subtyping relationships which satisfy the LSP. the first statically prohibits the problem by careful construction of the type system and restrictions on the kinds of extensions possible in subtypes, the essential difference being to dispatch by type rather than by object. this is what CLU does, and amazingly the approach seems to have completely died off.

the second approach is to let the problem persist dynamically in your language, but check it at runtime using explicit pre- and post-conditions ("design by contract"), combining ancestor contracts appropriately when subtyping. this is of course what Eiffel is famous for, but the only other language I see picking up on it is D. in both cases, I find no mention of the fact that the correct combination of contracts in subtyping is a pure necessity for behavioral correctness. as necessary as an array bounds check or a null pointer check -- oh wait.
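
to pin down what "combining ancestor contracts appropriately" means, here is a hand-rolled sketch in Rust (an invented example; Rust has no built-in DBC, so the contracts are plain assertions). the rule: an override must accept at least everything the ancestor accepted (preconditions get OR-ed together) and promise at least everything the ancestor promised (postconditions get AND-ed together).

  trait Account {
      fn balance(&self) -> i64;
      fn withdraw(&mut self, amount: i64);
  }

  struct Basic { funds: i64 }

  impl Account for Basic {
      fn balance(&self) -> i64 { self.funds }
      fn withdraw(&mut self, amount: i64) {
          assert!(amount > 0 && amount <= self.funds); // ancestor precondition
          let old = self.funds;
          self.funds -= amount;
          assert!(self.funds == old - amount);         // ancestor postcondition
      }
  }

  struct Overdraft { funds: i64, limit: i64 }

  impl Account for Overdraft {
      fn balance(&self) -> i64 { self.funds }
      fn withdraw(&mut self, amount: i64) {
          // combined precondition: the ancestor's OR our weaker one, so
          // every call a client could legally make on a Basic still works.
          assert!((amount > 0 && amount <= self.funds)
               || (amount > 0 && amount <= self.funds + self.limit));
          let old = self.funds;
          self.funds -= amount;
          // combined postcondition: the ancestor's AND anything extra.
          assert!(self.funds == old - amount);
      }
  }

a subtype that tightened the precondition instead -- say, rejecting any withdrawal over 100 -- would compile fine in most OO languages and silently break every caller written against the ancestor. that is exactly the behavioral wrongness oleg's page describes.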

looking over the pantheon of modern OO languages which fail to address this issue lends further evidence to my belief that language design is actually regressing. CLU was developed before I was born.

on preventing null:

along the way of discussing pointer idioms, I wrote that nullable pointers should be expressed as a disjoint union between nothing (null) and a pointer value. this means that a pointer, to the type system, is something which does point to a live, non-null, non-special object. if a pointer "can be null", it's not expressed as a pointer; it's expressed as a union -- union {null, pointer}, so to speak. this is standard in several languages and works fine; you just need language support for disjoint unions, not C's "inclusive unions". it's really the same thing elanthis is describing, only formalized using a normal language feature. when you have a value of that type, you need to switch on its value in order to dereference it, such as:

  switch (maybenull) {
    case pointer p: p->dosomething();
    case null: donull();
  }

notice that there's only a pointer value p in scope in the switch arm corresponding to the pointer part of the union. the principal benefit to this approach is that the dereference operator is statically inapplicable to the null value. it doesn't even get a name. you know by looking at types alone when you need to write this form and when you have a live pointer. for cases where there may be a null, you're using a union of this type, and you're forced to handle it. for cases where there may not be a null, you just use a pointer. really, this is not at all novel. it already exists in many languages.
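
for a concrete instance, here is the same shape in Rust's option type (a minimal sketch; the types are invented for illustration):

  struct Thing;

  impl Thing {
      fn do_something(&self) { /* ... */ }
  }

  // "pointer or null" as a disjoint union: the dereference is only
  // possible in the arm that actually holds a pointer.
  fn handle(maybe: Option<&Thing>) {
      match maybe {
          Some(p) => p.do_something(), // a live pointer exists here, and only here
          None => { /* donull() */ }
      }
  }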

on preventing cycles:

the "transfer of ownership" idiom I described is insufficient, as elanthis pointed out. I think a more correct way to do this is to require that an owning pointer (or disjoint owning/null union) can only receive its value when it is initialized. it is then inexpressible to "assign" to that variable, merely to evict its value (transferring to another variable which is being initialized) or destroy it.

I think that restriction does the trick; you would need to initialize A before B and B before A to make a cycle. I believe there is some relationship between this approach and the "linear naming" research that chalst is doing. perhaps I'm underestimating his work though.
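
here is a sketch of why the restriction works, written with Rust-style move semantics (an invented example; only the shape of the rule matters):

  // owning pointers receive their value at initialization and can only
  // be evicted (moved), never copied or re-assigned. to close a cycle,
  // a would have to point at b while b already points at a -- but each
  // owner must be fully initialized before anything can point at it.
  struct Node {
      next: Option<Box<Node>>, // the owning-or-nothing union
  }

  fn main() {
      let a = Box::new(Node { next: None });
      let b = Box::new(Node { next: Some(a) }); // a's value is evicted into b
      // `a` is no longer usable here, so there is no way left to make
      // `a.next` point at `b`: the cycle is inexpressible.
      let _ = b;
  }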

in any case, I mostly disagree with elanthis' position that cyclical ownership is some sort of special, unsolvable problem. it's just a delicate problem. I think it's generally understood that structural language restrictions are a tradeoff between "helping to organize your thoughts" and "binding your hands from doing useful work"; such things must be developed carefully and with attention to costs, but not treated as sacred cows.

socializing

met some really nice, slightly older and significantly wiser folks for dinner yesterday; it's always neat to talk with people who have more long-term perspective on computing trends. also spent a few days in north carolina with the red hat gang soaking up the corporate cheer. it was a bit rushed, but I had a lot of fun and met many interesting and friendly hackers. next up: gcc summit!

D

I wanted to lend a moment's support to the D language, which is sensibly crafted and sticks to pragmatics rather than dogma. pragmatics are important. I'm especially happy to see it promoting DBC and formalized support for unit testing.

languages in general

I've been tinkering with some ideas for programming languages recently. not necessarily "new languages", but aspects of programming which I feel ought to be expressible in language. there are two issues in particular I'd very much like to see formalized into language constructs:

  1. "design by accounting" (in a sense analogous to DBC): the notion that the signature of a function includes resource use accounting; typically in terms of memory and time, though conceivably also in terms of user-defined costs. the notion being that a function has a certain cost which is the sum of the costs of its sub-expressions, with loops costing some bounded-above multiple of the cost of their body. costs would be calculated as much as possible by the compiler, and statically verified as much as possible. analogous to other forms of mixed static/dynamic checking, those cost checks which could not be statically verified (say at module boundaries, i/o operations, or hard-to-analyze loop heads) would be left in the program, making "withdrawls" from a runtime-managed cost center, with underflow causing an exception. the technology for this is just a redeployment of conventional value range propagation and bounds checking.

  2. an "ownership tree". this came up last night too. every time I get to discussing modern languages, recently, an uncomfortable fact comes up: I don't like garbage collection. I don't like it because it encourages the programmer to not think about ownership, so they build systems which slowly leak space, by accidental accumulation of dangling references. I don't claim that C's "explicit free" model is any better mind you; it makes you think about ownership but gives you no vocabulary with which to express your thoughts, so you wind up acting on mistaken assumptions (double free, forgotten free).

    I think the solution to this is to differentiate, at a low level, between owning and non-owning (weak) pointers. you should get an owning pointer from an allocation, and each allocation should (until it dies) have exactly 1 owning pointer in the program. you enforce this by prohibiting direct copying of owning pointers, but make 3 variants of the concept of "copying an owning pointer": one which clones the pointee into a new owning pointer, one which forms a weak pointer, and one which forms a non-storable reference (for use as a function argument). if you want an owning pointer you can move around (and possibly set to null), you make a discriminated union of "nothing" and an owning pointer (like haskell's Maybe a or ML's 'a option), and add a language construct which takes two of these values and transfers an owning pointer between them atomically. passing an owning pointer as an argument -- another form of "making a copy" -- is similarly prohibited.

    in other words, the rules of the language should make cyclical ownership, null pointers, and double-ownership inexpressible.

    this strikes some people as draconian, but I claim that it is nothing more than the extension of stack discipline from control into data. we already enforce, in many languages, that all control structures must be expressed as a tree of function calls, rather than an arbitrary graph of gotos. at any time, there is a single, obvious stack of activation frames from main() to your current context. this simplifies reasoning about control; imagine debugging without stack traces! all I'm saying is that we should force all data structures to express their ownership as a proper tree, rather than an arbitrary graph of pointers. (a sketch of these ownership rules follows the list as well.)
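
first, the promised sketch of "design by accounting", in Rust (all names invented; in a real design the compiler would discharge most checks statically, leaving only the residue to run):

  // a cost center: the dynamic residue of static cost checking.
  // withdrawals that cannot be proven safe at compile time stay in the
  // program, and underflow is a hard failure.
  struct CostCenter {
      remaining: u64,
  }

  impl CostCenter {
      fn withdraw(&mut self, units: u64) {
          self.remaining = self
              .remaining
              .checked_sub(units)
              .expect("cost accounting underflow");
      }
  }

  // imagine the signature carrying "costs at most one unit per element
  // of xs"; here the loop is simply metered dynamically.
  fn sum(xs: &[u64], budget: &mut CostCenter) -> u64 {
      let mut total = 0;
      for &x in xs {
          budget.withdraw(1); // a bounded-above multiple of the body's cost
          total += x;
      }
      total
  }

  fn main() {
      let mut budget = CostCenter { remaining: 8 };
      println!("{}", sum(&[1, 2, 3], &mut budget)); // ok: spends 3 of 8 units
      // sum(&[0; 100], &mut budget);               // would underflow and fail
  }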
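
and a sketch of the ownership-tree rules, also in Rust (again invented, and incomplete: it shows the clone, non-storable reference, and atomic-transfer variants, but not weak pointers):

  // Box plays the owning pointer: exactly one per live allocation,
  // and not directly copyable.
  #[derive(Clone)]
  struct Payload {
      data: u64,
  }

  // a non-storable reference for use as a function argument; no
  // ownership passes here.
  fn inspect(p: &Payload) {
      println!("{}", p.data);
  }

  fn main() {
      let owner: Box<Payload> = Box::new(Payload { data: 42 });

      let copy = Box::new((*owner).clone()); // clone the pointee into a new owner
      inspect(&owner);                       // borrow, don't own

      // the movable "nothing or owning pointer" union:
      let mut slot: Option<Box<Payload>> = Some(owner); // owner is evicted into slot
      let mut other: Option<Box<Payload>> = None;
      std::mem::swap(&mut other, &mut slot); // atomic transfer between the two

      // slot is now None; other holds the single owning pointer.
      let _ = (copy, slot, other);
  }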

swing

spent some weeks digging through the repaint, double buffer management, clipping, and paint coalescing sections of swing. I think I have it mostly right now, but then cairo went and changed underfoot. alas.

monotone

I released a version of monotone with support for human-readable version names and win32. two more boxes ticked off the list.

thermostat

a year ago we bought an extra athlon as a workhorse for compute-heavy jobs. frances needed it mostly, but I use it sometimes for big builds or tests. it turns out the athlon was much faster than the one we meant to buy (accidentally got the wrong part) and came with a very bad fan. this gave it the unfortunate habit of critically overheating and shutting down.

three things were done to fix this:

  1. a new heat sink made of copper.
  2. a program called athcool which twiddled the necessary registers to have the chip actually enter a low power state when the OS runs HLT. this is not the default. if you have an athlon, you probably want this program to run at boot.
  3. a homebrew thermostat consisting of the lm_sensors module, a gkrellm alarm, and a shell script. the sensors monitor temperature and the alarm trips when a certain temperature is reached. this runs the shell script, which pulls out the top process id on the system, sends it SIGSTOP, and schedules an 'at' job for 5 minutes in the future to send SIGCONT.

components and text

on a more abstract, cultural note, I'd like to expand on something I've said here before, about programs.

I think the idea of a "software component" is a wrong and damaging distraction from a fundamental fact: program text is the ultimate component technology.

the text of free programs can be read, indexed, searched, edited, copied and pasted, translated from one language to another, paraphrased, commented, printed out for posterity, machine analyzed and machine transformed. that is far more than any other "component" technology (COM, .NET, java beans, CORBA) has ever permitted, and more I suspect than any ever shall.

after speech, text is the deepest, oldest, and most powerful human language technology. free software has grown strong because it treats text as an ally. a new user has a ton of code to read. a ton of examples. command interpreters which respond to single words. tools which speak and can be spoken to. when you want to learn how a system works in free software, you can read through it, edit it, perturb it, tinker directly with the text. if you need to find something, you run grep, use TAGS, ask google, use LXR or global. when someone wants to explain something they can mail you code. you can buy a book on algorithms and see how things are written.

the web too has flourished by treating text as an ally: page source can be viewed, rendering happens on the fly, markup is minimal and mixed in with the plain written text. screen scraping may be embarrassingly "low tech", but think honestly about some of the wild, out-there programs which people have actually got working using the web's text, and imagine trying to coax a distributed OO system to do the same thing. imagine even agreeing on the interface. it would never happen.

I think arguments about programming languages are of trivial importance compared to the argument about text vs. black boxes. we should always remember the primacy of text. you can translate between the text of different languages (or auto-stub, or what have you), but you're doomed if you have no text.
