Older blog entries for mbp (starting at number 244)

I have started keeping my own blog on sourcefrog.net.

There's nothing wrong with Advogato, but I am getting a little interested in photography and wanted to be able to paste in photos, which is not possible here. (For good reasons, of course -- who wants to see the goatse guy in their diary?) And the absence of trolls is kind of relaxing.

Hackish news is that I am in between projects but still with the same employer, and looking at doing any of several interesting things. I went to try on a new suit today -- shock horror.

distcc is doing remarkably well. Tridge reminded me the other day that he originally said he didn't think it would work, but he was decent enough not to discourage me completely. So my advice for young hackers is to listen to your elders, but perhaps go ahead and prove them wrong!

Speaking of blogs, I really like the idea of per-project diaries, like those for user-mode-linux or distcc. I think this fills a valuable function, similar to that of Kernel Traffic, by allowing people interested in the project to get a sense of how it's getting on, without needing to read the whole mailing list. It's nice to know what the developers are thinking about and how things are going. It gives you a sense of whether it's active or not.

Actually they kind of remind me of the sport reports at high school, where during assembly the captain of the rugby team (somebody perhaps not a specialist at public speaking) would stand up and use clichés or amusing analogies to describe the weekend's matches.


linux.conf.au was great. I particularly liked Rusty's kernel overview and Willy's talk -- it's good to keep up to date with softirqs and similar things happening in the kernel, even if I don't normally work there. Telsa's talk about debugging was pretty interesting too, though I cringed at some of the dumb things some projects do. My distcc talk was well received. More geeks need to learn to make eye contact while they're talking. The next LCA is in Adelaide in Jan 2004; it's well on its way to becoming one of the few big conferences.

"black panther on 'roids with a love affair of genocidal dictators" [1].

I've been called many things, but never before something so hilarious. Really, criticizing our resident neoconservative would be superfluous -- his work speaks for itself.

Nevertheless, the essay Anti-Europeanism in America is an intelligent look at the phenomenon. (mglazer seems to think that it is a book rather than an editorial in a publication that happens to be called "NY Review of Books." Be glad you didn't make that embarrassing mistake in a graded assignment. :-)

If I was going to pick out just a few passages, it would be these:

Robert Kagan argues that Europe has moved into a Kantian world of "laws and rules and transnational negotiation and cooperation," while the United States remains in a Hobbesian world where military power is still the key to achieving international goals (even liberal ones).[...]

[...] American writers should, but often don't, distinguish between legitimate, informed European criticism of the Bush administration and anti-Americanism, or between legitimate, informed European criticism of the Sharon government and anti-Semitism.

I wish mglazer would get his head around that second one and cut out the "anti-semitism" trollery.

you are here

On a related note, I listened to Bush's State of the Union on BBC World at lunch time. I suppose to be fair non-American ears have to edit out all the religious references, which seem to be obligatory in US politics. I'm not sure how I feel about it overall. He has a good speechwriter.

One interesting thing was his use of the term "American Coalition" -- it seems to have condensed out of the earlier, somewhat clumsy "American-led Coalition". (Does the new one imply ownership? And can you have a coalition of one?) It strikes me as quite a nice and handy term for the current political power structure, parallel to "Roman Empire" or "British Commonwealth". Practically everyone reading this lives in the entity called the American Coalition, or at least on the margins of it.

how painful...

You know, looking yesterday at the eerie black smoke and red light in the sky, and the strings of helicopters going across, I thought that this must have been a *tiny* taste of what it was like in the Gulf. Thank goodness they were dropping water not bombs.

I am a bit hopeful that this will make Australians think twice about participating in a slaughter in Iraq. Losing four people is bad enough -- the US & friends killed about 100,000 Iraqis last time. If there is any reasonable way to avoid it, we should.

mglazer seems to think anyone opposed to war must hate the US and love Hussein. I'll try to explain it in small words, not that I think it will do much good: I think the world would be better off were Hussein not in power, and we ought steadfastly to work towards a peaceful & democratic Iraqi government. I just think tens or hundreds of thousands of civilian deaths is too high a price to pay for his removal. (These people didn't choose to be born in Iraq, and few have the option of leaving.) Is that really such an unreasonable or hate-fuelled position?

Canberra on fire

A state of emergency has been declared in Canberra because of severe bushfires. Dozens of houses are on fire. Fire officers on the radio report that the fire is out of control and they are just trying to preserve life and property. About 2/3rds of the city is declared "under threat".

Roads to the south (Monaro Hwy) and west are shut off. Fires are also burning in the Snowy Mountains, Brindabellas and north of Sydney. 100,000 hectares are reported to be burning or burnt out in Kosciuszko National Park.

The sky is eerie. It varies from black to golden to green and blue like a bruise. It's overcast like the beginning of a thunderstorm, but the light is yellow rather than blue, and the air is parched. It's very hot -- up to 43C in the middle of town, now down to 36C. Strong winds are blowing embers and causing spot fires 10-15km ahead of the fire front.

A few water-dumping helicopters are overhead but visibility is becoming too poor for them to continue flying.

Some of my friends are in suburbs that are on alert, but nobody I know is immediately threatened.


As Raph said a while ago, Google's response time is fairly competitive with that of DNS.

Mistyping or misremembering a domain name (or even using an old URL) is likely to take you to a very shonky javascript-infested porn or scam site: exhibit 1, 2. Google basically never makes this mistake, and indeed handles typos in a very friendly manner.

Partially reputation-based ranking seems in fact to be *more* democratic than ICANN.

On the other hand, a single source for essential services is undesirable.

see figure 1

jdub demonstrates that sarcastic elitism is always entertaining.


Linux font support is getting better. I probably have at least fifty fonts installed on my home machine now without really trying. Choosing fonts by picking a name from a list is really not a scalable solution.

I'd like to see GNOME acquire a "more like this" manipulation dialog as a standard widget, and use it for choosing fonts. I don't know if there's a proper name for them -- you see them sometimes in graphics programs where they demonstrate the effect of changing several possible variables, such as brightness, saturation & contrast.

So the first page would be about broad styles of font: serif, cursive, sans-serif; or perhaps display versus text. You could drill down towards e.g. different variations on roman serif fonts, or cursive fonts, or cartoony fonts.

I really need a mocked-up dialog to show what I mean. And I must remember not to smoke drugs while writing about GUIs.


The other thing I should have said earlier is that of course sometimes ugly performance hacks are the only way to get the job done using the tools available. For example, Apache's use of threads on NT is a necessary concession to the poor fork implementation on that platform.

What most recently got me thinking about this was the internal Microsoft whitepaper on MSSecrets in which they admit that implementing IIS as shared-everything threads was an enormous mistake.

I fairly often attach gdb to a single Apache process to see what's going on. Since the process handling a single TCP connection is pretty much isolated from all the rest, this is quite straightforward and it doesn't interfere with anything else on the machine. The writer complains that this is impossible on IIS, because it would jam up all other threads in the process.

Similarly, if a particular process dies because of a bug it doesn't necessarily affect anything else.

pphaneuf, I had the impression that Ulrich might have said that in private conversation with bje, but I will check later. (Unless one of them responds here. :-)

MichaelCrawford, the thing about "using SMP" is that nobody really wants to just "use SMP" unless they're a "how about a beowulf cluster of those" slashdot weenie. People want to get a task done more quickly. We have to ask first of all, is the task parallelizable, and how? For example, if the system wants to handle incoming network requests, then you can do that using either threads or isolated processes. Or if you have a lot of data to digest you can divide it up and work in parallel.
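To make the "divide it up and work in parallel" case concrete, here is a minimal sketch using Python's multiprocessing module (processes, not threads); the per-chunk work and the chunking scheme are just illustrative:

```python
from multiprocessing import Pool

def digest(chunk):
    # Stand-in for real per-chunk work: sum of squares.
    return sum(x * x for x in chunk)

def parallel_digest(data, workers=4):
    # Divide the data into roughly equal chunks, one per worker process.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # Each chunk is digested in an isolated process; only the
        # chunk and its result cross the process boundary.
        return sum(pool.map(digest, chunks))

if __name__ == "__main__":
    print(parallel_digest(list(range(1000))))
```

The point is that nothing is shared implicitly: each worker sees only its chunk, and the results are combined at the end.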

What I'm asking about is how a user program can do SMP via state machines without the use of threads. Saying to run two state machines in different processes isn't the right answer. That's the same as using two threads and presents all the same difficulties.

Well, I would say that it presents many fewer difficulties: the processes are isolated and so don't affect each other if they crash, they can be debugged separately, etc. As pphaneuf points out, shared-everything threads will possibly cause more SMP contention than processes that use special mechanisms to share only what is necessary.

I think things like tridge's tdbs that provide a simple safe abstraction on top of shared memory are an advance in this direction. So too are rusty's futexes (fast user-space mutexes): they give you mutual exclusion and rescheduling *faster* (IIRC) than most thread implementations, even if you're using processes. (Incidentally, rusty and tridge will both be at linux.conf.au.)

If the only way to represent your problem is as a single tightly integrated state machine then that suggests that perhaps it is not parallelizable at all.

lukeg, I think what Alan was getting at is that there is no getting away from the fact that mainstream CPUs *are* state machines. (They have registers, a PC, etc.)

Since consensus is no fun, let me suggest that both threads and state machines have advantages and disadvantages.

I didn't mean so much to structure programs explicitly as state machines, but rather to suggest that data should be private by default and shared where there is a good reason, rather than the shared-everything model used by threads in C. I think often only a few data structures will need to be shared to get an appropriate degree of parallelism.

I don't know Erlang as well as I would like, but I suspect lazy functional languages are more or less an exception to the idea of threads being bad, because they're not something the programmer deals with directly.

By the way, Squid is a fascinating example of continuation-passing in C, because it wants to do select-based async IO without using threads. It's clever, though I think it demonstrates C is not well suited to the problem.

Thanks for the pointer to Communicating Sequential Processes. I'll look out for it.

Perhaps you'd like to post a precis of how threads are used by Erlang?

20 Dec 2002 (updated 20 Dec 2002 at 05:45 UTC) »

bje had a good quote from Ulrich: "threads and stupid people attract each other." It goes with Alan Cox: "A computer is a state machine. Threads are for people who can't program state machines."

We thought at lunch the other day: except for very rare cases where you really do want to simulate many asynchronous processes it's hard to see threads as anything but a performance hack. Instead of using threads, you really want:

  • Cheap structured IPC and sharing, so that data can be explicitly shared as necessary, rather than sharing everything.
  • Good async IO.
  • Good flow-control mechanisms for doing background tasks.
  • ....
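The first item on that list, cheap structured IPC, can be sketched with an ordinary pipe between processes; this is only an illustration of the shape (real structured IPC would want richer message types):

```python
from multiprocessing import Process, Pipe

def child(conn):
    # The child shares nothing implicitly; it sees only what arrives
    # on its end of the pipe.
    while True:
        msg = conn.recv()
        if msg is None:        # explicit shutdown message
            break
        conn.send(msg * 2)     # stand-in for real work
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send(21)
    print(parent_end.recv())   # 42
    parent_end.send(None)
    p.join()
```

Everything crossing the boundary is an explicit message, which is exactly the inversion of the shared-everything default.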


Have a happy holiday, everyone.

People in the northern hemisphere might like to imagine me going for a swim in ~36C (~95F) dry heat.

Don't forget to get ready for linux.conf.au. It's going to rock all over the place. I think there are going to be some pretty cool surprise guests.

Heard around the office: "ClearCase is so good, I encourage all our competitors to buy it." (Oops, I guess they did! :-)

I started writing a macrobenchmark/test for distcc. Inspired by GAR and GARNOME, it downloads, configures, and tries to build various large packages, timing the local and distributed build times. It complements the test suite, which checks correctness on small interesting cases, by feeding through a lot of valid diverse cases.

It reveals that performance across 3 machines is typically 2.0 to 2.9 times better. For any given project the results are quite reproducible. Presumably the slow ones have either lots of non-parallelizable or non-distributable work, or something about their Makefiles is not handled well.

Another way to look at this is that distcc reaches roughly 65% to 95% of the theoretical limit of 3.0x faster. Typically parallelization incurs some cost; the high end of that range is not bad. I wonder how much of the loss is unavoidable? distcc itself does not use many cycles, but the scheduler that decides where to compile a particular file is not optimal.
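One back-of-the-envelope way to interpret those numbers is through Amdahl's law: given the measured speedup on n machines, solve for the fraction of the build that actually ran in parallel. (This is a rough model; real builds also lose time to scheduling and network overhead.)

```python
def parallel_fraction(speedup, n):
    # Invert Amdahl's law, S = 1 / ((1 - p) + p / n), solving for p,
    # the parallelizable fraction of the work.
    return n * (speedup - 1) / (speedup * (n - 1))

# The measured range across 3 machines from this entry:
for s in (2.0, 2.9):
    print(f"speedup {s}x on 3 machines -> "
          f"~{parallel_fraction(s, 3):.0%} parallelizable")
```

By this estimate a 2.0x build is only about 75% parallelizable work, while a 2.9x build is nearly all parallelizable, which fits the guess that the slow projects have lots of non-distributable work or awkward Makefiles.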

Python is excellent for this -- so easy to write very concise and clear tests.

Testing is so fun once you get into the swing of it. There's really a lot of creativity in trying to work out how to exercise a particular aspect, either by improving the program's testability or by writing a harness or driver.

I'm reading an ACM anthology on automated testing. I forget the name. More on this later.

Seth Schoen makes a doubleplusgood point

"Trying to design a limited-purpose computer is like trying to design a limited-purpose spoken language. Imagine trying to design a language that can express only some thoughts but not others."

Seth replies with a worthy comparison of this approach to manipulation of language in Orwell's 1984.

Jem Berkes wrote a good essay about 1984 a while ago.

I entered the Shell / Economist Essay competition earlier in the year. I didn't win, but the winners are so well written that I can't feel bad about it. I think the copyright on my entry now returns to me, so I will put it up later. In particular, the gold prize winner Milksop Nation is just brilliant.

In the entire state of California there is no saloon with a clientele so reckless and depraved that the law will avert its eyes and permit them to take the insane risk of drinking a beer in a building occupied by a person who might smoke a cigarette.

(Good rhetoric is slightly exaggerated and simplistic.)

We went to the California Academy of Sciences to see the skull exhibit, fish roundabout, and Eames Powers of Ten exhibit. (Didn't you watch Powers of Ten at school? Don't you have nerdy nostalgia too?). Very cool. mjs says that the Academy of California Sciences ought to have reiki, dolphin telepathy and homeopathy.

I've been listening to the BBC Radio Play of The Lord of the Rings while driving around California. I like it as a story, but I find the underlying philosophy a bit strange. The bad guys are evil in their bones -- there is no possibility of even a single orc joining the other side, or any question that there might be fault on both sides. Whereas in the real world, given sufficient perspective (say, a thousand years), it often seems that there is fault on both sides, or at least that evil is not so easily apportioned by race.

A war without death, but not what you might think:

Armored Combat Earth Movers came behind the armored burial brigade, leveling the ground and smoothing away projecting Iraqi arms, legs and equipment.

(I expect enthusiastic praise for US Army landscape gardening from mglazer.)

13 Nov 2002 (updated 13 Nov 2002 at 02:36 UTC) »


movement, here is at least one reference for malloc returning memory to the OS:

Doug Lea's malloc (If anyone wants to be a better programmer, I would suggest they read stuff by Doug Lea.)

The "wilderness" (so named by Kiem-Phong Vo) chunk represents the space bordering the topmost address allocated from the system. Because it is at the border, it is the only chunk that can be arbitrarily extended (via sbrk in Unix) to be bigger than it is (unless of course sbrk fails because all memory has been exhausted).

"wilderness" is such an excellent, vivid, clear name.

I agree that it will often not be the case that there is contiguous memory at the top that can be returned to the OS. However, (as dl says), for programs that allocate memory in phases, or in a stack pattern, it may well be that memory which is allocated last is freed first.

Big, long-lived allocations should perhaps be in mmaps (perhaps containing arenas), so that they can be returned. For example, Samba now stores a lot of private data in .tdb files, which are mmaped. When they're not used, the memory is returned.

However, I think being able to return memory is perhaps atypical. Most programs run to completion, allocating memory all the way (e.g. gcc), or reach a steady state and then remain within it (e.g. servers or applications.)

It would be nice if Linux let you find out how many pages were being used by a particular map, but I don't think there is any easy way at present. Perhaps with rmap...

Of course, the more common case of "returning memory" is just allowing pages to be discarded by not touching them. This also indicates why it can be worthwhile to have swap on boxes which have plenty of memory: data pages which are still allocated but never touched can be written out, allowing more ram to be used as a disk cache. Apparently swapfile support will be better in 2.6, reducing the problem of needing static allocation of swap partitions.

A Java implementation that used handles and did not rely on objects not moving in memory would have the option of defragmenting itself to allow wilderness to be returned to the OS, or even just to avoid paging. I don't know if this is ever considered worth the code complexity and CPU cycles that it would cost.

The "hotspot" effect would suggest that for most programs where memory usage is a problem, it will be a few routines or classes of allocation that use most of the memory. Changing them to use mmap, or less memory, or an external file might fix it.

Perhaps oprofile would let you find out what programs are "causing" paging? (Not that it's really any one process's fault...) I haven't tried it, but I really want to.

I checked quickly and Debian sid's libc malloc uses mmap by default for allocations of 200kB or more. (I'm too lazy to find the exact value.) They're unmapped when freed.
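The mmap route is easy to demonstrate directly; here is a minimal sketch using Python's mmap module (the 1 MiB size is arbitrary):

```python
import mmap

# Allocate 1 MiB as an anonymous mmap -- the same mechanism libc
# malloc uses for large requests. Unlike heap memory below the brk,
# an mmap'd region can be handed back to the kernel at any time,
# regardless of what was allocated after it.
SIZE = 1024 * 1024
m = mmap.mmap(-1, SIZE)   # -1 means anonymous (not file-backed)
m[:5] = b"hello"          # touch a page so it is actually committed
print(m[:5])
m.close()                 # the pages go straight back to the kernel
```

There is no wilderness problem here: each mapping is independent, so freeing it never has to wait for everything above it to be freed first.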
