2006-11-13: Novelling vs. My Sanity
Novelling vs. My Sanity
"I need to attract more customers so I can get promoted or at least demoted so I don't have to wear this squid anymore. It's a long story."
Final score: Novelling:1, My sanity:0
2006-11-06: The Brave New World of Schedulation; The Google Factor
The Brave New World of Schedulation
Schedulator is the software project management system we originally developed at NITI to solve a simple problem: I'm a programmer, and I ended up as a manager, and I didn't want to spend all my time doing management.
It's essentially a fancy automated implementation of Joel Spolsky's Painless Software Schedules, where each developer can manage his/her own schedule. By combining the results generated that way, you can produce an accurate schedule for the whole team.
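The combining step is mostly just arithmetic, and a tiny sketch makes it concrete. This is my own illustration of the idea, not Schedulator's actual code: the `Task` and `Developer` names, the `velocity` correction factor (each developer's historical ratio of actual to estimated time, in the spirit of Joel's approach), and the numbers are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    estimate_hours: float
    done: bool = False

@dataclass
class Developer:
    name: str
    hours_per_day: float          # productive hours, not hours at the office
    tasks: list = field(default_factory=list)
    velocity: float = 1.0         # actual/estimated ratio from past tasks

    def days_remaining(self):
        remaining = sum(t.estimate_hours for t in self.tasks if not t.done)
        return remaining * self.velocity / self.hours_per_day

def team_ship_days(developers):
    # The team ships when the busiest developer finishes.
    return max(d.days_remaining() for d in developers)
```

For example, a developer with 16 estimated hours left, perfect estimates, and 4 productive hours a day needs 4 more days; combining everyone's schedule is just taking the maximum.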
Schedulator isn't very many lines of code, but it's the result of years of tweaking of the Schedulation process, which actually works pretty well, and has a lot of advantages over "normal" software scheduling. As part of my new job, I've been redoing major portions of the Schedulator to make it more flexible: it now supports bug tracking systems other than FogBugz, for example (although that one's still my favourite :)), and no longer requires GracefulTavi (as much as I like wikis, doing it that way led to a pretty inflexible design).
The new Schedulator isn't packaged up nicely, and may never be because it has a pretty limited audience. But it should be much easier to get up and running with the new version than the old one, because it has far fewer prerequisites (and the parts that have prerequisites are now optional plug-ins).
I did, however, write some extensive documents explaining the theory and practice of Schedulation. My favourite is the first one, Schedulator Philosophy. Read it, and maybe find out if Schedulator is right for you.
The Google Factor
Not long ago, pcolijn wrote about Google's development process and compared it with Schedulator. I generally agree with his analysis. The problem is that most of the problems Peter identified were not problems with Schedulator, per se: they were problems with the general team atmosphere and management. (I'm allowed to say that, because I was one of the managers.) Schedulator is very compatible with the "Agile development style" done at places like Google. Okay, so Schedulator doesn't come with any index cards included, but it does do a lot of the grunt work of arranging those index cards - particularly when there are hundreds of them in your bug tracking system and they all seem important - for you.
As much as Google is indeed awesome, hires a lot of great people, produces a lot of great things, and makes an awful lot of money in the process, I sometimes caution people against taking their experiences at Google too seriously. People in highly successful companies are prone to an attribution bias, much like people are in unsuccessful companies. That is to say, no matter what your team does at Google, Google is still going to be making billions of dollars. And no matter what your team does at some unsuccessful or dying company, you probably can't pull it out of the toilet. In both places, the consequences of your actions are disconnected from reality. (Bill Gates called this the honeymoon phase.) At Google, raises and bonuses for everyone. At the unsuccessful company, even great people get laid off sometimes because there's no money to keep paying them.
For example, the article that Peter was responding to had this very important paragraph:
Sure, this is a great incentive for people to work harder, and Google's very wise to do it. But do the math: every single project that launched? Always small teams? But Google employs thousands and thousands of programmers! They can't all be on the big screen. In fact, most of them can't. But Google's still successful.
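Doing that math with made-up but plausible numbers (these are illustrative assumptions, not Google statistics) makes the point starkly:

```python
# Back-of-envelope version of "do the math", with invented numbers.
launches_per_year = 20   # assumed
team_size = 5            # "always small teams"
programmers = 5000       # "thousands and thousands"

on_a_launched_project = launches_per_year * team_size
fraction = on_a_launched_project / programmers
print(f"{fraction:.0%} of programmers are on a launching team")  # → 2%
```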
Which kind of team are you on? How can you tell? Statistically speaking, you're probably on the unsuccessful kind. And beware of logic that sounds like, "my team did this, and we weren't too late" or "their company did it this way, and went three years overtime." There's a correlation there, but there could easily be no causality. Super-ultra-smart people could do their project plan on stone tablets and still get their projects done an order of magnitude faster than complete boneheads.
That said, I used (and still use) Schedulator, and it works great for me :)
2006-10-30: Programmer Replacement
Programmer Replacement
As a programmer, there are two main ways to make yourself irreplaceable. One is to write code that nobody else can possibly understand, so they can't figure out how to get rid of you. The other is to write code so great that nobody could ever want to get rid of you.
The best way to be replaceable is to be somewhere in between.
2006-10-27: Six Words
Six Words
apm linked to a series of "six word short stories" by famous people in Wired Magazine. Many are quite excellent.
I'm not famous, but this sounded like fun, so here I go:
(This is a work of pure fiction. Any resemblance to real CorporateDogs is purely coincidental.)
Woo hoo! Only 49,994 words left!
2006-10-24: A day in the life of power management
A day in the life of power management
Hmm, it seems the expected flamewar is flaming even better than expected. hub chimes in to say that power management has been working just fine on Linux since 1999, thank you very much. Um, yes, well... sit back and relax, because I have a story to tell. Like all my favourite stories, it involves a lot of stupid people doing stupid things. It doesn't have a happy ending. If you're easily depressed, you should stop reading right now. That means you, Pierre.
Once upon a time there was APM, or "Advanced Power Management." As with most fancy-named specifications, its name certainly doesn't imply that there was an earlier "Totally Simple Power Management" that it replaced. The way APM worked was it used a kind of "super supervisor mode" in the x86-series CPUs to actually run some BIOS code above the operating system layer in an invisible way so the kernel couldn't detect it (except for the bad BIOS coding that would result in lost clock ticks and random interrupt latency). This BIOS code would do things like detect presses of the laptop's suspend button and forcibly suspend the system whether the operating system liked it or not. If your operating system supported APM, which was by no means necessary in order for suspend/resume to work, it could get a notification from the BIOS that it was about to suspend. In later APM specifications, they added a way for your software to reject the suspend operation in progress. However, because of BIOS bugs, this didn't really work so well. Also, if you took too long to service the notification, the BIOS would just give up on you and suspend anyway. Ah, those were the days.
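The suspend handshake described above is easy to caricature in a few lines. This is a toy simulation of the protocol's shape — BIOS announces the suspend, the OS may reject it (in later APM revisions), and a dawdling OS gets suspended anyway — not the real `/dev/apm_bios` interface, and the timeout value is invented.

```python
# Toy simulation of the APM suspend handshake: not real APM code.
NOTIFY_TIMEOUT = 5  # ticks the BIOS waits for the OS (assumed value)

def bios_suspend(os_respond):
    """os_respond(tick) returns 'accept', 'reject', or None (still busy)."""
    for tick in range(NOTIFY_TIMEOUT):
        answer = os_respond(tick)
        if answer == "reject":
            # APM 1.1+ lets the OS veto the suspend (BIOS bugs permitting).
            return "suspend cancelled"
        if answer == "accept":
            return "suspended cleanly"
    # The OS took too long: the BIOS gives up on it and suspends anyway.
    return "suspended forcibly"
```

An OS that never answers gets `"suspended forcibly"` — exactly the "whether the operating system liked it or not" behaviour.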
Oh, also, because the BIOS writers for a laptop always knew exactly which hardware devices were in the particular laptop, the APM BIOS could always know exactly how to suspend and resume all the devices in your laptop transparently to the kernel. And yes, it actually worked just fine. Even the video mode was saved and restored correctly on most systems, all behind the scenes.
But that was a long time ago. To give you an idea of how obsolete APM is, my apmd page seems to be the fourth hit in a Google search for APM - and it's the first one about power management. And let me tell you, that's not because I'm such a popular guy.
There were two perceived problems with APM. First of all, it was highly x86-specific, which annoyed the people at Intel who were trying to make and sell a non-Intel-compatible processor. (Remember that failed experiment? Me neither.) Secondly, operating systems programmers correctly noted that all BIOS programmers are crackheaded morons who can't implement an API correctly to save their lives. The way things tended to work was this: Windows didn't come with native APM support at the time, and the BIOS programmers would screw up the APM implementation horribly, but that was okay! Because every motherboard had to include a special APM driver for Windows anyway, and this APM driver would just implement the broken APM calls that the BIOS required, and so nobody would know the difference. Sure enough, nobody (er, well, no Windows users) did, and the world of power management was a fine place. For Windows users.
Linux users had a bit of trouble because they had to independently discover all the stupid BIOS bugs in various laptop APM implementations. But they mostly sorted this out eventually. In any case, one popular way to make your problems go away and have suspend/resume work mostly right was to simply disable the Linux APM altogether and just have your BIOS do it transparently.
So anyway, back to those people who wanted to fix what wasn't particularly broken. They invented ACPI, and I want to kill them. Oops, I'm getting ahead of myself.
ACPI stands for Advanced Configuration and Power Interface. Now, I have two things to say about that. First of all, there was no "Delightfully Simple and Straightforward Configuration and Power Interface," although ACPI actually makes APM seem that way in retrospect. And secondly, ACPI has nothing at all to do with APIC, the Advanced Programmable Interrupt Controller. The only thing the two have in common is that they both have Linux kernel boot-time options to disable them, because they both have buggy Linux drivers that cause your computer to crash a lot.
Now where was I? Oh, right, ACPI. So, the idea of ACPI was to get the BIOS developers out of the way on a normally-running system by turning around the power interface: instead of the BIOS running things and just occasionally notifying the kernel when something happened, the kernel would run things and just ask the BIOS to do stuff occasionally, like power down various hardware and blink the lights and so on. That would mean BIOS bugs wouldn't be so harmful. Oh! And while we're here, because we're insane, why not implement the whole thing using Forth-like bytecode instead of real assembly language, so it can also run on that new (doomed) 64-bit processor we've been working on? Forth-like bytecode is super simple and can be implemented in a couple of kbytes, so it won't cause much overhead, and suddenly everything will be portable. It'll be great!
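To be fair, the "couple of kbytes" claim wasn't crazy on its face: the core of a Forth-like stack interpreter really is tiny. Here's a toy one — an illustration of why the pitch sounded plausible, not ACPI's actual AML bytecode, and the token set is invented:

```python
# A minimal Forth-style stack machine: push integers, apply operators.
def run(program):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
    }
    for token in program.split():
        if token in ops:
            b, a = stack.pop(), stack.pop()   # operands in postfix order
            stack.append(ops[token](a, b))
        else:
            stack.append(int(token))
    return stack[-1]

print(run("2 3 + 4 *"))  # → 20
```

That's the whole interpreter. The problem, as it turned out, wasn't the bytecode engine; it was everything around it.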
Because I like foreshadowing, I'll give you the quick version of what I'm about to say. To my total amazement, they managed to fail totally on all counts. How's that for consistency?
First of all, ACPI completely and utterly fails to remove the BIOS from the picture: in fact, you're calling back and forth to it far more than you ever did with APM. And because you're calling it from your context instead of it having its well-known super-supervisor context, it's more likely to get confused by the funny way you do your stack or registers or memory protection modes. And there are so many ways to call into ACPI, because they broke it into tiny pieces so your kernel can control exactly what it wants to when it wants to... except that the BIOS manufacturers didn't actually test what happens when you call their stuff in random order, so actually you have to reverse engineer exactly what order is safe to call things in, or your system crashes horribly.
Oh, also, the bytecode thing went awry somewhere, because the Linux ACPI implementation runs to hundreds of kbytes and is filled with all kinds of weird and very complicated special case drivers for obviously totally dissimilar APIs like fans (it goes fast! it goes slow!) and CPUs (it goes fast! it goes slow!) and LCD backlights (it gets bright! it gets dark!). And naturally, ACPI, being a big horrible pile of crap, was never adopted on any non-x86 platforms, so its CPU independence doesn't help.
(Around the time all this garbage was being invented, people were trying to make non-x86 platforms run PCI video cards, which was tough because the video card initialization code was written in x86 assembly. The XFree86 group and other groups solved this problem unilaterally in a less elegant-sounding but actually working way. To this day, video BIOSes are still in x86 machine code, not bytecode.)
But that's not all!
Remember, the OS developers wanted to get the BIOS developers out of the picture, because BIOS developers are indeed crackheaded morons - I think we can all agree on that. Unfortunately, while they completely failed to do this - and in fact, ACPI makes things much worse - they also made it so the BIOS developers can happily just disclaim any responsibility for whatever parts of power management they don't want. Once upon a time, in the golden age of APM, the BIOS had to support all your devices because it was the BIOS whose @#$! responsibility it was to suspend and resume everything. Now, however, the OS is expected to pick up wherever the BIOS leaves off - which is almost everywhere. That means most ACPI BIOS implementations don't actually handle any parts of the suspend/resume process properly, often including even the CPU speed. Certainly they're not smart enough to suspend your ethernet chip. And heaven help you if you want your video mode to be restored! My now-stolen last laptop, a Sony Vaio, actually had an ACPI interface to control the LCD backlight... but it didn't do anything. There was a totally different non-ACPI backlight control elsewhere in the system, and the BIOS developers simply didn't bother to take out the ACPI one leftover from a previous laptop model. The OS has to know this, based on the laptop model number, and deal with it.
But that's okay! Because the Windows driver programmers, sitting right next to the BIOS programmers or maybe the hardware designers, can simply compensate for all this stuff. The CD that comes with every laptop contains modified drivers for all the broken stuff the hardware and BIOS designers did wrong when building your system in the first place, so everything is fine! For Windows users.
Now, Linux certainly didn't have it all easy in the days of APM, but things had a pretty good chance of working because they were relatively simple. With ACPI, everything is just a total disaster. You have to implement suspend-to-disk all by yourself, and it's doomed to suck because the stupid BIOS does its time-consuming and useless initialization before you're even allowed to start. You have to implement power saving features in every single driver, where with APM you didn't have to do it in any driver. Linux developers are notoriously bad at handling exception conditions, which power management is, so the power management code for most drivers is almost-untested barely working garbage. And of course, you do still have to call into the mostly-but-unfortunately-not-totally-useless ACPI BIOS, which for some reason takes hundreds of kbytes of source code to do and requires talking to a horrendously buggy BIOS. That means you need an exception table listing every laptop anyway to tell the kernel which bugs you need to work around at which time.
And if you do all that stuff correctly, your laptop will suspend and resume properly!
And you know what? Even then, it'll still suck, because that's just what you have to do to make it barely work at all. If you want to, say, bypass the stupid BIOS POST phase to make it boot faster, or do what Apple does and actually save to disk and memory at suspend time, then resume from memory whenever possible, or have the system suspend to memory and then suspend to disk if you stay suspended for a long time, or any of that other complicated stuff: that's all extra. Meanwhile, most hardware developers are a bunch of slackers and even when you do suspend the bloody thing properly, the battery dies in a few hours anyhow.
So kudos to the Linux developers for making it almost work. I'm sure that was really hard. Yay team.
Compared to that, Apple cheated like crazy. But I still like my Mac, because it actually works.
(And I have Ubuntu running constantly in a virtual machine because I mostly hate Darwin, but that's another story.)
2006-10-23: In my new role as Mac zealot...
In my new role as Mac zealot...
Far be it from me to get into any sort of flamewar-causing religious-type discussion (ha ha), but pcolijn asked how any sensible developer could use a Mac for their work, given a few of its flaws. In order, and without any actual reference to the questions:
(How fine? Well, pphaneuf asked me to time how long it takes to compile Quadra on native OSX vs. my virtual Ubuntu, but unfortunately I'm not smart enough to compile it on either one successfully. Anyway, it's fast enough for me.)
2006-10-21: Delirium 3: Specs and Constraints
(No, I still haven't finished transcribing my handwritten notes from when I was laptopless in early September. Here's another entry. Note to self: I know it's where the term "copy and paste" came from, but yeesh, handwritten notes are a really inferior medium for actually doing it.)
Delirium 3: Specs and Constraints
So let's say you managed to assemble a team of obsessive, maniacal geniuses who are all "signed up" and ready to get the job done. Great! So, uh... what job, exactly?
There's more than one reason people don't run teams like this too often. The most basic reason is obvious enough: the rules are so strange, undocumented, counterintuitive, and completely opposite from normal (successful!) engineering practice that people just don't try to assemble the "right" conditions very often. But if, after takeoff, the results tended to work out spectacularly... well, people would get over all that other stuff, right? ... Right? ... Actually, yes, I think so.
But life isn't so simple. There's a much more serious problem with the "obsessive genius" technique. The problem is that the results are entirely out of control. You'll almost certainly get something cool - probably more cool stuff than you know what to do with. But you won't know in advance exactly what it'll be. And that, my friends, makes it hard to run a business.
Some companies do it anyway. They start a "research lab" or "skunkworks," put the wheels of obsession in motion, and pray like crazy that something good comes out.
Well, research labs aren't for me. I'm not going to beg "the establishment" for their precious resources, with nothing more to offer than a random splash from my really, really big shiny innovation firehose. When all I have to choose from is total randomness in large quantities or total determinism in disappointing quantities, I'm in a classic compromise situation. The non-compromise condition is massive, sustainable, directed ingenious output. And my theory is that it's a lot easier to direct output that's already massive, sustainable, and ingenious than it is to take small deterministic output and make it bigger and more ingenious. It's easier to learn to aim a big firehose than to put out a big fire with a really accurate squirt gun. Mind you, no matter how well you aim, some tangential stuff is still going to get a little wet. But it'll dry off eventually.
Which all finally brings us around to my point. It is possible to direct the output of a team of obsessives - sort of. You just have to be careful about what exactly you direct.
Determinists love specs - clear descriptions of the ins and outs, which must be captured exactly. Then you leave your geniuses to do any design they want, as long as it implements the spec precisely.
I tried that method. I really did. I sliced and diced specs vs. designs in every way I could think of. "UI design isn't specification, it's design!" I declared once.
But with each and every attempt at this technique, I failed in one of two ways. Either the spec was too dumb and restrictive - or the result was too crazy and random. Either the product was overspecified or it was underspecified. And now I realize that it always will be. Trying to "right-size" your spec is just a tradeoff - a compromise. The right answer doesn't even lie on that axis.
But even though I've never really managed to separate the spec from the design (for management purposes at least - it works for post-documentation, or what we sometimes call a "retrospec"), I have successfully produced non-random, ingenious results - and I've worked with other teams, like NVS and GourD, that produced them. What's the secret?
The secret is "end-to-end." The problem with our spec-design separation is actually very simple: it draws a line between the real, deterministic world - the spec - and the world of genius - the design. But that's nonsense. How can your product be ingenious if its workings were defined by someone deterministic? How can it be useful if the workings are defined by someone crazy? The spec, too, must be written by someone in genius mode. Those were the real successes I've seen - where the spec, the design, and the product were all done in genius mode, "proper processes" be darned. And I've been fooled like everyone else: GourD had a Requirements doc, then a Spec, then a Design, then an implementation. Deterministic, right? The model of good engineering!
Yeah, sure. The author of all those parts was the same person, and while they were written down in sequence - which is a very fair, respectable, methodical approach applicable to geniuses as well as anyone else - I know for sure that the documents were "non-causal." The contents of the design changed the requirements long before anyone wrote down the design. I was there. You can't fool me. Just try to tell me the requirements would have been exactly the same if Mediawiki hadn't existed as a model. It was all a good exercise in thinking clearly. But it wasn't a "deterministic engineering process." It was a sham. It was an ingenious bunch of work that couldn't help but come out because all the right pieces, including the right people with the right motivations, were in place.
So the question is, then, how did it get to be the right product? Where did the genius stop and the real world begin?
The answer to that question is the key to everything.
2006-10-17: ISO-9001; Writing stuff down; High-level intellectual discussion
ISO-9001
My apartment in Montreal is near an ISO-9001 certified parking lot. I didn't even know that was possible. Or necessary. Or even something that would have occurred to anybody in a million years. But now... I'm just not sure anymore. Is it really safe to deal with those other, unprofessional, "fly by the seat of your pants" parking people? I mean, what if I'm parked there and they just go out of business, or the pavement falls off, or worse? Will they have all the requisite forms?
Writing stuff down
apm wonders if an expressed idea is more valuable than an unexpressed one. You can spin in circles for hours wondering what "valuable" means and what's the meaning of life and whether there's a God, but let's think of it this way instead: why exactly is it so hard to write things down?
Well, it is.
I think the reason I write stuff down, at least, is because writing it down forces me to make it better. It's so easy, once something is written down, to see that it completely doesn't make sense and you'd better start over. (Perhaps this is why salespeople don't write stuff down.) So even if you don't care whether writing your story will improve the lives of anyone else, maybe you can justify all that work by thinking that maybe you'll be improved in the telling.
Speaking of which, several people have asked me why exactly I write this journal, since it's pretty obvious that I mostly blather on about nothing important, there's no particular unifying theme, most people don't understand what the heck I'm talking about most of the time, I'm certainly never going to get rich doing it, and so on. I've made up several answers in the past. But here's another possibility: maybe writing it down helps me to clarify things in my own mind. And maybe pretending to write for a "real" audience keeps my standards a bit higher, so that I'm forced to clarify things even more. But in the end, maybe I just write because I enjoy reading what I write. And yes, I always laugh at my own jokes. Always. Even if it's just out of sympathy.
Err, my condolences if you actually enjoy reading this crap. Feel free to continue. But you'll be sorry:
High-level intellectual discussion
Okay, I just have to get this out of my system. The other day I was cleaning up a spill - as a purely scientific experiment done in the most hygienic way imaginable (given the circumstances) - with my tongue. And I noticed, to my amazement, just how effective it was. Now, we all know that smearing around a spill with your finger just smears it around and makes it worse. And paper towels sometimes help, particularly the extra-absorbent kind.
But as far as I know, my tongue isn't extra-absorbent. It's not like I lick stuff up, then pull it back in and squeeze it out so I can lick up some more. You just lick stuff up, and it goes in, and then you can lick more stuff up. Why can't my fingers do that? Isn't it amazing? Wow.
2006-10-12: Selfishness; BarCamp, Advogato, and Self-selection; NaNoWriMo
Today, a collection of random and mostly unrelated thoughts. Oh, how I do prattle on.
Selfishness
Today, while I was considering the sad state of the Quebec construction industry, it occurred to me that unionization, and socialism in general (ie. Canada and especially Quebec), are not as straightforward as they seem.
The idealized American system is a rather liberal everyone-for-himself economic free-for-all, in which "the system" is intended to make it so that if everyone acts "selfishly," it'll all work out for the public good. In Canada, we don't trust that theory so much, so the government gets involved more frequently, and in general people take a somewhat more "moralistic" outlook even in business decisions.
Both systems have their good and bad sides, but the resulting style is very different, which I will explain in terms of my experience at Subway (the food chain) in Canada and in Seattle. In Canada, you go to Subway, and they make you a sandwich according to your specifications. In Seattle, the person who made my sandwich was totally insane and made a perfect sandwich according to my specifications faster than I've ever seen anyone make a sandwich in my life. You could barely even see his hands, he was so fast. I'm not making this up.
Now, I don't go to the U.S. very often, so I don't know if this is typical or not. But imagine it is: the idea is that, because everyone is being selfish, they become hypercompetitive: not only do store managers have to outdo the other restaurant chains, but maybe they try to outdo other Subway outlets as well. If you can get your sandwich twice as fast here as at the place two blocks away, maybe you'll choose this store instead of the other one. So they focus on customer service. Figuring it out was all a bit circular, but in the end it's not indirect at all: being selfish equates to being as good to the customer as possible. That's selfish?
In Canada, people are more laid back and they don't worry about competition as much. The result is nobody particularly cares how fast they serve me my sandwich. In fact, if the Sandwich Ace from Seattle showed up and tried to work here, people would probably look at him funny: he makes the other employees look bad. The unselfish thing for him to do would be to slow down and not rock the boat, resulting in worse customer service overall. That's unselfish?
Which system is more selfish then, really? Is it the one we thought it was?
BarCamp, Advogato, and Self-selection
I went to BarCampWaterloo a couple of weeks ago and it was quite entertaining: a small, interesting group of people, just like I like.
The problem is, that doesn't make any sense.
In general, communities that are good and interesting and widely applicable tend to start off well, and then explode into hypergrowth until they're no longer manageable and most of the people there are just annoying and all the fun is gone. Take the Linux kernel developers (it was possible to follow their mailing list, once, even if it wasn't your full time job), or Debian, or actually even This Whole Internet Thing. Impromptu communities (I wanted to say "online communities", but BarCamp isn't, exactly) tend to be in one of two states: growing or dying, and growing typically leads eventually to hypergrowth. The best you generally hope for is either slow explosion or slow death, so that it can be fun while it lasts.
The typical way to slow your community's growth sufficiently is to limit your topic area, so fewer people are interested. Forget the linux-kernel or debian-devel mailing lists; try linux-fsdevel or debian-apache instead.
What's weird about this is that I have two as-yet-unexplained counterexamples: Advogato, which claims to be a web site connecting "free software developers" (how restrictive is that?) and BarCamp, which generally claims to be "about Web 2.0" (nobody even knows what that is!) but in which anybody can show up and present about anything even remotely relevant.
Why, then, did I find that the majority of stuff produced by both communities was interesting to me? Certainly I'm weird, because the majority of people wouldn't have found them interesting at all. But that's the point. The communities created are self-selecting and quite restrictive, but the selection isn't by topic area. It's something else. Perhaps BarCamp simply selects for people who don't think deliberately failing to plan your conference in advance is a stupid idea. And that's a pretty small group of people - and they're pretty compatible with each other.
(This concept of people connecting better based more on style than content relates to my earlier comments on literacy.)
NaNoWriMo
This is a pre-announcement of my intention to join National Novel Writing Month this year, in which each foolish participant attempts to write a crappy 50,000-word novel in 30 days. Normally I wouldn't bore you with my plans for such things, but apparently one of the keys to success is setting yourself up to get teased a lot if you slack off (ie. mutual motivation). In my case, this is actually by far the most likely situation, since I don't actually have any free time to write a novel in.
But I mean... impossible deadlines and absolutely no quality standards? Hello, count me in! Anybody wanna race?
2006-10-10: Making banking fun, round 1; Bankers are literate!
Making banking fun, round 1
It turns out that other than so-called "business logic," there isn't much to business software. It's all the same stuff: a database connection, forms, fields, buttons, some daily/weekly/monthly batch operations, and the dreaded Reports.
Over and over again.
What's interesting - and interesting is the first step on our way to fun - is the process of converting fluffy human requirements into usable software. Many programmers doing business software aren't quite up to the level of programmers doing other kinds of software, but also interestingly, you don't get a lot of cross-pollination between groups. Things just start getting done a particular way, and they keep getting done that way, and nobody really thinks twice, until you get these funny "programmer cliques" that believe totally different things and don't really talk to each other. You know, the Oracle types don't have much respect for the MySQL types, and vice versa, and they'll never resolve their differences because, well, they don't really care enough to bother.
One thing I learned is that there's a huge, tangible, visible difference between a company that understands business (and software is secondary) and a company that understands software (and business is secondary). Software companies do an awful lot of silly things that just don't make basic business sense. I don't mean any particular company here - they're almost all pretty clueless. Me too, for now. But it can be tough not to be clueless when you don't even know what clueless is, and worse, you've isolated yourself from the people who do know.
Now turn it around. People with business sense tend to be profitable, which is a good start. But they don't know that much about advanced software development, and silly things happen and time gets wasted, and they don't know that those things are totally avoidable. The smart ones, though, are willing to learn. Great software is an amplifier for a great business model, and the two put together is how you can get a really great company. This is straight out of Good to Great, of course. Go figure.
Bankers are literate!
All this is tangentially related to another amazing discovery I made recently, which is that banking-type people are actually much more compatible with programmers than I thought. You would normally not expect this, since banking-type people (kind of an extreme form of accounting-type people) are in many surface ways different from programmers: less energetic, less liberal, less creative, and often less smelly.
But one thing we have in common - and this is a critical thing when you're trying to work together - is that both bankers and programmers are literate.
"Literate" is the word I use to describe the sort of person who processes text efficiently. I hold literacy to a higher standard than, say, the people who tell me 97% of Canadians are literate. Maybe so, but I'm sorry, hardware engineers can't spell. That goes for assembly language programmers too. I'm grossly overgeneralizing, but the generalization runs almost exactly the other way for good high-level language programmers.
Maybe you noticed the trend too, and thought it was just a coincidence. But it's not.
By my extended definition, many "illiterate" people can read and write... but they'd rather not. It's a last resort. Illiterate people would rather have meetings and "facetime" and phone calls. They also prefer diagrams over long-winded explanations. There's nothing really wrong with "illiterate" people, I suppose - let's call them "visually oriented" or "spatially oriented" people instead, and it suddenly sounds better.
Illiterate people are hard for literate people to get along with. They don't understand why programmers like to have long email flamewars to resolve their problems. They don't see the point of developing documents collaboratively in a wiki. They prefer shorter and more abstract, not longer and more detailed. They can't skim. They can't search. They don't understand irc. Being the vast majority of the population, they agree in these respects with almost everyone, and so they assume the literates are just doing things wrong and our methods are somehow horribly inefficient: "Geez, it must have taken you all morning to type that up! Why didn't you just have a meeting?"
...Illiterates are frequently slow typists.
But bankers are literate. Of course they are - they have to be. Their whole profession is about spending long hours avoiding people, working with long, complicated legal documents, writing things down so they don't get forgotten or misquoted, and checking long columns of numbers, making sure nothing gets lost, because one mistake messes up the whole thing. Hmm. Avoiding people, complicated documents, concentrating, columns of confusing numbers, one tiny mistake screws everything up. Does that sound like anyone you know?
Programmers are totally capable of dealing with these sorts of people, but they let differences of external style - mostly the conservatism, I suppose - stand in their way. But little things like that are less important to collaboration than the fundamental communication channel. If you can talk to people effectively, you can deal with them. Or so I claim.
Hmm, have you ever seen an accountant using a wiki?