Older blog entries for apenwarr (starting at number 238)

2007-01-24: While you weren't looking, operating systems became irrelevant; Epilogue

While you weren't looking, operating systems became irrelevant

I wrote briefly before about how I now own a Mac laptop and am thus automatically entitled to be a Mac zealot. What I said at the time was true, but also a bit tongue-in-cheek.

What's actually true is that I'm a "use the right tool for the right job" zealot, and Macs happen to be the right tool an increasing amount of the time lately. I think that has almost nothing to do with their software, and a lot to do with their hardware - and the fact that the software was made for it.

Swooshy windows, yet another variant of Firefox and OpenOffice, yet another average-quality Unix clone (or a below-average Linux clone), a rather questionable music player, a highly questionable video player (Quicktime), a buggy X11 server, a total inability to synchronize with my Blackberry, a lame port of CUPS that's supposedly an excuse for printing, and an almost-as-good-but-costs-money clone of VMware... those are really not very good reasons to want to use a Mac.

But I bought one, and it's great. Why? Well, excellent power management tops the list. Then there's the power connector with built-in LED, the volume control that knows whether I have headphones plugged in, the two-finger scrolling touchpad, the slot-based CD player, the high-quality keyboard, the tiny "mylar sleeve" case I have that's custom-designed for its shape, and the CPU fan that blows somewhere other than out the bottom, so using it on a bed doesn't cause overheating.

You know, the computer. Not the operating system at all.

You know why I finally bought one of these things? Because it had an Intel processor, which meant I could run Windows (which I need) and Linux (which I like) in a VM at near-native speeds.

But here's the thing. Apple could have never made hardware this good unless there was an operating system that could handle it; most of those hardware features, like power saving, needed some special support from the software. Microsoft isn't likely to provide it, and Linux is useless to normal people, so there was no other choice.

Until recently, that meant using good hardware was this terrible compromise: you can have great hardware, and the operating system fully supports it, but oh yeah, those apps you need? They don't run. But we have these mediocre toy ones that sort of do similar things. Maybe try those?

Not anymore. Now an operating system can do what it was originally meant to do: run your hardware, and get the heck out of the way. There's a separation of the operating system from the desktop environment like the Unix people were trying to do, but at a level that actually works. It only works because CPUs are so fast now, but it works nevertheless: we emulate a whole computer, on top of the computer we built, and it's not really too slow anymore. It turns out that emulating hardware is effectively a fixed cost, not a linear cost, so beyond a certain CPU speed, the emulation is never worth removing anymore.

I have a Linux machine. It suspends and resumes perfectly every time, and its audio volume adjusts automatically when I plug and unplug my headphones. I can do two-finger scrolling on my touchpad without violating Apple's patents.

It's running in a virtual machine on my Mac.

Epilogue

The relationship of this post with the fact that Mono is an excellent .NET clone that runs on Windows, Linux, and MacOS X, and Microsoft obviously knows this and likes it just fine, is left as an exercise to the reader. If operating systems are irrelevant, where do you suppose Microsoft wants to be today?

Syndicated 2007-01-25 18:18:18 from apenwarr's log

2007-01-20: Your company might be on the right track when...

Your company might be on the right track when...

...customers start calling you just based on a single very vague press release. That, my friends, is an entirely new experience for me.

Also, today I learned what Operational Risk Management is. As it turns out, our software already fully supports it. It's a big change for me, not being the one that wrote the majority of the software I'm actually helping to commercialize.

Syndicated 2007-01-22 16:25:14 from apenwarr's log

2007-01-19: Versabanq

Versabanq

We now have a name and even a newspaper article. Clearly we're famous.

Syndicated 2007-01-20 17:49:12 (Updated 2007-04-21 15:03:53) from apenwarr's log

2007-01-14: NiR: Ice Storms and the Slowness of Startups

NiR: Ice Storms and the Slowness of Startups

Ah, ice. Yesterday I awoke to find my car and all the nearby trees covered in it, due to a recent minor bout of freezing rain. Well, it was minor in London, Ontario. Apparently it was a bit more serious in other places.

Thanks, google images!

This event reminds me of the Great Quebec Ice Storm of 1998, which I was lucky enough to participate in from the comfort of my then-home in Ste-Anne-de-Bellevue. That home, conveniently, was on the same electrical circuit as a major hospital nearby. We lost power for maybe, oh, 45 minutes or less. In the middle of the night, while I was sleeping, so I didn't pay very close attention.

Many people lost power for longer. Rumour has it that this resulted in a minor Quebec baby boom nine months later. I wouldn't know anything about human babies, but at the time ppatters and I were safely indoors, with electricity the whole time, discussing and building the earliest beginnings of Weaver 1.0 and NetMap, and integrating it all with WvDial.

I don't have too many terribly insightful things to say about those hazy days, except to observe that, hey, dialup really doesn't seem as popular now as it did then, does it?

But dialup, and WvDial, were the things that sold the first bunch of weavers.

Speaking of which, here's a quick timeline to remind me how slowly this all actually started:

  • Summer 1997: thought of the general idea.
  • Fall 1997 (+3mos): incorporated Worldvisions Computer Technology, Inc. Confusingly similar to a certain massive charity, but that didn't matter, because we're in a different industry. Heh heh.
  • Winter 1998 (+6mos): ice storm, working at General DataComm by day, coding on Weaver at night.
  • June 1998 (+1yr): first 5 weavers actually sold, but only because I wrote Tunnel Vision 1.0 the weekend before. (Side note: it's highly disturbing that tunnelv is still the second Google hit for "tunnel vision." Stop it already! OpenVPN is way better!)
  • Fall 1998 (+1.5yr): dcoombs and I take four months to work full time on Weaver 2.0 and to sell, sell, sell! with reasonably passable results, especially considering neither of us had ever sold anything before.
  • Early 1999 (+1.8yr): Licensed Weaver sales rights and tech support to sestabrooks & co. for a 50% share of gross profits. They apparently sold quite a few of them.
  • Summer 1999 (+2.0yr): During a failed attempt to sell out to the now-defunct Netwinder people, licensed NetIntelligence 1.0 to them non-exclusively and, oddly, got hired to write their user manual.
  • Fall 1999 (+2.3yr): Met opapic and gmcclement and formed Net Integration Technologies, Inc., the replacement for Worldvisions. Made opapic the CEO, taking over leadership from me, to the benefit of all involved.
  • Sometime in 2000 (+2.5-3.0yr): First round of real financing. I forget exactly when. Opened an actual office.
  • June 2001 (+4yr): Hired mcote, the first developer other than me and dcoombs.
  • January 2002 (+4.5yr): opened the MontrealOffice for the first time.

That's all for now. Off I go to meditate on the idea that we only had two part-time developers (ie. me and dcoombs) until about four years after the project's initial conception. And my new project is only about four months in. Historical expectation based on the above: we should have just chosen a name, incorporated, and now be starting to put together the beginnings of our first product.

Wow. Historical predictions are still scarily accurate. Why did it seem so fast at the time?

Syndicated 2007-01-17 02:42:00 (Updated 2007-01-17 04:05:45) from apenwarr's log

2007-01-12: NITI in Retrospect: Job Titles and Roles; Epilogue

NITI in Retrospect: Job Titles and Roles

Part of the fun of working in NITI-Montreal in the earlier days was the flexibility: everybody did a little bit of everything. If you're in a startup company, that's the way it has to be. For the right kind of person, it's more fun anyway.

At first, every single developer who worked in our Montreal office was known as a "Human Cannonball." They all did different things; some people are better architects, some are better coders, some are better debuggers or testers or spec writers or infrastructure specialists. But they all had the same title. That made people feel like peers.

There was a job description that went with that title. It made being a Human Cannonball sound hard, which it was. And so, by implication, if you managed to become one, you must be awesome. Different people are awesome at different things, but the point was, you were awesome, because you wouldn't be there if you weren't. That made people feel proud of themselves and of their peers.

We used the same title and job description when hiring co-op students. I've never seen results like that before in a job posting: the applicants were incredibly self-selecting. Where normally you might choose to interview 10-20% of the resumes for a particular job - if you're lucky - with Human Cannonball, it was sometimes worth interviewing up to half of them. Why? Because people don't want to look like idiots in an interview, so if the job looks like it's way over their head, they simply won't apply. Combine that with past co-op students who go back to school and say how great the company is and that we have very high standards, and the less awesome applicants simply go elsewhere. The most common failure mode for this job description? People who thought they were awesome, but frankly, weren't. But those people are pretty easy for sufficiently awesome interviewers to detect.

But one important feature of all this was that the job title didn't make any sense. That wasn't just fun and games; in my naivety, I originally suspected it might have been, and I'm not opposed to fun and games. But there's more to it than that. Meaningless job titles prevent people from jumping to conclusions. If you advertise for a "Programmer", you'll get people who assume their first-year university Java skills are sufficient, and they drop their resume in the slot. If you advertise for a "Human Cannonball," people have to stop and think.

As time went on, we introduced a new concept on top of this: roles. Roles were temporary, didn't change your job title or salary or peer ranking, and Human Cannonballs would shift in and out of multiple roles as time permitted. We used a mix of meaningless and meaningful names for the roles: FeaturePusher, ReleasePusher, HumanBalance, Architect, and so on. Exactly what these roles entailed isn't too important right now, but what's important is that the fuzzy role names helped people not to make assumptions about what the role entailed. For example, a "FeaturePusher" was something like a Project Manager. But Project Managers have a stigma attached to them; they're often not programmers, yet they're somehow put in charge of programmers; they get paid more; they hire/fire people; they do employee evaluations. None of those things were what we wanted. Some of those jobs belonged to the HumanBalance (which is sort of like an HR manager, except not). The FeaturePusher did indeed "manage" a "project", but that's entirely different from managing people. Changing the title removed the assumptions, and helped people to think more flexibly.

All that was how we did things in Montreal. Nowadays at NITI, people have job titles (which encompass roles) like Project Manager, Team Lead, QA Manager, and so on, with predictable results: they're afraid to take on roles outside of their job title, they take on roles they're unqualified for because of their job title, and so on. Hopefully, they rise to the challenge.

Epilogue

For my next company, I'm thinking of balancing the job titles a bit better between silly and boring (some people are just embarrassed to have a silly job title, and that's fine), but I don't want to lose the lack of clarity in the process. I'm kind of inspired by the research labs where everyone is an "Associate" or just a "Programmer" or sometimes literally "Just a Programmer." The advantage of these is that they make the lack of clarity the explicit goal - they don't hide that goal behind silliness, which people tend to wrongly assume is the goal in itself. Also, job titles like "Programmer" at least have the advantage of helping people who are used to thinking of themselves in a role to find the job, where they might never look twice at "Human Cannonball."

So here's what I'm thinking: "Programmer, etc." "Test Automation, etc." "Entrepreneur, etc." "Sales, Marketing, etc." It gives the general idea, and then loosens the meaning with "etc," which makes you think twice.

Would you work for me if you could be a "Genius, etc?"

Syndicated 2007-01-15 15:43:54 from apenwarr's log

2007-01-13: NiR: NetIntelligence 2.0 and second-system effect; As applied to business

NiR: NetIntelligence 2.0 and second-system effect

After hiring people at NITI, we didn't really suffer from the second-system effect (with the possible exception of UniConf). That's probably because I got it out of my system in the early days when it was just me and dcoombs.

Even in version 1.0, Weaver had two related features called NetMap and NetIntelligence. NetMap was a passive packet-sniffing program that monitored the network and tracked which IP addresses were where; NetIntelligence (before its name was co-opted to include other features) analyzed the data in NetMap and used it to draw conclusions about the local network layout.

The first versions of Weaver couldn't even act as an ethernet-to-ethernet router; they were designed for dialup, so you routed to them on your ethernet, used them as your default gateway, and the Weaver's default gateway would either be nonexistent, or a PPP connection, or a demand-dial interface that would bring up a PPP connection if you tried to access it. In those days, NetIntelligence's job was easy: it just had to detect the local IP subnet number and netmask, and pick an address for itself on that network. (Weavers were often installed on networks without a DHCP server so they could become the DHCP server; requesting an address from DHCP usually didn't help.)
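
Just to give a flavour of that first, easy job, here's a minimal sketch in C# (purely illustrative, and nothing like the actual NetIntelligence code): given the IPv4 addresses passively sniffed on the local segment, find the smallest subnet that covers them all.

    using System;
    using System.Linq;
    using System.Net;

    static class SubnetGuess
    {
        static void Main()
        {
            Guess(new[] { "192.168.1.10", "192.168.1.77", "192.168.3.4" });
            // prints: network 192.168.0.0/22, netmask 255.255.252.0
        }

        static uint ToUint(IPAddress ip)
        {
            byte[] b = ip.GetAddressBytes();
            return ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | b[3];
        }

        static string ToDotted(uint v) =>
            string.Format("{0}.{1}.{2}.{3}", v >> 24, (v >> 16) & 255, (v >> 8) & 255, v & 255);

        // Given addresses seen on the local wire, guess the smallest enclosing subnet.
        public static void Guess(string[] seen)
        {
            uint[] addrs = seen.Select(s => ToUint(IPAddress.Parse(s))).ToArray();
            int prefix = 32;
            // Shorten the prefix until every observed address lands in the same network.
            while (prefix > 0 &&
                   addrs.Select(a => a >> (32 - prefix)).Distinct().Count() > 1)
                prefix--;
            uint mask = prefix == 0 ? 0u : 0xFFFFFFFFu << (32 - prefix);
            Console.WriteLine("network {0}/{1}, netmask {2}",
                              ToDotted(addrs[0] & mask), prefix, ToDotted(mask));
        }
    }

Once you have a subnet guess like that, picking an address for yourself is mostly a matter of avoiding the addresses you've already seen in use.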

In later 1.x versions of Weaver, we added ethernet-to-ethernet routing in order to support cable modems, T1 routers, and so on. We extended NetIntelligence to do three other relatively easy tasks: figure out which network interface was the "Internet" one, figure out which device on that interface was the default gateway, and set up the firewall automatically so that the "Internet" would never be allowed to route to your local network, even for a moment. This code was very successful and worked great; it was the origin of the "trusted vs. untrusted" network concept in Weaver, and it's pretty easy to find out which node should be your default gateway when you know you can't lose. (That is, it's always better to have a default gateway than no default gateway, so even picking the wrong one is okay as long as the user can fix it.)
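
The gateway-finding half lends itself to the same kind of passive trick. Again, this is only a hypothetical sketch of the reasoning, not what Weaver actually did: if frames keep arriving whose ethernet source is some local MAC address but whose IP source lies outside the local subnet, that MAC is probably a router, and the busiest such router is a decent guess for the default gateway.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class GatewayGuess
    {
        // observations: (source MAC, source IP as uint) pairs sniffed off the wire.
        // network/mask: the local subnet, e.g. from the guess above.
        public static string Guess(IEnumerable<(string mac, uint ip)> observations,
                                   uint network, uint mask)
        {
            var best = observations
                .Where(o => (o.ip & mask) != network)   // keep only off-subnet senders
                .GroupBy(o => o.mac)                    // group by the local MAC that carried them
                .OrderByDescending(g => g.Select(o => o.ip).Distinct().Count())
                .FirstOrDefault();
            // Better a wrong default gateway than none; the user can always fix it.
            return best == null ? null : best.Key;
        }

        static void Main()
        {
            var seen = new[]
            {
                (mac: "00:11:22:33:44:55", ip: 0x08080808u),  // 8.8.8.8, off-subnet
                (mac: "00:11:22:33:44:55", ip: 0xC0A80001u),  // 192.168.0.1, on-subnet, ignored
                (mac: "aa:bb:cc:dd:ee:ff", ip: 0xC0A80002u),  // 192.168.0.2, on-subnet, ignored
            };
            // Local subnet 192.168.0.0/22: prints 00:11:22:33:44:55.
            Console.WriteLine(Guess(seen, 0xC0A80000u, 0xFFFFFC00u) ?? "no idea");
        }
    }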

That was version 1. NetMap/NetIntelligence 2.0 was where things started going wrong. I decided that this concept was so cool that we should extend it one more level: what if we install Weaver on a more complex network, with multiple subnets connected by routers scattered about? What can we do to eliminate "false" data produced by misconfigured nodes? (Trust me, there are always misconfigured nodes.) What if there's more than one Internet connection, and sometimes one of them goes down? Wouldn't it be great if Weaver could find all the subnets automatically, configure the firewall appropriately, and allow any node on any connected subnet to find any other node using Weaver? It seemed like a great timesaver.

Except that it wasn't. First of all, it took a long time to write the code to handle all these special cases, and it never did really work correctly. We had some very angry customers when we put them through our 2.0 beta cycle and Weaver regularly went completely bonkers, auto-misconfiguring its routes and firewall so badly that you couldn't even reach WebConfig anymore. Or sometimes you'd end up with 100 different unrelated routes to individual subnets, because Weaver wasn't sure that 100 routes through the same router really meant that was your default gateway. Those messes were the origin of the "NetScan" front panel command, which made NetIntelligence forget everything it knew and start over. To this day, I consider this a terrible hack. But it's sure better than 2.0beta1, which didn't have a NetScan and had to have a developer (ie. me) come on-site to debug any network problems.

NetIntelligence 2.0 was a perfect example of the second-system effect: we chose to add a lot of cool but not-really-necessary features all at once, we had a non-working product until the whole thing was done, it was an order of magnitude more work than the 1.0 version, and bugs in the new features caused old, 100% reliable features (like the ability to reach WebConfig!) to fail randomly. It was a disaster.

In retrospect, the mistake is easy to see. Not long after, the proliferation of DHCP meant that auto-discovering subnets was much less important. But more importantly, Weaver's network discovery feature was supposed to make Weaver easy to configure on simple networks. Any IT administrator who managed to set up a network with multiple subnets already knows what those subnets are and how he wants to route between them, so auto-discovery isn't worth anything. The existence of a complex network implies the ability to configure a router for it. We sacrificed sanity on simple networks, where people didn't have that ability, all in the name of a useless feature on complex networks that didn't need it. Oops.

By the time we were actually hiring developers back in 2000 and 2001, we had already been through all this mess. Nowadays in Weaver (now Nitix) 3.x and 4.x, we've wrangled NetIntelligence under control, and all those broken-but-cool features from 2.0 actually work and do cool stuff. But to this day, once in a while, it still produces a huge, insane list of correct-but-pointless subnet routes that you have to delete by hand.

So yes, I know a thing or two about the second-system effect.

As applied to business

As I continue to lay the groundwork for a new company, it's important to keep this sort of thing in mind. Just because a few "cool" things were missing the first time around, don't lose sight of the basics in round 2.

Syndicated 2007-01-13 20:48:51 (Updated 2007-01-16 04:08:12) from apenwarr's log

2007-01-11: Other products; Nitix; Variable Declarations in C# 3.0

Other products

Nitix

Variable Declarations in C# 3.0

I've never seen this kind of variable declaration before. It's statically typed, but you don't have to declare the type. In other words, it's perfect.
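
For anyone who hasn't seen it, this is C# 3.0's "var" keyword: the compiler infers the type from the initializer at compile time, so the variable is every bit as statically typed as if you'd spelled the type out. A trivial example (mine, not from the article):

    using System.Collections.Generic;

    class VarDemo
    {
        static void Main()
        {
            var count = 42;                              // inferred as int
            var lookup = new Dictionary<string, int>();  // inferred as Dictionary<string, int>
            lookup["answer"] = count;

            // count = "hello";   // does not compile: count is (and stays) an int
        }
    }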

Syndicated 2007-01-12 15:36:57 from apenwarr's log

2007-01-10: Roaming Profiles

Roaming Profiles

It's not my imagination: Roaming Profiles in Windows really are nearly useless. The article is pretty depressing in itself, but read through the comments in response.

Syndicated 2007-01-11 15:15:06 from apenwarr's log

2007-01-09: Avery on new-age religion

Avery on new-age religion

"Avery, you're a programmer about everything," said one of my friends a few days ago.

Okay, I admit it, I'm an addict. Here I go again.

One of the big science vs. religion debates is about whether we actually have souls, or whether our brains are just big sacs of chemicals and electrical impulses. Today I realized that the question is really an easy one.

Of course our brains are sacs of chemicals and impulses. Which is like saying that our computers are just boxes of sand with electrons bouncing around. It's true, but that doesn't mean the software doesn't matter, even though it's just made up of those electrons.

Software is like your soul. It's there, made up of electrons, but it has an existence that matters much more than the individual electrons do. It can even move around and multiply, conforming other boxes filled with electrons to match its own structure. And its pattern was created, put there for the first time, by a power greater than itself.

Where does your soul go when you die?

That's simple too. Just because one box of electrons dies, the software doesn't disappear. Its pattern has had many effects on many other boxes, and the resulting patterns are just as important as the original program, and just as influenced by its creator. Software might change, but once it becomes part of the network, it's never really gone.

Syndicated 2007-01-09 04:46:54 from apenwarr's log

2007-01-08: Arcnet does nuclear physics

Arcnet does nuclear physics

Apparently they're switching the photomultiplier tubes in the Super-Kamiokande project from old, obsolete Sun VME-based Arcnet to that newfangled Linux Arcnet thing, which is, I hasten to point out, still not dead.

It would seem that I wrote the first version of that driver about 13 years ago. Egads. Incidentally, did you know that it had poetry too? I sense a theme.

Syndicated 2007-01-08 04:11:29 from apenwarr's log
