Older blog entries for bwh (starting at number 44)

Testing, Open Source, and NFSv4

I gave a presentation today at the Pacific Northwest Software Quality Conference on my work with organising testing efforts for the Linux NFSv4 community.

PNSQC is a pretty heavily industry-focused conference (i.e., nearly all Windows), but it was quite interesting to see that they devoted one of their five tracks to Open Source.

Anyway, one of the questions that came up during my talk was really thought provoking. At one of the earlier talks they'd made the point that the dark side to fixing a bug in software is that it "leaves scars". By this, what they mean is that as a proprietary application is maintained, it gains #ifdefs and other hacky fixes to issues. In that particular talk, their point was that the best way to make good, long-lasting software is to not introduce defects into software in the first place, by adopting development methodologies, coding practices, and testing approaches that generate better code from the start. For example, writing tests before writing the code, shifting metrics so they don't inadvertently make high bug counts desirable (e.g. ranking testers on how many bugs they find), etc.
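As a toy sketch of that test-first idea (the function and names here are my own illustration, not anything from the talk): the test gets written first, and only then the minimal code that satisfies it.

```python
# Test-first, in miniature: the test exists before the code it tests.

# Step 1: write the (initially failing) test.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Open  Source ") == "open-source"

# Step 2: write the minimal implementation that makes it pass.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

test_slugify()
print("tests pass")
```

The point isn't this trivial function, of course; it's that the defect-catching net exists from the very first line of implementation code.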

Anyway, in my presentation I was contrasting the proprietary model, where users don't see the program until it is polished and "finished", with open source, where your first release is generally buggy and incomplete - but it works, and over time, given an adequate number of actively involved users, it gets better and better.

So someone asked why it is that Open Source doesn't suffer from this scarring phenomenon that they're accustomed to with proprietary software. Why is it that instead of turning into the proverbial "big ball of mud", an open source project seems to just get _better_ with age? It doesn't suffer from scarring to the degree that proprietary software does, and the bigger and older it gets, the better. Apache, MySQL, Linux, gcc... all the granddaddies of open source are considered some of the best quality software in open source (and indeed in the whole software industry, according to some studies), but they're ancient compared with their proprietary fellows.

This is a pretty profound observation. I'd love to see someone study this in more detail and see how true it is. A lot of the data for open source quality are snapshots in time, comparing an open source app with the corresponding proprietary one.

Regardless of *why* this happens, if the phenomenon is as true as it appears, it has some pretty intriguing implications. It means that as we go forward, the scales are only going to tilt further and further into Open Source's favor. Proprietary-only software companies will have to work harder and harder to stay ahead.

It also suggests that there is an advantage to getting into Open Source sooner. If the longer your application is available as OSS, the better it gets, then the sooner you open source it, the more time you'll have to accumulate the benefits. Assuming you have a good, active community around it, and a solid architecture and so forth, this could ensure your application a long and successful life, and let it outcompete others in the long term.

Of course, this still leaves that question: why doesn't OSS suffer from "scarring"? I think there are probably several factors that play into this:

First, "maintenance" in open source software isn't really treated that much differently than regular development. If a bug requires a major refactoring of the codebase in order to close it, then rather than simply slapping on a hacky workaround and forgetting it, if the maintainer has the time and fortitude, he *does* that major refactoring. It may destabilize the software for a while, but it closes the bug (and often cleans up the code significantly).

There is also a "redo" effect. I suspect there are some gifted programmers who do everything perfectly on the first try, but most of us get it right maybe after the third or fifth try. With a proprietary application, you may only be allocated enough time to do it once, or maybe twice. In open source, there's really no limit to the number of times that something can be redone, and in fact you see people redoing things a LOT. This is bad in the sense of leading to churn, but I imagine it's one way to slough off a LOT of scar tissue.

A third reason is the people. In proprietary software, the testers, maintainers, and developers tend to ebb and flow. The developer who created the highly successful wizbang algorithm is reassigned to some new business-critical job (or hired by some other company), and the maintenance of his code is handed over to someone else (possibly someone less experienced). This new person may not have the depth of visceral understanding of the code that the original developer did. In the prototypical Open Source project, however, no one gets "reassigned". You might get bored or burnt out, but it's quite common for you to keep tabs on the program for years to come, answering questions as needed or even helping out with architectural issues in your code.

Another effect that I suspect probably helps Open Source avoid "scarring" is what I think of as the coding parallel to Wikipedia's "copyediting". In Wikipedia, there are folks who, rather than writing a lot of good articles, just sort of wander around and obsessively clean up other people's work, fixing typos, correcting grammar, and so forth. I think this phenomenon probably also happens with large Open Source software projects. People with an obsession about security flaws habitually look through code for buffer overflows or injection points. People who are anal about coding standards go through and fix tabs and braces. Lots of little things that individually don't seem that important, but together can have a non-trivial impact on the software quality.

At my talk, several of the audience members shared their own ideas for the reasons. One who'd had a lot of experience with gcc pointed out that some projects have extremely rigorous review processes that keep bad code out from the start. Another pointed out that since the code is public, you have a lot more motivation to make it good than you would in closed software; after all, WHO KNOWS who might look at it, so you'd better do it as well as you can.

Open Clip Art Library

I volunteered to do the OCAL release for October 1st. It's the 10th... I need to get it finished. There are over 8,000 SVGs in the library as of this release. I've been really hammering at getting it cleaned up and better organized. There were 550 unsorted images, but I've whittled that down to 150. I've also added some more categories and cleaned up a *lot* of the keywords. Still a ton of work left to do, but I think I'm going to wrap things up for this month and leave the rest for the future.

MythTV

Ugh... What a pain it's been trying to get knoppmyth installed. It worked fine as a livecd, but as a regular distro I wasn't that impressed. Maybe that's just because I don't use debian much, so my apt-fu is not so strong. I found that the default kernel included with knoppmyth doesn't support my sound card, which is possibly why mythtv is behaving erratically with its recordings. However, upgrading the kernel turned into a real headache...

Anyway, finally I gave up and decided to start again from scratch, this time with gentoo, which I'm **much** more familiar with. So far it's been going great. I got mythtv emerged a few hours ago, but ran into some trouble installing ivtv so I'll need to look into that a bit more (the ivtv guys apparently decided to rearrange their website, and gentoo can't pull the tarballs).

6 Oct 2005 (updated 6 Oct 2005 at 08:04 UTC) »
H5N1

It's nice to see the media *finally* starting to talk about the 'bird flu'. Of all the possible ways for humanity to off itself, the bird flu's my fave, and I've been keeping an eye on it for most of the year.

Of course, as usual the media's focusing on the hype and the scare, and sometimes missing the details. I'm worried that even though it's getting some attention, since the media has cried wolf a lot (SARS, Ebola, etc.), people will ignore this one and won't be prepared.

I've written up some thoughts on preparing for the bird flu.

6 Oct 2005 (updated 6 Oct 2005 at 00:50 UTC) »
MythTV Saga...

Hit a bit of a hitch with mythtv... Since I got the hardware a couple weeks back I've been working off and on to set it up. Got the hardware all put together, drives partitioned, and this weekend got the mythtv software installed. So far so good...

However for some reason the pvr250 isn't quite working; it only displays static. You can change channels and such, and it *is* able to get one channel tuned in (just the Hallmark channel, unfortunately). My guess is that it's some sort of kernel setting. Kees even came over and looked at it yesterday and he couldn't figure it out either. Looks like it'll take some research...

Anyway, this reinforces my feeling that hardware drivers in Linux need better testing. One of the reasons we picked the pvr250 was because of all the mythtv cards, it's far and away the most common; in theory, that should have enabled us to avoid these incompatibilities entirely.

Having said that, I suspect the issue is going to turn out not to be a lack of kernel support for the card, but rather the need to know exactly which options to pass in for the module. Or maybe I just need to manually upgrade the knoppmyth kernel... In either case, this is serving as a pretty clear example of the hardware driver usability problem that people are always complaining about. I do think that open drivers are very important, and I've no problem rebuilding kernels and such (I do so pretty much every day), but for your average user who probably doesn't even know what a "kernel" *is*, this is asking a lot.

This is what I want to be able to do:

 0.  Plug all the hardware together, including my new wizbang doohicky card
 1.  Power it on
 2.  Boot my desired Linux distro installation CD
 3.  Notice that the doohicky card ain't working.  Drat!
 4.  Pull out my trusty dusty "Distributed Driver Test Farm" CD and put it in the machine and reboot it
 5.  After going through startup, it asks me some questions.  I answer them, then go to bed
 6.  The next day, I receive an email that the card has been tested
     and verified to work in kernels starting with $version
 7.  I go back to step #2, this time knowing that I can upgrade the kernel and everything will now work
 8.  Profit!

OSDL Fall Golf Thing

Yesterday was our bi-annual golf outing at work. Brian, Kees, Leann, and I were on the team behind Linus so we got some good shots of and at him.

We had fun zooming (avi 5.4M) the carts around, hanging beer cans on trees, battling the sand and water traps, and just hanging out. Linus (avi 1M) and Stuart took the prize. Thanks go to John Cherry for putting the event together.

28 Sep 2005 (updated 29 Sep 2005 at 05:23 UTC) »
Modularization is the key to OSS success

Yesterday I argued that huge userbases aren't all they're cracked up to be. The key takeaway was that if you have way more users than developers, the developers get overwhelmed. Users are valuable insofar as they contribute patches, useful bug reports, documentation, and so on.

In Inkscape we've got a great userbase; they are very active in making contributions. We love our users.

In this post I'm going to switch around and talk about having too many developers.

Now, thankfully this isn't a problem with Inkscape. I think we could easily scale up to double our number without problems. But it's something I've run into with past projects; it's a not-uncommon malady with many really annoying characteristics. Others have written at length on all these sociological conditions before, so I'll try not to be *too* long winded...

Analysis Paralysis

With enough developers and not enough pressure to get something out the door, you often find the developers frittering away time arguing, "Well, what's the *best* approach?" Usually it's better to just get something good enough, release it into the wild, see how it fares, and fix it up in the field. At Inkscape we use the phrase, "Patch first, discuss later." Other projects have adopted similar proverbs.

Too Many Cooks

Another issue is when you have more workers than work to do. People start bunching up, with several working on the same area of the code but not communicating too well; inevitably there are CVS conflicts, heated arguments, bad feelings, cutlery being thrown about... not a pretty scene.

Perfect Ain't Good Enough

It's often been noted that folks passionate about doing OSS development often have a strong perfectionist streak. It's common to find people resisting the release of their code. It's human nature to want things to be "just right", but the time needed to get to 100% can be completely impractical. By the time it's perfect, it may well be irrelevant.

I suspect (but couldn't prove) that the resistance to release increases with the number of developers in a project. When it's just you, by definition you release as soon as you damn well feel ready. With 2 people, now you have to check and see if the other guy's ready. Maybe he's not, so you start working on something else, and by the time he's ready, now you're not. With 10 people, this gets worse; now as a minimum you need to have someone to sort of coordinate the release and schedule things so people wrap up their work at roughly the same time.

One of the advantages that OSS has over commercial software, as I see it, is that we can be much more nimble and speedy. There's very little _real_ consequence in putting out a buggy piece of open source software; it's not like you're going to lose sales. Marketshare may be a concern for certain projects where there are a lot of different options - I've changed window managers and distros simply due to quality issues, for instance - but by and large users select based on overall quality, not the quality of one particular release. Reputation may be impacted, particularly if you are known to continually release problematic software; thankfully reputation is important enough among OSS developers to be a more than adequate motivation to put out quality work.

Churn

Another issue with too many developers is the repeated rewrite of portions of code, or "churn". You'll see a given feature or functionality get reimplemented multiple times. Asked why, you'll get a response such as, "It sucked." ;-)

In reality, the causes could be several. First, if it is a complex but necessary module, someone may want to rewrite it simply for the sheer challenge. I think this is why you frequently see 14 different clocks or whatever. Coders get to a stage in their education where they need some particular challenge, and they think, "Boy, *I* could do that better." And off we go with the 15th clock.

Second, it can occur because the person that wrote the code in the first place has wandered off, and someone new has to maintain it but doesn't understand it. If you ask someone why they're rewriting a piece of code and they reply, "It was too hard to understand," then you know they're a novice. ;-) The result is predictable - the new code loses all the hard-earned domain knowledge gained from the first implementation, it takes four times longer than predicted, and the end result is just as problematic, just in new and more exciting ways.

Third, churn can occur when the developer is in over their head; they'll focus on things that they understand, like the about box or rewriting the config file loader for the 14th time, because they really are uncertain how the internals should be done.

Fourth, it can happen when someone is just a *bit* too much of a perfectionist, coupled with a big ego, and a lack of something better to do. This is the classic "Not Invented Here" syndrome. It got imported wholesale from the commercial software world. ;-)

The best solution to churn is maturity. I doubt any programmer can turn into a good, experienced guru without having gone through at least some of these phases; the trick is to not get stuck, but to see the flaw in yourself, correct it, and do better next time.

At a project level, there may be some techniques to help discourage churn. Keep a good roadmap, so folks have some perspective on the longer-term objectives they should shoot for. Putting a strong focus on the importance of delivering regular, frequent releases probably also helps. And probably most important is having some senior guys in the project to simply set good examples for everyone else to emulate, and to mentor the newbs to help them develop these skills themselves.

Modularization

Okay, finally to get around to my point... Given all the myriad issues that can occur from having too many developers, how can you scale up a project to have plenty of devs without it turning into flamewars and chaos?

The solution is very simple and embodied in the UNIX philosophy itself: Be small. Avoid trying to be the be-all and end-all of software programs, but instead focus on doing one particular, specific thing, and be the best at it. Or if you must do a lot of things, try to split it up into multiple modules, each of which does just one specific thing.

Recently on the Inkscape mailing list there's been a thread about the pros/cons of Inkscape's file format converter dependencies.

This is a feature Ted and I put together way, way back in the Sodipodi days. It was originally intended only as a quicky hack to enable scripting with Sodipodi. I've always been impressed with how UNIX is able to do so much with a bunch of tiny little programs that read from stdin and write to stdout, just by chaining them together with pipes. So I figured if I could make Sodipodi do stdin/stdout stuff, it'd be reasonably easy to code up in C, and then I could just do whatever I needed in a shell script or perl or whatnot, treating it as just a filter.
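The filter idea is easy to sketch. This toy "converter" (my own illustration, not actual Sodipodi code) has the shape described above: it reads one format on stdin and writes another on stdout, so it composes with any other filter via pipes.

```python
import sys

def convert(inp, out):
    # Toy conversion: wrap each input line in a <text> element.
    # A real converter (dxf-to-svg, etc.) would have this same
    # stdin-to-stdout shape, just with real parsing in the middle.
    for line in inp:
        out.write("<text>%s</text>\n" % line.rstrip("\n"))

if __name__ == "__main__":
    convert(sys.stdin, sys.stdout)
```

Run it as, say, `python toyconvert.py < labels.txt > labels.xml`, or drop it into the middle of a longer pipeline; the program neither knows nor cares what's upstream or downstream of it.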

Later, Ted figured out a clever way to add new extensions at runtime easily through simple little XML files. Quickly we found that there were ALL SORTS of things we could plug into Inkscape this way. Lots and lots of programs have been written with this pipe metaphor in mind, and Inkscape was able to gain file format support for a range of different formats. We hooked into ImageMagick and got the whole slew of bitmap formats. We tied into scripts from xpdf and ghostscript. We even found that sketch had some good input/output conversion functions. It seemed inelegant to hook into a competitor, but it was butt-simple to do. We got Dia support the same way. Today we're encouraging Scribus to implement commandline import/export options so we can tie into their most excellent PDF capabilities.
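A sketch of how such runtime extension files can work (the XML fields and the code here are invented for illustration; Inkscape's actual extension format differs): the descriptor names a shell command, and the application simply pipes data through it as a filter.

```python
import subprocess
import xml.etree.ElementTree as ET

# Hypothetical extension descriptor. A real one would point at a
# converter like ps2pdf or dia; here we use tr so it runs anywhere.
EXTENSION_XML = """
<extension>
  <name>Uppercase (toy)</name>
  <filetype>.txt</filetype>
  <command>tr a-z A-Z</command>
</extension>
"""

def run_extension(xml_text, data):
    """Parse the descriptor and pipe `data` through its command."""
    ext = ET.fromstring(xml_text)
    cmd = ext.findtext("command")
    result = subprocess.run(cmd, shell=True, input=data,
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_extension(EXTENSION_XML, "hello svg"))
```

Adding support for a new format then means dropping in one small XML file, with no recompilation and no plugin API to learn.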

Most notably, just recently Xara got into the game, by sponsoring Eric Wilhelm, who had gotten involved with Inkscape when we started looking at adding DXF support. Now Xara is funding him to create a converter between the XAR and SVG formats.

To me, the amazing thing here is how much power and capabilities we've gained from so little work. We don't spend time re-implementing a given converter in order to work with our wizz-bang patented DLL-based plugin system. We don't have to rewrite it because it's in Python and our program only supports InkScript. In many cases, we don't even need to allocate a volunteer to maintain the code, since often the project that created it is still active.

In my post yesterday I pointed out that value in open source projects comes from contributions to the project. Obviously, gaining a lot of developers can help drive up your number of contributions. But think about it: if you modularize your project and adopt standardish interfaces that allow you to simply reuse existing code, isn't that effectively the same thing as gaining the labor of all those people who put that other program together?

Popularity as measure of OSS success

A couple months ago we released Inkscape 0.42 and it got announced on Slashdot. Someone replied to one of my posts regarding Inkscape's goals.

Third, building a huge userbase is not really among Inkscape's principle goals. We want to be a great application that helps make Open Source successful.

...isn't popularity a rather large standard for judging success of a project? ...I guess I don't readily know the terms of "success" that Open Source has. If it's not to be reasonably widely accepted, easy to use, and able to help other open-source projects, I don't know what it is.

I'm making a blog entry out of my reply because this is a common conception a lot of people have, and something we philosophized about in great depth early on with Inkscape.

Sometimes you hear the advice with coding, "avoid premature optimization". Well, it's similarly important to "avoid premature popularity."

Now, for the vast majority of open source projects, popularity simply isn't an issue. I've easily had ten projects that never gained users for every one that did. For one reason or another, no matter how much you love your OSS project idea, it probably won't gain any users beyond yourself. Obviously, if the project meets your own particular needs, you can count it as a success.

But those aren't the type of projects I'm going to talk about here. I want to focus on the ones that *do* have the popularity problem; the ones that have the potential to have huge userbases.

Value Stream in Open Source

Let's start by defining success to be something which is "increasing in value". A company that has increased its profits by 100% is successful. A worker who has increased their net worth by 10% has succeeded. Value can be non-monetary; an environmentalist who saved 10 more forests from being cut down than last year is successful.

The fundamental misunderstanding regarding the importance of users in open source is the idea that users in and of themselves are valuable. Users in OSS do not directly produce value. I think people get confused about this because in open source, users are not "customers", as they are with commercial software.

With commercial software your value stream derives from users who purchase the software in a store. The user pays $50 or whatever for the software, which gets divvied out to the distributor, marketing folks, developers, and so on as salary to help them all continue producing software. The user is the source of the value that makes the system work, thus clearly in this case the user is the customer.

Now, in Open Source, users aren't paying that $50 per copy, and thus aren't covering the developers' salaries. In some cases they do sponsor the developer to do work, but these situations are unfortunately rare, and we'll ignore them as the exception, not the rule. (Anecdotally, the size of the userbase does not seem to correlate with the number of paying users; this seems to be driven more by the type of software and its criticality to the user's business model.)

So where *does* the value come from in the open source system? What enables the software itself to get better? In this case, it isn't money from users to cover developer salaries, but instead is the good souls who make contributions to the project in the form of patches, bug fixes, documentation, testing, marketing, and so forth. The value system for open source is much more direct than with commercial software - there isn't a transformation into monetary currency; the customer can directly contribute their value to the codebase.

Thus, I believe the customers for open source projects are the people who contribute back to those projects. This is a weird concept - the customer is not the consumer, but the _producer_.

From this line of thinking, we can define success a bit more accurately. A successful open source project is one that is increasing in value quickly, through the generation of improvements to the project.

Thus it can be seen that what you want to optimize for is number of contributors, not number of users. The number of users is still important, but indirectly; more users often (but not always) means more potential developers.

Signal to Noise

One of the main issues with having a large number of users is that they can unfortunately cause an increase in the amount of "noise" in a project. They ask questions that take time to answer, they can get into arguments, locate bugs that the developers are not prepared to fix, and otherwise increase the number of demands on the developers.

So long as the ratio of developers to users remains consistent, this is not an issue. If you can be sure that for every 100 users, say, you gain 1 new contributor, growth can be a good thing.

Unfortunately, in many situations it's a case of diminishing returns. For instance, producing a Windows version of your program can greatly swell your userbase, yet on Windows there is a much lower ratio of developers to users. Development tools are not as readily at hand as they are on Linux. The ideals of open source aren't nearly so prevalent among Windows users as they are for Linux. People are more accustomed to software that either "just works" or that has its lid welded shut so they can't contribute. The new users bring a whole host of new needs and demands on your time, but without the additional development assistance, this effectively *reduces* the value of the project, since it must devote its existing developers to these new users.

Focus on Values

My final point is that even if users *were* valuable in an open source project, focusing on userbase size as a goal is the wrong approach.

In business school they tell you to focus not on "making money" but on "making the business". If you focus on making the business strong and good, the money has a tendency to find a way to take care of itself.

Similarly for open source. Don't worry about your userbase. Focus on making your software as good as it can be. Think about what *you* want, or what your existing real users need, not some imagined "if only we had X we'd have more users". Focusing on real, present needs of real, present people ensures that your work will generate benefit in the near term, enabling you to build a feeling of momentum that will carry your project a lot further. If you do this and do it well, the userbase will take care of itself.

25 Sep 2005 (updated 25 Sep 2005 at 10:57 UTC) »
Sodipodi / Inkscape

MenTaLguY posts about his motivations for joining Sodipodi and initiating That Which Would Become Inkscape... and asks what the motivations of the other founders were.

Well here's my story.

I had been leading a project called WorldForge for several years. I hadn't founded that project but had fallen into the project coordinator role pretty early on. Unfortunately, I can't report that we delivered very much, but I certainly learned a lot. I learned that there is a balance to be struck between creatively cool ideas and delivering something that people can actually use. I learned that success is more than just being right. I learned about the importance of communities and just plain getting along with people. I learned a LOT more...

Anyway, one day I was working on creating a map for some RPG game idea we'd had. I'd done a lot of game world mapping using a CAD program called Campaign Cartographer, and I'd wondered if there was anything equivalent in the Open Source world. As I've written about recently, I had learned that for an open source game development project, relying on proprietary tools was bad. So in searching around, I came across this SVG file format. A bit more searching and I found 'Sodipodi', the only open source editor around for that.

I downloaded Sodipodi and tried it out. It crashed. It crashed more. It crashed HARD. But SVG was TOO COOL. I joined the mailing list. I just hung out silently for a long time. My first post was about getting it a bit better tested and more robust.

For the next year or so, I strove to participate in Sodipodi as just a lowly peon. I knew I had a tendency to get overly involved in open source projects, and definitely did NOT want yet another project with big responsibilities, so I really held myself back with Sodipodi. I thought it'd be fun to just be an ordinary user.

Meanwhile, WorldForge was not going well for me. This is a long story, but from my perspective the issue was that I'd bought a house in Oregon and this sucked a HUGE amount of my free time away, which resulted in a vast reduction of my involvement in WorldForge, to the point that I wasn't able to meet my commitments, and other folks were starting to take over. Also, a lot of the friends I'd made early on in the project had left, so I felt like the project wasn't quite the same anymore. Anyway, a variety of things had made me pretty burnt out and frustrated.

Somewhere along the line here I'd been put in charge of the Sodipodi website. At the time, I figured this would be a useful exercise for just learning PHP. However, I also found myself falling into bad habits doing project coordination type stuff: encouraging people to send in bug reports, fiddling with processes for how patches/bug reports were handled, etc. etc. The Sodipodi project experienced sort of a renaissance; more people were getting involved, and the amount of traffic to the list was far higher than normal.

I helped develop the extension system, get some bug/feature tracking processes in place, and kick around various other ideas/implementations of features. I helped get a massive flag clipart project started with Uraeus. Despite intending to limit my involvement in Sodipodi, I was quickly proving to be one of the *most* active members on the mailing list...

Lauris had been gone for a long time, to the point we weren't even sure if he was coming back. Mental felt we should go ahead and fork the project in order to carry it forward. However, I really didn't feel that was a good solution. Despite the difficulties we had in the project, I didn't feel I was ready to get involved with coordinating yet another big project. I figured either Lauris would come back soon, or we could just sort of push the project forward from within. I much preferred having Lauris in charge and simply being a right-hand-man type guy, even if Lauris wasn't really around much. I had a new house and I was very worried that all the yardwork, chores, hobbies, and so forth I was planning would take up too much of my time. Creating a successful open source project is a *lot* of work, especially if you're trying to do it from scratch. I didn't have time to run _another project_. I thought that if we just tried a bit harder we could work things out with Lauris.

I proposed that instead of forking, we try to change the model a smidge, and get more directly involved in managing patches and prepping releases, *within* Sodipodi. We instituted the Hydra release, sorted out how to branch the Sodipodi CVS, and established our own head branch. We started folding in various patches that had been sitting for too long, and began doing some organized testing and preparations for a release.

From that point, I guess destiny sort of kicked in. For whatever reason, Hydra failed as a compromise and actually increased the tension. I suppose we'd expected Lauris to appreciate the help, but our expectations were far from met. I finally decided things would be easier if we split off on our own.

Having made that decision, there were several things that were very important to me to achieve. First, I wanted to avoid the situation of WorldForge, where we had far more rough ideas than delivered code, so I really felt we needed to emphasize doing official releases of good, working, featureful code. I didn't want to go through all the stress of being a project coordinator, so I strove to get Inkscape established as sort of a "headless" project. I didn't mind *doing* coordination work, I just didn't want to have that title or anything. Remembering how many patch submissions of mine had been outright rejected, I also wanted to keep things open so anyone with a novel, clever idea would be able to get it into the codebase with minimal hassle.

Anyway, that sort of sums up my role. While I guess I'm ok as a coder, I think my role with Inkscape has been much less about coding and more just about establishing procedure and just sort of clearing the way for others to make the big contributions. I love that today Inkscape is so much a team project and that ANY of us could be hit by a bus, and the project would carry on with barely a glitch.

21 Sep 2005 (updated 21 Sep 2005 at 21:00 UTC) »
Linux Games

Alan Horkan blogged about games and wondered about the relation of Open Source and gaming, and what games are available for Linux these days.

While I don't do much game development or playing any more, this is an area I've put a huge amount of thought into. In fact, pre-Inkscape I coordinated a project to create open source games.

I believe that in theory there's no reason why we shouldn't have open source games. If you visit a game store you see how similar the games are from one to the next. Often a given game is simply a rehash of some other game concept, with new artwork and a few gameplay tweaks. Sometimes these companies sublicense the engines from other companies, but sometimes they create their own new engine (for better or worse). It'd seem like with open source, you could do even better, since the engine would be freely reusable.

However, in practice this doesn't occur as much as it should. Yes, there are open source engines aplenty, but I think people underestimate the time, skill, and challenge of coming up with new art, levels, and so forth for a game. In some ways, the coding of the engine is the least of the problem; if nothing else, there are tons of volunteers available who want to code, and fewer who want to create art or design levels. One of the reasons for this is that in many projects programming is held to be a more respectable talent than art, so artists can sometimes feel like they're second class. This is a shame because in all honesty, games these days make or break on the quality of their art. A great game with excellent gameplay but crappy "programmer art" will never catch on in a big way, compared with a simplistic game with gorgeous, eye-popping artwork.

Back when I was doing open source game development, there was a major hurdle in that there were few open source tools available for artists working on open source games. We found ourselves in a troublesome situation where each artist was developing art with whatever professional tool they happened to have. Many artists could not afford the best tools, so would make do with a cheap one; others would obtain a copy of an expensive tool through some other means (such as doing artwork for a company in exchange for a license), and then wish to use that hard-earned tool.

Unfortunately, this resulted in difficulty in that the artists could not collaborate or share their work. It was also difficult for developers since they couldn't re-render the artwork (since they didn't have Max or Poser or Illustrator or whatever).

Fortunately, today through Inkscape, Blender, and other projects, the situation is much better. These are actually high quality tools that some artists even consider to be *better* in some ways than the commercial counterparts. They're only getting better, and they'll never "go out of business", so if you establish your skillset in these tools, you'll benefit from this investment for many years to come.

Of course, tools alone are insufficient to solve the issues. Many game projects simply lack artists. One of the motivations I had for helping start the Open Clip Art Library was to establish a way to help address this particular problem. I would love to see OCAL become a valuable repository for game art in addition to regular clip art. This way, someone with an itch to create a game but little artistic talent could browse through the library and collect a bunch of art, render them from SVG into whatever raster format and dimensions they need for their game, and be assured that their initial release will look much better than it would otherwise.

Another issue with game development that I've noticed is that there is a huge gap in game modules/libraries. At the low end, near the hardware level, there are a plentiful number of libs like SDL, Allegro, and so forth, but at a higher level there are fewer libs. Sometimes you will find specialty libs for things like pathfinding or screen rendering.

What I think the Open Source community needs is a rich variety of stand-alone libraries for high level game functionality. For example, imagine game logic libraries that encapsulate game rules for doing resource trading, or inventory management, or in-game unit design. This is not a unique idea; see for example gtkboard, which aims to be a standard library for encapsulating rules dealing with two-player turn-based boardgames (e.g. chess, tower of hanoi, samegame, etc.) Btw, take a look at the screenshots and recall what I mentioned about the need for artistic talent. Pysol is another example of the power of this sort of abstraction approach; they provide a general purpose library for creating solitaire card games, and as a result have gained over 200 different games.
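To make the idea concrete, here's a minimal sketch (entirely hypothetical, not gtkboard's or Pysol's actual API) of how a rules library might separate game logic from presentation, so any front end can drive the same rules:

```python
class TurnBasedGame:
    """Encapsulates rules only; a GTK, curses, or web UI drives it."""
    def legal_moves(self):
        raise NotImplementedError
    def play(self, move):
        raise NotImplementedError
    def winner(self):
        raise NotImplementedError


class Nim(TurnBasedGame):
    """Simple Nim: players alternate taking 1-3 tokens; taking the last wins."""
    def __init__(self, tokens=7):
        self.tokens = tokens
        self.player = 0          # players 0 and 1 alternate turns

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.tokens]

    def play(self, take):
        assert take in self.legal_moves(), "illegal move"
        self.tokens -= take
        if self.tokens > 0:
            self.player = 1 - self.player   # pass the turn

    def winner(self):
        # The player who took the last token is still "current" and wins.
        return self.player if self.tokens == 0 else None


# Two scripted turns, no UI involved at all:
game = Nim(tokens=5)
game.play(3)            # player 0 takes 3, leaving 2
game.play(2)            # player 1 takes the last 2 and wins
print(game.winner())    # 1
```

A front end only has to render `legal_moves()` and call `play()`; the same `Nim` class could sit behind a desktop widget or a web page, which is exactly the kind of reuse Pysol gets from its solitaire framework.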

I believe the Open Source and Open Art communities could achieve the same sort of successes as these efforts by reusing this abstraction approach, and scaling it up to implement other, more sophisticated types of games. Imagine taking a flight physics library (e.g., from ) and combining it with the shooting physics from a sophisticated FPS game (e.g. Quake 3) to produce a compelling WWII dogfight game. Or imagine taking a planetary simulator library (e.g. Celestia) and combining it with the game logic from a rails-type game (e.g. FreeRails) to produce a new space-themed strategic resource game.

To a certain degree, you can sense this is already happening. Browse through the game section of Freshmeat and note how many games are leveraging engines like Ogre3D, CrystalSpace, and so forth. Mercator Grid gets around the lack of art by using GIS data and NATO map symbology (yikes, looks like they're invading my hometown!).

I mentioned in a prior article that I converted my Mom to Linux. The machine I used to do this was my last Windows system, which I'd been holding onto just for game playing. The reason is that I no longer feel it's necessary to have Windows for game playing. There are tons of pretty good open source games out there that work on Linux. For instance, check out FreeCol (a Colonization-like game), Globulation 2 (a nifty RTS), VegaStrike (a space simulation/trading game), Freerails (a Railroad Tycoon type game), Wesnoth (a fantasy strategy game), and lgeneral (a turn-based strategy wargame ala Panzer General). Or see Glest, Cube, SauerBraten, and Nexuiz, which a couple of friends strongly recommend. In terms of game availability, the Open Source community seems to have hit a golden age; there are TONS of open source games.

You'll note that many of the good Open Source games are clones of popular commercial games. I think by focusing on producing reusable game components, or making it easy to take an existing cloned game and repurpose it with different art or gameplay, the Open Source community will be able to start producing more innovative and unique kinds of games.

If you want to help push Open Source gaming further along, I think it doesn't take too much work to make a big difference. Find a game you like and help it get packaged and added to your favorite distro. If you are good at art, contribute a logo or a replacement for their default unit or terrain artwork. Take an afternoon to contribute bug reports or patches to help improve the quality of these games. And most importantly, consider reformatting your Windows machine and putting 100% of your support towards Open Source games. :-)

Habitat for Humanity

Several of us from work volunteered for Habitat for Humanity today, working on building a house here in Beaverton. It's a house for a mother and son; the son has cerebral palsy and is in a wheelchair, so the house was specially designed with wide doorways, no stairs, and special closets.

When we got there, the exterior walls had already been built, so we worked on framing internal walls. It was pretty interesting to see a pile of studs turn into walls and rooms in just a day. It was also cool to talk with the team leads; they were retirees from Intel, Tektronix, etc. and do this three days a week. Pretty impressive.

Afterwards, Leann and I went out for beer with Eric Wilhem of uber-converter fame. We've got some good news coming out hopefully later this week. :-)
