Older blog entries for raph (starting at number 92)

Thought I might strike a chord with the Adobe font rant.

graydon: Yes, unifont is cool stuff. It's not at all fair for me to say that there's nothing going on in the world of free software fonts. Reaching into history a bit, you'd also have to give the Metafont project major props both for incredible technical innovation and a library of highly distinctive, useful fonts.

But my point still stands: if you try to point to any free font work that directly compares with Adobe's latest OpenType efforts, you will come up empty.

Regarding compilers and other bits of infrastructure: yeah, that stuff either needs to be free software or something very much like it. The idea of putting bits of this infrastructure into brightly colored shrinkwrap boxes distorts the whole endeavor in an incredibly destructive way. Proprietary operating systems and tools tend to suck really hard because of this. However, the new Adobe fonts most emphatically do not suck.

Mulad: I'm not complaining that there's no money for free software development. In fact, I manage to get paid pretty well for the work I do. What I'm complaining about is the indirect nature of this funding. OctobrX, Raster, and others are in the position of being sponsored by a patron (in their case, a public corporation losing money at a dizzying pace). Further, I support my free software habit through consulting and other "day job"-like activities. It's a very different thing from simply being paid for the work I do - an option the proprietary model provides, but free software denies me.

Let me try to present my argument again. All software sucks, most of it pretty badly. When you try to analyze the causes of this suckage, two winning strategies to make software suck become immediately apparent.

First, you can fail to bring adequate resources and talent to the project, dooming it to being hopelessly amateur. For some projects, free software does manage to make these resources available, mostly for bits of widely-needed infrastructure. For many others, it simply doesn't.

Another good way to make software suck is the familiar story of decommoditization: basing it on complex, ambiguous, and ever-changing "standards" as a way to create lock-in and drive the upgrade treadmill. Microsoft Office is basically a canonical example of this, but certainly not the only one.

Note that neither proprietary nor free software has a lock on either kind of suckage. For example, I consider sendmail a really good example of a piece of software which has managed to lock in its market segment by being bad. And for demonstrations of lack of resources, just look at any segment of the market without an ultra-compelling economic argument - Be applications, for example.

All I'm saying is that whenever somebody comes up with software that doesn't suck (or sucks less, anyway), we should sit up and take notice, and ask how it came into existence in spite of the powerful forces which always seem to subvert it.

C++

Over the past few days, I've had the pleasure of speaking with the creators of both Inti and Gtk-- about techniques for wrapping C libraries with C++ APIs (this is in the context of Libart). It's not hard to see how the split came about; the two projects have radically different approaches. Karl Nelson seems to be in love with every trick in the C++ book: he encouraged me to consider multiple inheritance, streams for operations on images, String-like objects for paths with copy-on-write semantics, and more. Havoc, on the other hand, encouraged me to use refcounted pointers, with a link back from the C object to the C++ counterpart. Inti is basically very C-like (ie, pointers are still pointers), but with type safety and some other nice features that C++ provides.

Even though the relative simplicity of Inti appeals to me, I don't think it's what I want. A user of Libart will often create many short-lived objects, and it makes a lot of sense to use stack (auto) allocation for those: you get static enforcement of constructor/destructor pairing almost for free, and the code is a lot cleaner when it isn't cluttered with explicit memory management.
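To make the stack-allocation argument concrete, here's a minimal sketch of the kind of wrapper I have in mind. The foo_path_* names are made up for illustration - they stand in for a Libart-style C object, not any actual Libart call:

    // Hypothetical C-style API - a stand-in for a Libart-like object.
    struct FooPath { double x, y; };
    FooPath *foo_path_new() { return new FooPath; }
    void foo_path_free(FooPath *p) { delete p; }
    void foo_path_lineto(FooPath *p, double x, double y) { p->x = x; p->y = y; }

    // Thin wrapper: stack (auto) allocation statically enforces
    // constructor/destructor pairing, so user code never sees a free.
    class Path {
    public:
        Path() : p_(foo_path_new()) {}
        ~Path() { foo_path_free(p_); }
        void lineto(double x, double y) { foo_path_lineto(p_, x, y); }
    private:
        Path(const Path &);             // copying disabled in this sketch
        Path &operator=(const Path &);
        FooPath *p_;
    };

    int main() {
        Path p;                 // no new, no delete
        p.lineto(10.0, 20.0);
        return 0;               // p's destructor frees the C object here
    }

The point is that the destructor, not the user, owns the free call. Copying is simply disabled here, though a refcounted or copy-on-write scheme like Karl suggests could slot in behind the same interface.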

Taking a step back from the Inti/Gtk-- split for a minute, though, I'm convinced that the real problem is simply C++'s overweening complexity. Providing a nice, high-level wrapper for Libart should be a relatively simple task. However, C++ makes my design space really, really big, especially when it comes to options for memory management. I'm warming to C++ now that it's (sorta) become a standard and decent implementations are available, but I also feel that constant vigilance is needed to fight back the complexity the language introduces, to keep it from infecting the rest of the project.

Ah, I can breathe a bit easier. Finished up one consulting job and sent an invoice. Now I can turn my attention to all the other consulting jobs I'm a bit behind on :)

I spent a bit of time looking at Adobe's new OpenType fonts. Wow. There is some amazing work that's gone into those.

Not to sound like a broken record, but I think it's a damn fine thing that the designers of those fonts are getting paid for their work. This is a golden age of font design, and it wouldn't be happening if there weren't copyright protection for the fonts, and if it weren't for the legal right of companies to demand payment for the use of the font.

Seeing beautiful work like this causes me to question my involvement in free software - what we're doing in terms of fonts is so pitiful and ugly by comparison, it's not funny. The fact that the free software model can't directly compensate people for their development work is one of the worst things that's broken about the model.

Of course, this is fonts. For software that makes up the computing infrastructure, the usual proprietary software model is even more broken, giving as it does incentive to produce complex, bad, buggy stuff. I really wish there were a better model. I don't know what it would look like, though.

I'll be at LWCE Wednesday and Thursday next week, though I'm not looking forward to the show. I wouldn't be going at all if it weren't for the fact that there are going to be quite a few people there I really want to meet. For the organizers, it's obviously just another trade show, with no concept at all of what makes free software unique. I will be doing a small, modest, low-tech hack. If you're going too, let me know and you can join in as well!

I'm still at Pacific Yearly Meeting, but will be back in a couple of days. It's really nice to have some time away from the daily crush. It's also a good way to spend more time with the family. As I expected, everybody loves Max, and Alan seems to be finding his legs this year - he's really making friends.

This year, I'm on Secretariat Committee, which means putting out the daily newsletter, and also dealing with the various disks that people bring in for printing, editing, etc. I knew that file format conversion from word processors was problematic, but I didn't know until now just how much of a mess it is. Very basic things you'd expect to work, don't.

My sense is that this is an extraordinary opportunity for free software. All you have to do is create a word processor that does a reasonably good job of handling all these wacky file formats. That's probably a fantastically hard problem, but on the other hand, it feels like something we hackers can deal with. Samba, for example, is a fine piece of software organized primarily around compatibility with strange proprietary protocols.

Between some intense controversy on the Gnome lists and some of my own work, the subject of taking on dependencies has become interesting to me. I think I'll write a front page essay on the subject, but here are a few scattered notes in the meantime.

How do you choose whether to take on a dependency on another project? On the positive side, you're reusing code, delegating maintenance tasks to others, and in general moving in the direction of sharing. On the negative side, you're placing quite a bit of trust in the maintainers to provide usable, maintained code of sufficient quality. Further, you're taking on the risk of an "impedance mismatch" where your needs don't match the functionality provided. Lastly, you're taking on the cost of version skew.

In my essay, I'll probably do a ToC-inspired analysis where you have N different projects using a single dependency. If each takes on a fair share of the maintenance responsibility, then the cost of using the dependency is 1/N the total maintenance cost, plus whatever integration cost is incurred. In some cases, this can be a strong win.
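In symbols (the numbers below are invented purely for illustration): if M is the total yearly maintenance cost of the dependency and I is a single project's integration cost, then each of the N projects pays

$$\mathrm{cost} = \frac{M}{N} + I$$

So with, say, M = 12 person-months/year, N = 6 projects, and I = 1 person-month, each project pays 12/6 + 1 = 3 person-months a year, versus the full 12 (plus initial development) for going it alone.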

A really fabulous example of a dependency that's desirable to take on is libjpeg. It provides high-value functionality (ie, you don't want to write your own jpeg codec), is already used by a huge number of important projects, and is mature and stable enough that you're probably not going to get bitten badly by version skew. Does your app deal with jpeg files? If so, use libjpeg or suffer the consequences.
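Just to underline how low the barrier is, here's roughly what it takes to pull the dimensions out of a JPEG with libjpeg. This is a sketch from memory rather than tested code, and a real program would install its own error_exit handler instead of relying on the default, which calls exit():

    #include <stdio.h>
    extern "C" {
    #include <jpeglib.h>
    }

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        jpeg_decompress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);  // default (exiting) error handler
        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, f);          // read from the stdio stream
        jpeg_read_header(&cinfo, TRUE);     // parses markers, fills image_* fields
        printf("%u x %u\n", cinfo.image_width, cinfo.image_height);
        jpeg_destroy_decompress(&cinfo);
        fclose(f);
        return 0;
    }

Compare that with what writing your own baseline-plus-progressive JPEG decoder would cost, and the 1/N arithmetic above looks very good indeed.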

Dependencies can be libraries, languages, or build tools (and probably other things I'm not thinking of right now - actually, standards came to mind as I was previewing). I think much of the same analysis applies to all three. Certainly, a dependency on build tools is a serious issue for many free software projects. In addition, I think the importance of version skew in language choice is often underestimated. This is, I believe, one of the major reasons we prefer C so much. Only now that C++ is slowly converging on a (well-implemented) standard is it a reasonable choice as a dependency for serious free software work. Similarly, scripting languages such as Perl and Python are hardly suitable for code expected to last a long time - both are undergoing deep-seated revision.

Anyway, these are just ramblings - I'll try to pull together an actual point when I write it up for the front page.

I'll be spending next week at Pacific Yearly Meeting with my family. Don't be surprised if I'm hard to get in touch with.

The last couple of weeks have been pretty intense, mostly with nonbillable, noncoding work. I guess it all pays off in the end, but I'm looking forward to returning to some actual development when I get back. Of course, then there's LWE. Ah well.

Max just started laughing today (or maybe yesterday). All else pales.

I just got back from visiting Transmeta and attending the VA Printing Summit. For a good picture of the latter, read Grant Taylor's excellent notes.

Kudos to VA for organizing the event. It's always a great service to bring a number of passionate hackers together to work on free software, and even nicer when it's done in style: a nice hotel, good food, good discussion rooms, etc.

One of the interesting aspects of the summit is the way it brought together people from disparate communities (that's what "summit" means, I guess).

The speaker for the HP presentation explained that since they wanted "great Linux printing", they naturally went to VA Linux (Nasdaq: LNUX). OK, so they didn't quote the ticker symbol, but it was clear that their thinking was business-to-business. Yet, after many months of work by a number of talented hackers, the results aren't that impressive. Yes, there is quite a lot of code now - print spooling scripts, GUI config tools, libraries to deal with some of the low-level detritus that inhabits printer space, etc. - but almost none of it is shipping with distros or particularly well integrated with other projects.

Part of the explanation, I think, is that HP's goals are somewhat different than those of the free software community. Our goal is not "great Linux printing", it's printing that doesn't suck.

To me, one of the high points of the summit was the vendor relations breakout session. It's clear that most printer manufacturers want to support Linux well, but just don't understand very well how to do it. After all, their business is making printers and selling ink, not acting as an arm of the Borg. So a lot of time was taken up (very productively, I think) thoughtfully explaining how our development processes work, answering questions, and going deeper into some of the more subtle points. I think it would be interesting to distill and bottle the discussion we had. In the meantime, support for inkjet printers benefits.

The summit really highlighted for me how we're not just forming a community, we're developing a complete culture around what we do. From this culture flow the way we make decisions, the way we manage change control, the way we cooperate intensely even as we compete, and our passionate, complex, subtle, and certainly not homogeneous attitudes towards intellectual property. And all of this is wildly different from the proprietary software world.

More down to earth, the IBM Omni driver sure looks interesting. Basically, they're taking the printer driver library from OS/2 and releasing it as free software. It appears to be a clean, modular design, and obviously a huge amount of testing and development with real printers went into it.

I was also amazed at the rapid rate of progress that the Gimp-Print project has been making. I'd say those drivers are now roughly on par with Epson's, certainly in some ways noticeably better. Robert Krawitz and I are both eager to integrate Even Toned Screening, which should hopefully improve quality even more.
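For those who haven't looked inside a driver: "screening" (or dithering, or halftoning) is the step that turns continuous-tone pixel values into the few ink levels a printer actually has. Even Toned Screening is considerably more sophisticated than what follows; this is just textbook Floyd-Steinberg error diffusion, to show the basic idea - quantize each pixel, then push the rounding error onto unprocessed neighbors so the local average density is preserved:

    #include <stdio.h>
    #include <vector>

    // Classic Floyd-Steinberg on 8-bit grayscale (0 = black, 255 = white).
    // NOT Even Toned Screening - just the baseline it improves on.
    void floyd_steinberg(std::vector<int> &img, int w, int h) {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int old_val = img[y * w + x];
                int new_val = old_val < 128 ? 0 : 255;  // quantize
                img[y * w + x] = new_val;
                int err = old_val - new_val;            // diffuse the error
                if (x + 1 < w)     img[y * w + x + 1]       += err * 7 / 16;
                if (y + 1 < h) {
                    if (x > 0)     img[(y + 1) * w + x - 1] += err * 3 / 16;
                                   img[(y + 1) * w + x]     += err * 5 / 16;
                    if (x + 1 < w) img[(y + 1) * w + x + 1] += err * 1 / 16;
                }
            }
        }
    }

    int main() {
        int w = 16, h = 4;
        std::vector<int> img(w * h, 64);   // flat gray, 25% brightness
        floyd_steinberg(img, w, h);
        for (int y = 0; y < h; y++) {      // about 3 dots in 4 print black
            for (int x = 0; x < w; x++)
                putchar(img[y * w + x] ? '.' : '#');
            putchar('\n');
        }
        return 0;
    }

The artifacts this simple version produces (worm-like patterns, especially in the highlights) are exactly what the fancier algorithms exist to eliminate.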

A few more quickies. kuro5hin, really sorry to see what assholes have done to your site. I eagerly look forward to its return. matt, you raise a lot of good points. The trust metric tries its best, but the accuracy of rankings is clearly limited by the quality of data going in. People, read the cert guidelines. Just using free software is not enough to earn a cert - you have to actually contribute something back. Not that there's anything wrong with using free software (I'm sure that's how we recruit nearly all of our developers), but that's just not what this site is about.

Word up to all the homies I've been spending time with in the last few days. I do greatly enjoy the real human contact.

I'll be at the VA Printing Summit in Sunnyvale Thursday and Friday. If you are in town and want to meet me, probably the best bet is to leave a message at the Sheraton, where I'll be staying (unless, of course, you know my pager number).

The summit itself ought to be quite an experience. There are a lot of issues to be hashed out. I have little or no hope that any of this will actually get resolved at the summit, but I do expect that a number of personal relationships between people in Linux printing space will grow. In my experience, it's these relationships that drive actually getting the work done. Kudos to VA for organizing the summit - we'd probably be wringing our hands a lot longer if they hadn't done it.

I made rinkj print samples today. Of course, rinkj (using Even Toned Screening) is better than anything else out there, but Gimp-Print has made dramatic improvements in the last couple of months. I'll be talking to Robert Krawitz about folding in rinkj, so it should be even better.

Next week is Quaker Pacific Yearly Meeting. I've been working at full bore recently. I think this will be an excellent time to take a step back and reflect a little. Plus, it's a really good way to spend lots of time with my family.

Oh, and there are new Libart mailing lists - follow the link. I won't be around much in the next couple of weeks to respond, but maybe people will find it useful to join up anyway.

What an amazing fucking week! Unfortunately, most of it is stuff I can't talk about yet.

I'm gearing up for the Printing Summit next week (this week?). I just finished drafting the Gnome presentation, and have also commented on the Ghostscript one. People might start to get the idea that I'm masterminding a vast conspiracy to take over 2D imaging in the free software world. Not to say they'd be wrong, of course :)

I've been working really hard, possibly too hard. Yesterday Heather took Alan and Max to the beach with their visiting out-of-town relatives, while I stayed at the studio and did six hours of billable work. Well, we'll see how much my life will allow taking things a little easier.

I'm finding myself liking PDF 1.4. There's quite a bit of real cleverness in there, of the kind that I sometimes miss. Implementing it efficiently is going to be a real intellectual challenge. Good! I just hope the patent thing doesn't blow up.

If you're netwave.provo.novell.com, could you please stop hammering Advogato? You're accounting for 69.4% of the hits right now, and Yakk's admittedly cool but still brokenly unscalable diarywatcher is accounting for another 8.5%. Salon referrals are at 0.4% right now, btw :)

I'll be working with Yakk to improve the scalability of diarywatcher, probably by adding XML export of "changes since last hammering^W query".

Well, being Salonned was fun. I was suitably caustic with the salesman who called wanting to sell me "a turnkey service providing custom branded webcommunities." Reminds me of that story from way back, before "turnkey" was common in spelling checkers.

I just bludgeoned the current freetype2 beta into building with autoconf. Here's a snapshot. Unpack over Nautilus, add a line for cut-n-paste-code/freetype2/Makefile to the toplevel configure.in, and you should be set. If anyone wants to test this and tell me all the magical chicken-waving auto* incantations that I got wrong, I'd appreciate it. Otherwise, I'll check it in this evening and wait for the sudden angry influx of bug reports.
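For the record, the change amounts to something like the following. I'm writing this from memory, so the exact shape of Nautilus's configure.in may differ a bit:

    dnl In the toplevel configure.in, add the new Makefile to the
    dnl list of output files (surrounding entries elided):
    AC_OUTPUT([
    Makefile
    ...
    cut-n-paste-code/freetype2/Makefile
    ])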

jfrisby: I think the TIGER/Line database has what you want, and much, much more. It's public domain, although getting your hands on the CDs is not always trivial. You can also check GIS Data Depot or this link list from Stanford. Good luck!

This is going to be another long one.

Tragedy of the Commons

I really think I'm onto something here. A deeper economic understanding of how the structure of software markets affects the public good could actually be quite useful, if for no other reason than to help inform public policy. So I'm going to continue thinking and writing about this for a little while longer.

Darius Bacon pointed out a couple of interesting things I missed in my original diary entry. First, not all economists react immediately to ToC scenarios by calling for direct government intervention. In fact, some of the trendiest economic thinking in recent years has been to create artificial "markets" for externalities. Instead of merely being allowed to pollute a certain amount but no more, you buy, sell and trade the rights to pollute. The idea is that it becomes more cost effective to clean up your factory than to buy all the pollution rights. Even more interestingly, these kinds of markets (at least in theory) allocate resources much more efficiently than heavy-handed governmental regulations.
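A toy calculation (with numbers invented for the purpose) shows why the trading version is cheaper in aggregate. Suppose factory A can remove pollution at $10/ton and factory B at $40/ton, and each is required to cut 5 tons:

$$\text{uniform regulation: } 5 \cdot 10 + 5 \cdot 40 = \$250$$

$$\text{trading: A cuts all 10 tons and sells B 5 permits: } 10 \cdot 10 = \$100$$

Any permit price between $10 and $40 per ton leaves both factories better off than under the uniform rule, and society gets the same total cleanup for less.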

Of course, the problem with these markets is that it's so tempting to cheat. After all, it's usually going to be cheaper to buy a politician than any of the other alternatives.

Darius goes on to point out that economists have dealt with ToCs of both signs. Underpaid artists represent one of the original ToCs considered by economic theory. Copyright law creates an artificial market, this time in "intellectual property", designed to avoid this ToC by providing direct incentives to artists. Of course, Jefferson was fathering American copyright and patent law a couple of hundred years before the ToC paper.

So what you have now is two distinct ToC scenarios playing out in the same space. One is the question of how to provide incentives for creating software, ie the same scenario that motivates copyright in the first place. The other, with opposite sign, is basically analogous to pollution and environmental degradation. Software which deliberately causes incompatibilities and abuses standards ends up affecting lots of innocent bystanders. Is the fact that MSIE 5.5 still doesn't support CSS properly at root much different from a paper mill putting out a noxious stench?

What unifies these two ToC scenarios, I think, is the concept of the network effect. As I wrote in my last long diary entry, network effects can create an incentive to do free software without the creation of artificial markets. These network effects are inherent in the nature of software.

Further, it's network effects that drive the "pollution" process. As in my word processor example a few weeks ago, it's one thing to buy a word processor because you like its UI, its features, etc., and quite another to be bludgeoned into buying one because everybody else is, and if you don't, your life will be hell.

So the economic policy implications are intriguing. As network effects take a larger role in the software market, intellectual property protection becomes both less needed (because people have more incentive to create free software) and less desirable (because of the economic incentives for producers of proprietary software to flout standards). A more enlightened approach to intellectual property legislation might "tune" the degree of protection based on the relative importance of the network effect. It's not hard, for example, to imagine disallowing all forms of intellectual property for basic network infrastructure, while remaining more like copyright for book- and music-like software such as games.

One last note regarding public policy. The whole antitrust concept grew up about 100 years ago to counteract "trusts," or businesses that leveraged network effects to achieve monopoly status. Railroad monopolies are some of the classic examples - if you can use your ownership of one rail line to prevent a competitor from having a good business on another, you've won. The analogies to Microsoft are pretty clear. However, railway monopolies were built on real stuff, ie land, track, etc. Microsoft's monopoly exists solely in an artificially created market. I'm not saying it's politically feasible, but it may be that the public interest would be best served simply by removing the economic petri dish the antitrust mold grows in.

The Practice Effect

Has anyone else read "The Practice Effect", a mediocre bit of science fiction by David Brin? It's striking to me how the central "idea" of the book (an alternate universe in which objects become higher quality the more they're used, or degrade if unused) models the actual economics of scarcity in free software today.

Unfortunately, degrading through disuse is also a real issue for free software. When rtmfd and I were playing with Speakfreely the other day, we were dismayed to find that the sound drivers sucked - the default config had insane latencies, while a different option brought latencies in line but introduced unacceptable skipping. And this is an app you'd think would be a prime candidate for real maintenance. I wonder if this has anything to do with the fact that it's public domain rather than released under a more restrictive free software license.
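For a sense of scale (numbers invented for illustration, not measured from Speakfreely): at telephony-quality 8 kHz, 8-bit mono, a 64 KB kernel audio buffer holds

$$\frac{65536 \text{ bytes}}{8000 \text{ bytes/s}} \approx 8 \text{ seconds}$$

of sound - hopeless for conversation. Shrinking the buffer fragments cuts the latency, but leaves no slack when the app refills them late, which is presumably the skipping trade-off we ran into.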

PDF 1.4

Adobe released the PDF 1.4 transparency spec some time over the weekend. So far, it looks pretty cool, although as I expected there are some areas that are underspecified, ie the doc doesn't tell you everything you need to render pixel-for-pixel matched with Adobe's implementation.
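The heart of the model, as I read the spec (so take my transcription with a grain of salt), is the basic compositing formula. With subscripts b, s, and r for backdrop, source, and result, and B the blend function (for Normal mode, B(Cb, Cs) = Cs):

$$\alpha_r = \alpha_b + \alpha_s - \alpha_b \alpha_s$$

$$C_r = \left(1 - \frac{\alpha_s}{\alpha_r}\right) C_b + \frac{\alpha_s}{\alpha_r} \left[ (1 - \alpha_b) C_s + \alpha_b \, B(C_b, C_s) \right]$$

The formula itself is simple; transparency groups, soft masks, and isolated/knockout modes all stack on top of it, and that is where the efficient-implementation challenge lives.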

This paragraph should strike a chill into the hearts of free software developers:

The information in this document is subject to the copyright permissions stated in PDF Reference, Section 1.4 [1.7]. Additionally, developers should be aware that many of the transparency extensions to the Adobe imaging model are the subject of patents and patents pending by Adobe Systems. The permission to use the copyrighted material in the PDF specification does not include the right to use any Adobe patents, except as may be permitted by an official Adobe Patent Clarification Notice (published at Adobe's web site or elsewhere).

I'm going ahead with my work implementing the PDF 1.4 imaging model, but it may turn out to be a really gargantuan mistake. Let's hope not.

Bookbinding

So when the PDF 1.4 transparency spec came to light this morning, I immediately wanted to handbind it. I had just gotten a box of archival paper from Gaylord and was eager to try it out. Well, it turns out that the grain direction was wrong, so there's a little cockling, and the pages fold and turn more stiffly than they should. I sent Gaylord a quick email note informing them of this discrepancy, and was pleasantly surprised to get a response back from them of "we're tracking down exactly why this happened, expect a response in a day or two, in the meantime please feel free to contact customer service for a full refund." They just made themselves a friend and supporter.

I've been refining my materials and practices, and feel like I'm almost there (getting precut archival paper with the right grain direction is pretty much the last piece). I'll probably put up a web page (or maybe even an Archival-Bookbinding-HOWTO in the LDP :) once I start to feel happy with the results. I don't know if it's just me, but there's something ineffably cool about the idea of getting a digital document published over the Internet in the morning, and within a day having an archival copy that can easily last 1000 years if reasonably well taken care of.
