Older blog entries for raph (starting at number 99)

Looks like I touched a chord with my ebook rant.

Ok, let me rephrase one of the points I tried to make. It would be cool if the ebook infrastructure we end up with had the property of paying authors reasonable amounts of money. If it doesn't, ebooks will still probably be useful tools, and there will be good reasons to write them. For example, a consultant who knows a lot about a field and is looking for more business might well find it justifiable to put up an ebook. But I do worry for the health of the media when the only works that get created are those subsidized by some ulterior motive.

Dave Winer also spent time at Seybold SF and wrote a good essay about ebooks and DRM. He rightly points out that DRM is no different than copy protection for software.

The only payment system that I can think of that makes any sense is voluntary payments, ie tips. The obvious problem with tips is freeloaders. Probably only a very small percentage of people will be willing to part with their hard (or soft) earned cash just to subsidize the artist. Even so, without middlemen skimming off the lion's share of the revenue stream, the minority who do pay is probably enough to fund artists about as well as they are funded right now.

A more subtle problem, and one I haven't seen articulated as much, is the distortion that tip-seeking behavior will cause. It's basically inevitable that the artists who succeed in a tip-based system will be those who have set out to maximize tips. Promotion will border on spam - for the mp3.com station I run, I can already see this line beginning to blur. Further, how many people are going to be scrupulous about tipping only real people rather than corporate fronts? On the Internet, nobody knows you're a corporation.

Nonetheless, I have hope that a vital tipping culture can emerge based on people who really appreciate art and are willing to find the good stuff and tip the artist. It is not essential for the majority of sheep to participate. The temptation to sell out will always be there, of course, but it always has been.

Book recommendation: Reinventing Comics by Scott McCloud. He talks about a lot of the issues surrounding digital production and distribution of comics. Not surprisingly, the issues of who has control over the distribution channel and how the artist gets paid are central. Scott is an enthusiast of micropayment systems, but ignores the fact that micropayments are unenforceable. Nonetheless, the book is quite thought-provoking and is a fabulous demonstration that you can present serious, complex issues in a comic book format.

Resolution

Am I the only one unimpressed with the results of ClearType? Radagast posted a screenshot of it running. It only works on LCD screens, and you can effectively turn it off in the screenshot by converting it to grayscale (ie, load it in the Gimp and press Alt-G). I see an improvement, but it's pretty subtle.

By contrast, the improvement from increasing resolution is massive. Yet, high-res displays are not popular, largely because most software will draw little tiny illegible icons and so on. You'd think Apple would have fixed this in Mac OS X and their Aqua UI toolkit (a rewrite from the ground up), but no. Windows and X software is not a whole lot better. At least the greater configurability of X software offers some hope, but it's still a hassle to get a consistently good display at higher resolution.

Doing a non-scalable display is the Y2K bug of user interfaces. By the time high-res displays arrive, we will have mostly fixed the problem, but only after considerable difficulty. Are you doing a user interface? Is it easy to scale? No? Then don't make fun of the programmers who used 2-digit date fields because it was easy and efficient.

Seybold

I spent the last three days at Seybold SF, much of the time helping out in the Artifex (commercial licensors of Ghostscript) booth.

A few impressions first. Apple had a huge presence - a gigantic booth, lots of people doing demos on the machines, etc. I like to see competition for Wintel, but on the other hand, it's very difficult for me to get excited. Yeah, the cube has style, but in terms of performance it's just a 450MHz uniprocessor. I think the cheapo dual Celeron I bought last year for just over a thousand bucks probably outperforms it.

Linux, on the other hand, had virtually no presence. Corel had Corel Draw 9 for Linux tucked away in a corner of their booth, and there were a few companies doing server stuff that just happened to use Linux, but that was about it. This is a bit surprising to me, because Linux really seems to have a lot to offer for graphic arts, starting with generic server tasks and going from there.

There was a lot of XML there. This is hardly surprising, as XML seems to be one of the big hype waves in "digital publishing" right now.

PDF

PDF is becoming the dominant interchange format for graphic arts documents. A great many apps on the floor were showing much improved PDF import and export capabilities. This includes both Adobe's own products (notably Photoshop 6) and competitors, including the upcoming Corel Draw 10.

PDF is making a lot of money for Adobe. It's not surprising, then, that Adobe is pulling a classic decommoditization strategy. The PDF 1.4 spec (not yet published) has a bunch of new stuff that will be difficult for competing products to implement. That, of course, includes the blending and transparency stuff that I'm implementing, but also the ability to reflow text. They also showed a beta of Acrobat 5 running on Palms and WinCE devices, including the Compaq iPaq.

Electronic "books"

E-books were a major theme at the show, with Microsoft massively showing off their Reader platform, including ClearType. As should be expected from Microsoft, it looks really good - clearly real typographers and UI designers were involved in this product. They will probably get a lot of users just by being so available.

I'm wildly ambivalent about ebooks. The whole concept seems to be organized around "digital rights management." The person who coined that phrase must have been remarkably insensitive to miss the Orwellian overtones. Sure, I'll sign a contract and agree to have my rights managed, and digitally at that. Welcome to the future.

In any case, most DRM is implemented around the concept of the "trusted client," or client software that is programmed to respect some access policies. Adobe Acrobat, for example, has a simple password-based scheme and rinky-dink encryption. Quoting from the PDF book:

Note: PDF cannot enforce the document access privileges specified in the encryption dictionary. It is up to the implementors of PDF viewer applications to respect the intent of the document creator by restricting access to an encrypted PDF file according to the passwords and permissions contained in the file.

Such an approach, of course, is pretty contradictory to free software. If free viewers are available, it is always possible, and hopefully even easy, to comment out the if (!password_matches) {...} section of the code, and to distribute the result widely.
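To make the point concrete, here is a toy sketch in C of what a "trusted client" check amounts to. The struct and field names are invented for illustration - this resembles neither Acrobat's nor Ghostscript's actual code. The permission bits and password travel with the document, and the "enforcement" is a single comparison in the viewer:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical document structure, for illustration only. */
struct doc {
    int         allow_print;      /* permission bits stored in the file    */
    const char *owner_password;   /* so is the (weakly protected) password */
};

static void print_doc(const struct doc *d, const char *password)
{
    /* Comment out or invert this test in a free viewer and the
     * "protection" is gone; the document data is readable either way. */
    if (!d->allow_print && strcmp(password, d->owner_password) != 0) {
        fprintf(stderr, "printing not permitted by the document author\n");
        return;
    }
    puts("...rendering pages to the printer...");
}

int main(void)
{
    struct doc d = { 0, "s3kr1t" };
    print_doc(&d, "wrong guess");
    return 0;
}
```

The real schemes layer encryption on top, but as the quote above admits, the access policy itself is only as strong as the client's willingness to honor it.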

Nonetheless, it is important for authors to get paid for their work. To the extent that free software is unable to meet these needs, people will be drawn towards the proprietary systems that can, and I do not blame them.

As ebooks become more popular, it is inevitable that Napster-like trading will become widespread. I think this is also important, as it provides a needed safety valve against those who, for business reasons, would excessively restrict our right to read.

In the meantime, I find myself very much liking paper books. Aside from the obvious issues of durability (books can and do last over a thousand years), portability, high resolution, high speed random access, and so on, the culture of books has evolved an imperfect but still reasonable balance between liberty, business, and incentive for authors. Libraries, used bookstores, and trading between friends are all popular, respected approaches to sharing books.

My main complaint about paper books is that authors get far too small a cut (sound familiar?). However, self-publishing remains a perfectly viable option. Much self-publishing is for "vanity," ie people who pay for the printing of their books because they're simply not good enough for real publishers, but there are some amazing exceptions. Edward Tufte's books are of course beautiful examples, and then you have eccentric thinkers such as Ted Nelson, who self-published the first edition of Computer Lib in 1974.

The document file format to end all document file formats

I find document file formats to be an endlessly fascinating area of study. The most important axis for categorizing document formats is probably structure vs. presentation. Each point on this spectrum has unique advantages and disadvantages. The structure end brings you much more flexibility for editing, analyzing, and adapting (for example, reading texts aloud). The presentation end, conversely, gives a graphic designer much more control over the actual presentation, allowing (in the hands of a good designer) much higher visual quality. The tension between these goals drives much of the continuing evolution of document file formats, and suggests that designing an uber-format is not trivial. Certainly, we haven't seen any good uber-format yet.

PDF has been planted firmly in the presentation camp. PostScript was (I use past tense because it's no longer being actively developed) even more so - it should really be considered a graphics file format rather than a document format. At least PDF adds text searchability and some notion of document structure.

From a commercial point of view, there is pressure for PDF to become an uber-format. However, nailing down the exact formatting brings you to an unresolvable dilemma when displaying in a small window: scale or scroll. Both are bad choices, and both lead to a poorer user experience compared with a more structural approach, which can reflow the text.

Now that PDF is targeting small devices, the issue has finally come to a head. Thus, the PDF 1.4 spec has additions to inch down the spectrum towards structuralism, and is capable of reflowing text intended for display in small windows. At this point, I'm not sure what they did. My guess is that they just bolted on a structural document format. If you reflow, you probably give up any real control over formatting and positioning.

In any case, I'm very disappointed in the quality of the structuralist vs presentationist discourse. Both sides tend to talk about The One True Way. To me, this approach misses important truths. You need to be thinking in terms of the quality of user experience for authors, readers, and editors, in a diverse array of contexts. For a Project Gutenberg e-text, pure structuralism is a good, reasonable choice. For a magazine ad, anything less than pure presentationism is probably wrong. For everything in between, well, that's what makes life interesting :)

Even so, it's possible to make better and worse compromises. HTML, for example, neither represents the true structure of documents particularly well nor offers high-quality (much less controllable) presentation. TeX has the amazing feature that it can accurately capture the structure of the document, yet render completely consistently on all platforms, allowing great artistic control over layout. The relative popularity of HTML over TeX is of course evidence that the world is unfair.

Well, that's probably enough ranting for now.

I broke down and added some basic InterWiki support to Advogato. I made that link by typing <wiki>InterWiki</wiki>. You can also type MeatBall:InterWiki, which does the expected thing.

Note that Advogato is a member of the InterWiki space. For example, my diary is Advogato:person/raph and this entry is Advogato:person/raph/diary.html?start=97.

It seems to me that Advogato diary entries already have a hint of WikiNature. I'm hoping that this new InterWiki stuff can bring more of that out. Since people ask for ways to comment on diaries, one suggestion is to include a Wiki link at the bottom of your diary entry. Let's play with it a bit and see how it works.

At the same time, since the needed infrastructure was the same, I also added the <person> tag. So you can make a link to raph by typing <person>raph</person>.

My PDF blend mode work continues apace, and now I have an actual nonbug image showing two transparent stars composited with the Multiply blend mode.

In other news: since 5.0, PGP has had an optional backdoor feature intended for corporate customers. Now we learn that this feature has a serious bug, which can be exploited fairly easily to compromise message privacy. Cool.

PGP is not free software, but of course free software has its own share of exploits, even OpenBSD. The bottom line is that we have no fucking idea how to build trustworthy systems.

But computers are fun toys, aren't they?

I got somewhat stuck today implementing the PDF 1.4 blend modes. I did an algebraic simplification that I thought was nice, but it turns out to introduce a division by 1 - alpha. Since alpha can approach 1, this leads to numerical problems. Ah well, I'll just add yet another alpha channel when compositing non-isolated groups (that brings the total up to three).
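For the curious, here is a rough sketch in plain C (not the actual Libart code) of the PDF 1.4 basic blend-and-composite formula with the Multiply blend function, B(Cb, Cs) = Cb * Cs. Colors and alphas are non-premultiplied values in [0, 1]. Even this basic formula has a division by the composite alpha; the non-isolated group math I'm wrestling with above adds more divisions of the same flavor, which is where the numerical trouble comes from:

```c
#include <stdio.h>

/* PDF 1.4 "Multiply" blend function for one color channel. */
static double blend_multiply(double cb, double cs)
{
    return cb * cs;
}

/* Basic blend-and-composite for one channel, non-premultiplied, in [0,1]:
 *   alpha_r = alpha_b + alpha_s - alpha_b * alpha_s
 *   C_r = (1 - as/ar) * Cb + (as/ar) * ((1 - ab) * Cs + ab * B(Cb, Cs))
 */
static void composite_multiply(double cb, double ab,
                               double cs, double as,
                               double *cr, double *ar)
{
    *ar = ab + as - ab * as;
    if (*ar == 0.0) {
        *cr = 0.0;                      /* fully transparent result */
        return;
    }
    *cr = (1.0 - as / *ar) * cb
        + (as / *ar) * ((1.0 - ab) * cs + ab * blend_multiply(cb, cs));
}

int main(void)
{
    double cr, ar;

    /* Two 50%-opaque grays overlapping, more or less. */
    composite_multiply(0.8, 0.5, 0.6, 0.5, &cr, &ar);
    printf("composite: C = %.3f, alpha = %.3f\n", cr, ar);
    return 0;
}
```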

You may find one of my bug images or another amusing. I suspect I'll have a lot more bug images before I'm done. PDF 1.4 blend modes are hard! Of course, I like challenges, and it's probably good for business - if I find it hard, presumably other people might as well, and would be inclined to simply use Libart.

While feeling stuck, I wandered over to the C2 Wiki. I had played with Wikis before, but somehow didn't appreciate how cool they are until now. Basically, they're structured as a nearly pure anarchy - anyone can edit any page. The cool thing is that this gives you the power to make the text you're reading better. Virtually all other "community" systems leave you powerless in this regard. Sure, you can add your own text, but sometimes that pales in comparison to simply being able to fix something that's wrong.

I also find that Advogato has just become part of the InterWiki namespace (see the Meatball:AdvoGato entry). This pleases me greatly. I feel like doing something to reciprocate, perhaps by adding an <interwiki> tag.

Also speaking of distributed networking thingies, Mojo Nation continues to show quite a lot of promise, and has gotten over some rather bad bugs that prevented the system from being useful. I've actually downloaded some music that I wanted to listen to :)

There's a lot of room for Mojo Nation to improve, but the Evil Geniuses For a Better Tomorrow seem to be quite good at that, and the code is also LGPL (and written in Python), so help yourself. In fact, once the substrate becomes solid, I wouldn't be surprised if a whole industry popped up of people doing interesting things with the network.

A few more pictures on the web pages of Alan and Max.

I'm back from lwce. It was quite a show - commercial beyond belief, but still it was a gathering place of geeks, so was worthwhile. The Eazel after-party was the high point for me, I think.

I've said it before, and I'll say it again: by focussing exclusively on the trade show aspect of the Linux business, lwce is doing the community a real disservice. On and around the show floor, I put up a few dozen copies of a poster in protest. I would have put up more, but instead spent the time actually talking to other hackers, which was a better use of it. In any case, I hope someone in the show's organization listened, and that they start turning away from alienating the very people who make free software what it is.

The design of the ".org pavilion" was strange, to say the least. All the .org booths were lined up against the far wall, with cage-like metal grids between the booths. I think they were shooting for a "spaceship" theme, but it came off as a minimum security prison instead.

I have very mixed feelings about the corporatization of free software. It's nice to be successful, but we need to stay true to our roots. Most people, including many in free software, don't have a clear picture of what our roots are. I think it's mostly about learning. I have a lot more thoughts on this subject, and will probably write up an editorial when I have a bit of time.

The iPaq is a very cool device. Jim Gettys carried one around running Linux, X, and a few simple apps, and was basically mobbed the whole time. I look forward to getting mine :)

I burned a CD of music for Alan. He'd been singing "Love Shack", "Larger Than Life" and several other songs he'd heard on the radio, and I wanted him to be able to listen to high-quality originals. I'm really looking forward to him being on the Internet and being able to get the music for himself. I don't think people have a clue how incredibly empowering it is for kids to be able to choose their own music. Downloading music from the Internet is here. Those who try to stand in its way are simply going to get run over. I hope we end up with a system that compensates artists better than the one we have now, but this is far from assured.

I've been listening to music using XMMS recently, and also playing with the waterfall plugin. Very pretty stuff, but the fact that the spectrum display is linear, rather than logarithmic, really bothers me. 90% of the screen area is given over to the high frequencies in the track. So when there are interesting things going on with flanged hi-hats, etc., you see nice patterns, but the actual voice and melody are all munged together. I think I'd ditch xmms and go back to mpg123, but I find that the latter dies on a number of malformed mp3's, while xmms plays gamely through.
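If anyone feels like hacking on it, the fix is conceptually simple: collapse the linear FFT bins into logarithmically spaced bands before drawing. A rough sketch in plain C follows - the constants and names are mine and have nothing to do with the actual XMMS plugin API:

```c
#include <math.h>
#include <stdio.h>

#define FFT_BINS 256     /* linearly spaced magnitude bins from the FFT */
#define BANDS     16     /* logarithmically spaced display bands        */

/* Collapse linear FFT bins into log-spaced bands, keeping the peak of
 * each band, so low frequencies get as much screen space as highs. */
static void log_bands(const double mag[FFT_BINS], double out[BANDS])
{
    for (int b = 0; b < BANDS; b++) {
        /* band edges form a geometric series over the bin range */
        int lo = (b == 0) ? 0
                          : (int)pow((double)FFT_BINS, (double)b / BANDS);
        int hi = (int)pow((double)FFT_BINS, (double)(b + 1) / BANDS);
        if (hi <= lo)
            hi = lo + 1;
        if (hi > FFT_BINS)
            hi = FFT_BINS;

        double peak = 0.0;
        for (int i = lo; i < hi; i++)
            if (mag[i] > peak)
                peak = mag[i];
        out[b] = peak;
    }
}

int main(void)
{
    double mag[FFT_BINS], bands[BANDS];

    for (int i = 0; i < FFT_BINS; i++)
        mag[i] = 1.0 / (1.0 + i);       /* fake, bass-heavy spectrum */

    log_bands(mag, bands);
    for (int b = 0; b < BANDS; b++)
        printf("band %2d: %.4f\n", b, bands[b]);
    return 0;
}
```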

Code reuse

Dijkstra has long been a skeptic of code reuse. He has a quote that reads something like "the only reason to reuse a piece of code is if it's exceptionally high quality." I tend to agree with him. Take a skim through sourceforge or freshmeat one day and ask yourself how much of that code really deserves to be reused.

On the flip side, there is code out there that has been lovingly crafted and refined over a period of years (libjpeg and zlib immediately come to mind, but there are others). For some reason, these bits of code find themselves reused without inheritance, frameworks, contract programming, factoring, xp, or any of the other purported silver bullets.

So, I know I am a crotchety old man, but I file code reuse and the many supporting technologies intended to foster it into the "only vaguely interesting" category. Concentrate on getting really high quality code out there, and the rest will take care of itself. Of course, that's really, really hard, probably beyond the reach of most free software hackers. Feel free to prove me wrong, though :)

Scripting

While we're on the subject, I find that a lot of the really low quality code out there is infected by what I call the "scripting mentality." Three facets particularly bother me:

1. Not checking error codes. Virtually all invocations of functions or commands should be of the form status = invocation(...); if (status) { cleanup(); }. Having cleanup do the right thing (error logging, reporting to the user interface, making sure the system doesn't go into an inconsistent state) is often much more difficult than the task at hand. Yet it is absolutely necessary if the goal is a program that doesn't just break randomly. Commands do fail. Most script writers just ignore that. (See the first sketch after this list.)

Here, I think, is the perfect illustration of the syndrome: as of perl 5.005_03, the print function does not return a false value when it fails (for example, when writing to a full disk), even though that's promised by the manual. I find it revealing that this bug isn't even noticed enough to get fixed.

2. Sloppy config information. Most scripts require some kind of config information just to function, and more to do useful things. Yet, getting the config information into the script is often completely ad hoc, typically involving both environment variables and hand-edited files.

3. Quoting. Since scripting languages and the interfaces to scriptable components tend to be string-based, there are usually string quoting and escaping rules that have to be followed. Yet, it's so easy to ignore these issues. The Web, with its wonky URL and query escaping rules, is particularly vulnerable. Unix sh quoting problems are a rich source of material for the Unix Haters Handbook as well. (See the second sketch after this list.)
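To illustrate point 1, here is what the boring-but-necessary discipline looks like in C (a minimal sketch; the function name is mine). Writing a file can fail at open, at write, or even at close, and each failure needs a cleanup path:

```c
#include <stdio.h>

/* Write data to path, checking every status and cleaning up on failure.
 * Returns 0 on success, -1 on error (with a message already printed). */
static int save_data(const char *path, const char *data)
{
    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror(path);
        return -1;
    }
    if (fputs(data, f) == EOF) {
        perror(path);
        fclose(f);
        remove(path);              /* don't leave a truncated file behind */
        return -1;
    }
    if (fclose(f) != 0) {          /* the buffered flush can fail too,    */
        perror(path);              /* e.g. on a full disk                 */
        remove(path);
        return -1;
    }
    return 0;
}

int main(void)
{
    return save_data("/tmp/demo.txt", "hello, world\n") ? 1 : 0;
}
```

And for point 3, a sketch of the quoting trap and one way out of it (POSIX C; the command and filenames are arbitrary). Building a shell command by pasting strings means inheriting the shell's quoting rules; passing an argument vector directly sidesteps them entirely:

```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Run "wc -l <filename>" without going through the shell. */
static int count_lines(const char *filename)
{
    /* The tempting one-liner:
     *     snprintf(cmd, sizeof cmd, "wc -l %s", filename); system(cmd);
     * falls over (or worse) when filename contains spaces, quotes, or ';'. */
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execlp("wc", "wc", "-l", filename, (char *)NULL);
        _exit(127);                     /* exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(void)
{
    return count_lines("a file; with $weird 'characters'") < 0;
}
```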

Thus, I consider the scripting approach to be quite valid for one-off jobs when programmer time is critical, and the person using the script can understand and deal with the consequences of errors without too much pain. It doesn't tend to scale very well to the case when lots of people other than the author will be using the code.

DDoS

This morning's slowness at Advogato was caused by a DDoS against the Berkeley computer science subnets, facilitated by people breaking into RedHat boxes on campus.

Our computing infrastructure is so fragile and untrustworthy that it really should only be considered a toy. Yet, somehow, we manage to get our work done!

Woohoo! Serious progress on the glyph cache. You can check out my most recent code from nautilus/librsvg. The new test rig (test-ft-gtk) shows quite decent performance, and I'm not done optimizing. It also has a nice, pretty display when used with the xdvng.pfa font from itrans.

This test shows that client-side rendering of aa text is probably a lot more reasonable than many people think. Of course, when the Render extension happens, aa text will be almost indistinguishable in speed from xlib's current text.

Thought I might touch a chord with the Adobe font rant.

graydon: Yes, unifont is cool stuff. It's not at all fair for me to say that there's nothing going on in the world of free software fonts. Reaching into history a bit, you'd also have to give the Metafont project major props both for incredible technical innovation and a library of highly distinctive, useful fonts.

But my point still stands; if you try to point to any free font work that directly compares with Adobe's latest OpenType efforts, you will come up empty.

Regarding compilers and other bits of infrastructure: yeah, that stuff either needs to be free software or something very much like it. The idea of putting bits of this infrastructure into brightly colored shrinkwrap boxes distorts the whole endeavor incredibly destructively. Proprietary operating systems and tools tend to suck really hard because of this. However, the new Adobe fonts most emphatically do not suck.

Mulad: I'm not complaining that there's no money for free software development. In fact, I manage to get paid pretty well for the work I do. What I'm complaining about is the indirect nature of this funding. OctobrX, Raster, and others are in the position of being sponsored by a patron (in their case, a public corporation losing money at a dizzying pace). Further, I support my free software habit through consulting and other "day job"-like activities. It's a very different thing than simply being paid for the work I do, an option provided by the proprietary model, but denied me by free software.

Let me try to present my argument again. All software sucks, most of it pretty badly. When you try to analyze the causes of this suckage, two winning strategies to make software suck become immediately apparent.

First, you can fail to bring adequate resources and talent to the project, dooming it to being hopelessly amateur. For some projects, free software does manage to make these resources available, mostly for bits of widely-needed infrastructure. For many others, it simply doesn't.

Another good way to make software suck is the familiar story of decommoditization: basing it on complex, ambiguous, and ever-changing "standards" as a way to create lock-in and drive the upgrade treadmill. Microsoft Office is basically a canonical example of this, but certainly not the only one.

Note that neither proprietary nor free software has a lock on either kind of suckage. For example, I consider sendmail a really good example of a piece of software which has managed to lock in its market segment by being bad. And for demonstrations of lack of resources, just look at any segment of the market without an ultra-compelling economic argument, for example Be applications.

All I'm saying is that whenever somebody comes up with software that doesn't suck (or sucks less, anyway), we should sit up and take notice, and ask how it came into existence in spite of the powerful forces which always seem to subvert it.

C++

Over the past few days, I've had the pleasure of speaking with the creators of both Inti and Gtk-- about techniques for wrapping C libraries with C++ APIs (this is in the context of Libart). It's not hard to see how the split came about. These two projects have radically different approaches. Karl Nelson seems to be in love with every trick in the C++ book. He encouraged me to consider multiple inheritance, using streams for operations on images, String-like objects for paths with copy-on-write semantics, and more. Havoc, on the other hand, encouraged me to use refcounted pointers, with a link back from the C object to the C++ counterpart. Inti is basically very C-like (ie, pointers are still pointers), but with type safety and some other nice features that C++ provides.

Even though the relative simplicity of Inti appeals to me, I don't think it's what I want. A user of Libart will often create many short-lived objects. It makes a lot of sense to use stack (auto) allocation for those - you get static enforcement of constructor/destructor pairing, almost for free, and code is a lot cleaner if it doesn't have a lot of explicit memory management stuff hanging around.

Taking a step back from the Inti/Gtk-- split for a minute, though, I'm convinced that the real problem is simply C++'s overweening complexity. Providing a nice, high-level wrapper for Libart should be a relatively simple task. However, C++ makes my design space really, really big, especially when it comes to options for memory management. I'm becoming more attracted to C++ now that it's (sorta) becoming a standard, and that decent implementations are becoming available, but I also feel that constant vigilance is needed to fight back the complexity that the language introduces, to keep it from infecting the rest of the project.

Ah, I can breathe a bit easier. Finished up one consulting job and sent an invoice. Now I can turn my attention to all the other consulting jobs I'm a bit behind on :)

I spent a bit of time looking at Adobe's new OpenType fonts. Wow. There is some amazing work that's gone into those.

Not to sound like a broken record, but I think it's a damn fine thing that the designers of those fonts are getting paid for their work. This is a golden age of font design, and it wouldn't be happening if there weren't copyright protection for the fonts, and if it weren't for the legal right of companies to demand payment for the use of the font.

Seeing beautiful work like this causes me to question my involvement in free software - what we're doing in terms of fonts is so pitiful and ugly by comparison, it's not funny. The fact that the free software model can't directly compensate people for their development work is one of the worst things that's broken about the model.

Of course, this is fonts. For software that makes up the computing infrastructure, the usual proprietary software model is even more broken, giving as it does incentive to produce complex, bad, buggy stuff. I really wish there were a better model. I don't know what it would look like, though.

I'll be at LWCE Wednesday and Thursday next week. I'm not looking forward to the show at all, though. I wouldn't be going if it weren't for the fact that there are going to be quite a few people there I really want to meet. For the organizers, it's obviously just another trade show, with no concept at all of what makes free software unique. I will be doing a small, modest, low-tech hack. If you're going too, let me know and you can join in as well!

I'm still at Pacific Yearly Meeting, but will be back in a couple of days. It's really nice to have some time away from the daily crush. It's also a good way to spend more time with the family. As I expected, everybody loves Max, and Alan seems to be finding his legs this year - he's really making friends.

This year, I'm on Secretariat Committee, which means putting out the daily newsletter, and also dealing with the various disks that people bring in for printing, editing, etc. I knew that file format conversion from word processors was problematic, but I didn't know until now just how much of a mess it is. Very basic things you'd expect to work, don't.

My sense is that this is an extraordinary opportunity for free software. All you have to do is create a word processor that does a reasonably good job of handling all these wacky file formats. That's probably a fantastically hard problem, but on the other hand, it feels like something we hackers can deal with. Samba is an excellent example of a piece of software organized primarily around compatibility with strange proprietary protocols.

Between some intense controversy on the Gnome lists and some of my own work, the subject of taking on dependencies has become interesting to me. I think I'll write a front page essay on the subject, but here are a few scattered notes in the meantime.

How do you choose whether to take on a dependency on another project? On the positive side, you're reusing code, delegating maintenance tasks to others, and in general going in the direction of sharing. On the negative side, you're placing quite a bit of trust in the maintainers to provide usable, maintained code of sufficient quality. Further, you're taking on the risk of an "impedance mismatch" where your needs don't match the functionality provided. Lastly, you're taking on the cost of version skew.

In my essay, I'll probably do a ToC-inspired analysis where you have N different projects using a single dependency. If each takes on a fair share of the maintenance responsibility, then the cost of using the dependency is 1/N the total maintenance cost, plus whatever integration cost is incurred. In some cases, this can be a strong win.
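As a teaser, here is the back-of-the-envelope version of that argument. The symbols are mine: M is the dependency's total yearly maintenance cost, I is your integration and version-skew cost, W is the cost of writing and maintaining your own replacement, and N is the number of projects sharing the load.

```latex
\[
  \underbrace{\frac{M}{N} + I}_{\text{take on the dependency}}
  \quad\text{vs.}\quad
  \underbrace{W}_{\text{roll your own}}
\]
```

Sharing wins whenever M/N + I < W, and the case only gets better as N grows - provided I stays small, which is exactly where impedance mismatch and version skew come back to bite.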

A really fabulous example of a dependency that's desirable to take on is libjpeg. It provides high-value functionality (ie, you don't want to write your own jpeg codec), is already used by a huge number of important projects, and is mature and stable enough that you're probably not going to get bitten badly by version skew. Does your app deal with jpeg files? If so, use libjpeg or suffer the consequences.
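As a concrete taste of what you get for free, here is a minimal libjpeg decode skeleton (link with -ljpeg). It is a sketch, not production code: error handling is left at the library's default error_exit, which simply aborts, whereas real code would install its own handler.

```c
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

/* Decode a JPEG file into a tightly packed 8-bit buffer.
 * Returns NULL on failure to open or allocate. */
unsigned char *read_jpeg(const char *path, int *width, int *height, int *comps)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return NULL;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    *width  = cinfo.output_width;
    *height = cinfo.output_height;
    *comps  = cinfo.output_components;

    unsigned char *pixels = malloc((size_t)*width * *height * *comps);
    if (pixels != NULL) {
        while (cinfo.output_scanline < cinfo.output_height) {
            JSAMPROW row = pixels
                + (size_t)cinfo.output_scanline * *width * *comps;
            jpeg_read_scanlines(&cinfo, &row, 1);
        }
        jpeg_finish_decompress(&cinfo);
    } else {
        jpeg_abort_decompress(&cinfo);
    }
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return pixels;
}

int main(int argc, char **argv)
{
    int w, h, n;
    if (argc != 2)
        return 1;
    unsigned char *p = read_jpeg(argv[1], &w, &h, &n);
    if (p == NULL)
        return 1;
    printf("%s: %dx%d, %d components\n", argv[1], w, h, n);
    free(p);
    return 0;
}
```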

Dependencies can be libraries, languages, or build tools (and probably other things I'm not thinking of right now - actually, standards came to mind as I was previewing). I think much of the same analysis applies to all three. Certainly, a dependency on build tools is a serious issue for many free software projects. In addition, I think the importance of version skew in language choice is often underestimated. This is, I believe, one of the major reasons we prefer C so much. Only now that C++ is slowly converging on a (well-implemented) standard is it a reasonable choice as a dependency for serious free software work. Similarly, scripting languages such as Perl and Python are hardly suitable for code expected to last a long time - both are undergoing deep-seated revision.

Anyway, these are just ramblings - I'll try to pull together an actual point when I write it up for the front page.
