Older blog entries for raph (starting at number 215)

Fitz replies

kai: I'm unaware of the node design in OpenGL display lists, but they sound interesting. The OpenGL folks seem quite performance oriented. Do you by chance have a good reference?

For normal horizontally placed text, the default will be 1/4 pixel horizontally, and a whole pixel in the vertical direction. These defaults are based on fairly careful empirical study. 1/4 pixel horizontal positioning is nearly indistinguishable in quality from ultraprecise positioning, while 1/2 pixel is not quite enough. Vertically, alignment with the baseline helps rendering quality. The only real downside is that line spacing may become slightly irregular. I doubt this will be much of a problem in practice, though.

For vertically oriented text, I'll probably just switch the axes, but I'm not sure, as I haven't done any kind of careful study.

Finally, for rotated text (not aligned with either axis), I'll turn off caching altogether, because the hit rate is likely to be low.
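To make the caching policy concrete, here's a quick Python sketch. The names are hypothetical, not actual Fitz API; the point is just that only the quantized fractional position goes into the cache key:

    def glyph_cache_key(font_id, glyph_id, x, y, rotated=False):
        # Rotated text: skip the cache entirely; hit rate is too low.
        if rotated:
            return None
        # Only the fractional part of the position affects the rendered
        # pixels. Horizontal phase is quantized to 1/4 pixel; vertical
        # position snaps to the baseline, so its phase drops out.
        x_phase = int(round((x % 1.0) * 4)) % 4
        return (font_id, glyph_id, x_phase)

Rendering then snaps x to the nearest 1/4 pixel and y to the nearest whole pixel before rasterizing, so every glyph sharing a key really does share pixels.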

As for progressive rendering, it's not high on my list of interests. I don't expect that the difference in performance between low quality and high quality modes will be enough to justify the complexity. And, as you suggest, it becomes less interesting as systems get faster.

mslicker writes: raph, I have to ask, what problem are you trying to solve?

This is absolutely the right question. I haven't written it down explicitly, and for that I apologize. The top-level problem is rendering 2D graphics images provided by the client. At first, the main client will be Ghostscript, so the images will be PostScript and PDF files, but I expect interactive applications will use the library as well.

Then, there are several subgoals. Performance is critical. Memory utilization is also very important in some applications (especially embedded printers, which have very limited hardware). Even on desktops, using less memory is usually a big win, because of memory hierarchy effects. Quality of rendering is critical, but doesn't directly affect the tree API.

Given these constraints, I'd also like the client API to be clean and simple, even when the implementation does sophisticated things under the hood. I want to do extensive unit and regression testing, and I feel that Python is a good language for these, so Python bindings are also important.

So why am I arriving at a general tree infrastructure? For one, the imaging model includes PDF 1.4 transparency, which has a very strong tree structure. Optimized rendering requires analysis of the tree. The painter's algorithm won't work as well.

And the simplest tree implementation, one in-memory object per node, is far too expensive in memory utilization when the tree is very large, or the available memory is limited. Thus, it's important to store the tree in a concise format, competitive with Ghostscript's existing "command list" at the least. I can easily envision multiple tree implementations tuned for different tasks. The in-memory one will be implemented first anyway, because of its simplicity. I am not a fan of gratuitous virtualization, but when you have two different implementations of the same conceptual interface, it's often a good idea.

At this point, I'm exploring the design space. This exploration is informed by thinking about more general problems, many of them drawn from my DOM experience. When a general approach is good, I'll use it. However, when generality forces unpleasant tradeoffs, it will be a casualty within Fitz. I am spending time thinking about more general trees so I know which is which.

I haven't written up Fitz in any real detail yet. In some ways, it's unfair to be blogging it, because you see the raw thinking without the needed context. When I start work on Fitz in earnest, I will do much writing, and it will all be very accessible. But I have to concentrate on several hard problems for the upcoming Ghostscript 8.0 release first.

Change notification in trees

This is another fairly dense technical piece. If you were hoping for diversion or scandal, sorry. It is a companion to my caching in trees piece from a few days ago.

For interactive displays, the Fitz display tree will follow a Model/View pattern. The model is the tree, and the view is a rendering module that renders the tree and displays it on the screen.

What happens when you change the tree? If you wanted the simplest possible implementation and didn't care about performance, you'd re-render the tree from scratch, and display that. But I do care about performance, and I am willing to sacrifice some of that simplicity for "minimal updating". At root, this is the realization that, if you have the rendering for the old tree in hand, then to get the rendering for the new tree, you may only need to do a small incremental computation.

Let's look at minimal update in more detail. A deep optimization running through much of Fitz is culling based on bounding boxes. Changes to an object are guaranteed not to affect rendered pixels outside the bounding box (note that effects like drop shadows don't fit neatly into this category; thus, they're not directly supported in the Fitz imaging model). So, if you change an object, a reasonable approximation to the update area is the union of the old and new bounding boxes. If you delete or insert an object, just the one bounding box.
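In sketch form (hypothetical names, not actual Fitz code; a bbox here is an (x0, y0, x1, y1) tuple), the update-area rule looks like this:

    def bbox_union(a, b):
        return (min(a[0], b[0]), min(a[1], b[1]),
                max(a[2], b[2]), max(a[3], b[3]))

    def update_area(old_bbox, new_bbox):
        # Insertion has no old bbox; deletion has no new one.
        if old_bbox is None:
            return new_bbox
        if new_bbox is None:
            return old_bbox
        # Changed object: union of where it was and where it is now.
        return bbox_union(old_bbox, new_bbox)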

However, for some objects you may be able to do better. For example, when dragging one corner of a semitransparent rectangle, the minimal repaint area may be L-shaped. In this case, repainting the entire rectangle would be wasteful.

One more performance issue: if the tree is very large, then the cost of having one "event listener" per node could be too much. Keep in mind that the only justification for all this is performance. If you add complexity and make other aspects of performance (like memory usage) worse, then it's probably not a win.

So the broad outlines of a change notification system begin to suggest themselves. If the tree is small, then having an event listener on each node is reasonable. Changes to leaf nodes are distinguished from changes to tree structure. The former are dispatched first to a change handler for the specific leaf type. The fallback is the bounding box approach. Changes to tree structure also invoke the bounding box method.

However, as the tree grows larger, you only attach event listeners to some nodes. It's a type of cache. If a node's descendants carry no listeners of their own, its listener expects a notification any time a descendant changes. Otherwise, it only cares about changes to the node itself, because other listeners will handle the rest.

So now we have the concept of event propagation. For at least some kinds of listeners, you want to bubble the event up the tree until you hit a listener. In the extreme case, you can have just one listener at the root of the tree.

Now I face a difficult design decision. If I want to have multiple Views of the same Model, event propagation gets trickier. In particular, the propagation patterns for different Views can't interfere with each other. You can't simply stop bubbling an event upwards when you hit a listener, because another View may have its listener further up. Bubbling all the way to the top isn't very satisfying either, because that will tickle the listener at the root even if a deeper listener has already handled the change.

There are a bunch of other hacks that don't work so well either. You could, for example, check to see whether a deeper handler has already been called, and ignore the notification if so. But you get into trouble if you want to aggregate several changes together into one notification. Keep in mind that if you change most of the nodes in the tree, then re-rendering from scratch is almost certainly better than doing an incremental computation for each node changed.

If you did have a good solution for the one-View case, then you could extend that to the multi-View case by having an "event group" for each listener. When a listener catches an event, it sets a bit so that other listeners with the same event group farther up the tree don't get called. But the complexity is unsatisfying.

That said, let me present one reasonably simple approach supporting multiple Views. Here, the tree implementation does just bubble events all the way to the root. It's up to the client (the View) to apply the event propagation logic. At an internal node, if you have listeners for all children, and if the node itself is unchanged (meaning no insertions or deletions at that node), then you can ignore the notification.
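In sketch form (hypothetical names again, just to pin down the logic), the View-side filtering might look like:

    def should_ignore(view, node, event):
        # Structural changes (insert/delete at this node) always matter.
        if event.structural:
            return False
        # A bubbled leaf change is redundant here if every child is
        # already covered by a deeper listener of this same View.
        return all(view.has_listener(child) for child in node.children)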

However, there is one unappealing aspect of this approach. If you want to add more listeners (say, after they've been thrown out of the cache), then it only makes sense to add listeners to all children of a node. If you only add them to some, you can't reliably ignore the notification, so you might as well not add any. I expect that the fanout may be quite large, so this could be a problem.

So far, I haven't been able to come up with what feels like the Right Answer, only different tradeoffs. Right now, restricting the thing to one View seems like the best tradeoff. For Fitz, I think this will very much be the common case. But for other, more general tree applications (particularly those involving networking), it's probably not good enough. It would be nice to have a general solution that was simple enough to be practical for Fitz, but I guess I'm just too incompetent to come up with one. Ah well, such is life.

15 Jun 2002 (updated 15 Jun 2002 at 07:18 UTC)
Advogato

I finally got around to fixing the locks so that lock contention won't cause huge delays in reading pages. Writing (updating diaries and the like) can still be affected, but this is less urgent to fix.

LotR wrote:

We need a diary-writing trust metric!

Okay. I think you're right. I might well be motivated to write a generic metadata engine and apply it to the specific application of "how interesting is diary X?". Here's roughly how it will work.

When you're logged in, you'll get a chance to enter a one-to-ten score for another user's diary page. I might put this right under the "Certify <user> as:" selection at the bottom of individual person pages, but I'm also inclined to make it more accessible, for example allowing bulk updates on a customized version of the recentlog page.

This goes into the database as generalized assertions. At first, the only assertions that will be allowed are of the form "<user>'s diary is 7 on a one-to-ten scale", but the engine doesn't care what kind of assertions are present. "Roquefort is a particularly fine cheese" is also plausible. The reason for limiting the assertion space is to avoid scaling problems, which can become quite severe as the number of assertions scales up.
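To fix ideas, here's a sketch of what such an assertion record might look like (the schema here is hypothetical, just for illustration):

    # Hypothetical assertion record; the engine just stores the subject,
    # predicate, and value, plus who asserted it, without caring what
    # they mean.
    assertion = {
        "asserter":  "alice",
        "subject":   "bob",
        "predicate": "diary-rating",  # one-to-ten scale
        "value":     7,
    }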

Then, roughly nightly, there will be a process that computes metadata scores, using the method I presented in my HOWTO. This will compute a confidence value for each user in the trust graph and each assertion. You can see where the scaling problems come from. I am sure there exist techniques for storing this data more sparsely, but I'm not interested in doing that research now.

Finally, the recentlog display will be annotated with the metadata scores. I'll probably also put in a threshold option.

bytesplit

I am trying my best to be patient with bytesplit. I realize he is a human being like all of us, but one who, for whatever reason, is driven by demons that cause him to antagonize people here. I sincerely hope that he is able to tame these demons, and interact positively with Advogato.

At the same time, I realize this is unlikely. As such, bytesplit is providing an opportunity to look at the trust metrics and the dynamics of this site more critically. The current trust metric certainly has limitations, and is definitely not a magic bullet for making this site an interesting read and a comfortable place. That's up to us.

What the trust metric does do is automatically compute membership in the community based on peer certifications. While I personally feel that bytesplit's contributions to free software are marginal at best, ten people here feel that his level of interest is high enough to rate an Apprentice cert. And, he does show interest in learning more, and his on-topic writings are perfectly reasonable for an aspiring apprentice. Given that, I don't think the trust metric should reject bytesplit's ranking.

All this is good motivation to implement the generalized metadata as proposed above. Unlike the existing trust metric, this metadata system would directly address quality and relevance of writing. I'll be very interested to see how it goes.

Cert inflation

We definitely have cert inflation here. Part of that is because the trust metric is generous, part of it is that people here are generally doing an inaccurate job of evaluating peer cert levels. This is useful information for people trying to design metadata systems: a significant fraction of the information input will simply be wrong.

I could certainly make the trust metric less generous. The easiest way to do this would be to have negative certifications as well as positive ones. But I'm not convinced that cert inflation is the most important problem in the world to solve.

Asynchrony

David McCusker called again, and we had another nice chat, this time focussing on writing programs in asynchronous style. I think it's a hard problem. I think it's even worse for library writers, because it may not be realistic to assume that most users of your library will understand asynchronous programming very well. I told David of X as a cautionary tale. X actually has very sophisticated logic for dealing with asynchrony properly. For newcomers to X, this all seems very intimidating and complex (asynchronous grabs are a good case in point). In fact, I think there is widespread failure in levels above X to deal with race conditions and the like correctly.

Every time you do something over the network, it's asynchronous whether you like it or not. Yet, event-driven programs seem a lot more complex than their simple, synchronous cousins. David would like to recapture that simplicity in asynchronous programs. A lot of other people have tried things in this direction, without very happy results so far. I feel that CORBA is a cautionary tale in this regard. It pretends that method calls are really local, when in reality they're decomposed into two asynchronous events, and of course all kinds of things can happen in the meantime.

I haven't seen any of the details of Mithril yet, but I'm fairly skeptical that it will make asynchronous programming accessible to less-skilled programmers. On the other hand, I am perfectly willing to believe that it will be a good tool for expressing asynchrony concisely, and thus useful for people who know what they're doing.

One detail we touched on but didn't really go into was whether the fundamental message sending operation on channels should be synchronous (as in CSP) or asynchronous. In CSP, if you send a message on a channel, but there is nobody ready to read on the channel, you block. The other way to do it is to append the message to a queue. Both are reasonable primitives, in that it's quite straightforward to simulate one in terms of the other. So which do you choose?
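To make the "simulate one in terms of the other" point concrete, here's a purely illustrative Python sketch of a CSP-style rendezvous channel built from unbounded queues (one sender and one receiver assumed, for simplicity):

    import queue

    class SyncChannel:
        # CSP-style rendezvous built from unbounded queues: send()
        # blocks until a receiver has actually taken the message.
        def __init__(self):
            self._data = queue.Queue()  # asynchronous, unbounded
            self._ack = queue.Queue()

        def send(self, msg):
            self._data.put(msg)
            self._ack.get()             # wait for the rendezvous

        def recv(self):
            msg = self._data.get()
            self._ack.put(None)         # release the blocked sender
            return msg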

I mentioned that the CSP way might be easier to reason about. There's another issue that came to mind after our call: the queue needed for the fully asynchronous case requires unbounded resources in general. Obviously, in tiny embedded systems, this can be a real problem. On desktops, it's less clear. But if a system is operating under very high load, you probably want to worry about whether the queues will keep growing. Of course you can always implement flow control on top of async messages, but that's not really the point. In CSP, the default is not to grow unboundedly.

mwh: I haven't been following Stackless Python closely, but I am aware of it. Looking briefly at the site, I see they are now implementing a concurrency and channel approach directly inspired by Limbo and CSP. That could be very cool.

Farmers Market

We went to the Farmers Market in Benicia, as we do most Thursdays. This was the first week of corn season, so we got some. It was amazingly good. I like farmers markets; they're a good way to make a connection to the people who actually grow your food.

Fitz

Antony Courtney called me up today, and we had a nice chat. He is interested in working on Fitz, largely so he can use it for his thesis work on functional reactive user interfaces.

His work is done in Haskell, so Haskell bindings for Fitz would be part of the deal. I think these two things could work well together. Fitz has a somewhat functional design. The rendering of a tree is a function of the tree. Caching is strongly related to memoization in functional languages, and minimal updating is a form of incremental computation.

Language bindings are to be an integral part of Fitz. We'll use the Python bindings extensively for testing. It's also, I think, the best language for experimenting. But I don't mean to exclude Perl or Ruby programmers, either. I just wish it was easier to write high quality cross-language wrappers.

What I'd really like is something like Pyrex, only with the ability to generate code for many different runtimes. Pyrex itself seems to be coming along quite nicely. As of 0.3, you can define classes in C, which seems quite useful. At some point, I might be motivated to try an experiment of making Pyrex generate, say, a Ruby extension.

Picoservers

David McCusker writes briefly of picoservers. These sound like fun. Basically, the problem boils down to: how do you best express asynchronous behavior in a programming language? Threads are one way, but they have lots of pitfalls, including performance and scaling issues. Event-based programming is more lightweight, but has a reputation for being very tedious and low-level. Also, event-based programming by itself can't take advantage of multiple processors.

I haven't looked at SEDA carefully yet (it's the thesis work of Matt Welsh), but it looks interesting.

People have been thinking about asynchrony for a long time. One of the more elegant approaches is Hoare's Communicating Sequential Processes (CSP). I'm a bit surprised that CSP hasn't gone further. It seems like a nice higher level abstraction compared with event-driven programming, but without all the nasty problems with race conditions and lock contention that threads bring you.

There is actually a CSP implementation in C. It seems like more typing than languages that have CSP baked-in. Occam is the most famous of these languages, but I think Limbo might be a more useful incarnation. Occam tends to be fairly static, but Limbo lets you create "threads" and channels very dynamically.

Python's generators are already sorta like coroutines. David Mertz talks about using them to implement what he calls weightless threads.
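To sketch the flavor of that (my own toy illustration, not Mertz's code): each generator is a weightless thread, and a trivial round-robin scheduler drives them:

    def worker(name, n):
        # A "weightless thread": a generator that yields to give up
        # the processor.
        for i in range(n):
            print(name, i)
            yield

    def run(threads):
        # Trivial round-robin scheduler.
        while threads:
            t = threads.pop(0)
            try:
                next(t)
                threads.append(t)   # still alive: back of the line
            except StopIteration:
                pass                # finished; drop it

    run([worker("a", 2), worker("b", 3)])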

Caching in Trees

This will be another fairly technical entry on trees. It's been in the queue for a few days. The focus is on the Fitz display tree, for which memory efficiency is a major factor. Tonight we look at caching.

If you have a display tree, there are a lot of things you might want to cache: intermediate renderings, bounding boxes and other geometry information, etc. For each Bezier curve, for example, you might want to cache a decomposition into triangles. The memory footprint for these cached objects might be significant compared to the original tree. This is why we want a cache rather than simply annotating the tree nodes with the extra info.

Mutating the tree can invalidate cached data, as well. In some cases, the relationship between the mutation and the cache is nontrivial. For example, if you change the color of a Bezier shape, you invalidate an intermediate RGBA rendering, but the triangle decomposition remains valid.
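One way to capture that relationship is a per-node-type table mapping each mutable attribute to the caches it invalidates. A sketch, with made-up attribute and cache names:

    # Hypothetical invalidation table for a Bezier shape node: changing
    # an attribute invalidates only the caches listed for it.
    INVALIDATES = {
        "color":     {"rgba_rendering"},
        "path":      {"rgba_rendering", "triangles", "bbox"},
        "transform": {"rgba_rendering", "triangles", "bbox"},
    }

    def note_change(caches, node, attr):
        for name in INVALIDATES.get(attr, ()):
            caches[name].pop(node, None)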

We're not going to pin down the exact representation of the tree. One in-memory object per node is the simplest way, and should work. Another approach that should work is storing a serialization of the tree in a btree-like structure. In this case, our node id is effectively a file offset to the beginning of the serialization. Thus, the id of a node can change as the tree is mutated.

Dealing with "sliding" node id's is probably too hard for clients of the tree, so we have an additional concept of "node reference", which is an in-memory object that essentially wraps a node id. When a node id moves, the tree implementation updates the corresponding node reference. This way, clients holding node references don't have to worry about them moving around.

Node references might take dozens of bytes of RAM each, but node id's are essentially weightless. We hope that tree clients hold a relatively small number of node references, even as the size of the tree scales up.
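A sketch of the indirection, assuming the btree-style serialization (hypothetical names; a real implementation would track live references more cleverly):

    class NodeRef:
        # In-memory wrapper around a node id (a file offset, in the
        # btree case). The tree updates node_id in place when the
        # serialization shifts, so clients never see ids move.
        __slots__ = ("tree", "node_id")

        def __init__(self, tree, node_id):
            self.tree = tree
            self.node_id = node_id

    def slide_ids(live_refs, start, delta):
        # Called by the tree when an edit moves everything at or after
        # `start` by `delta` bytes.
        for ref in live_refs:
            if ref.node_id >= start:
                ref.node_id += delta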

Now we get to tonight's central design question: what should the key of our various caches be? A node id? A node reference? Something else? Here, we consider some alternatives.

Persistent id

A very common pattern in databases is to add a persistent id to each node. The value is somewhat arbitrary, but must be unique for each node in the tree. If we had persistent id's, then it would make good sense to use them as the cache keys. The problem is the extra storage cost. We're trying to pare that down to the bone.

Node id

It's tempting to use node id's (i.e. file offsets in the btree case) as cache keys. The problem is that if the node id moves, the cache key needs to be updated. Keeping the inverse map from node id to cache keys also has nontrivial storage costs.

A more subtle, but important, argument against node id keys: in some usage scenarios id moves may be very rare, so there is a risk that the update machinery won't be adequately tested.

Node reference

The cache key could simply be a pointer to a node reference object. If the tree moves the corresponding node id, it updates the internals of the node reference, but the pointer remains constant.

Cache in tree

A rather different approach is to insert the cached values into the tree. The advantage is that the RAM costs can be very low (near-zero if the tree is stored on disk as a btree). The disadvantage is that computing and evicting cached values now requires traffic with the tree, with attendant performance and fragility problems.

Also, if there are multiple caches, then they'll need to be properly multiplexed so values from the caches don't interfere with each other.

Cache in node reference

This is something of a hybrid of the above three approaches. Instead of the cache being represented as a hash table to the side of the tree, the cache entry is an extra field in the node reference. If there is a single cache (or small, bounded number of caches), then this approach is appealing. Otherwise, you have to do multiplexing as above.

I think you see most of these approaches in real systems. For one example, the Gnome Canvas includes a bounding box in all nodes, and also an SVP (sorted vector path) in all Bezier shape nodes (thus, I claim, it represents "cache in tree"). Unfortunately, it never evicts any elements from the cache, so the memory requirements can become quite painful.

For Fitz, I now think I have an answer. Most caches will use node references as keys. However, I may treat bounding boxes specially, and store them in node references. Saving a hashtable lookup may be a significant win, and it also helps that the value is of small constant size.

Also, note that navigational links (parent, first child, next sibling), which are internal to the tree implementation, can similarly be cached in the node reference. The basic rendering traversal is: given a bounding box, find all child nodes intersecting that bbox. If the nodes are in-cache, it should be possible to do this very quickly.
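Putting the pieces together, here's a sketch of that traversal. It assumes (hypothetically) that bbox, is_leaf, and the navigational links are all available on the node reference, and draw_leaf stands in for the actual rasterizer:

    def bboxes_intersect(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def render(ref, view_bbox, output):
        # Cull the whole subtree if its cached bbox misses the view.
        if not bboxes_intersect(ref.bbox, view_bbox):
            return
        if ref.is_leaf:
            draw_leaf(ref, output)     # hypothetical leaf rasterizer
            return
        # Navigational links are cached in the node reference too.
        child = ref.first_child
        while child is not None:
            render(child, view_bbox, output)
            child = child.next_sibling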

I'm happy with this, but still don't feel I've come to terms with change notification. I'll blog that in the next few days, and probably lose my remaining two readers.

Kids

Alan started his Korean Royal Court Martial Arts (Koong Joong Mu Sool) classes this week. It seems to be a perfect match for him - he's in much better shape.

Community

David McCusker called on the phone, and we had a nice, wide-ranging conversation. A major theme was social networks and community. I'm looking forward to continuing the conversation.

Screening

I'm having fun with the imaging parts of the code, but I'm a bit stuck on the Ghostscript integration parts. I'll just keep at it.

Why?

David McCusker has a beautiful short piece on overcomplex technology. We should be asking ourselves, "is there a simpler way?" far more often.

In the case of trust metrics, there definitely was "goal drift". I started out trying to make a better PKI, and came up with the tmetric ideas as part of that. Since then, I've come to consider PKI too hard, and in need of a fundamental rethink. The tmetric ideas still seem sound to me, though.

davidw points out that a hardware metadata site without the trust metric would still be useful. Of course. I think a good rule of thumb is whether you want to automate important decisions. If your query is: "what kind of reliability can I expect from a JXQ-11?", then it's reasonable to wade through the trolls, spammers, etc. (if any), and you probably win it all back in overall simplicity. But part of my idea is that you'd use such data to automatically figure out orders to place electronically. Then, you really do start to care about people putting false information into the system.

I'm a little behind on sleep, so tonight's entry will be short. Today was a nice family day.

A friend gave Alan a crystal radio set, so we put up a 100' antenna and tried it out. We were able to get a faint signal on one station, so it was a cool demo of radio waves, but could have been cooler.

I've been thinking more about trees over the weekend, particularly caching and change notification. Unfortunately, it gets complicated, and I worry that most readers don't have much context. At some point, I'll put up real infrastructure for Fitz, and the display tree will be part of the design docs. In the meantime, I like the blog form. I'll post more tomorrow.

Dave Winer has a big thread on blogs, journalism, and integrity. I'm not that moved by arguments of integrity. My feeling is that journalism is governed by Sturgeon's Law just like everything else. I fear that tech journalism is particularly affected, though. Most tech stories in the mainstream press have serious factual errors, and show lack of understanding on the part of the writer. I don't really care why tech journalism is so bad. It's likely to have something to do with the highly centralized structure of the media business, but I haven't thought much about the exact pathways. Dave asks: "Dumb-it-down or deliberate manipulation?" I'm not sure it matters much.

Blogs are also subject to Sturgeon's Law, of course. The vast majority are not worth reading. But there's real diversity out here in blog-land, no doubt related to the fact that blogs are not owned by a tiny number of megacorps. Can you imagine what a mainstream story on tree access protocols would look like? Yet, if you're one of the few people who cares about this, you're reading my blog, and I'm probably reading yours, and we're both engaging the subject very deeply.

Dave points out interviews as particularly bad in the mainstream. He's right. The process is fundamentally broken. The ideal of objectivity, while it might be important in other contexts, is somewhat pointless in an interview. It's the interviewee's point of view you care about. Why filter and distort it through a journalist who doesn't understand the topic and is a bad writer to boot? A blog lets you say what you meant, and if people misinterpret you, you can answer them.

Btw, I'm notoriously bad at checking my telephone answering machine. It's one way of doing flow control, I suppose. But I'll check it tomorrow. Now it's time to catch up on sleep.

Kids

Yesterday was Alan's last day at school. He was very emotional about saying goodbye to his friends and teacher. But that evening, we went to sushi to celebrate his graduation from kindergarten, and ran into one of his classmates. He worries a lot about making friends, but he's actually very social, much more so than either Heather or me at his age.

Max is going through another great leap forward in language development. As I blogged recently, he's working on irregular verbs. A few days ago, he said, "I broked it. I broked it. I broke it." You could read it on his face - "not quite right. Nope. Aah, nailed it!". This evening, he said "I dropped my bottle. Put it over my legs." And, touching the scotch tape I used to repair our copy of Goodnight Moon, "something sticky." It's only been a couple of months or so since most of his utterances were single words.

He's also very advanced physically. He can now kick a soccer ball well, in the direction he wants and with some force. Also, he blew me away by announcing "circle", then folding his collapsible sunshield into precisely that shape.

Alan had a similar language burst at almost the same age (25.5mo). Actually, we have to be very careful about marvelling at Max, because it makes Alan feel jealous. We reassure him about how smart he is, and how proud we are of him, but he still expresses a lot of doubts.

Keys

Wes briefly forgot his ThinkPad's BIOS password. This kind of thing happens all the time to real people. I commented on the need for far more sophisticated rituals for guarding keys, with both social and technical aspects. It's a hard problem, and it clearly can be done in both peer-to-peer and centralized flavors. Governments and evil corporations have a lot of motivation to pursue the latter. I'd like to see more thinking on decentralized approaches.

Of course, at the heart of the problem is the fact that it's all but impossible to securely store a key on a general purpose PC. Ry4an Brase pointed to a really neat toy. This particular model is a bit limited (in particular, if you lose or break it, you're hosed), but I think more specialized hardware like this will play an important role.

Cheap parts

A few people have expressed interest in getting a dual Athlon system similar to spectre. One question that came up: is generic RAM actually any less stable or reliable than the "name brand" variant? I really have no idea. If you were going to get a gig or more, the price difference could be significant. I chose not to take the risk, but I have a feeling that it's probably mostly a marketing strategy on behalf of the "name brands." For example, I know that Apple SDRAM, at $150 for a 256M PC133 SODIMM, is no different than Crucial's at $69.99 (including shipping). The question is whether it's any more reliable than the $38 part from Pricewatch. (side question: why the hell did Apple put a SODIMM socket in the iMac?)

Again, I think this would be a killer app for a trustworthy metadata system. What if almost all generic parts were good, but there were a few suppliers whose parts weren't? Wouldn't it be cool to actually know that? Also, if people had a good place to report stuff like drive failures, I think information about lemon products would disseminate much faster.

Spectre

The machine arrived today. It seems pretty sweet. Ghostscript compile is down to 53s. Fitz rendering of the tiger is 130ms, but some of that is debug overhead. I think I'm going to like this machine.

The only serious hitch so far seems to be on-board Ethernet, which gets stuck. Popping in a Tulip PCI board fixes that.

150 dpi

The Matrox G550 can handily drive the monitor at 2048x1536, or around 150 dpi. It looks surprisingly good, I think in large part due to the quality of the Matrox card. I might stay at this resolution for a while. Obviously, most default fonts are too tiny, but I can easily configure the ones I really care about.

testrgb speed is 30.5 Mpix/s in 24-bit. This is pretty good, but I was hoping for better. I'm not sure I have everything tuned yet. For one, this is XFree86 4.1.0, and 4.2.0 is now out. Incidentally, setting AGP to 4x makes no noticeable difference. I'm not surprised, but in theory it should. testrgb is very bandwidth-intensive, which is what AGP is all about.

802.11b audio

rkrishnan: you're right of course, that real Internet telephony is nontrivial. But in talking about D/A's, my main point was that the same basic platform could also do CD-quality audio, which would make it much more interesting to "enthusiasts", as opposed to corporate customers.

I think it's inevitable that all these kinds of products will come out over the coming months.

