Older blog entries for company (starting at number 141)

Video Hackfest day 1

The Video hackfest is on!
I originally wanted to summarize the happenings of the day, but Carl took notes: Go read them. I just want to add that I’m very happy with how it’s turning out: Lots of discussions happening all around, the weather is great and the hostel is awesome. Off to bed so I don’t miss any discussions tomorrow…

Syndicated 2009-11-20 00:11:18 from Swfblag

I do it my way

While preparing the video hackfest I realized Google Maps is still not the best guide for walking: “Please stay clear of pedestrian precincts”.
 
Google’s suggested way from the accommodation to the venue on the left, my suggestion on the right. I guess I’ll do it my way.

Also, for everyone living behind a rock: The video hackfest is officially announced, lots of video goodness for everyone ahead.

Syndicated 2009-10-26 21:55:12 from Swfblag

video hackfest

This is the result of applying my recent gstreamer-cairo hacking to a real-world application – Webkit. It’s very fast here and could record that video you see there while playing it without breaking a sweat. If you want to run the demo yourself, try http://people.freedesktop.org/~company/stuff/video-demo.html. (Be warned: That link downloads roughly 200MB of movie data. And it’ll likely only run in recent Epiphany or Safari releases.)

Also, thanks to funding from the X Foundation and hosting by Collabora, we will be doing a video hackfest in Barcelona from the 19th to the 22nd next month. GStreamer, Cairo, X, GL and driver hackers will focus on stabilizing these features so that movies will be first-class citizens in the future of your desktop, no matter what you intend to do with them, including doing video browsing lowfat style.
I’m both excited and anxious about this hackfest; I’m not used to being in charge when lots of awesome hackers meet in one place.

Syndicated 2009-10-14 21:22:41 from Swfblag

Cairo is slow

Here’s the pipeline:

gst-launch-0.10 filesrc location=/home/lvs/The\ Matrix\ -\ Theatrical\ Trailer.avi ! decodebin ! queue ! cairomixer sink_0::xx=0.5 sink_0::xy=0.273151 sink_0::yx=-0.273151 sink_0::yy=0.5 sink_0::alpha=0.6 sink_0::xpos=72 sink_0::ypos=48 sink_2::xx=1.38582 sink_2::xy=-0.574025 sink_2::yx=0.574025 sink_2::yy=1.38582 sink_2::alpha=0.7 sink_2::xpos=20 sink_2::ypos=150 sink_2::zorder=10 sink_1::xpos=300 sink_1::ypos=100 ! video/x-cairo,width=800,height=500 ! pangotimeoverlay ! cairoxsink filesrc location=/home/lvs/the_incredibles-tlr_m480.mov ! decodebin ! cairocolorspace ! video/x-cairo ! queue ! cairomixer0. filesrc location=/home/lvs/transformers.trailer.480.mov ! decodebin ! cairocolorspace ! video/x-cairo ! queue ! cairomixer0.

Here’s the result:

CPU utilization when playing this is roughly 30%, which I attribute mostly to the video decoding. The Intel 945 GPU takes 25% doing this. If I use the sync=false property, the video is done after 59s. It’s also completely backwards compatible when no hardware acceleration is available. In fact I used the same pipeline to record the video, just replacing the sink with a theoraenc.

Implementation details are here and here. The total amount of code written is roughly 10,000 lines, put into the right spots in gstreamer, cairo, pixman and the xserver. Of course, the code is not limited to GStreamer. I expect Webkit and Mozilla will use it too, once it’s properly integrated. And then we can do these effects in Javascript.

Syndicated 2009-10-05 10:53:11 from Swfblag

Byzanz 0.2.0

I wanted to do something “easy”. By that I mean hack on code that doesn’t need reviews by other people or tests or API/ABI stability. But I wanted to do something useful. And I found something: I dug out Byzanz, the desktop-to-GIF recorder tool I wrote almost 4 years ago.

The first thing that stood out to me was the ugly functions this code uses. Functions like gdk_drawable_copy_to_image() or gnome_vfs_xfer_uri() are really not to be used if you want maintainable code. So I replaced GDK with Cairo functions and gnome-vfs with gvfs. It looks much nicer now and is more powerful, too. So if you still have code that uses outdated libraries, make the code use modern ones. It’s worth it.
This whole process comes at a bit of a cost though: Byzanz absolutely requires bugfixes that are (so far) only in the unreleased git master trees of Cairo, Gtk and GStreamer. So for now Byzanz is the most demanding application available on the GNOME desktop!

Next I realised that I learned a lot about coding in the last 4 years. My code looked really ugly back then. So I made it avoid bad things like nested main loops (do NOT ever use gtk_dialog_run()), made it follow conventions (like the gio async function handling) and refactored it to have a sane structure. While doing that I easily got rid of all the bugs people complained about. That was fun.

Then I made sure to document the goals that guided my design of Byzanz. From the README:

  • purpose

    Byzanz records animations for presentation in a web browser. If something doesn’t fit this goal, it should not be part of Byzanz.
  • correctness

    When Byzanz provides a feature, it does this correctly. In particular, it does not crash or corrupt data.
  • simplicity

    The user interface and programming code are simple and easy to understand and don’t contain any unnecessary features.

    Byzanz does not attempt to be smarter than you are.
  • unobtrusiveness

    Byzanz does not interfere with the task you are recording, neither by keeping a large settings window around nor by consuming all your CPU during a recording.

Those goals are really useful, because they ensured I didn’t add features just because I could. For example, I’d really like to add visual clues for key presses or mouse clicks. But I couldn’t find a way to make this work simple and unobtrusive. And making people fiddle with settings before a recording to enable or disable the feature is bad.

But I added a feature: Byzanz can now not only record GIF images, but also Theora or Flash video. The web has changed in the last 5 years and supports video formats now, so it’s only following Byzanz’s design goals to add these formats. Fwiw, I only added Flash video because it’s the only lossless format. So if you wanna do post-processing in Pitivi (like adding text bubbles), you wanna use the Flash format.

I also updated the UI: It asks for the filename before starting the recording, so it can save animations to your FTP while you record. And I added a bunch of niceties like remembering the last filename. Repeating a recording that doesn’t look quite right takes 3 clicks: Click the record button in the panel, return (to select the same file), return (to confirm overwriting). Nifty, that.

So, if you have a jhbuild: Get the shiny new Byzanz 0.2.0.

Syndicated 2009-08-30 21:54:10 from Swfblag

semantic desktop

I named this post “Tracker” first, as I started writing from that perspective, but the problems I’m about to talk about are more related to what is called the “semantic desktop” and not specific to Tracker, which is just the GNOME implementation of that idea.
This post is a collection of my thoughts on this whole topic. What I originally wanted to do was improve Epiphany’s history handling. Epiphany still deletes your history after 10 days for performance reasons. When people suggested Tracker, I started investigating it, both for this purpose and in general.

How did this all start?

It gained traction when people realized that a lot of places on the desktop refer to the same things, but they all do it incompatibly. (me, me, me, me, me and I might be in your IRC, mail program and feed reader, too.) So they set out to change it. Unfortunately, such a change would have required changes to almost all applications. And that is hard. An easier approach is to just index all files and collect the information from them without touching the applications. And thus, Tracker and Beagle were born and competed on doing just that.
However, indexing has lots of problems. Not only do you need to support all those ever-changing file formats, you also need to do the indexing. And that takes lots of CPU and IO and is duplicated work and storage. So the idea was born to instead write plugins that grab the information from the applications while they are running.
But still, people weren’t convinced, as the only things they got from this is search tools, even if they automatically update. And their data is still duplicated.

What’s a semantic desktop anyway?

Well, it’s actually quite simple. It’s just a bunch of statements in the form <subject> <predicate> <object> (called triples), like “Evolution sends emails”. Those statements come complete with a huge spec and lots of buzzwords, but it’s always just about <subject> <predicate> <object>.
Unfortunately, statements on their own don’t help you a whole lot; there’s a huge difference between “Evolution sends emails” and “I send emails”. You need a dictionary (called an ontology). The one used by Tracker is the Nepomuk ontology.
And when you have lots of triples stored according to your ontologies, then you can query them (using SPARQL). See Philip’s posts (1, 2) for an intro.
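To make triples and queries concrete, here is a toy in-memory triple store in Python. This is purely an illustration of the concept; Tracker is a real triple store queried via SPARQL, and none of the names below are its API.

```python
# A toy triple store: statements are <subject> <predicate> <object> tuples,
# and a query is just a pattern where None acts as a wildcard.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def match(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the pattern; None is a wildcard."""
        return sorted(t for t in self.triples
                      if subject in (None, t[0])
                      and predicate in (None, t[1])
                      and obj in (None, t[2]))

store = TripleStore()
store.add("Evolution", "sends", "emails")
store.add("Empathy", "sends", "instant messages")

# The toy equivalent of a SPARQL query like: SELECT ?s WHERE { ?s :sends :emails }
email_senders = [s for s, p, o in store.match(predicate="sends", obj="emails")]
print(email_senders)
```

The whole semantic desktop idea boils down to this shape of data, plus an agreed-upon ontology so that everyone’s “sends” means the same thing.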

So why is that awesome?

If all your data is stored this way, you can easily access information from other applications without having to parse their formats. And you can easily talk to the other applications about that data. So you can have a button in evolution for IM or one in empathy to send emails. Implementing something like Wave should be kinda trivial.
And of course, you get an awesome search.

No downsides?

Of course there are downsides. For a start, one has to agree on the ontologies: How should all the data be represented? (Do we use a model like OOXML, ODF or HTML for storing documents?) Then there also is the question of security. (Should all apps be able to get all passwords? Or should everyone be able to delete all data?) It’s definitely not easy to get right.

How does Tracker help?

Tracker tries to solve 2 problems: It tries to supply a storage and query daemon for all the data (called a triple store) and it tries to provide the infrastructure for indexing files. The storage backend makes sense. Its architecture is sound, it’s fast and you can send useful queries its way. It has a crew of coders developing it that know their stuff. So yes, it’s the thing you want to use. Unless you don’t buy in to the semantic desktop hype.

What about the indexing?

Well, the whole idea of indexing the files on my computer is problematic. The biggest problem I have is that the Tracker people lack the expertise to know what data to index and how. It doesn’t help a whole lot if Tracker parses all JPEG files in the Pictures/ folder when the real data is stored in F-Spot. It doesn’t help a whole lot when you have Empathy and Evolution plugins that both have a contact named Kelly Hildebrand, but you don’t know if they’re talking about the same person. You just end up with a bunch of unrelated data.

There was something about hype?

Yeah, the semantic desktop has been an ongoing hype for a while without showing great results. Google Desktop is the best example; Beagle was supposed to be awesome but hasn’t had a release for a while, let alone Dashboard; and it hasn’t caught on in GNOME either, even though we have been talking about it for more than 3 years.
But then, Nokia still builds on Tracker for Harmattan, and the Zeitgeist team tries to collect data from multiple applications and make use of it in innovative ways. People are definitely still trying. But it’s not clear to me that anyone has figured out a way to make use of it yet.

Now, do I want to use it in my application or not?

Tracker is not up to the quality standards people are used to from GNOME software. It’s an exciting and rapidly changing code base with lots of downright idiotic behaviors - like crashing when it accidentally takes more than 80MB of memory while parsing a large file - and unsolved problems. Some parts don’t compile, the API is practically undocumented and the dependency list is not small (at least if you wanna hack on it). It also ships a tool to delete your database. Which is nice for debugging, but somewhat like shipping a tool that does rm -rf ~. In summary, I am reminded of the GStreamer 0.7 or Gtk 1.3 days: products with solid foundations and a potentially bright future ahead, but not there yet. So it’s at best beta quality. I’d call it alpha.
There is an active development team, but that team is focused on the next Maemo release and not on desktop integration. This is quite important, because it means that the development focus will probably be on new applications for the Maemo platform and not on porting old applications (in particular GNOME ones) to use Tracker. And that in turn means there will not be deeper desktop integration. Unless someone comes up and works on it.

So I don’t want to use Tracker?

The idea of the semantic desktop has great potential. If every application makes its data available to every other application in a common data store that everybody agrees on, you can get very nice integration of that data. But that requires that it’s not treated as an add-on that crawls the desktop itself, but that applications start using Tracker as their exclusive primary data store. Until EDS is just a compatibility frontend for Tracker, it’s not there yet.
So if you use Tracker, you will not have to port your application to use it in the future, when GNOME requires it. You also get rid of the Save button in your application and gain automatic backup, crash recovery and full text search. But if you don’t use Tracker, you don’t save your data in unfinished software, and you don’t have to rip it out when Nokia (or whoever) figures out that Tracker is not the future.

Conclusion

I have no idea if Tracker or the semantic desktop is the right idea. I don’t even know if it is the right idea for a new Epiphany history backend. It’s probably roughly the same amount of work I have to do in both cases. But I’m worried about its (self)perception as an add-on instead of as an integral part of every application.

Syndicated 2009-07-23 19:34:55 from Swfblag

treeview tips

While doing my treeview related optimizations, I encountered quite a few things that help improve the performance of tree views. (Performance here means both CPU usage and responsiveness of the application.) So in case you’re using tree views or entry completions and have issues with their performance, these tips might help. I often found them to be noticeable for tree views with more than 100 rows, though they were usually measurable before that. With 1000 rows or more they help a lot.

Don’t use GtkTreeModelSort

While GtkTreeModelSort is a very convenient way to implement sorting for small amounts of data, it has quite a bit of overhead, both with handling inserts and deletions. If you’re using a custom model, try implementing GtkTreeSortable yourself. When doing so, you can even use your own sort functions to make sorting really fast. If you are using GtkTreeModelFilter, you should hope this bug gets fixed.

Use custom sort functions whenever possible

As sorting a large corpus of data calls the sort function at least O(N*log(N)) times, it’s a very performance-critical function. The default sort functions operate on the GValues and use gtk_tree_model_get(), which copies the values on every call. For strings, it even uses g_utf8_collate(). This is very bad for sort performance. So especially when sorting strings, it’s vital to have the collated strings available in your model and avoid using gtk_tree_model_get() to get good performance with thousands of rows.
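The same idea can be sketched in Python, with locale.strxfrm() standing in for g_utf8_collate_key(): compute the collation key once per row and keep it next to the data, instead of collating inside the comparator on every one of the O(N*log(N)) compares. (This is an illustration of the principle, not GTK code; in C you would store the key in an extra model column.)

```python
import functools
import locale

rows = ["pear", "apple", "Banana", "cherry"]

# Slow variant: collate inside the comparator, so strxfrm runs on every
# single comparison - like calling g_utf8_collate() per compare.
def collate_cmp(a, b):
    ka, kb = locale.strxfrm(a), locale.strxfrm(b)
    return (ka > kb) - (ka < kb)

slow_sorted = sorted(rows, key=functools.cmp_to_key(collate_cmp))

# Fast variant: compute the key exactly once per row, cache it alongside
# the row, and sort with plain byte-wise comparisons on the cached key.
keyed = sorted((locale.strxfrm(r), r) for r in rows)
fast_sorted = [r for _key, r in keyed]

assert fast_sorted == slow_sorted  # same order, far fewer collations
```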

Do not use gtk_tree_view_column_set_cell_data_func()

Unless you know exactly what you are doing, use gtk_tree_view_column_set_attributes(). Cell data functions have to fulfill some assumptions (that probably aren’t documented) and are very performance sensitive. The implicit assumption is that they set identical properties on the cell renderer for the same iter - until the iter is changed with gtk_tree_model_row_changed(). I’ve seen quite a few places with code like this:

/* FIXME: without this the tree view looks wrong. Why? */
gtk_widget_queue_resize (tree_view);

The performance problem is that this function is called at least once for every row when figuring out the size of the model, and again whenever a row needs to be resized or rendered. So better don’t do anything that takes longer than 0.5ms. Certainly don’t load icons here or anything like this…

Don’t do fancy stuff in your tree model’s get_value function

This applies when you write your own tree model. The same restrictions as for the cell data functions apply here. Getting this wrong causes the same problems as in the last paragraph, only more so, as this function is more low-level; it gets called O(N*log(N)) times when sorting, for example. Luckily there’s a simple way around it: cache the values in your model, that’s what it’s supposed to do. And don’t forget to call gtk_tree_model_row_changed() when a value changes.
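A minimal sketch of that caching pattern, in Python. The names (CachingModel, expensive_render) are made up for illustration and do not mirror the GtkTreeModel C API:

```python
def expensive_render(text):
    """Stand-in for slow per-row work such as collation or markup parsing."""
    return text.upper()

class CachingModel:
    def __init__(self, raw_rows):
        self.raw = list(raw_rows)
        self._cache = {}

    def get_value(self, index):
        # Called O(N*log(N)) times while sorting - so it must stay cheap.
        if index not in self._cache:
            self._cache[index] = expensive_render(self.raw[index])
        return self._cache[index]

    def set_value(self, index, text):
        self.raw[index] = text
        # Invalidate the cached value - the moral equivalent of calling
        # gtk_tree_model_row_changed() when a value changes.
        self._cache.pop(index, None)

model = CachingModel(["foo", "bar"])
print(model.get_value(0))   # computed once
print(model.get_value(0))   # served from the cache
```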

Batch your additions

This mostly applies when writing your own tree model or when sorting. If you want to add lots of rows, it is best to first add all the rows, then make sure all the values are correct, then resort the model and only then emit the gtk_tree_model_row_inserted() signal for all the rows. When not writing a custom model, use gtk_list_store_insert_with_values() and gtk_tree_store_insert_with_values(). These functions do all of this already.
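The ordering can be sketched like this in Python; BatchedModel and its methods are illustrative names, not the GtkTreeModel API:

```python
class BatchedModel:
    def __init__(self):
        self.rows = []
        self.emitted = []            # stand-in for emitted row signals

    def row_inserted(self, index):
        self.emitted.append(index)   # a view would react (and redraw) here

    def insert_batch(self, new_rows):
        added = set(new_rows)
        self.rows.extend(new_rows)   # 1. add all the rows first
        self.rows.sort()             # 2. resort the model exactly once
        for i, row in enumerate(self.rows):
            if row in added:         # 3. only now notify the view
                self.row_inserted(i)

model = BatchedModel()
model.insert_batch(["cherry", "apple", "banana"])
print(model.rows)      # sorted once, not once per inserted row
print(model.emitted)   # [0, 1, 2]
```

The point is that the view never sees an intermediate, unsorted state, and the model is resorted once per batch instead of once per row.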

Fix sizes of your cell renderers if possible

This one is particularly useful for tree views with only one column. If you know the width and height in advance, you can use gtk_cell_renderer_set_fixed_size() to fix the size of the cell renderer in advance. This will cause the tree view to not call the get_size function of the cell renderer for every row. And this will make text columns in particular a lot quicker, because there’s no need to lay out the text to compute its size anymore.
A good example for where this is really useful is GtkEntryCompletion when displaying text. As the width of the entry completion is known in advance (as big as the entry you are completing on) and the height of a cell displaying text is always the same, you can use this code on the cell renderer:

gtk_cell_renderer_set_fixed_size (text_renderer, 1, -1);
gtk_cell_renderer_text_set_fixed_height_from_font (text_renderer, 1);

Now no get_size functions will be called at all and the completion popup will still look the same.

And with these tips, there should be no reason to not stuff millions of rows into a tree view and be happy. :)

Syndicated 2009-07-10 17:35:14 from Swfblag

Neat hacks

I’ve been doing optimization tasks recently. The first one is covered on the gtk mailing list and bugzilla. It involves a filechooser, many thousands of files and deleting a lot of code. But I’m not gonna talk about it now.

What I’m gonna talk about started with a 37MB Firefox history from my Windows days and Epiphany. Epiphany deletes all history after 10 days, because according to its developers the location bar autocomplete becomes so slow it’s unusable. As I had configured the history on Windows to never delete anything, I had a large corpus to test with. So I converted the 100,000 entries into Epiphany’s format and started the application. That took 2 minutes. Then I started entering text. When the autocomplete should have popped up, the application started taking 100% CPU. After 5 minutes I killed it. Clearly, deleting all old history entries is a good thing in its current state.
So I set out to fix it. I wrote a multithreaded incremental sqlite history backend that populates the history on demand. I had response times of 2-20ms in my command-line tests, which is clearly acceptable. However, whenever I tried to use it in a GtkEntryCompletion, the app started “stuttering”: keys I pressed didn’t immediately appear on screen, the entry completion lagged behind. And all that without taking a lot of CPU or loading stuff from disk or anything noticeable in sysprof. It looked exactly like on this 6MB gif - when you let it finish loading at least. And that’s where the neatest of all hacks came in.

That hack is documented in bug 588139 and is what you see scrolling past in the terminal in the background. It tells you which callbacks in the gtk main loop take too long. So if you have apps that seem to stutter, you can download the patch, apply it to glib, LD_PRELOAD the lib you just built and you’ll get prints of the callbacks that take too long. I was amazed at how useful that small hack is. Because it told me immediately what the problem was. And it was something I had never thought would be an issue: redraws of the tree view.
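The idea behind the hack can be sketched as a Python decorator: time each callback and complain when it blocks the main loop for too long. This is only an illustration of the technique; the actual patch hooks into GLib’s main loop dispatch via LD_PRELOAD, and the threshold below is made up:

```python
import functools
import time

SLOW_THRESHOLD = 0.05   # seconds; illustrative, not the patch's actual cutoff

def warn_if_slow(callback):
    """Wrap a main-loop callback and report it when it runs too long."""
    @functools.wraps(callback)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = callback(*args, **kwargs)
        elapsed = time.monotonic() - start
        if elapsed > SLOW_THRESHOLD:
            print("slow callback %s: %.1f ms"
                  % (callback.__name__, elapsed * 1000.0))
        return result
    return wrapper

@warn_if_slow
def redraw_tree_view():
    time.sleep(0.1)     # simulate an expensive redraw

redraw_tree_view()      # triggers a "slow callback" warning
```

Latency bugs like this are invisible to sampling profilers that measure CPU time, which is exactly why logging wall-clock time per callback is so effective.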

And that gets me back to the beginning and my filechooser work, because stuff there was sometimes still a bit laggy, too. So I ran it with this tool, and the same functions were showing up. It looks like a good idea to look into tree view redraws and why they can take half a second.

Syndicated 2009-07-09 19:36:03 from Swfblag

vision

It’s early in the morning, I get up. During breakfast I check the happenings on the web on my desktop - my laptop broke again. People blog about the new Gnumeric release that just hit the repositories. I click the link and it starts. I play around in it until I need to leave.
On the commute I continue surfing the web on my mobile. I like the “resume other session” feature. Turns out there’s some really cute features in the new Gnumeric that I really need to try.
Fast forward, I’m at the university. Still some time left before the meeting, so I can play. I head to the computer facilities in the lab, log in and start Gnumeric. Of course it starts up just as I left it at home: same settings, same version, same document.
20 minutes later, I’m in the meeting. As my laptop is broken, I need to use a different computer for my slides. I grab a random one and do the usual net login. It looks just like at home. 3 clicks, and the right slides are up.

This is my vision of how I will be using my computer in the future. (The previous paragraph sounds awkward, but describing ordinary things always does, doesn’t it?) So what are the key ingredients here?

The key ingredient is ubiquity. My stuff is everywhere. It’s of course on my desktop computer and my laptop. But it’s also on every other computer in the world, including Bill Gates’ cellphone. I just need to log in. It will run on at least as many devices as current web apps like GMail do today. And not only does GMail run everywhere, but my GMail runs everywhere: It has all my emails and settings available all the time. I want a desktop that is as present as GMail.

So here’s a list of things that need to change in GNOME to give it ubiquity:

  • I need my settings everywhere

    Currently it is a hard thing to make sure one computer has the same panel layout as another one. I bet at least 90% of the people configure their panels manually and download their background image again when they do a new install. So roughly every 6 months. Most people don’t even know that copying the home directory from the old setup solves these issues. But I want more than that. Copying the home directory is a one-time operation but stuff should be continually synchronized. In GNOME, we could solve most of this already by making gconf mirror the settings to a web location, say gconf.gnome.org and sync whenever we are connected.

  • I need my applications everywhere

    I guess you know what a pain it is when you’ve set up a new computer and all the packages are missing. Gnumeric isn’t installed, half the files miss thumbnailers, bash tells you it doesn’t know all those often-used commands and no configure script finds all pkg-config files. Why doesn’t the computer just install the files when it needs them? Heck, you could probably implement it right now by mounting /usr with NFS from a server that has all packages installed. Say x86.jaunty.ubuntu.com? Add a huge 15GB offline cache on my hard disk using some sort of unionfs. Instantly all the apps are installed and run without me manually installing them. As a bonus, you get rid of nagging security update dialogs.

  • I really need my applications everywhere

    Have you ever noticed that all our GNOME applications only work on a desktop computer? GNOME Mobile has been busy duplicating all our applications for mobile devices: different browser, different file manager, different music player. And of course those mobile applications use other methods to store settings. So even if the settings were synced perfectly, it wouldn’t help against having to configure again. Instead of duplicating the applications, could we please just adapt the user interfaces?

When looking at this list, and comparing it to the web, it’s obvious to me why people prefer the web as an application delivery platform, even though the GNOME APIs are way nicer. The web gets all of those points right:

  • My settings are stored on some server, so they are always available.
  • Applications are installed on demand - without even asking the users. The concept is called “opening a web page”.
  • Device ubiquity is the only thing the web doesn’t get perfect by default. But a) the browser developers are hard at work ensuring that every web site works fine on all devices, no matter the input method or the screen size and b) complex sites can and do ship user interfaces adapted to different devices.

The web has its issues, too - like offline availability and performance - but it gets these steps very right.

So how do we get there? I don’t think it’s an important question. It’s not even an interesting question to me. The question that is interesting to me is: Do we as GNOME want to go there? Or do we keep trying to work on our single-user oriented desktop platform?

Syndicated 2009-05-12 18:47:11 from Swfblag

decision-making

I’ve recently been wondering about decision-making, namely how decisions are made. I had 5 questions in particular that I was thinking about where I could not quite understand why they worked out the way they did and was unhappy with the given reasons. The questions were:

  • Why does GNOME switch to git?
  • Should the GNOME browser use Webkit or Mozilla?
  • Why has GNOME not had any significant progress in recent years but only added polish?
  • Why did Apple decide to base their browser on KHTML and not on Mozilla?
  • What is Google Chrome trying to achieve - especially on Linux?

If you have answers to any of these questions, do not hesitate to tell me, but make sure they are good answers, not the standard ones.

Because the first thing I realized was that all of these questions have been answered, usually in a lot of technical detail providing very good reasons for why a particular decision was taken. But when reading about the GNOME git migration it struck me: There were equally good technical reasons to switch to almost any other version control system. And I thought I knew the reasons for the git switch, but they were not listed. So are good technical reasons completely worthless? It turns out that for the other questions, there are equally good technical reasons for the other answer: There were good reasons for Apple to base their browser on Mozilla or buy Opera, for GNOME to be more innovative or for Google to not develop a browser or invest in Mozilla, just none of them are usually listed. Plus, I knew that if different people had decided on the GNOME vcs switch, we would not be using git now.

This led me to think that decisions are personal, not technical. People decide what they prefer and then use or make up good technical reasons for why that decision was taken. But this in turn means technical reasons are only a justification for decisions. Or even worse: they distract others from the real reasons. And that in turn means: don’t look at the given reasons, look at the people.

But apparently people will not tell you the real reasons. Why not? What were my reasons for preferring git as a GNOME vcs? I definitely did not do a thorough investigation of the options, I didn’t even really use anything but git. I just liked the way git worked. And there were lots of other people using it, so I could easily ask them questions. So in the end I prefer git because I used it first and others use it. Objectively, those are not very convincing reasons. But I believe those are the important ones for other people as well. I think GNOME switched to git because more hackers, in particular the “important” ones, wanted git. Why did it not happen earlier? I think that’s because a) they wanted to hack and not sysadmin their vcs of choice and b) they needed a good reason for why git is better than bzr to not get into arguments, and Behdad’s poll gave them one. So there, git won because more people want it. Why do more people want it? Probably because it’s the vcs they used first and there was no reason to move away. The only technical reason in here was “good enough”.

I don’t think technical reasons are completely unimportant. They are necessary to make an option reasonable. But once an option is a viable one, the decision process is likely not driven by the technical reasons anymore. After that it gets personal. With that in mind, things like Enron or the current bailout don’t seem that unbelievable anymore. And a lot of decisions suddenly make a lot more sense.

And because I asked the other questions above, let me give you the best answers I currently have for them.

  • First of all, GNOME should use Webkit and not Mozilla, because the GNOME people doing the actual work want to use that one. There’s a lot of people that prefer Mozilla, but they don’t do any work, so they don’t - and shouldn’t - matter. This was the easy question.
  • GNOME does not have any innovation because GNOME people - both hackers and users - don’t want any. Users want to use the same thing as always and most GNOME hackers these days are paid to ensure a working desktop. And they’re not students anymore so they prefer a paycheck to the excitement of breaking things.
  • For Safari I don’t have a satisfactory answer. I’m sure though it wasn’t because it was a “modern and standards compliant web browser”. Otherwise they would have just contributed to KHTML. Also, the Webkit team includes quite a few former Mozilla developers, who should feel more confident in the Mozilla codebase. So I guess again, this is personal.
  • For Google Chrome it’s the same: I don’t have a real clue. Google claims it wants to advance the state of the art in browsers, but this sounds backwards. Chrome would need a nontrivial market share to make it possible for their webapps to target the features of Chrome. Maybe it’s some people inside Google that wrote a browser as their 20% project. Maybe Google execs want more influence on the direction the web is going. I don’t know.

Syndicated 2009-04-01 10:41:46 from Swfblag
