Older blog entries for mdz (starting at number 31)

Ubuntu and Qt

I like to think that in the Ubuntu project, we’re pragmatic about technology. This means keeping an open mind, considering alternatives, and evaluating them objectively. It means bearing in mind the needs of the user, and measuring ourselves based on how well we solve their problems (not merely our own).

It is in this spirit that I have been thinking about Qt recently. We want to make it fast, easy and painless to develop applications for Ubuntu, and Qt is an option worth exploring for application developers. In thinking about this, I’ve realized that there is quite a bit of commonality between the strengths of Qt and some of the new directions in Ubuntu:

  • Qt has a long history of use on ARM as well as x86, by virtue of being popular on embedded devices. Consumer products have been built using Qt on ARM for over 10 years. We’ve been making Ubuntu products available for ARM for nearly two years now, and 10.10 supports more ARM boards than ever, including reference boards from Freescale, Marvell and TI. Qt is adding ARMv7 optimizations to benefit the latest ARM chips. We do this in order to offer OEMs a choice of hardware, without sacrificing software choice. Qt preserves this same choice for application developers.
  • Qt is a cross-platform application framework, with official ports for Windows, Mac OS X and more, and experimental community ports to Android, the iPhone and WebOS. Strong cross-platform support was one of the original principles of Qt, and it shows in the maturity of the official ports. With Ubuntu Light being installed on computers with Windows, and Ubuntu One landing on Android and the iPhone, we need interoperability with other platforms. There is also a large population of developers who already know how to target Windows, who can reach Ubuntu users as well by choosing Qt.
  • Qt has a fairly mature touch input system, which now has support for multi-touch and gestures (including QML), though it’s only complete on Windows 7 and Mac OS X 10.6. Meanwhile, Canonical has been working with the community to develop a low-level multi-touch framework for Linux and X11, for the benefit of Qt and other toolkits. These efforts will eventually meet in the middle.
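
As an illustration of the gesture support mentioned above, here is a rough sketch of handling a pinch gesture from Python, assuming PyQt4 built against Qt 4.6 or later (the widget and its behavior are invented for illustration):

    import sys
    from PyQt4 import QtCore, QtGui

    class PinchableCanvas(QtGui.QWidget):
        """A widget which zooms its contents in response to a pinch gesture."""

        def __init__(self):
            QtGui.QWidget.__init__(self)
            self.zoom = 1.0
            # Opt in to pinch gestures; Qt synthesizes them from touch input
            self.grabGesture(QtCore.Qt.PinchGesture)

        def event(self, e):
            if e.type() == QtCore.QEvent.Gesture:
                pinch = e.gesture(QtCore.Qt.PinchGesture)
                if pinch is not None:
                    self.zoom *= pinch.scaleFactor()
                    self.update()
                    return True
            return QtGui.QWidget.event(self, e)

        def paintEvent(self, e):
            painter = QtGui.QPainter(self)
            painter.scale(self.zoom, self.zoom)
            painter.drawText(20, 40, "Pinch to zoom")

    app = QtGui.QApplication(sys.argv)
    canvas = PinchableCanvas()
    canvas.show()
    sys.exit(app.exec_())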

Overall, I think Qt has a lot to offer people who want to develop applications for (and on) Ubuntu, particularly now. It already powers popular cross-platform applications like VLC, not to mention the entire Kubuntu distribution. I missed it when this happened last year, but Qt is now available under either the LGPL 2.1 or the GPL 3.0, which should make it suitable for virtually any Ubuntu application. It has strong commercial backing as well as a large developer community. No single solution will meet all developers’ needs, of course, and Ubuntu supports multiple toolkits and frameworks for this reason, but Qt seems like a great tool to have in our toolbox for the road ahead.


Syndicated 2010-10-20 10:08:29 from We'll see | Matt Zimmerman

Tips for frequent international travel

I travel pretty regularly, about 35% of the time so far in 2010. When it goes wrong, travel can be exhausting, frustrating, complicated, stressful and even debilitating. I’m always looking for ways to make my trips run more smoothly. On a recent flight to Taipei, I wrote down a few of the techniques which I’ve successfully put into practice and found helpful. This is not an exhaustive list; I’ve omitted a lot of the common and obvious tips I’ve seen elsewhere.

  1. Make a packing list. This one may be obvious, but a lot of people neglect it. Perhaps they think making lists is boring and fussy, but really, it isn’t. Without a packing list, it’s easy to forget to do the things which will make your trip better. Use it every time, and bring a copy with you (or store it online) so you can add the things you wish you had brought or done. A simple, ever-improving packing list is the most effective technique I have found for making travel less stressful and more enjoyable.
  2. Carry a water bottle with a tight-fitting lid. I use a 32oz Nalgene bottle, which fits nicely into the seat next to me or under an armrest, and gives me enough water for even the longest flights. I fill it up after passing through security, at a cafe, bar or lounge, and generally decline the beverages offered by the cabin crew. Staying hydrated helps me feel better during the flight, and leaves me with less malaise when I arrive. I don’t need to manage a tray table or armrest full of cups and other debris, so I can sit more comfortably, with the tray table folded away.
  3. Consolidate essential items using multipurpose equipment. For example, invest in a power adapter which has USB sockets onboard, and carry USB cables instead of wall chargers. Versatile items like this save on space and weight. I can charge two devices this way, but the equipment is smaller and lighter than even a single wall charger.
  4. Learn how to sleep on an airplane. Getting some sleep on a long flight really helps to offset the effects of traveling. There are several resources out there with practical advice on how to do it. One thing which really helped me was to buy a high quality eye mask which blocks out all of the light in the cabin. The one I use looks a little funny and is not cheap, but is very comfortable and effective. It’s made of memory foam with a soft, washable cover and works much better than the ones the airlines give away for free. I no longer bother with a neck pillow, and use the flaps built into the seat to lean my head against. I’m surprised at how many people don’t know about this common aircraft feature: virtually every long-haul seat has something like this, even in economy, though it may not be obvious how to use it.
  5. Buy duplicates of things like toiletries, and keep them in your travel kit so you don’t need to pack your everyday items (and risk forgetting them) each time. The less packing you need to do, the less time it will take, and the less opportunity there is for mistakes. This also saves time unpacking when you get home, and lets you buy a smaller size of the item where available.
  6. Optimize border crossings. Carry the forms you’ll need for customs, immigration, etc. in your carry-on. They don’t always provide them at the counter or on board the plane, and it’s a hassle to rush to fill in the form at the last minute. If you have a few of them with you, you can fill them out early (perhaps even before you fly) and then hustle to the front of the queue. For countries you enter frequently (especially your home country), programs like Global Entry (US) and IRIS (UK) will save you a lot of time by allowing you to use an automated kiosk to cross the border.

Syndicated 2010-10-03 17:33:48 from We'll see | Matt Zimmerman

Traveling at home

For me, the most enjoyable part of traveling is the inspiration that I derive from visiting different places, talking to people, and generally being outside of my normal environment. This bank holiday weekend, when so many Londoners visit faraway lands, my partner and I stayed in London instead, and I sought inspiration closer to home. The city has been delightfully quiet, and in contrast to the preceding week, the weather was mostly pleasant, apart from the sudden downpours the BBC described as “squally showers”.

Photo of deer in Richmond Park

Photo credit: Márcio Cabral de Moura


We spent Saturday afternoon in Richmond Park, a 2500-acre nature preserve easily accessible via public transport from London. The plentiful oak trees, fallow deer, and various species of water fowl made it easy to forget the city for a while. Having visited a few times on foot, I think it would be fun to cycle next time, and see different areas of the park.

Afterward, we had dinner at a tapas restaurant in Parsons Green which offered notably excellent service as well as good food. By this time, it was nearly 7:00pm, and we took a chance on getting last-minute theatre tickets to see Jeff Goldblum and Mercedes Ruehl in Neil Simon’s The Prisoner of Second Avenue. We arrived at the theatre just in time for the show, which was not sold out, and in fact had quite reasonable seats available. The show had several good laughs, holding up fairly well nearly 40 years after the original Broadway production.

Photo of the exhibition at the Design Museum

Photo credit: Gary Bembridge


On Sunday, we visited the Design Museum for the first time. Having been disappointed by the nearby Fashion and Textile Museum, our expectations were not too high, but it turned out to be very worthwhile. The Brit Insurance Designs of the Year exhibition showcased designs from architecture, fashion, furniture, transport and more. Some of my favorites were:
  • Pachube, a system for sharing real-time sensor data and fostering a community around its uses
  • Grassworks, a line of flat-pack, self-assembled furniture constructed entirely from bamboo, without glue or fasteners
  • The Gocycle, a lightweight (16kg) electric bicycle for city dwellers
  • The Eyewriter, a low-cost eye tracking system powered by open source software
  • The Land Glider, a small (1×3 meters), enclosed electric vehicle which maintains stability by leaning into turns
  • Analog Digital, a clock which is operated by a person covering and revealing segments using paint
  • BMW GINA, a fabric-skinned shape-shifting car concept

I was delighted to see that there were a half dozen or so exhibits which related to open source software.

Even including the theatre tickets, it was a very inexpensive holiday compared to traveling overseas, and generated a lot less CO2. I was more than satisfied with the inspiration available within a relatively small radius. I don’t think I’ll give up traveling, as I really enjoy seeing friends who live far away, but I think I’ll be more inclined to stay home during peak travel times and enjoy local activities.


Syndicated 2010-08-30 13:17:38 from We'll see | Matt Zimmerman

DebConf 10: Last day and retrospective

DebConf continued until Saturday, but Friday the 6th was my last day as I left New York that evening. I’m a bit late in getting this summary written up.

Making Debian Rule, Again (Margarita Manterola)

Marga took a bold look at the challenges facing Debian today. She says that Debian is perceived to be less innovative, out of date, difficult to use, and shrinking as a community. She called out Ubuntu as the “elephant in the room”, which is “‘taking away’ from Debian.” She insists that she is not opposed to Ubuntu, but that nonetheless Ubuntu is to some extent displacing Debian as a focal point for newcomers (both users and contributors).

Marga points out that Debian’s work is still meaningful, because many users still prefer Debian, and it is perceived to be of higher quality, as well as being the essential basis for derivatives like Ubuntu.

She conducted a survey (about 40 respondents) to ask what Debian’s problems are, and grouped them into categories like “motivation” and “communication” (tied for the #1 spot), “visibility” (#3, meaning public awareness and perception of Debian) and so on. She went on to make some suggestions about how to address these problems.

On the topic of communication, she proposed changing Debian culture by:

  • Spreading positive messages, celebrating success
  • Thanking contributors for their work
  • Avoiding escalation by staying away from email and IRC when angry
  • Treating every contributor with respect, “no matter how wrong they are”

This stimulated a lot of discussion, and most of the remaining time was taken up by comments from the audience. The video has been published, and offers a lot of insight into how Debian developers perceive each other and the project. She also made suggestions for the problems of visibility and motivation. These are crucial issues for Debian devotees to be considering, and I applaud Marga for her fortitude in drawing attention to them. This session was one of the highlights of this DebConf, and catalyzed a lot of discussion of vital issues in Debian.

Following her talk, there was a further discussion in the hallway which included many of the people who commented during the session, mostly about how to deal with problematic behavior in Debian. Although I agreed with much of what was said, I found it a bit painful to watch, because (ironically) this discussion displayed several of the characteristic “people problems” that Debian seems to have:

  • Many people had opinions, and although they agreed on many things, agreement was rarely expressed openly. Sometimes it helps a lot to simply say “I agree with you” and leave it at that. Lending support, rather than adding a new voice, helps to build consensus.
  • People waited for their turn to talk rather than listening to the person speaking, so the discussion didn’t build momentum toward a conclusion.
  • The conversation got louder and more dense over time, making it difficult to enter. It wasn’t argumentative; it was simply loud and fast-paced. This drowned out people who weren’t as vocal or willful.
  • Even where agreement was apparent, there was often no clear action agreed. No one had responsibility for changing the situation.

These same patterns have been easily observed on Debian mailing lists for the past 10+ years. I exhibited them myself when I was active on these lists. This kind of cultural norm, once established, is difficult to intentionally change. It requires a fairly radical approach, which will inevitably mean coping with loss. In the case of a community, this can mean losing volunteer contributors who cannot let go of this norm, and that is an emotionally difficult experience. However, it is nonetheless necessary to move forward, and I think that Debian as a community is capable of moving beyond it.

Juxtaposition

Given my history with both Debian and Ubuntu, I couldn’t help but take a comparative view of some of this. These problems are not new to Debian, and indeed they inspired many of the key decisions we made when founding the Ubuntu project in 2004. We particularly wanted to foster a culture which was supportive, encouraging and welcoming to potential contributors, something Debian has struggled with. Ubuntu has been, quite deliberately, an experiment in finding solutions to problems such as these. We’ve learned a lot from this experiment, and I’ve always hoped that this would help to find solutions for Debian as well.

Unfortunately, I don’t think Debian has benefited from these Ubuntu experiments as much as we might have hoped. A common example of this is the Ubuntu Code of Conduct. The idea of a project code of conduct predates Ubuntu, of course, but we did help to popularize it within the free software community, and this is now a common (and successful) practice used by many free software projects. The idea of behavioral standards for Debian has been raised in various forms for years now, but never seems to get traction. Hearing people talk about it at DebConf, it sometimes seemed almost as if the idea was dismissed out of hand because it was too closely associated with Ubuntu.

I learned from Marga’s talk that Enrico Zini drafted a set of Debian Community Guidelines over four years ago in 2006. It is perhaps a bit long and structured, but is basically excellent. Enrico has done a great job of compiling best practices for participating in an open community project. However, his document seems to be purely informational, without any official standing in the Debian project, and Debian community leaders have hesitated to make it something more.

Perhaps Ubuntu leaders (myself included) could have done more to nurture these ideas in Debian. At least in my experience, though, I found that my affiliation with Ubuntu almost immediately labeled me an “outsider” in Debian, even when I was still active as a developer, and this made it very difficult to make such proposals. Perhaps this is because Debian is proud of its independence, and does not want to be unduly influenced by external forces. Perhaps the initial “growing pains” of the Debian/Ubuntu relationship got in the way. Nonetheless, I think that Debian could be stronger by learning from Ubuntu, just as Ubuntu has learned so much from Debian.

Closing thoughts

I enjoyed this DebConf very much. This was the first DebConf to be hosted in the US, and there were many familiar faces that I hadn’t seen in some time. Columbia University offered an excellent location, and the presentation content was thought-provoking. There seemed to be a positive attitude toward Ubuntu, which was very good to see. Although there is always more work to do, it feels like we’re making progress in improving cooperation between Debian and Ubuntu.

I was a bit sad to leave, but was fortunate enough to meet up with Debian folk during my subsequent stay in the Boston area as well. It felt good to reconnect with this circle of friends, and I hope to see them again soon.

Looking forward to next year’s DebConf in Bosnia


Syndicated 2010-08-25 16:57:03 from We'll see | Matt Zimmerman

DebConf 10: Day 3

How We Can Be the Silver Lining of the Cloud (Eben Moglen)

Eben’s talk was on the same topic as his Internet Society talk in February, which I had downloaded and watched some time ago. He challenges the free software community to develop the software to power the “freedom box”, a small, efficient and inexpensive personal server.

Such a system would put users more in control of their online lives, give them better protection for it under the law, and provide a platform for many new federated services.

It sounds like a very interesting project, which I’d like to write more about.

Statistical Machine Learning Analysis of Debian Mailing Lists (Hanna Wallach)

Hanna is bringing together her interests in machine learning and free software by using machine learning techniques to analyze publicly available data from free software communities. In doing so, she hopes to develop tools for studying the patterns of collaboration, innovation and other behavior in these communities.

Her methodology uses statistical topic models, which infer the topic of a document based on the occurrence of topical words, to group Debian mailing list posts by topic. Her example analyzed posts from the debian-project and debian-women mailing lists, inferring a set of topics and categorizing all of the posts according to which topic(s) were represented in them.

Using this data, she could plot the frequency of discussion of each topic over time, which revealed interesting patterns. The audience quickly zeroed in on practical applications for things like flamewar and troll detection.
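
For a rough idea of the technique, here is a toy version in Python, assuming the gensim library (the tokenized posts are invented; real preprocessing would strip quoting, signatures and stop words):

    from gensim import corpora, models

    # Each mailing list post, reduced to a bag of words
    posts = [
        "release freeze upload archive migration".split(),
        "debconf talk schedule video sponsorship".split(),
        "upload archive queue release migration".split(),
    ]
    dictionary = corpora.Dictionary(posts)
    corpus = [dictionary.doc2bow(post) for post in posts]

    # Infer two latent topics, then report each post's topic mixture
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)
    for bow in corpus:
        print(lda[bow])  # e.g. [(0, 0.93), (1, 0.07)]

Aggregating these per-post topic mixtures by date gives exactly the kind of topic-frequency-over-time view described above.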

Debian Derivatives BoF (Matt Zimmerman)

I organized this discussion session to share perspectives on Debian derivatives, in particular how we can improve cooperation between derivatives and Debian itself. The room was a bit hard to find, so attendance was relatively small, but this turned out to be a plus. With a smaller group, we were able to get acquainted with each other, and everyone participated.

Unsurprisingly, there were many more representatives from Ubuntu than other derivatives, and I was concerned that Ubuntu would dominate the discussion. It did, but I tried to draw out perspectives from other derivatives where possible.

On the whole, the tone was positive and constructive. This may be due in part to people self-selecting for the BoF, but I think there is a lot of genuine goodwill between Debian and Ubuntu.

Stefano Zacchiroli took notes in Gobby during the session, which I expect he will post somewhere public when he has a chance.


Syndicated 2010-08-05 15:10:07 from We'll see | Matt Zimmerman

DebConf 10: Day 2

Today was the first day of DebConf proper, where all of the sessions were aimed at project participants.

Bits from the DPL (Stefano Zacchiroli)

Stefano delivered an excellent address to the Debian project. As Project Leader, he offered a perspective on how far Debian has come, raised some of the key questions facing Debian today, and challenged the project to move forward and improve in several important ways.

He asked the audience: Is Debian better than other distributions? Is Debian still relevant? Why/how?

Having asked this question on identi.ca and Twitter recently, he presented a summary. There was a fairly standard list of technical concerns, but also:

  • A focus on quality, as defined by Debian’s highly modular approach. Each package maintainer is an expert on the software they package, and Debian as a whole offers a superior repository of packages.
  • The principles of software freedom, as embodied in Debian’s Social Contract. The Debian community’s current interpretation is a purist one, and Stefano cited the elimination of non-free firmware as a milestone in the upcoming Squeeze release. I wonder, though, how many of the audience, tapping away on WiFi-connected laptops, were able to do so without such firmware.
  • The project’s independent status, supported by donations and volunteers, which empowers it to make its own decisions, free of external impositions.
  • Debian’s ability to make decisions, as embodied in the constitution. This happens mostly through do-ocracy (individuals are empowered to decide questions concerning their own work), though larger scope issues are decided democratically. This one evoked a bit of a chuckle, as decision making in Debian is not always perceived as fully effective.

He pointed out some areas which he would like to see improve, including:

  • Developers accepting shared responsibility for the release as a whole. Making one’s own packages ready for release is necessary, but not sufficient. He cited evidence that the culture around NMUs is changing: historically, due to the do-ocratic system mentioned above, Debian developers have been somewhat territorial about their packages, and non-maintainer uploads were seen as stepping on their toes. However, recent experiments have indicated that this may no longer be the case, and Stefano encouraged more developers to help each other through NMUs.
  • When making decisions, we should seek consensus, not unanimity. In a project with thousands of contributors, whose operations are open to the public, there will never be unanimous support for a proposal, and seeking unanimity leads to stalled decisions.
  • In order to gain more contributors, Debian needs to welcome new and inexperienced contributors, as well as users (who can grow into contributors). He suggested reaching out to derivatives to find more of both. He decried the conventional wisdom that a “thick skin” should be a prerequisite for joining the project, pointing out that this attitude simply leads to fewer contributors. This point was met with applause by the DebConf audience.

All in all, I thought this was an accurate, timely and inspirational message for the project, and the talk is worth watching for any current or prospective contributor to Debian.

Debian Policy BoF (Russ Allbery)

Russ facilitated a discussion about the Debian policy document itself and the process for managing it. He has recently put in a lot of time working on the backlog (down from 160+ to 120), but this is not sustainable for him, and help is needed.

There was a wide-ranging discussion of possible improvements including:

  • Editing the policy manual so that it is more readable start to finish as a document, rather than a reference
  • Creating a closer linkage between lintian and the policy manual, so that best practices from lintian get documented, and policy changes are accompanied by new checks
  • Separating the normative and informative parts of the policy manual

There was also some discussion in passing of the long-standing confusion (presumably among people new to the project) with regard to how policy is established. In Debian, best practices are first implemented in packages, then documented in policy (not the reverse). Sometimes, improvements are suggested at the policy level, when they need to start elsewhere. I’m not very familiar with how the policy manual is maintained at present, but listening to the discussion, it sounded like it might help to extend the process to include the implementation stage. This would allow standards improvements to be tracked all the way through from concept, to implementation, to documentation.

The Java Packaging Nightmare (Torsten Werner)

Torsten described the current state of Java packaging in Debian and the general problems involved, including licensing issues, build system challenges (e.g. maven) and dependency management. His slides were information-dense, so I didn’t take a lot of notes.

His presentation inspired a lively discussion about why upstream developers of Java applications and libraries often do not engage with Debian. Suggested reasons included:

  • They are not interested in Linux as a target platform
  • Although their code is released under a free license, they are not interested in meeting Debian standards for freedom and license correctness
  • They use Java because it is cross-platform, and so do not want to concern themselves with platform-specific issues
  • Because Java applications are easy to download and run manually, they perceive relatively little value in the Debian packaging system

Collaboration between Ubuntu and Debian (Jorge Castro)

Jorge talked about the connections between Debian and Ubuntu, how people in the projects perceive each other, and how to foster good relationships between developers.

He talked about past efforts to quantify collaboration between the projects, but the focus is now on building personal relationships. There were many good questions and comments afterward, and I’m looking forward to the Debian derivatives BoF session tomorrow to get into more detail.

Tonight is the traditional wine and cheese party. When this tradition started, I was one of just a handful of people in a room with some cheese and paper plates, but it’s now a large social gathering with contributions of cheese and wine from around the world. I’m looking forward to it.


Syndicated 2010-08-02 23:23:24 from We'll see | Matt Zimmerman

DebConf 10: Day 1

This week, I am attending DebConf 10 at Columbia University in New York.

The first day of DebConf is known as Debian Day. While most of DebConf is for the benefit of people involved in Debian itself, Debian Day is aimed at a wider audience, and invites the public to learn about, and interact with, the Debian project.

These are the talks I attended.

Debian Day Opening Plenary (Gabriella Coleman, Hans-Christoph Steiner)

Hans-Christoph discussed Debian and free software from a big picture perspective: why software freedom matters, challenging the producer/consumer dichotomy, how the Debian ecosystem hangs together, and so on.

Steps to adopting F/OSS in government (Andy Oram)

Andy discussed FLOSS adoption in governments, drawing on examples from Peru, the city of Munich, and the state of Massachusetts. He covered the reasons why this is valuable, the relationship between government transparency and software freedom, and practical advice for successful adoption and deployment.

Pedagogical Freedom (panel, Jonah Bossewitch et al)

The panelists discussed the use of technology in education, especially free software, some of the parallels between free software and education, and what these communities could learn from each other. This is a promising topic, though the perspectives seemed to be mostly from the education realm. There is much to be learned on both sides.

Google Summer of Code 2010 at Debian (Obey Arthur Liu)

This talk covered the student projects for this year’s Summer of Code. Most of the students were in attendance, and presented their own work. They ranged from more specialized projects like the Hurd installer, to core infrastructure improvements like multi-arch in APT.

Beyond Sharing: Open Source Design (Mushon Zer-Aviv)

Mushon gave an excellent talk on open design. This is a subject I’ve thought quite a bit about, and he validated many of my conclusions from a different angle. I’ve added a new post to my todo list to go into more detail on this subject.

Some points from his talk which resonated with me:

  • When collaborating on code, everyone must reason with one collaborator: the computer. This forces a level playing field and a common encoding.
  • Collaborating on other types of creative work is more difficult, in part because of differences in how different individuals encode and decode information
  • Making this easier for design work requires improving motivational factors and language as well as tools and processes
  • Many design decisions are actually rational, and are compatible with a group consensus process. Too often, I hear that design can’t be done collaboratively, citing “too many cooks in the kitchen” analogies, but I have never believed it.
  • Mushon’s own project, shiftspace.org, seems to be a browser-plugin-based system for collaboratively remixing web applications. I haven’t looked at it yet.
  • Leadership and openness are not mutually exclusive. This is another pet peeve of mine, and there are so many examples of open leadership in the free software community that I don’t see how anyone can think otherwise.
  • Mushon’s presentation is available in revision control so that it can be freely used and improved

How Government can Foster Freedom in Technology (Hon. Gale Brewer)

Councillor Brewer paid a visit to DebConf to tell us about the work she is doing on the city council to promote better government through technology.

Brewer seems to be a strong advocate of open data, saying essentially that all government data should be public. She summarized a bill to mandate that New York City government data be public, shared in raw form using open standards, and kept up to date. It sounded like a very strong move which would encourage third party innovation around the data.

She also discussed the need for greater access to computers and Internet connectivity, particularly in educational settings, and a desire to have all public hearings and meetings shared online.

Why is GNU/Linux Like a Player Piano? (Jon Anderson Hall, Esq.)

Jon is a very engaging speaker. He drew parallels between the development of player pianos, reproducing pianos, reed organs, pipe organs…and free software. He even tied in Hedy Lamarr’s work which led to spread spectrum wireless technology. To be quite honest, I did not find that these analogies taught me much about either free software or player pianos, but nonetheless, I couldn’t help but take an interest in what he was saying and how he presented it.

DebConf Opening Plenary (Gabriella Coleman)

Biella and company explained all the ins and outs of the event: where to go, what to do (and not do), and most importantly, whom to thank for all of it. Now in its 11th year, DebConf is an impressively well-run conference.

I’m looking forward to the rest of the week!


Syndicated 2010-08-02 00:50:44 from We'll see | Matt Zimmerman

Embracing the Web

The web offers a compelling platform for developing modern applications. How can free software benefit more from web technology, and at the same time promote more software freedom on the web? What would the world be like if FLOSS web applications were as plentiful and successful as traditional FLOSS applications are today?

Web architecture

The web, as a collection of interlinked hypertext documents available on the Internet, has been well established for over a decade. However, the web as an application architecture is only just hitting its stride. With modern tools and frameworks, it’s relatively straightforward to build rich applications with browser-oriented frontends and HTTP-accessible backends.
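
As a sketch of what the backend half of such an application can look like, here is a minimal HTTP/JSON service using only the Python standard library of the era (the endpoint and payload are invented for illustration):

    import json
    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # A single JSON endpoint; a browser-oriented frontend would fetch
        # this with XMLHttpRequest and render the result client-side
        body = json.dumps({"message": "hello from the backend"})
        start_response("200 OK", [("Content-Type", "application/json"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    make_server("localhost", 8000, application).serve_forever()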

This architecture has its limitations, of course: browser compatibility nightmares, limited offline capabilities, network latency, performance challenges, server-side scalability, complicated multimedia story, and so on. Most of these are slowly but surely being addressed or ameliorated as web technology improves.

However, for a large class of applications, these limitations are easily outweighed by the advantages: cross-platform support, instantaneous upgrades, global availability, etc. The web enables developers to reach the largest audience of users with the most compelling functionality, and simplifies users’ lives by giving them immediate access to their digital lives from anywhere.

Some web advocates would go so far as to say that if an application can be built for the web, it should be built for the web because it will be more successful. It’s no surprise that new web applications are being developed at a staggering rate, and I expect this trend to continue.

So what?

This trend represents a significant threat, and a corresponding opportunity, to free software. Relatively few web applications are free software, and relatively few free software applications are built for the web. Therefore, the momentum which is leading developers and users to the web is also leading them (further) away from free software.

Traditionally, pragmatists have adopted free software applications because they offered immediate gratification: it’s much faster and easier to install a free software application than to buy a proprietary one. The SaaS model of web applications offers the same (and better) immediacy, so free software has lost some of its appeal among pragmatists, who instead turn to proprietary web applications. Why install and run a heavyweight client application when you can just click a link?

Many web applications—perhaps even a majority—are built using free software, but are not themselves free. A new generation of developers share an appreciation for free software tools and frameworks, but see little value in sharing their own software. To these developers, free software is something you use, not something you make.

Free software cannot afford to ignore the web. Instead, we should embrace the web more completely, more powerfully, and more effectively than proprietary systems do.

What would that look like?

In my view, a FLOSS client platform which fully embraced the web would:

  • treat web applications as first-class citizens. The web would not be just another application, represented by a browser, but more like a native application runtime. Web applications could feel much more “native” while still preserving the advantages of a web-style user experience. There would be no web browser: that’s a tool for legacy systems to run web applications within a compatibility environment.
  • provide a seamless experience for developers to build web applications. It would be as fast and easy to develop a trivial client/server web application as it is to write “Hello, world!” in PyGTK using Quickly (see the sketch after this list). For bonus points, it would be easy to develop and run web applications locally, and then deploy directly to a PaaS or IaaS cloud.
  • empower the user to manage their applications and data regardless of where they are hosted. Traditional operating systems act as a connecting fabric for local applications, providing a shared namespace, file store and IPC mechanisms, but web applications are lacking this. The web’s security model requires that applications are thoroughly sandboxed from each other, but a mediating operating system could connect them in meaningful ways, just as web browsers store cookies and passwords for various websites while protecting them from each other.
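
For reference, the desktop baseline mentioned in the list above really is tiny; “Hello, world!” in PyGTK amounts to:

    import gtk

    window = gtk.Window()
    window.set_title("Hello, world!")
    window.connect("destroy", gtk.main_quit)
    window.add(gtk.Label("Hello, world!"))
    window.show_all()
    gtk.main()

A platform which truly embraced the web would make the client/server equivalent, including deployment, comparably terse.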

Imagine a world where free web applications are as plentiful and malleable as free native applications are today. Developers would be able to branch, test and submit patches to them.

What about Chrome OS?

Chrome OS is a step in the right direction, but doesn’t yet realize this vision. It’s a traditional operating system which is stripped down and focused on running one application (a web browser) very, very well. In some ways, it elevates web applications to first-class status, though its paradigm is still fundamentally that of a web browser.

It is not designed for development, but for consuming the web. Developers who want to create and deploy web applications must use a more traditional operating system to do so.

It does not put the end user in control. On the contrary, the user is almost entirely dependent on SaaS applications for all of their needs.

Although it is constructed using free software, it does not seem to deliver the principles or benefits of software freedom to the web itself.

How?

Just as free software was bootstrapped on proprietary UNIX, the present-day web is fertile ground for the development of free web applications. The web is based on open standards. There are already excellent web development tools, web application frameworks and server software which are FLOSS. Leading-edge web browsers like Firefox and Chrome/Chromium, where much web innovation is happening today, are already open source.

This is a huge head start toward a free web. I think what’s missing is a client platform which catalyzes the development and use of FLOSS web applications.


Syndicated 2010-07-26 10:43:52 from We'll see | Matt Zimmerman

Read, listen, or comprehend: choose two

I have noticed that when I am reading, I cannot simultaneously understand spoken words. If someone speaks to me while I am reading, I can pay attention to their voice, or to the text, but not both. It’s as if these two functions share the same cognitive facility, and this facility can only handle one task at a time. If someone is talking on the phone nearby, I find it very difficult to focus on reading (or writing). If I’m having a conversation with someone about a document, I sometimes have to ask them to pause the conversation for a moment while I read.

This phenomenon isn’t unique to me. In Richard Feynman’s What Do You Care What Other People Think?, there is a chapter entitled “It’s as Simple as One, Two, Three…” where he describes his experiments with keeping time in his head. He practiced counting at a steady rate while simultaneously performing various actions, such as running up and down the stairs, reading, writing, even counting objects. He discovered that he “could do anything while counting to [himself]—except talk out loud”.

What’s interesting is that the pattern varies from person to person. Feynman shared his discovery with a group of people, one of whom (John Tukey) had a curiously different experience: while counting steadily, he could easily speak aloud, but could not read. Through experimenting and comparing their experiences, it seemed to them that they were using different cognitive processes to accomplish the task of counting time. Feynman was “hearing” the numbers in his head, while Tukey was “seeing” the numbers go by.

Analogously, I’ve met people who seem to be able to read and listen to speech at the same time. I attributed this to a similar cognitive effect: presumably some people “speak” the words to themselves, while others “watch” them. Feynman found that, although he could write and count at the same time, his counting would be interrupted when he had to stop and search for the right word. Perhaps he used a different mental faculty for that. Some people seem to be able to listen to more than one person talking at the same time, and I wonder if that’s related.

I was reminded of this years later, when I came across this video on speed reading. In it, the speaker explains that most people read by silently voicing words, which they can do at a rate of only 120-250 words per minute. However, people can learn to read visually instead, and thereby read much more quickly. He describes a training technique which involves reading while continuously voicing arbitrary sounds, like the vowels A-E-I-O-U.

The interesting part, for me, was the possibility of learning. I realized that different people read in different ways, but hadn’t thought much about whether one could change this. Having learned a cognitive skill, like reading or counting time, apparently one can re-learn it a different way. Visual reading would seem, at first glance, to be superior: not only is it faster, but I have to use my eyes to read anyway, so why tie up my listening facility as well? Perhaps I could use it for something else at the same time.

So, I tried the simple technique in the video, and it had a definite effect. I could “feel” that I wasn’t reading in the same way that I had been before. I didn’t measure whether I was going any faster or slower, because I quickly noticed something more significant: my reading comprehension was completely shot. I couldn’t remember what I had read, as the memory of it faded within seconds. Before reaching the end of a paragraph, I would forget the beginning. It was as if my ability to comprehend the meaning of the text was linked to my reading technique. I found this very unsettling, and it ruined my enjoyment of the book I was reading.

I’ll probably need to separate this practice from my pleasure reading in order to stick with it. Presumably, over time, my comprehension will improve. I’m curious about what net effect this will have, though. Will I still comprehend it in “the same” way? Will it mean the same thing to me? Will I still feel the same way about it? The many levels of meaning are connected to our senses as well, and “the same” idea, depending on whether it was read or heard, may not have “the same” meaning to an individual. Even our tactile senses can influence our judgments and decisions.

I also wonder whether, if I learn to read visually, I’ll lose the ability to read any other way. When I retrained myself to type using a Dvorak keyboard layout, rather than QWERTY, I lost the ability to type on QWERTY at high speed. I think this has been a good tradeoff for me, but raises interesting questions about how my mind works: Why did this happen? What else changed in the process that might have been less obvious?

Have you tried re-training yourself in this way? What kind of cognitive side effects did you notice, if any? If you lost something, do you still miss it?

(As a sidenote, I am impressed by Feynman’s exuberance and persistence in his personal experiments, as described in his books for laypeople. Although I consider myself a very curious person, I rarely invest that kind of physical and intellectual energy in first-hand experiments. I’m much more likely to research what other people have done, and skim the surface of the subject.)


Syndicated 2010-07-12 12:57:39 from We'll see | Matt Zimmerman

We’ve packaged all of the free software…what now?

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications.

This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I’m occasionally surprised to find something very useful which isn’t packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the “long tail” of niche software is generally packaged very effectively.

Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision, for a free software operating system which is highly modular and customizable, has been achieved.
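
The depth of this integration shows in the tooling: any package’s dependency graph can be introspected uniformly. A small sketch, assuming the python-apt bindings:

    import apt

    cache = apt.Cache()
    pkg = cache["hello"]  # any package name will do
    candidate = pkg.candidate
    print("%s %s" % (pkg.name, candidate.version))
    for dep in candidate.dependencies:
        # Each dependency is a list of alternatives ("foo | bar")
        print("  depends: " + " | ".join(d.name for d in dep.or_dependencies))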

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn’t quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

  • Embedded systems need to be pared down to the essentials to minimize storage, distribution, computation and maintenance costs. Standardized packaging introduces excessive code, data and interdependency which make the system larger than necessary. Tight integration makes it difficult to bootstrap the system from scratch for custom hardware. Projects like Embedded Debian aim to adapt the Debian system to be more suitable for use in these environments, to varying degrees of success. Meanwhile, smart phones will soon become the most common type of computer globally.
  • Data, in contrast to software, has simple requirements. It just needs to be up to date and accessible to programs. Packaging and distributing it through the standardized packaging process is awkward, doesn’t offer tangible benefits, and introduces overhead. There have been extensive debates in Debian about how to handle large data sets. Meanwhile, this problem is becoming increasingly important as data science catalyzes a new wave of applications.
  • Client/server and other types of distributed applications are notoriously tricky to package. The packaging system works within the context of a single OS instance, and so relationships which span multiple OS instances (e.g. a server application which depends on a database running on another server) are not straightforward. Meanwhile, the web has become a first-class application development platform, and this kind of interdependency is extremely common on both clients and servers.
  • Cross-platform applications such as Firefox, Chromium and OpenOffice.org have long struggled with packaging. In order to be portable, they tend to bundle the components they depend on, such as libraries. Packagers strive for normalization, and want these applications to use the packaged versions of these libraries instead. Application developers build, test and ship one set of dependencies, but their users receive a different stack when they use the packaged version of the application. Developers on both sides are in constant tension as they expect their configuration to be the canonical one, and want it to be tightly integrated. Cross-platform application developers want to provide their own, application-specific cross-platform update mechanism, while distributions want to use the same mechanism for all their components.
  • Virtual appliances aim to combine application and operating system into a portable bundle. While a modular OS is definitely called for, appliances face some of the same problems as embedded systems as they need to be minimized. Furthermore, the appliance becomes a component in itself, and requires metadata, distribution mechanisms and so on. If someone wants to “install” a virtual appliance, how should that work? Packaging them up as .debs doesn’t make much sense for the same reasons that apply to large data sets. I haven’t seen virtual appliances really taking off, but I expect cloud to change that.
  • Runtime libraries for languages such as Perl, Python and Ruby provide their own packaging systems, which manage dependencies and other metadata, installation, upgrades and removal in a standardized way. Because these operate independently of the OS package manager, all sorts of problems arise. Projects such as GoboLinux have attempted to tie them together, to varying degrees of success. Meanwhile, each new programming language we invent comes with a different, incompatible package manager, and distribution developers need to spend time repackaging them into their preferred format.

Why are we stuck?

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
– Abraham Maslow

The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it “fits” into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job.

Various attempts have been made to extend the packaging concept to make it more general, for example:

  • Portage, of Gentoo fame, offers impressive flexibility by building packages with a custom configuration, tailored for the needs of the target system.
  • Conary, from rPath, offers finer-grained dependencies, powerful revision control and object-oriented build recipes.
  • Nix provides a consistent build and runtime environment, ensuring that programs are run with the same dependencies used to build them, by keeping the relevant versions installed. I don’t know much about it, but it sounds like all dependencies implicitly refer to an exact version.

Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers, operating at various levels, which solve different problems.

Most of these systems suffer from an important fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them.

This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won’t account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

  • Decouple applications from the platform. Debian packaging is an excellent solution for managing the network of highly interdependent components which make up the core of a modern Linux distribution. It falls short, however, for managing the needs of modern applications: fast-moving, cross-platform and client/server (especially web). Let’s stop trying to fit these square pegs into round holes, and adopt a different solution for this space, preferably one which is comprehensible and useful to application developers so that they can do most of the work.
  • Treat data as a service. It’s no longer useful to package up documentation in order to provide local copies of it on every Linux system. The web is a much, much richer and more effective solution to that problem. The same principle is increasingly applicable to structured data. From documents and contacts to anti-virus signatures and PCI IDs, there’s much better data to be had “out there” on the web than “down here” on the local filesystem.
  • Simplify integration between packaging systems in order to enable a heterogeneous model. When we break the assumption that everything is a package, we will need new tools to manage the interfaces between different types of components. Applications will need to introspect their dependency chain, and system management tools will need to be able to interrogate applications. We’ll need thoughtfully designed interfaces which provide an appropriate level of abstraction while offering sufficient flexibility to solve many different packaging problems. There is unarguably a cost to this heterogeneity, but I believe it would easily outweigh the shortcomings of our current model.
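
To illustrate the kind of interface I mean (this is purely hypothetical; no such format exists today), an application might declare its dependencies across heterogeneous packaging systems in a single manifest, with a mediating tool dispatching each entry to the appropriate backend:

    import json

    # A hypothetical manifest: each dependency names the system which owns it
    manifest = json.loads("""
    {
      "name": "example-webapp",
      "depends": [
        {"system": "debian", "package": "python", "version": ">= 2.6"},
        {"system": "pypi",   "package": "simplejson"},
        {"system": "data",   "source": "http://example.com/pci.ids",
         "refresh": "weekly"}
      ]
    }
    """)

    for dep in manifest["depends"]:
        target = dep.get("package") or dep.get("source")
        print("dispatching %s to the %s backend" % (target, dep["system"]))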

But I like things how they are!

We don’t have a choice. The world is changing around us, and distributions need to evolve with it. If we don’t adapt, we will eventually give way to systems which do solve these problems.

Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users.

These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I’ve presented here are only one possible way forward, and I’m sure there are more and better ideas brewing in distribution communities. I’m sure that I’m not the only one thinking about these problems.

Whatever it looks like in the end, I have no doubt that change is ahead.


Syndicated 2010-07-06 15:31:59 from We'll see | Matt Zimmerman
