Thoughts on the world of GUIs

Posted 15 Feb 2001 at 08:05 UTC by jono

Since I got into Linux, I have really been concentrating my efforts on the world of the Linux desktop. OK... I admit it, I came to Linux as a Windows user, but I don't see that as a bad thing.

KDE was one of the first projects that captured my imagination, and it has since given me the enthusiasm to learn C++, learn the Qt and KDE libraries, and learn the fine art of communicating with different people from different backgrounds in a professional manner. The crux of what I want to say in this article is my observations as a developer for a desktop for Linux (and other UNIXes too, of course). Oh, and these opinions are mine by the way, so if you disagree and want to flame someone, please don't go and have a go at someone else.

Desktops for Linux sure are a funny thing, and there seem to be a number of issues on which people either want to express an opinion or they don't. An example is the X Window System. In my opinion X is a dog. It is a hefty chunk of software which sits under the desktop software and slows the system down quite a lot. X does have many advantages, such as network transparency, but I ask the question "does the feature set of X warrant its weight on the system?". I am by no means an expert on X and I do not wish to profess to be, but I get the impression that X is an old technology that is having trouble being dragged into the modern age. Examples are anti-aliased fonts and support for TrueType fonts; only recently have these features made it into X. I feel development of X needs to speed up on implementing features such as these, and on optimising the system.

The problem with people is that they compare. People naturally compare Windows and KDE, or Windows and GNOME, and although technical people can say to them, "look, you can't compare them as they are not comparable", the fact is that they do. If people are going to compare Windows and KDE or Windows and GNOME or whatever, and they notice that the Linux desktop is slower than the Windows desktop, they are not going to leave with a good impression.

I personally feel that the direction of future desktop development needs to head towards the framebuffer. The last time I checked (ages ago) the framebuffer was not accelerated and was quite slow at high resolutions. Once it is accelerated, and with GTK and Qt being ported to the framebuffer, we may see forked desktops being developed for the framebuffer. Who knows?

With so much going on in the Linux world it is often hard to keep up. Although the framebuffer looks promising, so does DGA in X. Also, with the rapid and cheap advances in technology, I feel people may even get desensitized towards performance issues. If you have a 700-processor 10GHz Athlon with 20GB of RAM, who cares about performance?

The aim of this article is not to say one thing is better than another, but to make statements that provoke discussion. I hope my text causes people to look into which path is the way to go.

And before I go, let me tell you my motivation for being into Linux and helping the effort: the people. I have found Linux people (and KDE people in particular) to be the most friendly, enthusiastic bunch of people around. It is this environment that makes me enthuse about it myself, and I find working with such a talented bunch of people across all subsets of the Linux community a joy. Admittedly I also find some Linux people to be the most anal I have met; most are cool though. If I want to stay away from the analites, I just avoid Slashdot... ;-)


X just isn't sexy..., posted 15 Feb 2001 at 09:50 UTC by dirtyrat » (Journeyer)

"I feel development of X needs to speed up on implementing features such as this, and optimising the system. "

I suspect that the number of developers on either KDE or GNOME far outweighs the number of developers working on X. Changes in X have very little visible effect: if I code a nicer widget for GTK or Qt, everyone can see what I've done; if I port X to the framebuffer, it doesn't look any different.

X isn't inherently slow; you can buy one of the commercial X servers and by all accounts they are much faster than XFree86. They are, however, commercial and can employ full-time programmers to work on them. I'm not bashing XFree86, but developing X just doesn't seem to have the appeal of developing GUI apps and desktop environments: therefore there are fewer hands working on the project and so its development is going to be less hectic.

X is more than ok, posted 15 Feb 2001 at 15:38 UTC by RyanMuldoon » (Journeyer)

Every once in a while, people tend to start complaining about how slow and bloated X is. It really isn't - X is pretty fast. It was designed for computers that are only a little more powerful than the Compaq iPaq handheld - X runs on the iPaq without problems as well. Probably the biggest reason that apps don't seem as fast on X (as compared to Windows) is that drivers are nowhere near as optimised in most cases. On Windows, you'll find mature drivers that were developed by OEMs. X is starting to get that, but it will take a bit of time. I'm sure some of X itself can be further optimised, but that doesn't mean we should abandon it and start all over again. It would get things done faster if everyone who wanted to replace X instead started working on improving XFree86. I can see the motivation behind Berlin, and it does seem like an interesting project. But claiming that we can do everything on the framebuffer is silly. It just doesn't have enough features to handle running a desktop environment on top of it.

Reply to RyanMuldoon, posted 15 Feb 2001 at 16:13 UTC by jono » (Master)

I can see what you are saying to a certain extent, but I also disagree in some areas. Fair enough that X is slower due to drivers, but those drivers are part of the X distribution for getting a drawing engine up and running, and although you say this is not X's fault, the fact that they are in the X distribution means they are part of X.

As for asking people who complain about X to work on it themselves, this is a bit of an unfair request. If my stereo doesn't work, I don't go out and spend months learning how to build and fix stereos - I turn to a pre-existing stereo enthusiast instead. What I am trying to say is that I have no interest in developing X, and I would say this is the case for the majority of X users. Just because non-X developers have an opinion or gripe about it, it does not mean they should fix it. It is best for non-X developers to express their thoughts so the X developers know what people think.

Finally, as for X not being as bloated as people think it is, I think this is subjective. You say it runs on the iPaq; well it does, but a lot of the X distribution is cut out due to space limitations, and it is optimised for the iPaq. I am sure if you 100% tweak X for a particular machine it will run fast, but at present this is not the case. Maybe this will change, I don't know, but at the moment all I can see is X in its current form, and my past two to three years' experience of X's development.

Direct Framebuffer and DGA, posted 15 Feb 2001 at 16:42 UTC by pphaneuf » (Journeyer)

Ah, the good olde "we want direct access to the hardware"... I was starting to miss that one! ;-)

Direct access to hardware sure sounds sexy, but it isn't everything. What you actually want is just to have all of this go fast, and rather strangely, in this era of hardware-accelerated everything, separating the client from the server as much as possible is what we need to go fast.

What X is lacking is abstraction. Nowadays, accelerated hardware operates in strange ways, and things that previously were not accelerated (and were never thought likely to be) now are. Lacking the abstraction to express what a client really wants to do, it has to resort to painting it itself in an XImage and sending it over to the server.

If you want to draw two anti-aliased lines between the corners of a large window, you have to draw them locally and send over a huge number of pixels that you didn't even touch (or incur a lot of command traffic by sending what you did change rectangle by rectangle). Also, you would always be using your own software rendering, even if the display hardware could do the job.

If instead you could just tell the X server what you actually wanted, two anti-aliased lines with such and such endpoints and characteristics, you'd have only two commands and you'd get what you want. If the hardware supports the feature, you get hardware-accelerated anti-aliased lines. If it doesn't, you get an even better software implementation that has access to the framebuffer and can do whatever is appropriate to go as fast as possible with the particular display hardware the user has.
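To make the difference concrete, here is a rough Xlib sketch of the two paths. This is an illustration only, not how any particular toolkit works: core X has no anti-aliased line request, so plain XDrawLine stands in for the kind of high-level request a richer protocol would offer, and the client-side path assumes a 32-bits-per-pixel visual.

/* Rough sketch of the two paths described above, in plain Xlib. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdlib.h>

void draw_lines_client_side(Display *dpy, Window win, GC gc, int w, int h)
{
    /* Paint the diagonals into a local buffer (software rendering,
     * anti-aliasing could happen here)... */
    char *pixels = calloc((size_t)w * h, 4);    /* assumes 32 bpp */
    XImage *img = XCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
                               DefaultDepth(dpy, DefaultScreen(dpy)),
                               ZPixmap, 0, pixels, w, h, 32, 0);
    /* ...then push every pixel of the window over the wire,
     * including all the ones we never touched. */
    XPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h);
    XDestroyImage(img);                          /* also frees pixels */
}

void draw_lines_server_side(Display *dpy, Window win, GC gc, int w, int h)
{
    /* Two tiny protocol requests; the server (and possibly the hardware)
     * does the actual rasterisation. */
    XDrawLine(dpy, win, gc, 0, 0, w - 1, h - 1);
    XDrawLine(dpy, win, gc, 0, h - 1, w - 1, 0);
    XFlush(dpy);
}

The second function sends a few dozen bytes regardless of window size; the first sends the whole window's worth of pixels.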

For example, some video cards compress images sent over the PCI bus (and keep them in compressed form if they are not visible, uncompressing them on the fly as they are drawn to the framebuffer). If you ask for direct access to the video memory, you expose yourself to behind-the-scenes compression and decompression, much of which would be useless (because it could be done by the video hardware, which can manipulate the data in its "native" compressed form) and slow (any DirectX programmer who tried locking DirectDraw surfaces on an nVidia Riva TNT2 knows what I mean - very painful).

When you get access to the display hardware yourself, such as with the unaccelerated (or lightly accelerated) Linux framebuffer device and the DGA extension, you end up doing everything in software all the time, and in a generic manner that is just about never the optimal way to do it.

If you were to get a rich enough API for the Linux framebuffer device, you'd be just as well off with X anyway.
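For reference, this is roughly what "direct access" looks like on the Linux framebuffer device: a minimal sketch assuming a 32-bit truecolor mode on /dev/fb0, with essentially no error handling. Every drawing operation beyond this single pixel would be yours to implement, in generic software.

/* Minimal "direct access" sketch: mmap /dev/fb0 and plot one pixel. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* resolution, depth */
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* line length, memory size */

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Put a white pixel in the middle of the screen (assumes 32 bpp). */
    long offset = (var.yres / 2) * fix.line_length
                + (var.xres / 2) * (var.bits_per_pixel / 8);
    *(uint32_t *)(fb + offset) = 0xFFFFFFFF;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}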

Now, I would like to lose a few things from X too. Context switches I hate with a passion; if there were a way to avoid them, it would be great. Oh wait! There is one.

D11. Too bad it doesn't exist, but it really should.

Things that X sucks at (IMHO), posted 15 Feb 2001 at 17:06 UTC by pphaneuf » (Journeyer)

  • Context switching

    Nobody wants that (although looking at how popular multithreading is, I might be wrong on that count). Something as common as creating a top-level window involves no less than three processes!

  • Slow (and mandatory) transport

    Alleviated by the use of shared memory, but on a local machine the X server ought to be able to dig stuff out of a client's address space directly (or the reverse); a rough MIT-SHM sketch follows this list. Isn't that in part what the DRM kernel module (part of the Direct Rendering Infrastructure) is about?

  • Limited API

    Theoretically this could be improved with extensions, but we've seen little work done on it (until recently, that is, with Keith Packard's Render extension, which covers one area but leaves others, like 2D "twitch" games, quite uncovered).
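Here is the rough sketch of the shared-memory alleviation mentioned in the transport item, using the MIT-SHM extension. Error handling and the XShmQueryExtension check are omitted; this is an illustration, not production code.

/* Sketch: client and server share one image buffer via MIT-SHM, so
 * XShmPutImage avoids copying pixel data through the X socket. */
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

XImage *make_shared_image(Display *dpy, XShmSegmentInfo *shminfo,
                          int width, int height)
{
    int scr = DefaultScreen(dpy);
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, shminfo, width, height);

    /* One SysV shared memory segment, mapped by both client and server. */
    shminfo->shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                            IPC_CREAT | 0600);
    shminfo->shmaddr = img->data = shmat(shminfo->shmid, NULL, 0);
    shminfo->readOnly = False;
    XShmAttach(dpy, shminfo);
    XSync(dpy, False);
    return img;
}

void blit_shared_image(Display *dpy, Window win, GC gc, XImage *img,
                       int width, int height)
{
    /* Only a small request travels over the transport, not the pixels. */
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, width, height, False);
    XSync(dpy, False);
}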

Berlin, posted 15 Feb 2001 at 18:56 UTC by logic » (Journeyer)

A project called Berlin was started to address a few of the problems mentioned here. It's worth a look, if you're interested in helping develop a new windowing system. They have a comparison of X and Berlin here in their Wiki.

followup to Jono, posted 15 Feb 2001 at 21:39 UTC by RyanMuldoon » (Journeyer)

I disagree with your evaluation of what 'X' is. Drivers are components that plug into X - if they are bad, it does not mean that X in and of itself is. A gripe that I could understand, and fully agree with, is the fact that we have a confused graphics system on Linux (and other Unices, but my experience is mostly with Linux). If you want to fully support a graphics card on Linux, you need a framebuffer driver and an X driver, and if you have any other graphics display system, you need a driver for that too. This seems utterly redundant and a waste of developer resources. It would make a whole lot more sense if there were a single graphics subsystem that each of these display targets could use. The argument against this would be that placing graphics drivers at the kernel level is a bad idea, but we already see this with DRI/DRM and framebuffer drivers, so if anything, it would cut down on the driver count. So, what would make more sense (to me) would be to have a well-defined drawing API for everything, then allow drivers to supply optimized implementations, and then provide multiple display targets. I think I pretty much just described the GGI project.
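A hedged sketch of the shape of that "one drawing API, many display targets" idea follows. None of these names come from GGI or any real project; they only illustrate the design, with a stub backend standing in for a real framebuffer or X target.

/* Hypothetical sketch of a single drawing API with pluggable targets. */
#include <stdint.h>
#include <stdio.h>

/* The well-defined drawing API every display target would implement. */
struct gfx_target {
    const char *name;
    void (*fill_rect)(int x, int y, int w, int h, uint32_t color);
    void (*draw_line)(int x1, int y1, int x2, int y2, uint32_t color);
};

/* A stub backend standing in for a real target (framebuffer, X, ...).
 * A hardware-specific driver could swap in accelerated implementations. */
static void stub_fill_rect(int x, int y, int w, int h, uint32_t c)
{ printf("fill %dx%d at (%d,%d) color %06x\n", w, h, x, y, (unsigned)c); }
static void stub_draw_line(int x1, int y1, int x2, int y2, uint32_t c)
{ printf("line (%d,%d)-(%d,%d) color %06x\n", x1, y1, x2, y2, (unsigned)c); }

static struct gfx_target stub_target = { "stub", stub_fill_rect, stub_draw_line };

/* Applications (or toolkits like GTK and Qt) code only against the API,
 * never against a particular display system. */
static void draw_scene(struct gfx_target *t)
{
    t->fill_rect(0, 0, 640, 480, 0x000000);
    t->draw_line(0, 0, 639, 479, 0xffffff);
}

int main(void)
{
    draw_scene(&stub_target);
    return 0;
}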

But your fundamental claim that X needs to be replaced seems like a bad one. Really, we want to keep the network transparency now more than ever. Cool things can happen with that. Abstraction is important. More abstracted systems have a greater long-term possibility for optimization. Also, if you were to try to do what X does in a framebuffer, by the time you're done you'll have pretty much recreated X.

UI Design, posted 16 Feb 2001 at 03:40 UTC by Mulad » (Apprentice)

Bah... All praise the almighty Google, which put this link on top for the query "user interface design": User Interface Design for Programmers

X is not the problem, posted 16 Feb 2001 at 06:44 UTC by jc » (Master)

X does not make your desktop applications slow. Xfree86 does not make your desktop applications slow. Network transparency in X does not make your desktop applications slow.

When do you even notice that your desktop applications are slow? Although I use GNOME, not KDE, I feel confident making the following statements. When you click a button there is no lag. When you open a menu there is no lag. When you open a new window of an existing application there is damn near no lag. Repainting even a complex window is damn near instantaneous. Scrolling a window is smooth and lag-free.

So why do you feel that your desktop is sluggish? Because there is a delay measurable in seconds when you open a new application. There are many reasons for this, none of which have to do with X. Your application probably has to load several shared libraries. It probably has to search several places for config files. The widget set you use probably has to search for and parse several config files. Your application may have to establish communication with a CORBA or other ORB. Your application may have to check 200+ plugins and parse 100 Scheme scripts. Your application may initialize all of its subdialogs during its startup sequence. All of this is usually done before the first window of your application is ever drawn on screen. This is the source of the sluggishness you feel on your desktop.
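As a toy illustration of the last point, here is a small C sketch of moving an expensive subdialog construction from startup to first use. It is not tied to any toolkit; the dialog type and its constructor are made up, with sleep() standing in for the real cost.

/* Toy sketch: lazy versus eager construction of an expensive subdialog. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Stand-in for an expensive subdialog constructor (parsing resources,
 * loading pixmaps, building widget trees, ...). */
struct dialog { const char *title; };

static struct dialog *create_preferences_dialog(void)
{
    sleep(1);                                   /* simulate the real cost */
    struct dialog *d = malloc(sizeof *d);
    d->title = "Preferences";
    return d;
}

static struct dialog *prefs;                    /* built on demand */

/* Lazy version: the cost moves from startup to the first click on
 * "Preferences", where a short delay is far less noticeable. */
static void on_preferences_clicked(void)
{
    if (prefs == NULL)
        prefs = create_preferences_dialog();
    printf("showing %s\n", prefs->title);
}

int main(void)
{
    /* An eager startup would call create_preferences_dialog() (and every
     * other constructor) right here, before any window appears. */
    on_preferences_clicked();                   /* first use pays the cost */
    on_preferences_clicked();                   /* later uses are free */
    return 0;
}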

It is common to optimize application startup performance in the commercial software world, but I have not yet noticed much of it in the free software world. Of course it probably is going on and I'm just not paying attention.

Why _APPLICATIONS_ are slow, posted 16 Feb 2001 at 17:39 UTC by cbbrowne » (Master)

It's not X that is normally the cause of slowness, nor is it the network, nor, very often, is it the transport layer.

What tends to be slow, in practice, are things like the following:

  • GUI libraries are often implemented badly and in bloated ways.

    Motif is the usual casualty of such comments, but Gnome and KDE are not immune from such criticism.

  • The "theme" support for GTK allows creating themes that chew up memory and require a lot of rendering work, for instance. And this has nothing to do with X being slow

  • Applications sometimes do pathologically bad things.

    The canonical example of this is that Netscape Navigator/Communicator makes the severe error of querying the X server for all font information when it starts up. If you run it remotely, the sheer quantity of data transferred takes around 15 seconds across Ethernet. And this is quite brain-damaged; there is no need to ask for all the fonts when the app could just ask for the two or three fonts that it is configured for, and then fall back to looking for more if it doesn't find the ones it wants (a sketch of the difference follows this list).

    Having a faster GUI doesn't help in the slightest if application developers do silly things...

  • X caches like crazy.

    If you have lots of memory, this is very good. If you don't, this is probably very bad.

  • The Enlightenment window manager combines many of the issues above with an expectation of having fast rendering capabilities; it's extremely memory-, CPU-, and hardware rendering-happy.
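Returning to the Netscape item above, here is a rough Xlib sketch of the difference between asking the server for every font it knows about and asking only for the ones the application needs. The font names are just examples; this is an illustration, not Netscape's actual code.

/* Sketch: enumerating every font versus loading only the fonts you need. */
#include <X11/Xlib.h>
#include <stdio.h>

void fonts_the_slow_way(Display *dpy)
{
    int count;
    /* One request, but the reply lists every font the server knows about,
     * potentially a huge amount of data over a remote connection. */
    char **names = XListFonts(dpy, "*", 65535, &count);
    printf("server sent %d font names\n", count);
    XFreeFontNames(names);
}

void fonts_the_sane_way(Display *dpy)
{
    /* Ask only for the handful of fonts the application is configured for,
     * and fall back if one is missing. */
    XFontStruct *font = XLoadQueryFont(dpy,
        "-adobe-helvetica-medium-r-normal--12-*-*-*-*-*-iso8859-1");
    if (font == NULL)
        font = XLoadQueryFont(dpy, "fixed");
    if (font != NULL)
        XFreeFont(dpy, font);
}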

There's only one of the items here that suggests an informed "improvement" to X: there should be a more visible way of tuning X not to do nearly so much caching. Perhaps there should be a way of specifying a maximum cache size, and then attaching values/ages to buffer entries so that data could be discarded more intelligently; a rough sketch of that idea follows.
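This is a purely hypothetical sketch of the kind of age-tagged, size-limited cache suggested above; nothing here reflects how the X server actually manages its caches.

/* Hypothetical: a byte-limited cache where the oldest entry is evicted first. */
#include <stdlib.h>

struct cache_entry {
    void          *data;
    size_t         size;
    unsigned long  last_used;       /* "age": tick of the most recent use */
};

struct cache {
    struct cache_entry *entries;
    size_t              count;
    size_t              total_bytes;
    size_t              max_bytes;  /* the user-visible maximum cache size */
};

/* Evict least-recently-used entries until `needed` more bytes would fit. */
static void make_room(struct cache *c, size_t needed)
{
    while (c->count > 0 && c->total_bytes + needed > c->max_bytes) {
        size_t oldest = 0;
        for (size_t i = 1; i < c->count; i++)
            if (c->entries[i].last_used < c->entries[oldest].last_used)
                oldest = i;
        free(c->entries[oldest].data);
        c->total_bytes -= c->entries[oldest].size;
        c->entries[oldest] = c->entries[--c->count];   /* drop the slot */
    }
}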

But that would only help on systems that are relatively memory "starved." The more usual performance problem comes from application developers who write slow code.

The "Berlin" proposal does the classic thing of assuming this issue away by saying that everything will have to get rewritten to use the Berlin APIs. Which is well and good if the only developers that write for the Berlin APIs write good, fast code.

But as soon as Berlin got popular (feel free to s/Berlin/Something-else/g as needed; I'm not particularly bashing Berlin here...), you'd get application developers who write slow code, and we'd be back in the situation of people saying

"But Berlin is so slow! We need to design a new, faster GUI system!"

Slow X Apps, posted 18 Feb 2001 at 05:24 UTC by jmg » (Master)

cbbrowne, I just reread your post about the real reason X apps are slow. I personally don't think that X caching is an issue. That's more server-specific in my experience.

What really makes it slow is that when you receive an expose event for the lower 1/20th of the window, most programmers are too lazy to work out what exactly is in that lower 1/20th, so they draw the whole bloody window over again. Again, I haven't taken a look at most application widget code, but I'm sure they probably don't use subwindows within a single window often enough, so that you would only get an expose event for a few buttons instead of the whole thing.
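A rough Xlib sketch of that difference: repaint only the rectangle the Expose event describes instead of the whole window. Here redraw_rect is a hypothetical stand-in for the application's real drawing code.

/* Sketch: repaint only the damaged area reported by the Expose event. */
#include <X11/Xlib.h>

extern void redraw_rect(Display *dpy, Window win, GC gc,
                        int x, int y, int w, int h);   /* app-specific */

void handle_events(Display *dpy, Window win, GC gc)
{
    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            /* The lazy approach redraws everything on every Expose.
             * The event already tells us which part was damaged: */
            redraw_rect(dpy, win, gc,
                        ev.xexpose.x, ev.xexpose.y,
                        ev.xexpose.width, ev.xexpose.height);
            /* (Expose events arrive in batches; ev.xexpose.count says how
             * many more are coming, so regions can also be merged first.) */
        }
    }
}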

I think 3D graphics should be part of the X11 specification. It should support all the primitives of a display screen, and 3D is part of that. Wasn't there a company that recently demoed real 3D displays with multiple usable layers? How are you supposed to do that over the network when the time comes?

There was someone working on an extension to the X11 protocol that would add support for these and more. Another important technique might be to render parts of the image locally and then send a bitmap to the server if the line speed is too slow. (Ever try to animate a mesh of 3D lines drawn with the line command over a 10Mbit X11 connection, and then do the same thing except you draw the lines locally and blit the bitmap over the network? Well, if you have, the latter is about 10 times faster on relatively small meshes, like 128x128 points.) I believe this was posted to one of the earlier articles that talked about similar performance issues with X. Should have made a note of it in my diary.

Mark Kilgard's idea of evolving X11 into D11, posted 18 Feb 2001 at 16:56 UTC by mvw » (Journeyer)

In this discussion, so far two approaches for evolving X11 into something better have been mentioned:

1. a migratory one (like Mark Kilgard's D11 proposal)
2. a revolutionary one (like Berlin)

As I too believe that only a small number of people actively work on X11 in the XFree86 project (perhaps somewhere between 20 and 30 people, some of them very specialized), I believe a radical approach would not have much chance; it would be too costly to start from scratch (look at Mozilla to see what a rewrite costs).

Kilgard's D11 proposal is charming, as it points out how a migration could take place. Does anyone know where he works now (NVIDIA?) and what his present view on X11 evolution is?

I am also not sure whether the typical X11 separation of tasks into

1. an X11 server for low-level operations
2. an OpenGL renderer for low-level 3D
3. toolkits for assembling widgets, etc.
4. window managers for shuffling windows
5. desktops for integrating applications

is a blessing or a pain for this environment.

That some parts are outdated is obvious. While I am usually the pro-C++ and contra-Java guy, I must admit that Java's Java2D and Swing APIs have some very good ideas that I have not seen in Qt yet. (GTK+ was no match, due to its lack of commercial Win32 support.)

I suggest:
1. establishing a D11 project on SourceForge
2. getting some expert opinions (where is Kilgard when we need him? :-)
3. stealing a few of the good Java ideas...

X is ...., posted 19 Feb 2001 at 08:07 UTC by nymia » (Master)

I'm not claiming to be knowledgeable in this area, but I'd like to post my opinions about X.

To me, the XFree86 graphics server on the PC seems really slow compared to BeOS. I tried opening several OpenGL windows on X and it just couldn't handle the load. On BeOS it was different; it ran fine.

Could it be that X doesn't use the video graphics chipset? Maybe X is doing all the work by managing the video RAM itself? Or could it be the request-and-reply protocol? Sending requests and receiving replies across the wire (network) will surely cause a performance hit.

Maybe X needs a better threading model so it will never block when no events or errors are on the queue?

IMO, app servers like the ones on BeOS and AtheOS are an improvement over X. Basically, they're the same in terms of messaging, but appservers are much better in terms of design and implementation. If a client app dies, its counterpart sitting in the appserver will just be deleted.

Port ZooLib to framebuffer, test on slow machines, posted 19 Feb 2001 at 17:00 UTC by goingware » (Master)

I'd like to suggest that you port ZooLib to the framebuffer. And hasn't someone already ported GTK?

ZooLib is a cross-platform application framework, so an app written for framebuffer ZooLib will also run under X (and Mac OS, Windows or BeOS for that matter).

It's both efficient and powerful. I was using it to develop a graphics app for Mac OS and Windows last year, and I had some beautiful interactive animated alpha blending on fairly large areas working on a 150 MHz PowerPC 604 (not even 604e) Mac 8500.

If your software isn't running fast, it's because programmers aren't writing fast software. Maybe the problem is that programmers tend to buy themselves fast computers. If they programmed on the slowest computers tolerable, applications would tend to run faster (and similarly, on computers with less memory, apps would be leaner).

Using a fast computer for builds is OK, but at least test on a slow machine. Get yourself a Pentium or Pentium Pro for testing. But note the advantage of fast builds: if you structure your source code the right way (as discussed in the chapters on "Large Scale Software Architecture" by John Lakos in More C++ Gems, and I would guess in Lakos' book as well), and make good internal use of reusable code, your code will build fast too, and this is more noticeable on a slow machine.

Testing on slow machines, posted 20 Feb 2001 at 15:29 UTC by pphaneuf » (Journeyer)

I have two computers here, a 486DX4/120 with 80 megs of RAM and a Pentium MMX 225 with 96 megs of RAM.

I know what is slow. I avoid the STL because it makes my code take no less than 8 to 10 times longer to compile, even on my faster machine (I suppose this isn't a good reason, but please, someone hack pre-compiled headers into GCC!).

As a result, I like what I get: Quadra runs almost perfectly on my 486 in 16-bit color depth, with fullscreen fading and everything (it runs perfectly in a PseudoColor visual).

I have to agree 100% with goingware: an old machine helps to write fast software.

As a note, I would add that I play Quake III Arena frequently on my below-minimum-spec Pentium (with a Voodoo2), and I sometimes even win. Why are GNOME and Mozilla so slow on a machine that can run a machine-intensive first-person shooter at playable speeds?

NOBODY will make me say that GNOME is fast or memory-efficient. Quake is, but not GNOME. Disclaimer: I didn't even try KDE on my machine, maybe it is fast as hell, maybe it is just as slow as GNOME, I can't tell.
