Software reliability

Posted 7 Dec 2000 at 01:22 UTC by jmg

Salon.com posted an article, "High tech's missionaries of sloppiness."

In the article the author brings up many good points about the computer industry. I remember the days back in the '80s when software was actually reliable and usable, without worrying about random crashes. Of course, I'm talking about consumer-level applications on DOS-based machines.

Even today, I have just spent two weeks finding out that my hard drives have broken firmware and can't handle tagged command queueing properly. During that time, my entire home network was pretty much frozen, since it depends on the server. This all started with a simple SCSI card upgrade and the replacement of a few hard disks.

Hardware design is significantly more reliable because there are tools to verify that designs are correct. Even so, that doesn't eliminate the problems that have plagued recent hardware: hardware is at the mercy of its environment, which can cause signals to change (see the recent AMD 760 and i820 issues).

With software, we don't have the excuse that our signals changed because of outside interference. If a user can crash the application by providing incorrect input, the application needs stricter input checking. If a library returns incorrect data, the library (or its documentation) should be fixed, rather than papered over with special workarounds. Software is entirely at the mercy of the people who design and write it in the first place. Software should be flawless.
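
As a minimal sketch of the kind of strict input checking I mean (the function name is just for illustration, not from any particular program): parse a number defensively instead of trusting whatever the user typed.

#include <cstdlib>
#include <cerrno>
#include <string>

// Parse a base-10 integer, rejecting malformed input instead of trusting it.
bool ParseLong( const std::string &text, long &value )
{
    errno = 0;
    char *end = 0;
    long result = std::strtol( text.c_str(), &end, 10 );

    if ( errno == ERANGE )
        return false;                     // overflow or underflow
    if ( end == text.c_str() || *end != '\0' )
        return false;                     // no digits at all, or trailing garbage like "12abc"

    value = result;
    return true;
}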


The Laziness to Education ratio..., posted 7 Dec 2000 at 06:37 UTC by Johnath » (Journeyer)

The Salon article really assumes, and I don't exactly disagree with this, that programmers' willingness to write buggy code is pretty much entirely a symptom of their laziness and unwillingness to change. And yet, I know lots of anal programmers who will track a bug for days if needed - laziness can't be the whole story.

I would argue that a significant component (how significant, I can't really say - hence the title of this post referring to a ratio but not defining it) is a lack of education. Having almost completed a 4-year CompSci degree, I can say that while there is an emphasis on testing and program correctness, there is (as one would expect) so much theory that very little room is left for actual "and this is how you securely create a temp file" instruction. I understand, appreciate, and advocate the view that university is not SUPPOSED to focus on the applied stuff, since a good theoretical framework gives you a broad foundation that lets you pick up the applied stuff in short order. However, by totally forgoing any treatment of 'common mistakes when playing with file descriptors' or 'why pointer arithmetic is bad for you', you produce graduates who can quickly absorb any new language or paradigm - and then proceed to make the same silly mistakes.

It is not that much more difficult to open a temp file securely than to do so insecurely, for example, so laziness can't really be the only explanation... instead I'd contend that it's a lack of knowledge - of how to do it right, and of why doing it the 'obvious' way is wrong. If we could fill in those gaps - if we could put just a little practical emphasis back into the curriculum - I think we'd see this stuff decline.
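
To make the temp file example concrete, here is a minimal sketch of the secure route on a POSIX system: mkstemp() creates and opens the file atomically, unlike the race-prone tmpnam()-then-fopen() approach. The name prefix here is purely illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

// Returns an open descriptor for a freshly created temporary file, or -1 on failure.
int OpenTempFile()
{
    char name[] = "/tmp/myapp-XXXXXX";   // the trailing XXXXXX is required by mkstemp
    int fd = mkstemp( name );            // atomically creates the file (mode 0600) and opens it
    if ( fd == -1 ) {
        perror( "mkstemp" );
        return -1;
    }
    unlink( name );                      // optional: the file goes away when fd is closed
    return fd;
}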

I do not think, however, that we would see it disappear. There is certainly rampant laziness and arrogance in a lot of our industry, and a lot of, to quote the article quoting someone else, "Don't worry, be crappy" advocates. I dunno - I've rambled on a fair bit, but as a way to frame this discussion, I'd place the Laziness:Education ratio, in terms of explaining software failures and breaches, at about 4:1 - that is, 80% laziness, 20% missing knowledge. Where do others see it?

Some Quality Advocacy, Improving Free SW Quality, posted 7 Dec 2000 at 07:34 UTC by goingware » (Master)

I won't repeat my whole post here, but please see my response to the Slashdot discussion of this, entitled Some Software Quality Advocacy.

The links I give in my comment are to some pretty relevant quality advocacy sites such as:

Also see my comment "Organized Linux QA Proposal - linuxquality.org soon" in the recent article here, Announcement List for Technical Reports. I'm pleased to say that after a hiatus of several months I wrote back to the folks who'd offered to host this effort, and they responded that they were quite enthusiastic to continue with it.

A Good Example of RTTI, posted 7 Dec 2000 at 09:49 UTC by goingware » (Master)

A good example of the use of RTTI is in a recent project where I had the problem of editor tools operating on objects in an editor pane.

The editor pane itself knew of the objects according only to their abstract base class, but they could actually be of different subclasses, with dramatically different data representations - but the common things you could do to them would be to paint them on the screen, move them bodily, rotate them and so on.

The tools that did the common operations like moving and rotating only needed to know of them as the abstract base classes.

But there were certain special tools that could only operate sensibly on certain object types. For example, you could add a vertex to a polygon. So I had to use dynamic_cast (which uses RTTI for its information) to determine whether the item was a polygon and then add a vertex to it.

You could argue that there might have been a better way to structure this so that none of the tools needed to know any of the exact types of any of the editor objects. But I couldn't really see a way to do this.

I struggled mightily with the problem and in fact submitted an Ask Slashdot over it which was run as Overcomming (sic) Programmer's Block?. I wanted to have nice clean interfaces and well-defined communications channels and not make everything public or have lots of friends.

In my particular case, I had two class hierarchies. At the top of one hierarchy was an abstract object editor, and its subclasses were particular kinds of object editors. In the other hierarchy the base was a drawing tool, and the subclasses were particular tools. Only particular subclasses from one hierarchy were meant to operate on particular subclasses of the other, and I think there you can argue that RTTI is needed.

I particularly think RTTI, as it's actually done, is far better than the hand-coded schemes used to accomplish the same thing in systems where RTTI isn't available or isn't used - usually assigning some kind of class ID to a member variable. With RTTI, you have a single way of handling the problem that's built into the language, and it's pretty easy to use:

void Foo( Base *bar )
{
    Sub *sub = dynamic_cast< Sub* >( bar );

    if ( sub == NULL ) {
        // it's not a Sub
    } else {
        sub->methodDefinedOnlyInSub();
    }
}
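
For contrast, here is a rough sketch of the hand-rolled class-ID scheme I mentioned above (all the names are invented for illustration, not taken from my actual project):

// Hand-rolled class identification, roughly as it's often done without RTTI:
// every object carries an ID, and callers check it before downcasting.
enum ClassID { kGenericID, kPolygonID /* ... one entry per subclass ... */ };

class Base
{
public:
    Base( ClassID id ) : fClassID( id ) {}
    virtual ~Base() {}
    ClassID GetClassID() const { return fClassID; }

private:
    ClassID fClassID;
};

class Polygon : public Base
{
public:
    Polygon() : Base( kPolygonID ) {}
    void AddVertex() { /* ... */ }
};

void AddVertexTool( Base *item )
{
    if ( item->GetClassID() == kPolygonID ) {
        // Only as safe as the ID bookkeeping - which is exactly the
        // maintenance burden dynamic_cast removes.
        static_cast< Polygon* >( item )->AddVertex();
    }
}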

I think one of the reasons C++ is more difficult is that it really demands that you design your program better. If you choose your initial classes poorly, you're quickly going to get into a rat's nest that's hard to get out of. But if you do take the trouble to make the better design, the coding itself is easier and your whole program works better. Still, architecting this program, which I worked on for most of the last year, was a real stretch and frequently made my brain hurt.

There's no question that I got excellent results from the finished program, though.

The above got posted in the wrong place, posted 7 Dec 2000 at 09:51 UTC by goingware » (Master)

Hey, that got posted in the wrong article somehow - it was supposed to go under the C++ article.

Is that a bug?

Why Software is not as Reliable as It Once Was, posted 7 Dec 2000 at 12:00 UTC by moshez » (Master)

Well, I think the overwhelming factor, and the one most thoroughly ignored, is that clients' needs have changed. People who use software do not place a high value on reliability. Ease of use, features and compatibility all rate much higher. So, naturally, programmers who care about how many people use their software do not code for reliability.

Laziness and WTF, posted 7 Dec 2000 at 12:56 UTC by hadess » (Master)

Laziness and WTF are, I guess, the main reasons why software is, or can be, crappy. For example, most Linux users, or programmers who have only known Linux as their Unix system, won't care too much about portability. Code crappiness also comes from the "scratch my own itch" thing: "if it works for me, that must be alright then...". And you have people like Apple and Microsoft that give most of the computer users in the world shite software. Well, in our defense, I could add that DOS programs were simply not complicated compared to what we have today...

Please port your programs and make them useful for other people; you're not the only one out there.

Linux and portability - Lessons from the Linux kernel, posted 7 Dec 2000 at 13:16 UTC by goingware » (Master)

hadess: I wrote a yet-to-be-published article on cross-platform development in which I quote Jon Watte, an engineer for Be, Inc. and formerly of Metrowerks, as saying:

Portability, to some people, means it compiles on at least two Linux distributions and various flavors of gcc.

Portable? You want portable? See the ZooLib cross-platform application framework - native executable binaries for Mac OS, Windows, BeOS and Linux. Now that's portable. And note I say native - no virtual machine, and you can use OS-dependent API's anytime you feel like it, so you're not locked in like in Java.

I've downloaded source code for applications the author wrote on some other Linux distribution that I couldn't even get to build on Slackware.

It's interesting though, that one of the finest examples of portable, cross-platform development is the Linux kernel itself.

This is impressive because my recollection is that the first kernel was hardwired to depend on 80386 memory management, but now it supports so many CPUs - not just microprocessors like PowerPC, x86, Alpha, 680x0, IA-64 and SPARC, but mainframes like the S/390. Does it do MIPS?

And then there are all the buses - PCI, ISA, whatever is in use on Suns, and on the Macintosh I think they support NuBus, plus PCMCIA and CardBus. And filesystems - FAT, NTFS (working somewhat), Mac HFS, all the native Linux filesystems (ext2, ext3, ReiserFS), ISO 9660 - the list goes on.

And yet the kernel itself is where this kind of portability would be just the hardest, because you have to deal with all the lowest-level details of the system - different kinds of interrupt handling, dramatically different TLB's.

We could all draw a lesson on portability from what the kernel folks are able to do.

good code, posted 7 Dec 2000 at 15:17 UTC by mstevens » (Journeyer)

We won't get good code until "do it fast" is less important than "do it right". I feel most programmers could perform much better than they do; it's just that no one wants good code, and you can even get penalised for spending the time necessary to produce it...

silver bullets, posted 7 Dec 2000 at 17:44 UTC by wnewman » (Master)

I quote:

Quality-focused software development can dramatically shrink overall development costs.

0.05 defects per 1,000 lines of code

I'm sure Humphrey is not a complete idiot, but by the time his arguments have been boiled down to this level of summary, they're clearly out of touch with reality. Admittedly, many customers won't pay for reliability, but some will. I'd confidently bet that a C++ compiler that implemented the complete ANSI standard with only a dozen bugs (240,000 lines of code times 0.05/1000) in its initial release, halving that number in each subsequent release, could take a good chunk of the market on that feature alone, even if its programming environment, compiled code performance, and other features were unremarkable. If Humphrey's techniques are so powerful that this level of reliability can be achieved more cheaply than the current level of reliability, why haven't Humphrey's followers grabbed this niche in the compiler market? Humphrey may enjoy the ivory tower too much to sully himself with real development for a mere $100M or so, but it strains credulity to imagine that every single one of his followers is similarly high-minded.

I read several chapters of Humphrey's book several years ago. Many of the recommendations seemed reasonable, but I thought the analysis was shot through with what I consider a confused worldview: the assumption that software metrics are, of course, measuring something meaningful. I myself consider software metrics to be about as valuable as readability metrics for English prose: they measure something, and it's correlated with what you care about, but it's not a very accurate or reliable measure of what you care about. For example, I chose the ANSI C++ compiler example so that the bugs-per-line-of-code claim would be relatively meaningful. But in most real-world development, where collecting requirements is a big part of the game, there's no bright line to tell you whether a behavior is a bug. "Working as designed," anyone? Claims based on software metrics collected by biased observers leave me unimpressed.

We shouldn't be surprised..., posted 7 Dec 2000 at 18:21 UTC by deven » (Journeyer)

Given the obsession over time-to-market, why should we be surprised that so much software sucks so badly? Products are rushed out the door with little regard for quality, because of the need to have something out there. Sure, there are programmers who lack the skills to produce quality code, and their ranks have probably been growing recently. But there are plenty of competent programmers who could do a much better job if given the time to do it right.

Netscape 6 and Mozilla make an interesting case study. Mozilla is still (at least) 5-6 months away from a final 1.0 release. Right now, it's considered beta. It's improved a lot over older versions, but still has some problems needing attention. Nevertheless, Netscape 6 branched from the Mozilla codebase around M18, because of time-to-market pressures. As a result, this "final" Netscape 6.0 release is out, even though it still has some serious bugs, including some that are already fixed in the Mozilla trunk. Maybe Netscape should have called it a beta, or a "preview release", but they didn't.

Then again, it helps combat the (undeserved) do-nothing impression too many people have of the Mozilla project. And it's got webmasters working on fixing their websites to work properly, when they couldn't be bothered to before the release of Netscape 6. And Microsoft might have gotten an even more commanding lead in the "browser wars" if Netscape had waited until Mozilla 1.0 to fork Netscape 6. (By the way, I believe that reports of Netscape's death in the browser wars have been greatly exaggerated.) There are benefits to the release of Netscape 6.0, despite the questionable quality. Can we really be sure it was wrong of Netscape to release it when they did?

The irony is that a company that takes the time to do the job right and creates a high-quality product is likely to lose to the company that got to market first with a slipshod product. Until users demand higher-quality software, it's not likely to change -- unless a very well-funded software company actually sees the light. The motto should be: "We will release no software before its time." (After Mozilla does reach 1.0 status, the Netscape 6.x release based on it should be much better than Netscape 6.0 is...)

The analogy to the auto industry is very revealing. It's also a chilling warning that will probably go unheeded. The U.S. may be the "software leader" today, but if this disregard for quality continues, someone's going to fill that vacuum and make the users realize that they really do care about quality, but never had that option before. It seems more and more likely that India will do to the American software industry what the Japanese did to the American auto industry. The parallels are striking, and should not be discounted lightly. In a decade or two, will it be commonly heard that "Americans make crappy software"?

Right now, too many people are resigned to the belief that computers are inherently unstable and you should just deal with random crashes, having to reinstall software, fixing problems without ever understanding what caused them, etc. (Microsoft has done an amazing job of convincing people that this fate is inevitable, and nothing to blame them for!) Sooner or later, someone is going to start convincing people that it doesn't have to be that way, by creating high-quality software. Let's hope the Open Source community does it first...

PC hardware is crap too, posted 7 Dec 2000 at 23:35 UTC by walken » (Master)

Has anyone else noticed how much hardware quality has decreased in the last few years, too?

I find that most PCs around me are just not able to work to their specs. One stress test that I usually do each time I get a PC to work with is to run repetitive tar+gzip/untar/diff -ru cycles on one big source tree. It stresses the disk, the CPU and the memory. If the CPU is not 100% used doing this, I also run kernel compiles in the background and watch for signal 11's. And before doing this, I set up hdparm according to the drive and controller specs.

I run this on new machines for usually 24 hours, and the thing is, now I almost expect to see it fail. It does on about half the machines I try. Then it's always a bit hard to trace it back to the actual hardware problem - often it is bad RAM, but I've seen weirder stuff too. Oh, and memtest86 does not always detect these bad RAM failures either; sometimes you have to use the disk too to trigger the problems.
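
Just for illustration, here's a much-simplified stand-in for that kind of loop in C++. It only does a write/read/compare pass over one scratch file, and much of it will be served from the OS cache, so it's nowhere near as thorough as the real tar/gzip/diff cycle - but it shows the shape of the check:

#include <stdio.h>
#include <stdlib.h>
#include <vector>

// Write a pseudo-random buffer to disk, read it back and compare.
// On sound hardware this should never fail; a mismatch points at
// flaky RAM, disk or controller.
int main()
{
    const size_t kBufferSize = 64 * 1024 * 1024;   // 64 MB per pass - an arbitrary choice
    std::vector<unsigned char> out( kBufferSize );
    std::vector<unsigned char> in( kBufferSize );

    for ( int pass = 0; ; ++pass ) {
        srand( pass + 1 );
        for ( size_t i = 0; i < kBufferSize; ++i )
            out[i] = (unsigned char) rand();

        FILE *f = fopen( "stress.tmp", "wb" );
        if ( f == NULL || fwrite( &out[0], 1, kBufferSize, f ) != kBufferSize ) {
            perror( "write" );
            return 1;
        }
        fclose( f );

        f = fopen( "stress.tmp", "rb" );
        if ( f == NULL || fread( &in[0], 1, kBufferSize, f ) != kBufferSize ) {
            perror( "read" );
            return 1;
        }
        fclose( f );

        if ( in != out ) {
            fprintf( stderr, "MISMATCH on pass %d\n", pass );
            return 1;
        }
        printf( "pass %d OK\n", pass );
    }
}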

I think the PC industry is in pretty bad shape. I blame it on the users, most of whom do not care if their PC is unreliable, since they use crap software anyway and couldn't tell the difference :-/

Is keeping software simple the key?, posted 8 Dec 2000 at 02:48 UTC by atai » (Journeyer)

Modern software systems are just so complex that they introduce many variables that can cause failure. Software generally grows fatter through iterations of development. OSes, GUI environments and applications become richer, never leaner. Maybe the key is to keep software simple, or to make "trimming the fat" a standard part of any software development process?

Imagine the job of software engineers as picking and selecting pieces from the source code out there, integrating them, and then eliminating the unnecessary parts for a specific application. The same thing would be done again for the next application. Today there is no "elimination" step (except maybe for embedded systems), and whatever monster gets created is used as is (say, Windows 2000).

Re: Is keeping software simple the key?, posted 8 Dec 2000 at 08:57 UTC by thorfinn » (Journeyer)

Hrm. I think I've got to say the same thing over here as I said over in the C++ thread...

A couple of key people who grok software architecture, and who are in charge of making sure that it happens, are all you need for any given project to work and work well.

In a way, that is precisely what atai is talking about. That process of identifying and ensuring that key components (the ones that are reused again and again) are minimal, clean and efficient, is the job of the software architect. If that process isn't happening, then projects of significant size are doomed to eventual, if not immediate, decrepitude.

The problem is, though, that very few people actually have the style of thought that suits being a software architect. It requires the ability to think in seriously fractalesque fashion about the fundamental interconnectedness of all things, then reach in and be able to tweak that fractal headmap of whatever software is being worked with. Not many people can actually do that on large projects.

The good news, though, is that if you do have someone like that on your team whom people listen to (and an awfully high proportion of lead developers on the well-known open source projects are like that), it leads to a general knock-on effect on code quality that significantly reduces both time-to-release and the number of bugs per cycle.

Blame management, posted 8 Dec 2000 at 12:18 UTC by ztf » (Apprentice)

Now that I've got your attention with an inflammatory title ...

Software quality is certainly a technical issue (and for a great exploration of that topic, I highly recommend Robert L. Glass's Building Quality Software). However, that's only one side of the coin.

The other side is the myriad of non-technical issues that can kill any hope of achieving technical quality. Yes, this means management. Bad project management has doomed more software to bugginess than lazy or incompetent programmers have, by a long shot.

Those of you with Real World(tm) experience know what I mean. For example:

  • Vague and nebulous product specifications: I once worked on a product whose software specification was "provide all reports necessary to meet all applicable Federal, State, local, and site-specific environmental regulations." Of course, this was on a (low) fixed-price contract.
  • Schedules from Fantasy Island: When every previous project of a similar type has taken two years to complete, when every technical expert consulted estimates that this project will also take approximately two years to complete, have upper management base their entire product launch plan around getting it done in six months. Then, beat up on the technical staff when they don't deliver in that time, and insist that they code faster. Finally, consider the project a failure when working code is delivered in eighteen months.
  • Don't practice change control: I don't mean failing to practice software configuration management with a good version control tool like CVS or ClearCase (although failing to do that will certainly get you in hot water). I mean failing to practice project change control. For instance, an embedded systems project I worked on required close hardware-software cooperation and codesign to meet the product specification. However, the hardware group would occasionally make unilateral design changes that had software ripple effects. Fair enough, except do you think that was followed up with a new software design review, estimate, and schedule change? Ha! Again, this was on a fixed-price contract, and our project manager apparently had a phobia about charging change items back to the customer.

I'm sure everyone can add their favorite horror stories. The point is, these kinds of management blunders effectively kill any chance of shipping quality, bug-free software, regardless of the technical skill and work ethic of the programming staff (which was high on all of the projects I've mentioned).

Or, to put it personally, blame Bill Gates and Steve Ballmer, not legions of lazy and stupid Microserfs, for Microsoft's quality troubles.

Following the leader, posted 8 Dec 2000 at 18:23 UTC by DrCode » (Journeyer)

What happens when the most profitable, successful software company is also known for producing some of the most bug-ridden products? They provide a lesson for the rest of the industry: Success comes mainly from marketing and pretty user interfaces, rather than solid, efficient code.

A bit wrong, posted 9 Dec 2000 at 05:48 UTC by kjk » (Journeyer)

You're a bit wrong here, DrCode. Success comes from making products better than the competition does - products that people are willing to shell out their hard-earned money for. Not to imply anything, but I can understand why this might not be politically correct thinking in some circles. After all, isn't it that much more comforting to think that M$ is indeed a bunch of lazy and stupid coders led by incompetent management, with their only assets being good PR and marketing? Now that explains their success, doesn't it?

And yes, if reliability isn't high on the customer's list there will be no reliability.

And I won't even mention how my 2.2.16 kernel locks up during shutdown on my laptop while trying to save mixer settings (which is quite unnecessary, because the sound driver sucks so badly that I decided I would rather re-boot into Windows than endure the scratching sound it produces), how X-Windows happily locks up when I press mouse buttons in an order it doesn't like, how Netscape crashes on me once in a while, and how running eog *.jpg (Eye of Gnome) in a directory with >100 pictures is practically a DoS attack. There's an example of reliability for ya. Which is just to say that if the most prominent Open Source projects (the kernel, X-Windows, Gnome) are bug-ridden products, how can we expect anything better from Joe Average Programmer?

the choice, posted 9 Dec 2000 at 05:59 UTC by fair » (Master)

It's the old adage:

"Cheap, Fast, Good - pick any two."

If PC hardware is any guide, the common choice is "cheap & fast."

Re: A bit wrong, posted 9 Dec 2000 at 07:18 UTC by thorfinn » (Journeyer)

Hrm... I'm not sure about this "better" product business, kjk. Sure, in pure capitalist theory, consumers buy the best product at the best price, according to demand.

Unfortunately, that pure theory ignores a rather critical feature of marketplaces... namely, Vendor Lock-in. If you're out in the marketplace first, and you sell a lot of stuff, particularly if it's stuff that does not interoperate well with whoever's out in the marketplace second... you get fairly serious vendor lock-in occurring.

And, as is very well known, once you've got vendor lock-in, that market does not shift very much, until you get significant technological/paradigmatic change. Consumer mind-share has bugger-all to do with whether the product is fundamentally better or worse.

So long as product A is sufficiently close in feature set to product B, the consumer/user cannot distinguish between them without actually buying both products, or going on recommendations from other people who own one of them... and when you go on recommendation, generally the recommendation is for the product that someone already has. Hence, vendor lock-in.

It takes a very significant technological difference (e.g., the level of difference between DVD and VHS tape, as opposed to the difference between Betamax and VHS) to make any real impact on vendor lock-in...

And reliability of software is just not perceived as a significant technological difference in most marketplaces. In some - e.g., medical hardware and life-critical control system software - it counts... but the general consumer doesn't care if his computer crashes once a day or never... because those crashes hardly ever destroy anything important.

And that's the real driving force behind the lack of software reliability. The market doesn't care. And if the market doesn't care, the producers sure as hell aren't going to, at least, if they've got any sense and want to make any money.

As fair says... Consumers want it cheap. The market demands that producers get it (whatever "it" is) out in the marketplace first, hence fast. That leaves "good" out in the cold, usually. Sucks, don't it.

Very good point, posted 9 Dec 2000 at 09:30 UTC by kjk » (Journeyer)

This is a very good analysis, and I fully agree with it.

One minor point: I think "M$ was there first and sold lots of stuff" is a bit simplistic. There was MacOS (a graphical OS) before Windows, there was VisiCalc before Excel, there was WordStar (if my memory doesn't fail me) before Word, there was Netscape before IE, there were Watcom and Borland before Visual C++, etc. (One ongoing battle whose results we'll see a few years from now is Pocket PC vs. Palm.) Which just means that M$ didn't acquire their monopolistic power out of thin air just by having good PR and marketing (unless they did, in which case we should give those PR people a Nobel equivalent).

As to better or not: I guess it won't be news to you that there is no definition of "better" that we could get everyone to agree upon (e.g., one could define "better" as "being Open Source", and according to that definition M$ products would always be inferior). However, I explicitly defined "better" for the purposes of this discussion: "products that people are willing to pay for as opposed to paying for similar products from the competition" (it's worded a bit differently above, but the meaning is there). You can only argue that they are not better if we define the term differently.

I would even argue that, at a meta-level, a marketplace with a monopoly still follows "pure capitalist theory" logic. Customers act in their self-interest by buying the products with the best price/value ratio for them; the M$ monopoly is just another variable present in the marketplace. Maybe if M$ didn't have the monopoly, more people would be willing to try Gnumeric instead of Excel (it's still an inferior product in many important ways, but you can't beat the price/value ratio). Given the monopoly (which evolved in a free market), though, they still find Excel to have better value overall, so for practical purposes Excel is the better product - even if one might argue that customers are actually being short-sighted (stupid) by feeding a monster that will bite their heads off once it gets big and bad enough (and it always does).

My view is that you only break "free market logic" by enacting laws that actively screw with that logic (e.g., a tax law exempting from tax all businesses whose CEO's initials are S.B., thus unfairly favoring one business over another - though don't we have such laws (copyright) already?). The U.S.'s legislators seem to think that even a monopoly arrived at by perfectly legal means is still harmful to the idea of a "free market" and the long-term prosperity of the country, so I'm not gonna argue my point too much, and especially not in the presence of the court. This of course begs the reminder that Microsoft has not yet been named a monopoly by the forces of law.

All were blamed except ...., posted 10 Dec 2000 at 23:01 UTC by Malx » (Journeyer)

I think languages must be blamed for the low quality of software :)
Could anyone name a buggy Eiffel-based project?
I think Java is a first step - it simply stops you from making out-of-array references and from leaving errors uncaught. It persuades you to make small source files - one per public class.
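
To put that kind of language-level checking in C++ terms (a minimal sketch, not tied to any real project): it's the difference between an out-of-bounds access that silently corrupts memory and one that gets caught where it happens.

#include <iostream>
#include <vector>
#include <stdexcept>

int main()
{
    std::vector<int> values( 10, 0 );

    // values[42] would be an unchecked out-of-bounds read: undefined
    // behaviour that may silently corrupt memory or crash much later.

    // values.at(42) is checked: it throws, so the error surfaces here.
    try {
        std::cout << values.at( 42 ) << std::endl;
    } catch ( const std::out_of_range &e ) {
        std::cerr << "out of range: " << e.what() << std::endl;
    }
    return 0;
}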

Java still has buggy software... but that has to be blamed on the libraries and buggy APIs (an API is very much a language within the language).

So... where is the best language? The one that won't require you to remember lots of things, or to copy from examples, just to conform to the language/API rather than to add a feature.

Re: All were blamed except ...., posted 11 Dec 2000 at 05:02 UTC by Pseudonym » (Journeyer)

Malx: I got the smilie, but nonetheless... Ye have heard it said in the past "You can write Fortran in any language". Verily I say unto you: You can write Visual Basic on any platform, too.

A good software engineering language helps good programmers write higher-quality software faster and more cheaply than they otherwise could, but it is no substitute for programmer discipline - and a lack of discipline is what we're talking about here.

Could anyone name a buggy Eiffel-based project?

I can think of several reasons why you probably won't find one:

  • If the proportions of "buggy projects" are the same across languages, the fact that there are few projects in Eiffel to begin with would imply that it'd be hard to find a "buggy project".
  • Most professional Eiffel programmers know the language inside out and backwards. A lot of, say, C++ programmers simply don't know the language well enough.
  • Eiffel advocates won't mention "buggy projects". Anti-Eiffel zealots are, by contrast, few and far between, if any of them exist at all. (If they do, they probably use Sather.)

I think part of the problem is that people fresh out of University think that a piece of paper makes them software developers ready for all challenges that the industry can throw at them. While it's an advantage, the fact is that entry-level people need on-the-job mentoring.

My first place of employment (a holiday job at age 15 working with geographic databases) would only take graduates for that reason. It meant they weren't yet poisoned by previous jobs giving them an inflated ego, so they could actually be mentored without finding it insulting.

