When Open Source doesn't open and source doesn't matter

Posted 20 Jul 2004 at 15:57 UTC by lkcl

One frustration too many: time for a rant. When a bug in Mozilla (keyboard focus stays on the previously selected window) has remained unfixed for at least 18 to 24 months; when XFree86 mouse interaction with PS/2 or GPM remains hazardous, makes a system unusable, and that bug has been fobbed off onto the kernel developers and left untouched for at least two years; when there are more examples like this that make using Open Source software a pain - what do you do?

Are you one of the few people with the time and money and expertise sufficient to delve into the source yourself to fix the problem?

Do we have it "too good" and these niggles are, by comparison to the rest of the world's computer users (Windows), absolute peanuts?

Even just now: I wrote an email to report a bug on the Debian bug-tracking system, I return to Mozilla, I have a View-Source window open, I press ctrl-w, and ONE OF THE TABS ON THE WINDOW BELOW closes. This has been a bug present in Mozilla for at least two years, and it still ain't fixed.

I am at a loss, because I am completely dependent on the developers of Mozilla, XFree86, CUPS - you name it, my livelihood is beholden to it (out of choice on my part - let's be absolutely clear, here).

I wouldn't have it any other way: I'd rather put my fingers across two mains phases (you get about 50% more voltage that way) than use Internet Explorer, Microsoft Office and Visual Studio - the last time someone ordered me to use Visual Studio I nearly smashed up their computer after two hours of multiple self-destructing "from scratch" Windows 2000, SP1 plus MSVC+SP3 installs [they got the hint and I continued with Python + MySQL on linux, and later I developed pysxqmsll to contact MS-SQL 2000 using XML].

To return to the issue: we, the users (most of us ourselves involved in open source, so we should at least understand), are completely beholden to Open Source developers.

Has anyone the time or resources to do anything other than, just as you would with a proprietary product, report bugs? No. It takes time to get to know a codebase, let alone to download the source (XFree86 is a 45 MB download, and that's excluding the build dependencies) and compile it first - which may not always succeed if the build dependencies are wrong or the latest versions of the build tools are themselves in development. For example, the latest version of flex can't cope with comments longer than 1024 characters, and automake versions from 1.4 to 1.8 are incompatible with each other due to M4 macro variations.
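
(A purely illustrative aside - none of this comes from any real project's build system, and the "wanted" automake series below is an invented placeholder: the kind of pre-flight check that saves you a doomed compile is tiny. Something along these lines, in Python, could sit in front of the build:)

    # Hypothetical pre-flight check: refuse to start a build if the installed
    # automake is not the series the tarball's configure.ac was written for.
    # Purely a sketch - the wanted series is a made-up example value.
    import re
    import subprocess
    import sys

    WANTED_SERIES = "1.7"   # placeholder: whatever series the maintainers used

    def automake_series():
        out = subprocess.run(["automake", "--version"],
                             capture_output=True, text=True, check=True).stdout
        # First line looks like "automake (GNU automake) 1.7.9"
        match = re.search(r"(\d+\.\d+)", out.splitlines()[0])
        return match.group(1) if match else None

    if __name__ == "__main__":
        found = automake_series()
        if found != WANTED_SERIES:
            sys.exit("automake %s.x required, found %s; aborting before a "
                     "doomed compile." % (WANTED_SERIES, found))
        print("automake looks compatible, proceeding.")

It doesn't fix the incompatibility, of course - it just fails in two seconds instead of twenty minutes into the build.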

Have the developers _themselves_ got the time, knowledge, expertise and resources that THEY need to work on bugs and fix them?

Well, apparently not!

We've recently seen a license change to XFree86 - at the whim of the XFree86 developers - which forced the hand of some of the larger corporations to form a consortium and help set up X.Org.

Now, whilst I do not know why the XFree86 developers did that, it has a quite alarming implication. The only way that a project may change its license is if all of the copyright holders in its code agree to the change (you can read in detail about the effect that the Copyleft-or-not-to-Copyleft question has had on XFree86, here).

And suddenly, a significant proportion of XFree86 changed license.

That means that for anyone who attempted to submit code, patches or bugfixes to XFree86, one of the following happened:

- they relinquished their copyright to XFree86; or

- they agreed to "toe the line" of the XFree86 project; or

- their code was rejected.

It is the last of these that, as someone who has written at least 100,000 lines of code for a major open source GPL project, particularly bothers me.

We're all for open source - yet if the open source license leaves the project in a vulnerable position, or the project's charter leaves us in a vulnerable position, then there's absolutely nothing that can be done about it.

Fortunately, the XFree86 group made a temporary decision to release code simultaneously under the GPL, which they later withdrew. It was a close call.

Now, there is a way to protect code.

The ASF's charter mandates that developers must show "mutual respect" for each other, and that decisions must be made on grounds of technical merit [not strategic merit, unfortunately - a note for anyone wishing to set up an ASF-like charter: you might wish to address this particular issue if the scope of the project is very broad].

ASF projects _have_ had their lead developer replaced, including in instances where that lead developer donated the code to the ASF originally. One such "leader" was causing so much grief and antagonism that the ASF had to activate its charter and expel the "leader" from the project!

Like I said at the beginning of this article, I wouldn't swap the present situation for a proprietary one even if it were within my power and someone offered me a billion dollars.

... but if all our major Open Source projects - Apache, XFree86, Samba, Mozilla, OpenOffice, PHP, KDE, GNOME - were protected from the whims and desires of the project leaders, from the developers and from sociopathic corporations, I would not be so worried.

In short, I am not entirely enamoured with Open Source, yet I would not have it any other way. It would be fantastic if every critical project had an ASF charter, and a foundation with teeth that [legally] protected the code.


Gah, posted 21 Jul 2004 at 02:55 UTC by tk » (Observer)

There seem to be 2 points in your article:

  1. Open source developers don't know how to fix the bugs in their own programs.
  2. When other developers try to contribute code, their code gets rejected.

Now, even though I won't call myself a seasoned developer, I actually know enough to answer these charges...

  1. The good old magic word: ask. If your build gives problems, send a copy of your PC's setup and the error messages to the developers, and see what they say.
  2. A few things here:
    1. Even if your patches are rejected by the `official' maintainers, what prevents you from uploading them onto your own web page nevertheless? Nothing. In fact, this is exactly what I'm doing with several patches I've made.
    2. Do we really need a rigid, formal procedure to change the `official' maintainer for a project? Definitely not! The original maintainer of the FreeDOS kernel was rejecting patches which conflicted with his pet proprietary project, so developers simply submitted their patches to another guy. And before anyone knew it, the second guy had become the maintainer. Voilà -- maintainer change without a formal framework. So if you want to become `king' of a project based on 100,000 lines of code, you can, but the onus is on you to convince the rest of the world that your 100,000 lines of code are relevant to them. Have you succeeded in doing that?

Blurgh, posted 21 Jul 2004 at 11:28 UTC by scrottie » (Journeyer)

Rant is right - you started on one topic and quickly switched to another. Forgiving that... I'm going to try to connect them myself.

First, developers burn out - especially those doing it for the right reasons. Those doing it for the wrong reasons (spite) can go at it forever.

Even the developers going at it for the right reasons may not have the same agenda you do. It's hard to fix bugs all the time and not code new things ever. While users really want stability out of a mature product like Mozilla, Mozilla developers want to make it pretty, play with cool features (and then retract them in future releases in the name of making it look pretty), and so on.

Some developers are brutally honest about why they're in it. Others lie - to themselves perhaps even. It's the ones that lie that cause problems - those tend to be the spiteful ones, who are convinced that they're saving the world when they're really just fragmenting things or playing politics in the pursuit of glory.

Developers going at it for the right reason are scratching their own itch (which might be any of making it look pretty, go fast, have cool features, or be stable). Or they're scratching their company's itch. Or they're just digging the scene, enjoying the chance to make interesting friends, learn from the project, and participate in something. (These people will concoct "itches" so that they can scratch them - they're mildly dangerous if not kept in check - I'm one such person.)

Microsoft has sworn that they're making security the top priority, but they're also promising vast numbers of new features in Longhorn. Which is it? And what's with this massive stock buy-back? At Ford, Quality is Job One! Or is it shareholder value? Because the shareholder packets say that shareholder value is the top priority. But if you work there, they seem to be more concerned about throughput than quality - Saturn was an experiment in giving employees some small say in quality, it wasn't a Ford experiment, and most of the parts are from Japan and Mexico.

It's said that "no man may serve two masters". A project can't go after lots of whiz-bang new features, constantly improving looks, speed, and stability - and have one not suffer for the other. Even speed and stability is an obvious trade-off. I'm of the opinion that applications software shouldn't be written in C (but that was a previous article).

So, even though developers might be misleading you or themselves about the goals of the project, don't be misled. Mozilla is antique - it should be rock solid. It still crashes for me. It's said that it takes 10 years for a large program to mature - in numerous cases, I've found this to be true. Mozilla has had its 10 years (if you consider it to have started as Mosaic, which it did). Sometimes I fire up Mosaic (or mMosaic) and fantasize about how nice it would be if people had spent time making Mosaic stable - a sort of retrograde thing. It works on most sites - and lots of browsers don't do JavaScript. Mosaic could be rock solid. I could use it to leave open large reference volumes that I like to consult periodically. As it is, though, tens of thousands of people are hooked on freshmeat.net and sourceforge.net like crack. Between eye-candy, new features, and faster versions of things (tightvnc, lame, woo!), it's like being a kid in a candy store.

Linux is on a pretty strong growth binge, with lots of new features and speed enhancements. This code-burn damages stability and security, but for most people this is exactly what they want. NetBSD is more stable (sorry!), but code changes are glacial by comparison - and not just because it's a smaller project. Things are tested heavily on a longer release cycle with far more consideration for breaking things. The developers want stability, so they heckle any changes that would endanger this, and they spend their own time and energies fixing crashers rather than hacking on new features. FreeBSD is somewhere in the middle - something of a sweet spot for a lot of people.

Getting involved in a project won't make a noticeable dent in the direction of the project - if it favors new features, you won't be able to do much to improve stability. If you fix every crasher that comes along, the developers who run the show will feel that much more slack to create more crashers as they create their whiz-bang features.

If you hadn't guessed, I myself run NetBSD - but that's not all. I also run thttpd, djbdns, and lots of stuff like that. I take a hit on features, and it's painful, sure, but I can't stand being helpless - I tried to deal with Microsoft too and failed. Someone described Microsoft's products as behaving like a moody 5-year-old with a gun. I've not only traded features for stability, but also for simplicity - I run software that I'm far more likely to be able to fix myself. In the case of thttpd, I've had to do this (no reflection on thttpd - someone somewhere will always want to change any piece of software).

For browsing, I run w3m and links (in graphics mode) a lot - links will run one window per process without complaining, so (infrequent) coredumps won't take down all of my windows. w3m is light. And I resent how much time I've spent looking for a "perfect" browser and how far down on the list stability is for all of them, and how I have to stop and think what browser I should open each link in. But I'm grateful that open source gives me a choice - none of this "the browser is part of the OS" garbage. (The media player is part of the OS; the Windows system is part of the kernel; digital rights management is part of the kernel; the window manager is part of the windowing system - kill me now).

One of my little side projects is TinyWiki. I wrote it for my own use - I was running something 3,000 lines long written in a wretched idiom for Perl 4 (making each and every variable global was only the start). I decided that something is seriously wrong if a wiki can't be done in 100 lines of Perl, so that's exactly what I did (actually, I have a version of it in 26 lines, too). A pretty good number of people run it - far more than I expected for something so small and crude - and I've found that they all run it for the same reason: it's easily hacked on. No wiki will do everything for everyone, and often custom features are more important than any core feature, so sometimes a wiki should be tiny. (I need to simplify it a lot, actually - it's incomprehensible right now, but that was kind of a diversion for me.)
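
(For the curious, the core of the idea really is small. Here's a rough Python sketch of the same shape of thing - to be clear, this is not TinyWiki, which is Perl, and the one-file-per-page layout and WikiWord rule below are just my own illustrative assumptions: a page directory, WikiWord links, an edit box, and nothing else.)

    # Sketch of a minimal wiki: one text file per page, WikiWord linking,
    # a single edit box. Not TinyWiki - just an illustration of how little
    # a wiki core actually needs.
    import os
    import re
    from html import escape
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    PAGE_DIR = "pages"                                  # one text file per page
    WIKIWORD = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

    def load(page):
        path = os.path.join(PAGE_DIR, page)
        return open(path).read() if os.path.exists(path) else ""

    def save(page, text):
        os.makedirs(PAGE_DIR, exist_ok=True)
        with open(os.path.join(PAGE_DIR, page), "w") as f:
            f.write(text)

    def render(page):
        text = escape(load(page))
        body = WIKIWORD.sub(r'<a href="/\1">\1</a>', text)   # link WikiWords
        return ("<h1>%s</h1><pre>%s</pre>"
                "<form method='post'><textarea name='text' rows='15' cols='60'>"
                "%s</textarea><br><input type='submit' value='Save'></form>"
                % (page, body, text))

    class Wiki(BaseHTTPRequestHandler):
        def page(self):
            name = self.path.lstrip("/") or "FrontPage"
            return name if WIKIWORD.fullmatch(name) else "FrontPage"

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(render(self.page()).encode())

        def do_POST(self):                              # save the edited page
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode())
            save(self.page(), fields.get("text", [""])[0])
            self.send_response(303)                     # redirect back to GET
            self.send_header("Location", self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), Wiki).serve_forever()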

So, besides taking an honest look at open source projects' true goals, you need to confront yourself about yours. (This won't stop you from getting frustrated - I still do - but this might help a little) ;).

So, to answer your questions directly, no, I don't think getting involved in hacking Mozilla will help you much and as you said, it would be a huge investment of time. Playing janitor isn't very sustainable anyway - unless that's what you enjoy doing.

Re: open source licenses, again, you have options, but it's annoying to have to think about. A lot of folks insist on GPL'd software so they don't have to worry that the main fork might run off as something restrictive in a really annoying way. It's hard to kill a project (accidentally or willfully) - usually they fork into a GPL version in response. Things are as they should be. See above about developers who are in it for the wrong reasons. The copyright system got out of whack and the GPL is a natural response - too much (hard) work done by programmers was buried every year because it was undermaintained, underdebugged, and too closely tied to one company that would too often decide not to do it anymore (AmigaOS and OS/2 come to mind). Software isn't like a novel - you can't run the same version year after year - people find exploits, new data exchange formats must be supported, and you get awfully tired of looking at a DOS box after a while - and you want to write software that interacts with it. Had copyright for software opted for 4 years for each version of a program, after which the source code had to be released to the public trust, Free Software as a movement (in my estimate) wouldn't have happened.

As Free Software has happened, companies are waking up to the foolishness of sinking millions into code and then killing it 9 out of 10 times. We're seeing a lot more "open" groups - code and standards developed by consortiums of interested parties who all share, with a charter formed. OpenGL is one such, with several members including Microsoft (claiming they own vertex shading, argh), SGI, and others on the board, and the result is cheap graphics cards that haul but are very well standardized, and a strong standard for game programmers to program for. SVG is an emerging standard on wireless platforms, I've read, and J2ME is promising for mobile applications even if Java has spent as much time as Perl niche-hopping - and without a community process, access to source code, and strong specifications, IBM, Nokia, and so on wouldn't have come on board.

The copyright system was encouraging destructive, inefficient behavior, and companies are overcoming this (to some small degree at least) with standards, openness, and charters. Whether or not you go for Free Software, looking for open standards, multiple players, multiple sources, and consortiums is a Good Thing. It's just kind of a cool, lucky thing that the GPL builds in most of the best attributes of an industry consortium. And of course, a single company by itself behaves like a developer with the wrong motives.

(Pretty far afield: I submitted a bug report a little while back that got a year-old build/link bug closed - I found the old bug, attached more information demonstrating that it's still happening, and listed variables that were the same and variables that differed - same version of gcc, different OS, multiple and numerous versions of Mozilla. Sometimes a good bug report is all that is needed.)

To tie up an earlier loose end: I was involved in one project, years ago, that was large for its time. There was a lot of demand on the developers - lots of yelling, kicking, screaming, emailing, posting - just bloody. People complained about everything that was done. They wanted all kinds of things. They thought they were being singled out by changes and that it was a conspiracy against them (seriously!). For the most part, people are a lot better off today - I think Microsoft made everyone grateful for Free Software, and made them stop to think more about why it is and what it is. But these developers on this project only heard the screams after a while - it got harder and harder to remember that the screams were just the facade of a much larger user base, and were the natural result of people enjoying the project, using it, and caring about it.

Sorry to type your ear off - whether or not I have anything useful to say, these are certainly things I've thought about. You could generalize some of this - a meeting between industry and hobby is always good (industry needs to serve users better, mostly by not throwing away valuable code, and hobby needs resources and legitimacy). You can't change an open source project that already has direction. Hackability (and fixability) is a feature just like any other feature, and you can't have everything - speed, stability, fixability, cool graphics.

Regards,
-scott

thanks, posted 21 Jul 2004 at 14:10 UTC by lkcl » (Master)

hi scott,

gosh, thanks. hey, i think you should have written this as an article rather than me :)

tk, hello again :)

1. Open source developers don't know how to fix the bugs in their own programs.

i do not believe that i am the sort of person who can claim sufficient knowledge on all individual open source developers in order to be able to make such a statement.

on a large project, the code can get "out of hand" and beyond the ability of its developers to continue to generate stable releases on a sensible release cycle, and especially on large _popular_ projects, bug-reports are so numerous as to be almost useless.

debian bug lists for xfree86 run to 120 important, 198 normal, 67 minor and 21 wishlist bugs; mozilla runs to 61 important, 278 normal and 100 wishlist items.

i feel really sorry for the people who have to vet and forward these bugs to the developers. bear in mind that mozilla has its OWN bug-tracking system as well, excluding the debian one!

2. When other developers try to contribute code, their code gets rejected.

sometimes it does.

sometimes it sits there.

sometimes it's used - as one might expect.

sometimes there are license and copyright issues: i know of developers who will not accept code because they would "lose control of the project" as it would no longer be entirely their copyright.

sometimes there are conditions.

Cygnus used to require that you sign an indemnity document before any code patches were accepted: they were concerned about liability [they supplied open source code - other people's code - to their customers, who could sue them if it didn't work]. even for two lines of code.

sometimes the contribution is genuinely poor quality rubbish (introducing massive security holes, that sort of thing): sometimes the developers cannot be bothered to contact the contributor, sometimes, as in the case of the linux kernel mailing list they don't have TIME to contact the contributor except to say "no", when in fact "no" is a short-hand for "been there, discussed that, concluded it wasn't a good idea already".

what drives _me_ nuts on large and critical (popular) projects is when developers are beginning to show signs of not coping with the task: they start making decisions that lock other people out, be it over licensing, be it over design issues, or copyright.

has anyone tried to contribute to libqt, for example?

yes it's GPL (thank goodness), but because it's dual-licensed, do trolltech accept patches and do they add your copyright to the authors list, and do they then send you a percentage of their license fees based on your contribution?

and KDE _depends_ on this code.

Re: thanks, posted 21 Jul 2004 at 15:47 UTC by tk » (Observer)

lkcl, but you're not answering a single point in the later part of my reply. You're just ignoring it and repeating your previous argument.

To make it clearer: The maintainer may reject your patch... but so what? A patch doesn't have to be accepted into the `official' code base in order to be useful.

yep., posted 21 Jul 2004 at 15:58 UTC by lkcl » (Master)

yep, tk, that's right, because i'm fed up of you turning every post i make into a pissing contest. sorry.

pissing contest?, posted 21 Jul 2004 at 16:20 UTC by yeupou » (Master)

tk's post was clear and addressed some of the points you raised. In my opinion, it has nothing to do with a pissing contest.

You apparently refuse to argue with someone that addressed the issues you pointed out. A pity. You're the one focused on the identity of the poster, ready for a pissing contest, not tk.

Indeed, I think tk's points are valid. Why would I post a message about it if you ignore tk's? Are you expecting only posts from people who share your view?

You do not need to reply to this post; I do not wish to start endless talks about how you should communicate -- you're an adult, aren't you? I only felt it necessary that someone other than tk remind you that you have no legitimate reason to ignore tk's posts and to focus on tk as a person instead of on what he wrote.

leave it, please., posted 21 Jul 2004 at 16:38 UTC by lkcl » (Master)

really. it's not worth it. after almost two years of endeavouring to persuade tk to be less defamatory, less willing to take an extreme negative interpretation of what i post, such that i always end up having to refute what he says, he has conditioned me to read the first couple of lines of what he posts, and then skip the rest.

if you believe that he has a valid point to make, please feel free to discuss it and expand on it.

"Open Source"?, posted 22 Jul 2004 at 02:39 UTC by ncm » (Master)

What is this Open Source you speak of? My Debian system is full (well, getting there) of Free Software.

ah ha, free software., posted 22 Jul 2004 at 08:17 UTC by lkcl » (Master)

it's this funny stuff invented by richard stallman.

Re: "Open Source", posted 22 Jul 2004 at 08:30 UTC by tk » (Observer)

Free software is...

  1. The freedom to associate Linux with (anarcho-)socialism.
  2. The freedom to claim that "free software" is clearer than "open source".
  3. The freedom of RMS, and no one else, to change his interpretation of freedoms as he sees fit.
  4. The freedom to ask people to abandon proprietary software in favour of inane, broken clones of the same.

Open source is...

  1. The freedom to associate Linux with anarcho-capitalism.
  2. The freedom to claim that "open source" is clearer than "free software".
  3. The freedom of ESR, and no one else, to claim to speak for "our tribe".
  4. The freedom to lambast RMS for talking about abstract ideals, then turn around and extol the imaginary virtues of anti-gun control.

Now, now..., posted 22 Jul 2004 at 08:39 UTC by scrottie » (Journeyer)

Re: "When Open Source doesn't open and source doesn't matter", it isn't hard to imagine a case where having the source doesn't help a programmer (and it's hard to imagine a case where it does help a private end-user), and there are plenty of examples where Open Source has left folks high and dry (where Open Source isn't full-on Free Software).



Re: code bloat, one of the primary new features of Perl 6 isn't for the users at all but for the developers - a brand new core. The learning curve to get involved in hacking the overgrown Perl 5 core is so high that developers with the dedication and talent to do it appear years apart - slower than the old ones retire. There is a lot of reason not to throw out so much code and start over, but it's good to recognize that the ebb and flow of developers is a critical meter of a project's health. It was certainly getting to the point that developers didn't have the knowledge to fix bugs - any bugfix having to do with the regex engine is just tagged "wishlist" in Perl 5.



Re: open source doesn't matter, not to harp on Perl, but it's what I know right now (politically and otherwise): Perl 6 aims to "reinvent the language and reinvent the community". It's fascinating that this would be mentioned, but I'm seeing a lot more effort put into reinventing the language. For a project that's been around a long time (which also describes the old project I was on about before), a status quo can be a bad thing. (My old project worked overtime bringing developers up to speed and into the fold and it paid off - if I were to do another large project, I'd use this again in a heartbeat.) It's hard to break into hacking on Perl 5's core, but it's also hard to break into the culture - so many people know so much and have known so much for so long that it's hard for a novice to get any initial trust metric points. It's almost impossible to climb the ladder. Over and over again, I've watched people try and fail.

Making Perl 6 successful depends, in my view, just as much on reinventing the community as it does on the language, as without the developer/user community, the language is doomed never to be finished (though the specifications might have that effect regardless). I don't know what needs to be done. That's another article, I think. I could speculate. Would that be so wrong? People feel like they can contribute to Perl, but they don't feel like they have any (however tiny) ownership or representation. This is a side-effect of the "benevolent dictator" model, one that Linux suffers from to a degree, but patched versions of the Linux kernel (grsecurity, ARM, MIPS ports, etc.) are common, and a lot of documentation has been written on the inner workings (some of it published as proper books). Anyone can patch the Linux kernel and distribute their version - someone doing that to Perl would be in an awkward place. Already, people who distribute patches to Perl (that could in theory be integrated into the core) are shunned and held in a certain kind of contempt - they're threatening to fragment the core, introducing incompatibilities, and all that stuff.

Right now there is a lot of flak over the Perl 6 language design - it changes a lot of things in the language - and the flak is to be expected. However, the developers just don't want to hear about it - period. They're tired of hearing people say things like "call it something other than Perl!". If people are ever to feel that Perl 6 is the one true Perl 6, they need to be able to say these things and feel as though they're being heard - otherwise how could they possibly be represented in the language? Current attitudes vary between ramming the language down people's throats, telling them to essentially love it or leave it, or most commonly, telling them that they can define new operators and write macros to alter the language to their liking if they object to some behavior or change. Technically, these are satisfactory arguments, but socially, they are lacking.

I've used early Linux 1.x's. Slackware came on a pile of probably 20 or 30 3.5" floppy discs. It was wretched. It was buggy, incomplete, amateurish, cobbled together, inconsistent in every way - the worst kind of a mutt. 386BSD came out about the same time, and it was a complete, consistent, mature operating system that was expertly crafted and refined over years. Linux distributors took patches - the Jolitzes, who were working on BSD, didn't. People felt like Linux was theirs. People collected patches for 386BSD (and this is how NetBSD was formed - a fork that included people's submitted patches), but the damage was already done, and this made BSD feel like a shaky prospect - like a product oblivious to developers and bugs being rammed down your throat - something you could love or leave. (Yes, Linux is a lot better nowadays - this is a historical perspective - and Linux has basically always had better hardware support on x86.) This should support your argument - in order for a project to be useful in the way that we hope Free Software to be, it has to make effective use of free labor, and in order to do that it must appeal to people who would work on it, and it can do this by making people feel as if they have some say in things (whether or not they actually do).



Just because there are no easy, simple solutions that you yourself can implement on other people's projects doesn't mean that there is no problem. Hand-waving won't fix it. Saying "oh, it's easy" seldom dispels frustrations. Like esr's writ on CUPS - it was frustration that was still valid even though he could file bug reports and send patches; he was complaining about the state of usability in general, not just for that one project, and he obviously can't fix everything. Sometimes just pointing out the problem is useful - especially when it reaches a roar and drowns out the voices of the hand-wavers ;)



I do get two replies, right? >=)



-scott

oh go on then., posted 22 Jul 2004 at 09:27 UTC by lkcl » (Master)

scott, you get at least one: thank you for the additional example and for the clarification at the beginning of the kinds of points i am endeavouring to make. l.

free software is , posted 22 Jul 2004 at 14:11 UTC by yeupou » (Master)

free software provides: the freedom to run the program, for any purpose; the freedom to study how the program works; the freedom to redistribute copies; the freedom to improve the program, and release improvements to the public.

In some circumstances, open source provides the same.

All your talk about RMS and ESR is just in your mind. Nobody is forced to take these persons as leaders in order to support free software (whatever you call it).

Re: free software is, posted 22 Jul 2004 at 16:09 UTC by tk » (Observer)

yeupou, no. "The freedom to redistribute copies must include binary or executable forms of the program, as well as source code, for both modified and unmodified versions." Which of the 4 freedoms mention this? None of them. In other words, the `commentary' below the supposed 4 freedoms must be taken as modifying the definition of free software, not just explaining it.

"If a contract-based license restricts the user in an unusual way that copyright-based licenses cannot, and which isn't mentioned here as legitimate, we will have to think about it, and we will probably decide it is non-free." Note that this decision may not depend on whether the contract-based license literally satisfies the 4 freedoms. If the FSF `thinks' about it and `decides' it's non-free, then it's non-free, period. By the way, this is part of the free software definition too.

So yes, whether something is or isn't free software does depend on what the FSF thinks, and we are forced to treat FSF as the leader of free software (even if not the leader of the GPL).

The OSI have much more respect for the written word: they've accepted some flawed licenses as being open source based on a literal interpretation of the OSD, but I prefer that to being held hostage by RMS's latest brain movements.

understanding how freedom works in general, posted 22 Jul 2004 at 22:54 UTC by yeupou » (Master)

" Which of the 4 freedoms mention this? None of them."

None of them need to mention this. When a state gives its citizens the freedom to walk down the streets, it must take action to prevent some people from removing that right from other citizens. Providing freedom means forbidding slavery. That's obvious. That's a basic principle: any right implies laws. The right to live exists because it is forbidden to kill. The right to be safe exists because it is forbidden to harm.

"By the way, this is part of the..."

I gave a short definition. Keep focused on that. The rest is irrelevant to what I said.

"So yes, whether something is or isn't free software does depend on what the FSF thinks, and we are forced to treat FSF as the leader of free software (even if not the leader of the GPL)."

You have no moral or legal obligation to do so. I do not myself give the FSF any leadership over me. I may agree with most of what the FSF (basically RMS) says, but I never gave up my freedom of thinking and do not need any leadership in that area. In my opinion, the FSF should not seek any leadership but keep focused on providing advice. But that's off-topic with our talk right now.

"The OSI have much more respect for the written word: they've accepted some flawed licenses as being open source based on a literal interpretation of the OSD, but I prefer that to being held hostage by RMS's latest brain movements."

You are focused on a debate of the OSD against the FSF, and on the persons behind them. I personally do not give a toss about it. I'm only focused on ideas, and stand for what I think is right, because I think it's right, not because the FSF or the OSD said it's right.

(side note), posted 22 Jul 2004 at 23:00 UTC by yeupou » (Master)

Where did you read that you cannot distribute only source code????

Re: understanding how works freedom in general, posted 23 Jul 2004 at 08:06 UTC by tk » (Observer)

yeupou, while I can decide to distribute source code alone, I can't create a "free software definition"-compliant license that forces others to distribute only source code. This is so even though the 4 freedoms don't preclude such a license. Why? Well, because there's this text outside of the 4 freedoms that says that you can't force others to distribute only sources.

In other words, the 4 freedoms alone do not define (FSF's idea of) free software. Once you claim that "free software" == "the 4 freedoms", you're already disagreeing with FSF's stance on free software.

Besides I'm curious, if you don't buy an idea just because RMS says it, then by what criterion do you judge an idea? If you "think" an idea is correct, on what grounds do you "think" it is correct?

?, posted 23 Jul 2004 at 11:42 UTC by yeupou » (Master)

"Well, because there's this text outside of the 4 freedoms that says that you can't force others to distribute only sources"

Hum. I misunderstood your point (not expressed clearly, IMHO). Indeed, you cannot disallow people from distributing binary versions of free software you contribute to. Otherwise, you would not provide the freedoms mentioned.

"In other words, the 4 freedoms alone do not define"

Frankly, I'm not sure I understand what you wrote. Can you elaborate?

"If you "think" an idea is correct, on what grounds do you "think" it is correct? "

On my feeling about what is right, what is good. On my own morals, which depend only on me.

!, posted 24 Jul 2004 at 07:31 UTC by tk » (Observer)

yeupou, so your judgement is based on your "feeling", namely your emotions, not logic? That's interesting.

To make it abundantly clear what I meant: this is not the free software definition:

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

This is the free software definition:

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, [...blah blah blah...]

[...blah blah blah...] If a license includes unconscionable restrictions, we reject it, even if we did not anticipate the issue in these criteria. [...blah blah blah...] When we reach a conclusion about a new issue, we often update these criteria to make it easier to see why certain licenses do or don't qualify. [...blah blah blah...]

Dig?

do we need these extra paragraphs? why should we care?, posted 28 Jul 2004 at 11:55 UTC by yeupou » (Master)

"so your judgement is based on your "feeling", namely your emotions, not logic? That's interesting."

You are over-interpreting what I wrote. I used the word "feeling" to highlight the fact that I understand that other persons can see things differently, not that I do not use logic to make judgments. Also, some judgments are purely a matter of morals (politics is a matter of morals: what kind of society you want to live in, that's the point), so there's indeed a great deal of feeling involved.

"To make it abundantly clear what I meant: this is not the free software definition: [...] This is the free software definition:"

I'm too dumb to find any differences in the two texts you pasted here. Can you give me clues? Maybe it is only the last two paragraphs that make a difference for you -- apparently it is, if I re-read your previous posts carefully. But how is that relevant to our current discussion, if you take a look at the big picture? Why should it matter to me that some people think it appropriate to add these paragraphs to define free software? I do not think it changes anything in the end.

Forcing people to distribute only sources would be a disrespect of "the freedom to redistribute copies". Your extra clause would break the short definition I gave, no matter what the FSF USA thought of it (we know the FSF USA would agree with me on that, but it's just off-topic).

Re: do we need these extra paragraphs? why should we care?, posted 28 Jul 2004 at 18:40 UTC by tk » (Observer)

yeupou, the text below the 4 freedoms includes this tidbit:

If a license includes unconscionable restrictions, we reject it, even if we did not anticipate the issue in these criteria.

Whose conscience decides what's "unconscionable"? You? Me? No, RMS.

do we really need these extra paragraphs? why should we really care?, posted 29 Jul 2004 at 13:30 UTC by yeupou » (Master)

Tk, as I said before, who's forced to care about these extra paragraphs? How the FSF USA works and thinks is not a matter for us, is it (and if so, why)? These paragraphs define how the FSF USA proceeds to determine whether a piece of software is free software or not. In no way do they define what free software is.

The GNU project proposed a free software definition. You can agree on that definition and use that definition to determine for yourself which software is free software and which one is not. At the same time the FSF USA can use the same definition to determine which software is free and which one is not. In theory, since you share a common reference, you're likely to reach the exact same conclusion. Why would it mean that you lost your self-judgment?

Simple example: the FSF USA (and as a result, GNU) and Debian have a different understanding of how Free Software relates to documentation. Even if Debian agrees with the Free Software definition, Debian did not lose its right to determine, in its own opinion, what is free software and what should be free software.

OK, posted 29 Jul 2004 at 15:21 UTC by tk » (Observer)

yeupou, in that case can you give an alternative answer to my previous question? Whose conscience decides what license restrictions are "unconscionable"?

It's that simple. There's no need to dodge the question for 3 paragraphs.

whose conscience, posted 29 Jul 2004 at 22:43 UTC by yeupou » (Master)

tk, you're asking me to answer a question that only makes sense if we take into account two paragraphs that I consider to be off-topic. Giving you a straight answer to your question would mean accepting to consider these two paragraphs, one thing I'm not prepared to do.

I will not comment on these two paragraphs, which I consider meaningless to the debate "what is free software". Maybe you'd like to discuss how the FSF USA works. If so, say so, and then interested persons will talk about it.

To avoid losing track of the original debate, I replied to the erroneous (IMHO, obviously) statement "free software is # The freedom to associate Linux with (anarcho-)socialism. # The freedom to claim that "free software" is clearer than "open source". # The freedom of RMS, and no one else, to change his interpretation of freedoms as he sees fit. # The freedom to ask people to abandon proprietary software in favour of inane, broken clones of the same.".

I still find nothing conclusive about it. Free software is still a set of freedoms and does not at all imply agreeing with RMS, even about how one should interpret a definition he wrote.

Re: whose conscience, posted 30 Jul 2004 at 02:33 UTC by tk » (Observer)

yeupou, the "two" paragraphs you consider "off-topic" are part and parcel of FSF's free software definition. They're completely relevant.

You claim my definition is erroneous, but your claim lies precisely on ignoring the evidence in the extra paragraphs under the 4 freedoms. It's like saying, "yes, this is a smoking-gun proof that the accused committed the murder, but I consider it off-topic, so the accused is innocent." Is this the kind of "logic" that your judgements are based on?

Re Re, posted 30 Jul 2004 at 13:59 UTC by yeupou » (Master)

I will not rewrite what I already wrote. The extra paragraphs are just not part of the definition. Re-read the page you are referring to.

"Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it refers to four kinds of freedom, for the users of the software [Four Freedoms]". This is the definition. This were free software is defined. The rest is just explanation of why these freedom are needed.

To reuse your example, we would be in the case of "yes, this is a smoking-gun proof that the accused committed the murder, but the current trial is not about this murder but about whether the accused enjoys ice cream or not. So I consider it off-topic".

Since you talk about logic, I'd like to point out that "but I consider it off-topic, so the accused is innocent" is illogical at best. Not because one should consider the evidence relevant (that's another issue), but because if a piece of evidence is off-topic, it does not make the accused innocent: the evidence should only be disregarded. If the result of removing this evidence from the case is being forced to declare the accused innocent, it's only because the prosecutor failed to bring valid evidence. Lousy evidence of guilt is not proof of innocence.

w00t, posted 30 Jul 2004 at 16:30 UTC by tk » (Observer)

yeupou: Wow great, the free software definition has just become self-contradictory. Or rather, the FSF has just contradicted itself.

To see why, here's part of FSF's explanation on why APSL 1.0 was "non-free":

The APSL does not allow you to make a modified version and use it for your own private purposes, without publishing your changes.

Which of the 4 freedoms does this actually violate? Again, none of them.

Of course you can say that it violates freedom 1 since the freedom to adapt the software to your needs isn't unconditional. If that's the case, now consider what happens if I use the GPL: when I distribute my software to a neighbour, the neighbour is bound by the GPL to redistribute the software under the GPL as well, so freedom 2 again isn't unconditional, and is thus violated. In other words, by the same reasoning, the GPL is also non-free!

The only way to make sense of all this is to include the extra paragraphs under the 4 freedoms in the free software definition, which basically means that if RMS says it's free then it's free, if RMS says it's not free then it's not free. Then again, even this only way doesn't work, since the definition clearly says that the 4 freedoms are the entire free software definition... duh.

Incidentally, the reason that Debian developers were able to disagree with the FSF on whether the GFDL is free or not is this: Debian doesn't even agree with FSF's free software definition in the first place! They have an entirely different definition, namely the DFSG.

short answer, posted 30 Jul 2004 at 17:37 UTC by yeupou » (Master)

"Which of the 4 freedoms does this actually violate? Again, none of them"

(Why do you say "again"? That is not justified.)

"Of course you can say that it violates freedom 1 since the freedom to adapt the software to your needs isn't unconditional. If that's the case, now consider what happens if I use the GPL: when I distribute my software to a neighbour, the neighbour is bound by the GPL to redistribute the software under the GPL as well, so freedom 2 again isn't unconditional, and is thus violated. In other words, by the same reasoning, the GPL is also non-free!"

Freedom does not mean unconditional. Never. You're free to walk down the street only because you respect a set of rules. The GPL is not unconditional either.

The problem with the APSL is precisely that it puts a strange extra condition on the freedom to modify the software: being forced to spend time on redistribution, doing an extra activity. The GPL puts an extra condition on the freedom to distribute the software, but this extra condition is directly linked to the 4 freedoms: the GPL forces you to redistribute software as free software. It does not force you to do an extra activity, to do something that does not follow directly from the freedom we're talking about.

"Debian doesn't even agree with FSF's free software definition in the first place! They have an entirely different definition, namely the DFSG."

This "set of commitments" does not contradict the GNU's free software definition.

"The reason that Debian developers were able to disagree with the FSF on whether the GFDL is free or not is this: Debian doesn't even agree with FSF's free software definition in the first place!"

You apparently don't know the story. The disagreement is not about the free software definition but about the nature of documentation. For most Debian Developers documentation is software; for GNU it is not. One uses "software" in the broad, original meaning, the other as a synonym for "program". The GFDL does not seek to be Free Software but Free Documentation... For instance, most pages of www.gnu.org are not free software. GNU people don't care: to them it is not even software (software meaning program), so these freedoms do not appear to be necessary - nobody needs to be able to modify a page where XYZ gives his personal feelings on history. Debian people would say it's software, as it is not hardware, but it's not free, and no such page could be distributed by Debian, because the right to modify it is not given.

But in the end none of this comes anywhere near to proving that one should take the FSF USA as master in order to appreciate free software.

What a load of silly apologetic, posted 30 Jul 2004 at 19:02 UTC by tk » (Observer)

In what way is "publishing your changes" a "strange extra condition", yeupou? Publishing your changes gives other people the freedom to use and benefit from your changes, something which is directly pertinent to the 4 freedoms. Your labelling of the APSL condition as "strange" and the GPL condition as "non-strange" is purely arbitrary.

You keep claiming that your brain is independent of FSF; so why on earth are you trying so hard to defend whatever the FSF writes, and with such twisted logic? If your brain isn't 0wn3d by RMS, then why do you cringe at any suggestion that RMS can be wrong?

(Then again, there are many ideological cranks who also claim to exhibit `independent thinking'. And as an imaginary person once said, "You can learn to think for yourself, but only Bob can show you how!")

well, posted 30 Jul 2004 at 22:03 UTC by yeupou » (Master)

"In what way is "publishing your changes" a "strange extra condition", yeupou?"

In many ways. For instance, I can make changes to source code without having web access. Please do not force me to list the two billion real-life cases I can think of.

"Your labelling of the APSL condition as "strange" and the GPL condition as "non-strange" is purely arbitrary. "

Not at all. My judgment would maybe be slightly different if the APSL condition were something like "if you publish modified code on the web, you must publish it in a publicly accessible area". But we're in the case of "if you modify, publish".

"You keep claiming that your brain is independent of FSF; so why on earth are you trying so hard to defend whatever the FSF writes"

How can you say that and at the same time accuse me of taking into account only a part of what the FSF USA says?

Well well, posted 31 Jul 2004 at 03:51 UTC by tk » (Observer)

APSL 1.x never says you must publish your source code on the web and nowhere else. See below (emphasis mine):

2.2(c) You must make Source Code of all Your Deployed Modifications publicly available under the terms of this License, including the license grants set forth in Section 3 below, for as long as you Deploy the Covered Code or twelve (12) months from the date of initial Deployment, whichever is longer. You should preferably distribute the Source Code of Your Deployed Modifications electronically (e.g. download from a web site); ...

Clearly, you think the APSL is "non-free" and the GPL is "free" simply because that's what the FSF says. And when the FSF says that their free software definition == the 4 freedoms and nothing else, you believe it too, because that's what they say. You ignore part of what they write, because you want to pretend to be an independent thinker -- not because the extra stuff is wrong. The FSF is never wrong.

.. bla bla, posted 31 Jul 2004 at 11:06 UTC by yeupou » (Master)

What the hell "publicly available" means to you?

"Clearly, you think" bla bla, it piss me off to talk with people so sure of the state of mind of the others.

Re: .. bla bla, posted 1 Aug 2004 at 15:36 UTC by tk » (Observer)

Making source code "publicly available" can mean one of many things:

  • Put it on the web.
  • Copy it on lots of diskettes/CDs/memory sticks/etc. and give them away.
  • Print lots of copies on paper so that others can OCR it.
  • Just write down the code whenever someone asks for it.
  • ......

Other people can know the states of mind of others better than the subjects themselves, and they can even prove it. As I said, cultists and cranks also claim to be `thinking independently' and `individualistic', even though their `individualistic' thoughts are nothing short of a parroting of what others say.

"freedom", the GPL, and copyright law, posted 2 Aug 2004 at 19:11 UTC by lkcl » (Master)

"Of course you can say that it violates freedom 1 since the freedom to adapt the software to your needs isn't unconditional. If that's the case, now consider what happens if I use the GPL: when I distribute my software to a neighbour, the neighbour is bound by the GPL to redistribute the software under the GPL as well, so freedom 2 again isn't unconditional, and is thus violated. In other words, by the same reasoning, the GPL is also non-free!"

Freedom does not mean unconditional. Never. You're free to walk down the street only because you respect a set of rules. The GPL is not unconditional either.

i do not believe the GPL to be lawful, either. more specifically, if someone releases code under the GPL, and a company - or another open source project - wishes to "interoperate" with that project, and is prevented from doing so by the license, then they are entitled under EU law to IGNORE the copyright for "interoperability" purposes and also, bizarrely, to fix bugs.

the LGPL is infinitely better in this respect: namely that it may be used for any purpose. consequently, any company or other open source project may split the LGPL'd code down into small sections, release the modifications made at the section "interface" points, and interoperate with the LGPL'd code _that_ way without having any need to resort to legal but objectionable methods.

Re: "freedom", the GPL, and copyright law, posted 3 Aug 2004 at 06:31 UTC by tk » (Observer)

lkcl, I never said the GPL is unlawful, I only said that the free software definition (which is different from the GPL) is self-contradictory (which is also different from being unlawful). The GPL is a fine document.

How can the GPL even prevent interoperability? The source code of a GPL project is available for all to see, and if you want to fix any bugs in a GPL project, then as I said, you just fix them and put them up on a web page. (And, optionally, submit the patches to the `official' maintainer.) Arguably it's the proprietary vendors trying to use GPL projects who are hindering interoperability, by keeping their own source code under wraps.

One frustration too many:, posted 23 Aug 2004 at 18:31 UTC by DeepNorth » (Journeyer)

Not sure how/why this degenerated into this strange squabble.

When I saw the opening line of the article, it reminded me of my main gripe with 'Open Source Software' or 'Free Software':

It is simply too difficult to make quick fixes and changes to this stuff.

I have known a few programmers who just gave up trying to do a bug fix or get involved because they could not build the project and the source was so convoluted, interdependent and confused with 'special cases' (hacks right in an inappropriate source file to compile under a particular architecture/OS/machine).

What is really needed is a BIG community consensus to agree on standards for underlying function, keep syntax and semantics compatible to allow stability and a really, really BIG effort to simplify core APIs to a minimum set.

I guess the above sounds a little idealistic, but I think it is sorely needed. The current state of affairs leaves control in too few hands. The major projects have good people, so I am not talking about politics here. I am talking about workload. Since so few people have a real grip on a given package and its dependencies, the number of people actually available to do fixes is radically reduced.

The article starts with 'One frustration too many'. As a user of the various systems, I have found it very frustrating that all these separate moving targets make it near impossible for an entire installation to work without problems. I won't go into the logic of incommensurate interdependencies, but anyone who has much experience with this should know what I mean...

If I had the time, I would start taking major packages one at a time and reduce their dependencies (for core function) to an absolute minimum.

Simple example of a frustration with a system that is actually pretty good:

I use CYGWIN. Every time I attempt to do something major under this system I have to go away for lunch while I upgrade the system again.

The problem is similar to the one I have with Java. Since I only occasionally work in Java, I have to upgrade to the latest version every time I want to do something. Every upgrade breaks my old code.

When I switch environments, all my vanilla ANSI C code compiles without complaint. This includes changes to OS/compiler/hardware, whatever. This is a good thing.

There is a disconnect between package developers and end-users further up the chain. If it takes six months to update your product when dependencies change, and the underlying dependencies change every three months, you are basically stuck. When you depend upon numerous packages and they break compatibility annually, you have a real problem, because that means something is breaking all the time. This affects the entire community and its products.

Socially, I think our community is inclined to ignore or downplay problems with the end product as it sits on the desktop and (less so) the server.

Given what I have seen of the politics of this community (even this thread for goodness sake), I sometimes despair that we can all get together to agree on co-ordinated core functions, APIs and release schedules. However, I think that for the most part the people in this community have good hearts. They would like to do the right thing. If we could somehow manage to start cleaning up and rationalizing all these packages, I think that things would stabilize rapidly.

Lest people think I am bashing on the Free Software/Open Source community, let me just say that the entire industry from silicon on up has been appalling with respect to agreeing on good standards and maintaining compatibility with them.

I digress here, but part of the problem with agreeing on standards has been a crazy lack of foresight with respect to bandwidth, storage and processing power. We keep going through socket changes, comm API changes, etc, etc because the standards never properly recognize and/or respect the future. Just think of all the broken systems because of Y2K, 8->16->32->64 bit changes, socket changes, chipset changes, bus architecture changes, RAM architecture changes, power supply changes, switching network changes, communications convergence, etc, etc. Surely some of this agony could have been avoided if people had simply respected Moore's law and done the math.

Another big part of the problem, of course, is that many of the players (large companies) keep attempting to hijack the commons by setting proprietary standards. Because some are spending time hiding things from the rest of the group (think RAMBUS and JEDEC) and/or attempting to force a proprietary architecture which attracts license fees (think IBM and Micro Channel), the standards process is perverted to everyone's detriment.

Don't get me started on all the time wasted everywhere and the technical/political/financial implications of Software Patents...

Here is a single mnemonic which I think sums up my advice:

Keep your eye on the ball.

Re: One frustration too many:, posted 23 Aug 2004 at 19:07 UTC by tk » (Observer)

DeepNorth: Sounds nice in the abstract, but even if you throw out politics, I don't see any way at all to create an API that's actually infinitely stable, infinitely extensible, and infinitely clean. Because, well, people are fallible. The Java folks tried their hand at the Ultimate Stable Interface, and look what happened.

The best option at the moment, I think, is for projects with lots of dependencies to include the dependencies' source code right in their tarballs -- some projects are already doing that.

Angels and Pins, posted 24 Aug 2004 at 15:20 UTC by DeepNorth » (Journeyer)

The internet runs on stuff like sendmail and has for ages. That's pretty stable. My base64 code at sourceforge was built to a spec many years old and it does something that is quite predictable and usable even now. People actually use this code and it compiles quite comfortably in vanilla C. DNS may not be a dream, but it does the job. You can depend on its function and interface.
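
(To make the stable-spec point concrete with a toy demonstration - using Python's standard library here purely as an illustration, not the C code mentioned above: base64 as pinned down in the RFCs gives the same answers under any conforming implementation, decades on.)

    # RFC 4648, section 10 test vectors: any conforming base64 implementation,
    # in any language, in any decade, must reproduce these exactly.
    import base64

    vectors = {b"": b"", b"f": b"Zg==", b"fo": b"Zm8=", b"foo": b"Zm9v",
               b"foob": b"Zm9vYg==", b"fooba": b"Zm9vYmE=", b"foobar": b"Zm9vYmFy"}

    for plain, encoded in vectors.items():
        assert base64.b64encode(plain) == encoded
        assert base64.b64decode(encoded) == plain

    print("round-trips exactly as specified")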

It is possible to do what I suggest and we are talking about a lot of very bright experienced people in this community. To the extent that the trust metric is a reflection of skill, there are a lot of people with a greater level of skill than I. I have more than two decades of experience with hands on programming in many languages and environments. I could certainly help with this. There must be others about that could as well.

Re: Best option to include dependencies ... I hate to agree with you on this one, but to some extent it is true and I practice this myself a lot. However, that's a kludge. Core function should not be kludgy by design. Kludges always creep in, but they should be avoided by design.

Remember, I am not talking about really esoteric stuff like some of the Java APIs. I am only talking about core function. Just the basic wheels that get badly re-invented over and over. I am talking about a global re-factoring exercise. It is happening anyway. It would be better if we could all get together and agree on basic forward looking standards right across the board.

The Internet RFC process has been a good one, I think. Also, the W3C has done the community a great service. The growing dominance of things like Apache creates de facto standards, and in some cases they are not too bad. It would be nice if community-wide we could put all this together so that going forward stuff would not keep breaking like it does...

Good programmers are extremely tenacious. When they start to gripe about the difficulty and obscurity of things, you know you have a problem.
