From ESR:

Early and frequent releases are a critical part of the Linux development model. Most developers (including me) used to believe this was bad policy for larger than trivial projects, because early versions are almost by definition buggy versions and you don't want to wear out the patience of your users.

This belief reinforced the general commitment to a cathedral-building style of development. If the overriding objective was for users to see as few bugs as possible, why then you'd only release one every six months (or less often), and work like a dog on debugging between releases. The Emacs C core was developed this way. The Lisp library, in effect, was not -- because there were active Lisp archives outside the FSF's control, where you could go to find new and development code versions independently of Emacs's release cycle [QR].

The most important of these, the Ohio State elisp archive, anticipated the spirit and many of the features of today's big Linux archives. But few of us really thought very hard about what we were doing, or about what the very existence of that archive suggested about problems in FSF's cathedral-building development model. I made one serious attempt around 1992 to get a lot of the Ohio code formally merged into the official Emacs Lisp library. I ran into political trouble and was largely unsuccessful.

But by a year later, as Linux became widely visible, it was clear that something different and much healthier was going on there. Linus's open development policy was the very opposite of cathedral-building. The Sunsite (now Metalab) and tsx-11 archives were burgeoning, multiple distributions were being floated. And all of this was driven by an unheard-of frequency of core system releases.

Linus was treating his users as co-developers in the most effective possible way:

7. Release early. Release often. And listen to your customers.

Linus's innovation wasn't so much in doing this (something like it had been Unix-world tradition for a long time), but in scaling it up to a level of intensity that matched the complexity of what he was developing. In those early times (around 1991) it wasn't unknown for him to release a new kernel more than once a day! Because he cultivated his base of co-developers and leveraged the Internet for collaboration harder than anyone else, this worked.

But how did it work? And was it something I could duplicate, or did it rely on some unique genius of Linus Torvalds?

I didn't think so. Granted, Linus is a damn fine hacker (how many of us could engineer an entire production-quality operating system kernel?). But Linux didn't represent any awesome conceptual leap forward. Linus is not (or at least, not yet) an innovative genius of design in the way that, say, Richard Stallman or James Gosling (of NeWS and Java) are. Rather, Linus seems to me to be a genius of engineering, with a sixth sense for avoiding bugs and development dead-ends and a true knack for finding the minimum-effort path from point A to point B. Indeed, the whole design of Linux breathes this quality and mirrors Linus's essentially conservative and simplifying design approach.

So, if rapid releases and leveraging the Internet medium to the hilt were not accidents but integral parts of Linus's engineering-genius insight into the minimum-effort path, what was he maximizing? What was he cranking out of the machinery?

Put that way, the question answers itself. Linus was keeping his hacker/users constantly stimulated and rewarded -- stimulated by the prospect of having an ego-satisfying piece of the action, rewarded by the sight of constant (even daily) improvement in their work.

Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

Or, less formally, ``Given enough eyeballs, all bugs are shallow.'' I dub this: ``Linus's Law''.

My original formulation was that every problem ``will be transparent to somebody''. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ``Somebody finds the problem,'' he says, ``and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge.'' But the point is that both things tend to happen rapidly.

Here, I think, is the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena -- or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.

And that's it. That's enough. If ``Linus's Law'' is false, then any system as complex as the Linux kernel, being hacked over by as many hands as the Linux kernel, should at some point have collapsed under the weight of unforeseen bad interactions and undiscovered ``deep'' bugs. If it's true, on the other hand, it is sufficient to explain Linux's relative lack of bugginess and its continuous uptimes spanning months or even years.

Maybe it shouldn't have been such a surprise, at that. Sociologists years ago discovered that the averaged opinion of a mass of equally expert (or equally ignorant) observers is quite a bit more reliable a predictor than that of a single randomly-chosen one of the observers. They called this the ``Delphi effect''. It appears that what Linus has shown is that this applies even to debugging an operating system -- that the Delphi effect can tame development complexity even at the complexity level of an OS kernel.
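
A toy simulation, not from the essay itself, makes the statistical point concrete: the average of many equally noisy, independent estimates is reliably closer to the truth than a single randomly chosen estimate. (The observer count and noise level below are arbitrary.)

    # Toy illustration of the "Delphi effect": many equally (in)expert observers,
    # averaged together, beat a single randomly chosen observer.
    import random

    TRUE_VALUE = 100.0   # the quantity being estimated
    NOISE = 20.0         # every observer is equally noisy
    OBSERVERS = 50
    TRIALS = 10000

    single_err = 0.0
    crowd_err = 0.0
    for _ in range(TRIALS):
        estimates = [random.gauss(TRUE_VALUE, NOISE) for _ in range(OBSERVERS)]
        single_err += abs(random.choice(estimates) - TRUE_VALUE)    # one random observer
        crowd_err += abs(sum(estimates) / OBSERVERS - TRUE_VALUE)   # the averaged crowd

    print("average error, one randomly chosen observer:", round(single_err / TRIALS, 1))  # about 16
    print("average error, averaged crowd:              ", round(crowd_err / TRIALS, 1))   # about 2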

One special feature of the Linux situation that clearly helps along the Delphi effect a lot is the fact that the contributors for any given project are self-selected. An early respondent pointed out that contributions are received not from a random sample, but from people who are interested enough to use the software, learn about how it works, attempt to find solutions to problems they encounter, and actually produce an apparently reasonable fix. Anyone who passes all these filters is highly likely to have something useful to contribute.

I am indebted to my friend Jeff Dutky for pointing out that Linus's Law can be rephrased as ``Debugging is parallelizable''. Jeff observes that although debugging requires debuggers to communicate with some coordinating developer, it doesn't require significant coordination between debuggers. Thus it doesn't fall prey to the same quadratic complexity and management costs that make adding developers problematic.
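
Some rough arithmetic, again not from the essay, shows why that matters. Brooks's coordination cost comes from pairwise communication among n developers, roughly n(n-1)/2 paths; debuggers who each report only to a coordinating developer add roughly n paths.

    # Coordination paths: everyone-talks-to-everyone (Brooks) versus
    # debuggers who only report to a single coordinating developer.
    def brooks_paths(n):
        return n * (n - 1) // 2   # every pair of developers must communicate

    def hub_paths(n):
        return n                  # each debugger talks only to the coordinator

    for n in (10, 100, 1000):
        print(f"{n:5d} people: {brooks_paths(n):7d} pairwise paths vs. {hub_paths(n):5d} hub paths")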

In practice, the theoretical loss of efficiency due to duplication of work by debuggers almost never seems to be an issue in the Linux world. One effect of a ``release early and often policy'' is to minimize such duplication by propagating fed-back fixes quickly [JH].

Brooks even made an off-hand observation related to Jeff's: ``The total cost of maintaining a widely used program is typically 40 percent or more of the cost of developing it. Surprisingly this cost is strongly affected by the number of users. More users find more bugs.'' (my emphasis).

More users find more bugs because adding more users adds more different ways of stressing the program. This effect is amplified when the users are co-developers. Each one approaches the task of bug characterization with a slightly different perceptual set and analytical toolkit, a different angle on the problem. The ``Delphi effect'' seems to work precisely because of this variation. In the specific context of debugging, the variation also tends to reduce duplication of effort.

So adding more beta-testers may not reduce the complexity of the current ``deepest'' bug from the developer's point of view, but it increases the probability that someone's toolkit will be matched to the problem in such a way that the bug is shallow to that person.
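
A back-of-the-envelope illustration (the 2% per-tester figure below is invented, not ESR's): if each of N independent testers has probability p of finding a given bug shallow, the chance that at least one of them does is 1 - (1 - p)^N, which climbs toward certainty surprisingly fast as N grows.

    # Probability that *someone* in a pool of N independent testers finds a
    # given bug shallow, assuming each does so with small probability p.
    def p_someone_finds_it(p, n):
        return 1 - (1 - p) ** n

    for n in (1, 10, 100, 1000):
        print(f"N = {n:4d}: P = {p_someone_finds_it(0.02, n):.3f}")
    # N =    1: P = 0.020
    # N =   10: P = 0.183
    # N =  100: P = 0.867
    # N = 1000: P = 1.000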

Linus coppers his bets, too. In case there are serious bugs, Linux kernel versions are numbered in such a way that potential users can make a choice either to run the last version designated ``stable'' or to ride the cutting edge and risk bugs in order to get new features. This tactic is not yet formally imitated by most Linux hackers, but perhaps it should be; the fact that either choice is available makes both more attractive.
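
For readers who haven't seen the convention ESR alludes to, a minimal sketch follows, assuming the 2.x-era numbering rule: even minor numbers (2.0.x, 2.2.x) marked the stable series, odd minor numbers (2.1.x, 2.3.x) the development series.

    # Sketch of the historical 2.x-era rule: even minor number = stable series,
    # odd minor number = development series.
    def kernel_series(version):
        minor = int(version.split(".")[1])   # e.g. "2.3.42" -> 3
        return "stable" if minor % 2 == 0 else "development"

    for v in ("2.0.36", "2.1.125", "2.2.14", "2.3.42"):
        print(v, "->", kernel_series(v))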

Copyright (c) 2000 by Eric S Raymond. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v2.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/). Excerpted from The cathedral and the bazaar by deus_x on Mon Feb 14 2000 at 20:05:38. The cathedral and the bazaar may be found at http://www.tuxedo.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/index.html.

In an open-source development community, the "release early, release often" philosophy has many advantages. Your users are also bug testers, co-developers, and quality assurance techs. They can not only report bugs, but also peruse the code itself for the causes of those bugs and write their own patches to recommend fixes. This takes the most time-consuming and tricky part of code development out of the hands of the dedicated originators and distributes it over a much larger group of amateurs and hobbyists, who can often quickly identify and correct problems (and actually enjoy doing so). For open-source projects, this method works very well. It has become quite common for download pages to include both a potentially buggy development release and the last known stable release of a program. People who just want to use the program can download the stable one, and people who want to be involved with the bug-testing and development can download the development version.

This model most emphatically does not work in closed-source projects.

Professionally released software, sold under copyright and licensing terms, is expected to arrive in nearly pristine working order. One does not purchase a car from the Ford dealership expecting it to break down on the way home, let alone expecting to be asked to suggest ways to fix it. The average software user doesn't even know how to spell C++, much less program in it. When things don't work on the computer, they have no recourse but to go use something else that does work. While it is expected that version 1.0 of any given program might have a minor bug or two, those bugs are not expected to be game-breakers or in any way damaging to the user's precious data. To these people, releasing buggy software doesn't look like an inclusive means of sharing the responsibilities of development; it looks like lazy, incompetent programming. Software development is difficult, and end users (quite reasonably, I think) expect those with specialized knowledge to shield them from these difficulties.

This clash of cultures is extremely visible in Apple's iTunes App Store. With the release of the iPhone 3G, Apple has opened iPhone app development to the masses, subject to a few restrictions on what the app is and is not allowed to do. The overwhelming majority of these developers are hobbyists and amateurs, and I don't think I'm out of line making the assumption that a good portion of them have open-source backgrounds.

The iPhone's distribution model is not conducive to open-source development. One does not download source code from the App Store, and when multiple versions are available they are generally limited to a full, pay-for version and a free "Lite" version with limited capabilities. The software is distributed with the clear expectation that it will work properly. When it doesn't, the backlash is immediate and strong. Version updates, even for paid apps, are easy to install and cost nothing, but the damage may already be done by the time an update becomes available.

To help iPhone users select the apps they need from the overwhelming list of choices available, Apple has a review system built into the App Store. Users can not only rate the software on a scale of one to five stars, but also write a short review about the app's merits and drawbacks. When an initial release is buggy, this can be devastating to the app's reputation. A flood of poor ratings and bug complaints can easily become permanent marks of shame on an app's review page, driving potential customers away even after these bugs have been fixed. It's an easy way for a developer to shoot himself in the foot.
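
Some illustrative arithmetic (the review counts are invented) shows how heavy that mark of shame can be: once a buggy 1.0 has collected a hundred one-star reviews, it takes three hundred five-star reviews of the fixed version just to pull the displayed average up to four stars.

    # How slowly a cumulative star rating recovers after a buggy first release.
    def average_rating(one_star, five_star):
        return (1 * one_star + 5 * five_star) / (one_star + five_star)

    ONE_STAR = 100   # hypothetical reviews left for the buggy 1.0
    for five_star in (0, 50, 100, 300):
        print(f"{five_star:3d} five-star reviews after the fix -> "
              f"average {average_rating(ONE_STAR, five_star):.2f} stars")
    # 0 -> 1.00, 50 -> 2.33, 100 -> 3.00, 300 -> 4.00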

Recognizing this problem, the iTunes review page has begun explicitly marking what version of the software a reviewer is talking about. While this can only help, the average customer as likely as not will be put off by bad reviews without paying much attention to the version number. Newer reviews stating that the bugs have been fixed and that the app works fine now will help, but grade-school threats of poor performance going on a so-called "permanent record" have never been so rooted in reality.

Does software need to be perfect on its initial release? Certainly not. Anyone who's ever used a Microsoft product is cynical enough by now to understand that a few bugs are reasonable, so long as a simple quit-and-restart of the program clears them and doesn't lose (most of) the user's data. But the core functionality should be stable, and the program should be able to run for extended periods of time without crashing.

As app development in iTunes continues to mature, I'm sure it will become understood that the average user expects a well-tested and smoothly functioning app when they download it, especially one they've paid for. Even a basic 99¢ app will be frustrating to a user who discovers he's just wasted his money (if not because of the dollar itself, then because of the prospect of more dollars wasted on buggy apps in the future). But never before has so much amateur software been made so easily available to so many ordinary users, and software developers on all platforms would do well to learn a lesson from how this unique process plays out.
