The FreeBSD Corporate Networker's Guide

Presentation to Clark Linux User Group
December 7, 2002

The text of Ted's presentation to the Clark Linux User Group is below. There is also a sound file of the presentation available.

The Future of Open Source

Hello everyone, and thank you for coming.

My name is Ted Mittelstaedt and I work for a local ISP, Internet Partners Inc.

I'm here today to spend some time reflecting on the future of Open Source software. I know most of you are probably Linux fans, so hopefully my background with BSD might provide you with a different perspective on the question. Besides, I might be able to sell some of you on it!

Today I'm going to attempt to answer the question of what the future of Open Source is, where it's going and what's going to happen with it. I'm going to try to do that by talking about where it's been and where it is now. That gives us 2 points which hopefully determine enough of a line to make some projections about where it's going. If nothing else, it will at least provide some amusement value.

But first, before I start in on that, what exactly IS Open Source?


Open Source is one of those wonderfully generic terms that can mean anything from words on a page to a political movement. In truth, there's no unified agreement on what a program copyrighted under an Open Source license really is. Fortunately, there are 3 major, generally used licenses in the Open Source family. These licenses all agree on one part of what an Open Source license means: at a given point in time, the user can see and make changes to the source code of the software they are using, for their own purposes. Where they differ is how these modifications are viewed, who owns them and whether they can be redistributed.

The three definitions are as follows:

  • BSD
  • GPL
  • Commercialized
Now keep in mind that there are variations of all of these--from Qt's restricting modifications to patchfiles to Microsoft's restricting source to people that buy a lot of software from them--but these are minor issues, and all of these variants fall into 1 of the 3 definitions.

The BSD License
The BSD license has as central to its definition that software should be free. And that means completely free, even to the extent that someone can take a piece of BSD software, incorporate it into a commercial program, and then sell the result. After all, this goes back to Evelyn Beatrice Hall's famous quote, "I disapprove of what you say, but I will defend to the death your right to say it." True freedom means exactly that--freedom to do ANYTHING YOU WANT with the software.

Now, while the BSD license does accept that modifications to BSD code may be lost from the Open Source community--for example, Microsoft modified the BSD versions of ftp and Telnet and several other TCP/IP applications for its Windows operating system--this is the price of freedom. The original code is still intact. BSD has long pointed out that if a company uses a BSD program as the basis of a commercial piece of closed-source software, the second they close the source they lose all the benefits of using Open Source, because they now have to synchronize all changes to the BSD software with their software. The effort to do this eventually becomes as great as the effort to write the software from scratch, so there's no long term benefit to "stealing" BSD code as a basis for commercial software. It is this license that is one of the critical reasons that TCP/IP has been adopted by everyone as a universal networking protocol, and it is why the Internet runs on TCP/IP today.

One critical point of the BSD license is that the copyright of the code is transferred to the University of California, Berkeley. If the author retains copyright, it's not BSD. This is often equated in the general public's mind with putting the software into the Public Domain, although legally this is not the same thing.

From a redistribution standpoint, BSD has no restrictions on redistribution, period.

The GPL License
The second license is the GPL license, and it was developed after the BSD license and in some sense can be considered the most activist license in the Open Source family. It has as its central tenet that any changes and modifications to the software MUST carry the same licensing as the original, to third parties. This was done to prevent people who were NOT the software developers from taking GPL software and making a pile of money off of it. (And if you don't agree with that, read Richard Stallman's writings on Free Software.) One critical difference from BSD is that software can be released under both GPL and author's copyright, such as MySQL. Of course, the author of the GPL strongly recommends the copyright be transferred over to the FSF, but few people do that. The ironic thing is that, despite the desire to prevent the establishment of huge software publishing houses that make piles of money off GPL software, because of Linux, today GPL software makes more money for commercial organizations than BSD software does and is part of more commercial software distributions.

I take this as very significant because it proves to me that there's a fundamental axiom in software development: software by itself is worthless as a means of generating revenue. In short, if you're going to become a programmer to make a lot of money, you're doing the wrong thing. What generates the money with software is the activities that surround it, such as the redistribution of it, the application of it, and the support of it. If you want to make money, go into one of those fields.

Now, one of the big grey areas of the GPL license is in the rights of the copyright holder to modifications made to the original program. Unlike BSD, where the copyright to the software is always immediately turned over to UCB, the copyright of any GPL software can be either turned over to the FSF or retained by the author. If it's retained by the author, then the author can, of course, release non-GPL'd versions of the software under a restrictive commercial-style license. MySQL does this, for example. The issue is, though, that if someone modifies the GPL code, then the GPL license remains silent on whether the modifications can be used by the original copyright holder unencumbered by the GPL. It is clear what applies to everyone else, but not the copyright holder.

So, from a redistribution standpoint the GPL has some restrictions on redistribution.

The Commercialized License
Commercialized Open Source is basically a slew of new marketing terms, such as Apple Public License, Microsoft Shared Source, and so on, for an old idea, that of source licenses. The use of these terms was primarily done as a reaction to the success of the GPL and BSD licenses. It basically says, "We will open the code but retain copyright and redistribution rights." Ironically, Sun, SCO, HP, Digital, AT&T and many others originally used this very successfully with UNIX, and it's UNIX-like operating systems such as FreeBSD and Linux that spawned the less restrictive licenses.

Now, probably many of you are wondering, "Why is this guy telling us something we already know?" and probably more are thinking, "Why is that idiot talking about Microsoft Shared Source and Linux in the same breath?" Well, here is the crux of the matter. I said earlier that "where they differ is how these modifications are viewed, who owns them and whether they can be redistributed."

This is one of the fundamental issues with Open Source today. It's the redistribution! Think of it this way--what major benefit does a user have to getting an Open Source program as opposed to a closed source program? It's not price. Last I saw at Fry's, a box of Windows and a box of Red Hat cost the same. It is simply this: the user can make changes to the software if they have the source. Redistributing those changes to anyone OTHER than the author does not help the user! Let's set aside all the feel-good idealism for a moment and boil it down to a pure "what benefits Number One" scenario. Simply, it's the ability to modify the program. And nobody, I mean nobody, from either an Open Source or a Closed Source camp would argue that users making changes to their software to enhance it is a Bad Thing. The controversy both WITHIN the Open Source community and BETWEEN the Open Source and Closed Source communities today can be boiled down to who gets to make the redistribution decisions about the software.

So out of this comes a simple conclusion when predicting the future of Open Source. It isn't about what's going on between FreeBSD and Linux or Linux and Windows. It is about where ALL COMPUTER SOFTWARE THAT HAS CODE AVAILABLE is headed, whether that software comes from RedHat or Microsoft.

Now I have one last comment on the term Open Source. There is an effort among some of the GPL advocates, spearheaded by Bruce Perens, to attempt to redefine Open Source as an umbrella term covering both GPL and BSD, as if there will be some kind of unification of BSD and GPL in the future. However, there has long been a lot of jealousy of BSD by many in the GPL camp, since BSD was the first Open Source UNIX, and this effort is just another reflection of that rivalry. It's an attempt to minimize BSD and the BSD philosophy. The same sort of thing goes on with the efforts to claim that the MIT and BSD licenses are the same (which is Richard Stallman's favorite thing to do), as if the MIT license were that historically significant. It is, to the extent that it covers part of X Windows, but it does not have the significance that the BSD license does. This effort is shortsighted and wrong; it is tantamount to saying that Dodge and Mercedes are the same car because DaimlerChrysler owns both companies.

OK, now that we all hopefully know WHAT software I'm going to be talking about, the next question is, where is Open Source going? To see this, next I'm going to talk about the history of OPEN SOURCE.


As you all know, most of the computer industry is in the midst (hopefully getting close to the end, though) of a serious economic contraction. The years from the early 90s to 2000 were the economic golden years of the High Tech sector. I worked squarely in this sector during this time, and as a matter of fact, I worked for several dot-coms, although at the time we didn't know that they were dot-coms. In retrospect they were, and none of those companies are alive any longer; they went the way of the rest of the dot-bombs. Any of you that worked in High Tech during the 90s will know what I'm talking about. As for the rest of you, well, there's still time to switch to pre-med.

Looking back now, 2 years after the end of the High Tech Golden Age of the 90s, gives us the great benefit of hindsight. Despite the fact that this decade spawned some of the stupidest technology ideas ever constructed, I can see now why Linux, FreeBSD and the other Open Source operating systems roared to life during this decade. And it's NOT just Microsoft's actions! In fact, I can point now at the one person that was responsible for starting this--Gary Kildall.

CP/M and the Development of "Microcomputers"
I wonder how many people here know who Gary Kildall was? Well, in short, he was the author of CP/M, which was the earliest commercial operating system used for the earliest microcomputers, S-100 systems. He almost single-handedly ignited the personal computer revolution. This was years before anyone knew that Apple was more than a fruit.

Gary developed CP/M in a project for Intel to produce a PL/M compiler that ran on the DEC (PDP-11?) and targeted Intel's 8080 microprocessor, which had been introduced in 1974. Gary wrote CP/M using PL/M to provide a development environment on the 8080 itself, but Intel was not interested in this. They bought the PL/M compiler but not CP/M. Gary took CP/M and founded Digital Research, which quickly became the foremost supplier of the CP/M operating system for personal computers (called "microcomputers" then). At the same time Digital Research was founded, the first personal microcomputer, the Altair, debuted in the magazine Popular Electronics. This was followed by many S-100 systems which dominated the personal computer industry until 1981, when IBM introduced the PC.

Now I mention all of this because Gary made what in hindsight was a fatal decision with the licensing of CP/M. He was not satisfied with the legal protection of copyright law to allow him to make money using CP/M. Instead of releasing the operating system as a commercial program with a source license included, which would have prohibited piracy but permitted the computer community to help advance CP/M, he issued CP/M as a closed source program.

Today, I cannot convey to you how much a betrayal this decision was. For starters, the Altair, and the later S-100 systems that came from it, used a standardized, open hardware architecture. People building 8080 personal computers did NOT even need to disassemble ROMs or reverse engineer the hardware of the Altair--it was open and non-proprietary. Even the bus was standardized by IEEE. Secondly, Gary himself came from the computer research community; just about everything he learned was done using code and algorithms passed around openly. Third, at the same time that the activity was starting up in the microcomputer industry, researchers at University of California, Berkeley were booting up the first copies of UNIX. And UNIX at that time was evolving from source code that AT&T was handing out under source licenses. There's no legitimate argument that can be made that CP/M needed to be closed source. It was not copy-protected, and people could and did pirate it.

The release of CP/M as closed source set the tone for every following personal computer manufacturer for the next 20 years to release operating systems as closed source. Apple followed 2 years later with the Apple II, Microsoft and IBM followed even later with PC DOS. If CP/M had been open source, those companies would have been forced to release their operating systems as open source, or they would never have been competitive. In fact, the port of CP/M to the IBM PC as CP/M-86 would have occurred almost instantly and Microsoft/IBM PC-DOS would have been stillborn.

As a result of closed source, during the 20-year stretch from the late 70s to the late 90s, advancement in personal computer operating systems was tremendously retarded. While UNIX commenced to get TCP/IP, the X Windows system, Emacs, GCC and a host of other programs, all contributed by a vibrant user community that was using Open Source, the personal computer industry got years and years of waiting for simple bugs, like the inability to address partitions larger than 32MB, to be corrected. Five years were wasted on OS/2, another closed source OS; a big fight between IBM and Microsoft stunted NT, the successor to OS/2; there were years of lackadaisical DOS releases, with himem.sys and emm386 being the most advanced things they could come up with; and finally came a series of 16-bit Windows releases that were more bug-ridden than a mattress in a $5-a-night hotel.

It wasn't until 1992 that Microsoft started feeling the pressure to get off their duffs and start working seriously on a personal computer operating system that was more than a toy. What stuck a pin in their fanny was the publication of 386BSD, which was the first unencumbered port of UNIX to the 386. Of course, it took 3 more years of development for them to spit out Windows 95, and even then, as a final spit in their eye, readers of the well-known trade magazine InfoWorld voted OS/2 the product of the year in 1995.

So really, what happened in the late-90s personal computer industry can be traced back to a fateful decision in 1975 by one person--Gary--that caused 20 years of pain and suffering by the personal computer user community under this ridiculous idea that commercial software developers who write operating systems would somehow lose money if they published their source! The reason Linux and FreeBSD became so widely used in the 90s was that the computer industry was finally moving back to normalcy--that of operating system source being available. It's like bending a branch of a tree back. The personal computer software market was so bent by Digital Research, Apple, IBM and Microsoft towards this idea of closed source microcomputer operating systems that when users finally started letting go of the branch, by switching to Open Source operating systems like FreeBSD and Linux, the branch whipped back with lightning speed. That's the major reason why Linux and FreeBSD came out of nowhere in the 90s. But there is another piece of the puzzle, because Linux and FreeBSD are more than just commercial PC operating systems with source that has been opened. It's more than just the PC market coming back into alignment with the mainframe market, because Linux and FreeBSD are different. And to understand why, I'm going to talk about the invention of UNIX and C.

UNIX and C
The invention of UNIX is probably the single most important event in the history of Open Source, and it is this event, more than any other, that started Open Source down a collision course with commercial software. C was created for UNIX, and the two have several properties that have turned out to be critical for Open Source, but the one overall property that is most critical is flexibility. If UNIX and C weren't as flexible as they are, C and, later, C++ would never have become the de facto standard languages for writing software. Because of UNIX's flexibility, it was embraced by the research community and was used everywhere--eventually. Gradually, a single, standardized software platform--C plus UNIX--emerged. This is what really set the foundation for the wide dispersal of, and collaboration on, Open Source software. Only standardization on a common computer software language allows the intense refining and debugging that is critical to turning a piece of software from a merely interesting research effort into real, production software that you can rely on for mission-critical applications.

Of course, it's important to keep in mind that in the early 1970s, shortly after the invention of C and UNIX, Open Source was far from being production quality. The research community at that time, even with student help, wasn't numerous enough to sustain widespread development on many projects at once. There were a few Open Source programs, like Sendmail, that everyone collaborated on because everyone had to USE them! But most other projects simply didn't have enough bodies to be able to keep up with even a poorly funded commercial software effort. So the idea of software, particularly commercial software, being a proprietary product--whether its source was available or not--was not seriously threatened.

The PC and Open Source
It took the invention of the personal computer to supply the last, missing ingredient for production Open Source--namely, the wide availability of computers. Of course, early PCs had no memory protection and thus were little more than terminals for accessing the REAL computers running UNIX--this despite the laudable efforts of MINIX and XENIX. But they were sufficiently cheap, and thus numerous, to commence the widespread introduction of a lot of bodies into the computer industry. Eventually, the '386 was released. This is what helped to ignite projects like FreeBSD and Linux.

The last part of the 80s and the early 90s saw the major Open Source operating systems arising--BSD and Linux. Development on these was fueled by the following 5 things:

  • The existence of the '386
  • The existence of large numbers of cheap computers using the '386 processor
  • The existence of large numbers of people who had started into the industry 5 years earlier using the predecessor computers to the '386
  • The destruction of the Computer Systems Research Group (CSRG) at Berkeley and the spin-off of the BSD codebase from that
  • The realization by a large number of researchers, students and programmers in the industry that if they ever wanted to get a decent Open Source UNIX, the time to strike was RIGHT NOW, before the entrenched commercial UNIX industry was able to bury the Open Source code that had "escaped" from the previous decade's collaboration between the computer industry and the computer research groups. If any of you remember the series of Dr. Dobb's articles by Bill Jolitz over a decade ago, detailing the port of BSD to the '386, you will recall the firestorm of interest they created.
Then one more event happened that added even more fuel to the Open Source operating system fire--the sudden rise in popularity of the Internet in the mid 90s, due to the invention of the Web browser. When the Internet exploded in 1995, it was right when the major Open Source operating systems, BSD and Linux, were getting solid enough to use as production systems. The sudden demand for servers by startup ISPs, plus the demand by personal computer users for an alternative to 20 years of closed source, stagnant commercial OS development, caused interest in these OSs, and in the server programs that ran on them, to explode.

The rest of the story was simple. With production-quality Open Source OSs standardized on UNIX, a large Open Source codebase of programs standardized on C and C++, and a huge number of people who could now collaborate on the Internet and had access to UNIX-capable workstations, it's easy to see why we have arrived at a situation where Open Source Linux server growth has outstripped commercial Windows server software growth. In short, the time was right, the OSs were there, the market was there, and the Internet was the spark that ignited the fire.

The role of the individual
And the last note on the events of the past that I'll add is this: the Open Source story is not just a story of a sequence of favorable social events. Yes, those events had to be there, it's true. But just as important are the ambition and drive of INDIVIDUALS in the industry.

One of the things I've learned as I've gone through life is this: while the individual's voice always matters from a philosophical point of view, in practice, the smaller the group, the more important a single individual is.

Look at it this way: a single person today, even someone notable like the President of the United States, has little effect on the United States government. This is simply because the US government is so vast, and has such an incredible amount of social inertia, that it tremendously resists change. However, 225 years ago, during the Constitutional Convention, a single person had an enormous effect on the US government. This was simply because there were so many fewer people involved.

The computer industry was no different. While it had people like Gary Kildall who made bad decisions that had years of repercussions, there were also people like Ken Thompson and Dennis Ritchie, the inventors of UNIX, who made great decisions. The design decisions they made 30 years ago have had an enormous effect on the computer industry of today. Similarly, the actions of Billy Gates, when he stole DOS and locked up the contract with IBM, have had an enormous effect on the industry of today, because those decisions were made back when the computer industry was small, with few people. There is a host of people that I won't take the time to mention who had the enormous vision and drive to envision a future of Open Source UNIX--people like Bill Joy, Richard Stallman, William Jolitz, Tim Berners-Lee, Linus Torvalds and many others. Those people struck when the iron was hot, they had maximum effect on the industry, and the result is that today Open Source is challenging commercial software for supremacy.


So, that was what got us to today. A good question then is, where ARE we today?

  1. The first thing I am seeing is that software companies (indeed, all companies) have put their heads down and are mushing forward on the business of making money off what they have now. In managerspeak they call this "refocusing on our core competencies." In short, the dreamers have all been shot. What I mean is that the high tech industry is in a kind of "morning after" phase after the excesses of the 90s. Yes, a lot of great things happened in Open Source then, but a lot of stupid things happened in the computer industry. For example, can anyone tell me how many different formats there are for Microsoft Word documents? Companies, both producers and customers, are tired of this, and they aren't in any mood anymore to fund a bunch of idle speculators who want to sell dog food online. Everyone is pretty much minding the store these days. It is not fertile ground for a revolution. If Open Source hadn't gotten ready for prime time in the 90s, it would be dead now.

  2. The second thing I'm seeing is that the era of Microsoft's software adventurism is over. Do you remember when Microsoft sunk literally millions upon millions of dollars into development of the Internet Explorer Web browser and the IIS server--products that brought in ZERO revenue? Do you remember that they did it simply to drive Netscape out of business? Well, that will never happen again. Why? It's not because of the anti-trust lawsuit. It's because Microsoft discovered that while they were pouring all this money into putting Netscape out of business, they weren't paying attention to their bread and butter--operating systems--and, as a result, development on Windows has stagnated ever since Windows 95. In hindsight, this has turned out to be an almost mortal blow to the company, which is usually what happens when you make business decisions based on emotion rather than profit. Because their attention was focused elsewhere, Linux and BSD were allowed to grab the spotlight, and today Windows is viewed as the "same old, same old" while all the exciting things are happening over on the Open Source side of the fence.

    Eventually, of course, Microsoft will make it through this. But they will never recapture the mindshare and the glory they had in the late 80s and early 90s, when Bill Gates's word was law in the computer industry. They will become like IBM--a rich, old-line computer company that has its fingers in some profitable pies and is mainly concerned with trying to maintain and enhance its position in the industry, rather than dictating events and policy for everyone and everything else in the computer industry.

    In a way, this is an expression of the refocusing on core competencies issue I mentioned.

  3. The third observation I have is that Microsoft is dumping the low-baller purchasers because they are afraid of another anti-trust trial. This is what WPA is all about. This is forcing a huge number of users out of the Microsoft Way, because those users have no money. Since Linux is free, it's obvious where THOSE people are all going to go.

  4. And speaking about anti-trust lawsuits, my fourth observation is that Open Source is immune from an anti-trust lawsuit filed against it, no matter how popular it gets. Now, I know that a lot of you here are probably rabid anti-government types, so I can see you pooh-poohing this observation even now. But the truth is that the DOJ anti-trust lawsuit fundamentally damaged Microsoft, far more than the anti-government group is willing to admit. It shattered the belief that Microsoft would be given special treatment by the government because they were single-handedly propping up the economy. Most damaging of all, though, is that the anti-trust lawsuit basically reminded everyone in the industry that all good things come to an end, even Microsoft. The era of Microsoft invincibility was now over.

    By contrast, as Linux and BSD are open source, it's impossible for the government to claim that a company like, say, Red Hat Software is "monopolizing" the market, because the Linux source is the same for all Linux vendors, and anyone can download it for free from Red Hat. If a particular Linux vendor, like Debian, goes out of business, then so what? Anybody, even Joe Blow in a 1-room studio apartment with no investment capital, can pick up the code and start maintaining it.

  5. So this leads us to my fifth observation: Linux and other Open Source software have captured the brainshare of the industry. I cannot emphasize enough that there is an underlying bias in the software market against companies that are perceived to be on the decline. Of course, this is emotional, but the software market is vicious. If purchasers think that you're going downhill, they simply start delaying purchases. That sets up a closed loop, where your sales go down, your revenue goes down, word gets out, fewer people purchase, your sales go down faster, etc. It's a self-fulfilling prophecy. The reverse works too--if your sales go up, then people flock to you to get into what they think is the next happening thing, which creates a closed loop that increases your sales even more. This is all part of the cycle of software growth and death which is well known in the software industry. It's why software companies overlap releases. But there is an additional component with Open Source in the software growth cycle that's going to push this bubble far, far larger than most people think--the financial component, which I'll be talking about later.

    Today, anytime anyone, be it Microsoft or any other entity, starts up a software project, their product is constantly being measured by the "open" yardstick. Indeed, this was one of the settlement items of the DOJ lawsuit against Microsoft.

    It is impossible to overestimate the biggest threat the Open Source movement poses to the commercial software companies--warm bodies. I don't know how many of you have ever worked in commercial software development firms. I have, and I can tell you this much. Simply put, the cost today to get top-quality software development is unbelievable. The majority of software developers out in the world are worthless, overpaid incompetents who churn out ream after ream of junky code. I mean, hasn't anyone ever wondered just WHY Microsoft software is so buggy and full of holes? Do you think that it's because the CEO of Microsoft WANTS to produce junky code? Of course not! It's because the entire development team at Microsoft is saturated with incompetence. About 90% of the developers there couldn't code their way out of a paper bag, and the 10% that can are swamped with fixing all the problems that the rest of them create. And there is NOTHING that Microsoft can do about it except fire the lot of them. And if they did that, it would take a decade to put together a really competent team. They would have to train the majority of people from scratch, and if they ever did, Windows would cost ten times what it costs today.

    And it is NO different at most of the other commercial software companies that produce desktop software. There are a few that produce really fantastic, powerful, incredible Windows code--companies like TradeStation and Autodesk with AutoCAD, and a small handful of others among the major gaming software manufacturers. The rest is garbage.

    Today, the top-flight programmers are working at places like IBM, Visa, Mastercard, Oracle, Sun, Cisco, Juniper, Lucent, etc. They are writing on UNIX. And when they come home, they write Open Source. If you don't believe me, just read the credits lists of most of the UNIX open source programs and the Linux and FreeBSD operating systems.

    The folks that run the major software houses that produce the really good code know this. They know that Open Source only has value as long as that dedicated group of users and developers is spending time on it. Most of these commercial software companies like IBM, Sun, and so on have chosen Linux because of the installed base, but they know that FreeBSD is out there too.

  6. Open Source is proving to have superior bug correction. With traditional commercial software, since bugs can only be fixed by the development team, the bug testers can only do part of the work: bug identification and description. The developers still have to fix the bug. This limits the size of software projects, because the larger the software gets, the more bugs it contains and the longer they take to fix. With open source, the people finding the bugs are often submitting fixes as well. Usually, the larger a software project gets, the more general it is, and thus more people get interested in it and contribute bug fixes--and operating systems are some of the most widely used and general software on earth.

    This bug handling hasn't been missed by the commercial software vendors. All of them that have any kind of commercialized open source offering, like Microsoft's Shared Source, cite easier bug correction by users as one of the driving forces behind opening up their source. This is one of the examples of Open Source grabbing the braintrust of the industry.

  7. My seventh observation is that software consumers are satisfied with FreeBSD and Linux being the yin and yang of the Open Source market. What I mean by this is that most markets stabilize into either a market of a lot of little producers and no real big ones, or a market of 2 or 3 big producers and few little ones. What kind of market a given industry will develop depends on a number of factors, but that's not important. What is, though, is that in either of these market types, users are satisfied. Take Coke and Pepsi, for example. They are different enough soft drinks that users like one or the other, so 90% of the consumers end up with one or the other, and the remaining people that just want to be independent all go buy Macs, New Beetles, and PT Cruisers. Markets with a single producer usually have very dissatisfied consumers, because people just don't like not having a choice. And Windows Home and Windows Professional are NOT giving the consumer a choice.

  8. Lastly, Open Source is very good about filtering out the worthless 10%. There's an axiom in the computer industry: 10% of the users of the software create 90% of the problems. In short, they demand 90% of the available support time, they issue 90% of the complaints, and 90% of the gunk that's layered onto the program to improve the human interface (i.e., the GUI, etc.) is put in as a result of this 10% being confused by otherwise obvious instructions.

    The root of this, of course, is that this 10% are the trolls that refuse to use their brains to learn how to use the software properly. Since Open Source is more flexible than commercial software, it is more complicated and a little harder to use. Thus, the lazy 10% is scared off and goes on to make the Windows support engineers' lives miserable instead. This helps speed development of new versions and frees up support time that can then go to help people who have serious, non-RTFM problems.


Now, the future.

So now that we know where we were, and where we are, what is going to happen in the future?

Today there are a number of significant trends that I believe are going to dictate the future of Open Source Software for the next several decades.

  1. Linux and BSD are the climax operating system products of the Open Source Movement. What I mean by this is that for the next 20 or even 30 years, there will NOT be any other Open Source operating system project that will become important outside of a laboratory. In short, if you are planning on becoming an IT manager, for the rest of your career you WILL BE dealing with Windows Server OSs, Windows Desktop OSs, Linux, and BSD. You MAY deal with other commercial OSs such as Solaris, but Windows and UNIX-alikes are a guarantee. Why? For one thing, if an alternative, unknown operating system is going to replace these OSs by then, it would have to be alive right now and growing. But there is nothing out there, other than perhaps MacOS X, and Apple has tied MacOS X's future to their proprietary hardware. And anyway, MacOS X is in the BSD camp.

    As long as FreeBSD is a strong, healthy movement, it presents an alternative to Linux, an OPTION that is available to the Linux programmers. Thus, if IBM or Sun were to attempt to play funny games with the GPL, such as testing it in a court and invalidating part of it, and as a result a significant amount of Linux became ineligible for GPL protection, the developers would simply shift over to FreeBSD, and vice-versa. It pretty much guarantees both OSs' place in the market--together they form a stable pair.

    The times now are all wrong in the computer business for revolutionary operating system attempts. For the rest of the decade, the mantra is going to be "evolution, evolution, evolution." We may see the 64-bit CPUs change this, but at the rate that Intel is going, I doubt that a 64-bit chip will be in a majority of computers by 2005. Linux and FreeBSD have "made it" and aren't going away, and neither is Windows.

    I'll make one other comment about 64-bit as well. All the chip manufacturers would love to see Intel's dominance of the PC market go away. In many quarters Intel is as disliked as Microsoft. From the chip manufacturers' points of view, the dominance of Intel turns CPUs into commodity items, and they don't make a lot of money off of them. So, companies like Sun and AMD see the rise of the 64-bit CPUs as an opening--if they can fragment the 64-bit market now, then perhaps they can get in some differing architectures, and the personal computer market will be less commodity and more specialty. The motherboard manufacturers have the same idea as well. All of them would love to see the PC hardware market split into desktop-style hardware built on 32-bit Intel-architecture chips and server-style hardware that's all specialty built on incompatible 64-bit chips and architecture.

    Now, I don't believe that these schemes will work out. I think a pretty safe bet is that we will see the Itanium become the majority chip in the personal computer by the end of the decade. As a result, I don't see that porting efforts of Open Source operating systems like Linux and FreeBSD to 64-bit will be harmed, nor do I see any opportunity for a new radical operating system to come in and take over on the newer hardware, such as when DOS took over from CP/M due to the shift from 8-bit to 16-bit processors.

  2. Most if not all Open Source projects are experiencing growth in use. What's important, though, is that it's greater than the average growth percentage of the software market. It's not just operating systems like Linux and FreeBSD. Now that Open Source operating systems have "blazed the trail," so to speak, there's an acceptance of other Open Source software, i.e., applications. More and more people are using applications like Sendmail, Hylafax, Samba, Open Office, etc. The penetration of these applications into the market will continue, as well as better and better integration of these apps into Windows networks.

    OK, so now I'm going to go out on a limb here and make some numerical predictions. The first is that sheer "growth inertia" (the bandwagon effect) is going to push Linux and BSD into what everyone will agree is at least a 50% market share of corporate servers by the end of the decade. Desktop penetration is going to be a bit different, as it's going to be highly dependent upon whether the open source community reaches consensus on pursuit of the desktop. If things stay as they are, I can't see desktop penetration going past 25% by the end of the decade; furthermore, it will be a static 25%, almost impossible to grow beyond.

  3. The snowball effect is greatly increasing the speed at which Open Source software projects are developed, and this trend will continue. Anyone that looks at release dates, not just for BSD and Linux but for many other Open Source projects, can see this--it's a simple formula. Open Source development is, in fact, much like a massively parallel processing array. If you want the array to run faster, you just add processors; if you want the code developed faster, just add bodies. Hell, code development can go on 24 hours straight, because with developers scattered all over the globe, as the time zones shift and some go to bed and put away the work for the night, others wake up and start picking it up and working on it. This puts Open Source development at an enormous advantage over the larger and more popular commercial projects, where development is centralized.

    And on the development side I'll make one more observation. I've worked at a number of software companies now, and a huge amount of commercial software is developed the way the average high schooler turns in a term paper. The developers screw off for 75% of the development time, then in the last weeks before it's due they work day and night on it. This severely impacts quality. Open Source development isn't like that, because people only work on it if they LIKE doing it. Coding is something they WANT to do, not work that you have to pay them to do. It really enhances the quality of code when the people working on it are eager to write it!

  4. How all software is supported is changing due to Open Source. Everyone is rushing to copy the "user-supported" model of Open Source support by setting up user forums, etc. Of course, commercial enterprises do it to help save support costs, whereas with Open Source, since there's generally no central commercial support channel, support questions have been forced into the public forums. What is really important, though, is that since these forums are public, they can be archived in search engines like Google and in mailing list archives. Thus, with every year that passes, these databases get larger and larger, and the greater the chance that someone's support query will already have been answered. Getting answers to support questions then becomes a process where the questioner just types a few terms into a search engine and the answer pops up.

  5. The Open Source OS distributors, like Red Hat, Lindows, and so on, are going to continue with a model of adding value to the same base of software, with some fluffy packaging, for the foreseeable future. This is an obvious observation, but it bears looking at, because it is really more of a validation of the BSD Way of keeping people in line than of general Open Source. In short, as GPL advocates like to point out, the GPL doesn't bar inclusion of GPL code in commercial software, as long as the separation between commercial and GPL is maintained at a program level. Yet folks like Red Hat are taking pains to make sure that enhancements they make to Linux that wouldn't necessarily fall under the GPL "feedback" requirement are folded back into the Linux GPL source anyway. This is being done to simplify code maintenance for Red Hat--as BSD has always said would happen on BSD projects--not for altruistic reasons.

  6. Sooner or later we are going to see a legal challenge of the GPL. So far, the FSF has been doing what it could to duck this and settle disputes out of court. They really don't want to put the GPL into a court case. But the more GPL project copyrights get assigned over to the FSF, and the more GPL code gets used commercially, the more inevitable this becomes.

  7. The entire 9-11 phenomenon has been successfully used to focus attention on the inherent security weakness of not opening your source. While government is most interested in this--they want to see secured computer networks in place, particularly in wireless LANs--business is also starting to focus on it. Another thing that's driving the emphasis on security is the rise of cybertheft. A lot of this is lax password control and basic nonsense like that, but attacks over the Internet are rising, and businesses will be more and more interested in the future as more and more of them get stung. There's even some talk that we will see some giant class action lawsuits in the future against some of the largest businesses whose terrible internal security is allowing a huge amount of identity theft. The time is coming when, even for medium-sized businesses, security audits of entire electronic systems are going to become a normal part of installing those systems, and it will be mandated that their software vendors disclose the source of all of their code. And when they are making purchasing decisions, the vendor that discloses without an NDA, like Open Source, will be at a competitive advantage.

  8. The high tech labor market oversupply is also greatly pushing Open Source penetration. Let me explain this. Back in 2000, I stated the following on page 379 of my book, The FreeBSD Corporate Networker's Guide:
    The High Tech industry continues to expand and change rapidly. As long as the current pace continues, there will be far more inexperienced, unskilled system administrators than knowledgeable and experienced ones. Inexperienced and unskilled people want simple answers to complex problems. The companies that write operating systems for money know this. If they write an OS that is rigid and limited in its problem-solving ability, they can make it simple--thus they can sell more copies.
    Of course, this was written during the hubris of a rapidly expanding high tech sector, and I was attempting to explain the allure of Windows with that statement. Two years later, today, there's been a fundamental change--the high tech sector is shrinking. I didn't address then what would happen if the high growth premise was changed, but today, I will.

    What is going on now is this. In a shrinking labor market, there is no room for the incompetent. Companies today are laying off people right and left and don't have the money to train inexperienced people when they can get them from a flooded job-seekers market with little effort. Thus, like it or not, IT departments of corporations are becoming stacked with an abundance of experienced people very rapidly. It's a weed-out process that everyone in this room understands--if you didn't understand it, you wouldn't be here trying to be better computer users by listening to speakers like me.

    The effect of this on the commercial business software market is nothing short of disastrous. Experienced IT people don't fall for the "simple solutions for complex problems" line. They don't swallow a bunch of marketing nonsense. They are annoyed by so-called "white papers" that treat the reader as though he's a child of 6 and that gloss over product deficiencies and sugarcoat the product. And from a financial business perspective, the commercial software houses like Microsoft are in further trouble, because the economics are now stacked against them. It used to be that they could argue that every time they made their product simpler and easier to use, the business didn't need expensive, experienced system admins to plug it in, so the business's labor costs were lower. However, today the experienced system admins are a dime a dozen at the employment agency, and there is no longer any benefit to the business in hiring cheap, inexperienced admins. And since the job market for admins is in a temporary holding pattern, the admins that are still working are gaining more and more experience every day and aren't acting as referees to a crowd of newbie support people hired in response to corporate growth. Instead they are concentrating on admining the network--and they now have the time to pull out years and years of bandaid solutions and start putting in place strong, stable, and reliable systems that aren't viewed as stopgaps that will go away in a year or so. The result is that one of the biggest marketing advantages of commercial software, ease of use, is becoming more and more irrelevant. People today, and over the next 5 years, are going to care more and more about functionality over form. Commercial software is going to have to compete based on quality, not hackneyed marketing, and there's a big question whether all of them will be able to do this.

  9. The ripe fruit has been picked. Basically what I mean here is that the business problems that have been EASY to solve with off-the-shelf software have been solved. What everyone is now starting to do is tackle the more ugly projects--like getting their internal inventory on the Web, getting their customer service order fulfillment on the Web, etc. And these projects are not the same; they are unique and different. This is going to demand a lot of customization, and commercial solutions like Windows that don't have a lot of flexibility are harder to customize.

    And this is going to be happening right when the ability to take an Open Source package and tweak it for your use is steadily driving down the costs to modify the software. The limiting factor is the "guns-'n-butter" slew factor of moving from commercial software development to open source modification.

    Another way to look at this is from the point of view of a software firm like Microsoft. For them to maximize their profit, they want to convince their customers that all users' needs look alike.

    In the past the model was that the commercial software company produced one kind of software product and everyone had to conform to it, unless they wanted to spend horrible amounts of money writing custom software tailored to them.

    So, the software available forced people to tackle the computerization projects where it was fairly easy to modify the workflow, rather than projects where it was expensive to modify the workflow.

    But, in reality, this has always been a cost-benefit issue. In other words, which is more expensive, modifying your workflow to match the boxed software, or modifying the software to match your unique workflow?

    What is happening with Open Source, and what will be happening more and more, is that since the source is available, it's now a lot cheaper to modify the software. Instead of begging the software manufacturer, who would want 10 grand to even discuss the possibility of making a custom mod, admins can just go right into the code themselves. So now, projects that required the workflow be left alone and the software be modified, which were previously economically unfeasible, are coming within the bounds of consideration. So over the rest of the decade there's going to be a huge amount of work available for people that want to go into the business of adapting Open Source software for businesses that are still not computerized because their workflow is so different.

  10. The rise of scripting. This goes hand in hand with the ripe fruit observation. More and more of the complex systems that people are going to be putting together in the next decade are going to be built out of scripts rather than compiled programs. Scripting, like back-end Perl scripts and interfaces constructed with a Web browser and some PHP code, is going to be the order of the day. And this isn't limited to Open Source, either (witness ASP code in Windows), but while scripting has always been important in UNIX, it's a new thing for Windows. The increase in CPU power is also helping this.

  11. The techno-political shift. What is going on here is this: As computer technology becomes more and more entwined with people's daily lives, there is more and more demand from the man-on-the-street average consumer for government to step in and regulate and intrude on the industry. In short, (horrors!) POLITICS is starting to intrude into technology!

    We all are familiar, of course, with the crude and obvious governmental regulations, such as regulation of monitor emissions, and things like the EPA Energy Star program, which can hardly be called regulation. Then there is the obvious interference of the FCC in the telecom and Internet industry. But there are more subtle things going on. For example, the SEC and some of the governmental economic bodies are now under intense pressure to put a lid on the excesses of the technology industry, because they are viewing the technology slump and dot-bomb implosion as one of the big reasons the country went into a recession. It is likely that we will see technology startups barred from obtaining large amounts of startup capital from the stock market for at least a decade. This helps Open Source tremendously, because it's possible to build a business plan around a product based on Open Source that doesn't require large startup capital. Right now you can't do that with a "from-scratch" commercial Windows program, which would take a couple of years of coding development before it could even go into beta.

    Another piece of fallout from this is that as the federal government becomes more and more interested in technology, Microsoft is put at more and more of a political disadvantage for the simple reason that they are not headquartered back East. I know a lot of you may not understand this, not having lived back East, but there is a subtle bias against the West from the Eastern establishment. There are a lot of big money investors on the East Coast who lost a lot of money on dot-bombs, and they are pretty disgusted with Silicon Valley right now. One big area this shows up in is the awarding of government contracts. These days the governmental purchasers are favoring the established contractors that are based back East, not young new startups from Silicon Valley. And consider this, too: IBM is headquartered in New York, and IBM announced its $1 billion Linux investment.

  12. Another area Open Source will be more important in is the whole "digital rights" copyright arena. We have already seen this with Gnutella vs Napster, etc. This goes along with the techno-political shift I mentioned earlier; the DMCA is a prime example. But, more importantly, when a company has software whose redistribution it is trying to control--whether by outright sales of modified BSD software, or by selling service on software it GPLs so as to get back the modifications other people make--it is highly dependent upon the whole principle of copyright, or rather, upon the license being observed, you might say.

    Open Source also is an excellent platform for releasing how a commercially encrypted algorithm works, as well as any holes in it. For example, consider the controversy over DeCSS. The DVD manufacturers are all very upset about this. Of course, their big problem is that they assume they can sue anyone that posts or even links to DeCSS decryption code that breaks their CSS scheme. They can't, of course. In fact, the recent lawsuit against Matthew Pavlovich, who started the Linux Video project and its DVD player for Linux, by the DVD forum has just collapsed, as the California Court of Appeals ruled in favor of Pavlovich two weeks ago on 11/25/02.

    It's one thing to post a description of holes in a copy-protection scheme. This won't normally permit the average person to create a copy-protection breaker. But posting code for a decryptor is completely different, as it permits anyone to easily build a decryptor in software on a computer. What I predict we are going to see here is that eventually the movie and entertainment industry will reach a truce in the DVD wars: they will be able to successfully go after and sue people that manufacture movie and audio copy-protection-breaking devices; however, they will not be able to touch people that merely post source code showing how to do it. There are just too many First Amendment issues otherwise. But this hasn't been decided yet. The next decade will have many court battles over this, and Open Source will be the medium used to trigger these court cases. Open Source will also be squarely involved with other battles, such as the Unisys patent on the compression used in .gifs, which affected the gd library: the developers removed the ability to create gifs, and Albatross Consulting then put it back in with a patch. Albatross Consulting is in Australia, where the Unisys software patent doesn't apply.

  13. The commercial software makers make their money today by maintaining, in effect, a continuous stream of software sales. This is why they release new versions over and over when the old ones work just as well. What drives the new releases are not features, but the need for more money. Microsoft realized early on that people don't like upgrading, so they force the upgrade process by simply NOT selling old versions of software anymore. This worked well enough as long as the hardware market was expanding and the majority of people would buy whatever software the majority had.

    Open Source efforts, by contrast, have new releases driven by new features. So upgrades aren't forced, but become a cost-benefit decision. The older versions are always available, so if you happen to be, for example, a software preloader, then once you have a solid preload of, say, Linux worked out, why change it? Eventually security or hardware changes may force you to change it, but the upshot is that in the future we are going to see longer and longer use periods of particular versions of Open Source programs.

  14. There's an adage in the software industry: it's not what is technically superior, it's who has the best marketing. Microsoft used this very successfully to convert a large part of Novell's server base from Netware 3.12 to NT. There have been numerous examples of technically inferior programs winning the market from superior ones. The interesting thing about this is that the lure of "free" Open Source is also a marketing gimmick of sorts! Sure, in large installations you can save licensing dollars, but in small ones you still have to spend the time training the admin on the Open Source you're using, so the costs are more of a wash. In the future, you're going to see more and more emphasis on the money-saving aspects of Open Source, not because they really exist in many cases, but because the "free is a very good price" gimmick is such a strong lure.


But all isn't rosy and light for Open Source. There is a big problem that the Open Source community is going to have to face in the future, one on which there's little consensus now. The problem: where do you draw the line between ease of use and flexibility?

What this is all about is a fundamental principle in software and sociology. What it stems from is that the more choices that you give to someone, the more complicated a place you make the world for them. However, for any given group of people, the more complicated the world is, the more of them desire the direct opposite--simplicity.

The problem with software is that all software is written to solve problems. Even entertainment software like games is written to solve a problem--boredom. The more different types of problems the software can solve, the more valuable it is to more people. But the more problems it can solve the more complicated it is and the fewer people will use it.

If you look at the history of commercial software on microcomputers, this is very evident. The earlier personal computers used DOS and were used by few people. Later ones used Windows and MacOS, which have simpler interfaces, and these were used by many more people.

Now, a lot of folks in the industry talk about this in terms of "moving to the desktop." I hate this phrase. It instantly assumes all desktop users are simpletons. It's a Microsoft marketing term created to charge more money for "server" software. Instead, I see this as "moving to the more novice users."

Today, the effort to get Linux and other Open Source programs into the hands of more and more novice users is a mirroring of the effort that occurred 15 years ago. It is an effort to get more people using Linux by giving it a simpler interface. In a large part, this is working very well. It has been demonstrated pretty convincingly that every time Linux is made easier to use and install by the average person, more people use it.

But there is a problem with this, and that is the law of diminishing returns. Every time the GUI and desktop paradigm is made easier to use, the GUI and its programs are made more complicated and harder to maintain, and more chances are created for problems to develop. Additionally, the choices for problem-solving are reduced more and more, because so much has to be chosen in advance for the end user. For example, you take away a Full/Half duplex button on the networking GUI interface for a network card, thus making it easier for the novice to use (since they now don't have to know what duplex is), and you have increased complexity, because now the computer has to determine whether full or half duplex is to be used. And you have reduced the usefulness of the program for the user that happens to have a hub that doesn't autodetect properly, where the card has to be hard-coded to half duplex.

For another example, you put a GUI install program in the software--and now the program can't be installed on systems that have weird video cards that don't synchronize right. Or, my personal favorite, you make the controls to change video parameters available only in the GUI, but you need to change those parameters to get the GUI to come up at all.

The fundamental problem I see is that Open Source has not really defined for the rest of the world how far they want to go on this usability scale. If they want to go where Windows is, then it's going to require a prodigious amount of programming effort. I personally feel that this would be a waste and, in particular, that there are many more important projects that should have the attention. This is a classic BSD attitude, by the way. I also don't see how this can be done without stripping out the most useful part of Open Source--its flexibility. And lastly, if it is done, then how does it differ from Windows?

The folks that want to take Linux all the way to the most novice users in the market mostly seem to forget that just because Linux might be made as easy to use as Windows, Windows isn't going to just go away. The end user will still have to make a personal choice between Windows and Linux on their PC. And frankly, if price were ALWAYS the determining factor in any market, nobody would be purchasing Lexuses. Open Source's strengths lie in having software modifiable by the user so the user can get things done. If it is made so complicated in a quest to make it so easy to use, then it's being taken into a market where the user does not value its ability to be modified and may not care that it's a few hundred dollars cheaper than Windows. To use a phrase I mentioned earlier, it's getting out of its core competency. And as the 90s so aptly proved, doing this is a dangerous thing.


© Copyright 2000-2003 Ted Mittelstaedt. All rights reserved.