This feature first appeared in Certification Magazine.
A large number of beginning developers and administrators mistakenly think of "Linux" and "open source" as synonyms when they are anything but. Linux is a wonderful example of open source development, and certainly its most well-known instance.
Linux, however, is just one implementation among many of a long-lived and highly successful model. Not only does open source development predate Linux, but it also provided the foundation that made that operating system possible. To understand open source, it is necessary to look back at the origin of the model and some of its history.
Open source and sharing
The definition of open source software is autological (or homological) in that the phrase itself describes exactly what it means: The source code is made freely available with the software.
Not only that, but the licensing that accompanies such software states that users may modify it (even in a collaborative and/or public way) and redistribute the software (and its source code) to anyone for any purpose. Certain other rights or conditions can accompany it, depending on the actual license that is applied, but this is the true heart of all open source software.
I encountered this model in my first job after college. I went to work for a not-for-profit organization — where it is not uncommon for tech to be a decade behind other business sectors. What I fell into was a world that had only recently transitioned from mainframes to mini-towers, and every available IT dollar was used to purchase the most expensive hardware that could be afforded.
With all the budget allotted to hardware, the software had to be cobbled together. The organization I worked for was loosely associated with healthcare, so the programming was done in the now-forgotten (and horribly named) MUMPS language, and we belonged to a developers' group that openly shared code.
Someone at another institution would ask us for the program we had written to send daily reminders of crucial tasks and we would ask them, in return, for the program they had been bragging about for compiling reports.
Several weeks later, you could be assured that the person who had taken your program would post to the community that they had had to rewrite one or more sections of it to get it to work right (there was always a certain amount of bravado involved). They would share the improved version with anyone who wanted it. With each institution that adopted the program, and each tweak of the code, it got better.
That sharing model existed not only among not-for-profit groups strapped for cash, but among many other businesses as well. It was commonplace to spend as much money as the budget would allow on the best hardware that could be had and then piece together the software. While it was the open source model, rarely was it called that since it was simply the accepted model and there were few exceptions to it.
Along came Microsoft
When the IT world first started pivoting to personal computers, the mindset of sharing software accompanied it. A marked turning point came when Bill Gates, then a little-known founder of one of the few startup software companies around, published a bold manifesto in 1976 denouncing sharing as piracy.
Titled "An Open Letter to Hobbyists," Gates' heated screed had a much greater impact than just to put the tinkerers and sharers on notice — it essentially divided the software industry. Purists and idealists who gave away their source code with their programs ended up on one side, exchanging cold-eyed glares with profit-hungry protectionists.
While neither term was entirely accurate, one model started getting referred to as "free software" and the other as "proprietary." Much of the "free" moniker came from the Free Software Foundation (FSF), which advocated against the proprietary model. Over time, the FSF alienated some with the stridency of its beliefs, and in the 1990s the term "open source" began to be widely used to refer to the non-proprietary model while distancing it from the FSF.
Over this same period, a different paradigm was becoming prominent: one in which hardware was no longer the main consideration and software an afterthought. As more and more users became empowered to do what they had previously needed administrators to do, sharing data (documents, presentations, spreadsheets, etc.) became a primary concern.
The need to share data, in turn, brought the standardization of software to the forefront. Hardware started being purchased on the basis of its ability to run particular programs, and having all of the users in an organization uniformly use selected software became the key consideration.
Then there was Linux
Since software programs need an operating system to function, many people began to recognize the need for a robust OS that was not proprietary. There were a handful of attempts at coming up with something the market would appreciate, and rising from among the x86 possibilities was Linux, which embraced open source so deeply that it became the embodiment of it.
Spearheaded by Linus Torvalds and released in 1991, Linux quickly caught on. Torvalds proposed a model in which he would essentially be responsible for the kernel while leaving everything else the OS needed for programmers throughout the world to create.
While others had made very similar attempts previously, their end results had not been as successful at gaining widespread adoption. Linux benefited greatly from several factors, two of the most important being the ability to see what other developers had attempted before (what had worked with the predecessors and what had not) and timing.
The development of Linux could not have been timed better, given the licensing and legal issues surrounding other leading operating systems — most notably, of course, the omnipresent Windows developed by Gates' and Paul Allen's Microsoft. It also benefited from the increasing ability of programmers to communicate and share with one another over the Internet.
Other notable open source successes
Largely based on the success of Linux, many other projects have adopted the open source model, with the Apache Software Foundation (www.apache.org), or ASF, being one of the most well-known. While originally created during development of the Apache HTTP Server, the ASF has become the organization overseeing many open source projects (currently in excess of 350).
Managed and guided solely by volunteers, the ASF marshals the efforts of more than 40,000 contributors for the projects under its care, with 4.2 billion lines of code involved. A complete list of projects, in alphabetical order, can be found at the bottom of its home page.
The Linux Foundation is similarly focused on "Helping open technology projects build world-class open source software, communities and companies," and its most recent numbers include 235,000 developers and more than 1 billion lines of code. The Linux Foundation offers a number of certifications as well, providing a means by which administrators and developers alike can have their skills authenticated to enhance their résumés.
Also worthy of mention are the Mozilla Foundation of Firefox fame, and Debian. Lastly, Google Open Source is noteworthy for its size: more than 2,000 projects are in the works. As the company states: "We often release code to push the industry forward or share best practices we developed. But sometimes, it's just fun and interesting code."
That sums up so much of what the open source model represents.
The good, the bad, and the ugly of open source
While the open source model is intended to allow software programs to become better through modifications and corrections, one of the biggest drawbacks is that there is more than one licensing model. You cannot simply help yourself to anything you come across on the assumption that no one will mind.
Rather, there are a number of competing licensing models in existence, and it is important to understand that "open" does not necessarily equate to free — in fact, even when the word "free" is used, it is often intended to mean free to distribute, free to modify, etc., and not "free of charge." This misunderstanding has existed for decades and has been problematic for users and developers alike.
One of the primary reasons for a developer deciding to adopt the open source model — in addition to possible development help from others in the community — is to be able to gain distribution. By utilizing this model, developers can share their programs and code with the world and still retain rights to what they created.
It bears pointing out, however, that just because you embrace the open source model does not mean that a program will be met with outstretched, welcoming arms. There are numerous examples of programs that the world didn't really need that were released in hopes of finding a market and instead withered away. No model can save a program by itself. Any program must still fulfill a genuine (or at least perceived) need.
Looking to the future
The open source model, whether blanketed under that title or not, has been around as long as software programming and shows no signs of going away. It predates Linux, and the conviction that sharing source code leads to mutually better programs was a founding principle of software development.
This core belief will remain as long as professional and beginning programmers alike have something new to share with the world.
Recommended Reading
In addition to the short "An Open Letter to Hobbyists" by Bill Gates (mentioned above), anyone interested in open source history should read Eric S. Raymond's seminal paper "The Cathedral and the Bazaar" (later expanded to book length).
Raymond first compares the proprietary and open source models, and then offers excellent advice for programmers based on his experiences and observations. Nicholas Carr's "The Ignorance of Crowds," published a decade later, takes exception to some of Raymond's observations and completes the picture.