Freenixes and Unix History



If your ISP doesn’t use Unix, it should. Many of the various flavors of Unix provide excellent solutions for everything from webhosting to dialup authentication to networking to powerful workstations. Even better, some of the Unixes (the so-called “Freenixes”) are free to use, are available on cheap hardware, and come with powerful free software suites (like Apache, Sendmail and others) that match or beat the performance of commercial software costing thousands of dollars. Plus – let’s face it – there’s an undeniable “sysadmin badass” appeal that comes from telling people you’re a Unix wizard that you simply don’t get by telling people you’re “a real guru with AppleTalk.”


However, even among Freenixes, there are many to choose from. And, if you’re a new user or even an experienced administrator looking to taste other flavors, it’s difficult to get good advice on which kinds to choose. Most people who already use Unix are very devoted to their flavor of choice, and it’s tough to get unbiased advice from someone with a lifetime subscription to “BSD Jihad” or a tattoo of a heart with a little “Kernel 2.0.36” inside it.


So, for the next few months, we’ll look at the strengths and weaknesses of each Freenix flavor, and which may be right for you. To understand how and why each Freenix does what it does, it’s important to know where they (and Unix itself) come from – and that’s the focus of this month’s column.


Normally, histories of Unix are dull enough that they are used only in scientific experiments to anesthetize lab rats at a distance of 50 yards. In the interest of bringing everyone up to speed, the following is a completely irresponsible, glibly assertive and overly sarcastic yet theoretically amusing brief history of Unix up to the past few years.



Many millions of years ago, after the Earth cooled (roughly 1969), there was Unix. And it was good. Ken Thompson and Dennis Ritchie of Bell Labs found the legendary “little-used PDP-7 in a corner” and set to work on creating a new operating system for internal use at AT&T. Originally called “Unics” (UNiplexed Information and Computing System) as a pun on the earlier OS project “Multics” (Ha ha! Programmer humor!), its development was confined for several years to a small group of ultra-geek programmers within Bell Labs.


A fundamental step for Unix came in 1973 with Version 4, which had been rewritten from platform-dependent assembler code into the new C language (also a creation of Bell Labs). With the core of the OS written in a high-level programming language, Unix could now be ported to almost any platform that had a C compiler. Since that time, Unix and C have gone hand-in-hand like deer ticks and Lyme disease.



Version 6 of Unix in 1975 was released to the outside world in “unsupported” (“Don’t blame us if it hoses your CPU and makes it cough blood”) format. It had two different licenses for the OS/source package: a university license for $100 and an “everybody else” license for $20,000. As a result, a generation of Computer Science students grew up on hacking Unix and improving it (most notably at MIT, Carnegie-Mellon and the University of California-Berkeley). While the “other” price tag sounds hefty, it was a small price for large companies to be able to take an existing OS and build their own around it. This eventually attracted numerous vendors to base an OS on Unix – including Sun, SGI, DEC, HP, IBM, NeXT and even Apple (anyone out there remember A/UX?). Unix grew rapidly in the mainframe world of the ‘70s and ‘80s because it was relatively cheap, and small armies of new computer science graduates entered the workforce already familiar with it.


The early development of Unix defined the characteristics it carries to this day. Every OS is a compromise of sorts between two fundamental sets of questions: maximum configurability versus ease of use, and stability versus expandability. Unix was written by programmers and for programmers, not end users – and it stressed the power of configurability over user-friendliness (although great steps in that direction have been taken lately).


As time went on, more and more features were added to Unix by various offshoots of Bell and by university computer science students. UC Berkeley’s Computing Systems Research Group (CSRG) acquired a license for Version 6, and set a goal to write a completely AT&T code-free version of the system. The rise of the ARPAnet and NSFnet drove Berkeley to develop innovations such as the full integration of TCP/IP into Unix and the birth of domain names (DNS and BIND). As such, the development of what would become the Internet was intimately tied to Unix – and Unix would remain the Internet’s OS of choice. By the mid-‘80s, Unix was well known within the computer science community as the operating system that could wring maximum stability and performance out of almost any hardware.



In the early ‘80s, Bell had transferred their development of Unix to Western Electric, which called their commercial version of Unix “System III” (they figured that if they called it “System I” no one would buy it since they’d think it was too buggy). Unix System V was released in 1983, and System V Release 4 (SVR4) was released in 1989, incorporating many of the improvements made by the Berkeley group and Sun Microsystems (including TCP/IP and NFS). Unix even had two Graphical User Interface schemes (Motif and OpenLook) – both commercial and (appropriately) completely incompatible implementations of the open MIT “X Window” standard. By the late 1980s, Unix was a fully commercial operating system family in all its variants; but it was a “high-end” OS that was far out of reach for individual users, and which the personal computer revolution had largely passed by.


At this point, it’s worth a brief digression to discuss “what is Unix?” If you want to be technical, only operating systems based on AT&T code and blessed by the Open Group (the Unix trademark holder) qualify as Unix. Personally, I think that’s a load of crap. For the purposes of this article (and the rest of this column’s run), any OS that you can bring to a screeching halt by typing “kill -9 1” at a root shell prompt is Unix – and that includes SVR4, BSD and Linux-based OSes.


Anyway, during this time, the university community that had given birth to so many of Unix’s developments remained active – and it wanted a Unix inexpensive enough to be used by individuals and groups without the deep pockets of universities and corporations. Just as private Internet Service Providers were springing up to offer access outside the institutions through which most users had first reached the Internet, hackers were hard at work trying to create a Unix of their own.



The Berkeley CSRG’s funding ran out before it fully completed its goal of creating an AT&T-free Unix, but just before its demise CSRG released a version (still retaining 5 or 10 percent AT&T code) called 4.3 BSD Net/2. They published their work under what became known as the “BSD” license, which made the source free to anyone who wanted it (but also allowed it to be incorporated into non-open-source and commercial projects). One of the many groups that picked up the Net/2 ball and ran with it was a commercial interest, Berkeley Software Design, Inc. (BSDI), which worked on filling in the gaps in Net/2 and selling the resulting product. Simultaneously, Berkeley’s Bill Jolitz made a great leap for hackers and hobbyists everywhere with a more-or-less AT&T-free port of Unix to the Intel 80386 architecture called 386BSD.


Frightened by the “free PC Unix clone” possibilities of 386BSD, AT&T sued Berkeley. Predictably, Berkeley countersued AT&T. While their lawyer landsharks battled it out, AT&T sold the SVR4 code rights to Novell (anyone remember UnixWare?). Development still went on in the meantime, and a horde of groups worked on merging the BSD and SVR4 releases or creating an entirely new Unix. Somewhere around a million different Unix “standards” groups came into existence and were wholeheartedly ignored by everyone else.



Richard Stallman had already begun his “free-software” movement in the mid-‘80s, when he started to read “Karl Marx’s Greatest Hits” and write Emacs. With some MIT buddies, he formed the GNU (GNU’s Not Unix) Project to take all of those AT&T Unix utilities and make better, free/open-source versions – and eventually to create a completely new free Unix, to be called the GNU Hurd. The GNU license(s) essentially forbade using the code in products that were not themselves open. That restriction became the crucial distinction of the “free software” movement that would evolve from the GNU project, and separated it from the looser BSD license, which would become the model of the often semi-corporate “open source software” movement of the late 1990s.


Meanwhile, in 1991, a 21-year-old Finnish computer science student named Linus Torvalds was working on making a free and better version of Unix’s kissing cousin Minix. Torvalds developed the beginnings of a completely new kernel (the “core” part of the OS that handles memory management and device input/output, and acts as interface between hardware and software), and posted to Usenet’s comp.os.minix his source code for version 0.02 of what would soon be called “Linux.”



The BSD development model emphasized a “core team” of developers responsible for the primary guidance of a project, and a number of teams sprang up to release free versions of Unix based on the BSD code. Unfortunately, they all managed to get along “like two cats in a sack.”


At the same time, Linux – which had been released under a GNU license – emphasized a decentralized development model. Torvalds remained the head of Linux kernel development, and Linux variants were unified by their use of the Torvalds-approved kernel. However, everyone was invited to take part in adding drivers and kernel developments, and anyone was free to form their own Linux “distribution” (anyone remember Yggdrasil?) concentrating on whatever direction they felt that Linux should go.


Suddenly, in 1994, the lawyer-welfare project known as the “AT&T/Novell vs. Berkeley war” ended. The exact details of the settlement are sketchy, but BSDI shortly thereafter discontinued its Net/2-based system and replaced it with a release based on a new code base called 4.4 BSD Lite. Novell eventually sold its SVR4 rights to the Santa Cruz Operation (SCO), and the SVR4-based non-free Unixes (including Solaris, HP-UX, Digital Unix and IBM’s AIX) continued and proliferated. Also in 1994, Linux kernel 1.0 was quietly released. In time, Stallman’s Free Software Foundation – seeing that the GNU Hurd project would be up to speed sometime around 2040 – came to adopt Linux (Debian distribution) as a replacement for the Hurd, despite some well-publicized nomenclature hissy-fits by Stallman about calling Linux “GNU/Linux.”



Today, the commercial Unixes are still around and (at least some of them) flourishing. Meanwhile, development for hobbyists, hackers, ISPs and many businesses has focused on the Freenix family. Right now, there are four primary Freenixes of interest to ISPs: Linux, FreeBSD, NetBSD and OpenBSD – each with a different philosophy, focus, and strengths or weaknesses.


In next month’s edition, we’ll explore what makes them different, and why one or the other may be right for your ISP. In the meantime, those of you with enough free time or pain tolerance to have read this far should feel free to send in your questions and comments about the varying Freenix families. Flames and hate mail should be directed to John Dvorak. ;-)