Why Microsoft Will Rule the World: A Wake-Up Call at Open-Source’s Mid-Life Crisis

By Jeffrey Carl

Boardwatch Magazine, August 2001

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche following but an influential one among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

In a Nutshell: The original hype over open-source software has died down – and with it, many of the companies built around it. Open-source software projects like Linux, *BSD, Apache and others need to face up to what they’re good at (power, security, speed of development) and what they aren’t (ease of use, corporate-friendliness, control of standards). They will either have to address those issues or remain niche players forever while Microsoft goes on to take over the world.

The Problem

There is a gnawing demon at the center of the computing world, and its name is Microsoft. 

For all the Microsoft-bashing that will go on in the rest of this column, let me state this up front: Microsoft has done an incredible job at what any company wants to do – leverage its strengths (sometimes in violation of anti-trust laws) to support its weaknesses and keep pushing until it wins the game. That’s the reason I hold Microsoft stock – I can’t stand the company, but I know a ruthless winner when I see it. I hope against reason that my investment will fail.

It has been nearly two years since I wrote a column that wasn’t “about” something, that was just a commentary on the state of things. Many of you may disagree with this, and assign it a mental Slashdot rating of “-1: Flamebait.” Nonetheless, I feel very strongly about this, and I think it needs to be said.

Here’s the bottom line: no matter how good the software you create is, it won’t succeed unless enough people choose to use it. Given enough time and the accelerating advances of competing software, I guarantee you that an under-used product will wither and die. You may not think this could ever be true of Linux, Apache, or any other open-source software that is best-of-breed. But ask any die-hard user of AmigaOS, OS/2 or WordPerfect, and they’ll tell you that you’re just wishing.

Sure, there are plenty of reasons these comparisons are dissimilar: Amiga was tied to proprietary hardware, and OS/2 and WordPerfect are the properties of companies which must produce profitable software or die. “Open-source projects are immune to these problems,” you say. Please read on, and I hope the similarities will become obvious.

For the purposes of this column, I’m going to include Apple and MacOS among the throng (even though Apple’s commitment to open source has frequently been lip service at best), because they’re the best example of software that went up against Microsoft and failed.

You say it’s the best software out there for you – so why should you care what anyone else thinks? You don’t, at first. But, slowly, the rest of the world makes life harder for you until you don’t have much choice.

In the Software Ghetto

Look at Apple, for example (disclaimer: I’m a longtime supporter of MacOS, along with FreeBSD and Linux). My company’s office computers are all Windows PCs; my corporate CIO insisted, despite my arguments for “the right tool for the job,” that I couldn’t get Macs for my graphics department. “We’ve standardized on a single platform,” is what he said. He’s not evil or dumb; it’s just that Windows networks are all he knows and is comfortable with.

Big deal, right? Most Mac users are fanatics. There’s a community of millions of fellow Mac-heads out there that I can always count on to keep buying Macs and keep the platform alive forever, right? An installed base of more than 20 million desktops, plus sales of 1.5 million new computers in the past year alone, is enough for perpetual life, right?

Sure, until that number falls below the number of licenses Microsoft decides it needs to keep producing MS Office for Mac. Right now, my Mac coexists with the Windows-only network at my office because I can seamlessly exchange files with my Windows/Office brethren. But as soon as platform-neutral file-sharing goes out the Window (pun intended) – which Microsoft could easily arrange by giving Office a proprietary document format that can’t be decoded by other programs without violating the DMCA, or something equally asinine – I’m going to have to get a Windows workstation.

Or Intuit decides there just aren’t enough users to justify a Mac version of Quicken. Or, several years from now, Adobe decides it’s just not profitable to make a Mac version of Photoshop, InDesign or Illustrator – or releases critical new versions six months or a year behind the Windows version. I can still keep buying Macs … but I’ll need a Windoze box to run my critical applications. As more people do this, Apple won’t have the revenues to fund development of hardware and software worth keeping my loyalty. And I’ll keep using the Windows box more and more until I finally decide I can’t justify the expense of paying for a computer I love that can’t do what I need.

“Apple,” you say, “is a for-profit company tied to a proprietary hardware architecture! This could never happen to open-source software running on inexpensive, common hardware!”

Open Source with a Closed Door

Let’s step back and look at Linux. A friend of mine works as a webmaster at a company that recently made a decision about what software to use to track website usage statistics. His boss found a product which provided live, real-time statistics – but it ran only on Windows with Microsoft IIS. My friend showed off the virtues of Analog as a web stats tool, but its reports were too complicated for his boss to decipher. Whatever arguments my friend offered (“Stability! Security! The virtues of open development!”) were simply too intangible to outweigh the benefits his boss wanted, which this one Windows/IIS-only package provided. So, they switched to Windows as the hosting environment.

There may come a day when you suggest an open-source software solution (let’s say Apache/Perl/PHP) to your boss or bosses, and they ask you who will run it if you’re gone. “There are plenty of people who know these things,” you say, and your boss says, “Who? I know plenty of MCSEs we can hire to run standardized systems. How do we know we can hire somebody who really knows about ‘Programming on Pearls’ or running our website on ‘PCP’ or whatever you’re talking about? There can’t be that many of them, so they must be more expensive to hire.” Protest as you might, there isn’t a single third-party statistic or study you can cite to prove them wrong.

If you ask the average corporate IT manager about open source, they’ll point to the previous or imminent failures of most Linux-based public companies as “proof” that open-source vendors won’t be there to provide paid phone support in two years like Microsoft will. 

I’m willing to bet that most of you out there can cite examples of the dictum that corporate IT managers don’t ever care about the costs they will save by using Linux. They are held responsible to a group of executives and users that aren’t computer experts, aren’t interested in becoming computer experts, and wouldn’t know the virtues of open source if it walked up and bit them on the ass. They want it to be easy for these people, and fully and seamlessly compatible with what the rest of the world is using, cost be damned. Say what you will – right now, there’s just no logical reason for these people not to choose Windows.

So maybe the Linux user base drops to the point where the Mac’s is now (still a significant number) – only the die-hard supporters. But how many of you Linux gurus out there don’t keep a separate Windows box or boot partition to play all the games that aren’t developed for Linux because of its lack of users/market share? Well, what about the next killer app that’s Windows-only, pushing you to use Linux less and less? Or the next cool web hosting feature that only MS/IIS has? Or as more MS Internet Explorer-optimized websites appear?

I’m not arguing that Linux or BSD would ever truly disappear (there are still plenty of OS/2 users out there). I am, however, saying that as market share erodes, so does development; and, over the long run – if things continue on the present course – Windows has already won.

The main point is this: niche software will eventually die. It may take a very long time, but it eventually will die. Mac or Linux supporters claim that market share isn’t important: look at BMW or Porsche, which have tiny market shares but still thrive. The counterpoint is that if those cars could only drive on compatible roads, and the local Department of Transportation had to choose between building roads that 95% of cars could ride on or building separate roads for the few, their owners would soon have nowhere to drive. True, Linux/BSD has several Windows ABI/API compatibility projects, and Macs have the excellent Connectix VirtualPC product for running Windows on Mac – but very few corporate IT managers or novice computer users are going to choose those over “the real thing.” And I’m willing to bet that those two groups make up 90% of the total computer market.

You can argue all you like that small market share doesn’t mean immediate death. You’re right. But it means you’re moribund. One of the last bastions of DOS development, the MAME arcade game emulator, is switching after all these years to Win32 as its base platform – because the lead developer simply couldn’t use DOS as a true operating platform anymore. It will take time, but it will happen. Think of all the hundreds of thousands (if not millions) of machines out there right now running Windows 3.11 for Workgroups, OS/2, VMS, WordPerfect 5.1, FrameMaker, or even Lotus 1-2-3. They do what they do just fine. But, eventually, they’ll be replaced. With something else.

The Solution

All this complaining aside, the situation certainly isn’t hopeless. The problems are well known; it’s easier to point out problems than solutions. So, what’s the answer? 

For that 90% of users that will decide market share and acceptance, two things matter: visible advantages in ease of use, or quantifiable bottom-line cost savings. Note, for example, how Mac market share declined from 25% to less than 10% as the “visible” ease-of-use differential between Mac System 7 and Windows 95 narrowed. Or compare the cost of more-expensive Mac computers with fewer support personnel against that of cheaper Windows PCs with more (but certified, with estimable salary costs) support personnel.

Open-source software development is driven by programmers. Bless their hearts, they create great software, but they’re leading it to its eventual doom. They need to ally firmly with their most antithetical group: users. Every open-source group needs to recruit (or be adopted by) at least one user-interface or marketing person. Every open-source project that doesn’t have at least one person asking the developers at every step, “Why can’t my grandmother figure this out?” is heading for disaster. Those that do are making progress.

Similarly, open-source projects that face proprietary competitors or deal with some sort of industry standard will fall behind if they don’t take a Microsoft-esque “embrace and extend” approach. If they don’t provide new APIs or hooks for new features (and there’s nothing against making these open and well-documented), Microsoft will when it releases a competing product (and, believe me, it will; wait until Adobe has three consecutive bad quarters and Microsoft buys them). The upshot of this point is that open-source projects can’t just conform to standards that others with greater market share will extend; they need to provide unique, fiscally-realizable features of their own.

Although Red Hat has made steps in this direction, other software projects (including Apple, Apache, GNOME, KDE and others) should work much harder to provide some rudimentary form of certification process – a standardized qualification for support personnel. Otherwise, corporate/education/etc. users will have no idea what it costs to hire qualified support staff.

Lastly, those few corporate entities staking their claims on open source should be sponsoring plenty of studies to show the quantifiable benefits of using their products (including the costs of support personnel, etc.). The concepts of “ease of use” or “open software” don’t mean jack to anyone who isn’t a computer partisan; those who consider computers to merely be tools must be shown why something is better than the “safe” choice.

Linux 2.4: What’s in it For You?

By Jeffrey Carl

Boardwatch Magazine, April 2001


In a Nutshell: Linux’s day as a scalable server is here. The long wait for Linux kernel 2.4 made its release seem somewhat anti-climactic, so many of its new features have gone largely unnoticed. Although many of the changes were available as special add-ons to 2.2.x kernels before, the 2.4 kernel wraps them all together in a neat package – as well as integrating a number of great new features, notably in the networking, firewall and server areas.

Bringing Everybody Up to Speed

If you’re at all familiar with the Linux kernel and its upgrade cycle, you can skip the next several paragraphs and go on to make fun of the technical inaccuracies and indefensible opinions in the rest of the column. Everybody else should read these introductory paragraphs, and only then should they go on and make fun of the rest of the column.

The Linux kernel is the foundation of the Linux operating system, since it handles all of the low-level “dirty work” like handling processes and memory, I/O to drives and peripherals, networking protocols and other goodies. The capabilities and performance of the kernel in many ways circumscribe the capabilities and performance of all the programs that run on Linux, so its features and stability are critical.

In Linux, odd-numbered “minor” version numbers (like x.3) are unstable, developmental versions where adding cool new features is more important than whether or not they make the system crash. They are tested in the developmental kernels and once the bugs are worked out, they are “released” as the next highest even-numbered minor version (like x.4) kernel, which is considered stable enough for normal users to run. Linux kernel 2.2.x was the “stable” kernel (while kernel 2.3.x was the developmental version) from January 1999 to January 2001, when the new stable kernel became the very long-awaited 2.4.x (as of this writing, the most recent version was 2.4.2).
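The even/odd convention is easy to check mechanically. Here’s a small shell sketch (the version string is just an example; on a live system you might feed it the output of “uname -r”):

```shell
# Determine whether a Linux kernel version is from a stable (even minor
# number) or development (odd minor number) series, per the convention
# described above. The version string here is an example; substitute
# "$(uname -r)" on a live system.
kernel="2.4.2"
minor=$(echo "$kernel" | cut -d. -f2)
if [ $((minor % 2)) -eq 0 ]; then
  echo "$kernel is from a stable series"
else
  echo "$kernel is from a development series"
fi
```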

What’s New, Pussycat?

The changes since Linux kernel version 2.2 largely reflect the expansion of Linux as it comes to be used on an ever-wider variety of hardware and for different user needs. It wraps in features required to run on tiny embedded devices and multi-CPU servers as well as traditional workstations. Improving Linux’s multiprocessing capabilities also required cleaning up many other kernel parts so they can take advantage of (and not get in the way of) multiple processors. To expand Linux’s acceptance in the consumer marketplace, it includes drivers for a large number of new devices. And to hasten Linux’s acceptance in the server market (especially on the high end), its networking performance has been enhanced – notably in areas where earlier benchmarks had shown it losing to Microsoft’s Windows NT/2000.

High-end server vendors (most notably IBM) have embraced Linux and pushed for the kernel to include the features that would make Linux run reliably on high-end hardware. In addition to all the CPU types supported by Linux 2.2, the Intel ia64 (Itanium) architecture is now supported, as are the IBM S/390 and Hitachi SuperH (Windows CE hardware) architectures. There are optimizations not only for the latest Intel x86 processors but also for their AMD and Cyrix brethren, plus Memory Type Range Register (MTRR/MCR) support for these processor types. Support for Transmeta Crusoe processors is built in (as you would expect, since Linus Torvalds is a Transmeta employee). Whereas kernel 2.2 scaled well up to four processors, 2.4 supports up to 16 CPUs.

As part of the “scalability” push, a number of previous limitations have been removed in kernel 2.4. The former 2 GB size limit for individual files has been erased. Intel x86-based hardware can now support up to 4 GB of RAM. One system can now accept up to 16 Ethernet cards, as well as up to 10 IDE controllers. The previous system limit of 1024 threads has been removed, and the new thread limit is set at run time based on the system’s amount of RAM. The maximum number of users has been increased to 2^32 (about 4.2 billion). The scheduler has been improved to be more efficient on systems with many processes, and the kernel’s resource management code has been rewritten to make it more scalable as well.

Improved Networking

Kernel 2.4’s networking layer has been overhauled, with much of the effort going into the improvements necessary for dealing efficiently with multiprocessing. Routing capabilities have been improved by splitting the network subsystem into enhanced packet filtering and Network Address Translation (NAT) layers; modules are included to provide backward compatibility with kernel 2.0 ipfwadm- and kernel 2.2 ipchains-based applications. Firewall and Internet protocol functions have also been added to the kernel.

Linux’s improved routing capabilities make use of a package called iproute2. They include the ability to throttle bandwidth for or from certain computers, to multiplex several servers as one for load-balancing purposes, or even to do routing based on user ID, MAC address, IP address, port, type of service or even time of day.
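As a rough illustration of what the iproute2 tools (“tc” and “ip”) make possible – all interface names, addresses and rate parameters below are invented examples in the style of the Linux Advanced Routing HOWTO, not a recipe from this article:

```shell
# Throttle traffic leaving eth0 toward one host to roughly 128 kbit/s,
# using a classful CBQ queueing discipline (hypothetical addresses).
tc qdisc add dev eth0 root handle 1: cbq avpkt 1000 bandwidth 10mbit
tc class add dev eth0 parent 1: classid 1:1 cbq rate 128kbit \
   allot 1500 prio 5 bounded isolated
tc filter add dev eth0 parent 1: protocol ip prio 16 u32 \
   match ip dst 192.168.1.50 flowid 1:1

# Policy routing: send packets from one subnet through a separate
# routing table with its own default gateway.
ip rule add from 10.0.0.0/24 table 100
ip route add default via 10.0.0.1 table 100
```

Commands like these must run as root on a 2.4 kernel built with the relevant queueing and policy-routing options enabled.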

The new kernel’s firewall system (Netfilter) provides Linux’s first built-in “stateful” firewalling system – one that remembers the state of previous packets received from a particular IP address. Stateful firewalls are also easier to administer with rules, since they automatically exclude many more “suspect” network transactions. Netfilter also provides improved logging via the kernel log system: it automatically flags things like SMB requests coming from outside your network, lets you set different warning levels for different activities, and can send certain warning-level items to a different destination (like sending certain logging activities directly to a printer, so the records are physically untouchable by a cracker who could otherwise erase the logfiles).

The system is largely backward-compatible, but it now allows Netfilter to detect many “stealth” scans (say goodbye to hacker tool nmap?) that Linux firewalls previously couldn’t detect, and blocks more DoS attacks (like SYN floods) by intelligently rate-limiting user-defined packet types. 
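A minimal stateful ruleset built with iptables (the 2.4-era userland tool for Netfilter) might look like the following sketch; the rule details are illustrative, not a complete firewall:

```shell
# Illustrative Netfilter rules (run as root on a 2.4 kernel).
# Accept packets that belong to, or are related to, connections this
# host already established -- the "stateful" part.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Rate-limit new TCP connection attempts to blunt SYN floods, then
# drop whatever exceeds the limit.
iptables -A INPUT -p tcp --syn -m limit --limit 5/second --limit-burst 10 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP
```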

Under kernel 2.2 (using a model that is standard across most Unix variants), all Unix network sockets waiting for an event were “awakened” when any activity was detected – even though the request was addressed to only one of those sockets. The new “wake one” architecture awakens only one socket, reducing processor overhead and improving Linux’s server performance.

A number of new protocols have been added as well, such as ATM and PPP-over-Ethernet support. DECnet support has been added for interfacing with high-end Digital (now Compaq) systems, as has the ARCNet protocol. Support for the Server Message Block (SMB) protocol is now built in rather than optional. SMB allows Linux clients to share files with Windows PCs, although the popular Samba package is still required for the Linux box to act as an SMB server.

Linux 2.4 has a web server called khttpd that integrates web serving directly into the kernel (like Microsoft’s IIS on WinNT or Solaris’s NCA). While not intended as a real replacement for Apache, khttpd’s ability to serve static-only content (it passes CGI or other dynamic content to another web server application) from within the kernel memory space provides very fast response times.
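khttpd is configured through a /proc interface rather than a config file. A rough sketch follows – the module name and /proc paths are as documented in the 2.4 kernel sources, but verify them against your kernel’s documentation, and the document root and ports here are examples:

```shell
# Load the kernel-space web server and point it at a document root.
modprobe khttpd
echo 8080 > /proc/sys/net/khttpd/serverport       # port khttpd answers on
echo /var/www > /proc/sys/net/khttpd/documentroot # static content lives here (example path)
echo 80 > /proc/sys/net/khttpd/clientport         # user-space server that gets dynamic requests
echo 1 > /proc/sys/net/khttpd/start               # begin serving
```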

Get On the Bus

Linux’s existing bus drivers have been improved as part of the new resource subsystem, and there are significant improvements and new drivers (including Ultra-160!) for SCSI bus support. Logical Volume Manager (LVM), a standard feature of high-end systems like HP/UX and Digital/Tru64 UNIX that allows volumes to span multiple disks or be dynamically resized, is now part of the kernel. Support is also there for ISA Plug-and-Play, Intelligent Input/Output (I2O, a superset of PCI), and an increased number of IDE drivers.

The device filesystem has been changed significantly in kernel 2.4, and the naming convention for devices has been changed to add new “name space” for devices. These device names will now be added dynamically to /dev by the kernel, rather than all potential device names needing to be present beforehand in /dev whether used or not. While backward-compatibility is intended, this may interfere with some applications (most notably Zip drive drivers) that worked with previous kernel versions.

New filesystems have been added (including a functional OS/2 HPFS driver, IRIX’s EFS, and NeXT UFS variants), along with support for NFS version 3. Accessing shares via NFSv3 is a major step forward, although Linux volumes will still be exported using NFSv2. Linux’s method for accessing all filesystems has been optimized, with the cache layer using a single buffer for reading and writing operations; file operations should now be faster on transfers involving multiple disks.
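Taking advantage of the client-side NFSv3 support is just a matter of mount options; the server name and paths below are hypothetical examples:

```shell
# Mount a remote export using NFS version 3 (requires root; "fileserver"
# and both paths are made-up examples). rsize/wsize tune transfer sizes.
mount -t nfs -o nfsvers=3,rsize=8192,wsize=8192 fileserver:/export/home /mnt/home
```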

For the Masses

There are, of course, a large number of updates to Linux that are primarily oriented towards the desktop (rather than server) user. A generic parallel port driver has been added which enables abstract communication with devices; this can be used for things like Plug-and-Play (PnP) polling or splitting the root console off to a parallel port device (like a printer). The new Direct Rendering Manager (DRM) provides a “cleaned-up” interface to the video hardware and removes the crash-inducing problem of multiple processes writing to a single video card at once.

There are a wide variety of new graphics and sound card drivers (including support for speech synthesizer cards for the visually impaired). The UDF filesystem used by DVDs has been added, and infrared (IrDA) port support is also included. Support for Universal Serial Bus (USB) has been added, but it isn’t yet perfect (although, IMHO, whose USB implementation is?). PCMCIA card support for laptops is now part of the standard kernel rather than requiring a custom kernel version, but an external daemon will still be required for full support. FireWire/i.Link (IEEE 1394) support is there, as well as IEEE 802.11b wireless (Apple’s “AirPort,” Lucent’s “WaveLAN”).

Probably the most far-out “consumer”-level enhancement is that kernel 2.4 has added support for the rare infrared RS-219 standard, a management interface used by specialized remote controls for Mobil and Amoco station (and some others) car washes! With the optional xwash software package, this can actually be used (on a laptop) to send signals for a “free” carwash. 

I’m kidding about that last one.

Is Anything Missing?

The 2.4 kernel itself does not have encryption technology built into it; that’s probably a wise decision, given the various cryptography regulations of countries worldwide that might actually make it prohibitive to export or import the Linux kernel. And unlike the 2.2 kernel, which included Java support automatically, you must specifically include it when building a 2.4 kernel.

Although journaling file system (JFS) efforts have been underway for a while, none was mature enough to include in kernel 2.4. Journaling file systems – a major requirement for true mission-critical servers – record (“journal”) all of their operations (analogous to a transactional database), so advanced data recovery operations (such as after a crash or power loss during read/write operations) are possible. See IBM’s open-sourced JFS project (http://oss.software.ibm.com/developerworks/opensource/jfs/?dwzone=opensource) for more information and software availability.

For Mac Linux users, support for newer Mac Extended Format (HFS+) disks has not yet been added. As of this writing, the NTFS (Windows NT/2000 file system) driver can read but not write data from within Linux. Alas, support for Intel 8086 or 80286 chips is not present either.

Lastly, you should not simply assume that things that worked with kernel 2.2 will work with 2.4. Changes in the device filesystem and the block device API (block devices are non-serial devices – hard disks or CDs, for example – whose sectors can be accessed randomly rather than in order) may break compatibility with some existing drivers.

Getting In-Depth with Linux 2.4

In this column, I’ve only been able to touch the surface of the new functionality available in Linux kernel 2.4. The “definitive” (most frequently quoted) analysis of 2.4 kernel changes is an ongoing set of posts to the Linux kernel-developers list by Joe Pranevich. The (currently) most recent version can be found at http://linuxtoday.com/stories/15936.html.

There’s also a good “kernel 2.2 vs. 2.4 shootout” with specific application test results and details at http://www.thedukeofurl.org/reviews/misc/kernel2224 and upgrade instructions for kernel 2.2.x systems at http://www.thedukeofurl.org/reviews/misc/kernel2224/5.shtml.

For an excellent overview of Linux 2.4’s new firewalling capabilities, see the article at SecurityFocus (http://securityportal.com/cover/coverstory20010122.html). For great information on the new network system’s routing capabilities, check the HOWTO (http://www.ds9a.nl/2.4Networking/HOWTO//cvs/2.4routing/output/2.4routing.html). A detailed article on the new security features in 2.4 can be found at http://www.linuxsecurity.com/feature_stories/kernel-24-security.html.

For a more in-depth overview of the general features, read the IBM DeveloperWorks kernel preview part 1 (http://www-106.ibm.com/developerworks/library/kernel1.html) and part 2 (http://www-106.ibm.com/developerworks/library/kernel2.html).

For an interesting comparison between Linux 2.4 and FreeBSD 4.1.1 (ignoring many of the advanced features of the new Linux kernel and concentrating on common tasks), see Byte Magazine’s article (http://www.byte.com/column/BYT20010130S0010).

For the kernels themselves and the most recent change logs, visit http://www.kernel.org/pub/linux/kernel/v2.4/. For ongoing news about Linux 2.4 and life in general, see http://slashdot.org.

Webhosting with Free Software Cheat Sheet

By Jeffrey Carl

Boardwatch Magazine, April 2001


So you want to run a webserver without paying a dime for software, eh? Or you want to make sure you have the source code to all your webserving applications in case you get bored and decide to try and port them to your networked office Xerox copier? Well, you’re in luck; webhosting with free (open-source, or free as in “free speech”) and free (no cost, or free as in “free beer”) software isn’t just possible, it also provides some of the best tools out there at any price.

In case you’re new to the game, or you’re looking for alternatives to packages you’re using now, the following is a brief guide to some of the more popular options that are out there. Trying to condense the features of any OS or application down to a couple sentences is inherently a dangerous thing; and I’m sure that many fans of the software listed below will disagree with elements of my “Reader’s Digest Condensed (Large Print Version)” summaries. Still, the following – based on my experiences and those of others – should provide at least a basic idea of what’s out there and why you might – or might not – want to choose it.

Operating Systems

Linux vs. BSD:

These OSes show the characteristics of their development styles: BSD was developed by small teams, largely focused on server hardware. Linux has been developed by many more people with wider uses, focusing more on desktop/workstation uses. 

BSD has been around longer and is (in some ways) more optimized for server use. Due to its hype, Linux has many more developers, and almost all new third-party software is available for Linux first. Linux has the edge in user-friendliness, because distributions are targeting new users; BSD is, for the most part, more for the “old school.” Linux has also been adopted by a number of server hardware vendors producing “integrated” solutions.

Ultimately, it’s a matter of what you feel most comfortable with. Either way, with commodity x86 hardware, your server components (RAM, drives, etc.) and network connection will affect your performance much more than your choice of Linux vs. BSD will.

• FreeBSD (www.freebsd.org)

Best known among the BSDs. Concentrates on x86 architecture, server performance, integration of utilities. Standout features include ports collection, sysinstall admin utility, Linux binary compatibility, frequent releases.

• NetBSD (www.netbsd.org)

BSD with a focus on porting it to as many platforms as possible and keeping code portable. Great for using old/odd hardware as a server. Infrequent releases, not as popular as other BSDs.

• OpenBSD (www.openbsd.org)

BSD with a focus on security. Still in the process of line-by-line security audit of the whole OS. Infrequently released, utilities/packages lag behind other OSes because of security audits, but it’s the #1 choice if security is your primary concern.

• Red Hat Linux (www.redhat.com) 

The number one U.S. distro, considered by many (rightly or wrongly) as “the standard.” As a result, it’s what many third-party/commercial Linux apps are tested against/designed for. Early adopter of new features in its releases; is on the cutting edge, but sometimes buggy until “release X.1.” Standout features: Red Hat Package Manager (RPM) installation, third-party support.

• SuSE Linux (www.suse.com)

The number one European distro. Favored by many because its six-CD set includes lots and lots of third-party software to install on CD. Less “cutting-edge” than Red Hat. Standout features include the YaST/YaST2 setup utility and the SaX X Windows setup tool.

• Slackware Linux (www.slackware.com)

Designed for experts: Slackware has no training wheels, and is probably the most “server-oriented” of Linux distros (maybe because of its close relationship to the BSDs). Not cutting-edge, few frills, but designed to be stable and familiar to BSD administrators.

• Linux Mandrake (www.linux-mandrake.com/en)

A solid, user-friendly distribution with good (but not great) documentation. Standout features include the DrakX system configuration utility and the DiskDrake disk partitioning utility.

• Debian GNU/Linux (www.debian.org)

The ideological Linux – supported entirely by users rather than a corporation, and including only software that is free (as in the GNU definition). “Ideologically pure” and GNU-approved, but releases are infrequent and it’s not necessarily a good choice for beginners.

• Caldera OpenLinux (www.caldera.com/eserver)

Very user-friendly for new users. Standout features include LIZARD, its setup/configuration wizard.

• Corel LinuxOS (linux.corel.com)

By the time you read this, Corel will have sold its LinuxOS product to someone else, but the distro should remain the same. It stresses ease of use for Windows converts and includes great SAMBA integration. Good for new users; the focus is mainly on desktop use.

Essentials

• Perl (www.perl.com)

From CGI to simple administration tasks, Perl scripts can cover a lot of territory. Perl is a must, and is practically part of Unix now. Check the Comprehensive Perl Archive Network (www.cpan.org) to find modules to extend Perl’s functionality.

• Perl’s cgi-lib.pl (cgi-lib.berkeley.edu) and/or CGI.pm (stein.cshl.org/WWW/software/CGI)

These are also “must-haves” for CGI scripts, whether you’re writing your own or using scripts found “out there” on the web.

• Sendmail (www.sendmail.org) or Qmail (www.qmail.org)

Free mailservers. Sendmail has the history and the documentation (a good thing, since its internals are famously complex), but Qmail has a less-complicated design, and a strong and growing band of followers.

• wu-ftpd (www.wu-ftpd.org)

A significant improvement in features over the classic BSD FTP daemon – for both BSD and Linux. Despite an older security flaw that was recently exploited by the “Ramen” Linux worm, it’s a very good program.

• OpenSSH (www.openssh.com/)

In this day and age, Telnet has become a liability for security reasons. There’s no reason not to migrate users who need a shell account to SSH. See www.freessh.org for a list of clients.
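As a quick sketch of the migration, generating a key pair for an SSH user takes one command. The file name and empty passphrase here are illustrative only – in practice you would want a real passphrase.

```shell
# Generate an RSA key pair for a user moving from Telnet to SSH
# (file name and empty passphrase are for illustration only)
ssh-keygen -t rsa -N '' -f ./demo_key -q

# The private key (demo_key) stays with the user;
# the public half (demo_key.pub) goes into ~/.ssh/authorized_keys on the server
ls demo_key demo_key.pub
```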

Web Servers

• Apache 1.3.x (www.apache.org/httpd.html)

The current king of web servers. Very good performance, stable enough to run on mission-critical systems. Very user-friendly to install and configure (due to comments in httpd.conf), but not always as easy as it should be to debug problems.

• Apache 2.x (www.apache.org/httpd.html)

Still in beta development, but may be final by the time you read this. It probably shouldn’t be used for mission-critical systems until it’s had a few months of time “out there” to find bugs after its final release. Version 2.0 will be much easier to add new protocols into (like FTP or WAP), and should have significantly better performance because of its multi-threaded nature.

• Roxen (www.roxen.com/products/webserver)

Roxen is much more than just a simple webserver – it includes its own web admin interface, secure server, and more. Used by real.com; it shows promise but doesn’t yet have Apache’s level of acceptance.

Secure Web Servers

Note: If you generate your own secure certificate (which is free), most web browsers will show a “security warning” about it. For a non-free certificate created by an authority that most web browsers will accept without a warning, see VeriSign (www.verisign.com/products/site/ss/index.html), Thawte (www.thawte.com), Baltimore (www.baltimore.com/cybertrust), ValiCert (www.valicert.com/), Digital Signature Trust Co. (www.digsigtrust.com) or Equifax (www.equifaxsecure.com/ebusinessid) for more information.
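For the do-it-yourself route, a self-signed certificate can be generated with OpenSSL along these lines. The hostname and file names are placeholders; adjust them for your site.

```shell
# Generate a self-signed certificate and its private key in one step
# (browsers will show the "security warning" noted above for such a cert)
openssl req -new -x509 -days 365 -nodes -newkey rsa:2048 \
    -subj '/CN=www.example.com' \
    -keyout server.key -out server.crt

# Inspect the result to confirm the subject is what you expect
openssl x509 -noout -subject -in server.crt
```

The resulting server.key and server.crt can then be pointed to from your ApacheSSL or mod_ssl configuration.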

• ApacheSSL (www.apache-ssl.org)

Focused on stability/reliability, and lacking in “bells and whistles” features. It’s simple and it works, but it lacks some features of mod_ssl and it isn’t updated very often.

• mod_ssl (www.modssl.org)

Originally based on ApacheSSL, mod_ssl is now largely rewritten and offers a number of extra features, plus better documentation. 

Microsoft Web Compatibility

• FrontPage Extensions for Unix (www.rtr.com/fpsupport)

On one hand, it allows you to host sites built and published with FrontPage on a Unix server. On the other hand, it’s possibly the biggest piece of junk Unix software ever created. Use it if you have to; avoid it if you can.

• Improved mod_frontpage (home.edo.uni-dortmund.de/~chripo)

Addresses a number of problems with mod_frontpage (www.darkorb.net/pub/frontpage), with extra security and documentation, support for Dynamic Shared Objects (DSOs), better logging, as well as (unverified) claims of increased performance.

• Apache::ASP (www.nodeworks.com/asp)

An alternative to the very expensive ChiliSoft or Halcyon ASP Unix solutions, using Perl as the scripting language for ASPs. Requires the Apache mod_perl.

• asp2php (asp2php.naken.cc)

As its FAQ says, “ASP2PHP was written to help you correct the mistake of using ASP.” Converts ASP scripts to PHP scripts for use with Apache/PHP.

Application Building

• Zope (www.zope.org)

A complete tool for building dynamic websites. Zope offers incredible functionality and is well-suited to large projects and web applications, but its (somewhat) stiff learning curve may be too much for basic needs – it can be overkill for simple scripting that could be done with PHP or Perl CGIs.

• PHP (www.php.net)

The favorite open-source tool for building dynamic websites, and the open-source alternative to ASP. Reliable, uses syntax that seems like a cross between Perl and C, and features native integration with Apache. Version 4 is thread-safe, modular, and reads then compiles code rather than executing it as it reads (making it much faster with large, complex scripts).

Database Software

Note: for a more in-depth comparison, I highly recommend the O’Reilly book MySQL and mSQL, as well as the article “MySQL and PostgreSQL Compared” (www.phpbuilder.com/columns/tim20000705.php3).

• MySQL (www.mysql.com)

The “Red Hat” of free relational database software. Well-documented, and its performance for most users is excellent; it’s designed around fast “read” rather than “write” operations. It doesn’t offer “subselect” functionality, and tends to buckle under very heavy loads (roughly 15 or more concurrent connections), but is very fast and reliable for most sites.

• PostgreSQL (www.postgresql.org)

Has an active developer community, especially popular among the “GPL-only” crowd. Offers advanced features that MySQL doesn’t (subselects, transactional features, etc.), but traditionally wasn’t as fast for common uses and sometimes suffered data corruption. New versions appear to have remedied most of these deficiencies.

• mSQL (www.hughes.com.au) 

The first of the bunch, but appears to have fallen behind. More mature than MySQL or PostgreSQL, but may not have all of the features of its rapidly developing brethren.

Administration

• Webmin (www.webmin.com/webmin)

Fully featured web-based administration tool for web, mail, etc. Offers excellent functionality, but can present a potential security risk (I get really nervous about anything web-accessible which runs with root permissions).

Java Servlets

• Tomcat (jakarta.apache.org) and JServ/mod_jserv (java.apache.org)

Tomcat is an implementation of the Java Servlet 2.2 and JavaServer Pages 1.1 specifications that works with other web servers as well as Apache. JServ is an Apache module for the execution of servlets. The two can work together, or Tomcat can serve JSPs on its own, independently of Apache.

Website Search

• ht://Dig (www.htdig.org)

ht://dig is relatively simple to set up, and (with a few quirks) offers excellent searching capabilities. Easily customizable, and has a good “ratings-based” results engine.

• MiniSearch (www.dansteinman.com/minisearch)

A simple Perl search engine, which can also be run from the command line. Not as fully featured as ht://dig, but good enough for basic needs.

Web Statistics

• analog (www.statslab.cam.ac.uk/~sret1/analog)

Analog is extremely fast, reliable and absolutely filled with features. Its documentation is a bit confusing for beginners, however, and it takes some configuration to make it pretty.
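To give a flavor of that configuration, a minimal analog.cfg might look something like this. The paths and hostname are assumptions you would replace with your own.

```
# Minimal analog configuration sketch (paths/hostname are illustrative)
LOGFILE /var/log/httpd/access_log
OUTFILE /usr/local/www/stats/index.html
HOSTNAME "Example ISP"
```

Making the output “pretty” mostly means layering further directives on top of a basic file like this one.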

• wwwstat (www.ics.uci.edu/pub/websoft/wwwstat)

A no-frills, simple statistics analysis program that delivers the basics.

Other Goodies

Configuration Scripts:

• Install-Webserver (members.xoom.com/xeer)

• Install-Qmail (members.xoom.com/xeer)

• Install-Sendmail (members.xoom.com/xeer)

Shopping Carts:

• Aktivate (www.allen-keul.com/aktivate)

Aktivate is an “end-to-end e-commerce solution” for Linux and other Unixes. It is targeted at small-to-medium-sized businesses or charities that want to accept credit card payments over the Web and conduct e-commerce. 

• OpenCart (www.opencart.com)

OpenCart is an open source Perl-based online shopping cart system. It was originally built to handle the consumer demands of Walnut Creek CDROM, was later expanded to also work with The FreeBSD Mall, and was finally developed to be used by the general public.

• Commerce.cgi (www.careyinternet.com)

Commerce.cgi is a free shopping cart program. Included is a Store Manager application to update program settings, and you can add/remove products from the inventory through a web interface.

Message Boards:

• WaddleSoft Message Board (www.ewaddle.com)

WaddleSoft is a message board system that includes polls, user registration, an extensive search engine, and sessions to track visitors.

• MyBoard (myboard.newmail.ru)

MyBoard is a very easy, lightweight web message board system. It also has some extended features, such as search and user registration.

• NeoBoard (www.neoboard.net)

NeoBoard is a Web-based threaded message board written in PHP. It includes a wide variety of advanced features for those comfortable with PHP.  

• PerlBoard (caspian.twu.net/code/perlboard)

PerlBoard is a threaded messageboard system written in Perl. It is very easy to use and set up, and has been time-tested for the past several years on the site it was originally written for.

• RPGboard (www.resonatorsoft.com/software/rpgboard)

RPGBoard is a WWWBoard-style message board script. It includes a list of features as long as your arm, and is well worth checking out for those who need a rather advanced message board.

Notes from the Underground

If you see a favorite package here that I’ve overlooked, or would like to offer comments on any of the package descriptions, e-mail me at [email protected]. I’ll update this list with more information for a future column.

The Next Apache

By Jeffrey Carl

Boardwatch Magazine, February 2001


The Apache webserver is (along with Linux and Perl) probably the most widely used open-source software in the world. After beginning as a set of patches to the NCSA HTTP server (“a patchy server” was how it got its name), by 1996 Apache had become the most popular HTTP server out there. According to Netcraft, Apache today powers 59 percent of all the webservers on the Internet, far more than the 20 percent share of the runner-up, Microsoft Internet Information Server.

Apache’s last “full-point” release (Apache 1.0) came out on December 1, 1995 – five years ago. Naturally, there’s a lot of excitement about the long-awaited Apache 2.0, which should be in beta release by the time you read this. To find out what’s new with Apache 2, I asked the Apache Project’s Dirk-Willem van Gulik and Ryan Bloom. The following are selected portions of an e-mail interview with these Apache team members:

Boardwatch: Why was the Apache 2.0 project started? What shortcomings in Apache 1.x was it created to address, or what missing features was it designed to implement?

Bloom: There were a couple of reasons for Apache 2.0. The first was that a pre-forking server just doesn’t scale on some platforms. One of the biggest culprits was AIX, and since IBM had just made a commitment to using and delivering Apache, it was important to them that we get threads into Apache. Since forcing threads into 1.3 would be a very complex job, starting 2.0 made more sense. Another problem was getting Apache to run cleanly on non-Unix platforms. Apache has worked on OS/2 and Windows for a long time, but as we added more platforms (Netware, Mac OS X, BeOS) the code became harder and harder to maintain. 

Apache 2.0 was written with an eye towards portability, using the new Apache Portable Run-time (APR), so adding new platforms is simple. Also, by using APR we are able to improve performance on non-Unix platforms, because we are using native function calls on all platforms. Finally, in Apache 2.0, we have implemented Filtered I/O. This is a feature that module writers have been requesting for years. It basically allows one module to modify the data from another module. This allows CGI responses to be parsed for PHP or SSI tags. It also allows the proxy module to filter data.

Boardwatch: What are the significant new features of Apache 2.0?

Bloom: Threading, APR, Filtered I/O. 🙂 And Multi-Processing modules and Protocol modules. These are two new module types that allow module writers more flexibility. A Multi-Processing module basically defines how the server is started, and how it maps requests onto threads and processes. This is an abstraction between Apache’s execution profile and the platform it is running on. Different platforms have different needs, and the MPM interface allows porters to define the best setup for their platform. For example, Windows uses two processes. The first monitors the second. The second serves pages. This is done by a Multi-Processing Module. 

Protocol modules are modules that allow Apache to serve more than just HTTP requests. In this respect, Apache can act like inetd on steroids. Basically, each thread in each process can handle either HTTP or FTP or BXXP or WAP requests at any time, as long as those protocol modules are in the server. This means no forking a new process just to handle a new request type. If 90% of your site is served by HTTP, and 10% is served by WAP, then the server automatically adjusts to accommodate that. As the site migrates to WAP instead of HTTP, the server continues to serve whatever is requested. There is no extra setup involved. I should mention now that only an HTTP protocol module has been written, although there is talk of adding others.

Boardwatch: How much of a break with the past is Apache 2.0, in terms of 1.) the existing code base, 2.) the administration interface, and 3.) the API for modules?

Bloom: I’ll answer this in three parts.

1) The protocol handling itself is mostly the same. How the server starts and stops, generates data, sends data, and does anything else is completely different. By adding threads to the mix, we realized that we needed a new abstraction to allow different platforms to start up differently and to map a request to a thread or process in the best way for that platform.

2) The administrative interface is the same. We still use a text file, and the language hasn’t changed at all. We have added some directives to the config file, but if somebody wants to use the prefork MPM (it acts just like 1.3), then a 1.3 config file will migrate seamlessly into 2.0. If the MPM is changed, then the config file will need slight modifications to work. Also, some of the old directives no longer do what they used to. The definition of the directive is the same, but the way the code works is completely different, so they don’t always map. For example, SetHandler isn’t as important as it once was, but the Filter directives take its place.

3) The module API has changed a lot. Because we now rely on APR for most of the low-level routines, like file I/O and network I/O, the module doesn’t have as much flexibility when dealing with the OS. However, in exchange, the module has greater portability. Also, the module structure that is at the bottom of every module has shrunk to about 5 functions. The others are registered with function calls. This allows the Apache group to add new hooks without breaking existing modules. Also, with the filter API, modules should take more care to generate data in sizable chunks.
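The MPM tuning Bloom alludes to shows up as new directives in httpd.conf. As a sketch, a fragment for one of Apache 2.0’s threaded MPMs might look like this (the directive names are real threaded-MPM directives; the values are illustrative, not recommendations):

```
# Hypothetical httpd.conf fragment for a threaded MPM in Apache 2.0
StartServers         4
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
```

With the prefork MPM instead, the familiar 1.3-style StartServers/MaxSpareServers directives continue to apply.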

Boardwatch: What are the advantages and disadvantages (if any) of Apache 2.0’s multithreaded style? What does it mean to have the option of being multi-process and multi-threaded?

Bloom: The multi-threading gives us greater scalability. I have actually seen an AIX box go from being saturated at 500 connections to being saturated at more than 1000. As for disadvantages, you lose some robustness with this model. If a module segfaults in 1.3, you lose one connection – the connection currently running on that process. If a module segfaults in 2.0, you lose N connections, depending on how many threads are running in that process, which MPM you have chosen, etc. However, we have different MPMs distributed with the server, so a site that only cares about robustness can still use the 1.3 pre-forking model. A site that doesn’t care about robustness, or that only has VERY trusted code, can run a server that has more threads in it.

van Gulik: In other words, Apache 2.0 allows the webmaster to make his or her own tradeoffs between scalability, stability and speed. This opens a whole new world of Quality of Service (QoS) management. Another advantage of these flexible process-management models is that integration with languages like Perl, PHP, and in particular Java can be made cleaner and more robust without losing much performance. Large e-commerce integration projects in particular will be significantly easier.

Boardwatch: What is the Apache Portable Runtime? What effect does this have on the code, and on portability?

Bloom: The Apache Portable Runtime is exactly what it says. 🙂 It is a library of routines that Apache uses to make itself more portable. This makes the code much more portable and shrinks the code size, making it easier to maintain. My favorite example is apachebench, a simple benchmarking tool distributed with the server. AB has never worked on anything other than Unix. We ported it to APR, and it works on Unix, Windows, BeOS, OS/2, etc., without any extra work. As more platforms are ported to APR, AB will just work on them, as will Apache. This also improves our performance on non-POSIX platforms. Apache on Windows, the last I checked, runs as fast as Apache on Linux.

Boardwatch: Can Apache 1.3.x modules be used with Apache 2.0? How will this affect things like Apache-PHP or Apache-FrontPage?

Bloom: Unfortunately, no. However, they are very easy to port. I have personally ported many complex modules in under an hour. The PHP team is already working on a port of PHP to 2.0, as is mod_perl. Mod_perl has support for some of the more interesting features already, such as writing filters in Perl, and writing protocol modules in Perl. FrontPage will hopefully be subsumed by DAV, which is now distributed with Apache 2.0.

van Gulik: Plus, it is likely that various Apache-focused companies – such as IBM, Covalent and C2/Red Hat – will assist customers with the transition through specific tools and migration products.

Boardwatch: Do you predict that server administrators used to Apache 1.3.x will have a hard time adjusting to anything about Apache 2.0? If so, what?

Bloom: I think many admins will move slowly to Apache 2.0. Apache 2.0 has a lot of features that people have been asking for for a very long time. The threading issues will take some getting used to, however, and I suspect that will keep some people on 1.3 for a little while. Let’s be honest: there are still people running Apache 1.2 and even 0.8, so nobody thinks that every machine running Apache is suddenly going to migrate to 2.0. Apache tends to do what people need it to do. 2.0 just allows it to do more.

Boardwatch: Who should/shouldn’t use the alpha or beta releases?

Bloom: The alpha releases are developer releases, so if you aren’t comfortable patching code and fixing bugs, you should probably avoid the alphas. The betas should be stable enough to leave running as a production server, but there will still be issues so only people comfortable with debugging problems and helping to fix them should really be using the betas. (Napster is using alpha 6 to run their web site)

Boardwatch: For server administrators, what guidelines can you give about who should or shouldn’t upgrade to Apache 2.0 when it becomes a final release? For what reasons?

Bloom: Personally, I think EVERYBODY should upgrade to Apache 2.0. 🙂 Apache 2.0 has a lot of new features that are going to become very important once it is released. However, administrators need to take it slowly, and become comfortable with Apache 2.0. Anybody who is not on a Unix platform should definitely upgrade immediately. Apache has made huge strides in portability with 2.0, and this shows on non-Unix machines.

van Gulik: I’d concur with Ryan insofar as the non-Unix platforms are concerned – even an early beta might give your site a major boost. As for established sites running on well-understood platforms such as Sun Solaris and the BSDs, I’m not so sure they will upgrade quickly or see the need; the forking model has proven to be very robust and scalable. Those folks will need features such as the filter chains to be tempted to migrate.

Boardwatch: Apache is the most popular web server out there, meeting the needs of many thousands of webmasters. What else is there to do? What is on the Apache web server team’s “wish list” for the future?

Bloom: Oh, what isn’t on it? I think for the most part, we are focusing on 2.0 right now, with the list of features that I have mentioned above. We are very interested in the effect of internationalization on the web, and on Apache in particular. There are people who want to see an async I/O implementation of Apache. I think that we will see some of the Apache group’s focus move from HTTP to other protocols that complement it, such as WAP and FTP. And I think finally we want to continue to develop good, stable software that does what is needed. I do think that we are close to hitting a point where there isn’t anything left to add to Apache that can’t be added with modules. When that happens, Apache will just become a framework to hang small modules off of, and it will quiet down and not be released very often.

Boardwatch: Anything else you’d like to add? 😉

Bloom: Just that Apache 2.0 is coming very close to its first beta, and hopefully not long after that we will see an actual release. The Apache Group has worked long and hard on this project, and we all hope that people find our work useful and Apache continues to be successful.

Running Linux Programs on FreeBSD

By Jeffrey Carl

Boardwatch Magazine, January 2001


Even though many server admins prefer BSD Unix, there’s no denying that Linux is “where it’s at” for third-party software development. So, what’s a BSD admin to do?

In the bygone misty past of Unix (in the era historians call “the early ’90s”), acceptance and market share were hampered by diverging standards that forced application developers to write for only one version of Unix. This (as the youngsters today say) “really sucked,” and resulted in the Unix wars that left many sysadmins hospitalized with “flame” wounds from heated Usenet exchanges. However, the tide has fortunately turned in favor of compatibility among the many *nixes.

To commercial software vendors, market share is everything. As such, *BSD demanding that software vendors develop for its OSes would be like your average Unix sysadmin saying, “If Cindy Crawford wants to go out, she can call me.” Fortunately, Jordan Hubbard of FreeBSD has indicated a prudent recognition that Linux’s API is becoming the default for third-party developers (independent software vendors, or ISVs) writing for free *nixes. Therefore, it’s better for BSD users to accept the Linux ABI (Application Binary Interface) than to force ISVs to choose between Linux and BSD. This, in my opinion, is a very pragmatic and wise attitude.

FreeBSD, NetBSD and OpenBSD all include options to allow Linux binaries to run on their systems (at least for the Intel x86 architecture; on other architectures, your mileage may vary). For OpenBSD and NetBSD, you can check your system documentation or see their respective websites (www.openbsd.org and www.netbsd.org); for the purpose of this article, we’ll look at Linux binary compatibility on FreeBSD for x86. NOTE: Much of the information on the inner workings of Linux compatibility came from information graciously provided by Terry Lambert ([email protected]).

Under FreeBSD, Linux binary compatibility is managed by creating a “shadow” Linux file system with the requisite programs and libraries, and re-routing the Linux program’s system calls into this file system. These libraries then deal with a Linux kernel embedded into the FreeBSD kernel.

Setting Up Linux Compatibility

An “executable class” loader for FreeBSD has been under development since 1994 (although when the project was started, Linux wasn’t one of the primary target OSes). There are three essential items that enable this functionality.

The first is a KLD (“Kernel LoaDable”) object called linux.ko, which is essentially a Linux kernel that can be loaded dynamically into the FreeBSD kernel. As a KLD, it can be loaded or unloaded without rebooting, using the kldload and kldunload commands. To check whether linux.ko is loaded properly on your system, use the kldstat command:

schnell# kldstat
Id Refs Address    Size     Name
 1    4 0xc0100000 1d9a60   kernel
 2    1 0xc1038000 3000     daemon_saver.ko
 3    1 0xc1822000 4d000    nfs.ko
 4    1 0xc1020000 10000    linux.ko

The second necessary item is the set of basic Linux libraries (pre-existing code that developers can link their programs to, so they don’t have to write every function from scratch) and binaries (like bash, cp, rm, etc.). These can be installed via the ports collection from /usr/ports/emulators/linux_base, which is based on a minimal Red Hat installation. As of this writing in late September, for the i386 architecture, the installed libraries matched a Red Hat 6.1-1 release including glibc 2.1.2-11, libc 5.3.12-31, glib 1.2.5-1, ld.so 1.9.5-11, libstdc++-2.9.0-24, gdbm-1.8.0-2 and XFree86 libs 3.3.5-3. 

To be able to use a wider range of Linux apps, you may also wish to install the Linux development tools, found in the ports collection at /usr/ports/devel/linux_devtools. Note that you’ll want to do this after installing linux_base, since linux_devtools requires rpm (the Red Hat Package Manager application, also available as a FreeBSD app through the ports collection) and /compat/linux/etc/redhat-release to be installed. This currently includes (also for i386) kernel-headers-2.2.12-20, glibc-devel-2.1.2-11, make-3.77-6, cpp-1.1.2-24, egcs (plus egcs-c++ and egcs-g77)-1.1.2-24, gdb-4.18-4, and XFree86-devel-3.3.5-3.

Also, any necessary binaries/libraries can be set up manually by installing them into their original paths under the FreeBSD directory /compat/linux (e.g., install the Linux /usr/X11R6/lib/libX11.so.6.1 library onto FreeBSD as /compat/linux/usr/X11R6/lib/libX11.so.6.1). This process is also necessary for installing libraries that aren’t part of the default ports collection sets.
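The shadow layout can be sketched with ordinary shell commands. In this sketch a scratch directory stands in for /compat/linux (the real thing requires root) and an empty file stands in for the actual Linux library:

```shell
# Stand-in for /compat/linux, to show the path-mirroring idea
COMPAT=./compat-linux-demo
mkdir -p "$COMPAT/usr/X11R6/lib"

# Placeholder for the real Linux libX11 shared library
touch libX11.so.6.1
cp libX11.so.6.1 "$COMPAT/usr/X11R6/lib/"

# The Linux path is reproduced verbatim under the shadow root
find "$COMPAT" -type f
```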

The third item is “branding” Linux ELF (“Executable and Linking Format”) binaries. Although the GNU toolchain now brands Linux ELF binaries with the OS name (so newer Linux binaries should work without branding), some older binaries may not include this information (although ELF has replaced a.out as the primary binary format since Linux kernel 2.x and FreeBSD 3.0). To do this, use the brandelf command (which sounds like a move from “Dungeons and Dragons” to poke elves with hot sticks) with the “-t” (type) flag like:

schnell# brandelf -t Linux /usr/local/games/doom

How Linux Compatibility Works

Originally, Unix supported only one “binary loader.” When you ran a program, the system examined the first few bytes of the file to see if it was a recognized binary type. If the file wasn’t the single binary type the loader recognized, the system would pass the file to the basic shell interpreter (/bin/sh) and try to run the program with that. Failing that, it would spit a nasty message back at you about how you couldn’t run whatever it was you were trying to run.

Nowadays, FreeBSD supports a number of loaders, including an ELF loader. (By the way, Linux binaries can be stored on either a FreeBSD UFS or a Linux ext2 file system, assuming you have mounted the ext2 file system the app is located on using the mount -t ext2fs command.) If the “brand” in the ELF file is “Linux,” a pointer in the binary’s “proc” structure is automatically set to point to /compat/linux/, and the binary is executed using the Linux libraries and binaries instead of the FreeBSD ones. If it isn’t a FreeBSD or Linux binary, and the list of loaders (including those that examine the first line for “#!” interpreters like Perl, Python or shell scripts) is exhausted without recognizing a type, FreeBSD attempts to execute the file as a /bin/sh script. If that attempt fails, the program is spit back out as “unrecognized.”

Linux and BSD Binaries: Both “Native?”

If a Linux program requires a binary or library that is not included under the Linux compatibility “area,” the FreeBSD tree is searched. Thus, if a Linux application expects the Linux version of /compat/linux/usr/lib/libmenu.so and doesn’t find it, FreeBSD checks whether the application can run with the FreeBSD /usr/lib/libmenu.so instead. Only if a compatible library is found in neither the “re-rooted” Linux-compatible filesystem (/compat/linux/whatever) nor the regular FreeBSD libraries is the executable rejected.

Because of this construction, it isn’t really accurate to say that Linux is “emulated” under FreeBSD, especially in the sense that, say, platforms are emulated under the MAME (www.mame.net) emulator, since it doesn’t involve a program which impersonates another platform on a binary level and translates all of its native system calls into local system calls. Linux binaries are simply re-routed to a different set of libraries, which interface natively with a (Linux kernel) module plugged into the FreeBSD kernel. Rather than “emulation,” this is largely just the implementation of a new ABI into FreeBSD.

Under the current implementation of binary compatibility, programs calling FreeBSD’s glue() functions are statically linked to FreeBSD’s libraries, and programs calling the Linux glue() functions are statically linked to the Linux libraries, so their execution depends on completely different code libraries; however, the importance of glue() is on the wane. In the future, this might change, which would seriously open the question of whether Linux apps under FreeBSD are any more or less “native” than apps written specifically for FreeBSD.

Running Linux Binaries on BSD

Currently, the major problem with running Linux binaries on FreeBSD is similar to the major problem with using new apps on Linux: discovering which libraries they require and downloading and installing them. Unfortunately for BSD users, the only certain way to do this right now is to have access to a Linux system which already has the app in question installed, and run the ldd command on the application to list the libraries needed. If you don’t have access to a Linux system that already has the program installed, your next best bet is to check info on the program’s website, or search for it at Freshmeat (www.freshmeat.net).
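Once you have ldd output from a Linux box, pulling out the library names is mechanical. As a rough sketch (the sample text below is typical glibc-era ldd formatting, not taken from any particular program, and the function name is my own):

```python
import re

# Extract shared-library names from ldd output so you know what to
# fetch for /compat/linux. Lines typically look like:
#   libtermcap.so.2 => /usr/lib/libtermcap.so.2 (0x2808e000)
def required_libs(ldd_output: str):
    return re.findall(r"^\s*(\S+\.so[\w.]*)\s*=>", ldd_output, re.MULTILINE)

sample = """\
    libtermcap.so.2 => /usr/lib/libtermcap.so.2 (0x2808e000)
    libc.so.6 => /lib/libc.so.6 (0x28093000)
"""
print(required_libs(sample))   # ['libtermcap.so.2', 'libc.so.6']
```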

Under current FreeBSD Linux binary emulation, most Linux binaries should work without problems. However, due to operating system differences, it won’t support binaries that make heavy use of the Linux /proc filesystem (which operates differently from FreeBSD’s /proc), that make i386-specific calls like enabling virtual 8086 mode, or that use a significant amount of Linux-specific assembly code. While some people have reported that certain Linux binaries run faster in this BSD environment than they do natively on Linux, such claims are anecdotal and should be taken with a grain of salt (at least until some real standardized testing is done). And while Linux binary emulation on FreeBSD shouldn’t be considered a major security risk, it is worth noting that a buffer overflow bug was discovered in August 2000 (www.securityfocus.com/frames/?content=/vdb/bottom.html%3Fvid%3D1628).

Popular Uses for Linux Compatibility

The most popular binaries for using Linux compatibility are the ISV applications that have been developed for Linux from popular commercial/semi-commercial vendors. These include programs like Sun’s StarOffice, Wolfram’s Mathematica, Oracle 8, and games like Bungie’s Myth II and id Software’s Quake III (there are already tutorials on installing Oracle 8.0.5 and Mathematica 4.x included in the FreeBSD Handbook; see below for more). 

Other popular installations include the Netscape Navigator 4.6-7.x browser. There are particular advantages to installing the Linux version even though a FreeBSD version already exists: the Linux version running under FreeBSD compatibility is reportedly more stable than the “native” FreeBSD version, and binaries for popular plug-ins (Flash 4, Real Player 7, etc.) are available for it, while “native” FreeBSD versions aren’t. Info on installing VMware for Linux on FreeBSD can be found at www.mindspring.com/~vsilyaev/vmware/.

Getting Help with Linux Compatibility

The primary reference on Linux binary emulation on FreeBSD is the “handbook” page at www.freebsd.org/handbook/linuxemu.html, including specific pages on Oracle (www.freebsd.org/handbook/linuxemu-oracle.html) and Mathematica (www.freebsd.org/handbook/linuxemu-mathematica.html). To keep up-to-date on Linux emulation under FreeBSD, subscribe to the mailing list freebsd-emulation (send e-mail to [email protected] with the text [in the body] subscribe freebsd-emulation).

For additional information on Linux compatibility (including instructions for older versions of FreeBSD), see www.defcon1.org/html/Linux_mode/linux_mode.html. For a list of Linux applications that are known to run on FreeBSD under Linux binary compatibility, see www.freebsd.org/ports/linux.html. For an example of setting up Word Perfect 8 in Linux compatibility – as well as an account of running Windows 2000 on top of VMware on top of Linux on top of FreeBSD – read the excellent article at BSD Today (www.bsdtoday.com/2000/August/Features252.html). You can find some (mildly outdated but still useful) instructions on installing StarOffice on FreeBSD at www.stat.duke.edu/~sto/StarOffice51a/install.html. And, for fun, there’s info on setting up an Unreal Tournament server on FreeBSD via Linux compatibility at www.doctorschwa.com/ut/freebsd_server.html.

Trying on Red Hat – Questions, Answers and Red Hat Linux 7

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, December 2000


Red Hat Linux is the unquestioned leader in the U.S. for Linux market share. According to research firm IDC, Red Hat shipped 48 percent of the copies of Linux that were purchased in 1999. Many smaller distributions are based on its components, and it’s often used as the “target” distro that third-party (especially commercial) software is developed for. Still, the company has found it difficult sometimes to reconcile its place in the Linux community with its place as the Linux standard-bearer on Wall Street. With the release of Red Hat Linux 7.0, is the company still on top?

Under the Hat

Red Hat has been around since the days when Linux distributions were loose, volunteer-driven projects with users numbering in the thousands. Red Hat gradually reached a position of great popularity, based on several factors. First, they put a significant amount of effort into creating ease of use for inexperienced admins, with semi-graphical installation/configuration programs and the Red Hat Package Manager (RPM) software installation system. Second, Red Hat shipped a solid distribution that tended to include the “latest and greatest” features. Lastly, Red Hat showed a genius for building and marketing a brand name, and getting a shrink-wrapped, user-friendly product into stores.

Once Linux began to appear on the radar scopes of investors, Red Hat became the high-profile “star” of the Linux world. Red Hat went public on August 11, 1999 and soared immediately – making paper fortunes overnight for some of the Linux world’s luminaries. The stock reached a dizzying high of 151, but later became a victim of the hype as Wall Street’s love affair with Linux cooled; as of October 1, RHAT was trading near its 52-week low of 15. 

The company has also faced the very difficult problem of maintaining its standing in the open source world (which is often distrustful of corporations and for-profit ventures) while pleasing its investors and shareholders. Like Robin Hood becoming the new Sheriff of Nottingham, they have found that it’s sometimes impossible to please all of your constituencies at once. Red Hat has made some occasionally unpopular moves, and faced criticism from the more evangelical Linux enthusiasts that it was trying to “co-opt” Linux standards as its own; however, the company has overall done a good job of pleasing the majority of its disparate audiences.

Introducing Red Hat Linux 7.0

As of this writing, Red Hat Linux 7.0 has been out for only a few days, but seems to be well-received. There appear to be some viable reasons to upgrade (even beyond the fact that RH 7 is supposed to be ready out of the box for a 2.4 kernel when it’s released).

Red Hat 7 is more of an evolutionary upgrade than a revolutionary one, and this will please some users and disappoint others. Still, this is generally a positive step as far as ISP/server users are concerned – when compatibility with hardware and a stable system are your prime requirements, “conservative” is never a bad approach. 

As Red Hat itself is quick to point out, the value of a distribution is in meshing functionality with the expertise and testing to make sure that all of its included parts “play nicely with each other” as much as possible. In the past, Red Hat has been (depending on your viewpoint) applauded or derided for being an “early adopter” (the company brushes aside this characterization in the interview below) of new versions of libraries and applications. 

One example of the tack they’ve taken with Red Hat 7: because Red Hat’s testers had stability concerns about KDE 2’s prerelease versions, KDE version 1.1 was included with the final release. On the other hand, some users (rightly or not) questioned the use of a non-“stable release” version of the GNU C Compiler (gcc 2.96 20000731) as the default compiler. Again, whether these are steps forward or backward is a matter of personal preference; you can’t please everybody. Still, it appears that Red Hat has worked hard to avoid the “buggy x.0 release” that some have complained about in the past.

The big differences in Red Hat 7 for desktop users are built-in USB support and the default inclusion of XFree86 4.0, with its (in my opinion, much-needed) major overhaul and modularization of the X server. Also, Sawfish is now used as the default window manager with GNOME rather than Enlightenment.

Overall, administration is roughly the same as with the 6.2 distro, with a few bugfixes and improvements here and there. The basic US package still includes two CDs (other geographical versions will include more); while this won’t please SuSE users who love the extra CDs full of applications, the included software still represents a pretty good sampler of the software you’d want (with the possible exception of “office suite” software). For more information on the included software, read the interview below.

Q & A with Red Hat 

The following is from an e-mail interview with Paul McNamara, VP, Products and Platforms for Red Hat.

Q: Could you give me a brief history of Red Hat?

A: Founded in 1994, Red Hat (Nasdaq:RHAT) is the leader in development, deployment and management of Linux and open source solutions for Internet infrastructure ranging from small embedded devices to high availability clusters and secure web servers. In addition to the award-winning Red Hat Linux server operating system, Red Hat is the principal provider of GNU-based developer tools and support solutions for a wide variety of embedded processors. Red Hat provides run-time solutions, developer tools, Linux kernel expertise and offers support and engineering services to organizations in all embedded and Linux markets.

Red Hat is based in Research Triangle Park, N.C. and has offices worldwide. Please visit Red Hat on the Web at www.redhat.com.

Q: What platforms does Red Hat support? What are the minimum requirements for Red Hat?

A: Red Hat supports Intel, Alpha, and SPARC processors. Minimum requirements for the Intel product are a 386 or better (Pentium recommended), 32MB RAM and 500MB free disk space.

Q: What is included with the newest release of Red Hat? (i.e., kernel version, version of Apache, Perl, Sendmail, etc. for other software packages you feel are important, plus what other third-party software do you add?)

A: A significant new feature of Red Hat Linux 7 is Red Hat Network. Red Hat Network is a breakthrough technology that gives customers access to a continuous stream of managed innovation. This facility will dramatically improve customers’ ability to extract maximum value from Red Hat Linux.

We ship the following: 2.2.16 kernel, Apache 1.3.12, openssl 0.9.5, [ed: openssh 2.1.1p4 is also included] sendmail 8.11.0. Complete package description can be found at www.redhat.com/products/software/linux/pl_rhl7.html.

Third party apps can be found at www.redhat.com/products/software/linux/pl_rhl7_workstation.html and www.redhat.com/products/software/linux/pl_rhl7_server.html.

Q: What are some configurations that Red Hat would recommend, or it really excels with?

A: While Red Hat Linux is a superior general purpose OS supporting a wide range of application segments, the most popular configurations are: web servers, secure web servers, database servers, internet core services (DNS, mail, chat, etc), and technical workstations.

Q: What is Red Hat’s ‘Rawhide’ development tree? Who is it suitable for?

A: The Rawhide development tree represents our latest development code drop. It is our next release, in progress. In the traditional software development model, the developing company provides the latest engineering build to its internal developers to use to drive the development effort forward. Since Red Hat uses the collaborative development style, our ‘internal’ development release is made available to the community. This release is intended for community OS developers and is not intended to be used by customers for production environments.

Q: When Linux is united by a common kernel, what is it that keeps Red Hat as the “number one” distribution? What would you say differentiates Red Hat from other Linux distros?

A: Your question is a lot like asking since all car makers use a four cycle internal combustion engine as the primary component, what differentiates Lexus from other cars? Note that all cars are essentially compatible (can be driven on the same roads, use the same fuel, and a driver trained to drive one brand of car can easily drive another brand). What sets our brand apart is the mix of features and the quality of the finished product. We process “raw materials” (the various packages) and turn them into a finished product. Our selection of the packages, the engineering we do to create an integrated product, and our ability to deliver a quality result make the difference.

Q: Red Hat has often been cited as an “early adopter,” moving to new library versions, etc. in its releases before other distributions do. Is this a fair characterization? What advantages and/or disadvantages does this have?

A: I’ve only heard us described in this way by a competitor. I don’t know what this means. We clearly drive an agenda, and others tend to follow.

Q: Red Hat obviously has many strengths. What users, if any, should *not* choose Red Hat? For what reasons?

A: We generally discourage people interested in a legacy desktop OS from purchasing Red Hat. Red Hat is designed for servers, technical workstations, and post-PC embedded devices. It is either (1) intended for internet and IT professionals who need a high performance, internet-ready OS, or (2) is designed to be built into post-PC consumer information appliances where the device manufacturer has integrated our product into the consumer product.

Q: What advice would you give for ISP administrators about when/when not to upgrade their servers running Red Hat?

A: Red Hat Linux is a different kind of OS. Customers can choose, on a feature by feature basis, which packages to upgrade and when. Through a subscription to the Red Hat Network, customers can receive proactive notifications when new features become available and can receive a continuous stream of managed innovation to give them strategic advantage.

Q: What is the Red Hat Certification program? What benefits does it offer to Internet server administrators?

A: The Red Hat Certified Engineer (RHCE) program is the leading training and certification program on Linux. RHCE is a performance-based certification that tests actual skills and competency at installing, configuring, and maintaining Internet servers on Red Hat Linux. Complete details can be found at www.redhat.com/training/rhce/courses/ and www.redhat.com/training.

RHCE program courses and the RHCE Exam are regularly scheduled at Red Hat, Inc. facilities in Durham, NC, San Francisco, CA, and Santa Clara, CA. Global Knowledge and IBM Global Services are Red Hat Certified Training Partners for the RHCE Program, offering RHCE courses and the RHCE Exam in over 40 locations in North America. Red Hat can also run Red Hat training on-site for 12 students or more.

Red Hat offers the most comprehensive Developer Training for systems programmers and application developers on Linux, as well as training on Advanced Systems and Advanced Solutions, including the only regularly scheduled training on Linux on IA-64 architecture.

Q: What plans does Red Hat have for the IA-64 platform?

A: Red Hat is a leading participant in the IA-64 consortium, and we intend to aggressively support this new platform by delivering Red Hat Linux concurrently with the availability of IA-64 hardware.

Q: What advantages would you cite for someone choosing Red Hat Linux over another server OS, like Windows 2000, Solaris or FreeBSD?

A: Red Hat offers a superior mix of reliability, performance, flexibility, total cost of ownership and application availability. We believe it is simply the best choice for deploying Internet infrastructure.

Q: Where could someone running an Internet server go for help and tips on Red Hat? Do you specifically recommend any advice?

A: There is a huge volume of information available for Red Hat Linux. Sources include a large selection of books available at leading book stores, on-line information from news groups and mailing lists, a worldwide network of Linux Users Groups (LUGs), on-line help in the form of man and info pages, and support offerings available directly from Red Hat and from www.redhat.com.

How to… Install Linux on a G3 Power Macintosh

By Jeffrey Carl

From MacAddict Magazine, March 1999

You’ve heard the hype about Linux – a free version of the powerful Unix operating system. You’ve heard it makes a great web server or file server; or you’ve heard that it makes a fast workstation. You’ve envisioned yourself being the envy of your friends with an un-crashable OS, doing cool (yet vaguely dirty)-sounding things like “tweaking your kernel.” And, being an unrepentant geek, you can’t wait to play around with it. So you’re ready to think different – really different – and try installing it.

First, the good news – despite what you’ve heard, Linux installation can be a very simple, non-intimidating process. Even better, Linux has become available for most Macintoshes – both PowerPC and 68k. With any luck, you could be up and running in about an hour.

Now, the bad news – Linux support isn’t perfect for all Mac-compatible machines and peripherals. Most importantly, you’re taking off the training wheels here – installing an operating system on your machine which isn’t officially supported, doesn’t come with much documentation, and can require you to mess around with the very guts of your machine in ways you had never imagined were possible. 

Getting ready to install Linux on a Power Macintosh G3

If you’re still with us after reading that last paragraph, then you’re stout of heart and soul – or you’re a glutton for punishment. Either way, let’s get started. For this example, we’ll be installing LinuxPPC 4 from a CD onto a Power Macintosh G3/266, with an external SCSI 1.5 GB hard drive set up as the Linux volume.

1. Get Prepared

1. First, make sure that you have a supported Macintosh for your Linux of choice (see sidebar) and a hard drive you can repartition to make a home for Linux. At least 400 MB of space is required, and 1.2 GB or more is recommended. You can put the Linux volumes on your MacOS drive, but you’ll have to wipe it clean and repartition it first. 

2. Back up all of your files. Really. You can get away without doing this if you’re installing Linux onto a fresh new disk (and you like living on the edge); but if you’re repartitioning your current MacOS drive, you’ll need to do this because you’ll be wiping your disk clean in the process.

2. Partition Your Disk

You’ll need to create several disk partitions for Linux (at least two; four or five are recommended). This comes from an old Unix tradition of placing files which seldom change (and important system files) on different partitions from frequently-changing user files, so that they’re less likely to be corrupted by frequent writes to the hard disk. The minimum number of partitions is two: one for swap (sort of Linux’s virtual-memory scratch disk) and one for / (“root,” your regular filesystem). It is recommended that you create these two partitions, as well as one for /usr (where most of Unix’s programs are installed) and one for /home (where users’ personal files are stored).
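As a purely illustrative example (the right split depends on your disk, your RAM and your habits – these sizes are my own guesses, not LinuxPPC recommendations), here is one way a 1.2 GB drive might be divided among the four recommended partitions:

```python
# Hypothetical partition plan for a 1.2 GB Linux disk. The sizes are
# illustrative only; adjust for your own disk and needs.
plan_mb = {
    "swap":  64,    # roughly 1-2x RAM on machines of this era
    "/":     150,   # root: kernel, /etc, and the base system
    "/usr":  600,   # most of Unix's programs live here
    "/home": 386,   # users' personal files get the remainder
}
print(sum(plan_mb.values()))   # 1200 MB in all
```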

1. Choose a drive-partitioning utility. Your choice here depends on what type of disk you’re going to use (SCSI or IDE/ATA). If you aren’t sure what type of disk you have, consult the documentation that came with your Mac. 

For SCSI disks, you can use the Apple_HD_SC_Setup program which came with your Mac to create the partitions and set them as the correct type (A/UX). If you have a non-Apple disk or a Mac clone, you can use the third-party utility that came with the drive or computer (like FWB Hard Disk ToolKit) for this. If you have an Apple IDE or SCSI drive, you’ll need to use the Apple Drive Setup utility that came with your Macintosh (make sure you have the newest version). If you use Drive Setup (as we’ll be doing here), you’ll also need the pdisk utility (included on the CD) to convert the HFS partitions you create to their proper type.

2. Open Drive Setup utility and choose the disk you’ll be partitioning. Note that you can’t use Drive Setup from your startup disk; you’ll have to partition another drive, or boot from your MacOS system CD.

3. Get the Tools

First, you’ll need to get Linux and the utilities for installing it. 

1. One of the nice things about “free operating systems” is that they’re just that – free. If you have a fast Internet connection, you can do an installation via FTP from the LinuxPPC site (ftp://ftp.linuxppc.org) or one of its mirrors, and it’s absolutely free. However, it may be easier for most users to order a CD from the good folks at LinuxPPC (http://www.linuxppc.org), which includes a recent distribution plus other programs and goodies for $XX plus shipping. In addition, if you buy a CD, you’re completely free to share it with as many people as you like.

2. Install BootX (included on your LinuxPPC CD), the utility for switching back and forth from MacOS to Linux when you boot your computer. Simply drag the BootX control panel onto your system folder to install it (you’ll need to reboot before you can use it).

If this doesn’t work for you, you can manipulate which OS you boot into through BootVars (http://url.goes.here, or included on the CD), a control panel which allows you to manipulate your Mac’s Open Firmware. However, this isn’t recommended – mucking around with Open Firmware has reduced more than one formerly confident Mac Jedi to bingeing on non-prescription cold medications in frustration (also, this option is no longer officially supported by LinuxPPC).

3. Drag two files from the CD onto your System Folder: vmlinux and ramdisk.image.gz. These files should stay at the “top” level of your System Folder (not inside any folders inside the System Folder).

4. Begin the Installation

1. Insert the LinuxPPC CD into your CD-ROM drive, and open your BootX control panel (double-click it or select it from your Control Panels menu in the Apple Menu).

2. Leave the “root device” field blank. Check the “Use RAM Disk” and “No video driver” options.

3. Click the “Linux” button to reboot your computer into the Linux Red Hat Installer.

5. Use the Red Hat Installer

[Sidebar]

Resources

The LinuxPPC website:

http://www.linuxppc.org

The LinuxPPC Installation Guide:

http://www.linuxppc.org/userguide

The MkLinux website:

http://www.mklinux.apple.com

The Linux Mac68k Project website:

http://www.mac.linux-m68k.org

The Linux on PowerPC newsgroup: comp.os.linux.powerpc

[Sidebar]

Which Macs Can I Get a Linux For?

LinuxPPC 5.0 supports:

Any PCI-based Power Mac, PowerBook or Macintosh clone (including iMac), as well as BeBoxes. “Blue-and-white G3” not supported yet. 

MkLinux 3.0 supports:

NuBus-Based Power Macs (6100, 7100, 8100, 9100), PCI Power Macs (7200, 7500, 7600, 8500, 9500, 7300, 8600, 9600), PCI Performas (4400, 5400, 5500, 6400, 6500), 20th Anniversary Mac, Desktop and Minitower Power Mac G3 (but not “blue-and-white G3” yet), PowerBooks (5300, 1400, 2400, 3400, G3, G3 Series)

LinuxMac68k supports:

Most 68030- and 68040-based Macs (but not most 68LC040-based). 68020-based Macs with an FPU (the Mac II, or those with an FPU emulator).

For complete, up-to-date lists, please refer to the website of your Linux of choice.