Why Microsoft Will Rule the World: A Wake-Up Call at Open-Source’s Mid-Life Crisis

By Jeffrey Carl

Boardwatch Magazine, August 2001

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche following but an influential one among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

In a Nutshell: The original hype over open-source software has died down – and with it, many of the companies built around it. Open-source software projects like Linux, *BSD, Apache and others need to face up to what they’re good at (power, security, speed of development) and what they aren’t (ease of use, corporate-friendliness, control of standards). They will either have to address those issues or remain niche players forever while Microsoft goes on to take over the world.

The Problem

There is a gnawing demon at the center of the computing world, and its name is Microsoft. 

For all the Microsoft-bashing that will go on in the rest of this column, let me state this up front: Microsoft has done an incredible job at what any company wants to do – leverage its strengths (sometimes in violation of anti-trust laws) to support its weaknesses and keep pushing until it wins the game. That’s the reason I hold Microsoft stock – I can’t stand the company, but I know a ruthless winner when I see it. I hope against reason that my investment will fail.

It has been nearly two years since I wrote a column that wasn’t “about” something, that was just a commentary on the state of things. Many of you may disagree with this, and assign it a mental Slashdot rating of “-1: Flamebait.” Nonetheless, I feel very strongly about this, and I think it needs to be said.

Here’s the bottom line: no matter how good the software you create is, it won’t succeed unless enough people choose to use it. Given enough time and the accelerating advances of competing software, I guarantee that under-used software will eventually die. You may not think this could ever be true of Linux, Apache, or any other open-source software that is best-of-breed. But ask any die-hard user of AmigaOS, OS/2 or WordPerfect, and they’ll tell you that you’re just wishing.

Sure, there are plenty of reasons these comparisons are imperfect; Amiga was tied to proprietary hardware, and OS/2 and WordPerfect are the properties of companies which must produce profitable software or die. “Open-source projects are immune to these problems,” you say. Please read on, and I hope the similarities will become obvious.

For the purposes of this column, I’m going to include Apple and MacOS among the throng (even though Apple’s commitment to open source has frequently been lip service at best), because they’re the best example of software that went up against Microsoft and failed.

You say it’s the best software out there for you – so why does it matter what anyone else thinks? It doesn’t, at first. But, slowly, the rest of the world makes life harder for you until you don’t have much choice.

In the Software Ghetto

Look at Apple, for example (disclaimer: I’m a longtime supporter of MacOS, along with FreeBSD and Linux). My company’s office computers are all Windows PCs; my corporate CIO insisted, despite my arguments for “the right tool for the job,” that I couldn’t get Macs for my graphics department. “We’ve standardized on a single platform,” is what he said. He’s not evil or dumb; it’s just that Windows networks are all he knows and is comfortable with.

Big deal, right? Most Mac users are fanatics. There’s a registered community of several million fellow Mac-heads out there that I can always count on to keep buying Macs and keep the platform alive forever, right? An installed base of more than 20 million desktops plus sales of 1.5 million new computers in the past year alone is enough for perpetual life, right?

Sure, until that number is fewer than the number of licenses that Microsoft decides it needs to keep producing MS Office for Mac. Right now, my Mac coexists with the Windows-only network at my office because I can seamlessly exchange files with my Windows/Office brethren. But as soon as platform-neutral file-sharing goes out the Window (pun intended) – which Microsoft could easily arrange by upgrading Office with a proprietary document format that can’t be decoded by other programs without violating the DMCA, or something asinine like that – I’m going to have to get a Windows workstation.

Or Intuit decides there just aren’t enough users to justify a Mac version of Quicken. Or, several years from now, Adobe decides it’s just not profitable to make a Mac version of Photoshop, InDesign or Illustrator … or releases critical new versions six months or a year behind the Windows version. I can still keep buying Macs … but I’ll need a Windoze box to run my critical applications. As more people do this, Apple won’t have the revenues to fund development of hardware and software worth keeping my loyalty. And I’ll keep using the Windows box more and more until I finally decide I can’t justify the expense of paying for a computer I love that can’t do what I need.

“Apple,” you say, “is a for-profit company tied to a proprietary hardware architecture! This could never happen to open-source software running on inexpensive, common hardware!”

Open Source with a Closed Door

Let’s step back and look at Linux. A friend of mine works as a webmaster at a company that recently made a decision about what software to use to track website usage statistics. His boss found a product which provided live, real-time statistics – which only ran on Windows with Microsoft IIS. My friend showed off the virtues of Analog as a web stats tool, but its reports were too complicated for his boss to decipher. Whatever arguments my friend provided (“Stability! Security! The Virtues of Open Development!”) were simply too intangible to outweigh the benefits his boss wanted, which this one Windows/IIS-only software package provided. So, they switched to Windows as the hosting environment.

There may come a day when you suggest an open-source software solution (let’s say Apache/Perl/PHP) to your boss or bosses, and they ask you who will run it if you’re gone. “There are plenty of people who know these things,” you say, and your boss says, “Who? I know plenty of MCSEs we can hire to run standardized systems. How do we know we can hire somebody who really knows about ‘Programming on Pearls’ or running our website on ‘PCP’ or whatever you’re talking about? There can’t be that many of them, so they must be more expensive to hire.” Protest as you might, there isn’t a single third-party statistic or study you can cite to prove them wrong.

If you ask the average corporate IT manager about open source, they’ll point to the previous or imminent failures of most Linux-based public companies as “proof” that open-source vendors won’t be there to provide paid phone support in two years like Microsoft will. 

I’m willing to bet that most of you out there can cite examples of the dictum that corporate IT managers don’t ever care about the costs they will save by using Linux. They are held responsible to a group of executives and users that aren’t computer experts, aren’t interested in becoming computer experts, and wouldn’t know the virtues of open source if it walked up and bit them on the ass. They want it to be easy for these people, and fully and seamlessly compatible with what the rest of the world is using, cost be damned. Say what you will – right now, there’s just no logical reason for these people not to choose Windows.

So maybe the number of Linux users drops down to the point where Mac users are now (still a significant number) – only the die-hard supporters remain. But how many of you Linux gurus out there don’t have a separate Windows box or boot partition to play all the games you like that aren’t developed for Linux because of its lack of users/market share? Well, what happens when the next killer app is Windows-only, and you use Linux less and less? Or when the next cool web hosting feature only exists on MS/IIS? Or as more MS Internet Explorer-optimized websites appear?

I’m not arguing that Linux or BSD would ever truly disappear (there are still plenty of OS/2 users out there). I am, however, saying that as market share erodes, so does development; and, over the long run – if things continue on the present course – Windows has already won.

The main point is this: niche software will eventually die. It may take a very long time, but it eventually will die. Mac or Linux supporters claim that market share isn’t important: look at BMW, or Porsche, which have tiny market shares but still thrive. The counterpoint is that if those cars could only drive on special roads, and the local Department of Transportation had to choose between building roads that 95% of cars could ride on or building separate roads for the few, those drivers would soon have nowhere to go. True, Linux/BSD has several Windows ABI/API compatibility projects and Macs have the excellent Connectix Virtual PC product for running Windows on Mac, but very few corporate IT managers or novice computer users are going to choose those over “the real thing.” And I’m willing to bet that those two groups make up 90% of the total computer market.

You can argue all you like that small market share doesn’t mean immediate death. You’re right. But it means you’re moribund. One of the last bastions of DOS development, the MAME arcade game emulator, is switching after all these years to Win32 as its base platform – because the lead developer simply couldn’t use DOS as a true operating platform anymore. It will take time, but it will happen. Think of all the hundreds of thousands (if not millions) of machines out there right now running Windows for Workgroups 3.11, OS/2, VMS, WordPerfect 5.1, FrameMaker, or even Lotus 1-2-3. They do what they do just fine. But, eventually, they’ll be replaced. With something else.

The Solution

All this complaining aside, the situation certainly isn’t hopeless. The problems are well known; it’s easier to point out problems than solutions. So, what’s the answer? 

For that 90% of users that will decide marketshare and acceptance, two things matter: visible advantages in ease of use, or quantifiable bottom-line cost savings. Note, for example, how Mac marketshare declined from 25% to less than 10% as the “visible” ease-of-use differential between Mac System 7 and Windows 95 shrank. Or, compare the cost of more-expensive Mac computers plus fewer support personnel against cheaper Windows PCs plus more (but certified, with predictable salary costs) support personnel.

Open-source software development is driven by programmers. Bless their hearts, they create great software but they’re leading it to its eventual doom. They need to ally firmly with their most antithetical group: users. Every open-source group needs to recruit at least one user-interface or marketing person. Every open-source project that doesn’t have at least one person asking the developers at every step “Why can’t my grandmother figure this out?” is heading for disaster. Those that do are making progress.

Similarly, open-source projects that have proprietary competitors or deal with some sort of industry standard are going to fall behind if they don’t take a Microsoft-esque “embrace and extend” approach of their own. If they don’t provide new APIs or hooks for new features (and there’s nothing against making these open and well-documented), Microsoft will when it releases a competing product (and, believe me, it will; wait until Adobe has three consecutive bad quarters and Microsoft buys them). The upshot is that open-source projects can’t just conform to standards that others with greater marketshare will extend; they need to provide unique, fiscally-realizable features of their own.

Although Red Hat has made steps in this direction, other software projects (including Apple, Apache, GNOME, KDE and others) should work much harder to provide some rudimentary form of certification process – a standardized qualification for support personnel. Otherwise, corporate/education/etc. users will have no idea what it costs to hire qualified support personnel.

Lastly, those few corporate entities staking their claims on open source should be sponsoring plenty of studies to show the quantifiable benefits of using their products (including the costs of support personnel, etc.). The concepts of “ease of use” or “open software” don’t mean jack to anyone who isn’t a computer partisan; those who consider computers to merely be tools must be shown why something is better than the “safe” choice.

Darwin Evolves – Apple’s BSD Hits the Prime Time

By Jeffrey Carl

Boardwatch Magazine, May 2001


In a Nutshell: DarwinOS, the core of Apple’s just-released MacOS X, is open-source and available as a free download. While it inherits many characteristics from current BSD Unixes, it’s also radically different – revealing its true identity as the direct descendant of the ahead-of-its-time NeXTSTEP operating system. Darwin has some rough edges and missing documentation, but offers some very interesting possibilities. Unfortunately, the fact that it’s intended for Apple hardware will limit its appeal to many ISPs and sysadmins.

By the time you read this, Apple’s MacOS X should be on the street – the first-ever consumer OS with a Mach microkernel/BSD Unix core. That core is called Darwin, and it’s an open-source project – available under the BSD-ish Apple Public Source License – that can be downloaded for free (MacOS X, which includes the new Mac GUI and lots of other goodies, costs $129).

I’ve talked about Darwin here before, when it was in its earlier stages, but wasn’t able to go into many specifics about what Darwin was really like for a Unix admin. Now that the version that ships with MacOS X 1.0 is here, let’s take a look at this remarkable OS change for Apple.

The Politics Behind Darwin

Darwin is the triumph of NeXT over Apple. The GUI-only, outmoded classic MacOS of Apple’s last 17 years is gone, and a tough, sexy Unix core has replaced it (although MacOS X includes a compatibility layer to run existing MacOS apps). To understand why – and to understand why a Unix admin might be very interested in an Apple OS (other than the “research project” MkLinux) for the first time in many years – it helps to know the politics behind the situation.

In case you haven’t been paying attention to Apple’s corporate soap opera for the last few years (or, more likely, just don’t care), here’s the short version. Apple was at its lowest depths in late 1996, having wasted more than six fruitless years trying to create a modern operating system to replace MacOS (which was still building on foundations laid in 1984). Then-Apple CEO Gil Amelio instead looked to buy a company that already had an advanced OS, and settled on NeXT, the failing company run by Apple’s exiled cofounder Steve Jobs. The brilliant, mercurial, occasionally tantrum-prone Jobs was instrumental in building Apple from Steve Wozniak’s garage into an industry giant. But in 1985, he was fired by his own board of directors, essentially for being a major jerk to everyone within a 50-yard radius. It was a very nasty “divorce,” and Jobs left in a huff to found NeXT.

The NeXTSTEP (later called OPENSTEP) operating system which powered NeXT’s computers was years ahead of its time – but shipped on underpowered, grossly overpriced computers (sound familiar?) and was rejected by the marketplace. NeXTSTEP was based on Unix and the Mach microkernel, and included a radical new GUI and a pretty cool object-oriented development framework. 

Microkernels (of which Mach is the most famous example) provided a much more elegant OS design than most OSes used (and still use), by moving everything but memory management and other lowest-level tasks out of the kernel. However, the overhead of this elegant and scalable design reduced performance in many “everyday” situations (it was akin to replacing a simple Excel spreadsheet with a full relational database), and microkernels were sidelined as academic curiosities. The NeXT object-oriented development kit was very advanced, but required knowledge of the relatively obscure Objective-C language and was largely ignored as well.

With Apple’s $400 million purchase of NeXT in 1997, it got not only the company’s OS but NeXT CEO Steve Jobs as well. Then-Apple CEO Gil Amelio thought he was getting a valuable figurehead/consultant in Jobs. But an ex-Apple employee who knew Jobs better than Amelio did predicted to Apple CTO Ellen Hancock that “Steve is going to f*** Gil so hard his eardrums will pop.” Sure enough, in July 1997, Gil Amelio was forced to resign and Steve Jobs once again was in charge of Apple. (Hancock resigned when Amelio did, and went on to become president of web hosting and datacenter giant Exodus.)

The rest is history. In the foreground, Jobs was introducing the fruit-colored, legacy-free iMac to the world, sparking Apple’s sales resurgence. In the background, nearly all of Jobs’ loyal NeXT troops were assuming the top posts at Apple and changing the company’s technology and direction.

In 1999, the hype about Linux and open source was at its height, and Apple felt the pressure to join the crowd. Since BSD and Mach – which formed the core of NeXTSTEP – were already open source, it wasn’t hard for the normally ultra-proprietary Apple to take the step of officially open-sourcing the core of the forthcoming MacOS X. The NeXTSTEP core of MacOS X officially became “Darwin,” and a developer community of Apple engineers, Macolytes and Unix hackers began to form around the project. Along the way it saw contributions from Quake designer John Carmack, a basic port to Intel x86 hardware, and a “1.0” version that has evolved significantly as MacOS X neared release.

It’s worth keeping in mind how much of a departure Darwin is from the old MacOS. It’s as if the “House that GUI Built” was taken over by the Unix geeks from Carnegie Mellon and handed over to the hacker community for safekeeping. NeXT was the “ugly stepchild” of the BSD family that nobody else noticed; now, it will claim more users than the rest of the family combined – probably in less than six months.

Getting In-Depth with Darwin

After all that, it’s time to get under the hood of Darwin as a Unix admin would approach it. Many reports have claimed erroneously (at times, I have said this as well) that Darwin was basically “FreeBSD on Mac.” In fact, it’s a bit of updated Mach, a bit of traditional Free/Open/NetBSD, and a lot of NeXTSTEP flavoring. 

Darwin uses the Mach microkernel, but attempts to solve some of its performance problems by moving much of the BSD functionality into the same “kernel space” (rather than in “userland” as a pure microkernel would). As such, it merges the two worlds in a way that is designed to keep the architectural elegance of a microkernel design while minimizing the performance overhead that microkernel process scheduling causes.

The first thing a BSD Unix admin will notice upon logging into Darwin is its familiarity. The default shell is tcsh, and nearly all of the customary /bin, /usr/bin and /usr/sbin utilities are there. In addition, you’ll find vi, pico, perl, sendmail, Apache and other typical goodies. Ports to Darwin of popular open-source Unix apps – from MySQL to XFree86 4.0.2 – are proliferating rapidly. From a typical user’s perspective, it’s almost indistinguishable from *BSD.

It’s only once you become root and muck around in the system’s administration internals that you start to notice what makes the system a true child of NeXTSTEP. You’ll notice in /etc (actually a link to /private/etc; see below for more information) that /etc/resolv.conf doesn’t contain what you would expect. Nor does /etc/hosts, /etc/group or /etc/passwd. Why? It’s because Darwin wraps many of the functions contained in separate system and network configuration files in *BSD into a single service (inherited from NeXT) called NetInfo. Why is this useful rather than just annoying?

In *BSD and Linux, a number of different services derive their information from separate text configuration files – each with its own syntax and options. There’s no global preferences database – for example, any application that doesn’t automatically know how to read all the items in /etc/resolv.conf can’t find out what name servers your computer is using, or a program that can’t parse the syntax of /etc/passwd doesn’t know which users are on your system. 
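To make the contrast concrete, here’s a minimal shell sketch of the per-file-parser problem – every classic config file demands its own one-off parsing logic (the sample data below is hypothetical, standing in for the real /etc files):

```shell
#!/bin/sh
# Each traditional Unix config file has its own syntax, so each needs
# its own ad-hoc parser. Hypothetical sample data:

cat > resolv.conf.sample <<'EOF'
nameserver 192.168.1.1
nameserver 192.168.1.2
EOF

cat > passwd.sample <<'EOF'
root:*:0:0:System Administrator:/root:/bin/sh
jcarl:*:501:20:Jeffrey Carl:/home/jcarl:/bin/tcsh
EOF

# One syntax: whitespace-separated keyword lines...
awk '/^nameserver/ { print $2 }' resolv.conf.sample

# ...and a completely different one: colon-delimited fields.
awk -F: '{ print $1 }' passwd.sample
```

NetInfo’s single queryable database replaces exactly this kind of ad-hoc, per-file scraping.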

Somewhat like the Microsoft Windows Registry, Darwin’s NetInfo provides a database of network and user settings that can be read by any other NetInfo-aware application. NetInfo supersedes the information in the traditional /etc/* configuration files, as well as being favored by system services. NetInfo is consulted not only by MacOS X-native applications, but also by traditional BSD/Unix applications as well (making it much easier to port these apps to Darwin). The Apple engineers have accomplished this by hooking a check into each libc system data lookup function to consult NetInfo if it’s running (by default, it’s only “off” in single-user mode). 

MacOS X’s GUI provides graphical tools for manipulating the NetInfo database; in Darwin, this can be done using the niutil and nicl commands (use man niutil and man nicl to see the syntax and options; it’s interesting to note that these man pages are dated from NeXT days). NetInfo can also “inherit” its settings from a “parent” NetInfo server, so you can create one server which has everyone’s account information on it, and all of its client machines will inherit that login info, network setup, et cetera (imagine a “family” of servers where users can log in interchangeably with the same accounts and settings).
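For the command-line-inclined, here’s a sketch of typical NetInfo administration. These commands run only on Darwin/MacOS X, the “testuser” record is purely hypothetical, and actual output varies by machine – treat it as an administration fragment rather than a portable script:

```shell
# List the NetInfo directories in the local domain ("." = local database)
niutil -list . /

# Read the NetInfo record for the root user -- roughly the data that
# /etc/passwd would have held on a traditional BSD system
niutil -read . /users/root

# Create a hypothetical user record and set a property on it
niutil -create . /users/testuser
niutil -createprop . /users/testuser shell /bin/tcsh

# nidump can render NetInfo data back out in classic flat-file format
nidump passwd .
```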

Like NetInfo settings, application preferences are stored in a global XML database; they can be manipulated from the command-line defaults program. Typing man defaults from the command line will give you an idea of how its structure works.
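A quick sketch of what that looks like in practice (again Darwin/MacOS X-only; the “com.example.myapp” domain and its keys are invented for illustration):

```shell
# Store preferences in the global XML-backed defaults database.
# The domain and keys here are hypothetical.
defaults write com.example.myapp DefaultServer -string "ftp.example.com"
defaults write com.example.myapp WindowCount -int 3

# Read a single key back, or dump the whole domain
defaults read com.example.myapp DefaultServer
defaults read com.example.myapp

# Remove the test domain when done
defaults delete com.example.myapp
```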

One welcome change since the MacOS X Public Beta: it is no longer necessary to reboot or log out and back in to change network settings. Anyone who has used *BSD/Linux on mobile computers, or changed network profiles in Windows NT, will appreciate this difference.

Darwin/MacOS X includes a /private directory at the root filesystem level, which contains the normal BSD /etc, /var and /tmp, plus the Darwin-specific /cores and /Drivers. It appears that (in behavior inherited from NeXT) this directory holds machine-specific information, so that the rest of the filesystem can be booted from a parent network server. This clearly plants the roots for Darwin or MacOS X-based systems to serve as “thin clients.”

As for performance, recent builds of Darwin work admirably on mid-range Mac hardware. The only real complaint I have about Darwin (my gripes with the MacOS X user interface could fill a small book, however) is its woeful lack of documentation. While many BSDs suffer a similar problem, their user communities have had time to fill in the gaps; the changes that Darwin makes to the traditional BSD model are largely known only to the Darwin community and old-school NeXT gurus.

The closer I look, the more Darwin (and MacOS X) appears to take up where Novell left off in the race to compete with Windows NT/2000 in the corporate network space. It may be that Apple is looking to go head-to-head with Windows Whistler/XP while nobody is looking. And any victory in that space for *nix is something to be cheered.

Darwin’s Dilemma

Unfortunately, most of the work that has gone into Darwin thus far (understandably) has been in developing it for recent Apple hardware. While an Intel x86 port exists (though its hardware support is still embryonic as of this writing), and older PowerPC Mac hardware support will undoubtedly extend over time, the current release of Darwin is not officially supported on Mac hardware older than the G3 Power Macs. This sadly eliminates (at least for now) using Darwin to make a great server out of any old Mac hardware you have sitting on a shelf.

When you buy a new Mac, you’re paying for things (like full MacOS X, an ATI Radeon or nVidia GeForce video card, optical mouse, etc.) that you aren’t going to care about as a server admin. While new Mac systems are surprisingly powerful considering their CPU clock speed (I would sooner compare a G4 to an UltraSPARC III than a Pentium III), you still won’t get the same performance dollar-for-dollar as you would with commodity x86 hardware and a free OS. 

As a result, buying a new Mac just to make into a Darwin server simply isn’t worth the money. If you love the GUI tools of MacOS X, it may be worthwhile (I’m personally salivating over the new Titanium PowerBook G4); otherwise, it still doesn’t make bottom-line dollars and sense to purchase a new Mac as a Darwin server.

As Darwin expands its processor base, this may change. In the meantime, it’s well worth your while to keep an eye on the Darwin project, and to get to know it, since some of its features are well worth adopting by the other BSDs and Linux.

Some of the preceding items stem from a series of articles that BSD guru Matt Loschert and I wrote for the BSD news site Daemon News (see www.daemonnews.org/200011/ and www.daemonnews.org/200012/ for excessively long and drawn-out versions). 😉 For great info on Darwin, you can skip its home page (www.opensource.apple.com) and go straight to Darwin Info (www.darwinfo.org). Also, check Xappeal (www.xappeal.org) and Apple’s Darwin lead Wilfredo Sanchez’s updates page (www.advogato.org/person/wsanchez). As always, let me know about any comments, corrections or suggestions you have, and I’ll publish them in a future column.

Linux 2.4: What’s in it For You?

By Jeffrey Carl

Boardwatch Magazine, April 2001


In a Nutshell: Linux’s day as a scalable server is here. The long wait for Linux kernel 2.4 made its release seem somewhat anti-climactic, so many of its new features have gone largely unnoticed. Although many of the changes were available as special add-ons to 2.2.x kernels before, the 2.4 kernel wraps them all together in a neat package – as well as integrating a number of great new features, notably in the networking, firewall and server areas.

Bringing Everybody Up to Speed

If you’re at all familiar with the Linux kernel and its upgrade cycle, you can skip the next several paragraphs and go on to make fun of the technical inaccuracies and indefensible opinions in the rest of the column. Everybody else should read these introductory paragraphs, and only then should they go on and make fun of the rest of the column.

The Linux kernel is the foundation of the Linux operating system, since it handles all of the low-level “dirty work”: managing processes and memory, I/O to drives and peripherals, networking protocols and other goodies. The capabilities and performance of the kernel in many ways circumscribe the capabilities and performance of all the programs that run on Linux, so its features and stability are critical.

In Linux, odd-numbered “minor” versions (like x.3) are unstable, developmental kernels where adding cool new features is more important than whether or not they make the system crash. New features are tested in these developmental kernels and, once the bugs are worked out, are “released” in the next-highest even-numbered minor version (like x.4), which is considered stable enough for normal users to run. Linux kernel 2.2.x was the “stable” kernel (while kernel 2.3.x was the developmental version) from January 1999 to January 2001, when the long-awaited 2.4.x became the new stable kernel (as of this writing, the most recent version was 2.4.2).
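The even/odd convention is simple enough to express in a few lines of shell; here’s a sketch (the helper function name is mine, not anything standard):

```shell
#!/bin/sh
# Classify a Linux kernel version string as "stable" or "development"
# using the even/odd minor-number convention described above.

kernel_branch() {
    minor=$(echo "$1" | cut -d. -f2)   # grab the minor version number
    if [ $((minor % 2)) -eq 0 ]; then
        echo "stable"
    else
        echo "development"
    fi
}

kernel_branch 2.4.2    # the then-current stable kernel
kernel_branch 2.3.51   # a kernel from the 2.3 development series
```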

What’s New, Pussycat?

The changes since Linux kernel version 2.2 largely reflect the expansion of Linux as it comes to be used on an ever-wider variety of hardware and for different user needs. It wraps in features required to run on tiny embedded devices and multi-CPU servers as well as traditional workstations. Improving Linux’s multiprocessing capabilities also required cleaning up a lot of other kernel parts so they can take advantage of (and stay out of the way of) multiple processors. To expand Linux’s acceptance in the consumer marketplace, the kernel includes drivers for a large number of new devices. And to hasten Linux’s acceptance in the server market (especially on the high end), its networking performance has been enhanced – notably in areas where earlier benchmarks had shown it losing to Microsoft’s Windows NT/2000.

With high-end server vendors (most notably IBM) embracing Linux, they have pushed for the kernel to include the features that would make Linux run reliably on high-end hardware. In addition to all the CPU types supported by Linux 2.2, the Intel ia64 (Itanium) architecture is now supported, as are the IBM S/390 and Hitachi SuperH (Windows CE hardware) architectures. There are optimizations not only for the latest Intel x86 processors, but also for their AMD and Cyrix brethren, plus support for Memory Type Range Registers (MTRRs/MCRs) on these processor types. Support for Transmeta Crusoe processors is built in (as you would expect, given that Linus Torvalds is a Transmeta employee). Whereas kernel 2.2 scaled well only up to four processors, 2.4 supports up to 16 CPUs.

As part of the “scalability” push, a number of previous limitations have been removed in kernel 2.4. The former 2 GB size limit for individual files has been erased. Intel x86-based hardware can now support up to 4 GB of RAM. One system can now accept up to 16 Ethernet cards, as well as up to 10 IDE controllers. The previous system limit of 1024 threads has been removed, and the new thread limit is set at run time based on the system’s amount of RAM. The maximum number of users has been increased to 2^32 (about 4.2 billion). The scheduler has been improved to be more efficient on systems with many processes, and the kernel’s resource management code has been rewritten to make it more scalable as well.

Improved Networking

Kernel 2.4’s networking layer has been overhauled, with much of the effort going into the improvements necessary for dealing efficiently with multiprocessing. Improved routing capabilities have been added by splitting the network subsystem into improved packet filtering and Network Address Translation (NAT) layers; modules are included to provide backward compatibility with kernel 2.0 ipfwadm- and 2.2 ipchains-based applications. Firewall and Internet protocol functions have also been added to the kernel.

Linux’s improved routing capabilities make use of a package called iproute2. They include the ability to throttle bandwidth for or from certain computers, to multiplex several servers as one for load-balancing purposes, or even to do routing based on user ID, MAC address, IP address, port, type of service or even time of day.
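The flavor of these capabilities can be sketched with a couple of iproute2-era commands. The device name, addresses, table number and rate below are placeholders, and the rules are illustrative rather than a working router configuration:

```shell
# Cap traffic leaving eth0 at 220 kbit/s with a token-bucket filter.
tc qdisc add dev eth0 root tbf rate 220kbit latency 50ms burst 1540

# Route one machine's traffic through an alternate gateway by
# consulting a separate routing table (table 100 is arbitrary).
ip rule add from 10.0.0.5 table 100
ip route add default via 10.0.0.254 table 100
```

Both commands need root and a 2.4 kernel built with the appropriate queueing and policy-routing options.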

The new kernel’s firewall system (Netfilter) provides Linux’s first built-in “stateful” firewalling system – one that remembers the state of previous packets received from a particular IP address. Stateful firewalls are also easier to administer with rules, since they automatically exclude many more “suspect” network transactions. Netfilter also provides improved logging via the kernel log system: it automatically flags things like SMB requests coming from outside your network, lets you set different warning levels for different activities, and can send certain warning-level items to a different destination (such as routing certain-level logging activities directly to a printer, so the records are physically untouchable by a cracker who could erase the logfiles).

The system is largely backward-compatible, but there’s plenty that’s new: Netfilter detects many “stealth” scans (say goodbye to the hacker tool nmap?) that Linux firewalls previously couldn’t, and blocks more DoS attacks (like SYN floods) by intelligently rate-limiting user-defined packet types. 
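As a taste of what Netfilter rules look like, here’s a minimal sketch using the 2.4-era iptables tool. These must run as root, and the two policies shown are illustrative examples, not a complete firewall:

```shell
# Stateful rule: accept packets belonging to connections this host
# already opened, or that are related to them.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Blunt SYN floods by rate-limiting new TCP connection attempts.
iptables -A INPUT -p tcp --syn -m limit --limit 5/second -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP
```

The `state` and `limit` matches are loadable Netfilter modules; a real ruleset would also set default policies and log before dropping.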

Under kernel 2.2 (using a model that is standard across most Unix variants), all Unix network sockets waiting for an event were “awakened” when any activity was detected – even though the request was addressed to only one of those sockets. The new “wake one” architecture awakens only one socket, reducing processor overhead and improving Linux’s server performance.

A number of new protocols have been added as well, such as ATM and PPP-over-Ethernet support. DECnet support has been added for interfacing with high-end Digital (now Compaq) systems and ARCNet protocols. Support for the Server Message Block (SMB) protocol is now built-in rather than optional. SMB allows Linux clients to file share with Windows PCs, although the popular Samba package is still required for the Linux box to act as an SMB server.
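For example, with the in-kernel SMB client support, mounting a Windows share looks something like this (the hostname, share name, mount point and username are placeholders):

```shell
# Attach the "public" share on a Windows box to a local directory.
mkdir -p /mnt/winbox
mount -t smbfs -o username=guest //winbox/public /mnt/winbox
```

This needs root and a reachable SMB server; going the other direction – serving files to Windows clients – still requires Samba.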

Linux 2.4 has a web server called khttpd that integrates web serving directly into the kernel (like Microsoft’s IIS on WinNT or Solaris’s NCA). While not intended as a real replacement for Apache, khttpd’s ability to serve static-only content (it passes CGI or other dynamic content to another web server application) from within the kernel memory space provides very fast response times.

Get On the Bus

Linux’s existing bus drivers have been improved as part of the new resource subsystem, plus significant improvements and new drivers (including Ultra-160!) for the SCSI bus support. Logical Volume Manager (LVM), a standard in high-end systems like HP/UX and Digital/Tru64 UNIX that allows volumes to span multiple disks or be dynamically resized, is now part of the kernel. Support is also there for ISA Plug-and-Play, Intelligent Input/Output (I2O, a superset of PCI), and an increased number of IDE drivers.

The device filesystem has been changed significantly in kernel 2.4, and the naming convention for devices has been changed to add new “name space” for devices. These device names will now be added dynamically to /dev by the kernel, rather than all potential device names needing to be present beforehand in /dev whether used or not. While backward-compatibility is intended, this may interfere with some applications (most notably Zip drive drivers) that worked with previous kernel versions.

New filesystems have been added, including a functional OS/2 HPFS driver, IRIX XFS (EFS), NeXT UFS with CD-ROM support, and NFS version 3. Support for accessing shares via NFSv3 is a major step forward, although Linux volumes will still be exported using NFSv2. Linux’s method for accessing all filesystems has been optimized, with the cache layer using a single buffer for reading and writing operations; file operations should now be faster on transfers involving multiple disks.

For the Masses

There are, of course, a large number of updates to Linux that are primarily oriented towards the desktop (rather than server) user. A generic parallel port driver has been added which enables abstract communication with devices; this can be used for things like Plug-and-Play (PnP) polling or splitting the root console off to a parallel port device (like a printer). The new Direct Rendering Manager (DRM) provides a “cleaned-up” interface to the video hardware and removes the crash-inducing problem of multiple processes writing to a single video card at once.

There are a wide variety of new graphics and sound card drivers (including support for speech synthesizer cards for the visually impaired). The UDF filesystem used by DVDs has been added, and infrared (IrDA) port support is also included. Support for Universal Serial Bus (USB) has been added but isn’t yet perfect (although, IMHO, whose USB implementation is?). PCMCIA card support for laptops is now part of the standard kernel rather than requiring a custom kernel version, but an external daemon will still be required for full support. FireWire/i.Link (IEEE 1394) support is there, as well as IEEE 802.11b wireless (Apple’s “AirPort,” Lucent’s “WaveLAN”).

Probably the most far-out “consumer”-level enhancement is that kernel 2.4 has added support for the rare infrared RS-219 standard, a management interface used by specialized remote controls for the car washes at Mobil and Amoco (and some other) stations! With the optional xwash software package, this can actually be used (on a laptop) to send signals for a “free” carwash. 

I’m kidding about that last one.

Is Anything Missing?

The 2.4 kernel itself does not have encryption technology built into it; that’s probably a wise decision, based on the various cryptography regulations of countries worldwide that might actually make it prohibitive to export or import the Linux kernel. Unlike the 2.2 kernel which included Java support automatically, you must specifically include it when building a 2.4 kernel.

Although Journaling File System (JFS) efforts have been underway for a while, their maturity was not sufficient to include in kernel 2.4. JFS systems – a major requirement for true mission-critical servers – record (“journal”) all of their operations (analogous to a transactional database), so advanced data recovery operations (such as after a crash or power loss during read/write operations) are possible. See IBM’s open-sourced JFS project (http://oss.software.ibm.com/developerworks/opensource/jfs/?dwzone=opensource) for more information and software availability.

For Mac Linux users, support for newer Mac Extended Format (HFS+) disks has not yet been added. As of this writing, the NTFS (Windows NT/2000 file system) driver can read but not write data from within Linux. Alas, support for Intel 8086 or 80286 chips is not present either.

Lastly, you should not immediately assume that things that worked with kernel 2.2 will always work with 2.4. Changes in the device filesystem and the block device API (block devices are non-serial devices – hard disks or CDs, for example – whose sectors can be accessed randomly rather than receiving input in order) may break compatibility with some existing drivers.

Getting In-Depth with Linux 2.4

In this column, I’ve only been able to touch the surface of the new functionality available in Linux kernel 2.4. The “definitive” (most frequently quoted) analysis of 2.4 kernel changes is an ongoing set of posts to the Linux kernel-developers list by Joe Pranevich. The (currently) most recent version can be found at http://linuxtoday.com/stories/15936.html.

There’s also a good “kernel 2.2 vs. 2.4 shootout” with specific application test results and details at http://www.thedukeofurl.org/reviews/misc/kernel2224 and upgrade instructions for kernel 2.2.x systems at http://www.thedukeofurl.org/reviews/misc/kernel2224/5.shtml.

For an excellent overview of Linux 2.4’s new firewalling capabilities, see the article at SecurityPortal (http://securityportal.com/cover/coverstory20010122.html). For great information on the new network system’s routing capabilities, check the HOWTO (http://www.ds9a.nl/2.4Networking/HOWTO//cvs/2.4routing/output/2.4routing.html). A detailed article on the new security features in 2.4 can be found at http://www.linuxsecurity.com/feature_stories/kernel-24-security.html.

For a more in-depth overview of the general features, read the IBM DeveloperWorks kernel preview part 1 (http://www-106.ibm.com/developerworks/library/kernel1.html) and part 2 (http://www-106.ibm.com/developerworks/library/kernel2.html).

For an interesting comparison between Linux 2.4 and FreeBSD 4.1.1 (ignoring many of the advanced features of the new Linux kernel and concentrating on common tasks), see Byte Magazine’s article (http://www.byte.com/column/BYT20010130S0010).

For the kernels themselves and the most recent change logs, visit http://www.kernel.org/pub/linux/kernel/v2.4/. For ongoing news about Linux 2.4 and life in general, see http://slashdot.org.

Webhosting with Free Software Cheat Sheet

By Jeffrey Carl

Boardwatch Magazine, April 2001


So you want to run a webserver without paying a dime for software, eh? Or you want to make sure you have the source code to all your webserving applications in case you get bored and decide to try and port them to your networked office Xerox copier? Well, you’re in luck; webhosting with free (open-source, or free as in “free speech”) and free (no cost, or free as in “free beer”) software isn’t just possible, it also provides some of the best tools out there at any price.

In case you’re new to the game, or you’re looking for alternatives to packages you’re using now, the following is a brief guide to some of the more popular options out there. Trying to condense the features of any OS or application down to a couple of sentences is inherently dangerous, and I’m sure that many fans of the software listed below will disagree with elements of my “Reader’s Digest Condensed (Large Print Version)” summaries. Still, the following – based on my experiences and those of others – should provide at least a basic idea of what’s out there and why you might – or might not – want to choose it.

Operating Systems

Linux vs. BSD:

These OSes show the characteristics of their development styles: BSD was developed by small teams, largely focused on server hardware. Linux has been developed by many more people for a wider range of purposes, with more focus on desktop/workstation use. 

BSD has been around longer and is (in some ways) more optimized for server use. Due to its hype, Linux has many more developers, and almost all new third-party software is available for Linux first. Linux has the edge in user-friendliness, because distributions are targeting new users; BSD is, for the most part, more for the “old school.” Linux has also been adopted by a number of server hardware vendors producing “integrated” solutions.

Ultimately, it’s a matter of what you feel most comfortable with. Either way, with commodity x86 hardware, your server components (RAM, drives, etc.) and network connection will affect your performance much more than your choice of Linux vs. BSD will.

• FreeBSD (www.freebsd.org)

Best known among the BSDs. Concentrates on x86 architecture, server performance, integration of utilities. Standout features include ports collection, sysinstall admin utility, Linux binary compatibility, frequent releases.

• NetBSD (www.netbsd.org)

BSD with a focus on porting it to as many platforms as possible and keeping code portable. Great for using old/odd hardware as a server. Infrequent releases, not as popular as other BSDs.

• OpenBSD (www.openbsd.org)

BSD with a focus on security. Still in the process of line-by-line security audit of the whole OS. Infrequently released, utilities/packages lag behind other OSes because of security audits, but it’s the #1 choice if security is your primary concern.

• Red Hat Linux (www.redhat.com) 

The number one U.S. distro, considered by many (rightly or wrongly) as “the standard.” As a result, it’s what many third-party/commercial Linux apps are tested against/designed for. Early adopter of new features in its releases; is on the cutting edge, but sometimes buggy until “release X.1.” Standout features: Red Hat Package Manager (RPM) installation, third-party support.

• SuSE Linux (www.suse.com)

The number one European distro. Favored by many because its six-CD set includes lots and lots of third-party software to install on CD. Less “cutting-edge” than Red Hat. Standout features include the YaST/YaST2 setup utility and the SaX X Windows setup tool.

• Slackware Linux (www.slackware.com)

Designed for experts: Slackware has no training wheels, and is probably the most “server-oriented” of Linux distros (maybe because of its close relationship to the BSDs). Not cutting-edge, few frills, but designed to be stable and familiar to BSD administrators.

• Linux Mandrake (www.linux-mandrake.com/en)

A solid, user-friendly distribution with good (but not great) documentation. Standout features include the DrakX system configuration utility and the DiskDrake disk partitioning utility.

• Debian GNU/Linux (www.debian.org)

The ideological Linux – supported entirely by users rather than a corporation, and including only free (as in the GNU definition) software. It’s GNU-approved and “ideologically pure,” but releases are infrequent and it’s not necessarily a good choice for beginners.

• Caldera OpenLinux (www.caldera.com/eserver)

Very user-friendly for new users. Standout features include LIZARD, its setup/configuration wizard.

• Corel LinuxOS (linux.corel.com)

By the time you read this, Corel will have sold its LinuxOS product to someone else, but the distro should remain the same. Ease of use for Windows converts is stressed, includes great SAMBA integration. Good for new users. Focus is mainly on desktop use.

Essentials

• Perl (www.perl.com)

From CGI to simple administration tasks, Perl scripts can cover a lot of territory. Perl is a must, and is practically part of Unix now. Check the Comprehensive Perl Archive Network (www.cpan.org) to find modules to extend Perl’s functionality.

• Perl’s cgi-lib.pl (cgi-lib.berkeley.edu) and/or CGI.pm (stein.cshl.org/WWW/software/CGI)

These are also “must-haves” for CGI scripts, whether you’re writing your own or using scripts found “out there” on the web.

• Sendmail (www.sendmail.org) or Qmail (www.qmail.org)

Free mailservers. Sendmail has the history and the documentation (a good thing, since its internals are famously complex), but Qmail has a less-complicated design, and a strong and growing band of followers.

• wu-ftpd (www.wu-ftpd.org)

A significant improvement in features over the classic BSD FTP daemon – for both BSD and Linux. Despite an older security flaw that was recently exploited by the “Ramen” Linux worm, it’s a very good program.

• OpenSSH (www.openssh.com/)

In this day and age, Telnet has become a liability for security reasons. There’s no reason not to migrate users who need a shell account to SSH. See www.freessh.org for a list of clients.

Web Servers

• Apache 1.3.x (www.apache.org/httpd.html)

The current king of web servers. Very good performance, stable enough to run on mission-critical systems. Very user-friendly to install and configure (due to comments in httpd.conf), but not always as easy as it should be to debug problems.

• Apache 2.x (www.apache.org/httpd.html)

Still in beta development, but may be final by the time you read this. It probably shouldn’t be used for mission-critical systems until it’s had a few months of time “out there” to find bugs after its final release. Version 2.0 will be much easier to add new protocols into (like FTP or WAP), and should have significantly better performance because of its multi-threaded nature.

• Roxen (www.roxen.com/products/webserver)

Roxen is much more than just a simple webserver – it includes its own web admin interface, secure server, and more. Used by real.com; shows promise but doesn’t have the acceptance level yet of Apache.

Secure Web Servers

Note: You may receive a “security warning” in most web browsers about your secure certificate if you generate your own secure certificate (free). For a non-free certificate created by an authority that most web browsers will accept without a warning, see VeriSign (www.verisign.com/products/site/ss/index.html), Thawte (www.thawte.com), Baltimore (www.baltimore.com/cybertrust), ValiCert (www.valicert.com/), Digital Signature Trust Co. (www.digsigtrust.com) or Equifax (www.equifaxsecure.com/ebusinessid) for more information.
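Generating your own certificate is a one-liner with the openssl command-line tool. A hedged sketch – the hostname and filenames below are examples, and the CN should be your server’s real name:

```shell
# Create a self-signed certificate and unencrypted key, good for
# one year. Browsers will warn about it, as noted above.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt -subj '/CN=www.example.com'

# Double-check what was generated.
openssl x509 -in server.crt -noout -subject -dates
```

Point your secure server’s configuration at the resulting server.crt and server.key files.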

• ApacheSSL (www.apache-ssl.org)

Focused on stability/reliability, and lacking in “bells and whistles” features. It’s simple and it works, but it lacks some features of mod_ssl and it isn’t updated very often.

• mod_ssl (www.modssl.org)

Originally based on ApacheSSL, mod_ssl is now largely rewritten and offers a number of extra features, plus better documentation. 

Microsoft Web Compatibility

• FrontPage Extensions for Unix (www.rtr.com/fpsupport)

On one hand, it allows you to host sites built and published with FrontPage on a Unix server. On the other hand, it’s possibly the biggest piece of junk Unix software ever created. Use it if you have to; avoid it if you can.

• Improved mod_frontpage (home.edo.uni-dortmund.de/~chripo)

Addresses a number of problems with mod_frontpage (www.darkorb.net/pub/frontpage), with extra security and documentation, support for Dynamic Shared Objects (DSOs), better logging, as well as (unverified) claims of increased performance.

• Apache::ASP (www.nodeworks.com/asp)

An alternative to the very expensive ChiliSoft or Halcyon ASP Unix solutions, using Perl as the scripting language for ASPs. Requires the Apache mod_perl.

• asp2php (asp2php.naken.cc)

As its FAQ says, “ASP2PHP was written to help you correct the mistake of using ASP.” Converts ASP scripts to PHP scripts for use with Apache/PHP.

Application Building

• Zope (www.zope.org)

A complete tool for building dynamic websites; there’s a (somewhat) stiff learning curve that may be too much for basic needs. Zope offers incredible functionality, and is well-suited to large projects and web applications; it may be overkill for simple scripting that could be done with PHP or Perl CGIs.

• PHP (www.php.net)

The favorite open-source tool for building dynamic websites, and the open-source alternative to ASP. Reliable, uses syntax that seems like a cross between Perl and C, and features native integration with Apache. Version 4 is thread-safe, modular, and reads then compiles code rather than executing it as it reads (making it much faster with large, complex scripts).

Database Software

Note: for a more in-depth comparison, I highly recommend the O’Reilly book MySQL and mSQL, as well as the article “MySQL and PostgreSQL Compared” (www.phpbuilder.com/columns/tim20000705.php3).

• MySQL (www.mysql.com)

The “Red Hat” of free relational database software. Well-documented, and its performance for most users is excellent, since it’s designed around fast “read” rather than “write” operations. It doesn’t offer “subselect” functionality, and tends to buckle under very heavy loads (more than about 15 concurrent connections), but it’s very fast and reliable for most sites.

• PostgreSQL (www.postgresql.org)

Has an active developer community, especially popular among the “GPL-only” crowd. Offers advanced features that MySQL doesn’t (subselects, transactional features, etc.), but traditionally wasn’t as fast for common uses and sometimes suffered data corruption. New versions appear to have remedied most of these deficiencies.

• mSQL (www.hughes.com.au) 

The first of the bunch, but appears to have fallen behind. More mature than MySQL or PostgreSQL, but may not have all of the features of its rapidly developing brethren.

Administration

• Webmin (www.webmin.com/webmin)

Fully featured web-based administration tool for web, mail, etc. Offers excellent functionality, but presents a potential security risk (I get really nervous about anything web-accessible which runs with root permissions).

Java Servlets

• Tomcat (jakarta.apache.org) and JServ/mod_jserv (java.apache.org)

Tomcat is an implementation of the Java Servlet 2.2 and JavaServer Pages 1.1 specifications that works with other web servers as well as Apache. JServ is an Apache module for the execution of servlets. The two work together to serve servlets and JSPs through Apache.

Website Search

• ht://Dig (www.htdig.org)

ht://dig is relatively simple to set up, and (with a few quirks) offers excellent searching capabilities. Easily customizable, and has a good “ratings-based” results engine.

• MiniSearch (www.dansteinman.com/minisearch)

A simple Perl search engine, which can also be run from the command line. Not as fully featured as ht://dig, but good enough for basic needs.

Web Statistics

• analog (www.statslab.cam.ac.uk/~sret1/analog)

Analog is extremely fast, reliable and absolutely filled with features. Its documentation is a bit confusing for beginners, however, and it takes some configuration to make it pretty.

• wwwstat (www.ics.uci.edu/pub/websoft/wwwstat)

A no-frills, simple statistics analysis program that delivers the basics.

Other Goodies

Configuration Scripts:

• Install-Webserver (members.xoom.com/xeer)

• Install-Qmail (members.xoom.com/xeer)

• Install-Sendmail (members.xoom.com/xeer)

Shopping Carts:

• Aktivate (www.allen-keul.com/aktivate)

Aktivate is an “end-to-end e-commerce solution” for Linux and other Unixes. It is targeted at small-to-medium-sized businesses or charities that want to accept credit card payments over the Web and conduct e-commerce. 

• OpenCart (www.opencart.com)

OpenCart is an open source Perl-based online shopping cart system. It was originally built to handle the consumer demands of Walnut Creek CDROM, was later expanded to also work with The FreeBSD Mall, and was finally developed to be used by the general public.

• Commerce.cgi (www.careyinternet.com)

Commerce.cgi is a free shopping cart program. Included is a Store Manager application to update program settings, and you can add/remove products from the inventory through a web interface.

Message Boards:

• WaddleSoft Message Board (www.ewaddle.com)

WaddleSoft is a message board system that includes polls, user registration, an extensive search engine, and sessions to track visitors.

• MyBoard (myboard.newmail.ru)

MyBoard is a very easy, lightweight web messageboard system. It also has some extended features, such as search and user registration.

• NeoBoard (www.neoboard.net)

NeoBoard is a Web-based threaded message board written in PHP. It includes a wide variety of advanced features for those comfortable with PHP.  

• PerlBoard (caspian.twu.net/code/perlboard)

PerlBoard is a threaded messageboard system written in Perl. It is very easy to use and set up, and has been time-tested for the past several years on the site it was originally written for.

• RPGboard (www.resonatorsoft.com/software/rpgboard)

RPGBoard is a WWWBoard-style message board script. It includes a list of features as long as your arm, and is well worth checking out for those who need a rather advanced message board.

Notes from the Underground

If you see a favorite package here that I’ve overlooked, or would like to offer comments on any of the package descriptions, e-mail me at [email protected]. I’ll update this list with more information for a future column.

The Web Server First Aid Kit

By Jeffrey Carl

Boardwatch Magazine, March 2001


It’s a sad fact that most system administration learning is done in the minutes and hours after you say the words, “Wow. I’ve never seen something get broken that way before.” Learning to be a sysadmin means that you discover how to fix all the problems that pop up, until you find a problem you’ve never run into before. Then you scramble to learn how to fix that, and you’re fine until the next new Unidentified Weird Thing™ happens. And so on.

Fortunately, about 90 percent of Unix web/mail/etc. server problems can be discovered or fixed with just a few tools – much like 90 percent of all household repairs can be done with a screwdriver, a wrench or a baseball bat. Knowing just a few likely trouble spots and troubleshooting tools can help you resolve a lot of that unidentified weirdness without getting so frustrated that you want to rip the hard drives out of the computer and make refrigerator magnets out of them.

The key here is that unlike cars or girlfriends, everything that goes wrong on a Unix system happens for a clearly defined reason. While that reason may sometimes be freakish or undocumented, it’s almost always one of a few fairly common issues. 

So, with that in mind, we’re going to take a look at the Handy Tools and the Usual Suspects – the top commands and tools to use, and common places to look that will at least shed a clue on most server problems. I’m going to use FreeBSD as the example system – but most other BSDs and Linux can use the same tools, even if they act slightly differently or are located in a different place in the filesystem.

The Handy Tools

• When a server is responding slowly, you need to figure out whether the problem is on the server or in the network. After you’ve logged in to the server and become root, your first stop should be uptime.

The important part of the information it provides is the server’s load averages – shown for the last one, five and 15 minutes. If the load average is high (above two or three), the most likely cause of the slowness is one or more “runaway” processes or some other processes extensively utilizing the system. If the load average isn’t high, then you’re probably looking at a networking issue that is slowing access to the server.
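For example (on Linux boxes the same figures are also readable straight from the kernel; FreeBSD keeps them in a sysctl rather than /proc):

```shell
uptime                           # ends with the 1-, 5- and 15-minute load averages
# Linux only: the raw numbers behind uptime's report.
cat /proc/loadavg
awk '{print $1}' /proc/loadavg   # just the one-minute average
```

A one-minute average well above the 15-minute one means the load spike is recent and may still be climbing.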

• If you’ve found that a high load average is the likely culprit, turn to top. The top command lists the server’s processes in order of CPU and memory utilization.

By default, top shows the top 10 processes, or you can use it in the form top N, where N is the number of processes you wish it to show. If you have one or more “runaway” processes (like a stray tcsh process left over from an improperly terminated login session), you can quickly identify it and issue a kill or kill -9 (which effectively means “I don’t care what you think you’re doing, just shut up and go away”) command to the process ID number (PID) of the runaway.
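The kill sequence can be rehearsed safely with a throwaway background job standing in for the runaway (the sleep process below is just a stand-in):

```shell
sleep 300 &                      # pretend this is the runaway tcsh
pid=$!

kill "$pid"                      # polite TERM first
wait "$pid" 2>/dev/null || true  # reap it so no zombie lingers

if kill -0 "$pid" 2>/dev/null; then
    kill -9 "$pid"               # the "shut up and go away" option
else
    echo "runaway is gone"
fi
```

On a real system you’d get the PID from top or ps rather than `$!`.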

• For a more complete listing of the processes that are running on your computer, use ps. The ps -auxw command (on BSD-based systems; ps -ef on System V-based systems; the ps on most Linuxes will accept either) will show all system processes owned by all users, whether active or background.

You can use this to find any active process and get its PID if you need to “re-HUP” or kill it. You can find processes for a single server by using ps in combination with the venerable grep, such as finding all Apache processes by using ps -auxw | grep httpd | grep -v grep. Compare the server’s “hard” and “soft” process limits (the “hard” limit is set when Apache is compiled; the “soft” limit is set in the [apache_dir]/conf/httpd.conf file for recent versions) to the number of active processes. If those numbers are close to being equal, consider either upgrading your hardware or reconfiguring/recompiling Apache with higher limits.
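A slightly tidier version of that pipeline brackets the first letter of the process name, which keeps grep from matching its own entry in the process list (httpd here stands for whatever daemon you’re counting):

```shell
# Equivalent to "grep httpd | grep -v grep" in one step:
ps auxw | grep '[h]ttpd' | wc -l
```

The trick works because the literal string “[h]ttpd” on grep’s own command line doesn’t match the pattern [h]ttpd.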

• If you’re worried that a user on your system is running an unauthorized program, hacking the system or otherwise foobaring things, then w is a simple check.

The w command lists active users on the system and what they’re doing. If any of them are performing unauthorized activities, simply kill that user’s shell and use vipw to either give them a password of “*” (the second colon-separated field, immediately after the username) or assign them a shell (the last field of each user’s line) of /sbin/nologin until you have sorted out what they were doing and whether it violated your policies. A kill -9 may be necessary for “phantom” or “zombie” processes that were left running after improper logouts.

• If your problem is a crashed or non-starting Apache webserver, use the built-in apachectl command to work out the issue. It’s generally installed in the bin subdirectory of the Apache installation; if this isn’t in your shell’s command path, you may need to specify the full path to this command. Aside from the basic apachectl start and apachectl stop commands, one of the more useful options is the apachectl configtest command, which performs a basic evaluation of Apache’s httpd.conf configuration file (where almost all options are specified for Apache 1.3.4 and later). 

Unfortunately, apachectl is notorious for providing “okay” readings when some configuration problems are still present (most notably when a directory specified for a virtual host is not found or not readable, which causes Apache to fail). For these situations, you’ll need to consult your Apache error logs (see below). Also, apachectl consults the file /var/run/httpd.pid to find its originating process; if this PID is different, the apachectl stop command won’t work. In these cases, find the httpd process owned by root using ps (this will be the “parent” Apache process) and kill that process.

• Your first tool for diagnosing whether a problem may lie in the server’s network connection rather than on the server itself is ping. Using ping to test the connection to a server is a common test, but some problems (such as an error in duplex settings between a server and its switch) may not show up using ping normally. If a ping to a server appears normal but you suspect a network error is involved, try using ping with larger-than-normal packet sizes. The default size of the data packet used by ping is only 56 bytes, but many errors will only show up when large ping packets (2048 bytes or greater) are used. Use the -s flag with ping to specify a larger packet size (use the -c option to specify the number or “count” of pings to send).

With large packet sizes, a longer-than-usual round-trip time is normal, but excessively long times or packet loss are good indicators that there is a network configuration problem present. Try sending large ping packets for at least a count of 50, and compare the results with a long-count ping with normal packet sizes.

• If a network misconfiguration between a server and its switch (or router) is possible, you’ll want to look at the status of your server’s network connections: use netstat -f inet. Netstat will show you which ports or services on your server are open and active, as well as which foreign host is connecting to the port or service in question. 

If you’re concerned that your server is being attacked across the network, this will generally show up as excessive usage of the memory that the kernel has allocated to networking. To find this out, use the -m (memory buffer, or “mbuf”) flag for netstat. If you find that normal services like httpd aren’t heavily burdened but the percentage of memory allocated to networking is still high (90 percent or more), consider shutting down network services or ports that are open and may be under attack or being misused.

• If a network issue is the likely cause of your problem, use ifconfig (the interface configuration command) to check how the NICs (Network Interface Cards) on the server are set up.

You can ignore the lo0 (loopback) interface; what really matters are the settings for your server’s NIC(s), listed by their driver type. The output will show each interface’s IP address(es), netmask, duplex and speed, as well as which driver is in use.

Very frequently, a server which otherwise boots up and appears fine but has a problematic or nonexistent network connection can be fixed with a check of its network interface configuration. Double-check the options set for your default ifconfig startup settings in the file /etc/rc.conf (at least in recent versions of FreeBSD). Frequently, a slow network connection is the result of a NIC configured for a different speed or duplex than its switch/router port, especially when “autosense” options are set but fail for whatever reason. This can often be remedied by resetting the connection with a simple ifconfig [interface] down followed by an ifconfig [interface] up [options] command.
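As an example, the rc.conf fragment below pins an interface to a fixed speed and duplex instead of trusting autosense. The device name (fxp0) and addresses are purely illustrative; check ifconfig -a for your interface names and your NIC driver’s man page for the media keywords it accepts:

```
# /etc/rc.conf (fragment) - force 100 Mbps full duplex rather than autoselect
ifconfig_fxp0="inet 192.168.1.10 netmask 255.255.255.0 media 100baseTX mediaopt full-duplex"
```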

• Weird errors with files or services may sometimes be caused by a full hard drive (preventing the system from writing logfiles or other operations). Use the df command to show your server’s mounted partitions and their available capacity.
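If you’d rather have the full partitions flagged for you, a short awk filter over df’s output does the trick. This is a sketch assuming df -k’s usual column layout (capacity in column five, mount point in column six); the 90 percent threshold is an arbitrary choice:

```shell
# warn_full: print a warning for each filesystem at or above the given use%.
warn_full() {
  threshold=${1:-90}
  awk -v t="$threshold" 'NR > 1 {
    gsub("%", "", $5)
    if ($5 + 0 >= t) print "WARNING:", $6, "is", $5 "% full"
  }'
}

# Typical use:
#   df -k | warn_full 90
# Demonstrated here with canned df-style output:
printf 'Filesystem 1K-blocks Used Avail Capacity Mounted\n/dev/ad0s1a 100 95 5 95%% /\n' | warn_full 90
# prints: WARNING: / is 95% full
```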

• A whole nasty horde of seemingly inexplicable problems is caused by simple issues with file permissions. In these cases, the humble ls command can be your best ally. Using ls –l will show you the permissions settings for files in any directory. Common issues include missing “x” (executable) permissions on CGI scripts or applications, or directory permissions which don’t allow “r” (reading) or “x” (entering). 
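The fix is usually a single chmod. The scratch-file sketch below shows the before and after; mode 755 (owner read/write/execute, everyone else read/execute) is the customary choice for CGI scripts:

```shell
# A CGI script uploaded without its execute bit...
touch myscript.cgi
chmod 644 myscript.cgi
[ -x myscript.cgi ] || echo "not executable"

# ...fixed by granting read/execute to everyone (755):
chmod 755 myscript.cgi
[ -x myscript.cgi ] && echo "executable now"
rm myscript.cgi
```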

The Usual Suspects

• When bizarre things are happening, the system logfiles are the first place to check. Under BSD, you’ll find these in /var/log; the first place to look is /var/log/messages, where syslog deposits all the messages that aren’t specified to go into another logfile. In fact, the entire /var/log directory is home to the messages for different services – from telnet/SSH or FTP logins to SMTP and POP connections to system errors and kernel messages. 

Checking these files can often provide the answers to 90 percent of “I can’t do X” messages from desperate system users. Check /etc/syslog.conf to see where the syslog daemon is sending the errors it receives; check the config files for individual applications or services to see which logfiles they’re writing to.
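For reference, syslog.conf lines pair a facility.level selector with a destination. The fragment below is loosely modeled on FreeBSD’s stock file and is illustrative only; your own file is the authority on where things actually go:

```
# /etc/syslog.conf (fragment)
# facility.level        destination
mail.info               /var/log/maillog
auth.info               /var/log/auth.log
*.notice                /var/log/messages
```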

• If the webserver won’t start, but there aren’t any clues elsewhere, immediately look at the webserver logfiles. Under Apache, these are generally located in the file [apache_dir]/logs/error_log or something similar. Even if apachectl runs and fails while printing a simple message like “httpd: could not be started” (this message is the winner of the “FrontPage Memorial ’Duh’ Award for Unhelpful Error Handling” three years in a row), the problem will almost certainly be logged to Apache’s error file.

For problems with specific virtual hosts on a server, check wherever their logfiles are located. This is generally specified inside that domain’s <VirtualHost> … </VirtualHost> container in the httpd.conf file. If no error logfile is specified for that virtual host, then errors will be logged to the main Apache error file.
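Giving a virtual host its own error log is one ErrorLog directive inside the container. The domain and paths below are placeholders:

```
# httpd.conf (fragment) - a per-host error log
<VirtualHost 192.168.1.10>
    ServerName www.example.com
    DocumentRoot /usr/local/www/example
    ErrorLog /usr/local/www/example/logs/error_log
</VirtualHost>
```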

Getting a Second Opinion on First Aid

Of course, all of the above are merely a few recommendations derived from my experience; if you have found other “First Aid Tools” or “Usual Suspects” that you rely on for server administration, please let me know at [email protected] and I’ll include them in an upcoming column. 

The Next Apache

By Jeffrey Carl

Boardwatch Magazine, February 2001


The Apache webserver is (along with Linux and Perl) probably the most widely used open-source software in the world. After beginning as a set of patches to the NCSA http server (“a patchy server” was how it got its name), by 1996 Apache had become the most popular http server out there. According to Netcraft, Apache today powers 59 percent of all the webservers on the Internet, far more than the 20 percent share of the runner-up, Microsoft Internet Information Server. 

Apache’s last “full-point” release (Apache 1.0) came out on December 1, 1995 – five years ago. Naturally, there’s a lot of excitement about the long-awaited Apache 2.0, which should be in beta release by the time you read this. To find out what’s new with Apache 2, I asked the Apache Project’s Dirk-Willem van Gulik and Ryan Bloom. What follows are selected portions of an e-mail interview with these Apache team members:

Boardwatch: Why was the Apache 2.0 project started? What shortcomings in Apache 1.x was it created to address, or what missing features was it designed to implement?

Bloom: There were a couple of reasons for Apache 2.0. The first was that a pre-forking server just doesn’t scale on some platforms. One of the biggest culprits was AIX, and since IBM had just made a commitment to using and delivering Apache, it was important to them that we get threads into Apache. Since forcing threads into 1.3 would be a very complex job, starting 2.0 made more sense. Another problem was getting Apache to run cleanly on non-Unix platforms. Apache has worked on OS/2 and Windows for a long time, but as we added more platforms (Netware, Mac OS X, BeOS) the code became harder and harder to maintain. 

Apache 2.0 was written with an eye towards portability, using the new Apache Portable Run-time (APR), so adding new platforms is simple. Also, by using APR we are able to improve performance on non-Unix platforms, because we are using native function calls on all platforms. Finally, in Apache 2.0, we have implemented Filtered I/O. This is a feature that module writers have been requesting for years. It basically allows one module to modify the data from another module. This allows CGI responses to be parsed for PHP or SSI tags. It also allows the proxy module to filter data.

Boardwatch: What are the significant new features of Apache 2.0?

Bloom: Threading, APR, Filtered I/O. 🙂 And Multi-Processing modules and Protocol modules. These are two new module types that allow module writers more flexibility. A Multi-Processing module basically defines how the server is started, and how it maps requests onto threads and processes. This is an abstraction between Apache’s execution profile and the platform it is running on. Different platforms have different needs, and the MPM interface allows porters to define the best setup for their platform. For example, Windows uses two processes. The first monitors the second. The second serves pages. This is done by a Multi-Processing Module. 

Protocol modules are modules that allow Apache to serve more than just HTTP requests. In this respect, Apache can act like inetd on steroids. Basically, each thread in each process can handle either HTTP or FTP or BXXP or WAP requests at any time, as long as those protocol modules are in the server. This means no forking a new process just to handle a new request type. If 90% of your site is served by HTTP, and 10% is served by WAP, then the server automatically adjusts to accommodate that. As the site migrates to WAP instead of HTTP, the server continues to serve whatever is requested. There is no extra setup involved. I should mention now that only an HTTP protocol module has been written, although there is talk of adding others.

Boardwatch: How much of a break with the past is Apache 2.0, in terms of 1.) the existing code base, 2.) the administration interface, and 3.) the API for modules?

Bloom: I’ll answer this in three parts.

1) The protocol handling itself is mostly the same. How the server starts and stops, generates data, sends data, and does anything else is completely different. By adding threads to the mix, we realized that we needed a new abstraction to allow different platforms to start-up differently and to map a request to a thread or process the best way for that platform.

2) The administrative interface is the same. We still use a text file, and the language hasn’t changed at all. We have added some directives to the config file, but if somebody wants to use the prefork MPM (it acts just like 1.3), then a 1.3 config file will migrate seamlessly into 2.0. If the MPM is changed, then the config file will need slight modifications to work. Also, some of the old directives no longer do what they used to. The definition of the directive is the same, but the way the code works is completely different, so they don’t always map. For example, SetHandler isn’t as important as it once was, but the Filter directives take its place.

3) The module API has changed a lot. Because we now rely on APR for most of the low-level routines, like file I/O and network I/O, the module doesn’t have as much flexibility when dealing with the OS. However, in exchange, the module has greater portability. Also, the module structure that is at the bottom of every module has shrunk to about 5 functions. The others are registered with function calls. This allows the Apache group to add new hooks without breaking existing modules. Also, with the filter API, modules should take more care to generate data in sizeable chunks.

Boardwatch: What are the advantages and disadvantages (if any) of Apache 2.0’s multithreaded style? What does it mean to have the option of being multi-process and multi-threaded?

Bloom: The multi-threading gives us greater scalability. I have actually seen an AIX box go from being saturated at 500 connections to being saturated at more than 1000. As for disadvantages, you lose some robustness with this model. If a module segfaults in 1.3, you lose one connection, the connection currently running on that process. If a module segfaults in 2.0, you lose N connections, depending on how many threads are running in that process, which MPM you have chosen, etc. However, we have different MPMs distributed with the server, so a site that only cares about robustness can still use the 1.3 pre-forking model. A site that doesn’t care about robustness or only has VERY trusted code can run a server that has more threads in it.

van Gulik: In other words, Apache 2.0 allows the webmaster to make his or her own tradeoffs between scalability, stability and speed. This opens a whole new world of Quality of Service (QoS) management. Another advantage of these flexible process management models is that integration with languages like Perl, PHP, and in particular Java, can be made cleaner and more robust without losing much performance. Large e-commerce integration projects especially will be significantly easier.

Boardwatch: What is the Apache Portable Runtime? What effect does this have on the code, and on portability?

Bloom: The Apache Portable Runtime is exactly what it says. 🙂 It is a library of routines that Apache is using to make itself more portable. This makes the code much more portable and shrinks the code size, making the code easier to maintain. My favorite example is apachebench, a simple benchmarking tool distributed with the server. AB has never worked on anything other than Unix. We ported it to APR, and it works on Unix, Windows, BeOS, OS/2, etc. without any work. As more platforms are ported to APR, AB will just work on them, as will Apache. This also improves our performance on non-POSIX platforms. Apache on Windows, the last I checked, is running as fast as Apache on Linux.

Boardwatch: Can Apache 1.3.x modules be used with Apache 2.0? How will this affect things like Apache-PHP or Apache-FrontPage?

Bloom: Unfortunately, no. However, they are very easy to port. I have personally ported many complex modules in under an hour. The PHP team is already working on a port of PHP to 2.0, as is mod_perl. Mod_perl has support for some of the more interesting features already, such as writing filters in Perl, and writing protocol modules in Perl. FrontPage will hopefully be subsumed by DAV, which is now distributed with Apache 2.0.

van Gulik: Plus it is likely that various Apache-focused companies, such as IBM, Covalent and C2/RedHat, will assist customers with the transition with specific tools and migration products.

Boardwatch: Do you predict that server administrators used to Apache 1.3.x will have a hard time adjusting to anything about Apache 2.0? If so, what?

Bloom: I think many admins will move slowly to Apache 2.0. Apache 2.0 has a lot of features that people have been asking for for a very long time. The threading issues will take some getting used to, however, and I suspect that will keep some people on 1.3 for a little while. Let’s be honest, there are still people running Apache 1.2 and even 0.8, so nobody thinks that every machine running Apache is suddenly going to migrate to 2.0. Apache tends to do what people need it to do. 2.0 just allows it to do more.

Boardwatch: Who should/shouldn’t use the alpha or beta releases?

Bloom: The alpha releases are developer releases, so if you aren’t comfortable patching code and fixing bugs, you should probably avoid the alphas. The betas should be stable enough to leave running as a production server, but there will still be issues, so only people comfortable with debugging problems and helping to fix them should really be using the betas. (Napster is using alpha 6 to run their web site.)

Boardwatch: For server administrators, what guidelines can you give about who should or shouldn’t upgrade to Apache 2.0 when it becomes a final release? For what reasons?

Bloom: Personally, I think EVERYBODY should upgrade to Apache 2.0. 🙂 Apache 2.0 has a lot of new features that are going to become very important once it is released. However, administrators need to take it slowly, and become comfortable with Apache 2.0. Anybody who is not on a Unix platform should definitely upgrade immediately. Apache has made huge strides in portability with 2.0, and this shows on non-Unix machines.

van Gulik: I’d concur with Ryan insofar as the non-Unix platforms are concerned; even an early beta might give your site a major boost. As for the established sites running on well-understood platforms such as Sun Solaris and the BSDs, I am not so sure they will upgrade quickly or see the need; the forking model has proven to be very robust and scalable. Those folks will need features such as the filter chains to be tempted to migrate.

Boardwatch: Apache is the most popular web server out there, meeting the needs of many thousands of webmasters. What else is there to do? What is on the Apache web server team’s “wish list” for the future?

Bloom: Oh, what isn’t on it? I think for the most part, we are focusing on 2.0 right now, with the list of features that I have mentioned above. We are very interested in the effect of internationalization on the web and Apache in particular. There are people who want to see an async I/O implementation of Apache. I think that we will see some of the Apache group’s focus move from HTTP to other protocols that complement it, such as WAP and FTP. And I think finally we want to continue to develop good stable software that does what is needed. I do think that we are close to hitting a point where there isn’t anything left to add to Apache that can’t be added with modules. When that happens, Apache will just become a framework to hang small modules off of and it will quiet down and not be released very often.

Boardwatch: Anything else you’d like to add? 😉

Bloom: Just that Apache 2.0 is coming very close to its first beta, and hopefully not long after that we will see an actual release. The Apache Group has worked long and hard on this project, and we all hope that people find our work useful and Apache continues to be successful.

Running Linux Programs on FreeBSD

By Jeffrey Carl

Boardwatch Magazine, January 2001


Even though many server admins prefer BSD Unix, there’s no denying that Linux is “where it’s at” for third-party software development. So, what’s a BSD admin to do?

In the bygone misty past of Unix (in the era historians call “the early ‘90s”), acceptance and market share were hampered by the diverging standards that forced application developers to write for only one version of Unix. This (as the youngsters today say) “really sucked,” and resulted in the Unix wars that left many sysadmins hospitalized with “flame” wounds from heated Usenet exchanges. However, the tide has fortunately turned in favor of compatibility among the many *nixes.

To commercial software vendors, market share is everything. As such, if the *BSDs demanded that software vendors develop specifically for their OSes, it would be like your average Unix sysadmin saying, “If Cindy Crawford wants to go out, she can call me.” Fortunately, Jordan Hubbard of FreeBSD has indicated a prudent recognition that Linux’s API is becoming the default for third-party developers (Independent Software Vendors, or ISVs) targeting the free *nixes. Therefore, it’s better for BSD users to accept the Linux ABI (Application Binary Interface) than to force ISVs to choose between Linux and BSD. This, in my opinion, is a very pragmatic and wise attitude.

FreeBSD, NetBSD and OpenBSD all include options to allow Linux binaries to run on their systems (at least for the Intel x86 architecture; on other architectures, your mileage may vary). For OpenBSD and NetBSD, you can check your system documentation or see their respective websites (www.openbsd.org and www.netbsd.org); for the purpose of this article, we’ll look at Linux binary compatibility on FreeBSD for x86. NOTE: Much of the information on the inner workings of Linux compatibility came from information graciously provided by Terry Lambert ([email protected]).

Under FreeBSD, Linux binary compatibility is managed by creating a “shadow” Linux file system with the requisite programs and libraries, and re-routing the Linux program’s system calls into this file system. These libraries then deal with a Linux kernel embedded into the FreeBSD kernel.

Setting Up Linux Compatibility

An “executable class” loader for FreeBSD has been under development since 1994 (although when the project was started, Linux wasn’t one of the primary target OSes). Three essential items enable this functionality.

The first is a KLD (“Kernel LoaDable”) object called linux.ko, which is essentially a Linux kernel that can be loaded dynamically into the FreeBSD kernel. As a KLD, it can be loaded or unloaded without rebooting, using the kldload and kldunload commands. To check and see whether linux.ko is loaded properly on your system, use the kldstat command:

schnell# kldstat
Id Refs Address    Size     Name
 1    4 0xc0100000 1d9a60   kernel
 2    1 0xc1038000 3000     daemon_saver.ko
 3    1 0xc1822000 4d000    nfs.ko
 4    1 0xc1020000 10000    linux.ko

The second necessary item is the set of basic Linux libraries (pre-existing code that developers can link their programs to, so they don’t have to write every function from scratch) and binaries (like bash, cp, rm, etc.). These can be installed via the ports collection from /usr/ports/emulators/linux_base, which is based on a minimal Red Hat installation. As of this writing in late September, for the i386 architecture, the installed libraries matched a Red Hat 6.1-1 release including glibc 2.1.2-11, libc 5.3.12-31, glib 1.2.5-1, ld.so 1.9.5-11, libstdc++-2.9.0-24, gdbm-1.8.0-2 and XFree86 libs 3.3.5-3. 

To be able to use a wider range of Linux apps, you may also wish to install the Linux development tools, found in the ports collection at /usr/ports/devel/linux_devtools. Note that you’ll want to do this after installing linux_base, since linux_devtools requires rpm (the Red Hat Package Manager application, also available as a FreeBSD app through the ports collection) and /compat/linux/etc/redhat-release to be installed. This currently includes (also for i386) kernel-headers-2.2.12-20, glibc-devel-2.1.2-11, make-3.77-6, cpp-1.1.2-24, egcs (plus egcs-c++ and egcs-g77)-1.1.2-24, gdb-4.18-4, and XFree86-devel-3.3.5-3.

Also, any necessary binaries/libraries can be set up manually by installing them into their original paths under the FreeBSD directory /compat/linux (e.g., install the Linux /usr/X11R6/lib/libX11.so.6.1 library onto FreeBSD as /compat/linux/usr/X11R6/lib/libX11.so.6.1). This process is also necessary for installing libraries that aren’t part of the default ports collection sets.

The third item is “branding” Linux ELF (“Executable and Linking Format”) binaries. Although the GNU toolchain now brands Linux ELF binaries with the OS name (so newer Linux binaries should work without branding), some older binaries may not include this information (although ELF has replaced a.out as the primary binary format since Linux kernel 2.x and FreeBSD 3.0). To brand a binary, use the brandelf command (which sounds like a move from “Dungeons and Dragons” to poke elves with hot sticks) with the -t (type) flag:

schnell# brandelf -t Linux /usr/local/games/doom
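If you’re not sure whether a binary is already branded, the newer-style brand is just the OS/ABI byte at offset 7 of the ELF header (0 for none/SysV, 3 for Linux, 9 for FreeBSD, per the ELF specification). A quick peek with od shows it; the fabricated header below is for demonstration only:

```shell
# elf_osabi: print the OS/ABI byte (e_ident[EI_OSABI]) of an ELF file.
# 0 = none/SysV, 3 = Linux, 9 = FreeBSD
elf_osabi() {
  od -An -t u1 -j 7 -N 1 "$1" | tr -d ' '
}

# Demonstrated on a fabricated ELF header: \177ELF magic, then
# class/data/version bytes, then OS/ABI = 3 ("Linux"):
printf '\177ELF\001\001\001\003' > fake_elf
elf_osabi fake_elf    # prints: 3
rm fake_elf
```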

How Linux Compatibility Works

Originally, Unix supported only one “binary loader.” When you ran a program, the system examined the first few bytes of the file to see if it was a recognized binary type. If the file wasn’t of the single binary type the loader recognized, it would pass the file in question to the basic shell interpreter (/bin/sh) and try to run the program with that. Failing that, it would spit a nasty message back at you about how you couldn’t run whatever it was that you were trying to run.

Nowadays, FreeBSD supports a number of loaders, including an ELF loader. (By the way, Linux binaries can be stored on either a FreeBSD UFS or Linux ext2 file system, assuming you have mounted the ext2 file system the app is located on using the mount -t ext2fs command.) If the “brand” seen in the ELF file is “Linux,” a pointer in the binary’s “proc” structure is replaced to point automatically to /compat/linux/, and the binary is executed using the Linux libraries and binaries instead of the FreeBSD ones. If it isn’t a FreeBSD or Linux binary, and the list of loaders (including those examining the first line for “#!” interpreters like Perl, Python or shell scripts) is exhausted without recognizing a type, the system tries to execute it as a /bin/sh script. If that attempt is unsuccessful, it is spit back out as an “unrecognized” program.

Linux and BSD Binaries: Both “Native?”

If a Linux program requires a binary or library that is not included under the Linux compatibility “area,” the FreeBSD tree is searched next. Thus, a Linux application that expects the Linux version of /compat/linux/usr/lib/libmenu.so will, if that file isn’t found, be checked to see whether it can run with the FreeBSD /usr/lib/libmenu.so. Only if a compatible library is found in neither the Linux-compatible “re-rooted” (/compat/linux/whatever) filesystem nor the regular FreeBSD libraries is an executable rejected. 

Because of this construction, it isn’t really accurate to say that Linux is “emulated” under FreeBSD, especially in the sense that, say, platforms are emulated under the MAME (www.mame.net) emulator, since it doesn’t involve a program which impersonates another platform on a binary level and translates all of its native system calls into local system calls. Linux binaries are simply re-routed to a different set of libraries, which interface natively with a (Linux kernel) module plugged into the FreeBSD kernel. Rather than “emulation,” this is largely just the implementation of a new ABI into FreeBSD.

Under the current implementation of binary compatibility, programs calling FreeBSD’s glue() functions are statically linked to FreeBSD’s libraries, and programs calling the Linux glue() functions are statically linked to Linux’s own libraries, so their execution depends on completely different code libraries; however, the importance of glue() is on the wane. In the future, this might change, which would seriously open the question as to whether Linux apps under FreeBSD are any more “native” than apps written specifically for FreeBSD. 

Running Linux Binaries on BSD

Currently, the major problem with running Linux binaries on FreeBSD is similar to the major problem with using new apps on Linux: discovering which libraries they require and downloading and installing them. Unfortunately for BSD users, the only certain way to do this right now is to have access to a Linux system which already has the app in question installed, and run the ldd command on the application to list the libraries needed. If you don’t have access to a Linux system that already has the program installed, your next best bet is to check info on the program’s website, or search for it at Freshmeat (www.freshmeat.net).
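Once you do have ldd output from a Linux box, trimming it down to bare library names makes a convenient shopping list. Here is a small filter, assuming the “name => path” output format of common Linux ldd versions:

```shell
# needed_libs: reduce ldd output to just the library names.
needed_libs() {
  awk '/=>/ { print $1 }'
}

# On the Linux system:
#   ldd /usr/local/bin/someapp | needed_libs
# Demonstrated here with canned ldd-style output:
printf '\tlibX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x2819e000)\n\tlibc.so.6 => /lib/libc.so.6 (0x28000000)\n' | needed_libs
# prints the names libX11.so.6 and libc.so.6, one per line
```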

Under current FreeBSD Linux binary emulation, most Linux binaries should work without problem. However, due to operating system differences, the emulation won’t support binaries that heavily use the Linux /proc filesystem (which operates differently from the FreeBSD /proc), ones that make i386-specific calls like enabling virtual 8086 mode, or ones that use a significant amount of Linux-specific assembly code. While some people have reported that some Linux binaries run faster in this BSD environment than they do natively in Linux, this is a highly subjective claim, and one that should be taken with a grain of salt (at least until some real standardized testing is done). While Linux binary emulation on FreeBSD shouldn’t be considered a major security risk, it is worth noting that a buffer overflow bug was discovered in August 2000 (www.securityfocus.com/frames/?content=/vdb/bottom.html%3Fvid%3D1628).

Popular Uses for Linux Compatibility

The most popular binaries for using Linux compatibility are the ISV applications that have been developed for Linux from popular commercial/semi-commercial vendors. These include programs like Sun’s StarOffice, Wolfram’s Mathematica, Oracle 8, and games like Bungie’s Myth II and id Software’s Quake III (there are already tutorials on installing Oracle 8.0.5 and Mathematica 4.x included in the FreeBSD Handbook; see below for more). 

Other popular installations include the Netscape Navigator 4.6-7.x browser. Particular advantages to installing the Linux version (even though there is a FreeBSD version already) are that the Linux version under FreeBSD compatibility is reportedly more stable than the “native” FreeBSD version. Also, binaries for popular plug-ins (Flash 4, Real Player 7, etc.) are available, since “native” FreeBSD versions aren’t. Info on installing VMware for Linux on FreeBSD can be found at www.mindspring.com/~vsilyaev/vmware/.

Getting Help with Linux Compatibility

The primary reference on Linux binary emulation on FreeBSD is the “handbook” page at www.freebsd.org/handbook/linuxemu.html, including specific pages on Oracle (www.freebsd.org/handbook/linuxemu-oracle.html) and Mathematica (www.freebsd.org/handbook/linuxemu-mathematica.html). To keep up-to-date on Linux emulation under FreeBSD, subscribe to the mailing list freebsd-emulation (send e-mail to [email protected] with the text [in the body] subscribe freebsd-emulation).

For additional information on Linux compatibility (including instructions for older versions of FreeBSD), see www.defcon1.org/html/Linux_mode/linux_mode.html. For a list of Linux applications that are known to run on FreeBSD under Linux binary compatibility, see www.freebsd.org/ports/linux.html. For an example of setting up Word Perfect 8 in Linux compatibility – as well as an account of running Windows 2000 on top of VMware on top of Linux on top of FreeBSD – read the excellent article at BSD Today (www.bsdtoday.com/2000/August/Features252.html). You can find some (mildly outdated but still useful) instructions on installing StarOffice on FreeBSD at www.stat.duke.edu/~sto/StarOffice51a/install.html. And, for fun, there’s info on setting up an Unreal Tournament server on FreeBSD via Linux compatibility at www.doctorschwa.com/ut/freebsd_server.html.

Trying on Red Hat – Questions, Answers and Red Hat Linux 7

By Jeffrey Carl

Boardwatch Magazine, December 2000


Red Hat Linux is the unquestioned leader in the U.S. for Linux market share. According to research firm IDC, Red Hat shipped 48 percent of the copies of Linux that were purchased in 1999. Many smaller distributions are based on its components, and it’s often used as the “target” distro that third-party (especially commercial) software is developed for. Still, the company has found it difficult sometimes to reconcile its place in the Linux community with its place as the Linux standard-bearer on Wall Street. With the release of Red Hat Linux 7.0, is the company still on top?

Under the Hat

Red Hat has been around since the days when Linux distributions were loose, volunteer-driven projects with users numbering in the thousands. Red Hat gradually reached a position of great popularity, based on several factors. First, they put a significant amount of effort into creating ease of use for inexperienced admins, with semi-graphical installation/configuration programs and the Red Hat Package Manager (RPM) software installation system. Second, Red Hat shipped a solid distribution that tended to include the “latest and greatest” features. Lastly, Red Hat showed a genius for building and marketing a brand name, and getting a shrink-wrapped, user-friendly product into stores.

Once Linux began to appear on the radar scopes of investors, Red Hat became the high-profile “star” of the Linux world. Red Hat went public on August 11, 1999 and soared immediately – making paper fortunes overnight for some of the Linux world’s luminaries. The stock reached a dizzying high of 151, but later became a victim of the hype as Wall Street’s love affair with Linux cooled; as of October 1, RHAT was trading near its 52-week low of 15. 

The company has also faced the very difficult problem of maintaining its standing in the open source world (which is often distrustful of corporations and for-profit ventures) while pleasing its investors and shareholders. Like Robin Hood becoming the new Sheriff of Nottingham, it has found that it’s sometimes impossible to please all of your constituencies at once. Red Hat has made some occasionally unpopular moves and faced criticism from the more evangelical Linux enthusiasts that it was trying to “co-opt” Linux standards as its own; overall, however, the company has done a good job of pleasing the majority of its disparate audiences.

Introducing Red Hat Linux 7.0

As of this writing, Red Hat Linux 7.0 has been out for only a few days, but seems to be well-received. There appear to be some viable reasons to upgrade (even beyond the fact that RH 7 is supposed to be ready out of the box for a 2.4 kernel when it’s released).

Red Hat 7 is more of an evolutionary upgrade than a revolutionary one, and this will please some users and disappoint others. Still, this is generally a positive step as far as ISP/server users are concerned – when compatibility with hardware and a stable system are your prime requirements, “conservative” is never a bad approach. 

As Red Hat itself is quick to point out, the value of a distribution is in meshing functionality with the expertise and testing to make sure that all of its included parts “play nicely with each other” as much as possible. In the past, Red Hat has been (depending on your viewpoint) applauded or derided for being an “early adopter” (the company brushes aside this characterization in the interview below) of new versions of libraries and applications. 

One example of the tack taken with Red Hat 7: because Red Hat’s testers had stability concerns about KDE 2’s prerelease versions, KDE 1.1 was included with the final release instead. On the other hand, some users (rightly or not) questioned the use of a non-“stable release” version of the GNU C Compiler (gcc 2.96 20000731) as the default compiler. Again, whether these are steps forward or backward is a matter of personal preference; you can’t please everybody. Still, it appears that Red Hat has worked hard to avoid the “buggy x.0 release” that some have complained about in the past.

The big differences in Red Hat 7 for desktop users are built-in USB support and the default inclusion of XFree86 4.0, with its (in my opinion, much-needed) major overhaul and modularization of the X server. Also, Sawfish is now used as the default window manager with GNOME rather than Enlightenment.

Overall, administration is roughly the same as with the 6.2 distro, with a few bugfixes and improvements here and there. The basic US package still includes two CDs (other geographical versions will include more); while this won’t please SuSE users who love the extra CDs full of applications, the included software still represents a pretty good sampler of the software you’d want (with the possible exception of “office suite” software). For more information on the included software, read the interview below.

Q & A with Red Hat 

The following is from an e-mail interview with Paul McNamara, VP, Products and Platforms for Red Hat.

Q: Could you give me a brief history of Red Hat?

A: Founded in 1994, Red Hat (Nasdaq:RHAT), is the leader in development, deployment and management of Linux and open source solutions for Internet infrastructure ranging from small embedded devices to high availability clusters and secure web servers. In addition to the award-winning Red Hat Linux server operating system, Red Hat is the principal provider of GNU-based developer tools and support solutions for a wide variety of embedded processors. Red Hat provides run-time solutions, developer tools, Linux kernel expertise and offers support and engineering services to organizations in all embedded and Linux markets.

Red Hat is based in Research Triangle Park, N.C. and has offices worldwide. Please visit Red Hat on the Web at www.redhat.com.

Q: What platforms does Red Hat support? What are the minimum requirements for Red Hat?

A: Red Hat supports Intel, Alpha, and SPARC processors. Minimum requirements for the Intel product are 386 or better (Pentium recommended) 32MB RAM and 500MB free disk space.

Q: What is included with the newest release of Red Hat? (i.e., kernel version, version of Apache, Perl, Sendmail, etc. for other software packages you feel are important, plus what other third-party software do you add?)

A: A significant new feature of Red Hat Linux 7 is Red Hat Network. Red Hat Network is a breakthrough technology that gives customers access to a continuous stream of managed innovation. This facility will dramatically improve customers’ abilities to extract maximum value from Red Hat Linux.

We ship the following: 2.2.16 kernel, Apache 1.3.12, openssl 0.9.5, [ed: openssh 2.1.1p4 is also included] sendmail 8.11.0. Complete package description can be found at www.redhat.com/products/software/linux/pl_rhl7.html.

Third party apps can be found at www.redhat.com/products/software/linux/pl_rhl7_workstation.html and www.redhat.com/products/software/linux/pl_rhl7_server.html.

Q: What are some configurations that Red Hat would recommend, or it really excels with?

A: While Red Hat Linux is a superior general purpose OS supporting a wide range of application segments, the most popular configurations are: web servers, secure web servers, database servers, internet core services (DNS, mail, chat, etc), and technical workstations.

Q: What is Red Hat’s ‘Rawhide’ development tree? Who is it suitable for?

A: The Rawhide development tree represents our latest development code drop. It is our next release, in progress. In the traditional software development model, the developing company provides the latest engineering build to its internal developers to use to drive the development effort forward. Since Red Hat uses the collaborative development style, our ‘internal’ development release is made available to the community. This release is intended for community OS developers and is not intended to be used by customers for production environments.

Q: When Linux is united by a common kernel, what is it that keeps Red Hat as the “number one” distribution? What would you say differentiates Red Hat from other Linux distros?

A: Your question is a lot like asking since all car makers use a four cycle internal combustion engine as the primary component, what differentiates Lexus from other cars? Note that all cars are essentially compatible (can be driven on the same roads, use the same fuel, and a driver trained to drive one brand of car can easily drive another brand). What sets our brand apart is the mix of features and the quality of the finished product. We process “raw materials” (the various packages) and turn them into a finished product. Our selection of the packages, the engineering we do to create an integrated product, and our ability to deliver a quality result make the difference.

Q: Red Hat has often been cited as an “early adopter,” moving to new library versions, etc. in its releases before other distributions do. Is this a fair characterization? What advantages and/or disadvantages does this have?

A: I’ve only heard us described in this way by a competitor. I don’t know what this means. We clearly drive an agenda, and others tend to follow.

Q: Red Hat obviously has many strengths. What users, if any, should *not* choose Red Hat? For what reasons?

A: We generally discourage people interested in a legacy desktop OS from purchasing Red Hat. Red Hat is designed for servers, technical workstations, and post-PC embedded devices. It is either (1) intended for internet and IT professionals who need a high performance, internet-ready OS, or (2) is designed to be built into post-PC consumer information appliances where the device manufacturer has integrated our product into the consumer product.

Q: What advice would you give for ISP administrators about when/when not to upgrade their servers running Red Hat?

A: Red Hat Linux is a different kind of OS. Customers can choose, on a feature by feature basis, which packages to upgrade and when. Through a subscription to the Red Hat Network, customers can receive proactive notifications when new features become available and can receive a continuous stream of managed innovation to give them strategic advantage.

Q: What is the Red Hat Certification program? What benefits does it offer to Internet server administrators?

A: The Red Hat Certified Engineer (RHCE) program is the leading training and certification program on Linux. RHCE is a performance-based certification that tests actual skills and competency at installing, configuring, and maintaining Internet servers on Red Hat Linux. Complete details can be found at www.redhat.com/training/rhce/courses/ and www.redhat.com/training.

RHCE program courses and the RHCE Exam are regularly scheduled at Red Hat, Inc. facilities in Durham, NC, San Francisco CA, and Santa Clara, CA. Global Knowledge and IBM Global Services are Red Hat Certified Training Partners for the RHCE Program, offering RHCE courses and the RHCE Exam in over 40 locations in North America. Red Hat can also run Red Hat training on-site for 12 students or more.

Red Hat offers the most comprehensive Developer Training for systems programmers and application developers on Linux, as well as training on Advanced Systems and Advanced Solutions, including the only regularly scheduled training on Linux on IA-64 architecture.

Q: What plans does Red Hat have for the IA-64 platform?

A: Red Hat is a leading participant in the IA-64 consortium, and we intend to aggressively support this new platform by delivering Red Hat Linux concurrently with the availability of IA-64 hardware.

Q: What advantages would you cite for someone choosing Red Hat Linux over another server OS, like Windows 2000, Solaris or FreeBSD?

A: Red Hat offers a superior mix of reliability, performance, flexibility, total cost of ownership and application availability. We believe it is simply the best choice for deploying Internet infrastructure.

Q: Where could someone running an Internet server go for help and tips on Red Hat? Do you specifically recommend any advice?

A: There is a huge volume of information available for Red Hat Linux. Sources include a large selection of books available at leading book stores, on-line information from news groups and mailing lists, a worldwide network of Linux Users Groups (LUGs), on-line help in the form of man and info pages, and support offerings available directly from Red Hat and from www.redhat.com.

Hosting FrontPage Sites on Unix Servers – Dealing with the Great Satan of Server Extensions

By Jeffrey Carl

Boardwatch Magazine, November 2000


Ever since a version of FrontPage started being bundled with Microsoft Office 2000, it has quickly become the web site design tool of choice for people who enjoy being lectured by animated paper clips. If your hosting environment is Linux or *BSD, then FrontPage needs to be understood and dealt with.

FrontPage and How it Works

FrontPage is a simple GUI tool which automates the things most basic web designers are looking for these days, including graphical page design, site publishing and CGI-type actions (forms, dynamic pages, etc.). Despite the criticism in the rest of this column, I don’t want to disparage what the creators of FrontPage have made: a truly easy, user-friendly program for designing simple to very complex web sites. That’s a real achievement.

But, for a FrontPage-created site to make use of all the easy-to-create “all-singing, all-dancing” features that make it such a hit with people who wouldn’t know an HTML tag if it crawled up and bit them, the FrontPage Server Extensions (FPSE) must be present on the hosting server. The FPSE are essentially Microsoft-built proprietary CGIs executed over HTTP – designed for Windows NT and Microsoft IIS, but ported to Unix and supported on Microsoft’s behalf by Unix porting house Ready-to-Run Software (RTR). This is where the trouble starts.

Can You Support FrontPage on a Freenix?

Supporting FrontPage on Unix is both a help and a hindrance to ISPs. It allows you to host sites for the rapidly growing contingent of clients using FrontPage, and it can provide a competitive advantage over ISPs that don’t support FrontPage sites.

The extensions are currently available for a number of platforms including FreeBSD and Linux (for x86 architectures only), and for a number of webservers including Apache, NCSA, Netscape and Stronghold (although only Apache will be discussed in this column). I’m sure that RTR has done a herculean task porting these server extensions to Unix, but the fact remains that this software is poorly documented and frequently difficult to troubleshoot.

The FrontPage extensions for Unix don’t support the full range of capabilities of FrontPage-designed sites. Active Server Pages (ASP) aren’t natively handled by the FPSE for Unix; neither are ActiveX or ODBC/Microsoft Access database connectivity.

You can, fortunately, look for outside support. Software vendor Chili!Soft (www.chilisoft.com/allaboutasp/) offers a Unix-native commercial ASP package, Halcyon (www.halcyonsoft.com/products/iasp.asp) offers a Java-based ASP commercial package, and mod_perl (perl.apache.org) with the Apache::ASP (www.nodeworks.com/asp/) module provides a free, Perl-based solution. Commercial ODBC resources are available at www.openlinksw.com, and open-source resources can be found at www.jepstone.net/FreeODBC.

The Pitfalls of FrontPage Extensions

Unfortunately, offering FPSE/Unix commits you to supporting what is, frankly, some occasionally very frustrating software. While security problems with FP-administered sites have decreased, it’s still not suitable for the truly paranoid (a full discussion would take up this whole magazine). The author.exe program that is part of the FPSE seems, on Unix, to eat up an unseemly amount of CPU cycles while it’s running. The .htaccess files that FrontPage installs for each of the directories in a FrontPage-enabled site may interfere with the server administrator’s security policies. And, in my experience, the FPSE will occasionally just stop working, with a reinstall required.

If you can’t find the free support resources you need, you can of course contact RTR for phone support ($245 per incident) or e-mail support ($195 per incident). And, if you don’t support it, FrontPage users are advised in the FrontPage/Unix FAQ to “find another ISP” (http://www.rtr.com/fpsupport/faq2000g.htm#7newISP). 

So much for “alternatives.” And the customer pressure to host FrontPage sites is quickly increasing. 

Before I go on to discuss how to support FrontPage on Unix, I should mention that there is another option – set up a server running Windows NT or Windows 2000. This has its own set of problems and complications (which could take up a book, not a column), but at least the FPSE work flawlessly. After two years and a multitude of problems supporting 50 or so FrontPage users on Red Hat Linux and FreeBSD, and seeking more scalability without headaches, I shamefully admit that this is what I ultimately did. Unfortunately, I think this is what Microsoft was hoping for all along.

Getting the FrontPage Extensions

For a detailed understanding of the installation process and some help with common problems, first visit the Server Extensions Resource Kit (“SERK”) page for Unix installation (www.rtr.com/fpsupport/serk4.0/inunix.htm). For a full list of the files included in an installation and their permissions structures, see www.rtr.com/fpsupport/serk4.0/apndx01.htm.

If you’re installing Apache on the server for the first time, you can use the apache-fp package (www.westbend.net/~hetzels/apache-fp/), available as a Linux RPM or through the FreeBSD ports collection, which integrates an Apache distribution with the Unix FPSE. If you’re installing on an existing Apache setup, go directly to RTR Software’s FPSE/Unix homepage (http://www.rtr.com/fpsupport/). Download the FrontPage 2000 SR 1.2 extensions for your platform (they’re backward-compatible with FP ’97 and ’98 clients), which come in a zipped or GNU-zipped tar file, as well as the shell scripts fp_install.sh and change_server.sh. 

The extensions tarball will be named like fp40.[osname].tar (version 4 is the FP 2000 extensions; version 3 is for FP ’98). The fp_install.sh script converts the results of the uname -a command to a $machine variable, and won’t install if the extensions tarball name doesn’t match the $machine name.

For a long time, getting the extensions to install on FreeBSD required “hacking” the script. In this case, adding the line:

FreeBSD*)            machine="bsdi" ;;

as line 70 of the fp_install.sh script would get the BSDI extensions to install. This has been fixed in the most recent versions of the software.
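For the curious, the detection amounts to a simple shell case statement. The following is a reconstruction for illustration only; the real fp_install.sh maps more platforms and differs in detail:

```shell
# Reconstruction for illustration; the real fp_install.sh maps more
# platforms and differs in detail.
os=$(uname -a)
case "$os" in
    Linux*)    machine="linux" ;;
    BSD/OS*)   machine="bsdi" ;;
    FreeBSD*)  machine="bsdi" ;;    # the old workaround: install the BSDI build
    *)         machine="unknown" ;;
esac
echo "$machine"    # must match the [osname] part of the tarball name
```

If `$machine` doesn’t line up with the `[osname]` in the tarball’s filename, the install simply refuses to proceed, which is why the FreeBSD workaround above was needed.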

Installing Unix FrontPage Extensions

The default install goes into /usr/local/frontpage/version40, or into a directory of your choice (in which case a link to the default location is created). The installation script installs the FPSE, an HTML version of the SERK, the command-line administration tool (fpsrvadm.exe), the web-based administration tools (in the admcgi directory), the Apache-FP patch, and assorted server and site administrator support files.

If you have already installed the version30 extensions, the script will move its currentversion symbolic link to point to the version40 directory while leaving the older version intact (actually a good choice, and one that many other software packages might adopt!). The installation script will also prompt you to upgrade the “stub extensions” (located inside each FrontPage-enabled host’s document root, linking to the central extension executables) for each virtual host which has had the older stub extensions installed.

While installing, note that each site’s FrontPage username and password (used for administering the site through the FrontPage client) have no relation to the Unix username and password. For security’s sake (unless users will administer their sites over secure server access), it is advisable to make these username/password pairs different from their Unix counterparts. Note that if each of your virtual hosts is associated with a different Unix user/group, those “subwebs” must be owned by the Unix user/group of the site maintainer, or the maintainer will be unable to administer the site properly.

Next, run the change_server.sh script to automatically replace the existing httpd daemon with a FPSE-supplied one which has the FrontPage patch/module installed (with only the mod_frontpage module and a few other basic modules compiled in; it moves the old one to httpd.orig). The script further creates a suidkey file, used when the FPSE exercise their suid (“set user ID”) bit to make root-level modifications. It also prompts you to upgrade any virtual hosts if necessary.

The Apache version used with the current extensions (as of this writing) is 1.3.12. Note, however, that using an older version of the extensions may replace your Apache daemon with an older one. If you would like to compile your own Apache daemon instead of using the change_server.sh-installed one (recommended), you may wish to download the Improved Mod_Frontpage (home.edo.uni-dortmund.de/~chripo/) and then follow the installation/compilation directions listed at www.rtr.com/fpsupport/serk4.0/inunix.htm#installingtheapachepatch. If the patch fails, check out the info at home.edo.uni-dortmund.de/~chripo/faq.asp.
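If you take the compile-your-own route, the overall shape of the process looks something like the sketch below. The patch filename and module path here are assumptions that vary by patch version; follow the RTR directions above for the authoritative steps.

```shell
# Rough sketch only: apply the FrontPage patch to the Apache source tree,
# then rebuild. The patch filename and module path are examples and vary
# by patch version; see the RTR installation directions for specifics.
cd apache_1.3.12
patch -p1 < ../fp-patch-apache_1.3.x      # the Improved Mod_Frontpage patch
./configure --prefix=/usr/local/apache \
    --activate-module=src/modules/frontpage/mod_frontpage.o
make
make install    # then restore your httpd.conf and restart the daemon
```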

Administering FrontPage Sites

For the server administrator, the command-line tool fpsrvadm.exe is your primary interface. This tool allows you to install or uninstall FPSE for virtual hosts, set authoring options for them, set directory/executable permissions, chown files, and create or merge subwebs. Running fpsrvadm.exe without any arguments brings you to a GUI-like interface; or, you can run the program directly with command-line options which are detailed at www.rtr.com/fpsupport/serk4.0/adfpsr_3.htm.

When adding FPSE to virtual hosts, choose Apache-FP for “server type” unless you have chosen not to install the FrontPage patch. During a host installation, “stub extensions” will be installed for the selected host, and its executable directories aliased appropriately.

Through fpsrvadm.exe, you can also run a “check and fix” option on existing hosts. While this option won’t tell you anything other than whether the extensions are installed or not (and which version), it can frequently be a valuable tool. When running a check or installing new virtual hosts, be sure to specify the port number (e.g., www.somehost.com:80) or it won’t be recognized.
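For reference, non-interactive use looks something like the following. The host name and credentials are examples, and the exact flag syntax should be treated as an assumption; verify it against the RTR options page cited above.

```shell
# Illustrative fpsrvadm.exe invocations; host, credentials and exact flag
# syntax are examples only. Verify against the RTR options page.

# Install the extensions on a virtual host; note the explicit port number:
./fpsrvadm.exe -o install -p 80 -m www.somehost.com -t apache-fp \
    -u webmaster -pw 'secret'

# Run the "check and fix" operation on the same host:
./fpsrvadm.exe -o check -p 80 -m www.somehost.com
```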

If e-mailing FrontPage forms fails, be sure to edit your /usr/local/frontpage/we80.cnf file and add the line: 

SendMailCommand:/usr/sbin/sendmail %r

…replacing the above path with the correct path to your MTA of choice. Otherwise, FrontPage forms will not be able to send e-mail. I’m not sure why this isn’t part of the default installation, but it isn’t. Also, note that users frequently upload files to their sites via FTP rather than through the FrontPage client. A common problem: if a user uploads an entire directory via FTP, it may overwrite the FPSE located in that directory, making it necessary to reinstall the FPSE for that host.

Documentation and Support

Your official first stop for documentation will be the “official” version of the SERK at officeupdate.microsoft.com/frontpage/wpp/serk/. The SERK is required reading for any sysadmin who wants to understand FrontPage, especially those trained on Unix rather than Microsoft systems. Unfortunately, the FP 2000 SERK (the FP 98 SERK did not share this problem) is the first Unix documentation package I have ever encountered that does not even mention the terms “problem” or “troubleshooting.”

For real-world administrators, your unofficial first stops will be the RTR FrontPage 2000 FAQ (www.rtr.com/fpsupport/faq2000.htm) and discussion board (www.rtr.com/fpsupport/discuss.htm). While the discussion boards aren’t always much help, the FAQ is absolutely essential, answering most of the common questions that can make the difference between supporting FrontPage and chewing the Ethernet cable off the server in frustration. I’m not sure why elements of the FAQ haven’t been integrated with the SERK (by the way, would a few man pages be too much to ask?). Because much of the FrontPage/Unix documentation is so disorganized, it is essential to scour every available resource before considering a problem unsolvable.

Part of my problem with the Unix FPSE is that they use Microsoft’s seemingly proprietary way of describing things, which makes life difficult for Unix administrators. For example, the collections of a user’s web pages are normally known as a “web site,” which has a “document root” and may be a “virtual host.” In the Front Page Mirror Universe, the evil Captain Kirk has a goatee, and this is known as a “FrontPage web,” with a “root web” and “virtual webs.” To use my favorite acronym, “WTF?”

There is a newsgroup available at microsoft.public.frontpage.extensions.unix. However, a quick perusal of this newsgroup’s messages over the past several months shows that the postings appear to be mainly problems and questions – without many answers. There was a FrontPage/Unix mailing list ([email protected]), but as of this writing [email protected] no longer seems to be responding for subscriptions. I recommend searching Google (www.google.com) for FrontPage web resources.

Conclusions

Supporting FrontPage websites on Freenixes is very possible, and for basic purposes, it works very well. However, be prepared to do your homework, and know that it may not be easy. Administering a web server always requires serious research and work; depending on your tastes, dealing with FrontPage on Unix may require too much.

Getting to Know SuSE Linux – An Interview with SuSE CTO Dirk Hohndel

By Jeffrey Carl

Boardwatch Magazine, October 2000


SuSE Linux is often referred to as “the European Red Hat,” since SuSE enjoys the kind of market domination there that Red Hat does in the U.S. For the record, SuSE is pronounced “SUE-zuh,” and the name was a catchy little acronym for the rather awkward “Gesellschaft für Software und Systementwicklung mbH.” A reader survey by a German Linux magazine found SuSE with about 75 percent market share to Red Hat’s 11 percent and Debian’s 8.5 percent. Although I wasn’t able to find figures, it’s also safe to say that SuSE enjoys a commanding market share in many other areas outside the U.S. as well.

A major factor cited for this is the fact that SuSE’s distribution includes six CDs with more than 1500 extra software packages to install. While many U.S. users with broadband connections don’t particularly care, it’s an important factor for users (especially for international users) who have slow connections, pay per-minute charges for their bandwidth or otherwise find it inconvenient to spend large amounts of time downloading new software.

Advantages frequently cited by SuSE users aside from the copious CD software collection include SaX, an excellent X Windows configuration tool; and YaST, SuSE’s LinuxConf-like administration tool. While these features are a favorite for some users, others complain that the enormous number of available applications in the install CDs makes installation cumbersome. And YaST, like LinuxConf, seems very much to be a “love it or hate it” application, with opinions varying widely by personal preferences.

SuSE distributions tend to be a little less “cutting-edge” than Red Hat’s, lagging a couple months behind with the “latest and greatest,” but as a result tending to have fewer bugs. Whether this tradeoff is acceptable is up to you; it should be noted that SuSE acquitted itself fairly well in the recent Security Portal Linux Distribution Security Report (http://www.securityportal.com/cover/coverstory20000724.html).

The current version of SuSE for x86 hardware (as of this writing) is 7.0. SuSE recently released a well-reviewed version of their 6.4 distribution for PowerPC-based Macintoshes, which includes the MOL (Mac on Linux) emulator among other goodies. To find out more, I asked SuSE Chief Technical Officer Dirk Hohndel and press representative Xenia von Wedel:

Carl: Could you give me a brief history of SuSE?

Hohndel: In 1992, the four founders ran into the (back then largely unknown) Linux operating system (or more precisely, the beginnings thereof). They quickly saw the need for Linux on “off-line media”, as Internet connections were not commonplace in Germany back then (and still are quite expensive). More importantly, a physical distribution made it easier to bundle documentation and support, and of course to make SuSE’s own developments for installation and configuration available as part of the package. Soon SuSE Linux became the standard Linux OS in Europe, and in the past few years SuSE has been quite successful outside Europe as well (less because of a strong marketing arm, but more due to its focus on sound technology and good engineering).

Von Wedel: SuSE Linux AG, headquartered in Germany, and SuSE Inc., based in Oakland, CA, are privately held companies focused entirely on supporting the Linux community, Open Source development and the GNU General Public License. With a workforce of over 450 people worldwide, SuSE has offices all over Europe, Venezuela and in the US. More than 50,000 business customers use SuSE Linux worldwide due to its stability and high quality.

SuSE received the “Show Favorites” award at LinuxWorld Expo in February 2000 and March 1999. SuSE contributes considerably to Linux development projects such as the Linux kernel, glibc, XFree86TM, KDE, ISDN4Linux, ALSA (Advanced Linux Sound Architecture) and USB (Universal Serial Bus). Additional information about SuSE can be found at http://www.suse.com.

Carl: What is included with the newest release of SuSE?

Hohndel: Linux kernel 2.2.17/PRE (2.2.16 plus the relevant patches for it) with enhanced raw device support and 4 GB main memory addressing, full USB support and ReiserFS; XFree86 4.0 and SaX 2, the graphical installation tool for XFree86 4.

SuSE Linux Professional includes more than 1500 apps such as StarOffice 5.2; Acrobat Reader; Samba 2.0.7; Apache 1.3.12 including PHP-4, Zope, Midgard, JServ, Tomcat, backhand and WebDAV; Lutris Enhydra 3.0; teleconferencing; Sendmail 8.10.2; Postfix 19991231pl08; Perl 5.005_03; Logical Volume Manager; PAM; KDE 1.1.2, KDE 2.0 Beta3, and GNOME 1.2.

SuSE Linux is known to be one of the richest and most complete versions of Linux, so almost all the important packages are there. One of the interesting new features is that SuSE Linux 7.0 can be installed and used using Braille; it can therefore be used and managed by the visually impaired.

Carl: What platforms does it support? What are the minimum requirements for SuSE?

Hohndel: At the moment, SuSE Linux is available as a shrink-wrapped product for Intel x86 and compatible processors (IA32), Motorola PowerPC and Compaq Alpha processors. A beta version for IBM mainframe S/390 can be taken from SuSE’s ftp server. Versions for SPARC and IA64 are under development and also available via FTP.

Von Wedel: Please see detailed information about some special hardware in our support database. Just search for the desired keyword at http://www.suse.com/support/hardware/index.html .

Carl: What are some configurations that SuSE would recommend, or it really excels with?

Hohndel: We have certified quite [a lot of] hardware with SuSE Linux, a current list can be found on our web site. I don’t think that it makes sense to point out specific configurations, as those tend to change all too frequently. 

Von Wedel: If you are running a NIS or simple file server, a slower Pentium or even a 486 would work fine. For a standard all-around machine, you would probably want the average machine found on the shelf at Best Buy or CompUSA [including] big hard drives, a Pentium III and at least 128 MB of RAM. This machine will allow you to run programs like Quake III and VMware without problem. We tell our customers to avoid any hardware where the manufacturer refuses to release open source drivers for it.

Carl: What would you say differentiates SuSE from other Linux distributions?

Hohndel: SuSE maintains consistent technical quality throughout all platforms and all languages. We also have an encyclopedic set of Linux tools, for which SuSE Linux is already famous. YaST and YaST2 are well known as solid and easy-to-use installation and configuration tools that are flexible enough to support a wide range of installation options for the enterprise environment.  Additionally, for optimized support for fully automated installation, SuSE’s new ALICE (Automatic Linux Installation and Configuration Environment) tool allows central configuration management for computer networks.

Carl: What is YaST? Where can I find more information about it?

Hohndel: YaST stands for Yet another Setup Tool. A white paper that covers some of the important features is available on the web at http://www.suse.de/de/linux/whitepapers/yast/Auto_Install_English_text.txt.

What is important to know about YaST is that it offers a central configuration and administration interface, but it can be told to stay out of the way if the admin chooses to configure parts of the system “manually.” So, you can benefit from the flexibility and know-how that has been put into the tool without being stuck with it. Many typical administration tasks (adding printers, setting up accounts, adding software packages) can comfortably be done from within YaST.
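As a rough illustration of the workflow Hohndel describes, the everyday tasks he lists map onto YaST2 modules that an administrator can launch from a shell. (The exact module names below are illustrative and vary between SuSE releases; check `yast2 --list` or the YaST documentation for the set shipped with a given version.)

```shell
# Launch the main YaST2 control center (graphical under X,
# ncurses text mode on a console):
yast2

# Individual administration tasks can also be started directly
# as modules -- names are assumptions and differ by release:
yast2 printer     # add or configure printers
yast2 users       # set up user accounts
yast2 sw_single   # add or remove software packages
```

Because YaST writes its settings to the ordinary configuration files, an admin who prefers to edit those files by hand can do so, which is the “stay out of the way” behavior mentioned above.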

Carl: For someone operating an ISP, what reasons could you give to choose SuSE over another Linux distribution?

Hohndel: SuSE has a strong focus on security within its distribution. Not only do we have an internal security team within SuSE Labs that audits all major packages and closely follows all relevant information sources; we also maintain an active dialogue with our customer base through mailing lists and security alerts.

Furthermore, SuSE is extremely well connected with numerous industry leaders, from both the technology and the business perspective. We maintain strategic alliances with IBM, Compaq, Fujitsu Siemens Computers, Oracle, SGI, and numerous other independent software vendors. And [since] SuSE supports the Free Standards Group, SuSE customers can be sure they are using a widely deployed Linux distribution that helps set the standards. Already today, SuSE is compliant with all the draft standards from the LSB [Linux Standard Base].

Carl: For someone operating an Internet server – what, if any, are the drawbacks for choosing SuSE?

Hohndel: Quite frankly, I don’t see any drawbacks in choosing SuSE. Our system is well tested for exactly this use case. Interestingly enough, while globally about 30 percent of all web servers run Apache on Linux, in Germany, our “home turf,” that number is above 40 percent. So SuSE Linux is in very heavy use for exactly that – an Internet server.

Carl: What advantages would you cite for someone choosing SuSE Linux over another server OS, like Windows 2000, Solaris or FreeBSD?

Hohndel: Open Source is seen by many as the software development methodology of the future. Especially in the area of infrastructure systems, such as Internet servers, Open Source has gained the respect and the trust of the companies deploying these systems.

Linux is finally keeping the old Unix promise. It is the first truly homogeneous OS in a heterogeneous hardware environment. At the same time, it is a very flexible environment that allows for easy customization and can be adapted to the needs of very different use cases, from a tightly controlled firewall system to a full-fledged server to a complete desktop.

And, of course, there are lots of companies that develop on Linux and for Linux, and there are plenty of companies supporting Linux and offering professional services around Linux. SuSE is obviously one of them.

Depending on the OS that you compare with, a different combination of these arguments applies. 🙂

Carl: Where could someone running an Internet server go for help and tips on SuSE?

Hohndel: We offer both support and professional services around SuSE Linux. A good place to start is http://www.suse.com/suse/news/PressReleases/proserv.html. We already offer 24×7 support here in Germany and will roll out this service globally soon.

And of course we offer the famous support database and component database (http://sdb.suse.de/sdb/en/html/index.html and http://cdb.suse.de/cdb_english.html).

Carl: What’s next for SuSE? What improvements are you planning for the future?

Hohndel: We continuously expand the hardware base that SuSE Linux supports. As I mentioned above, the S/390, IA64 and SPARC versions are already in beta test (or close to production), and others will follow.

The installation and configuration tools are continuously being developed and will of course remain one of the areas that we focus on, especially things like improved hardware detection, but also ergonomic aspects of the administration process.

And of course we are putting a lot of effort into many Open Source projects. [These include] “enterprise” features like high availability, improved SMP support and our work on directory services and better file systems, as well as desktop-oriented things like XFree86 and KDE.

An important focus of improvements is ISV [Independent Software Vendor] support. We are working closely with many ISVs to help them port their software to a standardized Linux environment, to make the largest possible set of applications available for Linux.

So there are a lot of things that you can expect from SuSE in the future. We are excited to be in this fast-growing industry and are looking forward to the things to come.