Tracing the Lines: The Definitive Guide to Traceroute

By Jeffrey Carl

Boardwatch Magazine
Boardwatch ISP Guide, May 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Dig through any Internet engineer’s “toolkit” of favorite utilities, and you’ll find (probably right under the empty pizza boxes) the traceroute program. Users have now jumped on the bandwagon as well, using traceroute to find out why they can’t get to site A or what link is killing their throughput to site B. 

Traceroute uses information already stored in every packet header (the IP Time To Live field) to query each hop along the path to a specified host. With it, you can find out how you get to a site, why it’s failing or slow, and what might be causing the problem. Traceroute seems like a simple and perfect tool, but it can sometimes give misleading answers due to the complexities of Internet routing. While it should never be relied on to give the complete answer to any question about paths, peering or network problems, it is a very good place to start.

Traceroute, in its most basic form, allows you to print out a list of all the intermediate routers between two destinations on the Internet. It allows you to diagram a path through the network. More important to IP network administrators, however, is traceroute’s potential as a powerful tool for diagnosing breakdowns between your network and the outside world, or perhaps even within the network itself.

The Internet is vast and not all service providers are willing to talk to one another. As a result, your connection to your favorite web or FTP site is often grudgingly left to the hands (or fiber) of a middleman, perhaps your upstream, or a peer of theirs, or even more remote than that. When there is performance trouble or even a total failure in reaching that site, you might be left scratching your head, trying to determine who is at fault once you’ve determined it’s not a fault within your control.

The traceroute utility is a probe that will enable you to better determine where the breakdown begins on that path. Once you have some experience with the program, you’ll be able to see when performance trouble is likely a case of oversaturation of a network along the way, or that your target is simply hosted behind a chain of too many different providers. You will be able to see when your upstream has likely made a mistake routing your requests out to the world, and be able to place one call to their NOC; a call that would resolve the situation much more quickly than scratching your head and anxiously phoning your sales representative.

Performing a Traceroute

Initiating a traceroute is a very simple procedure (although interpreting the results is not). Traceroutes can be done from any Unix or Windows computer on which you have an account; MacOS users will need to download the shareware program IP Net Monitor, available from shareware sites. There are also numerous traceroute gateways around the Internet which can be accessed via the web.

From a Unix shell account, you can usually just type traceroute at the prompt, followed by any of the Unix traceroute options, followed by the host or IP you’re attempting to trace to. If you receive a “command not found” message, it indicates either that traceroute isn’t installed on the computer (very unlikely), or it’s simply installed in a location which isn’t in your command path. To fix this, you may need to edit your path or specify the absolute location of the program – on many systems, it’s at /usr/sbin/traceroute or /sbin/traceroute.

Windows users with an active Internet connection can drop to a DOS prompt and type tracert followed by the hostname or IP address they want to trace to. With the Unix and DOS traceroute commands, you can use any of a number of command-line options to customize the report that the trace will give back to you. With web-based traceroute gateways, you may be able to specify which options you want, or a default set of options will be preselected.

How it Works

As the Unix man page for traceroute says, “The Internet is a large and complex aggregation of network hardware, connected together by gateways. Tracking the route your packets follow (or finding the miscreant gateway that’s discarding your packets) can be difficult. Traceroute utilizes the IP protocol ‘time to live’ field and attempts to elicit an ICMP TIME_EXCEEDED response from each gateway along the path to some host.”

Traceroutes go from hop to hop, showing the path taken to site A from site B. Using the 8-bit TTL (“Time To Live”) field in every packet header, traceroute measures the latency to each hop, printing the DNS reverse lookup as it goes (or showing the IP address if there is no name). 

Traceroute works by sending a UDP (User Datagram Protocol) packet to a high-numbered port (which would be unlikely to be in use by another service), with the TTL set to a low value (initially 1). This gets partway to the destination and then the TTL expires, which provokes (if all goes as planned) an ICMP_TIME_EXCEEDED message from the router at which the TTL expires. This signal is what traceroute listens for. 

After sending out a few of these (usually three) and seeing what returns, traceroute then sends out similar packets with a TTL of 2. These get two routers down the road before generating ICMP_TIME_EXCEEDED packets. The TTL is increased until either some maximum (typically 30) is reached, or it hits a snag and reports back an error.
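The probe loop described above can be sketched in a few lines of Python. This is a simplified illustration of the algorithm, not a replacement for the real utility: it assumes a Unix-like system, needs root privileges for the raw ICMP socket, sends only one probe per hop instead of three, and skips the reverse DNS lookups. The port number and hop limit mirror traceroute's conventional defaults.

```python
# Sketch of the classic UDP traceroute algorithm: send UDP probes with an
# increasing TTL and listen for the ICMP Time Exceeded replies they provoke.
# Requires root for the raw ICMP socket; illustrative only.
import socket

ICMP_TIME_EXCEEDED = 11
ICMP_DEST_UNREACH = 3    # "port unreachable" from the destination ends the trace

def parse_icmp_type(ip_packet: bytes) -> int:
    """Extract the ICMP type from a raw IPv4 packet (IP header + ICMP header)."""
    ihl = (ip_packet[0] & 0x0F) * 4   # IP header length in bytes
    return ip_packet[ihl]             # first byte of the ICMP header

def traceroute(dest, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        recv.settimeout(timeout)
        try:
            send.sendto(b"", (dest_addr, port))   # empty probe to a high port
            packet, (hop_addr, _) = recv.recvfrom(512)
            print(ttl, hop_addr)
            if parse_icmp_type(packet) == ICMP_DEST_UNREACH:
                break    # the destination itself answered: we're done
        except socket.timeout:
            print(ttl, "*")                        # no reply within the timeout
        finally:
            send.close()
            recv.close()
```

Run as root with, e.g., `traceroute("")`; each printed line corresponds to one hop in the reports shown later in this article.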

The only mandatory parameter is the destination host name or IP number. The default probe datagram length is 38 bytes, but this may be increased by specifying a packet size (in bytes) after the destination host name.

What it Means

What traceroute tells you (assuming everything works) is how packets from you get to another specific destination, and how long they take to get there. Armed with a little knowledge about the way the Internet works, you can then make informed guesses about a number of things.

Getting There is Half the Fun

Let’s say that I’d like to know how traffic from my website is reaching the network of Sites On Line, a large online service. So, I run a traceroute to their network from my webserver.

traceroute to (, 30 hops max, 40 byte packets
1 (  1 ms  1 ms  1 ms
2 (  1 ms  1 ms  1 ms
3 (  2 ms  2 ms  2 ms
4 (  2 ms  1 ms  2 ms
5 (  3 ms  3 ms  3 ms
6 (  4 ms  7 ms  8 ms

You can see from this example that my traffic passes through a router for my ISP, then through what is evidently an OC-3, before entering what (judging by the name) is a DS-3 gateway between my ISP and SOL. From there, it enters the GigaPop of SOL, passes through a router which doesn’t have a reverse lookup (I see only its IP address), and eventually reaches one last router (clearly marked as leading to its webserver) at SOL.

Because of the way routing works, an ISP can really only control its inter-network traffic as far as choosing what to listen to from each peer or upstream, and then deciding which of those same contact points gets its outbound packets. So when you’re tracerouting from your network to another network, you’re getting a glimpse of how your neighbor network is announcing itself to the rest of the ‘Net. Because this “asymmetric routing” can occur in some circumstances, you should probably do two traces (one in each direction) between any two points you’re interested in.

Reading the T3 Leaves

While you can easily read the names that appear in the traceroute, interpreting them is a hazy enterprise. If you’ve got a pretty good feel for the topology of the Internet, the reverse lookups on your traceroute can (possibly) tell you a lot about the networks you’re passing through. Geographic details, line types and even bandwidth may be hinted at in the names – but hints are really all that one can expect. Since every large network names its routers and gateways differently, you can assume some things about them, but you can’t be sure. If you want to engage in router-spotting, note that common names may reflect:

• locations (a mae in the name might indicate a MAE connection, or la might indicate Los Angeles)

• line type (atm may indicate an ATM circuit as opposed to a clear-channel line)

• bandwidth (T3 or 45 is generally a dead giveaway, for example)

• a gateway (sometimes flagged as gw) to a customer’s network (sometimes referred to as cust or something similar) 

• positions within the network (some lines may be named with core or border, or something similar)

• engineering senses of humor (as seen by the reference to Babylon 5 in my ISP’s network)

• the network whose router it is (almost always identifiable by the domain name; if the router doesn’t have a reverse lookup, you can perform an nslookup on an IP address to find out whose IP space it is in).
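As a toy illustration of this sort of router-spotting, the clues above can be turned into a small Python heuristic that scans a reverse lookup for familiar tokens. The token-to-meaning table below is my own guesswork based on the examples in this article, not any real naming standard, and it is exactly as unreliable as the caveats here warn.

```python
# A toy heuristic for router-name clues. The token table is illustrative
# guesswork -- router names are hints at best, as the article stresses.
HINTS = {
    "mae": "exchange point (MAE)",
    "atm": "ATM circuit",
    "t3": "DS-3 / 45 Mbps line",
    "45": "DS-3 / 45 Mbps line",
    "gw": "gateway",
    "cust": "customer network",
    "core": "core router",
    "border": "border router",
}

def router_hints(hostname):
    """Return possible interpretations of tokens in a router's reverse lookup."""
    hits = []
    for token in hostname.lower().replace("-", ".").split("."):
        for key, meaning in HINTS.items():
            if token.startswith(key) and meaning not in hits:
                hits.append(meaning)
    return hits
```

Feeding it a name like `192.ATM9-0-0.GW3.HUGE.NET` suggests an ATM circuit and a gateway, which is plausible but, as the next paragraph points out, never certain.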

However, it should be reiterated here that amateur router-ology is a dangerous sport, since really the only people who understand a router’s name are the people that named it. So don’t get too upset when you think you’ve spotted someone routing your traffic through Nome, Alaska when in fact it was named by a Hobbit-obsessed engineer with bad spelling.

Some Clues About Connectivity

A prospective ISP,, tells you that it is fully peered. Is there any way that you can check up on this? Well, yes and no.

Traceroute can tell you whether two networks communicate directly, or through a third party. First, you traceroute from a traceroute page behind to a location within

traceroute to (, 30 hops max, 40 byte packets
1 (  12 ms  17 ms  8 ms
2 (  15 ms  32 ms  12 ms
3 (  30 ms  15 ms  13 ms
4 (  82 ms  103 ms  73 ms
5 (  77 ms  74 ms  73 ms
6 (  121 ms  76 ms  75 ms
7 (  80 ms  79 ms  94 ms

It is evident from this traceroute that and travel through a third party to reach each other. While this doesn’t say anything definite about their relationship, two networks will generally pass their traffic directly to each other if they are peers (barring strange routing circumstances or other arrangements). This doesn’t paint a full picture (and you should confirm it with a trace in the opposite direction), but it gives you reason to doubt prospectiveisp’s claim of full peering. 

Note that while traceroute can tell you whether two networks communicate directly or indirectly, it can’t tell you any more about their relationship. Even if two networks do communicate directly, traceroute can’t tell you whether their relationship is provider-customer or NAP peering (except perhaps through whatever hazy clues you obtain from router names or by calling a psychic hotline and reading them your trace). 

In the above example, you might conclude that prospectiveisp buys transit or service from, which peers with (or buys service from) Of course, the opposite may be true – that peers with, and buys service from However, common sense and a rough feel for the “pecking order” of first- and second-tier networks should guide your guesses here. 

Where’s the Traffic Jam?

Let’s say that you encounter some difficulty reaching the Somesite.Com website (you describe the site’s download speed as “glacial”), and decide to show off your newfound traceroute skills to investigate the cause of the problem. Here’s what you find:

traceroute to (, 30 hops max, 40 byte packets
1 (  1 ms  0 ms  1 ms
2 (  2 ms  2 ms  2 ms
3 (  6 ms  4 ms  22 ms
4  112.ATM2-0.XR2.HUGE.NET (  8 ms  28 ms  30 ms
5  192.ATM9-0-0.GW3.HUGE.NET (  8 ms  32 ms  28 ms
6  * * *
7  * somesite-gw.customer.HUGE.NET (  12 ms !A *

From this, you can guess (with a high degree of certainty) that the initial source of trouble is outside of your ISP’s network and somewhere along the route used by the site’s carrier, Hop six shows some type of trouble, with none of the three packets sent to that gateway returning. Notice that the traceroute continues on to the next hop despite the failure at hop six. The same problem was also likely responsible for the loss of two of the packets sent to hop seven. Thankfully, one reply managed to get back and indicated the trace was complete. 

Using the example above, you could ping the address reflected in hop seven and then compare it to a ping to, say, hop four or five, and see if loss of packets to hop seven versus hop five reflects what the traceroute indicated. 

Selected Traceroute Error Messages:

• !H Host Unreachable

This frequently happens when the site or server is down.

• !N Network Unreachable

This can be caused by networks being down, or routers unable to transmit known routes to a network.

• !P Protocol Unreachable

This only happens when the destination does not support a protocol used in the packet, like IPX. 

• !S Source Route Failed

This can only happen if you are using the source route functions of traceroute, i.e., tracing from a remote site to another remote site. The site you are using source route tracing from must have source routing turned on, or it will not work.

• * Request Timed Out

This appears when no response to a probe is received within the timeout period: either the probe or the router’s ICMP Time Exceeded reply was lost on the way back to site A, or the router is configured not to respond at all.

• !TTL is <=1

This appears when the reply arrives with a TTL of 1 or less, which can happen when the TTL is rewritten along the path via RIP or some other protocol.
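For quick reference, the annotations above can be collected into a small lookup table. The helper below is a hypothetical convenience for scanning a traceroute output line, not part of any real traceroute distribution.

```python
# Lookup table for the traceroute annotations discussed above, plus a helper
# that spots them in a hop line. Illustrative only, not a real traceroute tool.
ANNOTATIONS = {
    "!H": "host unreachable",
    "!N": "network unreachable",
    "!P": "protocol unreachable",
    "!S": "source route failed",
    "*":  "no response within the timeout",
}

def explain_line(line):
    """Return explanations for any annotations found in a traceroute hop line."""
    tokens = line.split()
    return [meaning for flag, meaning in ANNOTATIONS.items() if flag in tokens]
```

Running it over hop six from the earlier example (`6  * * *`) would flag the missing responses immediately.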

Caveat Tracerouter

Before you make too many decisions based on the results of traceroutes, you should be very aware that tracerouting is a complex business, and that plenty of otherwise innocuous things can interfere with it. For example, it may be possible to ping a site, but not to traceroute to it. This is because many routers can be configured to drop Time Exceeded packets but not echo reply packets.

Traceroutes may return an unknown host response, but this frequently does not mean that the site is down or that the network connection in between is faulty. Some domains are simply not mapped to be used without a third-level domain as part of the name. For example, tracerouting to will not work, but tracing to will.

In short, tracerouting is a valuable tool, but does not give a complete picture of a network’s status. Try to use as many gauges of network status as possible when attempting to debug an Internet connection. 

Other Traceroute Resources:

A lengthy tutorial on Traceroute by Jack Rickard.

A directory of Traceroute Gateways

The Multiple Simultaneous Traceroute Gateway

Boardwatch’s own list of traceroute servers

A handy site that allows you to trace from Digex’s network at MAE-East, MAE-West, Sprint NAP or PAIX.

Home Page for the program IP Net Monitor for MacOS

Freenix Support Sites

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, April 2000


Many times, we are faced with difficult questions about installing, maintaining and running an Internet server with a Free Unix OS. And the best answer is, “take a nap.” That’s what I always do.

Anyway … the point is that anyone who tells you they know everything about running a Unix server is lying. The power and configurability of Unix lead to “complexity.” “Complexity” leads to there being “lots of information that you don’t know.” This leads to “needing to look stuff up.” Also, “fear” leads to “anger,” “anger” leads to “hate,” and “hate” leads to “suffering.”[2] 

The first source for answers to any problems with your Freenix of choice is to be found in your man pages, and whatever handbook pages your OS or distribution provides. However, since these often tend to be big on abstractions and short on real-world examples, the web is often your best choice for clear answers to common problems. Since the amount of documentation available on the web is tremendous, in this column I’ll be focusing on free support resources; commercial support will be the topic of a future column.

The following is intended to be a very brief guide to the best free spots on the Web for information. The list is very incomplete; if you have a favorite Freenix resource site you don’t see here, please e-mail me.

Free Linux Support Resources

• Linux Documentation Project ( The LDP is one of the best things that the Linux community has going for it. The subjects sometimes tend toward the arcane and academic, but you’ll almost certainly find a HOWTO or FAQ guide for any program or service you want to set up for almost any Linux distribution. Its material isn’t always updated to cover the “latest and greatest,” but overall it’s an invaluable resource for how to do almost anything with Linux.

• Linux Distribution Support Sites: Most of the various Linux distributions provide support pages for their distributions. Helpfulness varies from distro to distro, but they generally provide good tips on distribution-specific issues. Among the most notable are Red Hat Linux (, Linux Mandrake (English homepage at, SuSE (, Debian GNU/Linux (, Corel LinuxOS (, Caldera OpenLinux (, Slackware (, LinuxPPC ( and WinLinux 2000 ( For a good list of English-language Linux distributions, go to

• Linux Journal Help Desk ( The print magazine Linux Journal offers a high-quality collection of links to outside help resources. The links are updated well, and are a good first step to finding answers.

• Linux Fool ( This site features a number of message/discussion boards on topics ranging from X windows to DNS configuration. Some of the forums contain far more questions than answers, but the site has a fairly high signal-to-noise ratio, and is an excellent way to get one-on-one answers. Linux Fool has relatively few registered readers and posters as of this writing (about 1,000), but hopefully both that number and the range of topics covered will have grown by the time you read this. 

• Fresh Meat ( The number-one source for new software and updates. If you’re looking for the newest version of anything, you’ll find it here; more importantly, you can search for a description term (like “statistics”) and find software whose description matches. Not strictly useful for Q & A, but helpful as a launching point for homepages and resources of software packages you might be using.

• Gary’s Encyclopedia ( This site is geared toward Linux, but also has some good information for any Unix user. The “pedia” features a few well-written original tutorials and literally hundreds of links, most of which are very well maintained, and many of the links feature comments about their specific subject or usefulness. Sadly, the page contains a note that as of January 2000, the links will not be actively maintained due to a lack of usage.

• Linux Developer’s Network ( Updated several times daily, this site is an excellent source for information on new software packages (especially enterprise apps and code libraries). It also features links and tutorials, and while it doesn’t have the range of informative or thought-provoking reader comments that Slashdot (, the dean of Linux/open source news sites) does, it also doesn’t have the high noise-to-signal ratio.

• Linux Online Help Center ( Hundreds of Linux support links. They aren’t as updated as they might be (when I looked at it, its links to the LDP still hadn’t been moved from the LDP’s old home at UNC), but there’s a lot of helpful material there that’s likely to at least point you in the right direction. The site lists its links with helpful comments, and can point you to resources you’re unlikely to find elsewhere.

• LinuxCare Support ( LinuxCare is a commercial Linux support company, but they maintain a free support database at this site. It’s well-organized, but doesn’t yet have many answers, and requires you to sign up for free access before you can see the answers to any questions posted.

• LinPeople ( This isn’t technically a web resource, but it can still be quite useful. The Linux Internet Support Cooperative is a group of sysadmins who devote time to answering Linux questions on their IRC channel. Depending on the time of day, phase of the moon, and who else is logged on, the responses to your questions can range from thoroughly helpful to utter silence; still, one-on-one dialogue is often the best way to understand important Unix problems and concepts. Find LinPeople at, channel #LinPeople.

• Linux Glossary Project ( The Linux Glossary provides a number of “user-friendly” basic definitions of important terms. It seems to borrow heavily from the Hacker Jargon File and other common sources, but can be useful if you’re looking for a down-to-earth definition of an unfamiliar term.

Free *BSD Support Resources

• FreeBSD Home ( FreeBSD’s website should probably be your first destination when looking for an answer. Most important is the documentation on the site, including the FreeBSD Resources for Newbies page (, their (slightly outdated) FreeBSD Tutorials page (, and the FreeBSD FAQ ( Most valuable of all, however, is the FreeBSD Handbook (, which can answer almost any common question you’ll have (note that a copy of the handbook should have been put in your /usr/share/doc directory when you installed the system). FreeBSD’s Support Page ( has a number of resources, including links to mailing lists and newsgroups, a list of regional user groups, links to the GNATS bug-reporting system, and lists of commercial FreeBSD consultants. 

• NetBSD Documentation ( NetBSD’s support site isn’t as in-depth as some others, but it’s an admirable effort, considering the number of platforms the OS is ported to. The documentation is very well organized (my personal documentation test is how quickly you can find out how to boot into single-user mode). Overall, very helpful to NetBSD users, especially new sysadmins trying to find out how to perform normal tasks.

• OpenBSD Home ( OpenBSD’s site provides several high-quality resources for its users, including the OpenBSD FAQ (, a Manual Page Search feature (, and a listing of OpenBSD Mailing Lists (

• Daemon News / “Help, I’ve Fallen” ([YYYYMM]/answerman.html): Daemon News is a monthly e-zine covering all sorts of cool *BSD-related topics, including descriptions of new software and “how-to” articles. If you run any BSD, it’s definitely worth a look every month or so. 

The “Help, I’ve Fallen” column is an absolute must for newer *BSD administrators. It might not cover your more “out there” questions, but a lot of common problems (from setting up modems to setting up printers) have a very understandable explanation here. Each column includes a listing of all of the previous questions answered, so it’s best to look at the most recent column first.

• Comprehensive Guide to FreeBSD ( Covers everything from installation (Chapter 2) to setting up PPP with FreeBSD (Chapter 8). Some of the material is outdated (referring to FreeBSD 2.x), but most of it is very applicable. It’s short on theoretical grounding and explanations, but very good for real world “quick-and-dirty” explanations on almost any problem to be found with FreeBSD, and much of the information can be applied to other BSDs.

• The FreeBSD Diary ( The “diary” of a FreeBSD sysadmin, focusing heavily on links to topics regarding Internet servers (like Apache and e-mail). Not the easiest to search for answers, but featuring links to a lot of content you won’t find elsewhere (if you can find it here).

• FreeBSD “How-To”s for the Lazy and Hopeless ( Some of the links on this site are outdated, but most provide useful information. While this is long on step-by-step intros and short on explanations, it can provide links to quick info (on things from TCP Wrappers to PnP audio gear drivers) if that’s what you’re looking for. 

• FreeBSD Rocks! ( Not updated as frequently as some other sites, but still a good choice when you’re looking for FreeBSD news or support forums.

Free General Unix Support Resources

• Geocrawler ( Geocrawler is a repository of archives of more than a hundred mailing lists, on topics ranging from Linux distributions to *BSD to Apache, PHP and Perl. Each mailing list has a search interface, making the site an invaluable resource for those searching for new topics or those not well-documented in existing HOW-TOs or FAQs.

• Root Prompt ( Provides a number of informative articles and tips on various *nixes. Not much original content, but provides an excellent (and well-updated) collection of links to papers and FAQ/HOWTOs on everything from configuring Sendmail to recycling IP addresses.

• Unix Help for Users ( This site provides basic Unix questions and answers. If a question seems too simple to be answered on one of the more platform-specific sites, it’s probably here.

• Unix Guru Universe ( UGU provides a neat search interface with a number of links to outside information on almost any Unix you can think of. While it’s light on “in-house” material, it can find things in nooks and crannies that other sites don’t have.

In a future column, we’ll look at commercial support offerings online. Thanks for reading, and remember – if you’ve had half as much fun reading this as I’ve had writing it, I’ve had twice as much fun as you.

[1] My sincerest apologies to anyone offended by this, but I just had to include a John Dvorak joke.

2  The Force for Dummies, Master Yoda, Jedi Press.

Getting to Know OpenBSD – An Interview with Louis Bertrand

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, March 2000


If you aren’t familiar with the OpenBSD project (, it’s worth your time to find out about it. The core of OpenBSD’s mission is to provide the most secure Unix operating system available. For many ISPs, this is a very powerful consideration for protecting their customers’ data and their own – and one that can give them a competitive advantage among security-conscious customers.

OpenBSD offers a free variant of the BSD Unix operating system. It runs on a variety of platforms (from Intel x86 to Motorola M68k to Sun SPARC and several others). It has a number of native ports of popular software, and includes binary emulation options to run programs written for other operating systems including Solaris, Linux, FreeBSD, BSD/OS, SunOS and HP-UX. 

OpenBSD runs on a tight budget. The project is funded primarily by the sales of installation CD-ROMs, T-shirts and goodies, plus donations of money and equipment. OpenBSD has various commercial support options, and is available via anonymous FTP ( or on CD (

Answers from Louis Bertrand

Recently, I had a chance to interview Louis Bertrand (an OpenBSD developer, and the project’s unofficial PR guy) on the past and future of OpenBSD, as well as why ISPs might want to deploy it. Here’s what he had to say.

Question: What’s the short version of the history of the OpenBSD project?

Bertrand: Started in 1995 by Theo de Raadt, OpenBSD’s primary goal was to address the deplorable state of Unix and network security at the time. The cryptography angle was a natural outgrowth because it allowed the team to address security problems inherent in some protocols without breaking standards. The main effort was a comprehensive code audit, looking for the kind of sloppy coding practices that lead to buffer overrun attacks, races and other root hacks. Another goal was to release a BSD-derived OS with completely open access to the source code: OpenBSD was the first to let any and all users maintain their own copy of the source tree via anonymous CVS. We also kept the multi-platform aspects of BSD, subject to manpower – security comes first.

The OpenBSD source tree was created in October 1995, the first FTP-only release (2.0) happened in Fall ‘96, the first CD-ROM release (2.1) came out in Spring 1997, and CD-ROM releases have been coming out like clockwork every six months. 2.6 just came out Dec. 1, 1999.

OpenBSD is derived from NetBSD (Theo de Raadt was a co-founder of that project). NetBSD is in turn derived from the Berkeley Computer Systems Research Group final releases of 4.4BSD-Lite.

Q: What does OpenBSD see as its niche or mission? Will that stay the same in the future?

Bertrand: OpenBSD is unique because of the security stance and the code audit, and the cryptography integration. There are no plans to change that focus. I mean, why should we? No other OS vendor (open source or commercial) is doing an active code audit, and nobody integrates encryption the way we do.

OpenBSD’s mission continues to be working proof that it is possible to offer a full-function OS that is also secure. The software industry and consumers (both commercial and open source) are locked into the “New! New! New!” mindset. Consequently, the accepted security stance is to back-patch whenever someone finds a problem. We completely reject that – ideally we’ll have corrected all the problems before they can be discovered and exploited by the bad guys. 

OpenBSD’s existence is also a constant reminder that the US government’s ban on exporting strong cryptographic software (it’s considered “munitions”) has become essentially futile. It is now easier to obtain strong encryption software outside the USA than inside. Being free software, we also completely avoid the restrictions of the Wassenaar Arrangement (UN-based arms export controls).

Q: Where does the project stand now (version, newest features added)?

Bertrand: The most important addition to 2.6 was OpenSSH, a free implementation of the SSH1 protocol ( OpenSSH is integrated in OpenBSD 2.6. It’s an essential replacement for telnet and rsh for remote access where there is a danger of password sniffing over the wire. You can also use scp to transfer files instead of rcp or ftp. The last hurdle to complete crypto coverage in OpenBSD is the patent on RSA public key encryption (as used in SSL for SSH and secure web servers). OpenSSL is used everywhere except the USA, where you can only use the RSAREF library, and then only for non-commercial applications.

Another big improvement in 2.6 was the huge effort to improve the documentation (both manual pages and web FAQ) and to make sure the information was up to date and correct. We’re trying to avoid the situation where people are dissatisfied with the main sources of information and start writing their own how-to documents – that only serves to fragment the sources of information, and users end up wasting a lot of time hunting around for reliable information.

Q: What’s coming up on the roadmap for the next major/minor versions? What new features? When might we expect to see them?

Bertrand: If all goes according to plan, there will be a release 2.7 in early summer 2000. There are no plans for a “major” release (e.g. 3.0).

Currently there’s a lot of work going on to integrate the KAME ( IPv6 networking code into OpenBSD (it’s already supported, but this actually integrates it into the source tree). There’s also a major rework of the “ports” tree, the facility by which people can download and build applications simply by doing make install in the target directory.

There is also some exploratory work going on for multi-processor support. Previously we flatly turned down the idea because it would be a huge effort that would only benefit a minority of users who are running SMP machines. But the recent drop in prices of SMP hardware means that it’s time to revisit that decision, and a few developers are interested in doing it. We still need to make sure we’re doing it right, not just heating up the second processor.

Q: What are OpenBSD’s weak spots right now, or what needs the most work?

Bertrand: The main criticism leveled at OpenBSD is that it doesn’t track the very latest standards. It’s a fair comment because we’ll often hold back on a new feature because of stability concerns. We held back APM (power management) support from the 2.5 release because it hadn’t been tested enough, and we still use named-4.9.7 for DNS because of security concerns, even though named-8 is in routine use elsewhere (there was a remote root hole found in bind-8.2.1 as recently as last November).

A lot of people have been asking for multi-processor support. Up to now, that was out of the question, but SMP hardware has been getting cheaper and several developers are interested in starting on it.

Also we don’t support the Macintosh PPC platform or the HP PA-RISC platforms. Again, there’s some work going on to change that. We’ve also dropped active support on Alpha because we were getting no support from Compaq. It still builds and runs, but we’re falling behind on hardware support.

Q: From an ISP/webhosting perspective, what specifically would you cite as reasons to choose OpenBSD?

Bertrand: Security and code correctness, along with the “secure by default” configuration. They all go hand in hand.

Start with the “secure by default” philosophy. It means that a sysadmin doesn’t have to rummage around dark corners shutting down risky or useless server daemons that the installation script enabled by default, or run around tightening up permissions. Sysadmins are very busy and we let them get their job done by enabling only those services and daemons they actually want and need. The default installation of OpenBSD has no remote root holes, and hasn’t had one for over two years.

Then there’s the security/correctness angle. The OpenBSD code audit discovered that if you fix the bugs, you have gone a long way to securing the system. In fact, it’s not necessary to prove that some buffer overflow would have caused a security hole – it’s enough to know that the software is now doing what it’s supposed to do, with no nasty side effects. That’s the kind of proactive measure that allowed OpenSSH to avoid the recent RSAREF vulnerability in SSH.

Finally (but not least), there’s the built-in cryptography. Whether you’re setting up an IPSec VPN between subnets across the Internet, or just running a modest client/server VPN with SSH and stunnel, OpenBSD has the built-in tools, maintained and tested for interoperability with commercial vendors. You don’t have to download, build, test and integrate your own cryptographic subsystems. No other OS ships with this level of integration.

Q: Can you point to any user base or constituency that OpenBSD is a “must-have” for? If so, why?

Bertrand: Any users who need to run intrusion detection, firewall or servers carrying sensitive information (e-commerce too!). For intrusion detection, there’s no point in trying to guard against malicious behavior on your network if the IDS system or firewall itself is vulnerable. For sensitive data, the built-in SSL makes it very easy to set up a secure web server. (Note, however, that the RSA patent restriction is still something US-based service providers must deal with).

Security is also a concern if you’re offering VPNs: any encryption scheme is worthless if the underlying OS is vulnerable (this is like an intruder bypassing a steel door by breaking a window).

Q: How vital do you see security as being to the future of the ‘Net and the future of OpenBSD? 

Bertrand: Extremely vital for both. I’d like to say concern for security is going to hit a threshold where more and more people are going to ask tough questions of their software vendors, ISPs and e-commerce outlets, but it’s probably wishful thinking. We’d like to see more than just “safety blanket” assurances from vendors.

One nasty trend is the growing full-time presence of powerful servers on end-user DSL and cable modems. This means there will be a fresh batch of compromise-able machines available for concerted attacks. If more vendors adopted a “secure by default” stance, that would go a long way to reducing the exposure of naive sysadmins.

Should You Use OpenBSD?

There are positives and negatives to OpenBSD’s focus on security. On the plus side, it’s the most secure free operating system you can get anywhere. If your number-one concern is making your server safe from intrusion, look no further than OpenBSD.

All Freenixes offer options for securing your system, but the fact that OpenBSD is secure “right out of the box” is a major advantage for the inexperienced administrator who isn’t sure how to secure a system, or the busy administrator who has the knowledge but not the time. A plain-vanilla installation of the OS will already include a number of security features that might take hours of installation and configuration (plus some considerable knowledge and research) with other Unixes.

On the negative side, OpenBSD’s security comes as a tradeoff for new gadgets and features – a tradeoff you may or may not be willing to make. Also, there’s a very true old saying that “security is inversely proportional to convenience.” And tight security can be very inconvenient. A lot of administrators (myself included) are used to saving time by leaving ourselves little insecurities (like rsh, allowing logins as root, or not using TCP-Wrappers for services like ftpd or telnetd). Ironically, many of the same busy or inexperienced administrators who find benefits in a “secure by default” installation may not have the time or the knowledge to then enable these insecure items if they want them.

And remember that you, the server administrator, can easily foil OpenBSD’s security precautions by doing something boneheaded, like installing third-party software with known security holes, or running your webserver as root. You yourself can make OpenBSD insecure by twiddling the knobs and switches if you don’t know what you’re doing.

Unless security is your primary consideration, you probably aren’t going to use OpenBSD for all of your Unix servers. Linux, FreeBSD and NetBSD all excel in various areas where OpenBSD does not. However, OpenBSD certainly has its place, and should be part of any network administrator’s toolkit. For your most security-sensitive tasks, OpenBSD is very likely to be “the right tool for the right job.”

FreeBSD 4.0 and Beyond – An Interview with Jordan Hubbard on Improvements, New Platforms, and What’s to Come

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, February 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche following but an influential one among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

By the time you read this, I will be dead. Just kidding. By the time you read this, FreeBSD 4.0 should be out – or close to it. So, what’s the big deal? And why should you upgrade to it, or use it as a convenient excuse to switch to FreeBSD? 

The Short Version of the Story

A quick recap for the new students in class: FreeBSD is a free version of the BSD Unix operating system (and one which claims to have roughly 30 percent market share in the ISP space). BSD Unix is descended from the original AT&T Unix code currently maintained by SCO. When BSD grew up, moved out of the house for good and broke away from the dysfunctional Ma Bell family, FreeBSD emerged, originally dedicated to enhancing the performance of BSD servers on Intel-compatible (x86) hardware.

FreeBSD began a relatively rapid growth and adoption phase, at least inside the ISP/webhosting markets. FreeBSD’s developers concentrated on developing and optimizing for the hardware they were using – which was largely x86 servers, SCSI drives and other server-centric (rather than home-user centric) equipment. As a result, they developed (arguably) the highest public profile of any of the free BSD Unixes, building a list of users that included Yahoo!, the world’s busiest FTP server, and Hotmail.

However, from seemingly out of nowhere, Linux exploded onto the scene and left all other free Unixes behind. Although BSD Unix had been around for years, as far as most of the world was concerned, Linux was it.

By 1998, FreeBSD was at its lowest ebb. Linux – through its open development model and focus on end-user hardware – had long since become the free Unix for legions of old and new hackers worldwide. Linux advocacy was loud, assertive, got press attention – and as a result, Linux had stepped into the public limelight while the BSDs were Unix’s “crazy old aunt up in the attic.” Finding the old O’Reilly books on 4.4 BSD Unix – let alone finding a mention in the press of BSD – was harder than putting lipstick on a chicken. Fortunately, FreeBSD seemed to have learned something from the experience. 

The Fall and Rise of FreeBSD

Jordan Hubbard works for Walnut Creek CD-ROM, which sells FreeBSD, Linux and other open-source distributions. He’s also the Current Release Engineer for FreeBSD, and its unofficial PR guy. Known as “jkh” by legions of FreeBSD devotees, he’s also responsible for /stand/sysinstall and other user-interface niceties of FreeBSD, as well as a host of other internal enhancements.

In his 1999 FreeBSD “State of the Union” address, Hubbard indicated – like Steve Jobs before him – that he understood that technical superiority of an operating system alone did not guarantee its success – it took PR and user help. He said that FreeBSD’s upgrades would henceforth come in tenths, rather than hundredths, so that the public and press would take more notice. Furthermore, he actively called on FreeBSD users to advocate for their OS. 

The FreeBSD core team had fortunately learned from the divisive standards wars that Unix had undergone (and they had been a part of). FreeBSD would work with Linux rather than against it. Recognizing Linux as a sort of de facto API for free Unix applications, FreeBSD would include compatibility for Linux, rather than insist on developers writing their applications for one OS or the other. Smart move.

Since that time, FreeBSD has enjoyed a significant growth in popular support, has been the subject of articles in MSNBC and Salon, the subject of a special section on Slashdot, and has experienced a surge in downloads. Darwin, the open-source base of Apple’s forthcoming MacOS X, will have its future releases tied to FreeBSD. Even better, FreeBSD now has its own yearly tradeshow – FreeBSDcon, showing strong signs of the growth in its support.

I ran into Jordan Hubbard at ISPCON, when I stopped by the FreeBSD booth to see if anyone could tell me whether one of the core team members would be at the conference for me to interview. There was Hubbard. The result was much like going to CompUSA to buy a copy of Slackware and finding Linus Torvalds in a red shirt, explaining it was in aisle six. I fumbled and stuttered that I needed to go grab my iBook laptop, and I’d come back and interview him, and P.S. I thought he was so cool. He turned to my friend Namita who came with me and asked, “Is he always this smooth?” Anyway, this interview is the result of that meeting and a couple of E-mail follow-ups:

Q & A with Jordan Hubbard

Question: When can we expect to see FreeBSD 4.0?

Hubbard: Sometime in the January – February timeframe. We’ll enter feature freeze for the 4.0 branch on December 15th, after which we’ll take about 30 days to stabilize the branch.

What are the new architectures that FreeBSD will be coming to?

Hubbard: Well, we’re working on several fronts. First, we’re working with Compaq right now on Alpha SMP (Symmetric Multi-Processing, or multiple CPUs) support (for which Compaq has kindly sent us almost $300K worth of miscellaneous hardware) and hope to have something working there by Q1 2000. We’ve also started a PowerPC port with the Power Macintosh G3 being our initial reference platform, but there are no firm dates on when that will be available. If all goes well, we should also see some significant work on the UltraSPARC port coming into the tree around the same time (Q1 2000).

Does branching out to more architectures mean you’re changing your focus? Does this mean anything to the speed or quality of x86 development?

Hubbard: We’re not really changing our focus so much as simply leveraging the much more significant developer assets we now have available, many of whom want to work on non-x86 platforms, so why not let them? I also don’t expect this to affect the x86 development work in any way since the lion’s share of our developers will still be working on that platform and we have enough extra people now that we can do both. 

The only reason we’ve focused exclusively on the x86 platform to date is that we didn’t feel we had the resources to do a really good job there AND also tackle other architectures. However, we’re at least three times the size we were when we made that initial determination. I think it’s now more than possible for us to do a good job on at least four major architectures, especially since such work tends to “amortize” due to similarities in those various platforms. For example, the PCI bus is an increasingly common interface standard, and just about everyone offers USB.

I think going cross-platform is simply an inevitable aspect of our evolution and the Alpha port has allowed us stay more “honest” with respect to FreeBSD’s architectural dependencies.

How important do you see ISPs/webhosters/etc. as a constituency? What are you working on adding/improving that might be of special interest to them or might constitute a reason (aside from the already-existing reasons) for ISPs to use FreeBSD?

Hubbard: We see ISPs, NSPs and the SOHO market as very important parts of our constituency and we intend to work on a number of issues which will make FreeBSD more attractive to them. In no particular order those are:

• Pervasive multi-threading (in the kernel)

• Fine-grained SMP resource locking

• Clustering (on a variety of fronts)

• RAID controllers and other large-scale data storage systems

• Embedded systems support (more picoBSD work)

• Serious security auditing and more built-in security subsystems (Kerberos 5, IPSec, OpenSSH, etc.)

• Fiber channel support

• Gigabit (and faster) networking (already works, but we’ll be also working on zero-copy TCP and other performance features)

• IPv6 and IPSec (both slated for 4.0).

• Better multimedia support (3D cards, audio, video capture, etc).

We’re also working with a number of vendors, like Oracle, to bring their products to the FreeBSD market and are getting far more support for this than we used to. Things are definitely looking up in that category and we hope to have some significant announcements by the end of the month.

How would you define the mission of FreeBSD now?

Hubbard: I’d say it’s much the same as it always was: Provide robust, high-performance BSD technology in a form which is accessible to the “mass market” by being easy to install and configure and by offering better documentation. It has never been our intention to be just another academic resource (BSD’s traditional niche); we want to give Solaris, NT and Linux a run for their money (in that order).

What advancements in ease-of-administration will people see in 4.0? What about directions for the future?

Hubbard: More easy-to-turn knobs in places like /etc/rc.conf and more tools for manipulating those knobs in a friendly fashion, to put it in a nutshell. Future directions include making kernels truly GENERIC, in that you don’t have to configure your own any more; you just have one kernel, which dynamically extends itself as necessary. It will still be possible to configure and run static non-self-extending kernels if necessary for security or other reasons, but this shouldn’t be necessary for the average user. We’re also putting more time into configuration interfaces and fancier GUI front-ends for those things where it makes clear sense to give the user a “friendlier face.”

When might FreeBSD’s planned new threads architecture (the proposed mix of kernel-level and user-level threads I’ve read a bit about in freebsd-arch) make its way into stable? What benefits will ISP/server users see?

Hubbard: …4.0 is our current target for the new threads feature set. ISPs should see the ability to multi-thread applications like mysql in such a way that a single process can take advantage of multiple CPUs, unlike the one-process-per-CPU model we have now, and we’ll have more POSIX-compliant thread behavior for people porting applications from other platforms.

In Conclusion

Jordan Hubbard indicated that FreeBSD might soon be giving OpenBSD a run for its money as the “secure” BSD. Hubbard also alluded to – but wouldn’t give any specifics on – a potential “big name” player offering commercial support for FreeBSD. By the time you read this, this name may have come to light; if not, we’ll just have to wait and see. However, it appears that FreeBSD is very much on an upswing right now.

Is FreeBSD 4 a “Must-Upgrade?” Probably not right away, since its advances are evolutionary rather than quantum. Nonetheless, if you’re using FreeBSD now, you’ll want to upgrade soon; if the new features and improvements aren’t enough, you’ll want to consider that bug and security fixes always come fastest for the most recent version. 

Having watched the rise, fall and rebirth of Apple, I think I know an OS in resurgence when I see one. And FreeBSD is definitely on the track to resurgence. It will be a very long, hard road – but I think that all of us who are looking for a free Unix with optimum performance and usability will benefit.

Freenixes by Remote Control with VNC

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, January 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche following but an influential one among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Do you ever run across something that’s just so neat that you run around telling all your friends how they have to try it and how it’s the greatest thing since Mountain Dew? And eventually your friends get tired of it and want to hit you in the head with a crowbar?

Well, neither do I. Mainly because I don’t have any friends. But if I did, I’d be running around bugging them about VNC (Virtual Network Computing). So, since I don’t have any friends, I’m going to bug you about how you can have a free graphical remote-control for your Freenix machine (through XFree86) from any other Unix, Windows, MacOS – or almost any other platform, including PalmOS – computer. Even better, you can control your Unix, Windows or Mac computer from your Freenix machine. It’s like X windows, Symantec PCanywhere and Apple Remote Access all rolled into one. 

VNC is a client/server program from the good folks at AT&T Labs in Cambridge, UK. VNC has two components: a server program and a viewer. Install the server on any supported OS, and you can use the viewer to control that computer in a graphical window. VNC is released under the GPL (GNU General Public License), so the source code is freely available along with the binaries. 

So, what does this mean to you? If you’re accustomed to administering your Freenix through an X windows environment (like AfterStep, Enlightenment or KDE), VNC allows you a completely portable, free and easy-to-use way to do this from virtually any other computer, no matter what OS it’s running. In addition, this one app allows you to manage your consumer-OS workstations from your servers as well. On multi-user systems, VNC can be run for specific users and reflect their particular environments.

Why Choose VNC?

Some of you may be saying, “On Unix, this utilizes X windows. So, why is this better than running X windows sessions remotely?” Well, you should get professional help, since you’re talking to a magazine. But the answer is that VNC has three notable advantages over traditional remote X administration. 

First, the session state is stored on the server rather than on the client machine. This means that I can leave my VNC session at work in the middle of typing a line, go home and fire up a VNC viewer on my home PC, and pick things back up in the middle of the line. While this may not sound ground-breaking, it’s very useful if you need to administer a single server from multiple clients.

Second is the amazing variety of ports of the VNC viewer (see “VNC Platforms,” below). Plus, there’s a Java version, so VNC should be able to be run through a web browser on any platform for which a standards-compliant Java Virtual Machine (JVM) exists. The side benefit from this is standardization: you can install VNC across your network, train your staff on VNC and then let them administer Unix, Windows NT and MacOS servers using the same program, from almost any computer.

Third, it’s a lot easier to configure and run than free X clients generally are. Anyone with the brains to breathe on a regular basis and use their turn signals correctly should be able to get the VNC server up and running within five to 15 minutes, and the VNC viewer in less than that. Of course, if you want your Freenix sessions to run nicely, you’ll have to do a bit of tweaking to your preferences file (like changing the default window manager from the antiquated twm to something more useful). The more you know about X windows, the better; but it’s still usable even if you don’t.

VNC also performs remarkably well, considering that it’s a free, non-commercial product. On Unix machines, I found its performance to be as good as traditional remote X windows sessions. Performance on Windows machines was, in my extremely unscientific tests (I didn’t get around to testing the Windows version until after my third or fourth Heineken), about the same as I had experienced with PCanywhere. I didn’t see any “dancing Bill Gates” while I used PCanywhere, but I’m attributing this to the Heinekens. 

How Do I Install and Run It?

Installing VNC on your Freenix is as simple as following John Dvorak’s delicious Pickled Walnuts recipe (remember: soak in brine for at least six days!). The VNC server for Unix is a Perl script that utilizes XFree86. An XFree86 installation is usually pretty painless on most Freenix flavors; if you don’t have Perl installed, double-check yourself to see if you have a working brain installed in your skull (see “Windows NT”).

On FreeBSD, VNC is part of the ports collection, so installation is fast and easy. Simply cd /usr/ports/net/vnc, type make install, and you’re on your way. One of the beauties of the ports install is that it will also install necessary X components if you need them.

For Linux, .tar or .tgz archives of the binaries are available (note: newer versions require glibc, which you’re probably already using anyway). Simply use gunzip and/or tar -xvf to unpack them, and you’re ready to go; if you aren’t planning on keeping VNC to yourself, you’ll want to move the new directory of VNC files under /usr/X11R6/bin. If you’re fond of using (ugh) RedHat Package Manager or Debian packages for one-click installation, those are available as well.

The program’s important binaries (vncserver, vncviewer and vncpasswd) are normally installed in /usr/X11R6/bin. To start a server, simply issue the vncserver command (assuming that the directory it’s installed in is in your $PATH), or type the full path to the command. The first time you run it, you’ll be prompted to choose a password, which is stored in a newly created $HOME/.vnc/ directory along with your session logs and preference files. Later, you can use the vncpasswd program to change your password if necessary. Your X/VNC session preferences are stored in the xstartup script located in this directory as well; this file can usually be changed to be a link to your normal X startup script.
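For example, a minimal xstartup might look something like this (the window manager and geometry below are purely illustrative placeholders – substitute whatever your normal X session uses):

```sh
#!/bin/sh
# $HOME/.vnc/xstartup -- run each time a VNC desktop starts.
# Everything here is illustrative; point it at your own setup.
xrdb "$HOME/.Xresources"           # load your X resources, if you have any
xsetroot -solid grey               # a flat background compresses well over VNC
xterm -geometry 80x24+10+10 -ls &  # a login shell to work from
exec afterstep                     # or enlightenment, startkde, etc. -- not twm
```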

With Windows, although external programs like X and Perl aren’t required, things are a bit more complicated. Downloading, unzipping and installing the client and server files for Windows 9x is pretty simple. Unfortunately, under WinNT (NT 4.0 Service Pack 3 or higher required), there are certain important functions (like the ever-important remote CTRL-ALT-DELETE and remote unlocking) that can only be accomplished if you’re running VNC as a service. See the VNC FAQ or a deck of Tarot cards for more information.

On MacOS computers, to run the server, drag the associated Extension and Control Panel into your active System Folder, then restart and double-click the server application (place an alias of the server app in your System Folder/Startup Items folder to run automatically). 

To start a VNC client session, run the vncviewer program (or whatever it’s called on your OS) and choose a hostname and session number. The first session you start on hostname is designated 0, the next one is designated 1, et cetera. So, to connect to the second VNC server started by a user on that host, you’d connect to hostname:1.

So, What’s The Catch?

While VNC’s performance is very good, any remote-GUI solution is not going to provide performance equal to a console connection. Of course, the faster and cleaner your network connection is, the better your results will be. 

However, an equal consideration is the video card you have on the client machine – 2 MB or 4 MB video cards with older chipsets probably aren’t going to get you far. Souped-up video cards will significantly improve performance (hint: use this as an excuse to buy a 16 MB Riva TNT2 card for your Linux machine and then play Quake on it). Changing your X, Win or Mac desktop pattern to a single uniform color or shifting down your default color depth (e.g., 24-bit color to 16 or 8 bits) will also help significantly.

Over a 10 Base-T LAN connection, performance for both the client and server on the Freenix versions was good for common tasks (virtually indistinguishable from console access with terminal sessions and X-based admin tools, plus a few simple games and amusements). Over a 56 kbps modem connection, it worked fine for terminal sessions and very light graphical stuff – but even xsnow had some frameskips, so you’re probably still better off with traditional terminal access under most circumstances if you have to use a modem-speed connection.

While the performance of the VNC viewer is excellent for nearly any platform, the server performance is inherently slower with Windows or Mac (since source code for these platforms isn’t open like XFree86 is). 

With Windows 9x/NT, performance is fine but not as fast as the Unix versions, so you may be better off running VNC on Windows machines to control your Unix machines than vice versa. There are some things you can do to increase the performance of the WinVNC server; see the VNC documentation for more details. The MacOS version of the server was abysmally slow, although the viewer worked just fine. The MacOS server also proved to be incompatible with MacOS 9, which may be responsible for the slowness. A superior version of the Mac viewer (which makes use of enhancements for MacOS 8.5 and above) is also available.

If you’re concerned about security (and you should be), adding any services to a machine is an issue. If you’d like, you can use the VNC Unix server with TCP Wrappers to restrict VNC port access to specified hosts.
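As a sketch of the TCP Wrappers approach – assuming your VNC server was built with libwrap or is launched through tcpd, and that the daemon name ("Xvnc" here) matches your setup – the pair of control files might read:

```
# /etc/hosts.allow -- permit viewers only from one trusted subnet
Xvnc : 192.168.1.

# /etc/hosts.deny -- refuse everyone else
Xvnc : ALL
```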

VNC uses simple password challenge-response authentication (on all platforms, only the first eight characters of the password are significant, matching the Unix version). However, you can run VNC over SSH, which provides some compression as well as encrypting all traffic so it can’t be “sniffed.” It should be noted that “sniffing” even an unencrypted VNC session would be significantly more difficult than sniffing something like a telnet session, since 1.) the data is compressed (albeit using an open-source method), and 2.) there’s a whole lot of graphical garbage in the data stream. 
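A sketch of the SSH tunneling setup, with a made-up hostname and display number (display N listens on TCP port 5900+N, so the viewer is pointed at the local end of the forwarded port; the script only prints the commands rather than running them):

```shell
#!/bin/sh
# Hypothetical host and display; VNC display N listens on TCP port 5900+N.
host="vnchost.example.com"
display=1
port=$((5900 + display))
# First the tunnel, then the viewer aimed at the local end of it:
echo "ssh -L ${port}:localhost:${port} ${host}"
echo "vncviewer localhost:${display}"
```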

By default, the VNC server executable can be run by non-privileged users. VNC opens the first session on port 5900 (port 5800 for HTTP connections) and increments additional sessions on the next available port up. If you’re worried about unsanctioned users running it, you may wish to make vncserver root-owned with 700 permissions, or prohibit non-privileged users from opening up new sockets above 1024. 
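The numbering is a simple offset; a quick sketch (the display number here is chosen arbitrarily):

```shell
# VNC display N listens on TCP port 5900+N (and 5800+N for the Java/HTTP viewer).
display=2
vnc_port=$((5900 + display))
http_port=$((5800 + display))
echo "display $display -> VNC port $vnc_port, HTTP port $http_port"
```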

VNC Platforms

Platforms aside from those discussed above which have servers include AIX, BSD/OS, HP/UX, LinuxPPC (Linux for Power Macintoshes), Linux/SPARC, NetBSD, NetWinder (Linux for ARM), OSF (DEC Alpha), SGI Irix, SCO OpenServer, and SunOS (by the way: why are you still running SunOS?).

VNC Viewers are available (with varying degrees of updatedness) for the abovementioned OSes, as well as Acorn RISC OS, Amiga, BeOS, CygWin32, MS-DOS (by the way: why are you still using MS-DOS?), Geos/Nokia 9000, OpenStep, OS/2, PalmPilot, VMS, Windows CE and Windows NT/Alpha. There’s also a version that integrates nicely with KDE on Linux for x86. For more information on both server and viewer platforms, see

Freenixes, Ease of Use and Common Administrative Tasks

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, December 1999

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche following but an influential one among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Hi there, and welcome back to the only column in Boardwatch read even less frequently than the lame Lucent ads. This month, we’ll be taking a look at common tasks for many system administrators, and whether doing them with a Free Unix (Linux or any of the various free BSDs) will make you pull out your hair and insert your foot in the disk drive instead of a system disk.

Freenixes and Ease of Use

Recently, we’ve looked at why you might want to switch to a Freenix instead of a commercial OS like Solaris (“We’re the dot in $6,000.00”) or Windows NT Server (“The best Solitaire $2500 can buy!”). But the fact remains that an operating system isn’t really “free” if you need to include the costs of divorce and therapy in it. So, can a non-Unix-guru easily accomplish the tasks with a Freenix that he or she is accustomed to doing on a commercial OS?

There are two main ease-of-use problems you’ll face with a Freenix. First is that there’s no such thing as a Unix Server for Dummies. All Unixes are – by design – operating systems by people who know what they’re doing, for people who know what they’re doing. (By way of comparison, Windows 98 and MacOS are operating systems by people who usually know what they’re doing, for people who don’t even want to know what they’re doing. Windows NT is an operating system by people who like stock options, for people who like certification classes.) You’re probably never going to be in full command of a Freenix system until you’ve taken the time to read through a stack of “O’Reilly” books and really learn your OS. This is true of any server OS; it’s just a lot harder to “fake it” with Unix. 

Second is an element that sounds obvious but shouldn’t be discounted: there’s no tech support number to call. Unless you’re willing to pay LinuxCare or one of the other Linux or OpenBSD commercial support companies, you’re stuck with books, online manual pages and documentation, and the support of your fellow Freenix users. Books and documentation cover a lot of your questions, but you’ll still run into plenty of problems where the only real solution is to ask someone who has had the same problem before. Ninety-nine percent of the problems new users will encounter can be handled by the tried-and-true RTFM (“Read The F***ing Manual”) method; but you will inevitably encounter a technical dead end where your best bet is to pray that someone responds to your newsgroup or mailing list post quickly. 

A few caveats need to be given for these ratings. I’m assuming that you’re using the most common free tools here (Apache, Sendmail, etc.); using third-party applications may be significantly different (and probably easier). Also, I’m assuming that you’re willing to get your hands dirty a little with command-line administration and aren’t relying entirely on point-and-click options. So, with that being said, let’s take a look at a few common administrative tasks, the flexibility of configuration options that Freenixes provide, and their ease of use:

Networking Setup

Flexibility: Immense.

Ease for Enough-To-Get-By Administration: Very easy.

Ease for Advanced Administration: Ranges from breezy to baffling.

The greatest virtue of all Freenixes is that, for everything that you want to do, somebody else has already wanted to do that same thing. And, usually, they’ve been a Computer Science student or professor, a communist, or someone else dumb enough to write a program to do it and give it away for free. Therefore, the vast majority of common sysadmin tasks on a Freenix already have a tool to set things up and save you work.

Most Freenixes provide a networking setup tool during their installer process that allows you to set up basic (Ethernet or PPP) network connectivity with only a few pieces of information. Even for some more advanced tasks, Linux tools (like linuxconf in its command-line or GUI versions) or FreeBSD’s /stand/sysinstall program give you simple options for configuring normally arcane tasks. You can generally turn your machine into an ersatz router by running routed or gated, share NFS drives or enable ISDN, ATM or other interfaces with a couple of minutes’ work. These easy-admin programs are (as all Freenix GUI/semi-GUI tools are) just front ends for the text configuration files hidden elsewhere on the server (usually somewhere under /etc) that do the actual work. If you’re willing to take a shot at editing the actual text configuration files, your options increase. 
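For instance, here’s a sketch of the kind of plain-text file those setup menus are editing behind your back. The variable names are real FreeBSD /etc/rc.conf settings; the interface name, hostname and addresses are invented for illustration:

```
# /etc/rc.conf fragment (FreeBSD) -- what /stand/sysinstall edits for you.
# Values below are examples only.
ifconfig_ed0="inet netmask"
router_enable="YES"            # start routed at boot
nfs_server_enable="YES"        # export NFS filesystems
```

Edit these lines by hand with a text editor and reboot (or restart the relevant daemons), and you get exactly the same result as clicking through the menus.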

However, be warned that certain advanced or uncommon tasks are going to require not only hunting down the requisite files but also knowing about how networking actually works on an interface and packet level. Nonetheless, the majority of us who don’t have a “seven-layer OSI model” tattoo can still get the job done using the available tools.

User Administration

Flexibility: So-so.

Ease for Enough-To-Get-By Administration: Very easy.

Ease for Advanced Administration: Nothing you can’t handle.

It should be noted that the Unix system security model defines the amount of user account configuration you can do. Unlike Windows NT, you aren’t able to specify “semi-privileged” accounts; practically speaking, you’re root or you’re nobody. (A little joke there. Very little.) However, if you’re willing to get wise in the ways of the Unix permission structure (each file or directory has settings for the permissions allowed to the owner/creator of the file, other users in the owner’s group, and all other users on the system), you can replicate much of this functionality through selectively adding users to specific groups.
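A minimal sketch of that owner/group/other permission structure in action – the directory and file here are scratch paths created only for the demonstration:

```shell
# Create a scratch file and restrict it: owner gets read/write,
# the group gets read-only, and everyone else gets nothing.
tmp=$(mktemp -d)
touch "$tmp/report.txt"
chmod 640 "$tmp/report.txt"
stat -c '%a' "$tmp/report.txt"   # prints the octal mode: 640
```

Adding a user to the file’s group (by editing /etc/group or using an admin tool) then grants that user read access to the file without handing anyone root.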

For ease-of-administration, Linux leads the way here, providing GUI tools for nearly all window managers that allow you to create and delete users, set disk space quotas, and define user meta information and shell info. (These tools are also available for the *BSDs, but they are native to Linux.) Overall, you’ll find simple user administration tasks (as mentioned above) to be quite simple and easily done through either a GUI tool or the command line. Advanced tasks (like putting the user in a chroot-ed environment, or limiting their access to certain services) are less simple, but still pretty easily accomplished.

FTP Server

Flexibility: How flexible were you expecting FTP to be?

Ease for Enough-To-Get-By Administration: Super easy.

Ease for Advanced Administration: Nothing a few Unix man pages can’t fix.

If what you’re looking for is to allow users to FTP their files to and from their accounts, this is a no-brainer: it’s already set up by default in the *BSDs and most Linux distributions. Likewise, allowing anonymous FTP (even for specific users or directories) is a very simple task – albeit one handled through a command-line interface with a text editor. 

Even better, there are plenty of free FTP Daemons (servers) which give even more advanced features than the default FTPD provided with most Freenixes. FTP isn’t exactly a terribly option-heavy service, and nearly all of your needs can be easily dealt with. Note, however, that advanced issues (like denying FTP to specific users or hosts) aren’t immediately obvious, and may take a bit more work with your /etc/hosts.allow or ftpd config file.
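As a rough sketch of those two mechanisms (the account names, daemon name and hostnames are invented, and exact file locations vary by system):

```
# /etc/ftpusers -- accounts listed here are refused FTP logins
root
baduser

# /etc/hosts.allow -- TCP wrappers rules for an inetd-launched ftpd,
# using the extended allow/deny option syntax
in.ftpd : : allow
in.ftpd : ALL : deny
```

Either file is just a text edit away; the trick, as usual, is knowing which file your particular ftpd actually reads.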

Web Server

Flexibility: Like a gymnastics instructor.

Ease for Enough-To-Get-By Administration: Easy.

Ease for Advanced Administration: No worse than doing your taxes.

Apache (so named because it was “a patchy” upgrade to the original free NCSA webserver) is by far the most popular webserver for Freenixes, and with good reason. It’s stable, it’s almost ridiculously extensible, and it has excellent performance. 

While fairly rudimentary GUI tools exist (again, native to Linux) for Apache configuration, the command line is the way to go. The good news is that the Apache team has gone to great lengths to make this as painless as possible. There are plenty of great books out there on not only configuring Apache, but also on tweaking it for performance as well. With the newest versions of Apache (1.3.4 and greater), all configuration options have by default moved to a single file, the httpd.conf file, located in /etc/httpd/, /usr/local/apache/etc/, or some other directory (depending on your OS, your version of Apache, the phase of the moon and a random 32-bit number). 

The default httpd.conf file is extremely well documented, and includes either explanations or examples (or both) for every configuration directive in the file. The great part is that most options are relatively self-explanatory, and by editing this one file you can easily set up everything from CGI execution and file icons to virtual hosts.  
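For instance, a name-based virtual host in an Apache 1.3 httpd.conf might look like this (the hostname and paths are invented for illustration, not defaults):

```
# Name-based virtual hosting with Apache 1.3 directives
NameVirtualHost *

<VirtualHost *>
    ServerName
    DocumentRoot /usr/local/apache/htdocs/example
    ScriptAlias /cgi-bin/ /usr/local/apache/cgi-bin/
</VirtualHost>
```

Add one such block per hosted site, restart httpd, and you’re a virtual hosting provider.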

Performance tuning is where things can sometimes get tricky. Most of the available GUI/semi-GUI tools (as mentioned above) will do the heavy lifting for you – including kernel modifications and other items. However, getting the most out of your webserver may require you to recompile Apache with or without some of its default modules. Nonetheless, Apache is nothing if not exhaustively documented in books and at its website, and things are at worst frustrating rather than impossible.

Mail Server

Flexibility: Ridiculously flexible.

Ease for Enough-To-Get-By Administration: Fairly easy.

Ease for Advanced Administration: You’d better have some “Advil” handy.

Sendmail is the most powerful and configurable mail server out there (especially for free). The default configuration installed with nearly all Freenixes is all that 99 percent of Sendmail users (like you and me) will ever need. Thank God, because we’d be shooting ourselves left and right if we ever needed to seriously configure the damn thing.

Simple mailserver elements like POP3 accounts are built in by default. E-mail aliases and redirection are easily accomplished with an absolute minimum of configuration (through the /etc/aliases and /etc/mail/virtusertable.db files). In recent versions of Sendmail, anti-spam relaying measures are included by default, and these can easily be relaxed when needed by adding the domains you handle mail for to the /etc/mail/relay-domains file. 
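As a sketch (all the names and addresses are invented), those two files look like this; run newaliases after editing /etc/aliases, and makemap to rebuild virtusertable.db:

```
# /etc/aliases -- local aliases and a mini distribution list
postmaster:   root
webmaster:    jane
sales:        jane, bob

# /etc/mail/virtusertable -- virtual-domain redirection (source form;
# rebuild the .db with: makemap hash virtusertable < virtusertable)
[email protected]      jane
@example.net          bob
```

That handful of lines covers most of what a small ISP ever asks of Sendmail.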

With that being said, God help you if you ever need to do some serious digging in the Sendmail configuration file (in /etc). Sendmail’s primary configuration file is written in something that looks like a cross between C code and Swedish, or maybe both. I was looking through that file and somewhere around line 4000 I actually found a bunch of John Dvorak’s delicious recipes. Sendmail is probably the archetypal example of Unix’s configurability and inscrutability at its best and worst.

For other common mail tasks, there are plenty of common free tools out there. The free pine 4.10 package offers not only the easiest Unix mail reader out there, but an excellent IMAP server (and text editor, with pico) as well. The free majordomo 1.94.4 package provides excellent mailing list options – although at a performance price, since it’s written in Perl and tends to eat up a lot of RAM when it’s running.

The Moral of the Story

Freenixes can save you thousands of dollars if you’re willing to pay a few hundred dollars for technical books and learn how to use them (the Freenixes, not the books). For common ISP sysadmin tasks, 90 to 95 percent of your work can be easily done on an OS with friendly tools and frequent updates. If you’re brave enough to handle any Unix, you’re brave enough to handle a Freenix. However, if you’re a point-and-click addict, or need something with an unhelpful tech support phone line, a Freenix won’t be for you.

Freenix Flavors (Three Demons and a Penguin)

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, November 1999


Hi there, and welcome back to the industry’s 216th most influential Unix column. Over the next few months, we’ll be taking an in-depth look at each of the various Freenixes and why your ISP may want to consider them. But right now, it’s time to get familiar with the four big players. How can you tell the Freenixes apart, and which of them is right for your ISP?

BSD Unix, having grown out of work on the original AT&T Unix code at UC-Berkeley, has been around for about 20 years. Only in the early-to-mid 1990s (after a series of nasty lawsuits) was the BSD project’s code freed up for use in free Unixes. The BSD development model centered around a “core group” that handled work on the code, and the free BSD Unix movement quickly splintered into three main groups, each with a different focus.

The BSD groups tended to disdain the pseudo-Leninist rantings of Richard Stallman’s GNU/TAISR (GNU’s Not Unix/This Acronym Isn’t Self-Referential) camp, and used the “BSD” software license, which held sort of a middle ground between commercial software and free software. The BSDs attracted a following of (relatively) old-school sysadmins and hackers – the sort of people who generally disdain pine and elm as “too user friendly.” Partially as a result, development for these OSes tended towards optimizing them for server use, and neglecting support for consumer-oriented devices (like IDE drives, fancy video cards, etc.).

Meanwhile, a Finnish computer science student, Linus Torvalds … blah blah blah. I’ll skip this part, since if you haven’t heard the story of Linux already, you probably should put down Boardwatch and go pick up a copy of the Yahoo! Internet Life special edition on how to turn off your computer safely. Anyway, Linux’s development model encouraged code warriors and wackos alike to develop for the OS under the GNU Public License (GPL), and attracted the loving attention of the GNU project itself. Before long, Linux had emerged with a big stack of available software, and a large corps of devoted developers. Its more decentralized model not only encouraged people to write drivers for consumer (rather than server) oriented devices, but also bred a following of experienced admins as well as young geeks-in-training. Therein lay the difference. 

These young Linux zealots were, by and large, the force that popularized Linux. They had a fanatical love for their OS that was unmatched except among Macintosh users (who, during the mid ‘90s, had largely retreated to living in caves and praying for someone to port a game to their OS besides Solitaire). Linux became cool. Zealous advocates led to press coverage, which led to more developers, which led to better code and greater device support, which led to more new and more fanatical users … leading to the Linux love-fest currently underway. 

So, where were the BSDs? Generally, they went quietly about their business, still running their servers and occasionally poking their heads in on Linux-advocacy-oriented (but useful to all *nix users) news sites, offering “(Score 1: Insightful)” comments and not rocking the boat. FreeBSD reacted to the surge in Linux development with remarkable grace, building in a Linux binary compatibility module and sidestepping a potential war over developers. But lately, some BSD users have started agitating for more attention to the BSDs … and BSD partisans have become bolder about advocating their *nix of choice.

For the three or four of you who are still reading, let’s take a dive into each of the various Freenixes:

Linux: The World’s Most Popular Unix

Focus: Unix everywhere, for everyone, as both a server OS and a desktop OS.

Platform/CPUs: You name it. I’m surprised they don’t have an Atari 2600 port yet. 

What’s Good for ISPs: If you’re relatively new to Unix, if you’re after ease of use, or if you’re looking for an Internet server platform that can run on almost any hardware and offers a wide range of cool applications, Linux is your choice.

Of all *nixes, Linux is the most oriented towards ease of use and administration. Linux has the widest user base, and the most active development community – meaning that a lot of new device drivers and third-party applications will be out for Linux first (and maybe only for Linux). Linux’s heavy consumer usage has also led to its being the *nix for cool new free graphical shells (like KDE or GNOME/Enlightenment) and administration utilities. Fruits of this wide developer base (both commercial and free) include excellent solutions for dialup authentication, webserving (including third-party ASP support, Real and QuickTime streaming, Cold Fusion, etc.), mail servers, commercial database packages, firewalls, AppleTalk or SMB networking, security tools and others. Linux is gradually joining Solaris as “the” Unix for commercial developers.

Linux has the most support options, as well. In addition to the usual free online user community support, many Linux distributions offer installation and technical support (for example, if you pay $90 for convenient Red Hat install CDs, they’ll give you 30 days of installation support and 90 days of technical support). There is an abundance of books about installing, running and administering Linux. Of all Freenixes, Linux is also the most “ready for prime time” in terms of corporate deployment: a number of companies from Red Hat to LinuxCare offer enterprise-ready tech support packages. Plus, with its press coverage, and vendors from Intel to IBM standing behind Linux, it’s closest to being the Freenix that is easiest to explain to your “cloobie” boss.

But Linux isn’t just for new users. Linux is second only to (yech) Windows NT in terms of tuning for high-end multiprocessor systems. It’s a safe bet that there will be a solid Linux for Intel’s IA-64 architecture before 64-bit NT is even in public beta. And Linux’s wide developer base makes it likely to catch up rapidly in those performance areas where it’s currently behind.

What’s Bad for ISPs: Linux may be spreading itself thin.

The more devices you try to support with an OS, the fatter (and more bug-prone) your code becomes and the more your stability is likely to suffer. Of course, the open-source, open-development nature of Linux is designed to fix these bugs quickly; but it’s still an issue. It’s relatively easy with Linux to pare down your kernel (the “core” OS software that interfaces between the hardware and applications) to support only the devices and services you need. But a default installation is likely to contain more than you need – and the inexperienced users Linux is most popular with are the least likely to be able to properly configure their OS. And the time spent by developers on writing a driver so that Linux can use 5 1/4” floppy drives is time that theoretically might have gone towards tuning it better for more common uses.

Also, the wide variety of Linux distributions can sometimes make software installation confusing. All Linux distributions are based on one of the “Linus-approved” stable kernels; but the specific kernel (and version of the code libraries to support applications) they include sometimes vary widely. Some distributions (most notably Red Hat) are more anxious to move to upgraded (and potentially less stable) versions of these libraries than others. Some Linux software is beginning to appear which is dependent upon (or at least tuned to) a specific distribution, fragmenting the Linux community.

The much-vaunted user-friendliness of Linux is also a relative term. Compared to MacOS or even Windows, Linux still has miles to go in terms of developing a fairly “idiot-proof” interface. Of course, this is a fault of all Unixes – any OS essentially written by programmers, for programmers is going to have a big gap between its developers’ idea of “user-friendly” and that of its actual users (whom programmers refer to as “morons”).

Lastly, Linux simply lacks the time that the BSDs have had to improve the maturity of its code base. There are still plenty of things missing in Linux (like the much-lamented lack of a true multi-threaded TCP/IP stack) that the BSDs implemented long ago. As a result, if your main interest is network performance on a single-processor machine (and you aren’t dependent on any of the Linux-specific software), Linux is simply not going to be your first choice.

FreeBSD: BSD Performance for x86

Focus: The ultimate Internet server for x86 hardware – with Linux emulation for consumer/hobbyist users.

Platform/CPUs: The Intel x86 architecture, first and foremost. A port for Alpha is also available. Theoretically, Darwin (the open-source part of Mac OS X Server) is largely tied to FreeBSD for its code base, and might be considered to be a PowerPC port of the OS, running on top of the Mach Microkernel. Or maybe I’m just nitpicking.

What’s Good for ISPs: FreeBSD is the server performance-leading BSD Unix for the x86 architecture. (Note for BSDI users: BSD/OS is well-tuned for this purpose, but it’s expensive, and I’m a cheap person, so we won’t discuss BSD/OS here.)

If what you really care about is fast networking performance running Apache, Sendmail or other common apps on cheap x86 hardware, FreeBSD is your OS. End of story. The *BSD model (with a small team of experienced developers rather than a horde of free-for-all developers like Linux) tends to generate more bug-free code right out of the gate (although I wouldn’t necessarily run anything more mission-critical than Xtetris on FreeBSD-current). 

FreeBSD’s TCP/IP stack is the reference code base on which many other network stacks have been based. FreeBSD has a fairly impressive set of users, including Yahoo, Xoom, some parts of Hotmail (“Hey, kids! Can you say ‘failed NT conversion’? Good.”), the IMDB and others. On top of all this, FreeBSD includes a very good Linux binary compatibility module, and its developers have been very good about supporting “Linux-first” development with it instead of igniting a Freenix developer-choice war. FreeBSD also includes compatibility modules for SCO, NetBSD, and BSD/OS.

FreeBSD’s ports collection is a fantastic way of finding new software and upgrading old versions. Also, if you’re willing to get your hands dirty (read: no GUI) and make the source updates for FreeBSD, their upgrade process is very slick and relatively painless.

What’s Bad for ISPs: All of the BSDs share some common problems. First is that they’ve fallen out of commercial favor, and they lack the third-party application support of “hip” Unixes like Linux or Solaris. The FreeBSD Linux compatibility layer is great, but isn’t a “first-choice” solution (e.g., if you depend on mission-critical software for which there is a Linux port but not one for FreeBSD, you may think twice). Add to this the problem that the *BSD development model leads to higher-quality code but slower development. 

None of the BSD Unixes is an optimal choice (at least compared with Linux) for new Unix users; they’re best reserved for people who are either willing to take on a steep learning curve, or have learned Unix already. Also, finding good printed documentation on *BSD systems is like finding a network engineer with a hot blonde girlfriend.

FreeBSD (like most other *BSDs) currently suffers from an identity crisis: is it the work of part-time developers or an OS to compete with commercial *nixes? FreeBSD’s developers occasionally seem to be caught between saying “it’s enterprise-ready software you can depend on” and saying “look, we’ll fix that when we have time, what do you expect for free?” It’s excellent software, but sometimes little things (like full POSIX threads support) may get broken and not be fixed for weeks or months. FreeBSD (like the other BSDs) also isn’t as tuned for multiprocessor machines and high-end hardware as Linux is. Lastly, if you’re the corporate type looking for commercial support, your options with any free BSD are far more limited than with Linux.

NetBSD: BSD for the Masses

Focus: Bringing a solid BSD to as many platforms as possible

Platform/CPUs: x86, Alpha, Motorola m68k, PowerPC, SPARC, MIPS, ns32k, arm32, VAX (with varying degrees of stability and support)

What’s Good for ISPs: NetBSD shares the attractiveness of Linux in that you can probably pick up any old (or new) computer and get it to run. NetBSD has the advantage (and disadvantage) of sharing the other BSDs’ code maturity and development philosophy, but with the ability to run well on a wide range of platforms.

If you’re already familiar with BSD Unix and you want to use it on non-x86 hardware (or you want to standardize on one OS across multiple platforms), NetBSD is your first choice (and, depending on your target platform, maybe your only choice). If you are looking for *BSD’s proven performance with networking, and you want to use it on any platform, NetBSD is the way to go. 

What’s Bad for ISPs: NetBSD’s strength is also its weakness. It sits in sort of a middle position among BSDs, being widely available but not optimized for any one task. In a way, it’s sort of a “jack of all trades, master of none.” It’s unclear, for example, whether you’d get better network performance on a PowerPC machine with NetBSD or with LinuxPPC, which has spent a great deal of time optimizing its OS for that CPU architecture. Therefore, it likely won’t be your first choice of OS for platforms which other Freenixes tune themselves to.

Also, the various NetBSD platforms are each supported to a greater or lesser degree (depending on the activeness of their development team), and you may be left at your development team’s mercy while waiting for a critical upgrade. NetBSD shares the common faults of the other BSDs as well, and its mission has left it as sort of the “forgotten” BSD among the others which are more optimized for a given task.

OpenBSD: The Bugtraq Junkies’ Choice

Focus: Unix for security junkies.

Platform/CPUs: x86, Alpha, Motorola m68k, MIPS, some PowerPC designs, SPARC (plus some other platforms which aren’t “officially” supported but for which a port exists)

What’s Good for ISPs: OpenBSD is about security – it considers security and software quality to be one and the same. Plus, the project is based in Canada, and can therefore bypass some of the US’s bizarre federal cryptography/security laws.

In the OpenBSD team’s view, here’s how it works. Buggy software can lead to security vulnerabilities – buffer overruns, sloppy system calls, poor management of root (administrator) privileges and so on. The OpenBSD developers started an audit (two years and still going) of the source code and found thousands of bugs. Some were security-related, or might have been exploited in combination with other bugs; but most were not. Their end goal is not only a more secure OS, but also one that’s “more reliable and trustworthy.” Of course, since the *BSD codebase is largely similar, other BSDs are able to benefit from the security fixes made by OpenBSD.

Another important aspect of security is the “secure by default” configuration as shipped on the OpenBSD CD-ROM releases and weekly snapshots. OpenBSD’s default installation doesn’t enable potentially risky protocols without the consent of the administrator. This is very important for experienced admins on a busy schedule who don’t want to play detective with netstat and ps -auxw to secure a new server; on the other hand, if you don’t know how to enable fingerd and you want it, then you’re pretty much stuck.

OpenBSD’s integrated cryptography can help an ISP that’s looking to differentiate itself through its security. First, the built-in implementation of the emerging IP Security (IPsec) standards allows you to offer virtual private networks (VPNs) to corporate clients. OpenBSD’s IPsec interoperates with implementations from major vendors. Second, ISPs can securely access remote POPs, even for root logins. Third, OpenBSD supports (among other cryptographic tools) SSL (Secure Sockets Layer) for secure https Web servers almost “out of the box.” To enable it, sysadmins just need to download one shared library file to get around the RSA patent restrictions.

What’s Bad for ISPs: While OpenBSD can incorporate the code improvements made by the other BSDs, it has a smaller full-time development team than any of the other Freenixes (the average McDonald’s has more people working on Chicken McNuggets than OpenBSD has on full-time development), and thus upgrades may come more slowly. Security comes at the expense of rapid development, and new hardware or software may go unsupported for months (if at all) after Linux or FreeBSD has picked it up. 

OpenBSD of course shares the common faults of the *BSD family. Also, for experienced sysadmins who are confident that they can handle their own OS security (or simply don’t care), OpenBSD lacks both the x86 performance optimization of FreeBSD and some of the platform availability of NetBSD or the other benefits of Linux. Simply put, if you care more about performance or third-party application support than security, OpenBSD is probably not for you.


So … where does this leave an ISP looking for a free Unix? Probably, it leaves them with a headache, since it’s becoming more and more difficult to find an unbiased and rational comparison of the OSes involved. To sum up: Linux is relatively immature, but it has the most active developer community, it runs on almost any hardware, it’s the most user-friendly Unix for novices, and it has the best third-party application support. FreeBSD concentrates on optimizing BSD Unix for the x86 platform, and it shows in its networking performance. NetBSD concentrates on bringing stable BSD to a wide variety of platforms. If your primary concern is security, OpenBSD is the Freenix for you.

What do you think? Send questions, comments and lavish praise to [email protected]. Hate mail should be addressed to John Dvorak.

Freenixes and Unix History

By Jeffrey Carl

Boardwatch Magazine

This was my first “UNIX Flavor Country” column for Boardwatch Magazine. My first article had gone over so well – in that it filled up two full pages, did not provoke any angry letters and cost the magazine $100 – that my friend Greg Tally offered me a full-time columnist gig. These, as they say, were the salad days. People even thought was a good idea back then, seriously.

If your ISP doesn’t use Unix, it should. Many of the various flavors of Unix provide excellent solutions for everything from webhosting to dialup authentication to networking to powerful workstations. Even better, some of the Unixes (the so-called “Freenixes”) are free to use, are available on cheap hardware, and come with powerful free software suites (like Apache, Sendmail and others) that match or beat the performance of commercial software costing thousands of dollars. Plus – let’s face it – there’s an undeniable “sysadmin badass” appeal that comes from telling people you’re a Unix wizard that you simply don’t get by telling people you’re “a real guru with AppleTalk.”

However, even among Freenixes, there are many to choose from. And, if you’re a new user or even an experienced administrator looking to taste other flavors, it’s difficult to get good advice on which kinds to choose. Most people who already use Unix are very devoted to their flavor of choice, and it’s tough to get unbiased advice from someone with a lifetime subscription to “BSD Jihad” or a tattoo of a heart with a little “Kernel 2.0.36” inside it. 

So, for the next few months, we’ll look at what the strengths and weaknesses of each Freenix flavor are, and which may be right for you. To understand how and why each Freenix does what it does, it’s important to have a knowledge of where they (and Unix itself) come from – and that’s the focus of this month’s column.

Normally, histories of Unix are dull enough that they are used only in scientific experiments to anesthetize lab rats at a distance of 50 yards. In the interest of bringing everyone up to speed, the following is a completely irresponsible, glibly assertive and overly sarcastic yet theoretically amusing brief history of Unix up to the past few years.


Many millions of years ago, after the Earth cooled (roughly 1969), there was Unix. And it was good. Ken Thompson and Dennis Ritchie of Bell Labs found the legendary “little-used PDP-7 in a corner” and set to work on creating a new operating system for internal use at AT&T. Originally called “Unics” (UNiplexed Information and Computing System) as a pun on the earlier OS project “Multics” (Ha ha! Programmer humor!), its development was confined for several years to a small group of ultra-geek programmers within Bell Labs. 

A fundamental step for Unix came in 1973 with Version 4, which had been rewritten from platform-dependent assembler code into the new C language (also a creation of Bell Labs). Writing the core of the OS in a high-level programming language meant that Unix could now be ported to almost any platform which included a C compiler. Since that time, Unix and C have gone hand-in-hand like deer ticks and Lyme disease.


Version 6 of Unix in 1975 was released to the outside world in “unsupported” (“Don’t blame us if it hoses your CPU and makes it cough blood”) format. It had two different licenses for the OS/source package: a university license for $100 and an “everybody else” license for $20,000. As a result, a generation of Computer Science students grew up on hacking Unix and improving it (most notably at MIT, Carnegie-Mellon and the University of California-Berkeley). While the “other” price tag sounds hefty, it was a small price for large companies to be able to take an existing OS and build their own around it. This eventually attracted numerous vendors to base an OS on Unix – including Sun, SGI, DEC, HP, IBM, NeXT and even Apple (anyone out there remember A/UX?). Unix grew rapidly in the mainframe world of the ‘70s and ‘80s because it was relatively cheap, and small armies of new computer science graduates entered the workforce already familiar with it.

The early development of Unix defined the characteristics it carries to this day. Every OS is a compromise of sorts between two fundamental sets of questions: maximum configurability versus ease of use, and stability versus expandability. Unix was written by programmers and for programmers, not end users – and it stressed the power of configurability over user-friendliness (although great steps in that direction have been taken lately). 

As time went on, more and more features were added to Unix by various offshoots of Bell and by university computer science students. UC Berkeley’s Computing Systems Research Group (CSRG) acquired a license for Version 6, and set a goal to write a completely AT&T code-free version of the system. The rise of the ARPAnet/NSFnet network drove Berkeley to develop innovations such as a full integration of TCP/IP into Unix and the birth of domain names (DNS and BIND). As such, the development of what would become the Internet was intimately tied to Unix – and Unix would remain the Internet’s OS of choice. By the mid-‘80s, Unix was well known within the computer science community as the operating system that could wring maximum stability and performance out of almost any hardware.


In the early ‘80s, Bell had transferred their development of Unix to Western Electric, which called their commercial version of Unix “System III” (they figured that if they called it “System I” no one would buy it since they’d think it was too buggy). Unix System V was released in 1983, and System V Release 4 (SVR4) was released in 1989, incorporating many of the improvements made by the Berkeley group and Sun Microsystems (including TCP/IP and NFS). Unix even had two Graphical User Interface schemes (Motif and OpenLook) – both commercial and (appropriately) completely incompatible implementations of the open MIT “X Window” standard. By the late 1980s, Unix was a fully commercial operating system family in all its variants; but it was a “high-end” OS that was far out of reach for individual users, and which the personal computer revolution had largely passed by.

At this point, it’s worth a brief digression to discuss “what is Unix?” If you want to be technical, only operating systems based on AT&T code and blessed by the Open Group (the Unix trademark holder) qualify as Unix. Personally, I think that’s a load of crap. For the purposes of this article (and the rest of this column’s run), any OS that you can bring to a screeching halt by typing “kill -9 1” at a root shell prompt is Unix – and that includes SVR4, BSD and Linux-based OSes.
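The joke works because process ID 1 is init, the ancestor of every other process, and signal 9 (SIGKILL) can’t be caught, blocked or ignored – so sending it to init takes the whole system down. Here’s a safer way to see SIGKILL in action on a throwaway process instead of PID 1 (this sketch is my illustration, not something from the original column):

```shell
#!/bin/sh
# Demonstrate 'kill -9' on a disposable process instead of PID 1 (init).
# SIGKILL (signal 9) cannot be caught, blocked, or ignored by the target.

sleep 300 &        # start a throwaway background process
victim=$!          # remember its process ID

kill -9 "$victim"  # send SIGKILL; the process dies immediately

wait "$victim"     # collect its exit status: 128 + signal number
echo "exit status: $?"
```

The reported status of 128 + 9 = 137 is how the shell tells you a child died from SIGKILL rather than exiting normally. Run the same command against PID 1 as root, and you get the screeching halt described above.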

Anyway, during this time, the university community that had given birth to so many of Unix’s developments remained active – and they wanted a Unix that was inexpensive enough to be used by individuals or groups without the deep pockets of universities and corporations. Parallel to the growth of private Internet Service Providers seeking Internet access outside of the institutions through which they had used the Internet, hackers were still active trying to create a Unix of their own.


The Berkeley CSRG’s funding ran out before it fully completed its goal of creating an AT&T-free Unix, but just before its demise CSRG released a version (still retaining five or 10 percent AT&T code) called 4.3 BSD Net/2. They published their work under what became known as the “BSD” license, which made the source free to anyone who wanted it (but also allowed it to be incorporated into non-open-source and commercial projects). One of the many groups that picked up the Net/2 ball and ran with it was a commercial interest, Berkeley Software Design, Inc. (BSDI), which worked on filling in the gaps in Net/2 and selling the resulting product. Simultaneously, Berkeley’s Bill Jolitz made a great leap for hackers and hobbyists everywhere with a more-or-less AT&T-free port of Unix to the Intel 80386 architecture called 386BSD.

Frightened by the “free PC Unix clone” possibilities of 386BSD, AT&T sued Berkeley. Predictably, Berkeley countersued AT&T. While their lawyer landsharks battled it out, AT&T sold the SVR4 code rights to Novell (anyone remember UnixWare?). Development still went on in the meantime, and a horde of groups worked on merging the BSD and SVR4 releases or creating an entirely new Unix. Around about a million different Unix “standards” groups came into existence and were wholeheartedly ignored by everyone else.


Richard Stallman had already begun his “free-software” movement in the mid-‘80s when he started to read “Karl Marx’s Greatest Hits” and write Emacs. With some MIT buddies, he formed the GNU (GNU’s Not Unix) Project to take all of those AT&T Unix utilities and make better versions that were free/open source – and eventually to create a completely new free Unix, to be called the GNU Hurd. The GNU license(s) essentially forbade using the code in products that were not open. The GNU licenses provided the crucial distinction of the “free software” movement that would evolve from the GNU project, and separated it from the looser BSD license, which would become the model of the often semi-corporate “open source software” movement of the late 1990s.

Meanwhile, in 1991, a 21-year-old Finnish computer science student named Linus Torvalds was working on making a free and better version of Unix’s kissing cousin Minix. Torvalds developed the beginnings of a completely new kernel (the “core” part of the OS that handles memory management and device input/output, and acts as interface between hardware and software), and posted to Usenet’s comp.os.minix his source code for version 0.02 of what would soon be called “Linux.”


The BSD development model emphasized a “core team” of developers responsible for the primary guidance of a project, and a number of teams sprang up to release free versions of Unix based on the BSD code. Unfortunately, they all managed to get along “like two cats in a sack.”

At the same time, Linux – which had been released under a GNU license – emphasized a decentralized development model. Torvalds remained the head of Linux kernel development, and Linux variants were unified by their use of the Torvalds-approved kernel. However, everyone was invited to take part in adding drivers and kernel developments, and anyone was free to form their own Linux “distribution” (anyone remember Yggdrasil?) concentrating on whatever direction they felt that Linux should go. 

Suddenly, in 1994, the lawyer-welfare project known as the “AT&T/Novell vs. Berkeley war” ended. The exact details of the settlement are sketchy, but BSDI shortly thereafter discontinued its Net/2-based system and replaced it with a release based on a new code base called 4.4 BSD Lite. Novell eventually sold its SVR4 rights to the Santa Cruz Operation (SCO), and the SVR4-based non-free Unixes (including Solaris, HP/UX, Digital Unix and IBM’s AIX) continued and proliferated. Also in 1994, Linux kernel 1.0 was quietly released. In time, Stallman’s Free Software Foundation – seeing that the GNU Hurd project would be up to speed sometime around 2040 – came to adopt Linux (Debian distribution) as a replacement for the Hurd, despite some well-publicized nomenclature hissy-fits by Stallman about calling Linux “GNU/Linux.”


Today, the commercial Unixes are still around and (at least some of them) flourishing. Meanwhile, development for hobbyists, hackers, ISPs and many businesses has focused on the Freenix family. Right now, there are four primary Freenixes of interest to ISPs: Linux, FreeBSD, NetBSD and OpenBSD – each with a different philosophy, focus, and strengths or weaknesses.

In next month’s edition, we’ll explore what makes them different, and why one or the other may be right for your ISP. In the meantime, those of you with enough free time or pain tolerance to have read this far should please feel free to send your questions and comments about the varying Freenix families to [email protected]. Flames and hate mail should be directed to John Dvorak. 😉

Your ISP Can Support Macs – Even if You’ve Never Used One

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, August 1998

This was the first article I was invited to write for Boardwatch Magazine back in 1998. I had started working at an ISP in 1997 and discovered that not only was I the only Mac user, but all the hardcore techies working there actively disdained Macs and Mac users. At the time, Apple was actively circling the corporate drain and it was a bad time to be a Mac fanboy, so this was my bit of advocacy to speak to ISP operators and encourage them to not lock out the Mac market. As a result, I take full credit for Apple’s resurgence over the past 20 years.

For several years, it seemed that all the Mac users had hidden out in caves with OS/2 Warp users, or had moved to cabins in Montana and spent their time sending angry letters to the editors of PC Week magazine. However, Macintoshes are coming back – roughly 12 percent of new computers sold through retail stores are Macs, and that number is climbing. 

There are some very good reasons for making sure that your ISP supports Macintoshes (if it doesn’t already). You can gain a competitive advantage by advertising yourself as being one of the few (or the only) ISPs in your area that is “Mac-Friendly.” Millions of people have bought iMacs because they want a simple machine for surfing the web and reading E-mail – and they all need an ISP. 

Of course, there are tradeoffs to offering Mac support. On the down side, you’ll still get the same calls from Mac lusers that you get from Windows lusers who are busy looking for the RS-232 port on their toaster. And to really answer all of the questions that might come up, you’ll want an actual Mac user on staff (or at least within E-mail reach).

On the plus side, though, the vast majority of Mac users have the same few questions that you’re used to answering for PC users. By knowing five simple tools and resources, you can answer about 95 percent of the Mac tech support calls you’re likely to receive – even if you’ve never even touched a Mac yourself. These resources, all included in a default installation of the operating system, are part of MacOS (which sounds like a breakfast cereal) version 8.5 or 8.6, which all new Macintoshes are shipped with. For information on providing tech support for customers with older Mac system versions, check out

Mac Basics

Many of the calls you’ll receive with Macs are about the same UCDI (User Clue Deficiency Issues) problems you’ve heard from PC users. You know the drill: make sure that their modem is connected to the computer, make sure all the cables are firmly connected, make sure the phone line is plugged in, et cetera. 

In case you’ve never used a Mac, there are a couple of interface basics that you’ll need to be acquainted with in order to guide a user through troubleshooting. 

The MacOS analog to the Windows 95/98 “Start Button” is the Apple Menu, marked by an Apple icon located in the very top left corner of the screen. Just like Windows menus, Mac menus are single-click “sticky menus” that are easily navigated. To close a Mac document window, click on the button in the upper left (not right) corner; clicking on the buttons in the top right will only expand or collapse the window.

Instead of showing the open applications or control panels in a task bar, Macs have an Application Menu in the top right corner of the screen. The name (or just the icon in some cases) of the foreground app is displayed there; clicking on the menu will allow you to switch between open applications or hide (minimize) them. This is important, since many novice Mac users may not know whether the foreground is an application or the Finder (the application which creates the Mac GUI). To change your foreground task, simply click on the appropriate item in the Application Menu.

The “meta” key on Macs (analogous to the “Windows key” on PCs) is the “command” key, sometimes known as the “Apple key” (it’s the one with the ⌘ icon). Applications are launched by double-clicking them, or selecting them with a single click from the Apple Menu. Quitting an application is done by selecting File >> Quit, or with the ⌘-Q keystroke combination.

Internet Setup Assistant

As mentioned, Apple goes to great lengths to make its OS as friendly (“cretin-proof”) as possible, and it provides a pretty useful wizard for supplying all of the information a user needs to set up a dial-up or dedicated Internet connection. The Internet Setup Assistant runs automatically the first time a user boots their Mac, or it can be found with Apple Menu >> Internet Access >> Internet Setup Assistant.

The initial screen of the Setup Assistant asks, “Would you like to set up your computer to use the Internet?” Click yes. The second screen asks, “Do you already have an Internet account?” If you answer “yes,” this brings you to an introductory screen of a separate program called the Internet Editor Assistant. If you answer “no,” you’re still taken through most of the Internet Editor Assistant steps, but at the end of the process, you can dial into an Apple database of Mac-friendly ISPs to choose one. 

The installation process is fairly straightforward, and users should be able to complete it easily if you’ve given them: 1.) their username and password; 2.) your local dialup access number; 3.) their POP and SMTP server addresses and/or passwords; 4.) which type of server they’re connecting to (e.g., PPP, DHCP, BootP, etc.); 5.) their nameserver IP addresses, subnet mask and default router (if this is required); 6.) and a PPP connect script if it’s needed. If you require a PPP connect script, make sure that your user can obtain it and place it in the System Folder >> Extensions >> PPP Connect Scripts directory on their hard drive. If any of these settings have been entered incorrectly, the user can re-run the Assistant, or change their information manually in the relevant control panel. 
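For support staff, it can help to hand new customers a one-page settings sheet covering the six items above so they can type the answers straight into the Assistant. A hypothetical example follows – every value here is a made-up placeholder, not a real setting from the article:

```
Example ISP dial-up settings sheet   (all values are placeholders)
------------------------------------------------------------------
Username:            jsmith
Password:            (assigned at signup)
Dialup number:       555-0134
Connection type:     PPP
POP3 server:         mail.example.net
SMTP server:         smtp.example.net
Nameservers:         192.0.2.1, 192.0.2.2
Subnet mask:         (assigned automatically via PPP)
Default router:      (assigned automatically via PPP)
PPP connect script:  not required
```

If your dial-up servers negotiate addresses automatically, the customer can leave the address fields blank, which cuts the most error-prone typing out of the process.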

Control Panel: Remote Access

To dial in, a Mac user must access the Remote Access control panel, which can be reached through Apple Menu >> Control Panels >> Remote Access. Users may not realize that this control panel is the only way to initiate a PPP connection, since there are two misleadingly-named items in the Apple Menu >> Internet Access menu (Browse the Internet and Connect To…) which don’t create a PPP connection; they simply launch the user’s default web browser, whether a connection is active or not.

To initiate a dial-in PPP session, bring up the Remote Access control panel. If they aren’t already filled in, supply your username and password, then click “Connect.” Remote Access will show the steps it goes through as it dials and connects (or any errors it encounters on the way). If it reports an error, you can use one of the other tools listed here to track down the error, or use the Internet Setup Assistant to replace all of your settings. If all goes well, the Remote Access control panel will then show a connection timer and two columns of lights, indicating data being sent and received. To terminate a session, activate the control panel again and click “Disconnect” (or just snip your phone line with a pair of scissors).

A connection will remain active if you quit the Remote Access control panel. However, if you’re having dropped-connection problems, it’s not a bad idea to just hide the control panel or leave it in the background, since the Remote Access control panel is your best indication of whether a connection is still active.

Control Panel: Modem

If a user is having trouble dialing, the Modem control panel is the first place to check. Check the modem settings by selecting Apple Menu >> Control Panels >> Modem. Make sure that you have selected the correct port to which your modem is attached; otherwise, you’ll get a “modem could not be found” or “port is in use” error. Also, make sure that you’ve selected a proper modem type (the only effect of the modem type setting is to determine the initialization string). Other options, including pulse or tone dialing, are configured here as well.

Control Panel: TCP/IP

The TCP/IP control panel is where settings for name service, router and IP addresses are stored. To access it, select Apple Menu >> Control Panels >> TCP/IP. The relevant information should already have been entered by the Internet Setup Assistant, but it can be modified here if necessary by typing directly into the fields in the control panel. If the Remote Access control panel reports problems negotiating with the dial-up server, check the TCP/IP control panel to make sure that you have the correct server type selected (PPP, DHCP, BootP, et cetera).

Control Panel: Extensions Manager

If a user reports issues with their system crashing or another application crashing when they connect, odds are that it’s a MacOS extension that’s to blame. Extensions are files containing libraries and routines which patch system traps, and they are notorious for conflicting with each other and causing crashes (especially extensions installed as part of third-party or shareware programs).

The Extensions Manager control panel, accessed by selecting Apple Menu >> Control Panels >> Extensions Manager, controls which extensions and control panels are loaded by the system at startup time. If you’re experiencing frequent crash or freeze problems, activate the Extensions Manager. Click the “My Settings” pull-down menu and select the predefined extensions set called “MacOS 8.5 All” (or “8.6 All”) and restart. This will limit the loaded extensions to only those that are defaults for the OS – which should result in a stable system. You may temporarily lose some of your favorite system enhancements and cool widgets – but the system should be stable enough to access the Internet. If not, then you’ve got inherently bug-ridden software – or worse problems that should be taken up with Apple.

Conversely, you can examine the Extensions Manager control panel to make sure that all of the needed extensions for Internet access are loaded. A “MacOS 8.5 All” setting should include these; if you’re using a custom set of extensions, make sure that the Modem, Remote Access and TCP/IP control panels are checked, as well as any extension relating to Open Transport (the Mac’s native streams-based TCP/IP stack implementation). 

Outside Resources

While the tools listed above can cover most basic connection problems, they can’t solve everything. However, a few other resources go a long way toward fleshing out your Mac support. First, pick up copies of Macworld Mac Secrets (5th Edition), by David Pogue and Joseph Schorr, and Sad Macs, Bombs, and Other Disasters by Ted Landau. These two books are absolutely indispensable for advanced troubleshooting. Second, you should know where your local Tucows mirror is, since they have a pretty comprehensive stock of Internet software for Macs – everything from the latest versions of web browsers to free tools for FTP, telnet, ping and more. Finally, you may want to bookmark Apple’s Tech Info Library, which contains volumes of easily-searchable technical support information.

How to… Install Linux on a G3 Power Macintosh

By Jeffrey Carl

From MacAddict Magazine, March 1999

You’ve heard the hype about Linux – a free version of the powerful Unix operating system. You’ve heard it makes a great web server or file server; or you’ve heard that it makes a fast workstation. You’ve envisioned yourself being the envy of your friends with an un-crashable OS, doing cool (yet vaguely dirty)-sounding things like “tweaking your kernel.” And, being an unrepentant geek, you can’t wait to play around with it. So you’re ready to think different – really different – and try installing it.

First, the good news – despite what you’ve heard, Linux installation can be a very simple, non-intimidating process. Even better, Linux has become available for most Macintoshes – both PowerPC and 68k. With any luck, you could be up and running in about an hour.

Now, the bad news – Linux support isn’t perfect for all Mac-compatible machines and peripherals. Most importantly, you’re taking off the training wheels here – installing an operating system on your machine which isn’t officially supported, doesn’t come with much documentation, and can require you to mess around with the very guts of your machine in ways you had never imagined were possible. 

Getting ready to install Linux on a Power Macintosh G3

If you’re still with us after reading that last paragraph, then you’re stout of heart and soul – or you’re a glutton for punishment. Either way, let’s get started. For this example, we’ll be installing LinuxPPC 4 from a CD onto a Power Macintosh G3/266, with an external SCSI 1.5 GB hard drive set up as the Linux volume.

1. Get Prepared

1. First, make sure that you have a supported Macintosh for your Linux of choice (see sidebar) and a hard drive you can repartition to make a home for Linux. At least 400 MB of space is required, and 1.2 GB or more is recommended. You can repartition your MacOS drive to include Linux volumes, but you’ll have to wipe it clean first. 

2. Back up all of your files. Really. You can get away without doing this if you’re installing Linux onto a fresh new disk (and you like living on the edge); but if you’re repartitioning your current MacOS drive, you’ll need to do this because you’ll be wiping your disk clean in the process.

2. Partition Your Disk

You’ll need to create several disk partitions for Linux (at least two, and four or five is recommended). This comes from an old Unix tradition of placing files which seldom change (and important system files) on different partitions from frequently-changing user files so that they’re less likely to be corrupted by frequent writes to the hard disk. The minimum number of partitions is two: one for swap (sort of Linux’s virtual memory scratch disk) and one for / (“root,” or your regular filesystem). It is recommended that you create these two partitions, as well as one for /usr (where most of Unix’s programs are installed) and /home (where users’ personal files are stored).
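As a concrete sketch, a four-partition split for the 1.5 GB external drive used in this walkthrough might look like the table below. The sizes are illustrative guesses on my part, not recommendations from the distribution – adjust them to taste:

```
Partition   Mount point   Size       Purpose
---------   -----------   --------   -----------------------------------
swap        (none)        64 MB      virtual-memory scratch space
root        /             250 MB     the base filesystem and system files
usr         /usr          700 MB     most installed programs
home        /home         ~500 MB    users' personal files
```

A common rule of thumb of the era was to make swap about twice the machine’s RAM, and to give /home whatever is left over after the system partitions are sized.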

1. Choose a drive-partitioning utility. Your choice here depends on what type of disk you’re going to use (SCSI or IDE/ATA). If you aren’t sure what type of disk you have, consult the documentation that came with your Mac. 

For SCSI disks, you can use the Apple_HD_SC_Setup program which came with your Mac to create the partitions and set them as the correct type (A/UX). If you have a non-Apple disk or a Mac clone, you can use the third-party utility that came with the drive or computer (like FWB Hard Disk ToolKit) for this. If you have an Apple IDE or SCSI drive, you’ll need to use the Apple Drive Setup utility that came with your Macintosh (make sure you have the newest version). If you use Drive Setup (as we’ll be doing here), you’ll also need the pdisk utility (included on the CD) to convert the HFS partitions you create to their proper type.

2. Open Drive Setup utility and choose the disk you’ll be partitioning. Note that you can’t use Drive Setup from your startup disk; you’ll have to partition another drive, or boot from your MacOS system CD.
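Once Drive Setup has created the HFS partitions, pdisk is used to change their type to the Apple_UNIX_SVR2 type that Linux expects. The exact prompts vary between pdisk versions, so treat the following transcript as a rough illustration (the partition number, size and device are hypothetical):

```
Command (? for help): p                # print the current partition map
Command (? for help): d 5              # delete the HFS partition made in Drive Setup
Command (? for help): C                # create a partition, prompting for a type
First block: 5p                        # start where old partition 5 began
Length in blocks: 128m                 # hypothetical size
Name of partition: swap
Type of partition: Apple_UNIX_SVR2     # the partition type Linux looks for
Command (? for help): w                # write the modified map to disk
```

Nothing is committed until the `w` command, so you can experiment freely and quit without writing if the map looks wrong.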

3. Get the Tools

First, you’ll need to get Linux and the utilities for installing it. 

1. One of the nice things about “free operating systems” is that they’re just that – free. If you have a fast Internet connection, you can do an installation via FTP from the LinuxPPC site or one of its mirrors, and it’s absolutely free. However, it may be easier for most users to order a CD from the good folks at LinuxPPC, which includes a recent distribution plus other programs and goodies for $XX plus shipping. In addition, if you buy a CD, you’re completely free to share it with as many people as you like.

2. Install BootX (included on your LinuxPPC CD), the utility for switching back and forth from MacOS to Linux when you boot your computer. Simply drag the BootX control panel onto your system folder to install it (you’ll need to reboot before you can use it).

If this doesn’t work for you, you can control which OS you boot into through BootVars (included on the CD), a control panel which allows you to manipulate your Mac’s Open Firmware. However, this isn’t recommended – mucking around with Open Firmware has reduced more than one formerly confident Mac Jedi to bingeing on non-prescription cold medications in frustration (also, this option is no longer officially supported by LinuxPPC).

3. Drag two files from the CD onto your System Folder: vmlinux and ramdisk.image.gz. These files should stay at the “top” level of your System Folder (not inside any folders inside the System Folder).

4. Begin the Installation

1. Insert the LinuxPPC CD into your CD-ROM drive, and open your BootX control panel (double-click it or select it from your Control Panels menu in the Apple Menu).

2. Leave the “root device” field blank. Check the “Use RAM Disk” and “No video driver” options.

3. Click the “Linux” button to reboot your computer into the Linux Red Hat Installer.

5. Use the Red Hat Installer



The LinuxPPC website:

The LinuxPPC Installation Guide:

The MkLinux website:

The Linux Mac68k Project website:

The Linux on PowerPC newsgroup: comp.os.linux.powerpc


Which Macs Can I Get a Linux For?

LinuxPPC 5.0 supports:

Any PCI-based Power Mac, PowerBook or Macintosh clone (including iMac), as well as BeBoxes. “Blue-and-white G3” not supported yet. 

MkLinux 3.0 supports:

NuBus-based Power Macs (6100, 7100, 8100, 9100), PCI Power Macs (7200, 7500, 7600, 8500, 9500, 7300, 8600, 9600), PCI Performas (4400, 5400, 5500, 6400, 6500), 20th Anniversary Mac, Desktop and Minitower Power Mac G3 (but not “blue-and-white G3” yet), PowerBooks (5300, 1400, 2400, 3400, G3, G3 Series)

LinuxMac68k supports:

Most 68030- and 68040-based Macs (but not most 68LC040-based). 68020-based Macs with an FPU (Mac II, or those with an FPU emulator).

For complete, up-to-date lists, please refer to the website of your Linux of choice.