Getting to Know SuSE Linux – An Interview with SuSE CTO Dirk Hohndel

By Jeffrey Carl

Boardwatch Magazine, October 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

SuSE Linux is often referred to as “the European Red Hat,” since SuSE enjoys the kind of market domination there that Red Hat does in the U.S. For the record, SuSE is pronounced “SUE-zuh,” and the name was a catchy little acronym for the rather awkward “Gesellschaft für Software und Systementwicklung mbH.” A reader survey by a German Linux magazine found SuSE with about 75 percent market share to Red Hat’s 11 percent and Debian’s 8.5 percent. Although I wasn’t able to find figures, it’s also safe to say that SuSE enjoys a commanding market share in many other areas outside the U.S. as well.

A major factor cited for this is the fact that SuSE’s distribution includes six CDs with more than 1500 extra software packages to install. While many U.S. users with broadband connections don’t particularly care, it’s an important factor for users (especially for international users) who have slow connections, pay per-minute charges for their bandwidth or otherwise find it inconvenient to spend large amounts of time downloading new software.

Advantages frequently cited by SuSE users, aside from the copious CD software collection, include SaX, an excellent X Windows configuration tool, and YaST, SuSE’s LinuxConf-like administration tool. While these features are favorites of some users, others complain that the enormous number of applications on the install CDs makes installation cumbersome. And YaST, like LinuxConf, seems very much to be a “love it or hate it” application, with opinions varying widely by personal preference.

SuSE distributions tend to be a little less “cutting-edge” than Red Hat’s, lagging a couple months behind with the “latest and greatest,” but as a result tending to have fewer bugs. Whether this tradeoff is acceptable is up to you; it should be noted that SuSE acquitted itself fairly well in the recent Security Portal Linux Distribution Security Report (http://www.securityportal.com/cover/coverstory20000724.html).

The current version of SuSE for x86 hardware (as of this writing) is 7.0. SuSE recently released a well-reviewed version of their 6.4 distribution for PowerPC-based Macintoshes, which includes the MOL (Mac on Linux) emulator among other goodies. To find out more, I asked SuSE Chief Technical Officer Dirk Hohndel and press representative Xenia von Wedel:

Carl: Could you give me a brief history of SuSE?

Hohndel: In 1992, the four founders ran into the (back then largely unknown) Linux operating system (or more precisely, the beginnings thereof). They quickly saw the need for Linux on “off-line media”, as Internet connections were not commonplace in Germany back then (and still are quite expensive). More importantly, a physical distribution made it easier to bundle documentation and support, and of course to make SuSE’s own developments for installation and configuration available as part of the package. Soon SuSE Linux became the standard Linux OS in Europe, and in the past few years SuSE has been quite successful outside Europe as well (less because of a strong marketing arm, but more due to its focus on sound technology and good engineering).

Von Wedel: SuSE Linux AG, headquartered in Germany, and SuSE Inc., based in Oakland, CA, are privately held companies focused entirely on supporting the Linux community, Open Source development and the GNU General Public License. With a workforce of over 450 people worldwide, SuSE has offices across Europe, in Venezuela and in the US. More than 50,000 business customers use SuSE Linux worldwide due to its stability and high quality.

SuSE received the “Show Favorites” award at LinuxWorld Expo in February 2000 and March 1999. SuSE contributes considerably to Linux development projects such as the Linux kernel, glibc, XFree86, KDE, ISDN4Linux, ALSA (Advanced Linux Sound Architecture) and USB (Universal Serial Bus). Additional information about SuSE can be found at http://www.suse.com.

Carl: What is included with the newest release of SuSE?

Hohndel: Linux kernel 2.2.17/PRE (2.2.16 plus the relevant patches for it) with enhanced raw device support, 4 GB main memory addressing, full USB support and ReiserFS; XFree86 4.0; and SaX 2, the graphical installation tool for XFree86 4.

SuSE Linux Professional includes more than 1500 apps such as StarOffice 5.2; Acrobat Reader; Samba 2.0.7; Apache 1.3.12, including PHP-4, Zope, Midgard, JServ, Tomcat, backhand and WebDAV; Lutris Enhydra 3.0; teleconferencing; Sendmail 8.10.2; Postfix 19991231pl08; Perl 5.005_03; Logical Volume Manager; PAM; KDE 1.1.2, KDE 2.0 Beta3, and GNOME 1.2.

SuSE Linux is known to be one of the richest and most complete versions of Linux, so almost all the important packages are there. One of the interesting new features is that SuSE Linux 7.0 can be installed and used via Braille; it can therefore be used and managed by the visually impaired.

Carl: What platforms does it support? What are the minimum requirements for SuSE?

Hohndel: At the moment, SuSE Linux is available as a shrink-wrapped product for Intel x86 and compatible (IA32), Motorola PowerPC and Compaq Alpha processors. A beta version for the IBM mainframe S/390 can be downloaded from SuSE’s FTP server. Versions for SPARC and IA64 are under development and also available via FTP.

Von Wedel: Please see the detailed information about some special hardware in our support database. Just search for the desired keyword at http://www.suse.com/support/hardware/index.html.

Carl: What are some configurations that SuSE would recommend, or that it really excels with?

Hohndel: We have certified quite [a lot of] hardware with SuSE Linux; a current list can be found on our web site. I don’t think that it makes sense to point out specific configurations, as those tend to change all too frequently.

Von Wedel: If you are running a NIS or simple file server, a slower Pentium or even a 486 would work fine. For a standard all-around machine, you would probably want the average machine found on the shelf at Best Buy or CompUSA [including] big hard drives, a Pentium III and at least 128 MB of RAM. This machine will allow you to run programs like Quake III and VMware without problem. We tell our customers to avoid any hardware where the manufacturer refuses to release open source drivers for it.

Carl: What would you say differentiates SuSE from other Linux distributions?

Hohndel: SuSE maintains consistent technical quality throughout all platforms and all languages. We also have an encyclopedic set of Linux tools, for which SuSE Linux is already famous. YaST and YaST2 are well known as solid and easy-to-use installation and configuration tools that are flexible enough to support a wide range of installation options for the enterprise environment.  Additionally, for optimized support for fully automated installation, SuSE’s new ALICE (Automatic Linux Installation and Configuration Environment) tool allows central configuration management for computer networks.

Carl: What is YaST? Where can I find more information about it?

Hohndel: YaST stands for Yet another Setup Tool. A white paper that covers some of the important features is available on the web at http://www.suse.de/de/linux/whitepapers/yast/Auto_Install_English_text.txt.

What is important to know about YaST is that it offers a central configuration and administration interface, but can be told to stay out of the way if the admin chooses to configure parts of the system “manually.” So, you can benefit from the flexibility and know-how that has been put into the tool without being stuck with it. Many typical administration tasks (adding printers, setting up accounts, adding software packages) can comfortably be done from within YaST.

Carl: For someone operating an ISP, what reasons could you give to choose SuSE over another Linux distribution?

Hohndel: SuSE has a strong focus on security within its distribution. Not only do we have an internal security team within SuSE Labs that audits all major packages and closely follows all relevant information sources, but we also maintain an active dialogue with our customer base through mailing lists and security alerts.

Furthermore, SuSE is extremely well connected with numerous industry leaders, from both the technology and the business perspective. We maintain strategic alliances with IBM, Compaq, Fujitsu Siemens Computers, Oracle, SGI, and numerous other independent software vendors. And [since] SuSE supports the Free Standards Group, SuSE customers can be sure they are using a widespread Linux distribution that helps set the standards. Already today, SuSE is compliant with all the draft standards from the LSB [Linux Standard Base].

Carl: For someone operating an Internet server – what, if any, are the drawbacks for choosing SuSE?

Hohndel: Quite frankly, I don’t see any drawbacks in choosing SuSE. Our system is well tested for exactly this use case. Interestingly enough, while globally about 30 percent of all web servers run Apache on Linux, in Germany, our “home turf,” that number is above 40 percent. So SuSE Linux is in very heavy use for exactly that – an Internet server.

Carl: What advantages would you cite for someone choosing SuSE Linux over another server OS, like Windows 2000, Solaris or FreeBSD?

Hohndel: Open Source is seen by many as the software development methodology of the future. Especially in the area of infrastructure systems, for example Internet servers, Open Source has gained the respect and the trust of the companies deploying these systems.

Linux is finally keeping the old Unix promise. It is the first truly homogeneous OS in a heterogeneous hardware environment. At the same time, it is a very flexible environment that allows for easy customization and can be adapted to the needs of very different use cases, from a tightly controlled firewall system to a fully fledged server to a complete desktop.

And, of course, there are lots of companies that develop on Linux and for Linux, and there are plenty of companies supporting Linux and offering professional services around Linux. SuSE is obviously one of them.

Depending on the OS that you compare with, a different combination of these arguments applies. 🙂

Carl: Where could someone running an Internet server go for help and tips on SuSE?

Hohndel: We offer both support and professional services around SuSE Linux. A good place to start is http://www.suse.com/suse/news/PressReleases/proserv.html. We already offer 24×7 support here in Germany and will roll out this service globally soon.

And of course we offer the famous support database and component database (http://sdb.suse.de/sdb/en/html/index.html and http://cdb.suse.de/cdb_english.html).

Carl: What’s next for SuSE? What improvements are you planning for the future?

Hohndel: We continuously expand the hardware base that SuSE Linux supports. As I mentioned above, S/390, IA64 and SPARC are already in beta test (or close to production), and others will follow.

The installation and configuration tools are continuously being developed and will of course remain one of the areas that we focus on, especially improved hardware detection, but also ergonomic aspects of the administration process.

And of course we are putting a lot of effort into many Open Source projects. [These include] “enterprise” features like high availability, improved SMP support or our work on directory services and better file systems, or desktop-oriented things like XFree86 and KDE.

An important focus of improvements is ISV [Independent Software Vendor] support. We are working closely with many ISVs to help them port their software to a standardized Linux environment to get the largest set of applications for Linux possible.

So there are a lot of things that you can expect from SuSE in the future. We are excited to be in this fast-growing industry and are looking forward to the things to come.

The Continuing Evolution of Darwin – Apple’s BSD Unix Has Lots of Promise, and Lots of Rough Edges

By Jeffrey Carl

Boardwatch Magazine, September 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

There’s a new BSD out there, and it’s unquestionably the odd member of the family. “Darwin” is the BSD Unix core of Apple’s next generation OS, MacOS X (see page 39 for the lowdown on OS X). While MacOS X (which includes Darwin) is a proprietary commercial product, Darwin by itself is open-source and available for free.

So why should you care (or think about deploying it)? First, it’s being developed for both PowerPC and Intel x86 architectures. Second, it’s being developed not only by its user community but also by Apple – which, like ’em or hate ’em, has considerable manpower and money to throw at the project.

Lastly, Apple is scheduled to start shipping MacOS X on new Macintoshes in January 2001, as well as making it available for current machines – meaning that there will probably be a couple million boxes out there with Darwin at their heart within a year. While that’s no guarantee that all of your favorite software will rush to support it or optimize for it, it will certainly have the user numbers to make it worth noticing.

What is Darwin?

To get a feel for where Darwin is and where it’s headed, I asked Ernie Prabhakar, Apple’s Developer Platforms Product Line Manager.

Q: Can you give me a brief history of Darwin (including its roots in NextStep/ Rhapsody/Mac OS X Server)?

Ernie Prabhakar: Darwin is based on the Mach/BSD technology that traces its roots back through Mac OS X Server, Rhapsody, OpenStep for Mach, NextStep, and ultimately the original Mach 2.0 work at CMU. In all the NeXT/Apple products, this UNIX core provided the support for advanced GUIs and development environments. Over time, we’ve evolved from BSD 4.3 to BSD 4.4, and from Mach 2.0 to Mach 2.5 and ultimately Mach 3.0 (based largely on OSF MK 7.3).

Q: Who is Darwin’s intended audience of consumers?

EP: Darwin is targeted at Macintosh developers who want to leverage the power of open source, as well as Open Source developers who want to enjoy the performance, consistency, and advanced GUI available on the Apple platform. Certain large customers (e.g., research universities) will take advantage of the open core to customize it to their environments, as well as helping students and researchers studying open source.

Q: What is the relationship between Darwin, its BSD kernel, and the Mach microkernel? 

EP: Darwin is the entire product, including a hybrid Mach/BSD kernel plus assorted utilities (mostly from FreeBSD) and developer tools (primarily from the GNU project). 

Q: How does using Mach affect its characteristics or performance?

EP: Very little. We use a microkernel architecture but a monolithic implementation, so Mach and BSD normally run in the same address space.

Q: Where does Darwin come from? Which parts are built around each of the different BSDs and/or NextStep?

EP: The current 1.0 version of Darwin actually shares very little code with the original NextStep implementation.  The Mach microkernel is actually based on the OSF code used in MkLinux.   The BSD kernel is based on BSD 4.4Lite, primarily the FreeBSD 3.2 distribution, with a healthy dose of NetBSD.

Q: What does Darwin add to the mix? Are there any innovations new to *BSD in Darwin?  

EP: For an existing BSD developer, Darwin has the advantages of:

• a powerful kernel extension mechanism and I/O Kit for writing drivers

• traditional Apple technology (HFS+, AppleTalk)

•  a well-funded commercial developer with a single well-supported reference release 

•  a very cool optional (though proprietary) GUI called Aqua (i.e., Mac OS X) 

Q: Is there any other OS which Darwin is trying to match or beat in features?

EP: Darwin’s top priority is making Mac OS X the world’s best personal computer operating system.  Beyond that, we want to help our developers turn Darwin itself into a fully-functional standalone open source operating system, compatible with FreeBSD and comparable to Linux. 

Q: Will Darwin include features like FreeBSD’s sysinstall or ports tree?

EP: Actually, we already ship Darwin with the Debian packaging mechanism.

Q: Is Darwin POSIX-compliant? How well should existing open-source apps (especially server apps like Apache, sendmail, qmail, PHP, etc.) compile on it or port to it?

EP: Most BSD and GNU stuff compiles out of the box. The major limitation used to be pthreads, but that was pretty much fixed in Darwin 1.0. We have a very active developer community that is ensuring third-party UNIX software is becoming “Darwin-ready,” which usually means just defining the appropriate #defines and makefiles.
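[Jeff’s note: to give a flavor of that porting work, here’s a toy configure-style helper in Python that picks makefile flags by platform. The platform prefixes are real (Python’s sys.platform starts with “darwin” on Darwin and “freebsd” on FreeBSD), but the macro names and the emitted makefile fragment are hypothetical, invented purely for illustration.]

# Toy configure-style helper: emit per-platform CFLAGS for a makefile.
# The sys.platform test is real; the macros are hypothetical examples.
import sys

if sys.platform[:6] == "darwin":
    cflags = "-DPLATFORM_DARWIN"     # hypothetical project macro
elif sys.platform[:7] == "freebsd":
    cflags = "-DPLATFORM_FREEBSD"    # hypothetical project macro
else:
    cflags = ""

print "CFLAGS += %s" % cflags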

Q: Does Darwin bring any particular optimizations or improvements that would make it good from an ISP/Internet server perspective?

EP: Well, we have a dedicated team of networking engineers focusing on giving it best-of-breed performance integrated with Apple hardware. As well as a really cool GUI for administration (if you buy Mac OS X). Other than that, we use generally the same code as FreeBSD and NetBSD, which is already pretty thoroughly optimized.

Q: Are there any reasons you could think of that someone might choose to use Darwin instead of FreeBSD/NetBSD/etc.?

EP: Several: 

1.  We will be shipping Darwin (as Mac OS X) on millions of machines next year, giving developers an enormous potential user base.

2.  You have the option of purchasing a very cool UI (Mac OS X).

3.  We have excellent support for the latest high-performance PowerPC hardware.

Q: How much of a priority is the x86 port of Darwin?

EP: A lot of Darwin developers are excited about being able to use the exact same open source OS on both PowerPC and Intel hardware, so we’re doing what we can to help.  We demonstrated some major progress at WWDC [the Apple World Wide Developers’ Conference], and I think the community is gearing up to finish the work on their own.

Where is Darwin Now?

As of this writing, the current version of Darwin is 1.0.2. Darwin currently boots on Intel hardware, but a number of unresolved issues keep it from being genuinely functional there. Apple provides a binary release and installer for Darwin on the PowerPC architecture (located at http://www.publicsource.apple.com/projects/darwin/release.html), and it currently supports G3/G4 Macintoshes as well as a few older desktop models. The PowerPC version sits entirely inside a single UFS or Apple HFS+ disk partition (limitations of the Darwin booter currently cap the boot volume size at 8 GB).

Once you install Darwin, you can tell pretty quickly that most of Apple’s efforts are (understandably) going towards making Darwin work with MacOS X, rather than on its own. (In fact, most of the Darwin projects in its CVS repository are live and being used by the Apple MacOS X team at the same time as the Darwin community.) This is most obvious in the fact that the Darwin-specific documentation and install/setup goodies included with the OS are about zero – Apple is working on the MacOS X GUI to include these things.

When you install Darwin 1.0.2, you’ll boot and be dropped into a tcsh shell. From there, you’re on your own – don’t expect any sysinstall or LinuxConf-style goodies to help with setup and configuration. For most experienced administrators, this wouldn’t pose too much of a problem, except that Darwin stashes a lot of its files in MacOS X-centric locations that take a while to puzzle out. For example, don’t bother looking in /etc/httpd or /usr/local for an httpd.conf file – you’ll find it as /Local/Library/WebServer/Configuration/apache.conf.

At this point, Darwin is certainly more of a toy for developers and “early adopters” than an OS worth implementing for an Internet server. OpenSSH hasn’t been finished up yet, nntpd and some other server staples seem to be MIA, there are plenty of device drivers missing, and there’s plenty of work yet to be done on the Intel side. Documentation is still pretty sparse, but some good stuff is developing out there, including a helpful “Unofficial Darwin FAQ” at http://www.synack.net/darwin/faq.html. You can keep up on Darwin’s progress by subscribing to any of the Darwin mailing lists at Apple (http://www.publicsource.apple.com/projects/mail.html) or, if you’re lazy like me, checking out project leader Wilfredo Sanchez’s updates to his Advogato page (http://www.advogato.org/person/wsanchez/).

Darwin is very new, and time will take care of many of its current deficiencies (many of the ones I’ve described will almost certainly be fixed by the time you read this). Despite its shortcomings, Darwin 1.0.2 is a fully functional BSD Unix system, and the default install includes several of the third-party packages you’d hope to find, like Apache, Sendmail, Perl and BIND. Ports for XFree86 (much of the original work on this port was done by id Software’s John Carmack), OpenSSL and other common tools are available in source form from the Darwin CVS repository.

Where is Darwin Going?

Darwin’s promise lies not in where it is now, but where it will be. Darwin can catch up on plenty of things by grabbing code from the other BSDs, and it is likely only a matter of time before Darwin develops rough feature parity with some of your favorite BSDs. The Apple Darwin team has said that future releases of Darwin will try to track FreeBSD; Jordan Hubbard was one of the speakers at the BSD session at May’s Apple World Wide Developer Conference. The IOKit driver framework that Apple has created for MacOS X has a lot of potential for (relatively) easy device driver development. 

The real promise lies in the possibility that the large installed user base of MacOS X systems that will be out there will drive developers (especially commercial ones) to bring their server apps natively to a BSD-based system for the first time. Of course, porting something to MacOS X isn’t the same as porting to BSD; most commercial applications coming to MacOS X will use its API rather than the Darwin/BSD API. Nonetheless, even if some of these apps which currently exist for [insert your least-favorite commercial Unix here] are ported to the Darwin layer of MacOS X, it will be a victory for BSD. 

The close relationship between Apple’s Darwin team, the Darwin community, and the FreeBSD community is an excellent sign for Darwin’s future. While it may not be much more than a curiosity right now, Darwin is a welcome addition to the free *BSD family, and will definitely be something to watch in the next year.

Trusting BSD – FreeBSD for Security Ultra-Gurus

By Jeffrey Carl

Boardwatch Magazine, August 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

While most Freenix admins are used to securing their servers, there is a “higher” world of security that has never been touched by free *nixes. The realm of “trusted” operating systems, long the province only of military and other ultra-secure environments, represents a security level beyond that of all but a few operating systems. Now, however, the TrustedBSD (http://www.trustedbsd.org/) project is working to bring that security level to FreeBSD and other *BSD OSes. If you operate a multi-user environment and you’re looking for the optimum in security, TrustedBSD is a very exciting project to learn about.

I asked Robert Watson, one of TrustedBSD’s lead developers, to describe the project for Internet providers and other BSD server users:

Q: What is the purpose of TrustedBSD? 

A: TrustedBSD provides a set of security extensions to the FreeBSD operating system under a liberal license. Targeting both military and commercial models, TrustedBSD includes traditional trusted operating system features, as well as an extensible authorization architecture to simplify development of new security features. The TrustedBSD project provides an organizational framework through which to discuss, design, develop, and document these extensions.

Q: What is the Common Criteria for Information Technology Security Evaluation (CCITS)? What does it mean to “real world” uses of the OS? 

A: The Common Criteria (“CC”) are a set of security description and evaluation documents developed by the United States and other governments, and adopted as ISO standards. Using the CC, you can use a common terminology to describe specific sets of features and degrees of evaluation. Many readers will be familiar with the Orange Book (and the entire Rainbow Series), which can be considered precursors to the CC in the United States. While the Orange Book largely targeted military applications, the CC really provides a language for setting goals and determining if they have been achieved, rather than prescribing a particular set of features required for a given certification. Unlike the Orange Book, TrustedBSD specifically targets commercial and network-centric environments.

TrustedBSD is being developed with this vocabulary and evaluation system in mind, as the CC by definition provides a common criteria for understanding security products. At this time, no formal evaluation is planned, as formal evaluation requires substantial investment of resources, including financial resources. However, TrustedBSD will make an excellent candidate for evaluation. It is possible that companies choosing to resell TrustedBSD may seek formal evaluation [as a “trusted” operating system].

Q: What benefits will the TrustedBSD extensions to FreeBSD provide to admins using FreeBSD as an Internet server? 

A: It is fair to say that some features of TrustedBSD will have an immediate impact on securing every-day server systems. Other features will come into play only in extremely security-centric contexts, such as military, electronic commerce, and banking environments. Regular systems may see the use of least privilege capabilities rather than volumes of less safe setuid and setgid programs. Similarly, they may take advantage of ACLs on files allowing more flexible discretionary access control. They may also take advantage of auditing features for intrusion detection. However, they are less likely to take advantage of more intrusive functionality, such as the confidentiality and integrity-oriented mandatory access policies that will be implemented. 

Q: Why should an administrator of a FreeBSD web/mail/etc. server use the TrustedBSD extensions to the OS? 

A: It is the intent of the TrustedBSD project to make almost all code developed for TrustedBSD available as part of the base FreeBSD operating system. Thus, all users of FreeBSD will benefit from the project by virtue of using FreeBSD. However, specific advantages, as described above, include the ability to more generally specify permissions on files and allow users to more easily manage resources, and to be able to run far less code with root privileges. As such, administrators taking advantage of this functionality will be able to run a more secure system. 

A recent paper from Argus Systems, a commercial provider of trusted extensions for Solaris, described how use of mandatory integrity policies could have protected against the recent penetration of the Apache Project’s web server. Many of the same protection services will be available as part of TrustedBSD. 

Q: What is the downside, if any, in installing the TrustedBSD extensions? Is administration significantly more complex? Are there any performance penalties involved? 

A: Improved security is almost always a tradeoff against usability. That said, one important goal of TrustedBSD is to make the feature set of a trusted operating system easily accessible to users of a widely distributed free operating system. Any changes in system behavior will require administrators to understand the differences, but in most cases the differences will not be substantial. Administrators will need to learn how to read, manipulate, and back up Access Control Lists (ACLs) on files, and understand how capabilities on files behave.

Performance implications depend on the feature in question: support for most new security checks introduces little or no overhead. However, features such as fine-granularity auditing require creation and management of large quantities of data. Performance-sensitive sites may wish to avoid using some features, such as auditing, as a result. We anticipate producing comprehensive evaluations of the performance impact as code becomes available. 
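[Jeff’s note: for those who haven’t met ACLs before, here’s a rough idea of what administrators will be reading and manipulating. The sketch below shows a POSIX.1e draft-style access ACL in its short text form, plus a toy Python lookup of a named user’s entry. The entries are invented examples, and the code is illustrative only – it is not TrustedBSD’s actual API or tooling.]

# Illustrative only: a POSIX.1e draft-style access ACL in short text form,
# plus a toy lookup of a named user's permissions. Real ACL evaluation
# also involves the mask entry, group membership, and more.
acl_text = "user::rw-,user:alice:rw-,group::r--,mask::rw-,other::r--"

def named_user_perms(acl, username):
    # Return the permission triplet from the named-user entry, if any.
    for entry in acl.split(","):
        fields = entry.split(":")
        if fields[0] == "user" and fields[1] == username:
            return fields[2]
    return None

print named_user_perms(acl_text, "alice")   # prints: rw-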

Q: Are the TrustedBSD extensions planned to be architecture-specific? e.g., are the efforts of TrustedBSD only for i386 machines, or are they portable to other architectures that FreeBSD may be ported to? 

A: The code base is intended to be architecture-neutral, and is written entirely in portable C. i386 platforms are the primary ones in use for development, but we also have access to Alpha-based systems for testing. As TrustedBSD’s supporting infrastructure allows for third-party extension, it is possible that third-party security providers might distribute extensions in binary-only form, but all base code will be freely available in source form.

Formal evaluation for certification requires selection of specific hardware, as whole systems are evaluated rather than purely software products. 

Q: Does the TrustedBSD project intend on making its extensions portable to OpenBSD/NetBSD/Apple Darwin? What are the barriers to porting them? 

A: TrustedBSD is made up of a number of components, some of which do require extensive modifications to the kernel structure. However, where such changes are made, they will improve kernel modularity and extensibility, supporting modular insertion of TrustedBSD components. If these modularity changes are introduced in other *BSD kernels, they can be used to support the higher level functionality. As the *BSD kernels are relatively close in structure, this task is well within the realm of possibility. The KAME IPv6/IPsec implementation is an example of a project that has successfully developed software for multiple BSD platforms. The BSD-style license used in TrustedBSD should pose no problem for integration with almost any other project, open source or commercial, and the TrustedBSD project would be glad to see and assist in integration into other operating systems. 

Q: Do the TrustedBSD extensions to *BSD open the doors to any new uses for the OS? e.g., Are there any tasks to which the existing BSD flavors were unsuited to before for which they are now suitable? 

A: Yes, it does, as it opens the doors to uses that have more demanding security requirements. Examples include banking, military, and electronic commerce. That is not to say that FreeBSD wasn’t used there before, as it was already a very qualified OS for these environments, but this provides a higher degree of assurance. An extensible security infrastructure will also make FreeBSD/TrustedBSD a more appealing platform for security research and development.

If formally evaluated, then it would be open for use on classified networks in ways in which it currently isn’t usable. 

Q: How does TrustedBSD compare to OpenBSD’s security goals? Are they competitive projects or complementary projects? 

A: Complementary. To our knowledge, OpenBSD has really taken quite a different approach to security: fine-grained source code auditing and integration of cryptography. TrustedBSD introduces new authorization models, as well as auditing. While we won’t be porting the changes over to OpenBSD at this time, this doesn’t mean others can’t, or that we won’t port them in the future. 

Q: How many active contributors are there to the TrustedBSD project? What areas are you looking for contributors to? 

A: TrustedBSD has a relatively small developer community, although we hope it will grow as interest grows. At this point, the number of active developers is around 6 to 10, but there is substantial design discussion on our mailing list from a far larger group bringing experience from a variety of other platforms. There is also substantial interest in cross-platform portability for applications. 

At this point, contributors should have strong kernel and application security experience; in particular, we’re interested in developers who have past experience with trusted operating systems. As the project progresses and the kernel component reaches greater maturity, we’ll be interested in application programmers to help adapt existing applications for any changes in the trusted OS environment. As there is interest in portable APIs, we hope to be able to leverage other work in this area. 

Q: What are TrustedBSD’s goals for the next six months/year/unspecified dates in the future? 

A: As with all operating system scale projects, TrustedBSD will involve an iterative development process, with features becoming available over time. We feel that the current goals can be broken down conceptually into a number of implementation phases: 

• In the first phase, the goal is to introduce TrustedBSD security changes directly and within the context of the existing FreeBSD security implementation. This phase is well underway, and will result in a usable, secure system. However, the goal of this phase is to gain both development and operational experience: while some of this code will be integrated into the base system (some already has been – support for extended file attributes, and ACL interfaces), the majority will be made available via separate distribution mechanisms.

• In the second phase, the goal is to generate stronger security infrastructure in the kernel, aiming for greater generalization and modularity. In this phase, we build on the practical development experience of the first phase, having a strong understanding of the requirements based on having implemented the same features. One of the main targets of this phase is a generalized kernel authorization framework, allowing modular and pluggable security extensions to be introduced without substantial source modification (as required in the first phase). This will allow both the TrustedBSD project, and third-party developers and vendors, to distribute FreeBSD security enhancements with substantially lower development and maintenance costs.

The first phase is well underway – some interfaces (in particular, for ACLs) were included in 4.0-RELEASE, and supporting infrastructure such as extended file attributes has been committed to the FreeBSD 5.0-CURRENT development branch. Support for fine-grained least privilege (a variant of POSIX.1e capabilities), native ACL support in the FFS file system, fine-grained event auditing, and MLS MAC policy support is in the works. We expect to complete the implementation of this phase within 12 months; however, one aspect of trusted operating systems is strong supporting documentation, covering design, implementation, and operation. Completing this documentation may take additional time.

The second phase is currently under design – we’ve gained substantial experience from what we have completed of the first phase, and are ready to begin discussing the stronger and more general abstractions of the second phase. By the time this interview is published, drafts for at least the modular authorization system will be under discussion on the TrustedBSD mailing lists. So far, the results look promising. We hope to have a first prototype of the generalized authorization system completed within six months, and to begin migrating the security models implemented in the first phase over to this mechanism shortly thereafter.

We hope to present a number of papers detailing aspects of the TrustedBSD project at the BSD Con 2000 conference in October.

What’s New with Linux-Mandrake 7.0? An interview with Steve Schafer of Macmillan

By Jeffrey Carl

Boardwatch Magazine, July 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

This column was originally supposed to be about supporting FrontPage users on Unix, but Boardwatch’s editors edited all the swear words out of my 2,000-word column and were left with a total of 12. So instead, I turned to Macmillan’s Linux-Mandrake 7.0.

The Basics

Linux-Mandrake 7 is earning rave reviews all over the place right now, and with pretty good reason. Linux-Mandrake is heavily optimized for Intel Pentium-family chips, but should run fairly well on AMD or Cyrix-based systems. System requirements are listed as any 586 (Pentium-equivalent) or higher CPU, 16 MB RAM (64 MB recommended), 500 MB disk space minimum (1 GB+ recommended), and a 3.5″ floppy drive or CD-ROM drive for installation.

Linux-Mandrake 7 is available free via FTP from http://www.linux-mandrake.com/en/ftp.php3. It comes in standard FTP install format, as well as a downloadable ISO 9660 CD image. MandrakeSoft itself sells distro CDs, and Macmillan also distributes and sells three shrink-wrapped versions of Linux-Mandrake 7.0: Complete, Deluxe and Secure Server (see below in the interview for more information on what the different packages contain). The Complete package retails for $29.95 (although I found mine at Wal-Mart for $25); Deluxe sells for $55-$60, and the Secure Server package sells for about $100.

The Distribution

The distribution is pretty up-to-date (at least as of this writing). It includes kernel 2.2.14, glibc 2.1.2, XFree86 3.3.6, and more. All of the defaults here are stable – you won’t find anything too bleeding edge in the included components, so if you want to wreck your system with the development kernel or XFree 4.0 beta du jour, you’ll need to go out and download it for yourself.

Most of the clamor for Linux-Mandrake 7 has been over the reports that it’s the easiest Linux distribution ever for a new user to install and set up. Mandrake’s efforts in this direction are focused on three new (or at least fairly new) tools. 

The first and second are DrakX, first introduced with Mandrake 6.1, and DiskDrake. DrakX is the new all-singing, all-dancing, all-graphical installation tool – and it’s certainly the most user-friendly install tool that I’ve seen for a Freenix (an install tool with themes?). It handles all of the basics you’d expect, as well as taking a good stab at autoconfiguring the LILO boot loader and X Windows (plenty of drivers for popular video cards are included). DiskDrake is the graphical disk-partitioning tool for use with DrakX, although you can also use good old fdisk or cfdisk. 

The third new tool is DrakConf, which bundles in LinuxConf as well as its own versions of hardware, software, networking and security tweaking tools. For a user or administrator new to the platform, DrakConf keeps a lot of common tools very handy. Also available are RPMdrake (for RedHat Package Manager installations) and Lothar, a nifty program for autoconfiguring sound cards and other audio-related devices.

So, what’s not to like? With all of the goodies in this distribution, Mandrake has its sights squarely focused on its intended consumer – and that consumer probably isn’t you. Linux newbies and desktop/workstation users are the target market here, and it isn’t evident that much has been done to optimize or enhance Linux-Mandrake’s performance specifically as an Internet server.

However, that last assessment may be a little too harsh (or too much to expect). Regardless of what type of user you are, good administration tools help everybody; and the Pentium optimization certainly helps if you’re running an Intel-based server. And to some extent, every Linux distribution uses some version of the same kernel, so there isn’t much you can expect Mandrake to do to increase the performance of the Linux TCP stack. 

The Questions

To find out more about the appeal of Linux-Mandrake 7 for ISP/webhosting users, I asked Steve Schafer, the Senior Title Manager for Linux at Macmillan.

Jeff: Can you give a brief history of Linux-Mandrake?

Steve Schafer: Linux-Mandrake started as a volunteer project to help make the standard Red Hat Linux distribution more robust and yet more user friendly. Approximately two years ago the principals of the project created a company, MandrakeSoft, to oversee and lead the project into a retail venture. Version 5.3 of Mandrake was built from the Red Hat 5.2 distribution and was distributed through outlets like LinuxMall. The distribution continued to gain a following for power and ease of use. Versions 6.0 and 6.1 went on to win the “Editor’s Choice: Product of the Year” award at LinuxWorld in 1999, and various kudos for being a “better Red Hat than Red Hat.” Mandrake 7.0 raises the bar even further as you will see below.

J: Why should I use Linux instead of another Freenix like FreeBSD or OpenBSD? 

SS: One word: support. Linux remains on the cutting edge of technology, with solid support both online and paid, the latter provided through retail purchase or a support contract through a dedicated Linux support organization like Linuxcare. The interface and standards continue to evolve, presenting a clean, more user-friendly environment for a technical OS.

J: What’s new with Linux-Mandrake 7? 

SS: The main differences in 7.0 come in the way of installation and configuration. The new graphical install (DrakX) can be tailored for each user’s tastes and technical level – the “Recommended” install provides for less decision making on the part of the user and makes various assumptions about the target machine to make the install fairly seamless, while the “Expert” install allows the user full control over how the OS will be installed. Several new customization utilities allow the user to quickly and effectively change the configuration once the OS is installed: changing the interface and security options, adding and configuring hardware, and more. Mandrake continues its tradition of offering more base utilities and pre-configured desktops as well.

J: How does Linux-Mandrake perform for Internet serving tasks in relation to other Linux distros? Is the differentiation something other than performance (e.g., ease of use, pre-installed apps, etc.)?

SS: By and large, most Linux distributions perform the same types of serving tasks since they are all cut from basically the same mold. The difference Mandrake makes comes in two areas: customization and Pentium optimization. See the other section(s) for info on the customization (both what Mandrake does automatically, and what tools exist for the user). As for Pentium optimization: Mandrake recompiles ALL packages with Pentium optimization. Although the performance increase for workstations is slight, it is much more pronounced in the server environment. When a server is loaded with several users all utilizing resources, the faster the server can complete tasks, the better. (Of course, this is only true if the server’s processor is Pentium [or derivative] based.)

J: What are Linux-Mandrake’s different products, and whom are they intended for?

SS: Macmillan offers three distinct Linux-Mandrake products, geared toward specific Linux customers:

Complete: This value-packed product is geared toward the beginning user, or the user who is taking Linux for a “test drive.”  Macmillan adds the following components to the base Linux-Mandrake OS:

• PartitionMagic and BootMagic – for ease of installation on a Windows machine for dual-booting between OSes. (Recent research showed that 70% of retail Linux purchasers install Linux in a dual-boot configuration.)

• StarOffice 5.2 – This powerful office suite provides word processing, spreadsheet, presentation, and graphics functionality. Compatibility with Microsoft products ensures maximum data transportability.

• Linux Library – 3500 pages of additional Linux documentation (in electronic form) from the Macmillan imprints Que and Sams.

Deluxe: This six-CD set provides the most Linux content for the money. Geared to address the professional user, this product provides more of everything Linux. In addition to the base Mandrake OS and its sources, these four additional CDs are provided:

• Contributors CD: Close to 900 additional utilities, applications, documentation, etc. from the Linux community.

• Contributors Source CD: Source code for the Contributors CD.

• Applications CDs (1 & 2): Over 30 commercial and demo applications, including StarOffice, WordPerfect for Linux, etc.

[Jeff’s Note: The Commercial Applications CDs (available with the Deluxe version) include full and limited versions of some very tasty goodies, including Acrobat Reader 4; Executor (a MacOS emulator); and IBM’s JDK, Lotus Notes and ViaVoice (a voice-recognition word processor). It also has demos of the games Civilization: Call to Power, Railroad Tycoon II and Myth II; an evaluation version of VMware; and WordPerfect. True, almost all of this is available via download – but if you’d like it all in one easy-to-install package, this is a mother lode.]

Secure Server: For the professional “with a purpose,” this product offers the base Mandrake OS bundled with a secure Web server for starting an e-commerce operation. Additional utilities and a custom Linux Library (over 4200 pages of electronic docs from Que and Sams) round out this product.

J: What are Linux-Mandrake’s particular strengths? What are its weaknesses?

SS: Mandrake’s strengths include:

• Red Hat compatibility

• Pentium optimization

• Cutting edge components (latest kernel, XFree, etc)

• Scalable graphical installation

• Pre-configured desktops and user interfaces

• Comprehensive graphical configuration tools

• Additional packages, drivers, utilities, apps and more

Honestly, it’s hard for me to think of weaknesses regarding Mandrake. This distribution has the depth to satisfy the hardcore user, but the simplicity (install and customization) to engage the beginner as well. Linux overall does have weaknesses, mostly surrounding the implications of open source and staying ahead of the technology curve. For example, there isn’t one multi-million-dollar firm behind the Linux distros (as Microsoft is behind Windows), and the kernel has just begun to support technologies like USB. Keep in mind that these disadvantages exist across the spectrum of Linux, not just with Mandrake.

J: Why should I use Linux-Mandrake instead of another distro for my web, mail or dialup authentication server?

SS: All the reasons stated above, particularly the ability to support the latest hardware, ease of customization, bundled components, etc. This is true across the board of applications (web, mail, dialup authentication, etc).

J: What’s the recommended hardware for a Linux-Mandrake Internet server? Is there anything that works particularly well?

SS: The answer really depends on the workload and user load the particular box will experience. One prominent ISP (CiHost) uses Red Hat for their Web hosting servers, which are dual-Pentium boxes. Recently LinuxWorld (February in New York) used Mandrake for their registration system, probably on high-end Pentiums using dumb terminals (low-end boxes) for input.

When considering Linux-Mandrake, it’s important to consider running on a Pentium or derivative due to the recompilation using Pentium optimization.

J: How easy/difficult is it to migrate from another Linux distribution to Linux-Mandrake? Is there anything I should know?

SS: It depends on what you are migrating from. Although most Linux distros are pretty much the same, some vary considerably in customization, installed tool sets, libraries, etc. Like migrating between Windows versions, you want to ensure that your added tools (applications, utilities, etc) are compatible with the new system. Red Hat and derivative Linux distros use some different libraries than other distros, requiring different base level support for some programs.

J: On the client side, can ISPs support Linux-Mandrake dialup users easily? Why?

SS: You should be able to with no problem. Linux speaks the Internet natively (TCP/IP) so utilizing a PPP dialup is almost second nature to the OS. It’s a bit tougher [for the user] to set up than [for the ISP to] support really … the ISP shouldn’t have to change a thing. 

The Conclusions

As Linux distributions go, Linux-Mandrake 7.0 is very good. Aside from some strange gaps in documentation (for example, my Macmillan Linux-Mandrake Complete “User Guide and Reference Manual” didn’t even mention DrakConf), it seems pretty solid.

For the ISP/web user, there are two groups that will find it particularly compelling. First are new users/admins who want the easiest, most pain-free installation and setup. Second are users with Pentium-based machines, who may see impressive speed gains over other distributions. Note: I wasn’t able to throw enough machine load at my test server to test this out reasonably; if someone out there does, please let me know the results. For anyone, though, Linux-Mandrake 7 is certainly worth a look.

Jabber and Zope – Can Open-Source Beat the Most Popular Commercial Products?

By Jeffrey Carl

Boardwatch Magazine, June 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

“Jabber” and “Zope” sound like two characters from “Alice in Wonderland.” Or maybe characters in Quake II (“jabber saw the light from zope’s BFG-10K”). Either way, what they really represent are attempts by the open-source community to beat commercial offerings in emerging critical Internet areas: instant messaging (IM) and web application development. 

Can they do it? On one hand, projects like Apache and Linux or *BSD have shown that open source software can lead the industry in performance. On the other hand, both projects face the perennial Achilles’ heel of open-source projects – lack of commercial distribution channels and a corresponding difficulty in gaining enough market share to become a standard. Let’s take a look at both projects, how ISPs can use them, and whether they can beat the big commercial packages.

Jabber

Once upon a time, there was Mirabilis ICQ (“I Seek You”). And lo, it was good. Then it was bought by AOL and became the basis for AOL Instant Messenger (AIM). Then Microsoft’s MSN and Yahoo! got into the act with similar programs that could communicate with the zillions of ICQ/AIM users. Then AOL said, “Oh no you don’t.” Then Hell froze over and Microsoft started complaining to AOL about “open standards” and MSN et al. whined to Congress about it…

Right now, IM is a mess. AIM, with an astonishing 45 million users, holds tight to its grip on ICQ, keeping its service proprietary. Few people are inclined to shed tears for Microsoft because it’s being disadvantaged by proprietary systems. Still, many users are seeking an IM program that, like J.R.R. Tolkien’s “One Ring,” would “bind them all together.”

What is it?

There are plenty of open-source ICQ clones (like LICQ [www.licq.org], popular on Linux). Jabber is quick to point out that it isn’t just another ICQ clone. As the project’s FAQ page says: 

Put simply, Jabber is an open source, cross-platform, completely extensible, end-all to Instant Messaging as we know it. Never again will you have to worry about finding the right client to talk to your friends, nor will you have to concern yourself with having three or four different clients open so that you can chat with all of your associates. And all you have to do is pick the client that you like for the platform you want!

Jabber is genuinely different from other instant-messaging products, although the differences are “under the hood.” A Jabber user gets an account with a Jabber server, and the server then takes care of all the “heavy lifting” – communicating with other servers and translating the protocols of other IM systems into Jabber’s own XML (eXtensible Markup Language)-based messaging protocol. In theory, server modules can be written to parse any other IM protocol, and connectivity to those systems is then automatically provided to the client. 

How does it work?

Joe User gets an account with a Jabber server. He also gets accounts with AIM and Yahoo! Pager. His Jabber server stores his account info and “buddy lists” (roster) for each IM service, allowing him to access all of them through his one server and one client. Assuming that the server has the requisite protocol “transports,” he can then communicate with users of any of these services from his Jabber client. 

The native Jabber protocol is an XML stream over a TCP socket, allowing for a number of possibilities for how and over which port it can be sent (a bonus for its potential ability to evade firewalls, critical to the success of any IM protocol). The Jabber client communicates only with its server, which then handles all of the protocol-translation and communication issues. 
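To make the XML-stream idea concrete, here’s a minimal Python sketch of what a Jabber client’s side of the conversation might look like. Treat it as illustrative only: the server and user names are made up, authentication and reply parsing are omitted entirely, and the only assumptions are the basic conventions (clients connect on port 5222 and speak the jabber:client namespace).

# Illustrative sketch: open a Jabber XML stream over TCP and send one
# message stanza. A real client must also authenticate and parse replies.
import socket

SERVER = "jabber.example.org"   # hypothetical server
PORT = 5222                     # conventional Jabber client port

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((SERVER, PORT))

# The whole session is one long XML document, opened by a stream header...
sock.send("<stream:stream to='%s' xmlns='jabber:client' "
          "xmlns:stream='http://etherx.jabber.org/streams'>" % SERVER)

# ...within which individual stanzas flow as they occur.
sock.send("<message to='friend@jabber.example.org'>"
          "<body>Hello from a raw socket!</body></message>")

sock.close()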

What’s so great about it?

Obviously, the ability to communicate with any IM protocol is a huge plus, potentially making Jabber the “holy grail” of IM platforms. Also, its reliance on the server model simplifies matters for users significantly.

While it promotes unity among IM protocols, Jabber also promotes competition within its own ranks, and Jabber users can choose from a number of different Jabber clients, each with their own foci, strengths and weaknesses. Its open standard and protocol means that anyone who would like to create a new client or offer new features can do so with (relative) ease.

Also, Jabber is intended to be cross-platform. Work on clients is reportedly underway for *nix (command-line and X Windows), Win32, MacOS, web browsers, and even the Newton.

What still needs work?

As of this writing in mid-April, just about everything. Jabber is still very much a work in progress (its server has yet to reach 1.0 status or implement working transports for other IM systems), and it remains to be seen a.) how well its compatibility with other IM protocols will work, and b.) how well it will work by itself. Most of the clients are still missing some user-interface niceties, and a working standard for security/encryption of Jabber messages has yet to be implemented.

Jabber is like Rambus – one of those long-shot bets which will either take over the world or be a perennial also-ran. It has all of the right ingredients, but only time will tell whether it fulfills its promise. 

A critical hurdle is the same interoperability issue which has plagued other IM clients: will AOL try to block it? AOL’s ICQ users are such a tremendous asset to AOL that it’s hard to imagine that AOL won’t try to block Jabber out if it gains much share of the market. The jury is still very much out on Jabber’s eventual success … and whether, if it can’t communicate with others, it can succeed on its own.

Zope

Zope (the “Z Object Publishing Environment”) is … well, it’s hard to describe in a hurry. But it’s very cool.

What is it?

It describes itself as an “application server,” and its goal is to provide a framework for creating and maintaining large websites with advanced functionality like site users, feedback and content databases. Elements of Zope (currently at version 2.1.6) compete both with higher-end commercial software like WebObjects or Cold Fusion and with open-source site spinoffs like slash (the source code for the backend of slashdot.org).

Zope is portable, since it’s written in Python (with some performance-critical parts in C); currently, it runs on Windows NT and nearly any Unix. Installing and running Zope requires that you have Python (http://www.python.org) version 1.5.2 or higher installed on your server. It requires Python to be built with threads support; if you’re installing Python to use Zope, be sure to run the pre-installation configure script with the --with-thread option.
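
If you aren’t sure whether the Python you already have was built with threads, a one-minute check will tell you. This is just a convenience sketch; thread is Python’s standard low-level threading module, whose presence depends on that configure option:

# Quick sanity check before installing Zope: Python must be built
# with thread support, which shows up as an importable "thread" module.
try:
    import thread
    print "Thread support found - this Python should be able to run Zope."
except ImportError:
    print "No thread support - rebuild Python with --with-thread."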

Zope began life as the Bobo Web Object System, developed by a company called Digital Creations (http://www.digicool.com). The Bobo framework was open-sourced, while the company sold a commercial application server based on it, named Principia.

Seeing that the application server market was dominated by big names whose marketing muscle Digital Creations could never compete with, the company decided in December 1998 to combine Principia and Bobo, rename it Zope, and open-source the whole shebang. Zope began to gain recognition and users, while Digital Creations shifted its business model to providing commercial support and consulting for Zope.

How does it work?

The heart of the system combines an ORB (Object Request Broker) with an object database, allowing pages to be simply and dynamically created. The main components of Zope are: 

• the ORB, which extracts information from the database

• the object database itself 

• the publisher, which interfaces with a web server

• the template system, for dressing up content in your customized page templates

• the management structure, which handles authoring/access permissions 

Here’s the short, short version: you download Zope to the server its content will be served from. Zope is available as a Windows executable, in RPM format for Linux, or as tar-gzipped source archives for Solaris, Linux or any other *nix. For a quick start, just run the Zope setup script, then run the start script it builds, and away you go. Zope launches its own web server on the port of your choice (default 8080), although you can easily configure it to use Apache, IIS or another server.

You can then use a web browser to view pages or go to its “manage” interface. Using these controls, you can create new folders and add content to them. You also use the manage interface to assign basic functionality (who can access or alter the contents, HTML templates and error messages, etc.), and it provides “approval” and “undo” functionality as well.

The content comes in the form of … well, lots of things – users, images, or other items usable by the database. Primarily, though, it will be DTML (Document Template Markup Language) objects, which can contain XML, DTML or good plain old HTML. DTML works like server-parsed HTML on steroids, offering very advanced functionality. An example might look like this:

<!--#var my_header-->
<!--#if "AUTHENTICATED_USER=='Jeff'"-->
<h3>Here is the customer list:</h3>
<table>
<!--#in showCustomers-->
<tr>
<td><!--#var cust_name--></td>
<td><!--#var cust_type--></td>
<td><!--#var acct_balance--></td>
</tr>
<!--#/in-->
</table>
<!--#else-->
<em>Sorry, you can't read this!</em>
<!--#/if-->
<!--#var my_html_footer-->

And there, you have a complete, well-formatted HTML page. The DTML page above calls in an external HTML page header that you uploaded named my_header, checks whether the user has permission to view the table, and if so calls “showCustomers,” a SQL query object you defined earlier, and displays the results in a table. If the user viewing the page isn’t the one it expected, it prints an innocuous HTML message instead. For everyone, it then prints the external HTML footer you named my_html_footer.

All of this can be easily done with CGI, but learning the DTML syntax is significantly easier for most people than learning Perl or C. Furthermore, the ability to integrate this into a sitewide management structure is a big plus.
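
For comparison, here’s a sketch of roughly the same page as a hand-rolled Python CGI script. The header and footer files and the customer data are stand-ins, since CGI gives you none of Zope’s plumbing for free:

# Roughly the same page as a hand-rolled CGI script (a sketch for
# comparison only). The header/footer files and customer data are
# stubs; a real script would also need its own SQL plumbing, which
# Zope's showCustomers object provides for free.
import os

def get_customers():
    # stand-in for the "showCustomers" SQL query
    return [('Acme Corp', 'reseller', '$1,204.00')]

print "Content-type: text/html\n"
print open('my_header.html').read()         # hypothetical header file

if os.environ.get('REMOTE_USER', '') == 'Jeff':
    print "<h3>Here is the customer list:</h3>"
    print "<table>"
    for name, ctype, balance in get_customers():
        print "<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % \
              (name, ctype, balance)
    print "</table>"
else:
    print "<em>Sorry, you can't read this!</em>"

print open('my_html_footer.html').read()    # hypothetical footer file

Even this toy version needs explicit plumbing for headers, authentication and data – exactly the chores DTML hides.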

So, essentially, with Zope you don’t so much build pages of a site as you build objects of content – FAQs, stories, other information – which get stuffed into a database. You then create folders and pages that are DTML frameworks which can include one simple content object (for a simple, non-dynamic page) or make complicated queries (the last five news objects submitted, all objects submitted by a certain author, the most recent graphic of a certain type, SQL queries, etc.). If you’re unfamiliar with the process, it takes a while to “wrap your head around it,” but it opens up a tremendous range of possibilities.

A very readable introduction that lays out the functional aspects of using Zope (as well as giving a much better explanation of DTML examples like the one above) can be found at http://www.devshed.com/Server_Side/Zope/Intro/. An in-depth look focusing on Zope’s database model is at http://webreview.com/pub/1999/03/05/feature/index2.html. There is an excellent library of pre-built and member-contributed projects that provide example code to learn from at http://www.zope.org/Products, as well.

What’s so great about it?

First, Zope provides an interface for adding and administering the site’s content which is accessible through a web browser. It allows for relatively complex security/permissions setups, which allow the administrator to set access privileges for various groups of contributors (administrators, writers, designers, etc.). Zope makes use of the emerging WebDAV (“Web-based Distributed Authoring and Versioning” [www.webdav.org]) standard to collaboratively manage and handle versions of pages or objects on a site. This also allows for good integration with existing software which supports WebDAV, like Microsoft Office 2000 or Internet Explorer 5 on the client side or Apache with mod_dav on the server side.

Zope provides its own transactional object database, which means that you won’t need to add another database (e.g., MySQL, Oracle, MS SQL, etc.) to provide a back-end. One of the primary benefits of using a transactional database (one which keeps track of additions and changes as discrete “transactions”) like the one Zope provides is that it allows you to “undo” changes if you need to. The database is optimized to match the needs of most web databases, which are frequently queried but only occasionally have new data written to them. The Zope database is also easy to back up, and is designed to work with “simple URLs” (unlike those created by WebObjects).
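
The concept is simple enough to sketch in a few lines of Python. To be clear, this is an illustration of what “transactional” means, not Zope’s actual database code: every change is appended to a log rather than overwriting anything, so “undo” just discards the most recent entry:

# Conceptual sketch of a transactional store (illustration only, not
# Zope's implementation). Changes are kept as transaction records, so
# "undo" simply removes the last record.
class TransactionalStore:
    def __init__(self):
        self.log = []                   # the transaction log

    def set(self, key, value):
        self.log.append((key, value))   # record, never overwrite

    def get(self, key):
        # replay the log; the latest transaction for a key wins
        current = {}
        for k, v in self.log:
            current[k] = v
        return current.get(key)

    def undo(self):
        if self.log:
            del self.log[-1]            # discard the newest transaction

store = TransactionalStore()
store.set('title', 'Home Page')
store.set('title', 'New Home Page')
store.undo()
print store.get('title')                # prints: Home Page

Zope’s real database is far more sophisticated, but its undo feature works on this same principle.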

One of Zope’s strong features is interoperability with existing SQL databases. You can set up a Zope database connection to an existing database and apply SQL queries from a Zope DTML page to that external database. Zope also makes good use of XML, allowing interoperation with XML data sources. The Zope database stores a great deal of information besides just your primary content objects; it also keeps track of user postings and threaded discussions, user logins and permission settings, and allows users to customize the way they view the site.
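
Under the hood, this kind of connection boils down to ordinary SQL issued from Python. Here’s a hedged illustration using the generic Python DB-API style; the driver module, connection string and table name are all hypothetical, and Zope wraps this sort of call in its own connection and SQL-method objects:

# Sketch of the underlying idea: issuing SQL from Python against an
# existing database. "mydbmodule" is a stand-in for whatever DB-API
# driver your database uses; Zope hides this behind its own objects.
import mydbmodule                      # hypothetical DB-API driver

conn = mydbmodule.connect('customers_db')      # hypothetical DSN
cursor = conn.cursor()
cursor.execute("SELECT cust_name, cust_type, acct_balance FROM customers")
for row in cursor.fetchall():
    print row      # in Zope, each row would feed a DTML template
conn.close()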

Like any good open-source product, Zope makes very good use of open standards. Aside from WebDAV, SQL and XML, Zope supports HTML 4 and CSS, HTTP 1.1, FTP, FastCGI, DOM, LDAP and other goodies. 

Lastly, it was announced in late March that Digital Creations would be open-sourcing its ZEO (Zope Enterprise Option), which can turn Zope into a distributed system. Allowing Zope to scale to multiple machines should make it a feasible choice for high-demand sites which need to be served from multiple machines. Zope has also announced interoperability with Microsoft’s SOAP (Simple Object Access Protocol) sort-of-standard. Whether this will become a “big deal” remains to be seen.

What still needs work?

At first glance, Zope is just plain difficult for most people to understand. The Zope feature tour (http://www.zope.org/Tour/ar01.html) is a good start, and the Zope Guides (http://www.zope.org/Documentation/Guides) are helpful (if understandably incomplete), but some more “hand-holding” higher up in the online documentation might convince more users to “get their feet wet” with Zope.

Zope is certainly on the right track with the emerging standards it uses, but some of these standards will take time to build the user base to allow Zope to really shine (for example, when WebDAV publishing will work from more applications than just IE 5 and Office 2000). The WebDAV standard itself isn’t fully fleshed out yet, and I have to admit that any standard which Microsoft has taken a strong interest in makes me worry a bit.

Lastly, Zope simply isn’t for everyone. If you have a relatively small or simple site, the overhead of learning and serving Zope will probably be a waste of time. If you’re willing to put in the time, using Zope from the get-go will help you scale your site if you need to – but in many cases, using Zope is swatting a fly with a sledgehammer.

The Verdict: Jabber, Zope and You

What can Jabber and Zope do for you? Well, in the short term, you can offer support for them and promote them to your customers. With Zope, ISPs can offer support and establish a presence in the small (but growing) market for Zope site-hosting.

With Jabber, ISPs can run their own local Jabber servers. In the short term, you can supply your NOC with a Jabber nickname through which customers can contact it; in the long term, if Jabber meets its promise, your local Jabber server can be a great way to tie users to you (to be Machiavellian, your Jabber server will hold all of their IM contacts and make it a pain to migrate elsewhere).

Will either of these projects ever gain the ubiquity of Apache, sendmail or Linux/BSD? Probably not. But Jabber holds an extraordinary amount of promise, and can be an excellent value-added tool for ISPs – and it’s free – so there’s little reason not to be an early adopter. Zope will almost certainly remain a limited-interest project, but it can be of use to you for your own web sites as well as providing a differentiating service for site hosting. 

Can Jabber and Zope beat out the commercial competition? Only time will tell. But, considering the cost (nothing but your time), it’s almost a no-brainer for you to “do the right thing” and support open standards, and to investigate providing support for both with your ISP. 

Who knows? If you bet on enough long shots, one of them might turn out to be a winner.

BSDI, Walnut Creek CD-ROM Merge – Move Brings Two Biggest BSD Unix Variants, FreeBSD and BSD/OS, Under One Tent

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, June 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Confirming a deal that had been whispered about among the BSD Unix community for months, Berkeley Software Design, Inc. (BSDI) and Walnut Creek CD-ROM announced on March 9 that they would merge to form a new company under the BSDI name. The significance of the announcement is that it brings the vendor of the leading commercial BSD system (BSDI’s BSD/OS operating system) together with the primary distributor of the largest free BSD variant (FreeBSD), signaling a major push toward unity among the BSD Unix flavors.

The move also opens the door to more intermingling of code between the two OSes, since some of FreeBSD’s key developers were employed by Walnut Creek and will now be employed by BSDI, and members of BSDI’s BSD/OS engineering team may be contributing code to FreeBSD. In a separate move, Walnut Creek’s Slackware Linux distribution will be spun off into a separate company, run by longtime Slackware chief Patrick Volkerding.

In the short term, users of FreeBSD and BSD/OS won’t see much change to the current products (FreeBSD 4.0 and BSD/OS 4.1). BSDI will continue to offer their commercial BSD/OS and provide paid support and consulting, while FreeBSD will remain free. However, BSDI will soon offer commercial support options for FreeBSD – potentially paving the way for FreeBSD’s entry into the enterprise market in the same way that commercial support from Red Hat, LinuxCare and others did for Linux. Just as important to the enterprise market, BSDI CEO Gary Johnson said that there are plans to implement a “BSD engineer” certification program by the end of Q2 2000.

In the longer term, the move offers a wealth of possibilities by placing the resources of a well-funded corporate entity behind the promotion of BSD Unix and encouraging commonality between the OSes. BSDI has said that it will work to develop a common Application Binary Interface (ABI) between FreeBSD and BSD/OS, enabling an application written for one OS to run on the other without any modification. While specifics are still very much up in the air, it is likely that some elements from each OS will find their way into the other over the course of the next few OS releases.

Unconfirmed possibilities include extending FreeBSD’s hardware support with the addition of some BSD/OS code, and making user-friendly elements like the FreeBSD “ports collection” software installation scheme available as an optional add-on to BSD/OS. Still, BSDI is adamant that the merger won’t make users of FreeBSD or BSD/OS sacrifice any of the qualities they treasure in their OS of choice.

Most exciting for many *BSD users is the likelihood that many of the “old school” computer scientists who were part of the UC Berkeley Computer Science Research Group (CSRG) that developed BSD Unix and are currently affiliated with BSDI will be brought into the BSD community. While BSDI hasn’t firmed up its message about which OS to push to what constituency, it is likely that FreeBSD will continue to appeal to the open-source community while BSD/OS will be aimed at commercial and enterprise users. 

Why was the merger done? To paraphrase Benjamin Franklin, the BSDs must all hang together, or surely they will hang separately. BSDI says that, between the two operating systems, there are at least two million BSD servers running out there. BSDI cites their collective market share as 15 percent of all Internet sites, and says BSD is used by nine out of ten ISPs or NSPs. Between the two OSes, customers include Microsoft Hotmail (which reportedly tried to convert to Windows NT but reverted to FreeBSD after numerous problems), UUnet and Yahoo! (which also made an equity investment in the new BSDI).

Still, the public presence of BSD Unix outside these groups is nearly nil, while Linux has garnered massive mindshare. “BSD is pervasive throughout the Internet … but the world at large doesn’t know it,” said BSDI Marketing Manager Kevin Rose. While this can be attributed to a large number of factors, it lies largely in the fact that the commercial BSD/OS has never had mass mindshare, while the free BSD variants with more users have never had the money to undertake real promotional campaigns.

While BSDI won’t discuss what portion of those two million servers run FreeBSD and what portion run BSD/OS, it is relatively certain that the majority are running FreeBSD – which has never had the funding to promote its offerings on a level equivalent to what Linux distribution makers have done. The merger will likely put significant cash resources behind the promotion of BSD Unix for the first time. BSDI CEO Johnson said, “everything we’re doing is for the betterment of BSD … the idea is to promote not BSD/OS, not FreeBSD, not NetBSD, not OpenBSD, but BSD.”

FreeBSD Chief Evangelist Jordan Hubbard said that the merger also leaves the door open to cooperation with the other free BSD variants, NetBSD and OpenBSD. BSDI will be working to bring more third-party applications to BSD, as well as promoting publication of BSD books and developing user groups. 

The fact that many of the technical details of the merger are still undecided leaves the door open to speculation that philosophical differences (or developer egos) may complicate the implementation of the full possibilities of the merger. Still, the merger announcement is a true turning point in the history of BSD Unix, and a much-needed encouraging sign for the platform.

Webmin: Easy Freenix Administration through a Web Interface

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, May 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Are you looking for an easy web-based GUI to administer Unix servers? Want to provide your Unix clue-challenged hosting customers with an easy way to administer their machines? 

If so, try Webmin (http://www.webmin.com/webmin/), a free application which allows you or your users to easily administer their Freenix system through a web interface. You can use it yourself, or you can offer it to clients with dedicated web or mail servers to do some of their own administration and take the burden off of you.

Installation and Setup

Webmin is a Perl 5 program that allows you to configure everything from user accounts to DNS and Samba filesharing and more. Webmin is free, and runs on a wide variety of Linuxes (including Caldera OpenLinux, RedHat, S.u.S.E., Slackware, Debian, TurboLinux, Mandrake, Delix DLD, Apple MkLinux) as well as FreeBSD and OpenBSD. It has been most thoroughly tested on Caldera OpenLinux, RedHat and FreeBSD, but it should run fine on other systems with potentially a bit of tweaking. 

To install, go to ftp://ftp.webmin.com, and select the most recent version of Webmin (webmin-0.78.tar.gz at the time of this writing). Unzip and untar the file, then run the included setup.sh to install Webmin. Answer a few questions about your system setup, create a username and password for yourself, select a port for Webmin to run on, and you’re ready to go. To upgrade, download the source for a new version and specify the source file’s location in the Webmin interface’s Webmin Configuration -> Webmin Upgrade option.

Webmin is modular in nature, and comprises a “core” Webmin server with a number of default modules. Each module (like Cron, BIND, Syslog, etc.) provides administration functionality for its specified service. At installation time, all default modules are installed; you can remove modules through the Webmin interface, or download new third-party modules from a link on the Webmin home page. Webmin stores configuration files for all of its modules inside directories located (usually) in /etc/webmin/modulename/. The start and stop scripts for Webmin are also stored (somewhat confusingly) in /etc/webmin, rather than in /usr/local/sbin or in the Webmin home directory. Its logs are by default stored in /var/webmin/, rather than in /var/log/webmin/.

Webmin includes its own “miniature” webserver, so you don’t need to alter your Apache (or other web server) configuration to use it. The mini server is also a Perl script (/usr/local/webmin-0.78/miniserv.pl or something similar), which runs (owned by the root user) until the process is killed. This isn’t a terribly elegant solution, and it eats up about 3 MB of RAM as long as Webmin is running, but we’re assuming that convenience is more of an issue here than absolute maximum performance.

If the idea of running a root-owned process over unencrypted HTTP scares you, you’re right. Webmin includes functionality to use Perl’s Net::SSLeay module to run its miniserver through HTTPS. If you don’t have this Perl module installed (and you’ll need to have the C libraries included with OpenSSL to get SSLeay to work), you’ll find download links and (relatively) helpful instructions for OpenSSL and SSLeay on the Webmin home page. Keep in mind, however, that setting SSLeay up can sometimes be, to use the technical term, a “major pain in the butt.”

Even better for security, you can also use Webmin’s interface to restrict the IP addresses from which Webmin can be accessed. This isn’t a foolproof setup, but it should be good enough for many system administrators.

Fun with Modules

Webmin’s interface is no-frills. It has very plain and simple graphics, loads quickly and gets the job done – a very wise choice, in my opinion. All functionality is provided in HTML tables instead of through a contrived graphical user interface.

As mentioned before, Webmin’s functionality is based on its included modules, each of which provides an interface to a specific service, application or daemon. The default installation includes all of the Webmin modules, which include such helpful items as MySQL, WU-FTPD, NFS, Apache, and Users, Groups and Passwords, plus a large number of other actions (for a complete list, see www.webmin.com/webmin/standard.html). Some default modules are OS-specific, like Linux RAID, Linux Boot Loader and Network Configuration (for Linux and Solaris). Third-party modules which are available for Webmin (including ones for QMail and VNC, and one which allows SSH logins in place of Webmin’s telnet tool) are available at www.webmin.com/webmin/third.html. There’s also a “wish list” of modules currently planned or under development at www.coastnet.com/~ken/webmin/wish.html.

Of course, having all of these modules available doesn’t mean that all of these services are available to you. Despite the fact that there’s a Samba Windows File Sharing module, for example, you’ll still need to manually download and install Samba on your machine before you can use Webmin to configure it. 

Each of the included modules is well written, and provides a wide range of functionality. For example, the Apache module allows you to set up or alter virtual hosts, set permissions, add or modify MIME types, change Apache’s process limits and more. Even better, the module-writing spec is open, allowing you to write your own modules if you have a good knowledge of Perl and the application or service that you’re writing your module for. 

One exception to this is the included Telnet Login module, which offers up a Java applet allowing you a telnet login through the web browser. This module is (surprise!) unfortunately dependent upon the Java Virtual Machine (JVM)/Just-In-Time compiler (JIT) your browser is using, and can be unreliable in some cases. For example, it runs fine with the Apple JVM/JIT used by Netscape/MacOS, but is unusable with the Symantec JVM/JIT used by Microsoft Internet Explorer/MacOS.

Overall, however, Webmin’s functions are well defined and easily accessible. If you are at all familiar with the service that you’re configuring, Webmin provides a simple point-and-click interface that absolves you from needing to remember file locations and command-line switches.

Fun with Configurability

Through the Webmin Configuration option on its index page, you can set up a variety of options, including Webmin logging, port/address for Webmin, and interface look and feel. Perhaps unsurprisingly, this is a significant improvement over command-line based programs which often leave no clues as to where their configuration files are. Also, most of these configuration options can be set manually via the command line in /etc/webmin/miniserv.conf.

Another handy feature if you’re using Webmin to administer a number of machines is its Webmin Servers Index function (available from your Webmin index page). Choose a central machine where you do most of your administration, and then fill out the forms to “register” the other servers you’re running Webmin on. Alternatively, you can set up a list of servers on one machine, then copy the files in /etc/webmin/servers/ from that server to all of your other servers and have those links automatically established. 

Every time thereafter that you click on the Servers Index button, you’ll be presented with a quick link to all of your other Webmin-enabled servers. You can specify a username and password to quickly log in to the other servers for convenience, or you can create a normal connection that will prompt you for a username and password for extra security.

An especially useful configuration option on the index page is Webmin Users. Through this, you can set up a variety of username/password logins for Webmin, and the modules that they’re allowed to access. This is particularly worthwhile if you want to set up one user for you (allowing access to all modules) and another user for your customer (only allowing access to modules for adding/removing users, Sendmail, Apache, etc.). With this setup, you can allow customers access to commonly used features but keep them from doing anything which might seriously “hose” their system. 

This isn’t a completely secure setup, however, since information about the modules that users can access is stored in a plain text file at /etc/webmin/webmin.acl (usernames and passwords are stored in /etc/webmin/webmin.users), and a user with root access could easily change this.

You Can Lead a User to Man Pages, But You Can’t Make Them Think

Webmin provides a great deal of functionality in the modules it includes; but what it doesn’t provide is help in understanding them. This is almost certainly too much to ask from a free admin program, but it does limit Webmin’s usefulness in some ways (at least for users who are not already very familiar with Unix). For example, it can allow a user to enable or disable the chargen service or edit /etc/fstab, but it provides no information about what those things are, or why you might (or might not!) want to change them.

While a truly novice-friendly administration interface may be too much to ask for, clickable keywords with glossary listings probably aren’t. The lack of documentation and help defeats one of Webmin’s primary benefits: the ability for a Unix novice to easily administer their system. While Webmin certainly aids new users by removing the burden of needing to know command-line options, it won’t help them configure Apache options if they don’t know what “Maximum requests per server process” means. Novice users are potentially one of Webmin’s largest markets, and it would be a shame if a future release doesn’t provide explanatory text for its options.

Still, this is a very forgivable gripe for a program which still isn’t even at release 1.0. What isn’t forgivable, however, is Webmin’s severe lack of documentation about itself, what it does, and how it does what it does. While this won’t deter the experienced system administrator, it limits how useful Webmin can be to administrators who would like an explanation of what they’re doing to their system, but don’t have the skills or knowledge to examine Webmin and its modules closely.

On the positive side, the Webmin site answers many frequently asked questions, and each built-in module also contains its own help information. A Webmin mailing list (frequently posted to by the authors) is also available; subscription information and a searchable archive are available from the Webmin home page. Even better, a Webmin Help option is always available from the index screen. Unfortunately, the help that is available appears to have been written as an afterthought, and the regexp searches that Webmin appears to do when you’re looking for help aren’t always very useful.

The installation doesn’t even include a documentation directory or man page, and users are left to figure out for themselves how the system works and what goes on. Most of the information I managed to gather about Webmin’s internal workings was from reading the source for the installer shell script and the Perl code of the individual modules. If you’re like me (and I feel bad for you if you are), you will want lots of documentation about any third-party tool that runs as root.

Conclusions: Good, But Not Perfect

Is Webmin worth downloading, installing and trying? Absolutely – it offers excellent features, and the price (none) can’t be beat. Is it worth deploying for your technical support staff or customers? It depends on whether you’re willing to accept its limitations (and potential security or system integrity risks).

Nonetheless, Webmin has tremendous potential to provide a great web interface for Unix control. If your needs match its strengths and you aren’t too concerned about its weaknesses, then it’s something you should add to your administration arsenal right away. Even if it doesn’t meet your needs now, it certainly is a tool worth watching for the future.

Tracing the Lines: The Definitive Guide to Traceroute

By Jeffrey Carl

Boardwatch Magazine
Boardwatch ISP Guide, May 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Dig through any Internet engineer’s “toolkit” of favorite utilities, and you’ll find (probably right under the empty pizza boxes) the traceroute program. Users have now jumped on the bandwagon, using traceroute to find out why they can’t get to site A or what link is killing their throughput to site B.

Traceroute uses information already stored in each packet’s header to query each hop along the path to a specified host. With it, you can find out how you get to a site, why it’s failing or slow, and what might be causing the problem. Traceroute seems like a simple and perfect tool, but it can sometimes give misleading answers due to the complexities of Internet routing. While it should never be relied on to give the complete answer to any question about paths, peering or network problems, it is a very good place to start.

Traceroute, in its most basic form, allows you to print out a list of all the intermediate routers between two destinations on the Internet. It allows you to diagram a path through the network. More important to IP network administrators, however, is traceroute’s potential as a powerful tool for diagnosing breakdowns between your network and the outside world, or perhaps even within the network itself.

The Internet is vast and not all service providers are willing to talk to one another. As a result, your connection to your favorite web or FTP site is often grudgingly left to the hands (or fiber) of a middleman – perhaps your upstream, or a peer of theirs, or someone even more remote than that. When there is performance trouble or even a total failure in reaching that site, you might be left scratching your head, trying to figure out who is at fault once you’ve established that it’s not a problem within your control.

The traceroute utility is a probe that will enable you to better determine where the breakdown begins on that path. Once you have some experience with the program, you’ll be able to see when performance trouble is likely a case of oversaturation of a network along the way, or when your target is simply hosted behind a chain of too many different providers. You will be able to see when your upstream has likely made a mistake routing your requests out to the world, and be able to place one call to their NOC – a call that will resolve the situation much more quickly than scratching your head and anxiously phoning your sales representative.

Performing a Traceroute

Initiating a traceroute is a very simple procedure (although interpreting the results is not). Traceroutes can be done from any Unix or Windows computer on which you have an account; MacOS users will need to download the shareware program IP Net Monitor, available from shareware sites or from its home page (listed in the resources at the end of this article). There are also numerous traceroute gateways around the Internet which can be accessed via the web.

From a Unix shell account, you can usually just type traceroute at the prompt, followed by any of the Unix traceroute options, followed by the host or IP you’re attempting to trace to. If you receive a “command not found” message, it indicates either that traceroute isn’t installed on the computer (very unlikely), or it’s simply installed in a location which isn’t in your command path. To fix this, you may need to edit your path or specify the absolute location of the program – on many systems, it’s at /usr/sbin/traceroute or /sbin/traceroute.

Windows users with an active Internet connection can drop to a DOS prompt and type tracert followed by the hostname or IP address they want to trace to. With the Unix and DOS traceroute commands, you can use any of a number of command-line options to customize the report that the trace will give back to you. With web-based traceroute gateways, you may be able to specify which options you want, or a default set of options will be preselected.

How it Works

As the Unix man page for traceroute says, “The Internet is a large and complex aggregation of network hardware, connected together by gateways. Tracking the route your packets follow (or finding the miscreant gateway that’s discarding your packets) can be difficult. Traceroute utilizes the IP protocol ‘time to live’ field and attempts to elicit an ICMP TIME_EXCEEDED response from each gateway along the path to some host.”

Traceroutes go from hop to hop, showing the path taken to site A from site B. Using the 8-bit TTL (“Time To Live”) field in every packet header, traceroute measures the latency to each hop, printing the DNS reverse lookup as it goes (or showing the IP address if there is no name).

Traceroute works by sending a UDP (User Datagram Protocol) packet to a high-numbered port (one unlikely to be in use by another service), with the TTL set to a low value (initially 1). The packet gets partway to the destination before the TTL expires, which provokes (if all goes as planned) an ICMP_TIME_EXCEEDED message from the router where it expired. This signal is what traceroute listens for.

After sending out a few of these (usually three) and seeing what returns, traceroute then sends out similar packets with a TTL of 2. These get two routers down the road before generating ICMP_TIME_EXCEEDED packets. The TTL is increased until either some maximum (typically 30) is reached, or it hits a snag and reports back an error.
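
The whole algorithm is compact enough to sketch in Python. This bare-bones version (which must run as root for the raw ICMP socket) sends one probe per TTL instead of three and doesn’t distinguish between ICMP message types, so treat it as an illustration rather than a replacement for the real tool:

# Bare-bones traceroute sketch: raise the TTL one hop at a time and
# listen for the ICMP reply from the router that drops the probe.
# Requires root privileges for the raw socket.
import socket, select

def trace(dest_name, max_hops=30, port=33434, timeout=3.0):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        icmp = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname('icmp'))
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                            socket.getprotobyname('udp'))
        udp.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        udp.sendto('', (dest_addr, port))      # UDP probe to a high port
        ready = select.select([icmp], [], [], timeout)
        if ready[0]:
            data, addr = icmp.recvfrom(512)    # the reply names the hop
            print '%2d  %s' % (ttl, addr[0])
            if addr[0] == dest_addr:           # reached the destination
                break
        else:
            print '%2d  *' % ttl               # no answer within timeout
        icmp.close()
        udp.close()

trace('www.example.com')

Run against a real hostname, its output follows the same hop-number-and-address format as the examples below.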

The only mandatory parameter is the destination host name or IP number. The default probe datagram length is 38 bytes, but this may be increased by specifying a packet size (in bytes) after the destination host name.

What it Means

What traceroute tells you (assuming everything works) is how packets from you get to another specific destination, and how long they take to get there. Armed with a little knowledge about the way the Internet works, you can then make informed guesses about a number of things.

Getting There is Half the Fun

Let’s say that I’d like to know how traffic from my website reaches the network of Sites On Line, a large online service. So, I run a traceroute to their network from my webserver.

traceroute to www.sol.com (127.188.146.18), 30 hops max, 40 byte packets
1  epsilon3.myisp.net (127.50.252.2)  1 ms  1 ms  1 ms
2  sc-mc-4-0-A-OC3.myisp.net (127.50.254.50)  1 ms  1 ms  1 ms
3  sol-hn-1-0-H-T3.myisp.net (127.50.254.58)  2 ms  2 ms  2 ms
4  gpopr-rre2-P2-2.sol.com (127.163.134.61)  2 ms  1 ms  2 ms
5  127.168.0.30 (127.168.0.30)  3 ms  3 ms  3 ms
6  www-dr4.rri.sol.com (127.188.128.254)  4 ms  7 ms  8 ms

You can see from this example that the traffic passes through one of my ISP’s routers, then through what is evidently an OC-3, before entering a line that (judging by the name) is a DS-3 gateway between my ISP and SOL. From there, it enters the GigaPop of SOL, passes through a router which doesn’t have a reverse lookup (I see only its IP address), and eventually reaches one last router (clearly marked as leading to its webserver) at SOL.

Because of the way routing works, an ISP can really only control its inter-network traffic as far as choosing what to listen to from each peer or upstream, and then deciding which of those same contact points gets its outbound packets. So when you’re tracerouting from your network to another network, you’re getting a glimpse of how your neighbor network is announcing itself to the rest of the ’Net. Because of this “asymmetric routing,” you should probably do two traces (one in each direction) between any two points you’re interested in.

Reading the T3 Leaves

While you can easily read the names that appear in a traceroute, interpreting them is a hazy enterprise. If you’ve got a pretty good feel for the topology of the Internet, the reverse lookups in your traceroute can (possibly) tell you a lot about the networks you’re passing through. Geographic details, line types and even bandwidth may be hinted at in the names – but hints are really all that one can expect. Since every large network names its routers and gateways differently, you can assume some things about them, but you can’t be sure. If you want to engage in router-spotting, note that common names may reflect:

• locations (a mae in the name might indicate a MAE connection, or la might indicate Los Angeles)

• line type (atm may indicate an ATM circuit as opposed to a clear-channel line)

• bandwidth (T3 or 45 is generally a dead giveaway, for example)

• a gateway (sometimes flagged as gw) to a customer’s network (sometimes referred to as cust or something similar) 

• positions within the network (some lines may be named with core or border, or something similar)

• engineering senses of humor (as seen by the reference to Babylon 5 in my ISP’s network)

• the network whose router it is (almost always identifiable by the domain name; if the router doesn’t have a reverse lookup, you can perform an nslookup on its IP address to find out whose IP space it is in)

However, it should be reiterated here that amateur router-ology is a dangerous sport, since really the only people who understand a router’s name are the people who named it. So don’t get too upset when you think you’ve spotted someone routing your traffic through Nome, Alaska when in fact the router was named by a Hobbit-obsessed engineer with bad spelling.

Some Clues About Connectivity

A prospective ISP, prospectiveisp.net, tells you that it is fully peered. Is there any way that you can check up on this? Well, yes and no.

Traceroute can tell you whether two networks communicate directly, or through a third party. First, you traceroute from a traceroute page behind hugeisp.net to a location within prospectiveisp.net.

traceroute to www.prospectiveisp.net (127.50.225.13), 30 hops max, 40 byte packets
1  s8-3.oakland-cr2.hugeisp.net (127.0.68.77)  12 ms  17 ms  8 ms
2  h2-0-0.paloalto-br1.hugeisp.net (127.0.1.61)  15 ms  32 ms  12 ms
3  sl-bb10-sj-9-0.intermediary.net (127.232.3.25)  30 ms  15 ms  13 ms
4  sl-gw11-dc-8-0-0.intermediary.net (127.232.7.198)  82 ms  103 ms  73 ms
5  sl-prospective-1-0-0-T3.intermediary.net (127.228.220.14)  77 ms  74 ms  73 ms
6  border1-fddi.charlottesville.prospectiveisp.net (127.152.42.1)  121 ms  76 ms  75 ms
7  ns.prospectiveisp.net (127.50.225.13)  80 ms  79 ms  94 ms

It is evident from this traceroute that hugeisp.net and prospectiveisp.net travel through a third party to reach each other. While this doesn’t say anything definite about their relationship, two networks will generally pass their traffic directly to each other if they are peers (barring strange routing circumstances or other arrangements). This doesn’t paint a full picture (and you should confirm it with a trace from prospectiveisp to hugeisp), but it gives you reason to doubt that prospectiveisp’s claims of full peering are true.

Note that while traceroute can tell you whether two networks communicate directly or indirectly, it can’t tell you any more about their relationship. Even if two networks do communicate directly, traceroute can’t tell you whether their relationship is provider-customer or NAP peering (except perhaps through whatever hazy clues you obtain from router names, or by calling a psychic hotline and reading them your trace).

In the above example, you might conclude that prospectiveisp buys transit or service from intermediary.net, which peers with (or buys service from) hugeisp.net. Of course, the opposite may be true – that prospectiveisp.net peers with intermediary.net, and hugeisp.net buys service from intermediary.net. However, common sense and a rough feel for the “pecking order” of first- and second-tier networks should guide your guesses here. 

Where’s the Traffic Jam?

Let’s say that you encounter some difficulty reaching the Somesite.Com website (you describe the site’s download speed as “glacial”), and decide to show off your newfound traceroute skills to investigate the cause of the problem. Here’s what you find:

traceroute to somesite.com (127.8.29.15), 30 hops max, 40 byte packets
1  epsilon3.yourisp.net (127.50.252.2)  1 ms  0 ms  1 ms
2  other-FDDI.yourisp.net (127.50.254.46)  2 ms  2 ms  2 ms
3  br2.tco1.huge.net (127.41.177.249)  6 ms  4 ms  22 ms
4  112.ATM2-0.XR2.HUGE.NET (127.188.160.94)  8 ms  28 ms  30 ms
5  192.ATM9-0-0.GW3.HUGE.NET (127.188.161.125)  8 ms  32 ms  28 ms
6  * * *
7  * somesite-gw.customer.HUGE.NET (127.130.32.234)  12 ms !A *

From this, you can guess (with a high degree of certainty) that the initial source of trouble is outside of your ISP’s network and somewhere along the route used by the site’s carrier, huge.net. Hop six shows some type of trouble, with none of the three packets sent to that gateway returning. Notice that the traceroute continues on to the next stop despite the failure at hop six. The same problem was also likely responsible for the loss of two of the packets sent to hop seven. Thankfully, one managed to get back and indicated the trace was complete.

Using the example above, you could ping the address reflected in hop seven and then compare it to a ping to, say, hop four or five, and see if loss of packets to hop seven versus hop five reflects what the traceroute indicated. 

Selected Traceroute Error Messages:

• !H Host Unreachable

This frequently happens when the site or server is down.

• !N Network Unreachable

This can be caused by networks being down, or routers unable to transmit known routes to a network.

• !P Protocol Unreachable

This only happens when a router fails to carry a protocol that is used in the packet, like IPX. 

• !S Source Route Failed

This can only happen if you are using the source route functions of traceroute, i.e., tracing from a remote site to another remote site. The site you are using source route tracing from must have source route tracing turned on, or it will not work.

• * TTL Time Exceeded

This is caused when the path back exceeds the TTL time limit, or the router sends an ICMP Time Exceeded message to site A.

• !TTL is <=1

This happens any time the TTL is changed to a new value, via RIP or some other protocol.

Caveat Tracerouter

Before you make too many decisions based on the results of traceroutes, you should be very aware that tracerouting is a complex phenomenon, and that plenty of otherwise innocuous things can interfere with it. For example, it may be possible to ping a site, but not to traceroute to it. This is because many routers are capable of being set to drop time exceeded packets, but not echo reply packets.

Traceroutes may return an unknown host response, but this frequently does not mean that the site is down or the network connection in between is faulty. Some domains simply are not mapped for use without a third-level domain as part of the name. For example, tracerouting to aol.com will not work, but tracing to www.aol.com will.

In short, tracerouting is a valuable tool, but does not give a complete picture of a network’s status. Try to use as many gauges of network status as possible when attempting to debug an Internet connection. 

Other Traceroute Resources:

http://boardwatch.internet.com/mag/96/dec/bwm38.html

A lengthy tutorial on Traceroute by Jack Rickard.

http://www.traceroute.org

Traceroute.org directory of Traceroute Gateways

http://www.tracert.com/cgi-bin/trace.pl

The Multiple Simultaneous Traceroute Gateway

http://boardwatch.internet.com/traceroute.html

Boardwatch’s own list of traceroute servers

http://nitrous.digex.net/

A handy site that allows you to trace from Digex’s network at MAE-East, MAE-West, Sprint NAP or PAIX.

http://www.sustworks.com/products/product_ipnm.html

Home Page for the program IP Net Monitor for MacOS

Freenix Support Sites

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, April 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Many times, we are faced with difficult questions about installing, maintaining and running an Internet server with a Free Unix OS. And the best answer is, “take a nap.” That’s what I always do.

Anyway … the point is that anyone who tells you they know everything about running a Unix server is lying. The power and configurability of Unix leads to “complexity.” “Complexity” leads to there being “lots of information that you don’t know.” This leads to “needing to look stuff up.” Also, “fear” leads to “anger,” “anger” leads to “hate,” and “hate” leads to “suffering.” [2]

The first source for answers to any problems with your Freenix of choice is to be found in your man pages, and whatever handbook pages your OS or distribution provides. However, since these often tend to be big on abstractions and short on real-world examples, the web is often your best choice for clear answers to common problems. Since the amount of documentation available on the web is tremendous, in this column I’ll be focusing on free support resources; commercial support will be the topic of a future column.

The following is intended to be a very brief guide to the best free spots on the Web for information. The list is very incomplete; if you have a favorite Freenix resource site you don’t see here, please e-mail me.

Free Linux Support Resources

• Linux Documentation Project (www.linuxdoc.org): The LDP is one of the best things that the Linux community has going for it. The subjects sometimes tend toward the arcane and academic, but you’ll almost certainly find a HOWTO or FAQ guide for any program or service you want to set up for almost any Linux distribution. Its material isn’t always updated to cover the “latest and greatest,” but overall it’s an invaluable resource for how to do almost anything with Linux.

• Linux Distribution Support Sites: Most of the various Linux distributions provide support pages for their distributions. Helpfulness varies from distro to distro, but they generally provide good tips on distribution-specific issues. Among the most notable are Red Hat Linux (www.redhat.com/apps/support/), Linux Mandrake (English homepage at www.linux-mandrake.com/en/), SuSE (www.suse.com/Support/), Debian GNU/Linux (www.debian.org/support), Corel LinuxOS (linux.corel.com/products/linux_os/techsupport/support.htm), Caldera OpenLinux (support.calderasystems.com/caldera), Slackware (www.slackware.com/), LinuxPPC (www.linuxppc.org) and WinLinux 2000 (www.winlinux.net/support.html). For a good list of English-language Linux distributions, go to www.linux.org/dist/english.html.

• Linux Journal Help Desk (www2.linuxjournal.com/cgi-bin/frames.pl/help/): The print magazine Linux Journal offers a high-quality collection of links to outside help resources. The links are updated well, and are a good first step to finding answers.

• Linux Fool (www.linuxfool.com): This site features a number of message/discussion boards on topics ranging from X windows to DNS configuration. Some of the forums contain far more questions than answers, but the site has a fairly high signal-to-noise ratio, and is an excellent way to get one-on-one answers. Linux Fool has few registered readers/posters as of this writing (about 1000), but hopefully that number (and the range of topics covered) will be greater by the time you read this.

• Fresh Meat (freshmeat.net): The number-one source for new software and updates. If you’re looking for the newest version of anything, you’ll find it here; more importantly, you can search for a description term (like “statistics”) and find software whose description matches. Not strictly useful for Q & A, but helpful as a launching point for homepages and resources of software packages you might be using.

• Gary’s Encyclopedia (members.aa.net/~swear/pedia/): This site is geared toward Linux, but also has some good information for any Unix user. The “pedia” features a few well-written original tutorials and literally hundreds of links, most of which are very well maintained, and many of the links feature comments about their specific subject or usefulness. Sadly, the page contains a note that as of January 2000, the links will not be actively maintained due to a lack of usage.

• Linux Developer’s Network (lindev.net): Updated several times daily, this site is an excellent source for information on new software packages (especially enterprise apps and code libraries). It also features links and tutorials, and while it doesn’t have the range of informative or thought-provoking reader comments that Slashdot (slashdot.org, the dean of Linux/open source news sites) does, it also doesn’t have the high noise-to-signal ratio.

• Linux Online Help Center (www.linux.org/help/): Hundreds of Linux support links. They aren’t as updated as they might be (when I looked at it, its links to the LDP still hadn’t been moved from the LDP’s old home at UNC), but there’s a lot of helpful material there that’s likely to at least point you in the right direction. The site lists its links with helpful comments, and can point you to resources you’re unlikely to find elsewhere.

• LinuxCare Support (www.linuxcare.com/support/): LinuxCare is a commercial Linux support company, but they maintain a free support database at this site. It’s well-organized, but doesn’t yet have too many answers, and requires you to sign up for free access before you can see the answers to any questions posted.

• LinPeople (www.linpeople.org/): This isn’t technically a web resource, but it can still be quite useful. The Linux Internet Support Cooperative is a group of sysadmins who devote time to answering Linux questions on their IRC channel. Depending on the time of day, phase of the moon, and who else is logged on, the responses to your questions can range from thoroughly helpful to utter silence; still, one-on-one dialogue is often the best way to understand important Unix problems and concepts. Find LinPeople at irc.openprojects.net, channel #LinPeople.

• Linux Glossary Project (glossary.linux-support.net): The Linux Glossary provides a number of “user-friendly” basic definitions of important terms. It seems to borrow heavily from the Hacker Jargon File and other common sources, but can be useful if you’re looking for a down-to-earth definition of an unfamiliar term.

Free *BSD Support Resources

• FreeBSD Home (www.freebsd.org): FreeBSD’s website should probably be your first destination when looking for an answer. Most important is the documentation on the site, including the FreeBSD Resources for Newbies page (www.freebsd.org/projects/newbies.html), their (slightly outdated) FreeBSD Tutorials page (www.freebsd.org/tutorials/), and the FreeBSD FAQ (www.freebsd.org/FAQ/FAQ.html). Most important, however, is the FreeBSD Handbook (www.freebsd.org/handbook/), which can answer almost any common question you’ll have (note that a copy of the handbook should have been put in your /usr/share/doc directory when you installed the system). FreeBSD’s Support Page (www.freebsd.org/support.html) has a number of resources, including links to mailing lists and newsgroups, a list of regional user groups, links to the GNATS bug-reporting system, and lists of commercial FreeBSD consultants. 

• NetBSD Documentation (www.netbsd.org/Documentation/): NetBSD’s support site isn’t as in-depth as some others, but it’s an admirable effort, considering the number of platforms that it’s ported to. The documentation available is not as in-depth as for some other OSes, but it is very well organized (my personal documentation test is how quickly you can find out how to boot into single-user mode). Overall, very helpful to NetBSD users, especially new sysadmins trying to find out how to perform normal tasks.

• OpenBSD Home (www.openbsd.org): OpenBSD’s site provides several high-quality resources for its users, including the OpenBSD FAQ (www.openbsd.org/faq/), a Manual Page Search feature (www.openbsd.org/cgi-bin/man.cgi) and a listing of OpenBSD Mailing Lists (www.openbsd.org/mail.html).

• Daemon News / “Help, I’ve Fallen” (www.daemonnews.org; www.daemonnews.org/[YYYYMM]/answerman.html): Daemon News is a monthly e-zine covering all sorts of cool *BSD-related topics, including descriptions of new software and “how-to” articles. If you run any BSD, it’s definitely worth a look every month or so.

The “Help, I’ve Fallen” column is an absolute must for newer *BSD administrators. It might not cover your more “out there” questions, but a lot of common problems (from setting up modems to setting up printers) have a very understandable explanation here. Each column includes a listing of all of the previous questions answered, so it’s best to look at the most recent column first.

• Comprehensive Guide to FreeBSD (www.vmunix.com/fbsd-book/): Covers everything from installation (Chapter 2) to setting up PPP with FreeBSD (Chapter 8). Some of the material is outdated (referring to FreeBSD 2.x), but most of it is very applicable. It’s short on theoretical grounding and explanations, but very good for real world “quick-and-dirty” explanations on almost any problem to be found with FreeBSD, and much of the information can be applied to other BSDs.

• The FreeBSD Diary (www.freebsddiary.org/): The “diary” of a FreeBSD sysadmin, focusing heavily on links to topics regarding Internet servers (like Apache and e-mail). Not the easiest to search for answers, but featuring links to a lot of content you won’t find elsewhere (if you can find it here).

• FreeBSD “How-To”s for the Lazy and Hopeless (flag.blackened.net/freebsd/): Some of the links on this site are outdated, but most provide useful information. While this is long on step-by-step intros and short on explanations, it can provide links to quick info (on things from TCP Wrappers to PnP audio gear drivers) if that’s what you’re looking for. 

• FreeBSD Rocks! (www.freebsdrocks.com/): Not updated as frequently as some other sites, but still a good choice when you’re looking for FreeBSD news or support forums.

Free General Unix Support Resources

• Geocrawler (www.geocrawler.com/): Geocrawler is a repository of archives of more than a hundred mailing lists, on topics ranging from Linux distributions to *BSD to Apache, PHP and Perl. Each mailing list has a search interface, making the site an invaluable resource for those searching for new topics or those not well-documented in existing HOW-TOs or FAQs.

• Root Prompt (rootprompt.org/): Provides a number of informative articles and tips on various *nixes. Not much original content, but provides an excellent (and well-updated) collection of links to papers and FAQ/HOWTOs on everything from configuring Sendmail to recycling IP addresses.

• Unix Help for Users (www.geek-girl.com/Unixhelp/): This site provides basic Unix questions and answers. If a question seems too simple to be answered on one of the more platform-specific sites, it’s probably answered here.

• Unix Guru Universe (www.ugu.com): UGU provides a neat search interface with a number of links to outside information on almost any Unix you can think of. While it’s light on “in-house” material, it can find things in nooks and crannies that other sites don’t have.

In a future column, we’ll look at commercial support offerings online. Thanks for reading, and remember – if you’ve had half as much fun reading this as I’ve had writing it, I’ve had twice as much fun as you.

[1] My sincerest apologies to anyone offended by this, but I just had to include a John Dvorak joke.

[2] The Force for Dummies, Master Yoda, Jedi Press.

Getting to Know OpenBSD – An Interview with Louis Bertrand

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, March 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

If you aren’t familiar with the OpenBSD project (www.openbsd.org), it’s worth your time to find out about it. The core of OpenBSD’s mission is to provide the most secure Unix operating system available. For many ISPs, this is a very powerful consideration for protecting their customers’ data and their own – and one that can give them a competitive advantage among security-conscious customers.

OpenBSD offers a free variant of the BSD Unix operating system. It runs on a variety of platforms (from Intel x86 to Motorola M68k to Sun SPARC and several others). It has a number of native ports of popular software, and includes binary emulation options to run programs written for other operating systems including Solaris, Linux, FreeBSD, BSD/OS, SunOS and HP-UX. 

OpenBSD runs on a tight budget. The project is funded primarily by the sales of installation CD-ROMs, T-shirts and goodies, plus donations of money and equipment. OpenBSD has various commercial support options, and is available via anonymous FTP (www.openbsd.org/ftp.html) or on CD (www.openbsd.org/orders.html).

Answers from Louis Bertrand

Recently, I had a chance to interview Louis Bertrand (an OpenBSD developer, and the project’s unofficial PR guy) on the past and future of OpenBSD, as well as why ISPs might want to deploy it. Here’s what he had to say.

Question: What’s the short version of the history of the OpenBSD project?

Bertrand: Started in 1995 by Theo de Raadt, OpenBSD’s primary goal was to address the deplorable state of Unix and network security at the time. The cryptography angle was a natural outgrowth because it allowed the team to address security problems inherent in some protocols without breaking standards. The main effort was a comprehensive code audit, looking for the kind of sloppy coding practices that lead to buffer overrun attacks, races and other root hacks. Another goal was to release a BSD-derived OS with completely open access to the source code: OpenBSD was the first to let any and all users maintain their own copy of the source tree via anonymous CVS. We also kept the multi-platform aspects of BSD, subject to manpower – security comes first.

The OpenBSD source tree was created in October 1995, the first FTP-only release (2.0) happened in fall 1996, the first CD-ROM release (2.1) came out in spring 1997, and CD-ROM releases have been coming out like clockwork every six months since. 2.6 just came out Dec. 1, 1999.

OpenBSD is derived from NetBSD (Theo de Raadt was a co-founder of that project). NetBSD is in turn derived from the Berkeley Computer Systems Research Group’s final releases of 4.4BSD-Lite.

Q: What does OpenBSD see as its niche or mission? Will that stay the same in the future?

Bertrand: OpenBSD is unique because of the security stance and the code audit, and the cryptography integration. There are no plans to change that focus. I mean, why should we? No other OS vendor (open source or commercial) is doing an active code audit, and nobody integrates encryption the way we do.

OpenBSD’s mission continues to be working proof that it is possible to offer a full-function OS that is also secure. The software industry and consumers (both commercial and open source) are locked into the “New! New! New!” mindset. Consequently, the accepted security stance is to back-patch whenever someone finds a problem. We completely reject that – ideally we’ll have corrected all the problems before they can be discovered and exploited by the bad guys. 

OpenBSD’s existence is also a constant reminder that the US government’s ban on exporting strong cryptographic software (it’s considered “munitions”) has become essentially futile. It is now easier to obtain strong encryption software outside the USA than inside. Being free software, we also completely avoid the restrictions of the Wassenaar Arrangement (a multilateral arms export control agreement).

Q: Where does the project stand now (version, newest features added)?

Bertrand: The most important addition to 2.6 was OpenSSH, a free implementation of the SSH1 protocol (www.openssh.com). OpenSSH is integrated in OpenBSD 2.6. It’s an essential replacement for telnet and rsh for remote access where there is a danger of password sniffing over the wire. You can also use scp to transfer files instead of rcp or ftp. The last hurdle to complete crypto coverage in OpenBSD is the patent on RSA public key encryption (as used in SSH and in SSL for secure web servers). OpenSSL is used everywhere except the USA, where you can only use the RSAREF library, and then only for non-commercial applications.
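
For admins who haven’t yet made the switch, the day-to-day substitution is simple; a quick sketch (the hostname and filename are placeholders):

    $ ssh user@shell.example.com                 (encrypted login; replaces telnet/rlogin/rsh)
    $ scp report.txt user@shell.example.com:     (encrypted copy; replaces rcp)

Both commands encrypt the whole session, passwords included, so nothing useful crosses the wire in the clear.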

Another big improvement in 2.6 was the huge effort to improve the documentation (both manual pages and web FAQ) and to make sure the information was up to date and correct. We’re trying to avoid the situation where people are dissatisfied with the main sources of information and start writing their own how-to documents – that only serves to fragment the sources of information, and users end up wasting a lot of time hunting around for reliable information.

Q: What’s coming up on the roadmap for the next major/minor versions? What new features? When might we expect to see them?

Bertrand: If all goes according to plan, there will be a release 2.7 in early summer 2000. There are no plans for a “major” release (e.g. 3.0).

Currently there’s a lot of work going on to integrate the KAME (www.kame.net) IPv6 networking code into OpenBSD (it’s already supported, but this actually integrates it into the source tree). There’s also a major rework of the “ports” tree, the facility by which people can download and build applications simply by doing make install in the target directory.
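
For readers who haven’t used a ports tree, a typical session looks something like this; the port chosen here is purely illustrative, and you need root to install:

    # cd /usr/ports/net/rsync
    # make install

That one command does the fetching, patching, building and installing for you.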

There is also some exploratory work going on for multi-processor support. Previously we flatly turned down the idea because it would be a huge effort that would only benefit a minority of users who are running SMP machines. But the recent drop in prices of SMP hardware means that it’s time to revisit that decision, and a few developers are interested in doing it. We still need to make sure we’re doing it right, not just heating up the second processor.

Q: What are OpenBSD’s weak spots right now, or what needs the most work?

Bertrand: The main criticism leveled at OpenBSD is that it doesn’t track the very latest standards. It’s a fair comment because we’ll often hold back on a new feature because of stability concerns. We held back APM (power management) support from the 2.5 release because it hadn’t been tested enough, and we still use named-4.9.7 for DNS because of security concerns, even though named-8 is in routine use elsewhere (there was a remote root hole found in bind-8.2.1 as recently as last November).

A lot of people have been asking for multi-processor support. Up to now, that was out of the question, but SMP hardware has been getting cheaper and several developers are interested in starting on it.

Also, we don’t support the Macintosh PowerPC or HP PA-RISC platforms. Again, there’s some work going on to change that. We’ve also dropped active support for the Alpha because we were getting no help from Compaq. It still builds and runs, but we’re falling behind on hardware support.

Q: From an ISP/webhosting perspective, what specifically would you cite as reasons to choose OpenBSD?

Bertrand: Security and code correctness, along with the “secure by default” configuration. They all go hand in hand.

Start with the “secure by default” philosophy. It means that a sysadmin doesn’t have to rummage around dark corners shutting down risky or useless server daemons that the installation script enabled by default, or run around tightening up permissions. Sysadmins are very busy and we let them get their job done by enabling only those services and daemons they actually want and need. The default installation of OpenBSD has no remote root holes, and hasn’t had one for over two years.
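
To make the contrast concrete: on systems that ship with everything enabled, the lockdown chore usually starts in /etc/inetd.conf. What follows is a hedged sketch, not a recipe; the exact service lines vary from system to system:

    # In /etc/inetd.conf, comment out anything you don't need:
    #telnet  stream  tcp  nowait  root  /usr/libexec/telnetd  telnetd
    #ftp     stream  tcp  nowait  root  /usr/libexec/ftpd     ftpd -US
    # ...then signal inetd to re-read its configuration:
    kill -HUP `cat /var/run/inetd.pid`

OpenBSD ships with risky services like these already disabled, so there’s nothing to hunt down in the first place.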

Then there’s the security/correctness angle. The OpenBSD code audit discovered that if you fix the bugs, you have gone a long way to securing the system. In fact, it’s not necessary to prove that some buffer overflow would have caused a security hole – it’s enough to know that the software is now doing what it’s supposed to do, with no nasty side effects. That’s the kind of proactive measure that allowed OpenSSH to avoid the recent RSAREF vulnerability in SSH.

Finally (but not least), there’s the built-in cryptography. Whether you’re setting up an IPSec VPN between subnets across the Internet, or just running a modest client/server VPN with SSH and stunnel, OpenBSD has the built-in tools, maintained and tested for interoperability with commercial vendors. You don’t have to download, build, test and integrate your own cryptographic subsystems. No other OS ships with this level of integration.
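
The “modest” end of that range deserves a concrete example. Even without IPSec, ssh by itself can tunnel a single TCP service over an encrypted channel; here, POP3 mail (the hostname and local port are placeholders):

    $ ssh -L 8110:localhost:110 user@mail.example.com
    (point your mail client at localhost port 8110; traffic
    travels encrypted to port 110 on the server)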

Q: Can you point to any user base or constituency that OpenBSD is a “must-have” for? If so, why?

Bertrand: Any users who need to run intrusion detection, firewall or servers carrying sensitive information (e-commerce too!). For intrusion detection, there’s no point in trying to guard against malicious behavior on your network if the IDS system or firewall itself is vulnerable. For sensitive data, the built-in SSL makes it very easy to set up a secure web server. (Note, however, that the RSA patent restriction is still something US-based service providers must deal with).

Security is also a concern if you’re offering VPNs: any encryption scheme is worthless if the underlying OS is vulnerable (this is like an intruder bypassing a steel door by breaking a window).

Q: How vital do you see security as being to the future of the ‘Net and the future of OpenBSD? 

Bertrand: Extremely vital for both. I’d like to say concern for security is going to hit a threshold where more and more people are going to ask tough questions of their software vendors, ISPs and e-commerce outlets, but it’s probably wishful thinking. We’d like to see more than just “safety blanket” assurances from vendors.

One nasty trend is the growing full-time presence of powerful servers on end-user DSL and cable modems. This means there will be a fresh batch of compromise-able machines available for concerted attacks. If more vendors adopted a “secure by default” stance, that would go a long way to reducing the exposure of naive sysadmins.

Should You Use OpenBSD?

There are positives and negatives to OpenBSD’s focus on security. On the plus side, it’s the most secure free operating system you can get anywhere. If your number-one concern is making your server safe from intrusion, look no further than OpenBSD.

All Freenixes offer options for securing your system, but the fact that OpenBSD is secure “right out of the box” is a major advantage for the inexperienced administrator who isn’t sure how to secure a system, or the busy administrator who has the knowledge but not the time. A plain-vanilla installation of the OS will already include a number of security features that might take hours of installation and configuration (plus some considerable knowledge and research) with other Unixes.

On the negative side, OpenBSD’s security comes as a tradeoff for new gadgets and features, a tradeoff you may or may not be willing to make. There’s also a lot of truth to the old saying that “security is inversely proportional to convenience,” and tight security can be very inconvenient. A lot of administrators (myself included) are used to saving time by leaving ourselves little insecurities (like rsh, allowing logins as root, or not using TCP Wrappers for services like ftpd or telnetd). At the same time, many of the busy or inexperienced administrators who benefit from a “secure by default” installation may not have the time or the knowledge to re-enable these insecure conveniences if they want them.
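
To pick one concrete instance: whether direct root logins are allowed over ssh is controlled by a single knob in sshd’s configuration file, but you have to know the knob exists. A hedged sketch (the file’s location varies by version; often /etc/ssh/sshd_config):

    # In sshd_config ("yes" allows direct root logins, "no" forbids them):
    PermitRootLogin no
    # restart sshd after changing this

None of this is hard, but it requires knowing which file to edit, and that’s exactly the knowledge a new administrator may lack.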

And remember that you, the server administrator, can easily foil OpenBSD’s security precautions by doing something boneheaded, like installing third-party software with known security holes, or running your webserver as root. You yourself can make OpenBSD insecure by twiddling the knobs and switches if you don’t know what you’re doing.
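
The webserver example is worth spelling out, because the fix costs two lines of configuration. Under Apache, for instance, the parent process binds port 80 as root and then serves requests through unprivileged children (the user and group names below are a common convention, not a requirement):

    # In httpd.conf:
    User  www
    Group www

If a CGI script or a buffer overflow in the server is exploited, the attacker inherits the www user’s privileges rather than root’s.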

Unless security is your primary consideration, you probably aren’t going to use OpenBSD for all of your Unix servers. Linux, FreeBSD and NetBSD all excel in various areas where OpenBSD does not. However, OpenBSD certainly has its place, and should be part of any network administrator’s toolkit. For your most security-sensitive tasks, OpenBSD is very likely to be “the right tool for the right job.”