Random Ramblings About BSD on MacOS X (Part 2)

By Jeffrey Carl and Matt Loschert

Daemon News, December 2000

This is the second installment in a series of observations, representing the adventures of a couple of BSD admins (one with a lot of prior MacOS experience, the other with more on the BSD side) poking around the command line on an iBook laptop running Apple’s Mac OS X Public Beta. We’ll attempt to provide a few notes and observations that may make a BSD admin’s work with Mac OS X easier.

Note: Again, as members of the FreeBSD Borg collective MIND-STABLE, we’ll refer to our various comments/sections by the singular “I.” This also prevents either of us from admitting which dumb parts of the article were ours specifically. 🙂

New Information from Last Time

The quality and quantity of the feedback that we (OK, Borg segfault here) received after part one of this series was fantastic. Thanks to everyone (you know who you are) who wrote in to clear up points or to show us a better way to do things. Aside from a few flames on Slashdot (“You are stupid and blow goats!”, “Duh!”, and “N4t4li3 P0rtm4n r00tz y00”, etc.) the feedback was very helpful.

The number of NeXT gurus (and some BSD overlords, as well) out there who came to the rescue to correct mistakes and offer answers to the questions posed in the article was amazing. This time around, our readers who are l33t NeXT h4Xors, BsD r00tz or k3wl m4c d00dz are still invited to help clear up the questions or postulations herein on Mac OS X. So, to follow up and answer some of the questions posed by the first article, here are some of the best responses received:

From: Dwarf
Subject: Daemon News Article
Not sure if any of this is new info to you guys....
OSXPB inherits a lot of "philosophy" from OS X Server; thus the lack of logging that occurs. Apple seems to have come down in favor of system responsiveness versus usage monitoring. Their rationale for turning off logging (for almost everything) by default is that it impacts network throughput. If the logs are something you need, they can all be enabled from the command line, but network (and probably GUI) responsiveness will likely suffer as a result. Apple also seems to have made several assumptions about how OS X (any flavor) will be used.
Apparently their idea is that it will provide services to a LAN and be hidden from the world by a firewall of some sort, thus the default enabling of NFS and having NetInfo socketed to the network by default. Since NetInfo is a multi-tiered database, your "local" NI Server may also be the "master" NI Server for subnetted machines, while being either a clone or a client of a still higher-level NI Server. So, they hook it to the net by default. This also provides the mechanism by which other machines can automatically be added to your network. At bootup, each machine tries to learn who it is and where it lives by querying the network for its "mommy" (a Master NI Server). If it finds one, it accepts the name and IP that server furnishes and initializes itself accordingly. If it doesn't, it uses the defaults from its initialization scripts. Getting this all to work painlessly is one of the things about which the NetInfo documentation is pretty obscure, owing primarily to the fact that it is written in terms of tools that no longer exist as separate entities but have been combined into more powerful tools. Further, if NFS is properly set up, each machine will automount the appropriate NFS volumes at startup; this is another area where making it work is not clearly explained. I will only touch on the confusion that exists about setting up MailApp and making it work, yet another documentation shortcoming.
Another facet of operation that isn't clearly explained is the Apple philosophy about how the file tree is organized. Their thinking is that users should only install things into the /Local tree. /System should be reserved for those things that administrators add. My guess is that naïve users will be fine so long as they confine themselves to operating within the GUI, as the GUI tools seem to be pretty smart about where they put things. But, if those users start installing things from the CLI....
A problem area about which not much has been written is the fundamental incompatibility between the Mac HFS and HFS+ filesystems and BSD. Mac files are complex, containing both a Data Fork and a Resource Fork. BSD knows nothing about complex files, and thus when BSD filesystem tools are used to manipulate Mac files, the resource forks get orphaned. See: http://www.mit.edu/people/wsanchez/papers/USENIX_2000/ for a better explanation of this. This may be the source of a longstanding OS X Server problem whereby the Desktop Database process eventually goes walkabout and consumes over 90% of the CPU.

Authors’ Note: I’ve received a large number of comments about how the existing state of Mac OS X/Darwin documentation sucks. Frankly, I agree; that’s why I/we wrote these articles. While there’s a certain thrill to “spelunking” a new OS, it’s not something an administrator wants to be doing in his or her spare time. However, it’s hard to point a finger at Apple, since they’re currently under a hiring freeze after their recent absurd stock devaluation (post-Q3 results), and they would be perfectly right to have every man/woman/droid/vertebrate/etc. working on developing the OS rather than documenting it.

Nonetheless, there is still a significant problem lurking. There are tens of thousands of otherwise non-super-technical folks who have become MacOS gurus through inclination and experience, able to roam around a school or office and fix traditional MacOS problems. At the moment, the folks who (working with the current paltry documentation) can do this for MacOS X are incredibly few, since it requires significant knowledge of MacOS as well as Unix experience – and even then, it’s only NeXTstep mavens who will be truly at home with some of its aspects.

The good folks at Daemon News have provided a space here to try to answer some of these questions, but it’s up to the knowledgeable folks already out there to contribute to sites like Daemon News, Darwinfo, Xappeal, MacFixit, etc. to make whatever knowledge is out there available to the soon-to-be Mac OS X community. If Apple can’t document this OS thoroughly while rushing every available resource to develop it, it’s up to the folks who (at least marginally) understand it to do so, for the good of all its users.

From: Brian Bergstrand <[email protected]>
Subject: Re: Daemon News : Random Ramblings about BSD on MacOS X (part 1)
In your article you said: "(as mentioned, etc, tmp and var are all links to inside /private; I refuse to speculate why without having several drinks of Scotch first)".
The /private dir. is again a part of Mac OS X's NeXT heritage. Originally, the thought behind /private was that it could be mounted as a local drive for NeXTstations that were NetBooted. That way you would not have to mount volumes for /etc, /var, or whatever else needed write perms. This also worked well if you booted from a CD. /private meant data that was to be used only on a specific machine.
Brian Bergstrand
Systems Programmer Northern Illinois University
Imagination is more important than knowledge. - Albert Einstein

Authors’ Note: We’re discovering more and more that Mac OS X seems very much like the next revision of OpenStep – with MacOS 9 compatibility and a new GUI thrown in. Not that this is necessarily a bad thing; it just seems like NeXT was the “Addams Family” member of the BSD clan that nobody else noticed, and we’re not sure why. If anyone would like to speculate on the reasons that NeXT’s new ideas were largely ignored by the industry (aside from the typical Steve Jobs-ian tendency to make your computers too expensive for normal people to buy), we’d love to find out more.

From: Daniel Trudell <[email protected]>
Subject: random bsd os x ramblings...netinfo and ipconfigd
Netinfo is interesting. One thing I noticed is that most of the stuff in the utility applications are like mini versions of NetInfoManager, i.e. I can edit/add/delete users in NetInfoManager (including root and daemon), and those changes are present in Multiple Users after the fact, and vice-versa. However, some things still depend on /etc/passwd even in multiuser mode. I installed Samba, and I needed an /etc/passwd file for it. I used "nidump passwd . > /etc/passwd" to generate one from NetInfo... but there was a twist: some of the users were shadowed, some were not. I s'pose that might be an issue. Also, I was conforming UIDs on my box to the company UIDs... if somebody with a UID of 500 logs in remotely, the machine forgets how to handle users; a reboot fixes this.
In general, I think there's a consensus about acceptance of NetInfo. When you first run around in tcsh, a geek asks themself "what the f**k... this is jacked up, what's up with /etc?", but once you figure out NetInfo, a geek says "hey, check this out, it's nifty!"
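Daniel's shadowed-entry twist is easy to check for after the fact. Below is a minimal sketch (the dump file, user names, and hash are invented for illustration; on a real Mac OS X box you would feed it the actual output of nidump) that flags passwd-format entries whose password field is just "*":

```shell
# Hypothetical passwd(5)-format dump, of the sort "nidump passwd ." emits.
# The user names and the hash below are made up for this example.
cat > /tmp/ni_passwd.dump <<'EOF'
root:*:0:0:System Administrator:/var/root:/bin/tcsh
guest:*:99:99:Guest:/dev/null:/dev/null
jcarl:ab01cd23ef45gh67:501:20:Jeffrey Carl:/Users/jcarl:/bin/tcsh
EOF

# Field 2 is the password; a bare "*" means the real hash is shadowed
# away, so that entry is useless to Samba if copied into /etc/passwd.
awk -F: '$2 == "*" { print $1 " is shadowed" }' /tmp/ni_passwd.dump
```

Entries that survive the filter carry a usable crypt hash; the rest would need their passwords reset before Samba could authenticate them.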

Authors’ Note: I agree. Still, it will take a while (and some seriously improved documentation [see above]) to get used to.

From: Rick Roe
Subject: Re: Mac OS X article
Well, I'm not the foremost expert on Darwin, but I've learned a few things from "this side of the playground" that might help...
- The "Administrator" issue is Apple's compromise between the single-powerful-user paradigm of Classic Mac OS and the Unix/NeXT multiuser system with its too-powerful-for-your-own-good root.
An "administrator" has the privileges an upgrading Mac user expects: ability to change system settings and edit machine-wide domains on the disk (like /Applications). However, it still protects them from the dangers of running as root all the time, since they don't get write access to the likes of /etc (except through configuration utilities), or to /System (which is a partitioning that keeps the Apple-provided stuff separate from the stuff you install, like /usr vs. /usr/local).
The inability of "administrator" users to make changes to items at the top level of the filesystem is a bug in the current version.
- Actually, we got NTP support back in Mac OS 8.5, not 9.0 :)
- The developer tools are available separately from Mac OS X through Apple's developer program. The basic membership level is free, and gets you access to not only the BSD/GNU developer tools, but also the cool GUI tools, headers, examples, and documentation out the wazoo. Of course, you can also get a lot of this stuff from the Darwin distribution, too.
- Regarding the list of top-level files and directories in .hidden:
- Desktop DB and Desktop DF are used by Mac OS 9 to match files to their "parent" applications. OS X maintains them for the sake of the Classic environment but only uses them as a fallback, as it has a more sophisticated per-user system for this purpose.
- Desktop Folder is where native OS 9 stores items that show up on the desktop. In OS X, they're in ~/Library/Desktop.
- SystemFolderX was where the BootX file (a file containing info for Open Firmware and some bootstrap stuff to get the kernel started) was kept in previous developer releases. It's elsewhere now.
- Trash is the Mac OS 9 version. OS X uses /.Trash, /.Trashes, and ~/.Trash.
- So you've discovered how cool NetInfo is. I got tired of reading reviews that were just complaints about not being able to edit stuff in /etc to change things. :) Here's some extra info for you:
- There's a convenient GUI utility for editing NetInfo domains, NetInfoManager.
- root's password can be changed in NetInfoManager, or the Password panel in System Preferences, in addition to the command line.
- NetInfo is a pretty cool "directory service" for administering groups of computers... one of those unfortunate "best kept secrets" of NeXT... but what's cooler is that, in OS X, it's just one possible backend to a generic directory services API. So it's also possible to run your network using LDAP, or Kerberos/WSSAPI (er, whatever that acronym was), or NDS, or (god help you) Active Directory -- and the user experience for Mac OS X will be the same.
- You might like this... try entering ">console" at the login window.

Authors’ Note: Typing “>console” at the login window (no password necessary) and hitting “Return [Enter]” boots you directly into Darwin, skipping the Mac OS X GUI layer entirely. Sooper cool.

From: Larry Mills-Gahl <[email protected]>
Subject: NetInfo and changing network settings
One bit that I've been sending feedback on since the Rhapsody builds (pre-OS X Server) is the suggestion that you must reboot to have network settings changes take effect. This is one area in NT that drives me absolutely nuts, and I feel like billing Bill G for the time it takes for multiple restarts of every NT or 9X machine you set up!!! Unix seems to have figured this out long ago. The Mac OS has figured this out long ago!!! I appreciate the engineers being conservative, because their market is notoriously unforgiving of features that work but are not clairvoyant enough to anticipate how each luser wants the system to work. I hope that they will have this cleared up by release time.
In the interim, here is a script that HUPs netinfo services to get a hot restart.
#!/bin/sh
case `whoami` in
root)
    ;;
*)
    echo "Not Administrator (root). You need to be in order to restart the network."
    exit 1
    ;;
esac
echo "Restarting the network, network will be unavailable."
kill `ps aux | grep ipconfigd | grep -v grep | awk '{print $2}'`
echo " - Killed 'ipconfigd'."
ipconfigd
echo " - Started 'ipconfigd' right back up."
sleep 1
ipconfig waitall
echo " - Ran 'ipconfig waitall' to re-configure for new settings."
sleep 1
kill -HUP `cat /var/run/nibindd.pid`
echo " - Killed 'nibindd' with a HUP (hang up)."
sleep 2
kill -HUP `cat /var/run/lookupd.pid`
echo " - Killed 'lookupd' with a HUP (hang up)."
echo "The network has successfully been restarted and/or re-configured and is now available."

Authors’ Note: Larry gives credit to “Timothy Hatcher” as the original author of this script. You can find the original script at the bottom of the page at http://macosrumors.com/?view=archive/10-00, as well as another script about 3/4 of the way down the page which reproduces (roughly) the very useful MacOS 8-9 “Location Manager” functionality. I don’t like to cite the site MacOS Rumors as any kind of source of reliable info, since it’s 90 percent pretentiously uninformed speculation that doesn’t admit itself as such, but one of its readers did give the Mac community “the scoop” here, as far as I can tell. If Timothy Hatcher or anyone else out there wants to speak up as the original author of this script, please let me know – I’d love to ask you some questions about NetInfo. 😉


From: Dag-Erling Smorgrav <[email protected]>
Subject: mach.sym in Darwin
I'll bet you a dime to a dollar the mysterious mach.sym in MacOS/X's root directory is simply a debugging kernel, i.e. an unstripped copy of mach_kernel.


From: Paul Lynch <[email protected]>
Subject: MacOS X Daemon News
I can give you a few updates on some parts that might be of interest. In no particular order:
- the kernel (Mach) is only supplied in binary. Most MacOS X admins won't be expected to be able to build a new kernel; that requires a BSD/Mach background (and who's got that outside of Apple?) and a Darwin development system. So building it with firewall options enabled is reasonably smart.
- as well as /.hidden, you will notice that dot files aren't visible in the Finder. .hidden should be looked for in the root of any mounted filesystem, not just /.
- /private is a hangover from the old days of diskless workstations. NeXT had a good netboot option, which meant that you could stash all the local configuration and high access files (like swap - /private/vm/swapfile) in a locally mounted disk. This is all part of the Mach, as opposed to BSD, option.
- MacOS X doesn't only support HFS. It also supports UFS, and that may shed some light on some of the "but HFS does this" quirks.

Authors’ Note: The inclusion of a distinct /private now seems to make a lot of sense, especially for those who are willing to believe in grand computer-industry conspiracy theories. 🙂 I saw a Steve Jobs keynote in which he showed 50 iMacs net-booting from a single server, showing their abilities as (relatively) low-cost “Network Computers.” And who is a greater believer in NCs than Apple board member (and reputed best buddy of Steve Jobs) Oracle chief Larry Ellison? No specific documentation of the role of /private has thus far been provided by Apple (as far as I can tell), but the above explanation seems very plausible, leaving open the door for future uses as described.
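The layout being described is easy to visualize. Here is a sketch that recreates it in a throwaway scratch directory (nothing below touches the real /):

```shell
# Recreate, in a scratch directory, the Mac OS X arrangement in which
# /etc, /tmp and /var are symlinks into /private.
root=`mktemp -d`
mkdir -p "$root/private/etc" "$root/private/tmp" "$root/private/var"
( cd "$root" && ln -s private/etc etc && ln -s private/tmp tmp && ln -s private/var var )

# On a NetBooted machine, /private alone would live on a locally mounted
# disk; everything machine-specific and writable resolves into it.
ls -l "$root" | grep -- '->'
```

Mount a local disk at /private and every machine-specific, writable path follows it, while the rest of the root filesystem can stay read-only on the boot server or CD.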

From: Peter Bierman
Subject: http://www.daemonnews.org/200011/osx-daemon.html
Try 'nicl .'
Then try combinations of cd, ls, and cat.
cd moves around the netinfo directory structure
cat prints the properties of the current directory
ls prints the subdirectories of the current directory
A few minutes in nicl, and NetInfo will make a lot more sense.
Unfortunately, there's no man page yet.
Another tidbit:
In X-GM, volumes will mount under /Volumes instead of at /


From: James F. Carter <[email protected]>
Subject: Comments on Random Ramblings about BSD on MacOS X
Why a firewall with no rules? Because the firewall code has to be selected at compile time, but when the CD is burned they don't know what rules the end user will want, and they don't want to lock out any "traditional" behavior, such as the ability to play host to script kiddies with a port of Trinoo for the Mac. I agree that a set of moderately restrictive default rules would be a good idea for the average grandmother, but I can understand the developers' attitude too.
Why have a plain file /tmp/console.log in addition to the syslog? In case syslogd dies. I have this problem on Linux: there's a timing dependency which, if violated, kills syslogd, and I'm running a driver that violates it (you don't want to know the gory details of what happens when I suspend the laptop to RAM and restart an hour later). If "sync" got done, and if the file is rotated by the code that opens it, you have a chance to see your machine's death cry when it crashed and burned. I've hacked my Linux startup files to do this partially, to catch (sysadmin-induced) screwups during the boot process.
Why put resolv.conf in /var/run? ppp and dhcpd often obtain the addresses of the ISP's DNS servers during channel setup, and have to write them into resolv.conf. Modern practice, at least on SysV-ish systems like Solaris and Linux, is that /etc is potentially mounted read-only on a diskless workstation or CDROM, and dynamic info goes in /var/something.
Running the NFS daemon: Agreed that it's a security hole. Solaris only starts nfsd if /etc/dfs/sharetab (the successor of /etc/exports) contain the word "nfs". I've hacked my Linux startup files to do something similar.
/private/Drivers: I assume this contains drivers, similar to /lib/modules/$version on Linux. You wouldn't want to intermix code segments with device inodes, would you? :-) Or does recent BSD do something weird and wonderful along these lines? I've thought about a hypothetical UNIXoid operating system in which the device inode is [similar to] a symbolic link to its driver. (Paraquat on the grass?)
James F. Carter 
Internet: [email protected] (finger for PGP key)
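James's Solaris-style guard translates to a couple of lines of shell. A hedged sketch (the exports path and the echoed actions are placeholders, not Darwin's actual startup code):

```shell
# Placeholder exports file for the demo; a real guard would look at
# /etc/exports (or Solaris's /etc/dfs/sharetab).
EXPORTS=/tmp/exports.demo
printf '# shared volumes\n/Users -ro\n' > "$EXPORTS"

# Start the NFS server only if at least one non-comment line exists,
# i.e. only if something is actually exported.
if [ -s "$EXPORTS" ] && grep -q '^[^#]' "$EXPORTS"; then
    echo "exports found: would start nfsd here"
else
    echo "nothing exported: leaving nfsd off"
fi
```

Guarding the daemon this way closes the hole by default: a machine that exports nothing never listens for NFS traffic at all.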

Authors’ Note: The NFS inclusion seems like yet another attempt by Apple to include functionality “in the background” that they may or may not make use of. While this runs counter to their attempts in recent versions of Mac OS (via the “Extension Manager”) to allow users to enable or disable anything that patches traps or otherwise alters the functions of the base OS, it still makes sense. The current Extension Manager functionality was most likely included because third-party utilities offered it first, and because so many poorly-written MacOS extensions could interfere with Apple-provided OS functionality (if not hose the OS completely), rather than because Apple really wanted end-users to have fine-grained control over the OS.

The prevailing attitude at Apple regarding OS X may very likely be that since it would be very difficult for typical users to modify their kernel (at least with existing tools), it’s best to open up everything that might be needed at some current or future point. However, this holds only until someone creates a kernel extension/module interface as easy as the current MacOS Extension Manager (something that, despite FreeBSD’s /stand/sysinstall attempts, is still far away for any other *nix).


Further Exploration: Das Boot

Last time, we mentioned that holding down the “v” key at startup shows a BSD-esque text console startup rather than the standard MacOS X GUI startup. Considering that the hardware on a revision-A iBook is pretty different from the hardware in your average x86 Free/Open/NetBSD box, we thought it would be interesting to see just what XNU (the Darwin kernel) does on startup.

Let’s look at what happens at boot time (as shown in the message buffer using dmesg just after boot time). Comments are shown on following lines (after “<=”).

minimum scheduling quantum is 10 ms

<= Haven’t seen this before on any BSD boot.

vm_page_bootstrap: 37962 free pages

<= RAM on this iBook is 160 MB.

video console at 0x91000000 (800x600x8)

<= The iBook’s screen, 800×600 resolution (x 8 bit color?)

IOKit Component Version 1.0:
Wed Aug 30 23:17:00 PDT 2000; root(rcbuilder):RELEASE_PPC/iokit/RELEASE
_cppInit done

<= IOKit is Apple’s Darwin device driver scheme

IODeviceTreeSupport done
Copyright (c) 1982, 1986, 1989, 1991, 1993
      The Regents of the University of California. All rights reserved.

<= It’s nice that they included this BSD-style, without any “copyright Apple blah blah” stuff

AppleOHCI: config @ 5505000 (80080000)
AppleUSBRootHub: USB Generic Hub @ 1

<= This is the iBook’s one built-in USB port

AppleOHCI: unimplemented Set Overcurrent Change Feature

<= I believe that OHCI is the USB Open Host Controller Interface, a generic standard for USB host controllers. This option appears to be a standard USB driver parameter which is not currently implemented (?).

AppleUSBRootHub: Hub attached - Self powered, power supply good
PMU running om NonPolling hardware
IOATAPICDDrive: Using DMA transfers
IOCDDrive drive: MATSHITA, CD-ROM CR-175, rev 5AAE [ATAPI].

<= The iBook’s ATAPI 24x CD-ROM drive

IOATAHDDrive: Using DMA transfers

<= The iBook’s HD is UltraDMA/66, if I recall correctly

IOHDDrive drive: , TOSHIBA MK3211MAT, rev J1.03 G [ATA].
IOHDDrive media: 6354432 blocks, 512 bytes each, write-enabled.

<= The iBook’s 3.2 GB hard drive; there are three logical volumes on this one.

ADB present:8c

<= Not sure about this. ADB in Apple-speak generally refers to the legacy Apple Desktop Bus, which was a low-speed serial bus used for connecting keyboards/mice. The iBook does not have an ADB port; so this probably just indicates the presence of the ADB driver.

struct nfsnode bloated (> 256bytes)
Try reducing NFS_SMALLFH
nfs_nhinit: bad size 268

<= Not sure why it’s reporting errors against its own default settings for NFS (asking the average Mac user to recompile their kernel with this option is like asking the average driver to rebuild their engine with an extra cylinder). This is presumably a “beta” bug.

devfs enabled

<= The Unix devfs (separate from MacOS X drivers?) is enabled.

IP packet filtering initialized, divert enabled, rule-based forwarding enabled, default to accept, logging disabled

<= The packet filtering that we mentioned last time. 

From path: "/[email protected]/[email protected]/[email protected]/@0:11,\\mach_kernel", Waiting on <dict ID="0"><key>IOProviderClass</key>
<string ID="1">IOMedia</string><key>IOPath Separator</key>
<string ID="2">:</string><key>IOPath Extension</key>
<string ID="3">11</string><key>IOLocationMatch</key>
<dict ID="4"><key>IOUnit</key>
<integer size="32" ID="5">0x0</integer><key>IOLocationMatch</key>
<dict ID="6"><key>IOPathMatch</key>
<string ID="7">IODeviceTree:/[email protected]/[email protected]/[email protected]</string></dict></dict></dict>

<= This particular XML blob is an IOKit device-matching dictionary used to locate the boot volume, but it illustrates a broader point: configuration data throughout MacOS X is stored as XML. Kudos to Apple for this forward-looking use of XML. See below for more on this and the “defaults” command.

UniNEnet: Debugger client attached
UniNEnet: Ethernet address 00:0a:27:92:04:3a

<= I think that “UniN” here refers to the “UniNorth” Apple MoBo chipset used in the iBook, which has a 10/100BT RJ-45 interface, among other things, built into it (Ethernet networking is set up as the default under this configuration).

ether_ifattach called for en

<= Presumably “en” is the device driver type for this NIC

Got boot device = IOService:/Core99PE/[email protected]/AppleMacRiscPCI/[email protected]/KeyLargo/[email protected]/AppleUltra66ATA/IOATAStandardDevice/IOATAHDDrive/IOATAHDDriveNub/IOHDDrive/TOSHIBA MK3211MAT Media/IOApplePartitionScheme/Hard [email protected]
BSD root: disk0s11, major 14, minor 11
bsd_init: rootdevice = 'disk0s11'.

<= Finding the boot device; unsure why it calls it “BSD root” rather than “Darwin root” or just “MacOS X root.”

devfs on /dev
Ethernet(UniN): Link is up at 10 Mbps - Half Duplex

<= Yep, it’s plugged into the 10BT/half-duplex hub in my Netopia SDSL router.

Resetting IOCatalogue.
kext "IOFWDV" must change "IOProbe Score" to "IOProbeScore"

<= This appears to be a debugging (?) warning in a Darwin kernel extension.

kmod_create: ATIR128 (id 1), 23 pages loaded at 0x5878000, header size 0x1000

<= This appears to describe a kernel module/driver for the ATI Rage 128 chipset, although this Rev. A iBook uses only an ATI Rage Pro chipset. Perhaps this is a driver for the ATI family up to the Rage 128 series?

kmod_create: com.apple.IOAudioFamily (id 2), 16 pages loaded at 0x588f000, header size 0x1000
kmod_create: com.apple.AppleDBDMAAudio (id 3), 5 pages loaded at 0x589f000, header size 0x1000
kmod_create: com.apple.AppleDACAAudio (id 4), 9 pages loaded at 0x58a4000, header size 0x1000

<= Loading drivers for the audio chips on the iBook MoBo.

PPCDACA:setSampleParameters 45158400 / 2822400 =16
kmod_create: com.apple.IOPortServer (id 5), 13 pages loaded at 0x58be000, header size 0x1000
kmod_create: com.apple.AppleSCCSerial (id 6), 9 pages loaded at 0x58cb000, header size 0x1000

<= More drivers for Apple MoBo chipsets.

creating node ttyd.irda-port...
ApplePortSession: not registry member at registerService()

<= This looks like the IrDA infrared transfer port driver attempting to create a connection and failing.

creating node ttyd.modem...
ApplePortSession: not registry member at registerService()

<= It looks like it’s trying to create a connection to the modem port and failing.

.Display_ATImach64_3DR3 EDID Version 1, Revision 1
Vendor/product 0x0610059c, Est: 0x01, 0x00, 0x00,
Std: 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101,
.Display_ATImach64_3DR3: user ranges num:1 start:91800480 size:ea680
.Display_ATImach64_3DR3: using ([email protected],16 bpp)

<= These appear to be setting GUI resolution at 800 x 600 in 16-bit color (which is what they had been set to in the GUI controls) 

kmod_create: SIP-NKE (id 7), 7 pages loaded at 0x59b8000, header size 0x1000
kmod_destroy: ATIR128 (id 1), deallocating 23 pages starting at 0x5878000

<= Not sure about this; probably unloading the ATI Rage 128 driver from the kernel (?)

MacOS X’s Hardware Drivers and Support

Following on the above: many people, when they think about hardware and drivers that Apple needs to create for Darwin/Mac OS X, have one of two thoughts. It’s either “That should be really easy since Apple has all standardized hardware,” or “Won’t that be hard, since Macs use a bunch of whacked-out hardware that nobody else has ever heard of?” The answer is somewhere in between.

One of Apple’s few built-in advantages has always been that, since it creates its own hardware as well as software, it needs to support only a small fraction of the devices that any commodity x86-based OS might need to support. Furthermore, Apple’s dictum that Mac OS X will support only “Apple G3-based computers” makes it seem that hardware driver support would be relatively straightforward. This, however, is not the case. Apple’s “G3” support actually involves support for two wildly differing branches of hardware (and some “in-between” models).

Apple (circa 1996 or so) suffered in terms of hardware manufacturing and compatibility because of the amount of “non-standard” hardware it used. Aside from the obvious use of Motorola/IBM PowerPC CPUs instead of commodity x86 CPUs, (some of) Apple’s desktops used the Texas Instruments NuBus expansion system instead of PCI, AGP or ISA; a proprietary serial bus for printers/modems; the Apple Desktop Bus (ADB) for keyboards and mice; external SCSI-1 for all other peripherals; and a variety of other custom Apple MoBo (motherboard) components.

However, Apple in the “Age of Steve” has moved to a more industry standard-compliant position. When Apple ditched beige colors (starting with the Jobs-directed iMac in 1998), it also shed a lot of its older custom hardware in favor of a legacy-free environment. Apple effected a number of other hardware changes as well, moving more of the Mac OS “Toolbox” routines from custom ROM chips into software (MacOS X shouldn’t need them at all) and standardizing all of its lines on a unified motherboard architecture (UMA-1).

The new machines (the ever-fly Steve Jobs’ “kickin’ it new-school legacy-free style”) ditched floppy disk drives and the old Apple Desktop Bus and serial ports in favor of USB for keyboards, mice and low/medium-speed peripherals. Built-in external SCSI was soon eliminated in favor of a mixture of USB and IEEE 1394 (“FireWire”) for higher-speed peripherals. With the introduction of the G4-based desktops, 2xAGP replaced PCI as the video card slot, leaving three 33-MHz PCI slots for internal expansion.

Drawing OS X’s “supported models” cut-off line at Apple’s G3s (which excludes Apple or clone-vendor models with G3 CPU upgrade cards) eliminates needing to support much of the legacy hardware that Apple has used in the past. However, there are a few Apple G3 models that bridge both technology generations, creating a notable thorn in Apple’s side. Because the original G3 desktops and PowerBooks included legacy technology (ADB, Apple serial ports, older chipsets), Apple must support these devices in Darwin and Mac OS X.

Type I/II PC card support does not appear to be available in Mac OS X Public Beta (possibly the reason there is no *official* support for Apple’s IEEE 802.11 “Airport” cards); IEEE 1394 (FireWire) support does not appear to be available, either. Apple’s new UMA-2 chipset is rumored to be introduced with new models at January’s Macworld expo; however, the specs of this chipset can only be guessed at now.

The Mysteries of “defaults”

Much of this article and the previous one have been the result of aimlessly exploring the file system of MacOS X from the command line, finding things that seemed odd or interesting and wondering, “Hmm, what will this do when I poke it?” I almost wish I hadn’t stumbled onto this next item. As I investigated its operation and the history of its implementation, I just became more curious about certain design decisions. Sort of like … well, NetInfo.

The item in question is the defaults command. After finding it, reading its man page (defaults(1)), and playing with it a little, it appeared sort of cool, but pretty mundane. The command allows for the storage and retrieval of system- and application-level user preferences. Please forgive the reference if it’s too far off, but it’s like a sort of “Windows registry” for MacOS X.

To get an idea of what information the system stored, I experimented with the command to see what it would tell me. Typing ‘defaults domains’ spat back the following list of information categories, or “domains” in Apple parlance:

% defaults domains
NSGlobalDomain ProcessViewer System%20Preferences com.apple.Calculator com.apple.Console com.apple.HIToolbox com.apple.Sherlock com.apple.Terminal com.apple.TextEdit com.apple.clock com.apple.dock com.apple.finder com.apple.internet com.apple.keycaps loginwindow

Those domains that related to applications illustrated Apple’s suggested domain naming convention of preceding the application name with the software vendor’s name. In this case, on a system containing solely Apple software, the application domains all began with ‘com.apple’.

I found that there were a variety of ways to view the data contained in these domains. In order to view the data for all domains, simply type ‘defaults read’. To query instead for information specific to a domain, use the form ‘defaults read <domain name>’ (without the arrows), or to grab a specific value, use ‘defaults read <domain name> <key>’. The command also allows the setting of values using the form ‘defaults write <domain name> <key> <value>’. The man page describes a variety of other ways to use the command.
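For example, here is a round trip through a hypothetical key (the key and value are made up purely for illustration; defaults delete cleans up afterward):

% defaults write com.apple.dock ExampleKey ExampleValue
% defaults read com.apple.dock ExampleKey
ExampleValue
% defaults delete com.apple.dock ExampleKey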

Having read about XML-based plists (property lists) in MacOS X (originally from John Siracusa’s excellent series of Ars Technica articles on the OS, indexed at http://arstechnica.com/reviews/4q00/macosx-pb1/macos-x-beta-1.html), I assumed that this service might be based on an XML back-end. A quick search through the filesystem confirmed this suspicion. Per-user preferences were found under ~/Library/Preferences, with each application having its own plist file. System-wide preferences showed up under /Library/Preferences, and although they were not present on this non-networked machine, I would be willing to bet that network-wide preferences could be found under /Network/Library/Preferences.

I did a little comparison using the preference data from the system clock application. First, I read the data using the defaults command, then I located the plist version in my ~/Library/Preferences directory. The results below show how the system translates the clock data into the XML plist format.

% defaults read com.apple.clock
{
    24Hour = 0;
    ColonsFlash = 0;
    InDock = 0;
    "NSWindow Frame Clock" = "-9 452 128 128 0 4 800 574 ";
    ShowAnalogSeconds = 1;
    Transparancy = 4.4;
    UseAnalogClock = 1;
    UseDigitalClock = 0;
}

% cat Library/Preferences/com.apple.clock.plist 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist SYSTEM "file://localhost/System/Library/DTDs/PropertyList.dtd">
<plist version="0.9">
 <key>NSWindow Frame Clock</key>
 <string>-9 452 128 128 0 4 800 574 </string>

I was a bit intrigued by the date of the defaults man page: March 7, 1995, which as far as I knew was prior to the advent of XML. Apparently, there was some amount of history to this command and database that was not immediately obvious. A little research revealed that the defaults command and database date back at least to (yes, you guessed it) NeXTstep. The information I found confirms that it was subsequently adopted for OpenStep and finally MacOS X. It also appears that the back-end evolved over time, from straight ASCII string-based configuration files to today’s XML-based plist files. (As usual, if you know more about the history of this database, please feel free to share your knowledge, and please forgive my lack of knowledge of all things NeXT.)

With this history information in hand, I became curious about the present implementation. I was also curious whether an API existed. Obviously, normal applications would not use this command to store and retrieve persistent data. However, there was no mention of an API in the defaults man page, nor was there a link to any other information. A quick trip to the Apple developer website provided the information I was looking for. The API is called “Preference Services” and is a part of MacOS X’s “Core Foundation” (http://developer.apple.com/techpubs/corefoundation/PreferenceServices/Preference_Services/Concepts/CFPreferences.html). Apple provides two sets of APIs for use based on the developer’s needs: a high-level one that allows the developer to quickly set and retrieve data in the default domain (defined as the current user, application, and host), and a low-level API for setting and retrieving data in very specific domains (specific user(s), application(s), and host(s)). The API also provides a method of storing data for suites of applications such as the standard office productivity apps we all know and love to hate.

During this exploration, I had a hard time deciding whether I think this is all cool or not. On one hand, the idea of a system- (and even network-) wide preferences database seems pretty cool, especially to a BSD-er like myself, but at the same time, it is really nothing new. In the same way, at first glance, the idea of an XML-based back-end seems pretty innovative, but is it? Sure, it’s cool to look at, but so what? The existence of the defaults command and an access API means that the actual plist files are not intended for public consumption.

The defaults command and Preference Services API indicate that the whole database is supposed to be a black box to the system, the application, and especially the user. If this is the case, why not go with a high-horsepower back-end, one that offers more robust searchability and speed than could be achieved via the “crapload of text files” approach? I think the argument from Apple was supposed to be that the files should be architecture-neutral in order to be easily portable. If that was the case, why not just leverage an existing architecture-independent binary database format? I know, for instance, that MySQL can do it; why not Apple?

The other argument I can think of might be that the XML format is essentially patchable. Assuming the data is not customized too much by the running application, updates could be distributed and installed from small plist patch files. However, that doesn’t seem like a very convincing argument. All this having been said, this article will probably be published and the next day, a reader will say, “Well, <profoundly obvious answer succinctly stated>. Please unset your $LUSER variable.” The first person to write in with this wins a prize. 😉

Random Ramblings about BSD on MacOS X (Part 1)

By Jeffrey Carl and Matt Loschert

Daemon News, November 2001

This is the first chapter in a series of observations, representing the adventures of a couple of BSD admins (one with a lot of prior MacOS experience, the other with more on the BSD side) poking around the command line on an iBook laptop running Apple’s  Mac OS X Public Beta. We’ll attempt to provide a few notes and observations that may make a BSD admin’s work with Mac OS X easier.

Mac vs. Unix

Right up front, I should mention what a strange animal Mac OS X is. Since 1984, the Mac OS has always been about the concept that “my OS should work easily, without my knowing how it works.” Unix has always been about the idea that “I should be able to control every aspect of my OS, even if it isn’t always easy to figure out how.” Apple – whose very name is anathema to many Unix admins – is now trying to combine both, in the world’s first BSD-based OS that is likely to ship 500,000 copies in its first year.

Apple isn’t alone in this hybridization effort; Microsoft will presumably ship “Whistler,” its NT/2000-based consumer OS, sometime in mid-2001. It appears that everywhere, the rush is on to deliver consumer OSes based on the kernels of server operating systems. In the long run, this is probably a Good Thing®, but Mac OS X and MS Whistler will have to introduce millions of desktop users to the advantages and problems of server OS design that we as server OS admins have been dealing with for years. Real protected memory and pre-emptive multitasking, sure … but also real multi-user systems, with all of the permission and driver complexities those require. However, both Microsoft and Apple are investing the time and effort to develop a real user-friendly interface that your grandmother could configure (sorry, KDE and GNOME). It should be interesting to watch.

If you, like most other Internet server admins with a fully-functional brain stem, prefer Unix over the NT kernel, then Mac OS X will be the first true consumer OS that you will ever feel comfortable with. And, whether you love or hate Apple, it’s worth your time to get acquainted with Mac OS X and what it offers to a BSD Unix administrator.

Right now, there are a lot of questions out there about Mac OS X and its attempts to marry a BSD / Mach microkernel with an all-singing, all-dancing Mac GUI. This is largely because there is unfortunately a comparatively small crossover between Mac administrators and Unix administrators (much like the slim crossover between supermodels and serious “Dungeons and Dragons” players). Though neither Jeff nor Matt is a total über-guru in Mac OS or BSD, we’ll attempt to bridge the gap a little and give the average BSD admin an idea of what he or she will see if they peek around under the hood of Mac OS X Public Beta. Much of this is “first impression” notes and questions, without too much outside research behind it; explanations or clarifications from those who have more detailed answers are gratefully appreciated. 

On a side note, Matt and Jeff will both use “I” rather than “we” in this article as we go, since we are both members of the FreeBSD Borg Collective and think of ourselves as one entity. 😉

Poking Under the Hood: First Impressions

To examine the hidden Unix world on OS X, navigate in the GUI to Applications >> Utilities >> Terminal. The first thing you may notice is that – as you might expect – the default shell is tcsh. You’ll quickly find that many of the user apps you’d expect in a typical Unix installation are there, plus a few extras and minus a few others (showing the refreshing influence of many developers, each pushing for the inclusion of their favorite tools). Interestingly, pico is there, although Pine is not; Emacs and vi are there, also (ed may be there, but I didn’t check since I thought everyone who might use ed was fossilized and hanging on display in a museum by now). Other fun inclusions are gawk, biff, formail, less, locate, groff, perl 5.6, wget, procmail, fetchmail, and an OpenSSH-based client and server.

The first thing that I do when beginning to work on a new system is to set my shell and copy over my personal config files.  When presented with the Mac OS X shell prompt, I was pleased to see that it was tcsh; but at the same time I was a little confused at how to customize it. Usually, I just drop my .cshrc file in ~/, and off I go. Well, ~/ in this case is /Users/myusername.  This is unlike the normal /usr/home or /home that most admins are used to.  Would tcsh be able to find my .cshrc there?

Well, since I didn’t have the machine network-connected while I was originally exploring, and I was too lazy to hand-copy parts of my personal .cshrc file over to this directory, I instead began poking around /etc looking at how the default .cshrc worked.  I quickly found a small csh.cshrc which contained a single line sourcing /usr/share/init/tcsh/rc.  This is where it gets interesting.  

This rc file sets up a very cool method of providing standard system shell features that can be overridden by each Mac OS X user (assuming he/she realizes that the “shell” does not refer to the funky computer case).  The rc file first checks to see if the user has a ~/Library/init/tcsh directory.  If so, it sets this as the tcsh initialization directory; otherwise, it sets the /usr/share/init/tcsh directory as the default.  It then proceeds to source other files in the default directory, including, for instance, the environment, aliases, and completions files, each of which in turn sources files (if they exist) found in the abovementioned tcsh initialization directory.

In this way, the system provides some powerful “standard” features, but still gives the experienced user the ability to override anything and everything.  I dropped some customizations in my personal ~/Library/init/tcsh directory and immediately felt at home.  Without a doubt, the old school UNIX curmudgeons will hate the built-in shortcuts, but I still must applaud the developers for making an attempt at providing a useful set of features for the “new” user.  No one will ever agree on a standard set, but it’s good to see that the average Mac “What is this, some kind of DOS prompt?” user will have a very functional command line, should he/she choose to explore it.  I must admit that some of these completions can be a little annoying though, while typing quickly.
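As a concrete sketch, and assuming the per-user files carry the same names as their /usr/share/init/tcsh counterparts (which is how the sourcing appeared to work on my system), customizing your aliases looks something like this:

% mkdir -p ~/Library/init/tcsh
% echo 'alias ll ls -lag' >> ~/Library/init/tcsh/aliases
% source /usr/share/init/tcsh/rc

New shells pick the file up automatically; the final source is just to test the change without logging out.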

Digging Around

When you drop to the shell, you’ll notice that you’re logged in with the “user name” you selected for yourself when you installed the OS. When you create a user during the installation process, you are informed that “you will be an administrator of this computer.” Yet when you take a look at /etc/passwd from the shell, you’ll see no entry for your username. Nonetheless, if you su to “root,” you’ll notice that root’s password is the same as that of the administrative user you created. Even stranger, although you are an “administrator,” as long as you’re logged in as that user, you still don’t have the permissions of “root.” What’s going on here?  Well, it turns out that this functionality has been co-opted by Apple’s NeXT-inherited NetInfo (which I will describe below).
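A quick way to confirm this (assuming the NeXT-derived NetInfo dump utility, nidump, is present, as it appears to be) is to compare the flat file with a passwd-format dump of the local NetInfo domain:

% grep myusername /etc/passwd        (no match)
% nidump passwd . | grep myusername  (prints a passwd-style entry)

Here “myusername” stands in for whatever account name you created at installation.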

I couldn’t resist running dmesg to see what would come up.  To my surprise, the output was pretty boring.  The most bizarre thing was that a packet filter was enabled, but with a default-to-open policy and no logging.  An open firewall … why?  If they weren’t planning to enable firewall rules, why compile packet filtering code into the kernel in the first place?  At this point, I felt I had to do at least a little searching for some sign that this had been considered at one point.  Well, /etc held no clues, and there was nothing obvious in /var (the only other probable location I could think of).  I guess they simply wanted to leave the door conveniently ajar should they choose to go back and reconsider the decision.

When I visit a machine, I am always curious to check out /etc.  This directory usually reveals some interesting information about the operating system and/or the administrator.  In this case, while touring the usual suspects, I noticed something that, again, I did not expect.  The wheel group listed in /etc/group did not contain any users (it seemed like some Communist Linux system!).  You typically expect to find root along with at least one administrative user account in /etc/group.  In this case, wheel was empty, leaving me to wonder why they even kept the concept, since this functionality had apparently been folded into an alternate privilege manager. As I found out, this functionality is also handled by NetInfo.

While checking out /etc/syslog.conf, I noted that quite a bit of system information gets routed to /dev/console, as one would expect.  What was strange was that later while exploring /var/tmp (to look for tell-tale temp files), I found a file named console.log.  It appeared to contain a file version of the most recent couple of console messages.  I verified that identical (albeit fewer) entries appear when using the GUI-based console viewer.  I’m no super OS guru … but this strikes me as a strange hack.  Why not simply send the same information to a standard log file via syslog and have the GUI console app read from that?  That’s what syslog’s there for!  Maybe someone else out there can shed some light on this one.

Another oddity was /etc/resolv.conf.  It was symlinked to /var/run/resolv.conf, which seemed a little strange.  Even more so when /var/run/resolv.conf turned out to be non-existent. Okay … well, there was no named running (I checked) and the resolver man pages gave no clue.  So what was going on here?  Apparently something odd, since a quick nslookup responded with the DNS server listed as being the local machine.  No named, no resolv.conf, how about /etc/hosts?  Well, /etc/hosts had no special magic in it, but it did have a note mentioning that the file is only consulted while the machine is in single-user mode.  At all other times, the request is handled by lookupd working through the facilities provided by NetInfo.  Hmm … NetInfo was beginning to sound very interesting.

After a bit of roaming around, I became curious as to whether root would have a home directory on the machine.  Given the difference in user and administrator handling on Mac OS X, I wasn’t even sure that root would have a home directory.  But, lo and behold, there was ~root under /var (or more properly /private/var since it’s a symlink) with a set of default files and directories resembling any other user in /Users.  None of this should have come as a surprise to me since /etc/passwd contained this information.

Strange Services

While exploring the system, you’ll find some interesting services enabled.  I was surprised to see the nfs daemon running; finding it active was unexpected, since I would only run it if necessary, due to potential security concerns.  Given that nfs was there, though, it didn’t faze me to find the portmap daemon (a service required in order to run nfs) hanging around as well.

Another surprise was seeing a full-fledged NTP daemon running.  It was cool to see this service as a standard part of Mac OS, but it seemed a bit strange to do continuous high-accuracy time synchronization via ntp on a desktop, consumer OS.  Why not use ntp’s little brother, ntpdate, on a periodic basis for the same functionality?  Do Mac OS users really need the accuracy of continuous ntp, when a functionally similar result could be obtained without the security risk of running another network daemon?

The answer is that, for now, “yes they do.” Since the introduction of Mac OS 9, Mac users have been given the default option of using NTP to synchronize their desktop clocks via NTP servers, and it appears that Apple wants to give Mac OS X users (presumably in a server setting) the option of running an NTP server for other Macs on a LAN/WAN.
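For a machine that only needs its own clock kept roughly correct, the periodic approach suggested above would amount to a single crontab entry; the path and server name below are illustrative, not taken from the Public Beta:

0 * * * * /usr/sbin/ntpdate -s time.apple.com

This syncs the clock once an hour, with no daemon listening on the network in between.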

Speaking of network services, a little netstat lovin’ showed a couple of other interesting tidbits.  The machine was not as quiet network-wise as I had expected.  Along with the nfs, portmap, and ntp network activity, I found listening sockets open on ports 752 and 755, with an active connection to port 752 from another local privileged port.

After some man page reading and poking at a few of the running daemons, I was able to narrow things down a little.  I found that HUP-ing lookupd (an interface daemon to NetInfo) caused the active connection to be disconnected and reestablished (obvious since the new connection originated from a new port).  I also found that HUP-ing nibindd (the NetInfo binder, a sort of librarian for NetInfo) caused the listening sockets to be closed and reappear on new ports. That struck me as quite odd.  I would have expected them to re-bind to the same ports.  Even with these addresses changing, lookupd somehow knew where to find the ports, as I saw new connections established shortly after the HUP signal was received.

Not knowing much about NetInfo, I was curious why this service was implemented using tcp sockets.  I assumed that the service must be distributed, available to and from remote hosts.  Otherwise, the developers probably would have implemented this using Unix domain sockets since they are substantially faster and are safer for local-only protocols. Not having much background on NetInfo at the time, I was somewhat puzzled.

Welding the Hood Shut

Apple’s longstanding policy (since Mac System 1.0 in 1984) has been to prevent users from screwing up their systems by hiding important items from them, or by making it difficult to access the cool things that they might play with and use to hose their hard drives. This attitude has extended to OS X, and things like header files, gcc and gdb are not present (although these were included in Apple’s earlier “developers-only” previews of Mac OS X), not that the average Mac user would be able to “accidentally” damage their system with gcc.

Not to worry, though; the Mach / BSD portion of Mac OS X is maintained separately as an open-source operating system that Apple calls “Darwin.” In fact, a uname -a at the command line in MOSXPB reveals:

Darwin ibook 1.2 Darwin Kernel Version 1.2: Wed Aug 30 23:32:53 PDT 2000; root:xnu/xnu-103.obj~1/RELEASE_PPC  Power Macintosh powerpc

You can get the abovementioned developer goodies (and much more) by downloading Darwin or following the CVS snapshots for various packages. 

At the root of the filesystem, it’s not hard to see that most of what you would expect to find is there somewhere; it may just be in a different place than you expected. The first thing you’ll likely notice is that a lot of what’s there is being hidden from the user by the MacOS X GUI. The Mac OS has always had its share of hidden files (for volume settings, the desktop database, etc.). However, you’ll notice a file in / called “.hidden” which lists the items in / that the GUI hides. The contents of .hidden are:

bin (the Unix directory for binaries needed before /usr is mounted)

cores (a link to /private/cores; a directory – theoretically, a uniform location for placing Unix core dumps)

Desktop DB (the GUI desktop database for files, icons, etc.)

Desktop DF (don’t remember what this does)

Desktop Folder (for items that are shown on the desktop)

dev (the Unix devices directory)

etc (a link to /private/etc; contains normal Unix configuration files and other normal /etc stuff)

lib (listed, but not present in /)

lost+found (the Unix directory for “orphaned” files after an unclean system shutdown. Classic Mac OS users are used to looking in the Trash for temp files saved after a crash or other emergency shutdown)

mach (a link to mach.sym)

mach_kernel (the Mach kernel itself)

mach.sym (listed as a Mach-O executable for PowerPC, but I’m not sure exactly what it does. Any Darwin users out there with more knowledge should correct me.)

private (contains var, tmp, etc, cores and a Mac OS X directory called Drivers, whose relationship to /dev is unclear)

sbin (Unix system binaries traditionally needed before /usr mounts)

SystemFolderX (not present in /; may be a typo or used in the future?) 

tmp (a link to /private/tmp; the Unix temporary files directory)

Trash (files selected in the GUI to be deleted)

usr (just about everything in BSD Unix these days 😉 )

var (Unix directory meant once upon a time for “variable” files – mail, FTP, PID files, and other goodies)

VM Storage (the classic MacOS virtual memory filesystem – usually equal to the size of physical memory + swap space)

A brief digression about .hidden: Editing this file and adding any directory or file name will cause the GUI to hide the item. For example, edit this file in the shell and add ‘Foo,’ create a directory in / with that name, log out and log back in, and it won’t be seen. 
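As a transcript (run as root; ‘Foo’ is an arbitrary name):

# echo Foo >> /.hidden
# mkdir /Foo

After logging out and back in, /Foo is still visible from the shell but no longer appears in the GUI.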

However, it isn’t as simple as it sounds. Strangely, there’s an item in / called “TheVolumeSettingsFolder” whose name isn’t in .hidden, but which still doesn’t show up in the GUI. This indicates that there is more to what’s shown and hidden in the GUI than just the “.hidden” file. Also, adding a “.hidden” file to a directory other than “/” does not appear to hide the names it lists within that directory. Furthermore, /.hidden does not prevent files with those names from appearing in lower directories, nor does it hide items in lower directories when an absolute path is placed in .hidden. If anyone can clear this up for me, I’d love to know what’s at work here.

Getting back on track: You’ll notice that all of the traditional BSD file hierarchies are present, although the GUI hides them from the user so that they are unaware that these Unix directory structures exist. Furthermore, some of the directories you see are in fact soft links to unusual places (as mentioned, etc, tmp and var are all links to inside /private; I refuse to speculate why without having several drinks of neat Scotch first). Also, unlike previous incarnations of the Mac OS, the user is unable to rename the “System” directory from the GUI.

As a brief aside, it should be noted that Apple’s great struggle here (and the immensity of their effort should be appreciated) is to combine the classic Mac OS “I can move or rename damn near anything I want” ethos with Unix’s “it’s named that way for a reason” ethos. How Apple chooses to resolve this dilemma in the final release will speak volumes about their dilemma over (I’m ashamed to admit I’ve forgotten who put it this way, but it’s brilliant) an OS that “the user controls,” versus an OS that “allows you to use it.” And, let’s face it, Unix has always been the latter.

Also, logical or physical volumes are listed under “/” using their “name.” Rather than use a Unix filesystem hierarchy, DOS’s A: or C: drives, or even Windows 9x’s “My Computer,” Mac users have always been able to name their hard drives or partitions arbitrarily. Each drive or partition thereof was then always mounted and visible on the desktop. If I have a drive in my computer which I’ve named “Jeff’s Drive,” you’ll see it from the shell as /Jeff\'s\ Drive, and all of its file and directory contents are viewable underneath it. Similarly, if a user installs MOSX PB on a drive with Mac OS 9 already installed, at installation time their old files and directory hierarchies are all moved to a directory called “Mac OS 9” in /.


NetInfo & Friends

Well, by the time I got this far, I realized that this article wouldn’t really be complete without some discussion of the mysterious NetInfo.  I also knew that a little bit of research was in order.  While searching, I learned that NetInfo is a distributed configuration database consisting of information that would normally be queried through a half-dozen separate sub-systems on a typical UNIX platform.  For example, NetInfo encompasses user and group authentication and privilege information, name service information, and local and remote service information, to name just a few.

When the NetInfo system is running (i.e. when the system is not in single-user mode), it supersedes the information provided in the standard /etc configuration files, and is favored as an information source by system services such as the resolver.  The Apple engineers have accomplished this by hooking a check into each libc system data lookup function to see if NetInfo is running.  If so, NetInfo is consulted; otherwise, the standard files or services are used.

The genius of NetInfo is that it provides a uniform way of accessing and manipulating all system and network configuration information.  A traditional UNIX program can call the standard libc system lookup functions and use NetInfo without knowing anything about it.  On the other hand, MacOSX-centric programs may talk directly to NetInfo using a common access and update facility for all types of information.  No longer does one have to worry about updating multiple configuration files in multiple formats, then restarting one or more system daemons as necessary.

The other benefit of the system is that it is designed to be network-aware from the ground up.  If information cannot be found on the local system, NetInfo may query upward to a possibly more knowledgeable information host.

NetInfo also knows how to forward requests to the appropriate traditional services if it does not have the requisite information.  It can hook into DNS, NIS, and other well-known services, all without the knowledge of the application making the initial data request.

NetInfo is a complex beast, easily worth an article of its own.  If you want more information, here are a few tips.  I found that reading NetInfo man pages was frustrating.  Most of the pages tended to heavily use NetInfo-related terms and concepts with little to no definition.  Nevertheless, if interested, check out netinfo(5) (netinfo(3) is simply the API definition), netinfod(8), nibindd(8), and lookupd(8).  However, the best information that I found was on Apple’s Tech Info Library at http://til.info.apple.com/techinfo.nsf/artnum/n60038?OpenDocument&macosxs.

Getting More Information

Finally, for a great resource on all things Darwin, don’t go to Apple’s website (publicsource.apple.com/projects/darwin/). Instead, go to www.darwinfo.org, and you’ll find lots of great stuff, including the excellent “Unofficial Darwin FAQ.”

If you don’t have access to MacOS X Public Beta but would like to read its man pages to get a better idea of how some of these commands are implemented, you can find a collection of MOSXPB/Darwin man pages online at www.osxfaq.com/man.

Jeff and Matt hope this little tour has been semi-informative and has raised the curiosity of the few brave souls who have made it to the end of this MacOS travelogue.  Next time we will take a look at … and discuss ….  See you next time.

Open-Source vs. Commercial Shoot-Out

By Jeffrey Carl, Matt Loschert and Eric Trager

Web Hosting Magazine, May 2001

Web Hosting Magazine represented the high water mark of the market for print publications around the Internet Service Provider industry. In other words, it was probably my last-ever opportunity to get a paid gig writing for something that would appear physically on a newsstand. It was intended to be more focused than Boardwatch and, according to Wikipedia, “was written and edited with a deliberately ‘edgy’ style, designed to appeal to the primarily young and male constituency of the ISP industry.” Matt Loschert, Eric Trager and I contributed a few long-form articles before the magazine – like the rest of the telecom industry at the end of the dot-com era – vanished without much of a trace.

Welcome to part one of the Open Source vs. Commercial Server Software Shootout. This time around, we’ll be looking at Web Server Software (Microsoft IIS vs. Apache); Site Scripting (Microsoft ASP vs. PHP); and Microsoft Web Compatibility (FrontPage extensions and ASP options). Before you send nasty response e-mails (these, especially nasty threats, should be addressed to [email protected]), keep in mind that this is merely the advice of three long-time server admins; your mileage may vary. Tax, tags, title and freight not included.

Web Server Software


The breakdown between commercial and open-source web server engines is more about the operating system than the software itself. The most popular commercial option is Microsoft’s IIS, or Internet Information Server. It runs only on commercial operating systems: Windows NT, or the more robust Windows 2000. The far-and-away leader in open-source server options is Apache, and, while it features a version that runs in the Windows server environment, its flagship server runs under even obscure flavors of Unix and has no real competition. 

The battle of IIS vs. Apache in raw performance evaluations is as tangled as the Linux vs. NT benchmarks are. Depending on whose tests or opinion you listen to, you could hear everything from “Apache is blown away at heavy web loads” to “IIS all-around blows goats.” However – all OS and configuration issues aside – it’s probably safe to say that IIS performs better with heavy loads of static pages and Apache tends to handle lots of dynamic DB/CGI stuff better (at low loads, there’s little performance difference to speak of). Keep in mind, though, that because IIS is so tied to Windows and the Apache/Win32 version lags behind its Unix counterparts, it’s unlikely that there will ever be a really even contest between the two.

Since IIS is the clear leader with Windows servers and Apache is by far the most popular choice for Unix foundations, the comparisons here are largely about the duets of OS and webserver. Such a comparison is appropriate, considering how critical the operating system is in funneling the horsepower to a server with hundreds if not thousands of simultaneous clients (we hope your state motor vehicles department is listening).


Commercial Option: Microsoft Internet Information Server 5.0

Platforms: Windows NT/2000

Publisher: Microsoft, www.microsoft.com/windows2000/library/howitworks/iis/iis5techoverview.asp

IIS comes free with Windows 2000 Server, and was available as part of an option pack for Windows NT Server. So IIS is technically free, but if your goal is to run a webserver and you choose IIS, you’re paying for it by choosing NT or Windows 2000: NT still starts at around $500 for the base server package, while Windows 2000 goes for around the same price, and that’s only with a five-client license. While that license permits as many web hits as possible, those hits can only be unauthenticated requests; simultaneous accesses under a username/password are limited to the number of users permitted by the license (which really cramps your style if you’re running adult-oriented membership sites).

There are advantages to using IIS. One is a relatively simple installation and set-up process. Using an all-too-familiar GUI, a few points and clicks will have IIS serving pages on your Windows system in no time. Once running, IIS is administered with the same familiar Windows GUI and allows for easy access to run-time information including current access statistics and resource consumption. IIS also includes some niceties Apache doesn’t offer, such as making it easy to throttle bandwidth for individual hosts. The IIS setup allows file security to be set easily through the GUI (Apache allows some of this, but not as many options as IIS, and relies on Unix file security options for the rest). Also, it has a clear advantage in dealing with sites created using Microsoft tools (see Microsoft Web Compatibility below).

Using IIS also has its drawbacks. When looking at the open-source model as another option, the potential for security issues with IIS has to make you sleep less easily. Open-source bugs are usually squashed pretty quickly, when there are thousands of developers who examine the insides of a program. Microsoft has had issues in the past dealing with revealed exploits in an (un-)timely fashion, and there’s no reason to believe that IIS is any more immune than Internet Explorer or Windows itself.

Second, because it is tied to Microsoft’s server OSes, you’re … well, you’re tied to Microsoft’s server OSes. We don’t have the several thousand pages it would take to get into this, so we’ll leave it as a matter of preference. If it’s “the right tool for the job” for you, so be it. 

ALTERNATIVES: There are plenty of other commercial web server software options. The leader in marketshare after IIS, as measured by Netcraft (www.netcraft.co.uk/survey/), is iPlanet Enterprise Edition 4.1 (www.iplanet.com/products/infrastructure/web_servers) – the former Netscape server, now a Netscape/Sun collaboration – which starts at $1495; the iPlanet FastTrack edition is available for free. Zeus (www.zeus.com) is widely known as a heavily optimized, very speedy server for Unix platforms, and starts at $1699. 

WebLogic (www.bea.com/products/weblogic/server) is a popular application server that costs some amount of money, which I wasn’t able to figure out by reading their website. WebSite Professional (website.oreilly.com), the “original Win32 webserver,” starts at $799. And, for those brave enough to run a webserver on MacOS 8/9, there’s WebStar (www.webstar.com/products/webstar.html), which starts at $599.

Open-Source Option: Apache 1.3.12

Platforms: AIX, BSD/OS, DEC Unix, FreeBSD, HP/UX, Irix, Linux, MacOS X, NetBSD, Novell NetWare, OpenBSD, OS/2, QNX, Solaris, SCO, Windows 9x/NT/2000 (plus many others; see www.apache.org/dist/binaries for a list of supported platforms with pre-compiled binaries)

Publisher: The Apache Software Foundation, www.apache.org/httpd.html

The Netcraft (www.netcraft.co.uk/survey/) survey listed the Apache webserver as running on 61 percent of the sites reporting. While Apache’s awesome domination has slipped a little in recent years, it still holds the majority and runs well ahead of IIS, which has (as of this writing) a relatively paltry 19.63 percent of the market. Why is it that Apache has such a Goliath-like grasp on the field of webservers in use?

The reasons are numerous and simple. First, Apache is free. Consider that the business of the Internet is still relatively young and, therefore, not terribly consolidated… yet. There are quite a few small hosting providers that need powerful web servers, and Apache presents an obvious choice, running on equally inexpensive operating systems like Linux and FreeBSD. Consider also that many businesses host sites remotely with those same providers, and Apache presents what has traditionally been the easiest server to configure and maintain remotely (through Telnet/SSH).

Second, Apache is stable and powerful. In years of work on both lightly and heavily worked Apache servers on different operating systems, I’ve had to work very hard to create a scenario that choked the server. The only problem that consistently knocked down Apache was the human one: like any program, Apache can’t work around typos in the configuration file and sometimes isn’t too descriptive about what’s wrong even through its apachectl configtest mechanism (although you’d be amazed at what you can get away with). Also, while IIS allows easy setup of most security permissions through its GUI, it can’t compete with the easy flexibility of Apache’s htaccess feature (especially when doing things like limiting access to a host with password permissions).
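As a quick illustration of that htaccess flexibility (the file paths and realm name here are hypothetical – adjust them for your own layout), password-protecting a directory is just a matter of dropping a small text file into it:

```
# .htaccess -- require a valid username/password for this directory
AuthType Basic
AuthName "Members Only"
AuthUserFile /usr/local/etc/httpd/passwd/members
Require valid-user
```

The password file itself is created with Apache’s htpasswd utility (htpasswd -c /usr/local/etc/httpd/passwd/members username), and no restart is needed – Apache picks up .htaccess changes on the next request.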

The Apache system is modular in nature as well. The initial package has what is necessary to get a web server online, but to accommodate special functions like FrontPage compatibility, persistent CGI, on-the-fly server status or WebDAV, server modules can be downloaded and installed to be incorporated into the Apache daemon. Apache maintains a directory of resources on available modules at www.apache.org/related_projects.html#modulereg. The benefit is a streamlined program by default for what the majority of users need, and the ability to fatten it up for special functions only when necessary.
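For example, the on-the-fly server status function mentioned above comes from mod_status; with Apache 1.3’s dynamic shared object (DSO) support, enabling a module like it is a few lines in httpd.conf (the module path and domain shown are hypothetical):

```
# Load the module into the running server (Apache 1.3 DSO syntax)
LoadModule status_module libexec/mod_status.so
AddModule mod_status.c

# Expose the status report, but only to our own hosts
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from .example.com
</Location>
```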

Apache has a few disadvantages. It isn’t nearly as easy to install as IIS; but for anyone familiar with administrative aspects of Unix systems (and that base is growing fast with Linux’s ballooning popularity), it’s rather simple to obtain, install, and configure. Support for Apache is free, but is not as easy to obtain as picking up the phone and calling Microsoft. Of course, considering the user base and its roots in a largely helpful Unix culture, plenty of assistance is there if you’re willing to look (and, please, look first). Lastly, Apache’s performance in some situations isn’t all it could be – hopefully the impending arrival of the heavily-rewritten Apache 2.0 will fix much of that.

ALTERNATIVES: Popular (relatively speaking) alternatives include AOL’s in-house (but open-source) AOLserver (www.aolserver.com); and the tiny but very fast thttpd (www.acme.com/software/thttpd).

Final Judgement:

Even the O.J. Simpson jury wouldn’t have much problem with this one. Apache takes the lead in so many categories that it’s not much of a contest. The user statistics don’t lie, either: in terms of servers out there, Apache beats IIS by three to one. While on the surface this could be attributed to its cost (zero), remember that IIS is also free if you already have Windows NT/2000 (you did pay for that, right?). Ignoring that factor, Apache is generally regarded to be superior to IIS in reliability and overall performance, which, with a price tag of zero and a support network of users across the globe, makes it an extraordinary value. 

IIS has some bright points, which is important if you’re in the position of being locked into running your webserver on a Microsoft platform. If Microsoft is your exclusive IT vendor, if you’re hosting sites created with Microsoft tools or using third-party apps only available for Windows, or you have lots of Win NT/2K experience and little or no desire to learn Apache and Unix, IIS is your clear choice. Otherwise, even if you have Microsoft platforms running your IT infrastructure, administrative functions, etc, it may be well worth it to purchase additional systems to run Unix, just so your web functions can be handled by Apache.

Microsoft FrontPage Compatibility

For a web hoster, it’s becoming increasingly imperative to be able to provide compatibility for the primary issues associated with sites designed using Microsoft tools: support for FrontPage (Microsoft’s website-building tool, included with Office 2000) and Active Server Pages (Microsoft’s site scripting/application development framework with ODBC database connectivity). On Windows, the commercial option of Microsoft Internet Information Server (IIS) enjoys an obvious advantage, since it’s playing on its “home field,” built by its “home team.” Still, there is a free FrontPage alternative willing to put up a fight on Unix, as well as open-source and commercial alternatives for ASP and ODBC on Unix and Windows.

Commercial Option: Internet Information Server 5.0

Platforms: Windows NT, Windows 2000

Publisher: Microsoft, www.microsoft.com/windows2000/library/howitworks/iis/iis5techoverview.asp

Basically, IIS is the software that FrontPage and ASP were built to run with. Windows 2000 Server (which includes IIS 5) runs for about $950 – $1000 with a 10-client license. If you’re hosting on a Windows NT/Windows 2000 platform, this is a no-brainer. If, however, you’re hosting in a Unix (or anything else besides WinNT/2K) environment, IIS simply isn’t an option. There are no open-source alternatives I’m aware of for Apache/Win32.

Under Windows 2000, setup of IIS Server Extensions is pretty simple. After creating a new Web Site in the IIS administration interface, right-click on a site and select “Install Server Extensions.” A wizard will guide you through setting up the extensions for that host, including e-mail options (which you can set for that server or another mail host). After installation, you can select properties for that host’s server extensions through the “properties” dialog box by right-clicking on the host file. 

Once they’re set up, IIS (at least in my experience) supports FrontPage extensions and ASP scripts very well and without much need for special administration. If (when) you experience problems, you should check out the FrontPage Server Extensions Resource Kit (SERK) at officeupdate.microsoft.com/frontpage/wpp/serk/. Note that IIS 5 also provides support for Active Server Pages and ODBC (see below) on the Windows platform.

Free Software Option: FrontPage Extensions 4.0 for Unix

Platforms: AIX, BSD/OS, FreeBSD, HP/UX, Irix, Linux, Solaris/SPARC, Solaris/x86, Tru64 (Digital)

Publisher: Ready to Run Software, www.rtr.com/fpsupport

The FrontPage Server Extensions for Unix (FPSE-U) version 4.x (version 3 was for FrontPage 98, 4.x is for FrontPage 2000) are available only for Apache on Unix platforms. These consist of an Apache module (mod_frontpage) and special binary server extensions that duplicate the publishing and server-side features offered by FrontPage. Note that I say this is a “free” software alternative, since it doesn’t cost you anything, but it isn’t open-source (sorry, Richard Stallman).

To install the FPSE-U, go to the Ready to Run Software site mentioned above, and download the extensions for your platform. These contain the FrontPage Server Extensions, the Apache module, and two installation scripts: fp_install.sh (to install the extensions on the server and for each required host/virtual host) and change_server.sh (to replace your Apache installation with a default one including the FP module). I strongly recommend that instead of running change_server.sh, you download the “Improved mod_frontpage” (home.edo.uni-dortmund.de/~chripo) and then compile your own Apache (following the instructions at www.rtr.com/fpsupport/serk4.0/inunix.htm#installingtheapachepatch), which will allow you to customize it and include the modules of your choice.

Once you’ve installed it, the FPSE-U generally work fine. You’ll need to learn to use the FPSE-U administration tool, fpsrvadm.exe (check the FPSE-U SERK for documentation), usually installed into /usr/local/frontpage/currentversion/bin/, which generally isn’t much of a hassle. Note that I say “generally,” since (in my experience, at least) the FPSE-U have a habit of occasionally throwing weird errors that require a reinstall (especially as you try to support more FP users on Unix with more advanced requirements).

For troubleshooting, check out the RTR FrontPage 2000 documentation (www.rtr.com/fpsupport/faq2000.htm) and their message boards (www.rtr.com/fpsupport/discuss.htm) as well as the official FrontPage SERK (listed above). Note that the FPSE-U don’t provide any sort of compatibility for ASP or ODBC connectivity to Microsoft Access databases (which IIS/Win32 does, as do the ASP alternatives listed below).

Final Judgement:

This simply comes down to your choice of platform. If you’re running Windows NT/2000 Server, there’s no reason (you’ve already paid for it) not to use IIS for this. If you’re running any version of Unix, the free software alternatives are all you have. If you’re running Apache for Win32 … well, why are you running Apache for Win32?

Active Server Page (.asp) Support on Unix

Commercial Option: Chili!Soft ASP 3.x

Platforms: AIX, Solaris, OS/390, HP/UX, Linux

Publisher: Chili!Soft, www.chilisoft.com/chiliasp/default.asp

ASP on Unix is already playing at a disadvantage, but for those wanting to support it on Unix hosting platforms (which have their own long list of benefits), it’s a necessary evil to deal with. Chili!Soft’s ASP product is relatively expensive ($795 for a single-CPU Linux or NT license; $1995 for Solaris, HP/UX or AIX). However, it seems to be reliable enough that IBM and Cobalt (recently purchased by Sun) have agreed to make Chili!Soft’s product an option on their server offerings; and, from most reports, it seems to work as advertised. 

For all supported platforms, Chili!Soft offers a fully functional 30-day trial version at www.chilisoft.com/downloads/default.asp. Installation may require some pain-in-the-butt steps, but these are documented pretty well by going to www.chilisoft.com/caspdoc and following the directions for your platform of choice. After installation, things usually go pretty much as expected.

For support, you can check out their well-maintained forums (www.chilisoft.com/newforum), which offer frequent responses from Chili!Soft support employees, read their other documentation resources (www.chilisoft.com/support/default.asp), or call a phone number (for certain paid support packages).

ALTERNATIVES: These include HalcyonSoft’s Instant ASP [iASP] (www.halcyonsoft.com/products/iasp.asp), which is a Java-based solution that will run on any platform which supports a Java Virtual Machine (JVM). HalcyonSoft seems very committed to making their product available for as many platforms as possible, and offers a free “developer version” download. Note that for Win32 platforms, your commercial alternatives include Microsoft IIS, Chili!Soft ASP, iASP, and others.

Open-Source Option: Apache::ASP

Platforms: Any Unix platform supported by mod_perl (perl.apache.org)

Publisher: NodeWorks, www.nodeworks.com/asp

Apache::ASP is a Perl-based Apache module that works with mod_perl (with the common CGI.pm Perl module to handle uploads, and Perl’s DBI.pm/DBD.pm modules for connectivity to databases) to deliver ASP functionality. This module is heavily dependent on Perl, and the better you are with Perl, the better you will be with using and debugging Apache::ASP.

To install, find the latest version of Apache::ASP at http://www.perl.com/CPAN-local/modules/by-module/Apache/. Run Perl on the Makefile, then run a ‘make’ and ‘make install’ on the module (for more help, see www.nodeworks.com/asp/install.html). Additional resources for using Apache::ASP can be found at its homepage.
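Once the module is installed, hooking it into Apache is a short httpd.conf stanza (the Global directory below is our own assumption – point it wherever you keep your global.asa and state files):

```
# Have mod_perl hand any .asp file to Apache::ASP
<Files ~ (\.asp$)>
    SetHandler  perl-script
    PerlHandler Apache::ASP
    PerlSetVar  Global /usr/local/apache/asp_global
</Files>
```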

For support, try checking out the Apache::ASP FAQ at www.nodeworks.com/asp/faq.html, or the mod_perl mailing list archives at www.egroups.com/group/modperl.

ALTERNATIVES: For more information on ODBC database connectivity through Perl’s DBI/DBD, see www.symbolstone.org/technology/perl/DBI (free) or www.openlinksw.com (commercial). 

Final Judgement:

Since Apache::ASP works through Apache’s Perl module, be aware that its performance should be good, but, under heavy server loads, is unlikely to be as good as ASP support through a native binary (a similar concern exists for Halcyon’s Java-based iASP product). Apache::ASP is a good choice for basic installations or non-high-usage installations, but heavy usage, absolute mission-criticality or need for commercial-quality support will probably demand you use the Chili!Soft product (iASP provides commercial-quality support, as well).

Site Scripting Options

Commercial Option: Active Server Pages (ASP)

Platforms: Windows NT/2000, Windows 9x

Publisher: Microsoft, http://support.microsoft.com/support/default.asp?PR=asp&FR=0&SD=GN&LN=EN-US

Open Source Option: PHP Hypertext Preprocessor 4

Platforms: Most Unix platforms, Windows NT/2000, Windows 9x (See Apache platform information above for details)

Publisher: PHP Project, http://www.php.net


As the Internet and the web have matured, CGI and dynamic content have rapidly replaced static HTML content for many sites. Unfortunately, as most developers will tell you, the increasingly complex requirements for dynamic sites have exposed many of the limitations of traditional CGI. However useful CGI was in extending the old infrastructure, it falls way short of current requirements for dynamic site design.

Quite a few software products are attempting to address the shortcomings of CGI and the desires of developers. They include products such as Microsoft ASP, Allaire ColdFusion, PHP, Apache ModPerl, and JSP, as well as others. 

We’ll take a look at ASP and PHP, two popular scripting packages that, at first glance, appear to be very different beasts; but on closer examination, show a striking number of similarities. 

ASP/PHP Common Features:

Script-Handling in the Web Server

One of the problems with CGI is that script execution usually requires the web server to spawn a new process in order to run the CGI script. The overhead of process creation is very high on some platforms. Even on platforms where it is efficient, CGI process creation can devastate performance on even moderately busy sites.

ASP and PHP have both sidestepped this problem by allowing the web server to handle the script itself. Neither one requires the web server to create an external process to handle a dynamic request. The script execution functionality resides in a module that is either built-in or dynamically loaded into the server on demand. The ASP functionality is part of Microsoft IIS, while PHP may be compiled directly into a variety of web servers including Apache and IIS. Although ASP and PHP handle this differently, the end result is more or less the same.
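For instance, with PHP compiled as an Apache module, a few httpd.conf lines are all it takes to have the server execute scripts in-process rather than forking a CGI (module paths vary by installation; these are typical Apache 1.3/PHP 4 defaults):

```
# Load the PHP interpreter into Apache itself as a shared module
LoadModule php4_module libexec/libphp4.so
AddModule mod_php4.c

# Hand any .php file to the in-server interpreter -- no new process
AddType application/x-httpd-php .php
```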

Session Handling

Due to the “stateless” nature of HTTP and CGI, it’s very difficult to “follow” a site visitor and keep track of his/her previous selections. As a kludge, developers have often posted data from page to page in hidden HTML form fields. This is neither convenient nor fail-safe, since it’s unwieldy and data can be lost between pages or compromised by malicious users.

ASP and PHP both have added support for “sessions.” A session allows the web server to simulate a continuous interaction between the user and the server. The server assigns the user a form of identification, and then remembers it and the data attached to it until the next request from that user arrives. Sessions minimize the amount of housekeeping work the developer must do to track the user across page requests. Again, ASP and PHP both provide good facilities for this.
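In PHP 4, for example, session handling boils down to a single call plus the global session array (this sketch uses PHP 4-era syntax, and the variable name is our own invention):

```
<?php
// Start (or resume) a session; PHP issues a cookie holding the session ID
session_start();

// Remember a per-visitor page counter between requests
if (!isset($HTTP_SESSION_VARS["views"])) {
    $HTTP_SESSION_VARS["views"] = 0;
}
$HTTP_SESSION_VARS["views"]++;

echo "You have viewed this page " . $HTTP_SESSION_VARS["views"] . " time(s).";
?>
```

ASP’s Session object works the same way conceptually – Session("views") = Session("views") + 1 – the server does the bookkeeping either way.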

Persistent Database Connections

Database access is a critical element for dynamic sites, providing a way to store and retrieve vast amounts of data that can be used to display tables of information, store user information/preferences, or control the operation of a site. A traditional CGI script has to connect to the database for each web request, query it, read the data returned, close the connection, and finally output the data to the user. If that sounds inefficient, it’s because it is.

To address this problem, ASP and PHP both support “persistent” database connections. In this environment, the web server creates a database connection the first time a user makes a request. The connection remains open, and the server reuses it for subsequent requests that arrive. The connection is closed only after a set period of inactivity (or when the web server is shut down). This is possible because the script execution environment remains intact inside the web server, as opposed to CGI where it is created and destroyed with the execution of each web request.

Again, this feature is a win for both ASP and PHP. The syntax differs between the two, but both are simple and require no relearning of your existing database development knowledge.
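On the PHP side, for example, the difference between an ordinary connection and a persistent one is a single function name – mysql_pconnect() instead of mysql_connect() (the database name and credentials below are hypothetical):

```
<?php
// Persistent connection: if this server process already holds an open
// link to the same host/user, PHP reuses it instead of reconnecting
$db = mysql_pconnect("localhost", "webuser", "secret");
mysql_select_db("sitedb", $db);

$result = mysql_query("SELECT title FROM articles", $db);
while ($row = mysql_fetch_array($result)) {
    echo $row["title"] . "<br>\n";
}

// Note: no mysql_close() -- the link stays open for the next request
?>
```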

Facilities for Object Design

A lot of useless information about Object-Oriented (“OO”) programming has been written, much of it presumably by people who either 1.) didn’t know what they were talking about, or 2.) were under the influence of Jack Daniel’s and horse tranquilizers. Here’s the real deal: OO design and programming allows you to encapsulate ideas in code through the combination of data and related data manipulation methods. Traditional programming has focused on the procedure that a program should follow, with data used as a “glue” to store information between and during procedures.

OO design focuses instead on the data and how it should act under different circumstances. This is done by housing data and function in distinct packages, or “objects,” that can’t interfere (except in predefined ways) with other objects. It is not a cure-all for poor programming, but it is a facility that allows for the creation of better software through good design and programming techniques. ASP and PHP both support the OO notion of classes (the definition of an object) in order to develop applications that are more modular, robust, and easier to maintain.
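A tiny PHP 4 class (our own illustration) shows the idea – the counter data and the only code allowed to touch it travel together in one package:

```
<?php
// PHP 4 class syntax: "var" declares a member variable
class Counter {
    var $count = 0;

    function increment() {
        $this->count++;
    }
    function value() {
        return $this->count;
    }
}

$c = new Counter;
$c->increment();
$c->increment();
echo $c->value();   // prints 2
?>
```

ASP offers the same facility in VBScript through Class … End Class blocks.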

Access to External Logic Sources

Many designers would argue that for a truly complex installation, the site’s application or business logic should consist of objects completely external to the website and its presentation facilities. This is highly desirable since it allows you to build business objects that can be used for more than just your website and can be maintained apart from your site presentation code.

ASP and PHP allow you to access two types of external logic sources. The first, COM, is Microsoft’s Windows-based object framework. The second, CORBA, is a generalized, network-transparent, architecture-neutral object framework. This support, especially of CORBA, is a huge win for both ASP and PHP.

ASP Features:

Ability to Utilize Different Programming Languages

When designing ASP, Microsoft made the decision not to tie the technology to a particular programming language. Although VBScript is what most people think of when they hear ASP, other popular language options include Microsoft’s JScript and PerlScript. This flexibility is especially useful if you or your development team has previous experience with one of these languages and your schedule precludes learning a new language for your current project.

Tight Integration with other Microsoft Software and Tools

For some, the ability to use Microsoft tools is a blessing; for others, a curse. For better or for worse, ASP is tightly integrated with other Microsoft tools. If you are comfortable with a Microsoft development environment, this could easily decide your scripting package for you. On the other hand, this tight integration can turn into a nightmare if you encounter insurmountable platform issues, as many Microsoft Windows tools do not have analogues on other platforms.

PHP Features:

Written Specifically As a Web Scripting Language

PHP benefits from being written from the ground up as a web-scripting language. It is tight and targeted, without the bloat that a general purpose programming language would have. This results in a package that is extremely stable, has a small memory footprint, and is lightning fast.

Integrates With Many Platforms and Web Servers

Version 4 of PHP was written using a new generalized API, which allows it to be ported more easily to other architectures and integrated into a variety of web servers, including Apache, Roxen, thttpd, and Microsoft’s IIS. It also is available for just about every type of UNIX as well as Windows 9x/NT/2000.

Final Judgement:

Both PHP and ASP have addressed many of today’s most critical web development needs. Both offer a ton of features, easy interfacing with all sorts of databases, and the facilities to create complex applications. For many, the decision will come down to a preference for one language or another, depending on what he/she has experience with already.

If the programming language is not a decision-making factor, the decision will probably come down to a choice of hosting platform. If you host on Windows, IIS/ASP is the obvious choice because of their tight integration. PHP operates quite nicely under Windows, but who in his/her right mind would turn down all of the resources Microsoft provides when working on its own platform? On the other hand, if you’re Unix-savvy, the decision to go with PHP is just as simple. PHP was born on Unix and built on top of Apache.

In the end, the question may not really be which solution is better, but which better fits the resources and needs of your project.


About the Authors:

Jeffrey Carl (Microsoft Web Compatibility) is the Linux/BSD columnist for Boardwatch Magazine, and has written for the Boardwatch Directory of Internet Service Providers, CLEC Magazine and ISPworld. His writings have been featured on Daemon News, Linux Today, and Slashdot. He saw the plane dragging the “Web Hosting Magazine Kicks Ass” banner over the Spring 2000 ISPCON and agreed.

Matt Loschert (Site Scripting) is a web-based systems developer with ServInt Internet Services. Although he works with a variety of systems, programming languages, and database packages, he has a soft spot in his heart for FreeBSD, Perl, and MySQL. He begged Jeff to let him pretend to be a writer after hearing how cool Web Hosting Magazine was.

Eric Trager (Web Servers) currently serves as managing director for ServInt. With a background in personnel and operational management, Mr. Trager oversees ServInt’s daily business functions. His technical background includes four years in Unix server administration. He’s never read Web Hosting Magazine, but Matt and Jeff told him it was cool, so he foolishly believed them and wrote an article.