The NFL Super Insider #1

By Jeffrey Carl

Bloggers To Be Named Later, February 26, 2012

Bloggers To Be Named Later was Paul Caputo’s fabulous sports-blogging empire of the mid-2010s. My role in the enterprise was to promise to write humor articles and then not do that, or at least not remotely on time. Ultimately, after a flirtation with viral Internets fame, the site basically turned into an excuse for Paul to get free baseball tickets, which is actually about the only good reason to run a blog of any sort. After the BTBNL site wound down, I realized that I hadn’t kept local copies of most of the stories I had written, so I ended up scouring through The Internet Archive to find as many as I could in order to prevent a tragic loss to the world’s cultural canon of blog posts complaining about the Seattle Mariners. You’re welcome.

It is an immense honor for a podunk blog of this type to add The NFL Super Insider to its roster of writers. The NFL Super Insider has a hidden identity because he, she—or it—is constantly in contact with the league’s most elite and powerful. That’s why the NFL Super Insider is privy to the biggest scoops, the deepest secrets, and the hottest insider knowledge that prick Jay Glazer can only dream about.

Agent 66
WHO is it??? Is this the NFL Super Insider???

With that being said – on to this week’s NFL Super Insider Report!

Maybe THIS is the NFL Super Insider! Could it be???

Hot Item: At least one of the Green Bay Packers is spending his offseason well: B.J. Raji is starring in a new set of TV commercials. In these commercials, he has even invented his own dance, called the Disco Double-Check! Personally I don’t think the dance is very good, but I’m just happy to see an under-appreciated offensive lineman like Raji getting work. Rumor has it that in future commercials a certain Green Bay quarterback (maybe Matt Flynn!) plus a Packers sideline dancer with a beard will make a guest appearance as well!

BC Lions cheerleaders
Is THIS the NFL Super Insider??? Probably not but you should check closely.

Breaking News: Chicago Bears fans have been looking forward to next year, as their legendary offense returns in healthy form. But I’m hearing from those “in the know” in Chicago that quarterback Jay Cutler may not be 100% next year as he continues to struggle with what one team source called a “hurt vagina.” I’m not familiar with the injury, but from what I’m hearing it has been a recurring problem throughout Cutler’s career—stay tuned!

Wonder Woman
Is this the NFL Super Insider? Unlikely, but do you notice a trend? Keep reading to find out if your answer is correct.

Flash: Very highly placed League sources tell me exclusively that a blockbuster trade is on the way for the Indianapolis Colts! According to these Mega-Insiders, the Colts are set to deal away Peyton Manning to a dark-horse suitor: the St. Louis Cardinals! It’s said that new Colts General Manager Bill Pullman is pulling out all the stops to deal the longtime Indy quarterback for the Cardinals’ first-round picks in 2012 and 2013. The last holdup to getting a deal done is the Cardinals’ request for a “left-handed reliever” which may be a code name for a cornerback, or it may be some slang reference to gay sex. Best of luck to Peyton with the Cardinals either way!

Megan Fox
Yeah, at this point it’s just gratuitous

Hot Item: One of the NFL’s most prolific tweeters has caused a scandal yet again! Fox NFL’s beloved robotic mascot Cleatus (@CLEATUSonFOX) ignited a firestorm last week with this verbal barb:

Infamous Cleatus tweet

Whoa, big guy – let’s leave the politics out of things. I prefer the “classic” Cleatus, known for his hilarious insights on everyday life covering the NFL, like:

Like we all haven’t thought that before!

That’s all for this week! Keep your ears to the ground, keep reaching for the stars, and keep your hands to yourself – just like famous bluesman Leonard Skinnerd used to say!

How Mutants Can Save Major League Baseball

By Jeffrey Carl

Bloggers To Be Named Later, February 26, 2012


Since the runaway success of Bloggers To Be Named Later, every week I get hundreds of e-mails from avid fans asking me common sports-related questions, like “Do you need C1AL1S or V1AGRA cheap???!?”

Wonder Woman
Apparently she’s very excited to meet me and just needs a credit card!

But occasionally I get actual questions from readers, and by far the most common one is “how to save Major League Baseball?” Each time, I patiently explain that it’s complicated, because you have to have pitched at least three innings unless the lead in the game was less than three runs, in which case you only have to pitch one inning. Then they tell me that I misunderstood their question and we start over.

So here are the most popular questions I get about how Major League Baseball can be saved and the honest answer to each one:

Q: Is MLB suffering from the lack of a roster of fan-friendly superstars in the post-steroids era? What can be done to restore a pantheon of baseball players with mass market appeal like there were in the ’90s?

A: There is a lack of big-name baseball players today that hurts the sport as a whole. (Unless you are looking at key growing fan demographics, such as “Venezuelan families with 12-year old Yankees pitching prospects” or “Puerto Ricans who hope to come to the mainland under the name Bruce Wayne.”)

The fact is that the league has tried belatedly banning Performance Enhancing Drugs (PEDs) with little real result. (True fact: the official MLB test for PED use is looking straight at the players with a very serious expression and asking, “Did you take any drugs, son?” So far only Manny Ramirez has been caught that way, although they blood-tested Ryan Braun because he couldn’t answer the question since he was so high on Angel Dust.)

Manny Ramirez
If Cheech or Chong ever dies, there’s your replacement.

So what does that tell us? Simply that PEDs aren’t really the problem, and to regain its popularity MLB should go completely in the other direction: mandating the use of PEDs, but taking it to the next level. A competition between ‘roided-up hulks to hit 70 home runs a year? Boooring.

Instead, we need a close 12-way race between full-blown mutants, doped up on elephant aphrodisiacs and freebasing Ben-Gay, trying to break the 140-home run barrier … while struggling with the societal prejudice brought on by their third arms and occasional feeding on the blood of children.

This Island Earth
The Yankees will pay this guy $25M a year for 10 years, even after he has turned 300.

Just think of the competition between this new breed of hitters vs. a new generation of pitchers throwing 110 mph change-ups while hallucinating from their massive infusions of Velociraptor Growth Hormone and horse tranquilizers. Not to mention the first base coaches high on Orangutan pituitary secretions mixed with Day-Quil and constantly waving all the players from 3rd base in, the wrong way around the bases.

That is must-see baseball, my friends, and I challenge anyone who disagrees with me to fight after I take my next intravenous shot of Armadillo liver and Grape Ludens Coughdrops.

Q: Do you want cheap drugs from Canadian Pharmacy to Enhance Male Performance tonight??!?? Rare Chinese herbs Three-Penis Wine for low cost!!!!

A: Sorry, I think I put this question in the wrong pile.

Q: What can be done about the chronic competitive imbalance in the AL East?

A: The obvious answer is to create a special two-team league with just the Yankees and the Red Sox in it so they play each other every day. This will create three key benefits:

Red Sox Fans Are From Mars
Can you imagine a book like this written by a Royals fan about the Indians? That’s why these people need to be quarantined.
  1. It will generate huge TV ratings for MLB, and allow ESPN to stop pretending like it cares about any other team in the league.
  2. Having the Sox and Yankees play each other constantly will lead to enough stadium brawls to thin their respective herds of devotees a little.
  3. Best of all, it will prevent the legions of unruly Red Sox Nation acolytes from crowding out the home fans at every other team’s away games, drowning out the local 7th inning stretch song with “Sweet Caroline” and complaining loudly about the lack of “lobstah rolls” at the stadium cotton candy stands.

Q: Can’t we just fire Bud Selig somehow? That would fix a lot right there.

A: Bud Selig cannot be fired. He cannot be made to retire, and he cannot even be killed. Bud Selig can only be destroyed by casting him back into the fires of Mount Doom in the Land of Mordor, where he was created.

The Shadow Land of Mordor
The Lord of the Rings doesn’t specify the exact location of Mordor but from the pictures I’m guessing Pittsburgh.

So if any of our readers live in Mordor, you might try to do that if you have some free time.

Seattle Sports Insecurity and Why the NBA Is Dead To Me

By Jeffrey Carl

Bloggers To Be Named Later, February 1, 2012


Seattle Skyline
Oh, your city doesn’t look like this at night? Suck it, Cleveland.

Sometimes I will tell a friend how February and March are my least favorite months of the year because there are no professional sports to watch. They will say, “but what about the NHL?” And we will both laugh and laugh and laugh.

After a few minutes of convulsive laughter, though, we will pick ourselves up off the floor and they will follow up:

Friend: Seriously, what about professional basketball?

Me: I don’t think the WNBA season starts until September. Or maybe that’s the Curling Premier League.

Friend: No, I mean men’s professional basketball.

Me: I don’t know what cable package you have, but mine definitely doesn’t include the Italian-Serbian All Stars League.

Friend: No, the NBA.

Me: Who?

That’s right, the NBA has been on the official Jeff Carl Dead To Me List since July 2nd, 2008, when the Seattle SuperSonics officially left town to become the Oklahoma City Ford F-250 With Optional Towing Packages or the Oklahoma City Trailer Park Tornado Debris Scavengers or whatever they are now.

Please understand that this was not an ill-considered or capricious decision to add the League Who Must Not Be Named to my highly select Dead To Me List. After spending 10 years in Washington DC subjected to the “basketball” practiced by the Washington Wizards, I was already pretty disposed to stop caring about the NBA. To me, NBA players seemed like little more than a horde of spoiled prima donnas and feckless thugs who starred in terrible genie-themed movies and occasionally had NRA-sponsored gun shows in the locker room.

Shaq-Fu
That just happened.

But the factor that pushed me over the edge to permanently “un-friend” the NBA was an issue that I call Seattle Sports Insecurity Syndrome.

Seattle sports fans have a chronic insecurity problem. Despite the fact that Seattle is the 13th-largest media market in the country, a thriving technology hub, and inarguably the most naturally beautiful major city in the nation, its sports teams seem to be perpetual also-rans or transplant candidates.

This is due to a variety of factors. Sure, Seattle does have some disadvantages in attracting sports teams: we have one rain shower a year (it starts on November 15th and ends in late May); the looming threat of multiple nearby volcanoes seems to turn off a few timid souls; and some people get jittery after their 14th cup of coffee in the afternoon. I have even heard a local sports radio host suggest that Seattle fans don’t have the same rabid sports interest seen in other cities because “people in Seattle have actual things to do besides watching sports.” (I think he was talking about you, Cleveland.) But none of these can adequately explain how Seattle and its teams are forever outside the “cool kids club” of the professional sports world.

This first hit home for me when I was watching a Fox NFL pre-game show in 2005 and Jimmy Johnson was discussing why the Seahawks’ running back Shaun Alexander wasn’t a national media star despite the fact that he was on pace for a 2,000-yard rushing season. “I think,” said Mr. Bob’s Big Boy Hair, “that it has something to do with the fact that he plays in Southeast Alaska.” TRUE FACT.*

Bob's Big Boy
Jimmy Johnson before he lost all the weight

In fact, Seattle sports teams have an unfortunate history of frequently being on the brink of moving out of town. The first major league sports franchise in Seattle, the MLB Seattle Pilots, left town after one season in 1969 to become the Milwaukee Brewers. Their replacement, the Seattle Mariners, were almost moved to St. Petersburg, Florida in 1993, before the team was sold to a Japanese ownership group led by Super Mario, the chick from Metroid, and Godzilla. The Seattle Seahawks were almost moved to Los Angeles in 1996 (just like every other team in the NFL that has wanted a new stadium).

That was bad enough to give Seattle sports fans a permanent case of the relocation jitters. But then, to top it all off, in 2006 the SuperSonics were sold to an ownership group led by Tom Joad or whoever the hell lives in Oklahoma. This was especially galling since the Sonics were Seattle’s only championship-winning team.** (The city came close twice when the Mariners lost the 2001 ALCS to the Yankees and the Seahawks lost Super Bowl XL in 2006 to the referees.)

The city of Seattle had a strong case against the NBA and the Sonics’ new carpetbagger ownership for breaking their lease. But Seattle’s doofus elected officials fumbled the trial strategy, and ultimately let the team go for a $45 million lease termination payment and a vague promise from NBA commissioner David Stern that Seattle might get a team again someday once they had filled up all the long-time proven basketball markets. You know, like Toronto and Charlotte.

Mayor McCheese
Seattle’s then-mayor, Greg Nickels

Seriously, the team left for Oklahoma City. I’m sure it’s lovely there and crap like that, but… really? Oklahoma City? That’s a little like having the following conversation with your girlfriend:

Girlfriend: We have to break up, I’m leaving you for another guy.

You: What? Is it the tall blond jet fighter pilot I saw you talking with earlier?

Now Ex-Girlfriend: No… it was the guy next to him.

You: The brilliant wealthy neurosurgeon?

Ex-Girlfriend: No… the guy on the other side.

You: The little kid with a backpack?

Ex-Girlfriend: He’s not a little kid, he’s 4’2″. And that’s not a backpack, it’s a hump.

The point of all this being that until such time as the “Why Does Anyone Care What Team Juwan Howard Wants To Play For?” league returns to Seattle, they are on the official Jeff Carl Dead To Me list. Until then, I will know Kobe Bryant only as “that guy who’s in the commercials with Tom from Parks and Recreation” and February/March will be the Months Without Professional Sports.

Except for the NHL.

———————————————

* P.S. Screw you, Jimmy Johnson.

** Yes, Seattle has an actual championship-winning pro sports team, the WNBA Seattle Storm. They are awesome and deserve mad props and lots of fans, but it ruins the narrative of my rant. Go Storm.

The Belichick Inverse Likability Theorem, Part 2

By Jeffrey Carl

Bloggers To Be Named Later, January 27, 2012


Last week, we introduced the first truly solid, mathematically proven theory that finally takes the guesswork out of determining an NFL team’s success. The Belichick Inverse Likability Theorem simply states:

Inverse Likability Theorem

In Part 1 of this series, the theory’s startling accuracy was demonstrated using the records of NFL coaches in 2011. “But how does it hold up over time?” you ask.

To prove just how deeply I deserve an NFL Nerdy Math Thing award, I will inconvenience myself to show you that the “BILT” holds true throughout NFL history as well. Let’s start with some of the all-time NFL standout coaches, each remarkable for one reason or another:

  • Vince Lombardi (.739 career winning percentage, 7 NFL championships): Packers guard Jerry Kramer once joked, “Lombardi treated us all the same, like dogs.” That seemed funny until, after a bad game in 1966, he outright sold RB Paul Hornung to a shady Korean restaurant.
  • Tom Landry (.602 career winning percentage, 2 NFL championships): Stabbed “Dandy” Don Meredith in the kidney for touching his fedora, ending Meredith’s career. Set NFL record for consecutive games never showing human emotions, which stood until Belichick beat it in 2010.
  • Marty Schottenheimer (.595 career winning percentage, 0 NFL championships): Best known for his infuriatingly conservative (“one yard and a cloud of dust”) playcalling style, his shockingly blatant nepotism, and his occasional attempts to hire ninja assassins to kill John Elway in revenge for repeated playoff losses. Earns back a few likability points for coaching the UFL Virginia Destroyers to a championship – unlike the Browns, Chiefs or Redskins.
  • Buddy Ryan (.500 career winning percentage, 0 championships): May or may not have put bounties on opposing players and/or punched assistant coaches on the sideline. Nonetheless gets likability points for being pure bats**t crazy enough to enjoy watching (see also Ryan, Rex).
Steve Spurrier
Coaching them up, Riverdance style!


  • Steve Spurrier (.375 career winning percentage, 0 championships): Okay, so maybe putting all your chips on Danny Wuerffel as your quarterback and resigning your coaching job from the 8th hole of a golf course aren’t Hall of Fame qualifiers. But the Old Ball Coach (“OBC”) never failed to amuse fans or reporters at his comically inept press conferences, and his bold, fashion-forward sense for women’s golf visors made him a standout in likability.
  • Joe Bugel (.300 career winning percentage, 0 championships): Absolutely everybody loved “Buges,” a players’ coach and two-time Super Bowl winning assistant with the Redskins who proceeded to win approximately negative 1 zillion games as the head coach of the Cardinals and Raiders.
Madden NFL 13 Quarterback Vision Cone
Seriously, to put this in they removed Madden Cards? Or Madden Challenge points? Or mini-camps that added to player stats? WHHYYYYYYYYY

  • John Madden (.759 career winning percentage, 1 championship): John Madden was a great coach and a better commentator, but he gets +.500 unlikability points for willingly putting his name on the last several “Madden NFL” video games. Anyone who accepts money in return for using their name to pimp this chronically overrated annual series of $60 roster updates has basically abdicated their rights to enter the Kingdom of Heaven when they die.

So let’s see where that all nets out:

NFL Coaches
The Belichick Inverse Likability Theorem is scarily accurate.

“Okay,” you may be saying, “but what about the nice guys who were big winners?” Technically, it is true that several seemingly likable people were coaches with Hall of Fame winning percentages. But when you look at them more closely, you will find the BILT holds true:

Viet Cong
Joe Gibbs sits next to Jane Fonda, 3rd from left


  • Joe Gibbs: Joe Gibbs is widely viewed as the archetypal “nice guy” coach and all-around decent human being. But he had two distinct phases of his coaching career:
    • Joe Gibbs Part II (.468 career winning percentage, 0 championships): During the kind grandfatherly years of his second turn with the Redskins, Gibbs had the highest-paid coaching staff in football and managed only a 30-34 record with a single playoff win. Note that as with John Madden above, coaching for a d-bag owner does not improve a coach’s winning percentage under the Belichick Inverse Likability Theorem.
    • Joe Gibbs Part I (.648 career winning percentage, 3 championships): During his first tenure as Redskins coach, Gibbs was the dominant coach of his era but was secretly a rabid sympathizer of the Viet Cong, despite the fact that the war had been over for many years.
Dick Vermeil gets choked up watching “Finding Nemo”
  • Dick Vermeil: Vermeil is famous for having changed his style from angry and heartless during his days in Philadelphia to warm, emotional and sentimental during his return to coaching in St. Louis when he won a Super Bowl. But Vermeil also had distinct phases to his hallowed coaching career:
    • Dick Vermeil Part I (.641, 0 championships): During his ultra-Type A years in Philadelphia, Vermeil went to the playoffs four out of six years. He was known for his players hating his guts and for setting the Eagles’ all-time coaching high blood pressure record, which was later broken by Andy Reid only with the help of more than 35,000 McNuggets.
    • Dick Vermeil Part II “Electric Boogaloo” (.490, 1 championship): Despite Vermeil’s heartwarming yet off-putting crying jag during the Super Bowl, his winning percentage during his tenure in St. Louis was only .458 in the regular season and he racked up 10+ losses two out of three seasons with the Rams. He would have gotten fired if Kurt Warner hadn’t paid Tonya Harding with a ton of crystal meth to dress up as Houston Texans tackle Travis Johnson and cripple Trent Green.
"You Can Do It" by Tony Dungy
Of course you can do it if you jump on the other player’s back and drag them down.

  • Tony Dungy (.651, 1 championship): Dungy is widely known for his avuncular TV style, strong religious faith and commitment to charities promoting involved and caring fatherhood. But I’m just adding +.600 unlikability to Dungy for “having a weird-shaped head” so that it fits my theory.

Author’s math-y science words note: Many people who are not expert science-y people like me are unaware that a large portion of science is specifically related to assessing the shape of people’s heads and modifying mathematical formulas based on this information. You are now a more educated person. You’re welcome.

So with this additional historical data, how does the Belichick Inverse Likability Theorem hold up?

NFL Coaches
Note the incredible accuracy, like Nostradamus or Tim Tebow.

As the chart above shows, “pretty darn well.”

In the next part of the series, we will apply the Belichick Inverse Likability Theorem to college football and literally blow your mind. No, really, I mean “literally.” As in if you read it, you will die. If that doesn’t encourage future readership of this blog, I’m really not sure what does.

The Belichick Inverse Likability Theorem, Part 1

By Jeffrey Carl

Bloggers To Be Named Later, January 20, 2012


Bill Belichick
Pretending to make human smile DOES NOT COMPUTE

Statistics are essential to modern sports. Football coaches have situational analysis tables to help them justify “punt it on 4th and inches” calls more frequently.

Baseball has “sabermetrics,” which is an intricate mathematical system for determining results that is calculated by nerdy people who don’t have a big enough group of friends to play “Dungeons & Dragons” with.

Worldwide, soccer has all sorts of crazy crap that they do in metric units like “KiloBeckhams” or “Injury Time per Hectare.”

David Beckham
1 GigaBeckham (938 Imperial MegaBeckhams)

Yet the NFL has always lacked a true benchmark statistic (like WAR in baseball or Remaining Teeth divided by Penalty Minutes in hockey) that can accurately predict a team’s future success.

That is why we are proud to introduce a solid, mathematically proven theory that finally takes the guesswork out of NFL success. The Belichick Inverse Likability Theorem simply states:
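(The theorem’s formula graphic hasn’t survived the archive. Judging from the explanation below – “the bigger an obvious d-bag your team’s coach is, the better their record will be within a certain margin of error” – a plausible LaTeX reconstruction, in my notation rather than the original’s, is:)

    W \propto \frac{1}{L} \qquad \text{where } W = \text{winning percentage and } L = \text{coach likability}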

Not convinced? We’ll prove this theorem by examining prominent NFL coaches and their unlikability. Let’s start by looking at the 2011 NFL postseason conference championship coaches:

  • Tom Coughlin, New York Giants: Famous for doing things like fining players for not being five minutes early to meetings; losing the confidence of his locker room; and looking like The Simpsons’ Mr. Burns except less healthy.
  • Jim Harbaugh, San Francisco 49ers: Got into a fight with Pete Carroll on the field when he was with Stanford. Got into a fight with Lions coach Jim Schwartz on the field when he was with the Niners. Got into a fight with a crippled nun on the field when she asked for his autograph.
  • Bill Belichick, New England Patriots: Each year, sends Christmas cards to every single reporter covering the NFL that just say “F**k You.” Writes bad checks for Girl Scout Cookies and then poops on the Girl Scouts’ lawns when asked to return them. Once shot a man in Reno just to watch him die.
  • John Harbaugh, Baltimore Ravens: He actually seems like a pretty decent guy, but he gets a gratuitous +.100 unlikability added for coaching in Baltimore, and +.200 for being Jim Harbaugh’s brother.

Now let’s see where the four remaining playoff coaches stand according to the theorem:

NFL Coaches
Actual math involved
Math is hard
Math is hard, and also hard to draw

The theorem is derived from the inverse of a well-known sports mathematical axiom, Sir Leo Durocher’s proof that “nice guys finish last.” It’s that simple – the bigger an obvious d-bag your team’s coach is, the better their record will be within a certain margin of error.

This is actual math, people! I can say this with absolute certainty since nobody’s going to bother with checking my calculations because math is boring.

But you may be saying, “how does this theorem hold true for coaches outside the final four NFL playoff teams?” Okay, let’s flesh this out with some other carefully chosen examples based on the coach’s general likability as a person:

NFL Coaches
Spookily accurate once you insert modifiers to fit the theory

At this point, some of you may be saying, “why do Steve Spagnuolo and Tony Sparano get such high ratings for being likable?” Well, “Tony Sparano” sounds a lot like “Tony Soprano,” and saying bad things about him always seemed to get people killed. And the Rams performed so poorly in 2011 largely because Steve Spagnuolo was always being called away for missions as part of the SEAL Team Six that killed Osama bin Laden. But he couldn’t tell anyone about it or he would have had to kill them. True fact.

Navy SEALs
Spagnuolo is 3rd from left, next to Chuck Norris

In summary, the Belichick Inverse Likability Theorem provides us with the definitive mathematical formula for determining NFL team success or failure, replacing such irrational and illogical methods as astrology, or listening to Trent Dilfer. Next week we will apply the theorem to historical coaches to demonstrate further just how right I am.

I am not sure whether the NFL is technically qualified to just hand out Nobel Prizes for Awesome Math-Based Stuff, but I’m pretty sure they are, and if so I expect one.

The Top 10 Reasons Madden 09 Sucks

August 17, 2008 – Jeffrey Carl

Disclaimer: I’m a longtime avid Madden player, and also a longtime avid complainer about things. I have a love/hate relationship with Madden NFL – it’s one of my all-time favorite games, but it also has a long history of sucking up cash from idiots like myself by providing only annual roster updates and dubious “innovations” (quarterback vision cone, anyone?). Many of my most important criticisms deal with the user interface and feature set on the PS3; I have only played the PlayStation 3 edition of Madden 09, so some of these comments may not hold true on other platforms.

  1. My first preseason game of Madden 09 in franchise mode locked up on the fifth play and forced a reboot. Not a great start, and yet another example of Electronic Arts (EA) quality control. Note to Sony: the last two PS3 games I have bought have both had serious crashing bugs (Madden and Grand Theft Auto IV). This is not acceptable. If I didn’t have the original 60 GB unit with backward compatibility for my PS2 game library, this thing would be on eBay right now. This is unrelated to Madden, but while I have Sony on the hook – $30 for Blu-Ray movies that look only marginally better than DVDs that cost half as much? Really?

  2. The Madden series has an illustrious history of being the pinnacle of popular software that doesn’t explain to its users how to actually, you know, “use” it. For years, the game has offered users playcalling choices without any explanation whatsoever of what those plays mean. (Has the game or even its “official guide” ever explained what a Mike or a Sam blitz is? Not so much.) I know the difference between a Cover 2 and a Cover 3 … but there are plenty of plays in the playbook that I’m choosing without really understanding why. Would it kill EA to put up a web page somewhere explaining why I want to use each play in any given situation?

  3. The Madden series also has a history of vanishing features. Maybe nobody else did, but I liked the Madden Challenge points system and Madden Cards. It was bad enough a couple years ago when they eliminated the cheerleader cards from the deck (Note to EA: know your audience. They like cheerleaders.) But to eliminate it entirely? Did they really need to save a few megs of space on that Blu-Ray disc? I’ll tell you what: you can pitch out your lead blocker controls and give me back my Madden Challenge, and we’ll call it even. Oh, and throw back in the mini-camp drills with the bronze, silver and gold trophies too. I loved those minigames and how they “bought” me something for playing them (other than upgrading player skills, which is a dubious concept in and of itself – shouldn’t my player skill levels be tied to the actual player?)

  4. Every year, each new Madden game comes out, and the gaming press corps scrambles to praise it. Does anyone ever hold EA’s feet to the fire and actually ask them questions like, “do you actually do any user testing to see if players want this year’s new ‘features’?” or “where did these old things go that I liked?” This can probably be chalked up to the laziness of the gaming press (and their publishers’ need to keep collecting EA advertising dollars), but EA doesn’t help matters. Maybe I’m just missing it, but I don’t see anywhere that they actually have two-way conversations with either the press or the user community about why they did what they did with the game (e.g., what features they kept or lost and why). They probably don’t because they know suckers like me will keep buying it year after year for basically a $50 (now $60) roster update.

  5. So let’s talk about roster updates. I understand if EA doesn’t want to post regular roster updates, but why not at least provide a schedule? And there is no shortage of football-obsessed users who would be happy to build their own weekly (or probably even daily) roster updates online … why not let users subscribe to third-party rosters like RSS feeds? If my team of choice (in this case, the Seahawks) changes the rotation on a regular basis, I’d like to have the new starting lineup as I play through the season without making all those tweaks myself. With all the cash EA is raking in from suckers like me, can’t they provide this as a download or at least let third parties do it?

  6. This is probably just me being old and cranky, but WTF is up with the soundtrack choices? A few years ago, it seemed like the songs playing over the between-game menus were selected by somebody with good taste in music that would get you pumped up for the next round. In Madden 09, I’m willing to bet that the music choices are dictated by which record companies throw the most cash at EA to get their dog acts selling no records in front of a captive audience. Give me back my Andrew WK and Rooney songs; you can keep your “Kardinal Offishall feat. Lindo P.” There are a few decent tunes on there, but for the most part I set the music volume down low to get away from them. About the only positive thing I can say here is at least they haven’t incorporated “Daughtry” into the mix. Yet. (shudder) Oh, and they do at least offer a way in the User Interface to turn down the music volume compared to the rest of the game … but see more about the UI below.

  7. The “my skill” concept, probably the most trumpeted new “feature” in Madden 09, is a great idea on paper. Then again, so is Communism. In reality, it leaves a lot to be desired. When the user loads the game for the first time, they are prompted to take a Madden “test” to figure out their skill level. The problem is that this test is about timed button pressing, not actual skills in running, passing, or defending. Therefore, the difficulty level is set based on the user’s ability to push the right button quickly rather than actual game situations. On top of that, it somehow feeds into the user’s “Madden IQ,” which is determined by … uh … elves in a secret base in Antarctica. I don’t see an upfront explanation of what goes into Madden IQ, how it impacts the game, or what I can do specifically to change it. So what’s the point? I’m sure there is one, but the game and its manual don’t tell me.

  8. This is supposed to be a “Madden” game. In the previous installments, you actually had John Madden providing color commentary during the game. To be fair, the John Madden “commentary” was a lot of canned crap like “he really put a lot of mustard on that ball” and “BOOM!” that had nothing useful to contribute … but at least Madden was there.

    In this game he appears as a ghostly figure – much like the Emperor in ‘The Empire Strikes Back’ – to introduce features, tell you to take quizzes and so forth, but the actual in-game announcing is done by Cris Collinsworth and some guy whose name I can’t remember. Don’t get me wrong; me likey Collinsworth; he’s very smart for a football player and he’s not afraid (at least in his actual game broadcasts) to venture some interesting and controversial opinions. (Oh, and the “backtrack” feature wherein Collinsworth explains to you why you were an idiot for throwing an interception is interesting, at least the first few times.) But – no offense to Collinsworth or whatsisname – this is a serious step down from the Madden/Pat Summerall or Madden/Al Michaels announcing teams of previous years. In my Madden game, I’d like to see some actual insightful input (I know, wishful thinking) from John Madden in exchange for whatever ludicrous checks he’s getting from EA for these things.

  9. So this one really grinds my gears. No, not Lindsay Lohan. (That five-year-old Family Guy reference shows just how hip I am.) The one place that you do actually hear from John Madden during a typical franchise mode game is when you get to see some replays of your game at Sprint Halftime. No, it’s not halftime, it’s Sprint halftime. The product placement is not particularly obtrusive – it’s just the Sprint name and “Sprint ahead” tagline – but it’s a disturbing harbinger of things to come. Normally, we consumers see advertisements in things that we don’t directly pay for, like broadcast TV or websites with free content. But I paid for Madden 09 already – I don’t want to see any ads. And when Sprint goes bankrupt or gets bought – which it will – does EA then update the game with a new advertiser? And what part of the game is next for ads?

  10. Here we get to the core of my complaints: Madden NFL 09 has the single worst User Interface (UI) of any software I have ever used. And I say this as someone who tried Gimp 1.0 running under MKLinux on a Mac with one mouse button. There are two user interface cardinal sins: not explaining to a user what they’re supposed to do, and making the most common actions hard or requiring extra steps. Madden 09 does both with astonishing regularity.

    Let’s start with the opening screen. The Brett Fahr-vuh-ruh splash screen displays once the game has gotten through all the opening crapola telling you about how “it’s in the game” (“it” in this case means “sucking”), “EA HD” is the future of wireframe model l33tn3ss, etc. The user is prompted to press “start” to begin. This may seem like a tiny issue, but think about this from a User Interface perspective: this screen accomplishes nothing except to make you press start while you stare at Brett Favre in the jersey of a team he doesn’t play for anymore. Why not just take the user to the main interface? What do you need a splash screen for? I have the box with Brett’s picture if I want to look at it, and I certainly already know what game I’m playing. Imagine if Microsoft Word showed you a splash screen advertising the fact that you were using it, and forced you to click a button before you could actually begin typing a document. Wouldn’t that get real old real fast?

    In the big picture, this is a minor quibble and it only inconveniences the user by requiring a single button push; but it is symptomatic of a larger issue: the EA Madden team simply does not think about designing interfaces with the goal of getting you to what you want to do. In fact, let’s look at the very next screen to illustrate another example of the same UI sin. The user is presented with an empty room with video monitors showing … nothing (so why are they there?). User action hints are shown (as throughout the game) in a small bar at the bottom of the screen. Users are prompted to hit the start button to bring up the game menu. Okay, since you can’t really do anything here without the menu, why not just show it? Am I missing something where this is brilliant and I just don’t get it?

    So let’s look at another UI cardinal sin: not telling the user how to actually USE the interface. For example, in franchise mode the user has an option to train the players for the week’s game (this replaces the previously much more accessible mini-camp option). To begin the training, the user must select a player, a drill and a level. Here’s the problem: the User Action hints displayed at the bottom of the screen don’t actually show the button you’re supposed to press to begin the training. I ended up in a two-minute loop of pressing all the buttons shown in order to begin, but none of them actually started the training. You need to press the start button to do that, and nowhere are you told this; you have to figure it out by pressing buttons until something works. Almost everywhere else in the game, the “X” button makes your selection or signals your choice; why in this particular instance is it the “start” button? UI element inconsistency is a hallmark of bad software design.

    Here’s another doozy (Does anyone say “doozy” anymore? Probably just me.) of a terrible UI decision. The playcalling interface in Madden 09 is radically different. It has some advantages and some disadvantages. But what it does not offer is a way to revert to the “classic” UI – a problem that is shared by Microsoft Office 2007, by the way. It’s okay to radically revamp your UI, but when you do that, you need to provide users with a transition or compatibility mode. It doesn’t really “cost” the developer anything, and it assists the user with getting the things done that they expect from the software. So why not?

Now, to be fair – there are a lot of things I do like about Madden 09. Online leagues are a great – if much overdue – feature. The game does look exceedingly pretty, even if the crowd and sidelines are still wooden, and the game sits uncomfortably close to the “uncanny valley” of human models. The depth of football knowledge that the game’s makers expose is greater than ever before, and the franchise mode is very deep (although, do I really care about screwing fans with stadium concession prices if I’m not Dan Snyder?). Of course the core game on the field itself is still incredibly fun to play. So don’t take this rant to mean that I’m not going to play through at least one full franchise mode season as the Seahawks (assuming it doesn’t crash again).

But I complain about Madden the same way I complain about Apple or the Star Wars franchise: as someone who likes the product so much that I’m a slave to buying it, but has a lot of discontent with the changes made to the product lately. EA, Steve Jobs and George Lucas are of course under no obligation to listen to me – they already have my money. And I am just some crank complaining on the Internets; it’s product sales that really talk. But when you care passionately about a product, you passionately care about wanting to make it better. I just wish that any of the above would display some sign of listening.

Forgotten Son: The Birthplace of President James Monroe

By Jeffrey Carl

From Legacy Magazine, January 2003

Nearly ten years ago, I spent a college summer as a reporter for the county newspaper in rural Westmoreland County, Virginia. Westmoreland, nestled in the “northern neck” of Virginia between the Potomac and the Rappahannock rivers, is blessed with an enviable surplus of historical sites. 

James Monroe birthplace monument, January 2003

Almost anywhere else, the birthplace of a president would be marked as a site of significant historical importance and tourism interest. But Westmoreland boasts the birthplaces of George Washington and Robert E. Lee (both of whom have lavish commemorative historical sites). In a county with an abundance of historical favorite sons, former President James Monroe finishes a distant third. In the summer of 1994, I was assigned a story about a barely noticed granite marker and state historical signpost on a roadside, dedicated to the birthplace of perhaps the most overlooked of America’s founding fathers.

James Monroe was born on April 28, 1758 on a 505-acre plantation near what is today Colonial Beach, Virginia. He left at age 16 to attend the College of William and Mary, then quit school to join the army when the Revolutionary War broke out. Monroe facilitated the Louisiana Purchase during his time as minister plenipotentiary to France, and as minister to Spain he negotiated the purchase of the Floridas. In 1816 he was elected to the first of two terms as president, presiding over what was later called “the era of good feelings.” He was the author of the “Monroe Doctrine,” which became the cornerstone of American foreign policy for generations.

Access across the fence to the Monroe birthplace monument

We know comparatively little of James Monroe personally. He stood 6’2”, while his wife was a petite 4’8”.  Thomas Jefferson called him “a man whose soul might be turned wrong-side outwards without discovering a blemish to the world.”  We know that he had a fondness for waffles.

After Monroe retired from public office, he fell on financial hard times. He petitioned Congress for back pay, but President Andrew Jackson blocked the funding of his request; in 1831, he was finally given only half of what he had asked for originally.  On July 4, 1831 – five years to the day after the deaths of his friends John Adams and Thomas Jefferson – James Monroe died.

The James Monroe monument alongside Virginia State Route 205

Neither the man nor his birthplace knew much peace after his death: Monroe was buried in New York, but was later exhumed and moved to Richmond. The owner of his birthplace site after the Civil War used the tombstones of the Monroe ancestors as weights for his harrow, and then flung them into the creek when the work was finished. Over time, the land was parceled into numerous plots and sold.

In 1941, a Monroe Birthplace Monument Association was formed, which acquired the area around Monroe’s actual birth site. An access road was built to the site, but the Association’s plans never progressed beyond that stage and in 1973 the land fell to public ownership. For years, various government and private organizations were approached about sponsoring the development of the historic site, but all refused or were unable to raise the needed funds. In 1993, several chapters of the Veterans of Foreign Wars were kind enough to pay for a granite marker at the site, nestled among a grove of trees along the side of State Route 205.

When I visited the site in 1994, there was a certain thrill to the lonely and solemn spot, and a feeling that the site was my little secret. With no noise or other visitors present, it was blissfully easy to envision the area as it once was – a luxury almost never available at most historical sites. But there was also a sense of vacancy, a tangible knowledge that something should be there which was not.

The Monroe birthplace monument in its clearing

I returned this past winter and found the site unchanged from a decade before. But in the intervening years, dedicated area residents had continued to push for something to be done, and it appears now that things are at last changing for the better. Plans were drawn up for a memorial site that would include a nature trail, picnic area and historical signage, and the Westmoreland County government has been awarded a grant to begin developing the site. But the work has not yet begun, and today the site remains just as it was.

The lonely granite marker still stands there as a reminder of both the sadness of the neglect of historical sites and the hope that the work of determined and caring individuals can help to bring that neglect to an end.

Random Ramblings About BSD on MacOS X (Part 2)

By Jeffrey Carl and Matt Loschert

Daemon News, December 2001

This is the second chapter in a series of observations, representing the adventures of a couple of BSD admins (one with a lot of prior MacOS experience, the other with more on the BSD side) poking around the command line on an iBook laptop running Apple’s Mac OS X Public Beta. We’ll attempt to provide a few notes and observations that may make a BSD admin’s work with Mac OS X easier.

Note: Again, as members of the FreeBSD Borg collective MIND-STABLE, we’ll refer to our various comments/sections by the singular “I.” This also prevents either of us from admitting which dumb parts of the article were ours specifically. 🙂

New Information from Last Time

The quality and quantity of the feedback that we (OK, Borg segfault here) received after part one of this series was fantastic. Thanks to everyone (you know who you are) who wrote in to clear up points or to show us a better way to do things. Aside from a few flames on Slashdot (“You are stupid and blow goats!”, “Duh!”, and “N4t4li3 P0rtm4n r00tz y00”, etc.) the feedback was very helpful.

The number of NeXT gurus (and some BSD overlords, as well) out there who came to the rescue to correct mistakes and offer answers to the questions posed in the article was amazing. This time around, our readers who are l33t NeXT h4Xors, BsD r00tz or k3wl m4c d00dz are still invited to help clear up the questions or postulations herein on Mac OS X. So, to follow up and answer some of the questions posed by the first article, here are some of the best responses received:

------------------------------------------------
From: Dwarf
Subject: Daemon News Article
Not sure if any of this is new info to you guys....
OSXPB inherits a lot of "philosophy" from OS X Server. Thus, the lack of logging that occurs. Apple seems to have come down in favor of system responsiveness versus use monitoring. Their rationale for turning off logging (for almost everything) by default is that it impacts network thruput. If the logs are something you need, they can all be enabled from the command line, but network (and probably GUI) responsiveness will likely suffer as a result. Apple also seems to have made several assumptions about how OS X (any flavor) will be used.
Apparently their idea is that it will provide services to a LAN and be hidden from the world by a firewall of some sort, thus the default enabling of NFS and having NetInfo socketed to the network by default. Since NetInfo is a multi-tiered database, your "local" NI Server may also be the "master" NI Server for subnetted machines, while being either a clone or a client of a still higher level NI Server. So, they hook it to the net by default. This also provides the mechanism by which other machines can automatically be added to your network. At bootup, each machine tries to learn who it is and where it lives by querying the network for its "mommy" (a Master NI Server). If it finds one it accepts the name and IP that server furnishes and initializes itself accordingly. If it doesn't, it uses the defaults from its initialization scripts. Getting this all to work painlessly is one of the things about which the NetInfo documentation is pretty obscure. Owing primarily to the fact that it is written in terms of tools that no longer exist as separate entities, but have been combined into more powerful tools. Further, if NFS is properly setup, each machine will automount the appropriate NFS volumes at startup. Another area where making it work is not clearly explained. I will only touch on the confusion that exists about setting up MailApp and making it work. Another documentation shortcoming.
Another facet of operation that isn't clearly explained is the Apple philosophy about how the file tree is organized. Their thinking is that users should only install things into the /Local tree. /System should be reserved for those things that administrators add. My guess is that naïve users will be fine so long as they confine themselves to operating within the GUI, as the GUI tools seem to be pretty smart about where they put things. But, if those users start installing things from the CLI....
A problem area about which not much has been written is the fundamental incompatibility between the Mac HFS and HFS+ filesystem and BSD. Mac files are complex, containing both a Data Fork and a Resource Fork. BSD knows nothing about complex files and thus when BSD filesystem tools are used to manipulate Mac files, the resource forks get orphaned. See: http://www.mit.edu/people/wsanchez/papers/USENIX_2000/ for a better explanation of this. This may be the source of a longstanding OS X Server problem whereby the Desktop Database process eventually goes walkabout and consumes over 90% of the CPU.

Authors’ Note: I’ve received a large number of comments about how the existing state of Mac OS X/Darwin documentation sucks. Frankly, I agree – that’s why I/we wrote these articles. While there’s a certain thrill to “spelunking” a new OS, it’s not what an administrator would like to have to be doing in their spare time. However, it’s hard to point a finger at Apple, since they’re currently under a hiring freeze after their recent absurd stock devaluation (post-Q3 results), and they would be perfectly right to have every man/woman/droid/vertebrate/etc. working on developing the OS rather than documenting it. 

Nonetheless, there is still a significant problem lurking. There are tens of thousands of otherwise non-super-technical folks who have become MacOS gurus through inclination and experience, able to roam around a school or office and fix traditional MacOS problems. At the moment, the folks who (working with the current paltry documentation) can do this for MacOS X are incredibly few, since it requires significant knowledge of MacOS as well as Unix experience – and even then, it’s only NeXTstep mavens who will be truly at home with some of its aspects.

The good folks at Daemon News have provided a space here to try to answer some of these questions, but it’s up to the knowledgeable folks already out there to contribute to sites like Daemon News, Darwinfo, Xappeal, MacFixit, etc. to make whatever knowledge is out there available to the soon-to-be Mac OS X community. If Apple can’t document this OS thoroughly while rushing every available resource to develop it, it’s up to the folks who (at least marginally) understand it to do so, for the good of all its users.

From: Brian Bergstrand <[email protected]>
Subject: Re: Daemon News : Random Ramblings about BSD on MacOS X (part 1)
In your article you said: "(as mentioned, etc, tmp and var are all links to inside /private; I refuse to speculate why without having several drinks of Scotch first)".
The /private dir. is again a part of Mac OS X's NeXT heritage. Originally, the thought behind /private was that it could be mounted as a local drive for NeXT stations that were Net booted. That way you would not have to mount volumes for /etc, /var, or whatever else needed write perms. This also worked well if you booted from a CD. /private meant data that was to be used only on a specific machine.
HTH.
Brian Bergstrand
Systems Programmer Northern Illinois University
Imagination is more important than knowledge. - Albert Einstein

Authors’ Note: We’re discovering more and more that Mac OS X seems very much like the next revision of OpenStep – with MacOS 9 compatibility and a new GUI thrown in. Not that this is necessarily a bad thing; it just seems like NeXT was the “Addams Family” member of the BSD clan that nobody else noticed, and we’re not sure why. If anyone would like to speculate on the reasons that NeXT’s new ideas were largely ignored by the industry (aside from the typical Steve Jobs-ian tendency to make your computers too expensive for normal people to buy), we’d love to find out more.

------------------------------------------------
From: Daniel Trudell <[email protected]>
Subject: random bsd os x ramblings...netinfo and ipconfigd
[...]
Netinfo is interesting. One thing I noticed is that most of the stuff in the utility applications are like mini versions of netInfoManager. IE I can edit/add/delete users in netInfoManager including root and daemon, and those changes are present in multiple users after the fact, and vice-versa. However, some things still depend on /etc/passwd even in multiuser mode. I installed samba, and I needed an /etc/passwd file for it. i used "nidump passwd . > /etc/passwd" to generate one from netinfo....but there was a twist...some of the users were shadowed, some were not. I s'pose that might be an issue. Also, I was conforming UID's on my box to the company UID's....if somebody with a UID of 500 logs in remotely the machine forgets how to handle users...a reboot fixes this.
[...]
In general, i think there's a consensus about acceptance of netinfo. When you first run around in tcsh, a geek asks themself "what the f**k...this is jacked up, what's up with /etc?", but once you figure out netinfo, a geek says "hey, check this out, it's nifty!"
tack

Authors’ Note: I agree. Still, it will take a while (and some seriously improved documentation [see above]) to get used to.
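
For reference, Daniel’s Samba trick boils down to a single command, run as root (note the space before the “>”; the “.” names the local NetInfo domain, and nidump can emit most of the other classic /etc maps – group, hosts, and so on – the same way):

# nidump passwd . > /etc/passwd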

------------------------------------------------
From: Rick Roe
Subject:Re: Mac OS X article
Well, I'm not the foremost expert on Darwin, but I've learned a few things from "this side of the playground" that might help...
- The "Administrator" issue is Apple's compromise between the single-powerful-user paradigm of Classic Mac OS and the Unix/NeXT multiuser system with it's too-powerful-for-your-own-good root.
An "administrator" has the privileges an upgrading Mac user expects: ability to change system settings and edit machine-wide domains on the disk (like /Applications). However, it still protects them from the dangers of running as root all the time, since they don't get write access to the likes of /etc (except through configuration utilities), or to /System (which is a partitioning that keeps the Apple-provided stuff separate from the stuff you install, like /usr vs. /usr/local).
The inability of "administrator" users to make changes to items at the top level of the filesystem is a bug in the current version.
- Actually, we got NTP support back in Mac OS 8.5, not 9.0 :)
- The developer tools are available separately from Mac OS X through Apple's developer program. The basic membership level is free, and gets you access to not only the BSD/GNU developer tools, but also the cool GUI tools, headers, examples, and documentation out the wazoo. Of course, you can also get a lot of this stuff from the Darwin distribution, too.
- Regarding the list of top-level files and directories in .hidden:
- Desktop DB and Desktop DF are used by Mac OS 9 to match files to their "parent" applications. OS X maintains them for the sake of the Classic environment but only uses them as a fallback, as it has a more sophisticated per-user system for this purpose.
- Desktop Folder is where native OS 9 stores items that show up on the desktop. In OS X, they're in ~/Library/Desktop.
- SystemFolderX was where the BootX file (a file containing info for Open Firmware and some bootstrap stuff to get the kernel started) was kept in previous developer releases. It's elsewhere now.
- Trash is the Mac OS 9 version. OS X uses /.Trash, /.Trashes, and ~/.Trash.
- So you've discovered how cool NetInfo is. I got tired of reading reviews that were just complaints about not being able to edit stuff in /etc to change things. :) Here's some extra info for you:
- There's a convenient GUI utility for editing NetInfo domains,
/Applications/Utilities/NetInfoManager.
- root's password can be changed in NetInfoManager, or the Password panel in System Preferences, in addition to the command line.
- NetInfo is a pretty cool "directory service" for administering groups of computers... one of those unfortunate "best kept secrets" of NeXT... but what's cooler is that, in OS X, it's just one possible backend to a generic directory services API. So it's also possible to run your network using LDAP, or Kerberos/WSSAPI (er, whatever that acronym was), or NDS, or (god help you) Active Directory -- and the user experience for Mac OS X will be the same.
- You might like this... try entering ">console" at the login window.

Authors’ Note: Typing “>console” at the login window (no password necessary) and hitting “Return [Enter]” boots you directly into Darwin, skipping the Mac OS X GUI layer entirely. Sooper cool.

------------------------------------------------
From: Larry Mills-Gahl <[email protected]>
Subject: NetInfo and changing network settings
[...]
One bit that I've been sending feedback on since the Rhapsody builds (pre-OS X Server) is the suggestion that you must reboot to have network settings changes take effect. This is one area in NT that drives me absolutely nuts and I feel like billing Bill G for the time it takes for multiple restarts of every NT or 9X machine you setup!!! Unix seems to have figured this out long ago. The Mac OS has figured this out long ago!!! I appreciate the engineers being conservative because their market is notoriously unforgiving about issues that work, but are not clairvoyant and anticipate how each luser wants the system to work. I hope that they will have this cleared up by release time.
In the interim, here is a script that HUPs netinfo services to get a hot restart.
#!/bin/sh
# Hot-restart the network/NetInfo services without a reboot.
case `whoami` in
root)
	;;
*)
	echo "Not Administrator (root). You need to be in order to restart the network."
	exit 1	# 'return' only works inside a function; use exit
	;;
esac
echo "Restarting the network, network will be unavailable."
# Kill and relaunch the IP configuration daemon.
kill `ps aux | grep ipconfigd | grep -v grep | awk '{print $2}'`
echo " - Killed 'ipconfigd'."
ipconfigd
echo " - Started 'ipconfigd' right back up."
sleep 1
ipconfig waitall
echo " - Ran 'ipconfig waitall' to re-configure for new settings."
sleep 1
# HUP the NetInfo binder and lookupd so they re-read their configuration.
kill -HUP `cat /var/run/nibindd.pid`
echo " - Killed 'nibindd' with a HUP (hang up)."
sleep 2
kill -HUP `cat /var/run/lookupd.pid`
echo " - Killed 'lookupd' with a HUP (hang up)."
echo "The network has successfully been restarted and/or re-configured and is now available."

Authors’ Note: Larry gives credit to “Timothy Hatcher” as the original author of this script. You can find the original script at the bottom of the page at http://macosrumors.com/?view=archive/10-00, as well as another script about 3/4 of the way down the page which reproduces (roughly) the very useful MacOS 8-9 “Location Manager” functionality. I don’t like to cite the site MacOS Rumors as any kind of source of reliable info, since it’s 90 percent pretentiously uninformed speculation that doesn’t admit itself as such, but one of its readers did give the Mac community “the scoop” here, as far as I can tell. If Timothy Hatcher or anyone else out there wants to speak up as the original author of this script, please let me know – I’d love to ask you some questions about NetInfo. 😉

————————————————

From: Dag-Erling Smorgrav <[email protected]>
Subject: mach.sym in Darwin
I'll bet you a dime to a dollar the mysterious mach.sym in MacOS/X's root directory is simply a debugging kernel, i.e. an unstripped copy of mach_kernel.

————————————————

From: Paul Lynch <[email protected]>
Subject: MacOS X Daemon News
I can give you a few updates on some parts that might be of interest. In no particular order:
- the kernel (Mach) is only supplied in binary. Most MacOS X admins won't be expected to be able to build a new kernel; that requires a BSD/Mach background (and who's got that outside of Apple?) and a Darwin development system. So building it with firewall options enabled is reasonably smart.
- as well as /.hidden, you will notice that dot files aren't visible in the Finder. .hidden should be looked for in the root of any mounted filesystem, not just /.
- /private is a hangover from the old days of diskless workstations. NeXT had a good netboot option, which meant that you could stash all the local configuration and high access files (like swap - /private/vm/swapfile) in a locally mounted disk. This is all part of the Mach, as opposed to BSD, option.
- MacOS X doesn't only support HFS. It also supports UFS, and that may shed some light on some of the "but HFS does this" quirks.
Paul

Authors’ Note: The inclusion of a distinct /private now seems to make a lot of sense, especially for those who are willing to believe in grand computer-industry conspiracy theories. 🙂 I saw a Steve Jobs keynote in which he showed 50 iMacs net-booting from a single server, showing their abilities as (relatively) low-cost “Network Computers.” And who is a greater believer in NCs than Apple board member (and reputed best buddy of Steve Jobs) Larry Ellison, the Oracle chief? No specific documentation of the role of /private has thus far been provided by Apple (as far as I can tell), but the above explanation seems very plausible, leaving open the door for future uses as described.

------------------------------------------------
From: Peter Bierman
Subject: http://www.daemonnews.org/200011/osx-daemon.html
Try 'nicl .'
Then try combinations of cd, ls, and cat.
cd moves around the netinfo directory structure
cat prints the properties of the current directory
ls prints the subdirectories of the current directory
A few minutes in nicl, and NetInfo will make a lot more sense.
Unfortunately, there's no man page yet.
Another tidbit:
In X-GM, volumes will mount under /Volumes instead of at /
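
Authors’ Note: A hypothetical first session, using just the commands Peter lists (the prompt shown here is a guess on our part, since there’s no man page to check against; control-D gets you back out):

% nicl .
/ > ls
(lists subdirectories: users, groups, machines, and so on)
/ > cd users
/users > cat
(prints the properties of the “users” directory)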

————————————————

From: James F. Carter <[email protected]>
Subject: Comments on Random Ramblings about BSD on MacOS X
Why a firewall with no rules? Because the firewall code has to be selected at compile time, but when the CD is burned they don't know what rules the end user will want, and they don't want to lock out any "traditional" behavior, such as the ability to play host to script kiddies with a port of Trinoo for the Mac. I agree that a set of moderately restrictive default rules would be a good idea for the average grandmother, but I can understand the developers' attitude too.
Why have a plain file /tmp/console.log in addition to the syslog? In case syslogd dies. I have this problem on Linux: there's a timing dependency which if violated kills syslogd, and I'm running a driver (you don't want to know the gory details when I suspend the laptop to RAM and restart an hour later). If "sync" got done, and if the file is rotated by the code that opens it, you have a chance to see your machine's death cry when it crashed and burned. I've hacked my Linux startup files to do this partially to catch (sysadmin-induced) screwups during the boot process.
Why put resolv.conf in /var/run? ppp and dhcpd often obtain the addresses of the ISP's DNS servers during channel setup, and have to write them into resolv.conf. Modern practice, at least on SysV-ish systems like Solaris and Linux, is that /etc is potentially mounted read-only on a diskless workstation or CDROM, and dynamic info goes in /var/something.
Running the NFS daemon: Agreed that it's a security hole. Solaris only starts nfsd if /etc/dfs/sharetab (the successor of /etc/exports) contains the word "nfs". I've hacked my Linux startup files to do something similar.
/private/Drivers: I assume this contains drivers, similar to /lib/modules/$version on Linux. You wouldn't want to intermix code segments with device inodes, would you? :-) Or does recent BSD do something weird and wonderful along these lines? I've thought about a hypothetical UNIXoid operating system in which the device inode is [similar to] a symbolic link to its driver. (Paraquat on the grass?)
James F. Carter 
Internet: [email protected] (finger for PGP key)

Authors’ Note: The NFS inclusion seems like yet another attempt by Apple to include functionality “in the background” that they may or may not make use of. It runs opposite to their attempts in recent versions of Mac OS (via the “Extension Manager”) to allow users to enable or disable anything that patches traps or otherwise alters the functions of the base OS, but it still makes sense. The current Extension Manager functionality was most likely included because third-party utilities included this functionality, rather than because Apple really wanted end-users to have fine-grained control over the OS, and because so many poorly-written current MacOS extensions could interfere with Apple-provided OS functionality (if not hose the OS completely).

The prevailing attitude at Apple regarding OS X may very likely be that since it would be very difficult for typical users to modify their kernel (at least with existing tools), it’s best to open up everything that might be needed at some current or future point. However, this holds only until someone creates a kernel extension/module interface as easy as the current MacOS Extension Manager (something that, despite FreeBSD’s /stand/sysinstall attempts, is still far away for any other *nix).

————————————————

Further Exploration: Das Boot

Last time, we mentioned that holding down the “v” key at startup shows a BSD-esque text console startup rather than the standard MacOS X GUI startup. Considering that the hardware on a revision-A iBook is pretty different from the hardware in your average x86 Free/Open/NetBSD box, we thought it would be interesting to see just what XNU (the Darwin kernel) does on startup.

Let’s look at what happens at boot time (as shown in the message buffer using dmesg just after boot time). Comments are shown on following lines (after “<=”).

minimum scheduling quantum is 10 ms

<= Haven’t seen this before on any BSD boot.

vm_page_bootstrap: 37962 free pages

<= RAM on this iBook is 160 MB.

video console at 0x91000000 (800x600x8)

<= The iBook’s screen, 800×600 resolution (x 8 bit color?)

IOKit Component Version 1.0:
Wed Aug 30 23:17:00 PDT 2000; root(rcbuilder):RELEASE_PPC/iokit/RELEASE
_cppInit done

<= IOKit is Apple’s Darwin device driver scheme

IODeviceTreeSupport done
Copyright (c) 1982, 1986, 1989, 1991, 1993
      The Regents of the University of California. All rights reserved.

<= It’s nice that they included this BSD-style, without any “copyright Apple blah blah” stuff

AppleOHCI: config @ 5505000 (80080000)
AppleUSBRootHub: USB Generic Hub @ 1

<= This is the iBook’s one built-in USB port

AppleOHCI: unimplemented Set Overcurrent Change Feature

<= OHCI is the USB Open Host Controller Interface, a generic standard for USB host controllers. The “Set Overcurrent Change Feature” appears to be a standard USB hub request which the driver does not currently implement (?).

AppleUSBRootHub: Hub attached - Self powered, power supply good
PMU running om NonPolling hardware
IOATAPICDDrive: Using DMA transfers
IOCDDrive drive: MATSHITA, CD-ROM CR-175, rev 5AAE [ATAPI].

<= The iBook’s ATAPI 24x CD-ROM drive

IOATAHDDrive: Using DMA transfers

<= The iBook’s HD is UltraDMA/66, if I recall correctly

IOHDDrive drive: , TOSHIBA MK3211MAT, rev J1.03 G [ATA].
IOHDDrive media: 6354432 blocks, 512 bytes each, write-enabled.

<= The iBook’s 3.2 GB hard drive; there are three logical volumes on this one.

ADB present:8c

<= Not sure about this. ADB in Apple-speak generally refers to the legacy Apple Desktop Bus, which was a low-speed serial bus used for connecting keyboards/mice. The iBook does not have an ADB port; so this probably just indicates the presence of the ADB driver.

struct nfsnode bloated (> 256bytes)
Try reducing NFS_SMALLFH
nfs_nhinit: bad size 268

<= Not sure why it’s reporting errors against its own default settings for NFS (asking the average Mac user to recompile their kernel with this option is like asking the average driver to rebuild their engine with an extra cylinder). This is presumably a “beta” bug.

devfs enabled

<= The Unix devfs (separate from MacOS X drivers?) is enabled.

IP packet filtering initialized, divert enabled, rule-based forwarding enabled, default to accept, logging disabled

<= The packet filtering that we mentioned last time. 

IOKitBSDInit
From path: "/pci@f2000000/mac-io@17/ata-4@1f000/@0:11,\\mach_kernel", Waiting on <dict ID="0"><key>IOProviderClass</key>
<string ID="1">IOMedia</string><key>IOPath Separator</key>
<string ID="2">:</string><key>IOPath Extension</key>
<string ID="3">11</string><key>IOLocationMatch</key>
<dict ID="4"><key>IOUnit</key>
<integer size="32" ID="5">0x0</integer><key>IOLocationMatch</key>
<dict ID="6"><key>IOPathMatch</key>
<string ID="7">IODeviceTree:/pci@f2000000/mac-io@17/ata-4@1f000</string></dict></dict></dict>

<= System preferences in MacOS X are set with XML files. Kudos to Apple for this forward-looking use of XML. See below for more on this and the “defaults” command.

UniNEnet: Debugger client attached
UniNEnet: Ethernet address 00:0a:27:92:04:3a

<= I think that “UniN” here refers to the “UniNorth” Apple MoBo chipset used in the iBook, which has a 10/100BT RJ-45 interface, among other things, built into it (and Ethernet network set up as the default under this configuration).

ether_ifattach called for en

<= Presumably “en” is the device driver type for this NIC

Got boot device = IOService:/Core99PE/pci@f2000000/AppleMacRiscPCI/mac-io@17/KeyLargo/ata-4@1f000/AppleUltra66ATA/IOATAStandardDevice/IOATAHDDrive/IOATAHDDriveNub/IOHDDrive/TOSHIBA MK3211MAT Media/IOApplePartitionScheme/Hard Drive@11
BSD root: disk0s11, major 14, minor 11
bsd_init: rootdevice = 'disk0s11'.

<= Finding the boot device; unsure why it calls it “BSD root” rather than “Darwin root” or just “MacOS X root.”

devfs on /dev
Ethernet(UniN): Link is up at 10 Mbps - Half Duplex

<= Yep, it’s plugged into the 10BT/half-duplex hub in my Netopia SDSL router.

Resetting IOCatalogue.
kext "IOFWDV" must change "IOProbe Score" to "IOProbeScore"

<= This appears to be a debugging (?) warning in a Darwin kernel extension.

kmod_create: ATIR128 (id 1), 23 pages loaded at 0x5878000, header size 0x1000

<= This appears to describe a kernel module/driver for the ATI Rage 128 chipset, although this Rev. A iBook uses only an ATI Rage Pro chipset. Perhaps this is a driver for the ATI family up to the Rage 128 series?

kmod_create: com.apple.IOAudioFamily (id 2), 16 pages loaded at 0x588f000, header size 0x1000
kmod_create: com.apple.AppleDBDMAAudio (id 3), 5 pages loaded at 0x589f000, header size 0x1000
kmod_create: com.apple.AppleDACAAudio (id 4), 9 pages loaded at 0x58a4000, header size 0x1000

<= Loading drivers for the audio chips on the iBook MoBo.

PPCDACA:setSampleParameters 45158400 / 2822400 =16
kmod_create: com.apple.IOPortServer (id 5), 13 pages loaded at 0x58be000, header size 0x1000
kmod_create: com.apple.AppleSCCSerial (id 6), 9 pages loaded at 0x58cb000, header size 0x1000

<= More drivers for Apple MoBo chipsets.

creating node ttyd.irda-port...
ApplePortSession: not registry member at registerService()

<= This looks like the IrDA infrared port driver attempting to create a connection and failing.

creating node ttyd.modem...
ApplePortSession: not registry member at registerService()

<= It looks like it’s trying to create a connection to the modem port and failing.

.Display_ATImach64_3DR3 EDID Version 1, Revision 1
Vendor/product 0x0610059c, Est: 0x01, 0x00, 0x00,
Std: 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101,
.Display_ATImach64_3DR3: user ranges num:1 start:91800480 size:ea680
.Display_ATImach64_3DR3: using (800x600@0Hz,16 bpp)

<= These appear to be setting GUI resolution at 800 x 600 in 16-bit color (which is what they had been set to in the GUI controls) 

kmod_create: SIP-NKE (id 7), 7 pages loaded at 0x59b8000, header size 0x1000
kmod_destroy: ATIR128 (id 1), deallocating 23 pages starting at 0x5878000

<= Not sure about this; probably unloading the ATI Rage 128 driver from the kernel (?)

MacOS X’s Hardware Drivers and Support

Following on the above: many people, when they think about hardware and drivers that Apple needs to create for Darwin/Mac OS X, have one of two thoughts. It’s either “That should be really easy since Apple has all standardized hardware,” or “Won’t that be hard, since Macs use a bunch of whacked-out hardware that nobody else has ever heard of?” The answer is somewhere in between.

One of Apple’s few built-in advantages has always been that, since it creates its own hardware as well as software, it needs to support only a small fraction of the devices that any commodity x86-based OS might need to support. Furthermore, Apple’s dictum that Mac OS X will support only “Apple G3-based computers” makes it seem that hardware driver support would be relatively straightforward. This, however, is not the case. Apple’s “G3” support actually involves support for two wildly differing branches of hardware (and some “in-between” models).

Apple (circa 1996 or so) suffered in terms of hardware manufacturing and compatibility because of the amount of “non-standard” hardware it used. Aside from the obvious use of Motorola/IBM PowerPC CPUs instead of commodity x86 CPUs, some of Apple’s desktops used the Texas Instruments NuBus expansion system instead of PCI, AGP or ISA; a proprietary serial bus for printers/modems; the Apple Desktop Bus (ADB) for keyboards and mice; external SCSI-1 for all other peripherals; and a variety of other custom Apple MoBo (motherboard) components.

However, Apple in the “Age of Steve” has moved to a more industry standard-compliant position. When Apple ditched beige colors (starting with the Jobs-directed iMac in 1998), it moved to a legacy-free environment, ditching a lot of its older custom hardware. Apple also effected a number of other hardware changes, moving more of the Mac OS “Toolbox” routines from custom ROM chips into software (MacOS X shouldn’t need them at all) and moving to unify all of its lines with a unified motherboard architecture (UMA-1). 

The new machines (the ever-fly Steve Jobs’ “kickin’ it new-school legacy-free style”) ditched floppy disk drives and the old Apple Desktop Bus and serial ports in favor of USB for keyboards, mice and low/medium-speed peripherals. Built-in external SCSI was soon eliminated in favor of a mixture of USB and IEEE 1394 (“FireWire”) for higher-speed peripherals. With the introduction of the G4-based desktops, 2xAGP replaced PCI as the video card slot, leaving three 33-MHz PCI slots for internal expansion.

Drawing OS X’s “supported models” cut-off line at Apple’s G3s (which excludes Apple or clone-vendor models with G3 CPU upgrade cards) eliminates needing to support much of the legacy hardware that Apple has used in the past. However, there are a few Apple G3 models that bridge both technology generations, creating a notable thorn in Apple’s side. Because the original G3 desktops and PowerBooks included legacy technology (ADB, Apple serial ports, older chipsets), Apple must support these devices in Darwin and Mac OS X.

Type I/II PC card support does not appear to be available in Mac OS X Public Beta (possibly the reason there is no *official* support for Apple’s IEEE 802.11 “Airport” cards); IEEE 1394 (FireWire) support does not appear to be available, either. Apple’s new UMA-2 chipset is rumored to be introduced with new models at January’s Macworld expo; however, the specs of this chipset can only be guessed at now.

The Mysteries of “defaults”

Much of this article and the previous one have been the result of aimlessly exploring the file system of MacOS X from the command line, finding things that seemed odd or interesting and wondering, “Hmm, what will this do when I poke it?” I almost wish I hadn’t stumbled onto this next item. As I investigated its operation and the history of its implementation, I just became more curious about certain design decisions. Sort of like … well, NetInfo.

The item in question is the defaults command. After finding it, reading its man page (defaults(1)), and playing with it a little, it appeared sort of cool, but pretty mundane. The command allows for the storage and retrieval of system and application level user preferences. Please forgive the reference if it’s too far off, but it’s like a sort of “Windows registry” for MacOS X.

To get an idea of what information the system stored, I experimented with the command to see what it would tell me. Typing ‘defaults domains’ spit back the following list of information categories, or “domains” in Apple parlance:

% defaults domains
NSGlobalDomain ProcessViewer System%20Preferences com.apple.Calculator com.apple.Console com.apple.HIToolbox com.apple.Sherlock com.apple.Terminal com.apple.TextEdit com.apple.clock com.apple.dock com.apple.finder com.apple.internet com.apple.keycaps loginwindow

Those domains that related to applications illustrated Apple’s suggested domain naming convention of preceding the application name with the software vendor’s name. In this case, on a system containing solely Apple software, the application domains all began with ‘com.apple’.

I found that there were a variety of ways to view the data contained in these domains. In order to view the data for all domains, simply type ‘defaults read’. To query instead for information specific to a domain, use the form ‘defaults read <domain name>’ (without the arrows), or to grab a specific value, use ‘defaults read <domain name> <key>’. The command also allows the setting of values using the form ‘defaults write <domain name> <key> <value>’. The man page describes a variety of other ways to use the command.
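
For example, using the clock preferences examined below, flipping the clock to 24-hour mode from the shell might look like this:

% defaults read com.apple.clock 24Hour
0
% defaults write com.apple.clock 24Hour 1
% defaults read com.apple.clock 24Hour
1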

Having read about XML-based plists (property lists) in MacOS X (originally from John Siracusa’s excellent series of Ars Technica articles on the OS, indexed at http://arstechnica.com/reviews/4q00/macosx-pb1/macos-x-beta-1.html), I assumed that this service might be based on an XML back-end. A quick search through the filesystem confirmed this suspicion. Per-user preferences were found under ~/Library/Preferences, with each application having its own plist file. System-wide preferences showed up under /Library/Preferences, and although they were not present on this non-networked machine, I would be willing to bet that network-wide preferences could be found under /Network/Library/Preferences.

I did a little comparison using the preference data from the system clock application. First, I read the data using the defaults command, then I located the plist version in my ~/Library/Preferences directory. The results below show how the system translates the clock data into the XML plist format.

% defaults read com.apple.clock 
{
 24Hour = 0; 
 ColonsFlash = 0; 
 InDock = 0; 
 "NSWindow Frame Clock" = "-9 452 128 128 0 4 800 574 "; 
 ShowAnalogSeconds = 1;  
 Transparancy = 4.4;  
 UseAnalogClock = 1;  
 UseDigitalClock = 0;  
} 

% cat Library/Preferences/com.apple.clock.plist 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist SYSTEM "file://localhost/System/Library/DTDs/PropertyList.dtd">
<plist version="0.9">
<dict>
 <key>24Hour</key>
 <false/>
 <key>ColonsFlash</key>
 <false/>
 <key>InDock</key>
 <false/>
 <key>NSWindow Frame Clock</key>
 <string>-9 452 128 128 0 4 800 574 </string>
 <key>ShowAnalogSeconds</key>
 <true/>
 <key>Transparancy</key>
 <real>4.4e+00</real>
 <key>UseAnalogClock</key>
 <true/>
 <key>UseDigitalClock</key>
 <false/>
</dict>
</plist>

I was a bit intrigued by the date of the defaults man page. It was dated March 7, 1995, which as far as I knew was prior to the advent of XML. Apparently, there was some amount of history to this command and database that was not immediately obvious. A little research revealed that the defaults command and database date back at least to (yes, you guessed it) NeXTstep. The information found confirms that it was subsequently adopted for OpenStep and finally MacOS X. It also appears that the “back-end” evolved over time, from straight ASCII string based configuration files to today’s XML-based plist files. (As usual, if you know more about the history of this database, please feel free to share your knowledge, and please forgive my lack of knowledge of all things NeXT.)

With this history information in hand, I became curious about the present implementation. I was also curious whether an API existed. Obviously, normal applications would not use this command to store and retrieve persistent data. However, there was no mention of an API in the defaults man page, nor was there a link to any other information. A quick trip to the Apple developer website provided the information I was looking for. The API was called “Preference Services” and was a part of MacOS X’s “Core Foundation” (http://developer.apple.com/techpubs/corefoundation/PreferenceServices/Preference_Services/Concepts/CFPreferences.html). Apple provides two sets of APIs based on the developer’s needs: a high-level one that allows the developer to quickly set and retrieve data in the default domain (defined as the current user, application, and host), and a low-level API for setting and retrieving data in very specific domains (specific user(s), application(s), and host(s)). The API also provides a method of storing data for suites of applications, such as the standard office productivity apps we all know and love to hate.

During this exploration, I have been having a hard time deciding whether I think this is all cool or not. On one hand the idea of a system-(and even network-)wide preferences database seems pretty cool, especially to a BSD-er like myself, but at the same time, it is really nothing new. In the same way, at first glance, the idea of an XML-based back-end seems pretty innovative, but is it? Sure it’s cool to look at, but so what? The existence of the defaults command and an access API mean that the actual plist files are not intended for public consumption.

The defaults command and Preference Services API indicate that the whole database is supposed to be a black box to the system, the application, and especially the user. If this is the case, why not go with a high-horsepower back-end, one that offers more robust searchability and speed than could be achieved via the “crapload of text files” approach? I think the argument from Apple was supposed to be that the files should be architecture-neutral in order to be easily portable. If that was the case, why not just leverage an existing architecture-independent binary database format? I know, for instance, that MySQL can do it; why not Apple?

The other argument I can think of might be that the XML format is essentially patch-able. Assuming the data is not customized too much by the running application, updates could be distributed and installed from small plist patch files. However, that doesn’t seem like a very convincing argument. All this having been said, this article will probably be published and the next day, a reader will say, “Well, <profoundly obvious answer succinctly stated>. Please unset your $LUSER variable.” The first person to write in with this wins a prize. 😉

Random Ramblings about BSD on MacOS X (Part 1)

By Jeffrey Carl and Matt Loschert

Daemon News, November 2000

This is the first chapter in a series of observations, representing the adventures of a couple of BSD admins (one with a lot of prior MacOS experience, the other with more on the BSD side) poking around the command line on an iBook laptop running Apple’s Mac OS X Public Beta. We’ll attempt to provide a few notes and observations that may make a BSD admin’s work with Mac OS X easier.

Mac vs. Unix

Right up front, I should mention what a strange animal Mac OS X is. Since 1984, the Mac OS has always been about the concept that “my OS should work easily, without my knowing how it works.” Unix has always been about the idea that “I should be able to control every aspect of my OS, even if it isn’t always easy to figure out how.” Apple – whose very name is anathema to many Unix admins – is now trying to combine both, in the world’s first BSD-based OS that is likely to ship 500,000 copies in its first year.

Apple isn’t alone in this hybridization effort; Microsoft will presumably ship “Whistler,” its NT/2000-based consumer OS, sometime in mid-2001. It appears that everywhere, the rush is on to deliver consumer OSes based on the kernels of server operating systems. In the long run, this is probably a Good Thing®, but Mac OS X and MS Whistler will have to introduce millions of desktop users to the advantages and problems of server OS design that we as server OS admins have been dealing with for years. Real protected memory and pre-emptive multitasking, sure … but also real multi-user systems and all of the permission and driver complexities that they require. However, both Microsoft and Apple are investing the time and effort into developing a real user-friendly interface that your grandmother could configure (sorry, KDE and GNOME). It should be interesting to watch.

If you, like most other Internet server admins with a fully-functional brain stem, prefer Unix over the NT kernel, then Mac OS X will be the first true consumer OS that you will ever feel comfortable with. And, whether you love or hate Apple, it’s worth your time to get acquainted with Mac OS X and what it offers to a BSD Unix administrator.

Right now, there are a lot of questions out there about Mac OS X and its attempts to marry a BSD / Mach microkernel with an all-singing, all-dancing Mac GUI. This is largely because there is unfortunately a comparatively small crossover between Mac administrators and Unix administrators (much like the slim crossover between supermodels and serious “Dungeons and Dragons” players). Though neither Jeff nor Matt is a total über-guru in Mac OS or BSD, we’ll attempt to bridge the gap a little and give the average BSD admin an idea of what he or she will see if they peek around under the hood of Mac OS X Public Beta. Much of this is “first impression” notes and questions, without too much outside research behind it; explanations or clarifications from those who have more detailed answers are gratefully appreciated. 

On a side note, Matt and Jeff will both use “I” rather than “we” in this article as we go, since we are both members of the FreeBSD Borg Collective and think of ourselves as one entity. 😉

Poking Under the Hood: First Impressions

To examine the hidden Unix world on OS X, navigate in the GUI to Applications >> Utilities >> Terminal. The first thing you may notice is that – as you might expect – the default shell is tcsh. You’ll quickly find that many of the user apps you’d expect in a typical Unix installation are there, plus a few extras and minus a few others (showing the refreshing influence of many developers, each pushing for the inclusion of their favorite tools). Interestingly, pico is there, although Pine is not; Emacs and vi are there, also (ed may be there, but I didn’t check since I thought everyone who might use ed was fossilized and hanging on display in a museum by now). Other fun inclusions are gawk, biff, formail, less, locate, groff, perl 5.6, wget, procmail, fetchmail, and an openssh-based client and server.

The first thing that I do when beginning to work on a new system is to set my shell and copy over my personal config files.  When presented with the Mac OS X shell prompt, I was pleased to see that it was tcsh; but at the same time I was a little confused at how to customize it. Usually, I just drop my .cshrc file in ~/, and off I go. Well, ~/ in this case is /Users/myusername.  This is unlike the normal /usr/home or /home that most admins are used to.  Would tcsh be able to find my .cshrc there?

Well, since I didn’t have the machine network-connected while I was originally exploring, and I was too lazy to hand-copy parts of my personal .cshrc file over to this directory, I instead began poking around /etc looking at how the default .cshrc worked.  I quickly found a small csh.cshrc which contained a single line sourcing /usr/share/init/tcsh/rc.  This is where it gets interesting.  

This rc file sets up a very cool method of providing standard system shell features that can be overridden by each Mac OS X user (assuming he/she realizes that the “shell” does not refer to the funky computer case).  The rc file first checks to see if the user has a ~/Library/init/tcsh directory.  If so, it sets this as the tcsh initialization directory; otherwise it sets the /usr/share/init/tcsh directory as the default.  It then proceeds to source other files in the default directory including, for instance, environment, aliases, and completions files which each in turn source files (if they exist) found in the abovementioned tcsh initialization directory.

In this way, the system provides some powerful “standard” features, but still gives the experienced user the ability to override anything and everything.  I dropped some customizations in my personal ~/Library/init/tcsh directory and immediately felt at home.  Without a doubt, the old school UNIX curmudgeons will hate the built-in shortcuts, but I still must applaud the developers for making an attempt at providing a useful set of features for the “new” user.  No one will ever agree on a standard set, but it’s good to see that the average Mac “What is this, some kind of DOS prompt?” user will have a very functional command line, should he/she choose to explore it.  I must admit, though, that some of these completions can be a little annoying when typing quickly.
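
As a minimal sketch of this setup (assuming the per-user files take the same names as the system ones – check /usr/share/init/tcsh/rc for the exact names it sources), personal aliases could be added like so:

% mkdir -p ~/Library/init/tcsh
% echo 'alias ll ls -lg' >> ~/Library/init/tcsh/aliases
% echo 'alias h history 25' >> ~/Library/init/tcsh/aliases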

Digging Around

When you drop to the shell, you’ll notice that you’re logged in with the “user name” you selected for yourself when you installed the OS. When you create a user during the installation process, you are informed that “you will be an administrator of this computer.” Yet when you take a look at /etc/passwd in the shell, you’ll see no entry for your username. Nonetheless, if you su to “root,” you’ll notice that root’s password is the same as the administrative user you created. Even stranger, although you are an “administrator,” as long as you’re logged in as that user, you still don’t have the permissions of “root.” What’s going on here?  Well, it turns out that this functionality has been co-opted by Apple’s NeXT-inherited NetInfo (which I will describe below).
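
A quick check makes the point – the GUI-created user is absent from the flat file, but present in NetInfo’s database (nidump is one of the NetInfo command-line tools):

% grep myusername /etc/passwd
(no output – the flat file doesn’t know about you)
% nidump passwd . | grep myusername
(one line of output – NetInfo does)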

I couldn’t resist running dmesg to see what would come up.  To my surprise, the output was pretty boring.  The most bizarre thing was that a packet filter was enabled, but with a default-to-open policy and no logging.  An open firewall … why?  If they weren’t planning to enable firewall rules, why compile packet filtering code into the kernel in the first place?  At this point, I felt I had to do at least a little searching for some sign that this had been considered at one point.  Well, /etc held no clues, and there was nothing obvious in /var (the only other probable location I could think of).  I guess they simply wanted to leave the door conveniently ajar should they choose to go back and reconsider the decision.
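
The kernel banner (“default to accept, logging disabled” – see the boot log walkthrough above) reads exactly like FreeBSD’s ipfw, so – assuming Darwin inherited that interface – the empty ruleset should be visible as root with something like:

# ipfw list
65535 allow ip from any to any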

When I visit a machine, I am always curious to check out /etc.  This directory usually reveals some interesting information about the operating system and/or the administrator.  In this case, while touring the usual suspects, I noticed something that, again, I did not expect.  The wheel group listed in /etc/group did not contain any users (it seemed like some Communist Linux system!).  You typically expect to find root along with at least one administrative user account in /etc/group.  In this case, wheel was empty, leaving me to wonder why they even kept the concept, since this functionality had apparently been folded into an alternate privilege manager. As I found out, this functionality is also handled by NetInfo.

While checking out /etc/syslog.conf, I noted that quite a bit of system information gets routed to /dev/console, as one would expect.  What was strange was that later while exploring /var/tmp (to look for tell-tale temp files), I found a file named console.log.  It appeared to contain a file version of the most recent couple of console messages.  I verified that identical (albeit fewer) entries appear when using the GUI-based console viewer.  I’m no super OS guru … but this strikes me as a strange hack.  Why not simply send the same information to a standard log file via syslog and have the GUI console app read from that?  That’s what syslog’s there for!  Maybe someone else out there can shed some light on this one.

Another oddity was /etc/resolv.conf.  It was symlinked to /var/run/resolv.conf, which seemed a little strange.  Even more so when /var/run/resolv.conf turned out to be non-existent. Okay … well, there was no named running (I checked) and the resolver man pages gave no clue.  So what was going on here?  Apparently something odd, since a quick nslookup responded with the DNS server listed as being the local machine.  No named, no resolv.conf, how about /etc/hosts?  Well, /etc/hosts had no special magic in it, but it did have a note mentioning that the file is only consulted while the machine is in single-user mode.  At all other times, the request is handled by lookupd working through the facilities provided by NetInfo.  Hmm … NetInfo was beginning to sound very interesting.

After a bit of roaming around, I became curious as to whether root would have a home directory on the machine.  Given the difference in user and administrator handling on Mac OS X, I wasn’t even sure that root would have a home directory.  But, lo and behold, there was ~root under /var (or more properly /private/var since it’s a symlink) with a set of default files and directories resembling any other user in /Users.  None of this should have come as a surprise to me since /etc/passwd contained this information.

Strange Services

While exploring the system, you’ll find some interesting services enabled.  I was surprised to see the nfs daemon running. Thus, it didn’t faze me when I found the portmap daemon (a service required in order to run nfs) hanging around.  Finding nfs active was unexpected, since I would only run it if necessary, due to potential security concerns.

Another surprise was seeing a full-fledged NTP daemon running.  It was cool to see this service as a standard part of Mac OS, but it seemed a bit strange to do continuous, high-accuracy time synchronization via ntp for a desktop, consumer OS.  Why not use ntp’s little brother, ntpdate, on a periodic basis for this same functionality?  Do Mac OS users really need the accuracy of continuous ntp, when a functionally similar result could be obtained without the security risk of running another network daemon?

The answer is that, for now, “yes they do.” Since the introduction of Mac OS 9, Mac users have been given the default option of using NTP to synchronize their desktop clocks via NTP servers, and it appears that Apple wants to give Mac OS X users (presumably in a server setting) the option of running an NTP server for other Macs on a LAN/WAN.
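
For what it’s worth, the periodic alternative is nearly a one-liner: a root cron job could run ntpdate hourly instead of leaving a daemon listening (the server name and path here are only examples):

0 * * * * /usr/sbin/ntpdate time.apple.com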

Speaking of network services, a little netstat lovin’ showed a couple of other interesting tidbits.  The machine was not as quiet network-wise as I had expected.  Along with the nfs, portmap, and ntp network activity, I found listening sockets open on ports 752 and 755, with an active connection to port 752 from another local privileged port.

After some man page reading and poking at a few of the running daemons, I was able to narrow things down a little.  I found that HUP-ing lookupd (an interface daemon to NetInfo) caused the active connection to be disconnected and reestablished (obvious since the new connection originated from a new port).  I also found that HUP-ing nibindd (the NetInfo binder, a sort of librarian for NetInfo) caused the listening sockets to be closed and reappear on new ports. That struck me as quite odd.  I would have expected them to re-bind to the same ports.  Even with these addresses changing, lookupd somehow knew where to find the ports, as I saw new connections established shortly after the HUP signal was received.
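
If you want to reproduce the experiment, the PID files that these daemons leave under /var/run make it easy:

% netstat -an | grep LISTEN
# kill -HUP `cat /var/run/nibindd.pid`
% netstat -an | grep LISTEN
(note that the listening ports have moved)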

Not knowing much about NetInfo, I was curious why this service was implemented using tcp sockets.  I assumed that the service must be distributed, available to and from remote hosts.  Otherwise, the developers probably would have implemented this using Unix domain sockets since they are substantially faster and are safer for local-only protocols. Not having much background on NetInfo at the time, I was somewhat puzzled.

Welding the Hood Shut

Apple’s longstanding (since Mac System 1.0 in 1984) policy has been to prevent users from screwing up their systems by hiding important items from them, or making it difficult to access the cool things that they might play with and hose their hard drive. This attitude has extended to OS X, and things like header files, gcc and gdb are not present (although these were included in Apple’s earlier “developers-only” previews of Mac OS X), not that the average Mac user would be able to “accidentally” damage their system with gcc.

Not to worry, though; the Mach / BSD portion of Mac OS X is maintained separately as an open-source operating system that Apple calls “Darwin.” In fact, a uname -a at the command line in MOSXPB reveals:

Darwin ibook 1.2 Darwin Kernel Version 1.2: Wed Aug 30 23:32:53 PDT 2000; root:xnu/xnu-103.obj~1/RELEASE_PPC  Power Macintosh powerpc

You can get the abovementioned developer goodies (and much more) by downloading Darwin or following the CVS snapshots for various packages. 

At the root of the filesystem, it’s not hard to see that most of what you would expect to find is there somewhere; it may just be in a different place than you expected. The first thing you’ll likely notice is that a lot of what’s there is being hidden from the user by the MacOS X GUI. The Mac OS has always had its share of hidden files (for volume settings, the desktop database, etc.). However, you’ll notice a file in / called “.hidden” which lists the items in / that the GUI hides. The contents of .hidden are:

bin (the Unix directory for binaries needed before /usr is mounted)

cores (a link to /private/cores; a directory – theoretically, a uniform location for placing Unix core dumps)

Desktop DB (the GUI desktop database for files, icons, etc.)

Desktop DF (don’t remember what this does)

Desktop Folder (for items that are shown on the desktop)

dev (the Unix devices directory)

etc (a link to /private/etc; contains normal Unix configuration files and other normal /etc stuff)

lib (listed, but not present in /)

lost+found (the Unix directory for “orphaned” files after an unclean system shutdown. Classic Mac OS users are used to looking in the Trash for temp files saved after a crash or other emergency shutdown)

mach (a link to mach.sym)

mach_kernel (the Mach kernel itself)

mach.sym (listed as a Mach-O executable for PowerPC, but I’m not sure exactly what it does. Any Darwin users out there with more knowledge should correct me.)

private (contains var, tmp, etc, cores and a Mac OS X directory called Drivers, whose relationship to /dev is unclear)

sbin (Unix system binaries traditionally needed before /usr mounts)

SystemFolderX (not present in /; may be a typo or used in the future?) 

tmp (a link to /private/tmp; the Unix temporary files directory)

Trash (files selected in the GUI to be deleted)

usr (just about everything in BSD Unix these days 😉 )

var (Unix directory meant once upon a time for “variable” files – mail, FTP, PID files, and other goodies)

VM Storage (the classic MacOS virtual memory filesystem – usually equal to the size of physical memory + swap space)

A brief digression about .hidden: Editing this file and adding any directory or file name will cause the GUI to hide the item. For example, edit this file in the shell and add ‘Foo,’ create a directory in / with that name, log out and log back in, and it won’t be seen. 
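
In shell terms, the experiment looks like this (as root):

# mkdir /Foo
# echo 'Foo' >> /.hidden
(log out and back in; /Foo is now invisible in the GUI, but still right there in the shell)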

However, it isn’t as simple as it sounds. Strangely, there’s an item in / called “TheVolumeSettingsFolder” whose name isn’t in .hidden, but still doesn’t show up in the GUI. This indicates that there is more to what’s shown and hidden in the GUI than just the “.hidden” file. Also, adding a “.hidden” file to another directory than “/” does not appear to cause files of that name to be hidden in that directory. Furthermore, /.hidden does not prevent files with those names from appearing in lower directories, nor does it hide directories in lower directories when an absolute path is placed in .hidden. If anyone can clear this up for me, I’d love to know what’s at work here.

Getting back on track: You’ll notice that all of the traditional BSD file hierarchies are present, although the GUI hides them from the user so that they are unaware that these Unix directory structures exist. Furthermore, some of the directories you see are in fact soft links to unusual places (as mentioned, etc, tmp and var are all links to inside /private; I refuse to speculate why without having several drinks of neat Scotch first). Also, unlike previous incarnations of the Mac OS, the user is unable to rename the “System” directory from the GUI.

As a brief aside, it should be noted that Apple’s great struggle here (and the immensity of their effort should be appreciated) is to combine the classic Mac OS “I can move or rename damn near anything I want” ethos with Unix’s “it’s named that way for a reason” ethos. How Apple chooses to resolve this struggle in the final release will speak volumes about their choice (I’m ashamed to admit I’ve forgotten who put it this way, but it’s brilliant) between an OS that “the user controls” and an OS that “allows you to use it.” And, let’s face it, Unix has always been the latter.

Also, logical or physical volumes are listed under “/” using their “name.” Rather than use a Unix filesystem hierarchy, DOS’s A: or C: drives or even Windows 9x’s “My Computer,” Mac users have always been able to name their hard drives or partitions arbitrarily. Each drive or partition thereof was then always mounted and visible on the desktop. If I have a drive in my computer which I’ve named “Jeff’s Drive,”  you’ll see it from the shell as: /Jeff’s\ Drive, and all of its file and directory contents are viewable underneath it. Similarly, if a user installs MOSX PB on a drive with Mac OS 9 already installed, at installation time their old files and directory hierarchies are all moved to a directory called “Mac OS 9” in /.

** What is strange about the above?? It sounds very UNIXy.  As an example, many people mount their floppy drives as ‘/floppy’, but nothing is preventing them from mounting the drives as ‘/Cool\ Ass\ Plastic\ Coaster\ Partition’. **

NetInfo & Friends

Well, by the time I got this far, I realized that this article wouldn’t really be complete without some discussion of the mysterious NetInfo.  I also knew that a little bit of research was in order.  While searching, I learned that NetInfo is a distributed configuration database consisting of information that would normally be queried through a half-dozen separate sub-systems on a typical UNIX platform.  For example, NetInfo encompasses user and group authentication and privilege information, name service information, and local and remote service information, to name just a few.

When the NetInfo system is running (i.e. when the system is not in single-user mode), it supersedes the information provided in the standard /etc configuration files, as well as being favored as an information source by system services, such as the resolver.  The Apple engineers have accomplished this by hooking a check into each libc system data lookup function to see if NetInfo is running.  If so, NetInfo is consulted; otherwise the standard files or services are used.

The genius of NetInfo is that it provides a uniform way of accessing and manipulating all system and network configuration information.  A traditional UNIX program can call the standard libc system lookup functions and use NetInfo without knowing anything about it.  On the other hand, MacOSX-centric programs may directly talk to NetInfo using a common access and update facility for all types of information.  No longer does one have to worry about updating multiple configuration files in multiple formats, then restarting one or more system daemons as necessary.
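
niutil, another of the NetInfo command-line tools, shows off this uniformity – the same two commands browse and read any kind of record, whether it started life in /etc/passwd, /etc/hosts or anywhere else:

% niutil -list . /
(lists the top-level NetInfo directories: users, groups, machines, and so on)
% niutil -read . /users/root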

The other benefit of the system is that it is designed to be network-aware from the ground up.  If information cannot be found on the local system, NetInfo may query upward to a possibly more knowledgeable information host.

NetInfo also knows how to forward requests to the appropriate traditional services if it does not have the requisite information.  It can hook into DNS, NIS, and other well-known services, all without the knowledge of the application making the initial data request.

NetInfo is a complex beast, easily worth an article of its own.  If you want more information, here are a few tips.  I found that reading NetInfo man pages was frustrating.  Most of the pages tended to heavily use NetInfo-related terms and concepts with little to no definition.  Nevertheless, if interested, check out netinfo(5) (netinfo(3) is simply the API definition), netinfod(8), nibindd(8), and lookupd(8).  However, the best information that I found was on Apple’s Tech Info Library at http://til.info.apple.com/techinfo.nsf/artnum/n60038?OpenDocument&macosxs.

Getting More Information

Finally, for a great resource on all things Darwin, don’t go to Apple’s website (publicsource.apple.com/projects/darwin/). Instead, go to www.darwinfo.org, and you’ll find lots of great stuff, including the excellent “Unofficial Darwin FAQ.”

If you don’t have access to MacOS X Public Beta but would like to read its man pages to get a better idea of how some of these commands are implemented, you can find a collection of MOSXPB/Darwin man pages online at www.osxfaq.com/man.

Jeff and Matt hope this little tour has been semi-informative and has raised the curiosity of the few brave souls who have made it to the end of this MacOS travelogue.  Next time we will take a look at … and discuss ….  See you next time.

Why Microsoft Will Rule the World: A Wake-Up Call at Open-Source’s Mid-Life Crisis

By Jeffrey Carl

Boardwatch Magazine, August 2001

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

In a Nutshell: The original hype over open-source software has died down – and with it, many of the companies built around it. Open-source software projects like Linux, *BSD, Apache and others need to face up to what they’re good at (power, security, speed of development) and what they aren’t (ease of use, corporate-friendliness, control of standards). They will either have to address those issues or remain niche players forever while Microsoft goes on to take over the world.

The Problem

There is a gnawing demon at the center of the computing world, and its name is Microsoft. 

For all the Microsoft-bashing that will go on in the rest of this column, let me state this up front: Microsoft has done an incredible job at what any company wants to do – leverage its strengths (sometimes in violation of anti-trust laws) to support its weaknesses and keep pushing until it wins the game. That’s the reason I hold Microsoft stock – I can’t stand the company, but I know a ruthless winner when I see it. I hope against reason that my investment will fail.

It has been nearly two years since I wrote a column that wasn’t “about” something, that was just a commentary on the state of things. Many of you may disagree with this, and assign it a mental Slashdot rating of “-1: Flamebait.” Nonetheless, I feel very strongly about this, and I think it needs to be said.

Here’s the bottom line: no matter how good the software you create is, it won’t succeed unless enough people choose to use it. Given enough time and the accelerating advances of other software, I guarantee you that it will happen: the software you love will be eclipsed and abandoned. You may not think this could ever be true of Linux, Apache, or any other open-source software that is best-of-breed. But ask any die-hard user of AmigaOS, OS/2 or WordPerfect, and they’ll tell you that you’re just wishing.

Sure, there are plenty of reasons these comparisons are dissimilar; Amiga was tied to proprietary hardware, and OS/2 and WordPerfect are the properties of companies which must produce profitable software or die. “Open-source projects are immune to these problems,” you say. Please read on, and I hope the similarities will become obvious.

For the purposes of this column, I’m going to include Apple and MacOS among the throng (even though Apple’s commitment to open source has frequently been lip service at best), because they’re the best example of software that went up against Microsoft and failed.

You say it’s the best software out there for you – so why does it matter what anyone else thinks? It doesn’t, at first. But, slowly, the rest of the world makes life harder for you until you don’t have much choice.

In the Software Ghetto

Look at Apple, for example (disclaimer: I’m a longtime supporter of MacOS, along with FreeBSD and Linux). My company’s office computers are all Windows PCs; my corporate CIO insisted, despite my arguments for “the right tool for the job,” that I couldn’t get Macs for my graphics department. “We’ve standardized on a single platform,” is what he said. He’s not evil or dumb; it’s just that Windows networks are all he knows and is comfortable with.

Big deal, right? Most Mac users are fanatics. There’s a community of millions of fellow Mac-heads out there that I can always count on to keep buying Macs and keep the platform alive forever, right? An installed base of more than 20 million desktops plus sales of 1.5 million new computers in the past year alone is enough for perpetual life, right?

Sure, until that number falls below the number of licenses Microsoft decides it needs to keep producing MS Office for Mac. Right now, my Mac coexists with the Windows-only network at my office because I can seamlessly exchange files with my Windows/Office brethren. But as soon as platform-neutral file-sharing goes out the Window (pun intended) – which Microsoft could easily arrange by upgrading Windows with a proprietary document format that can’t be decoded by other programs without violating the DMCA or something asinine like that – I’m going to have to get a Windows workstation.

Or Intuit decides there just aren’t enough users to justify a Mac version of Quicken. Or, several years from now, Adobe decides it’s just not profitable to make Mac versions of Photoshop, InDesign or Illustrator – or ships critical new releases six months or a year behind the Windows versions. I can still keep buying Macs … but I’ll need a Windoze box to run my critical applications. As more people do this, Apple won’t have the revenues to fund development of hardware and software worth keeping my loyalty. And I’ll keep using the Windows box more and more until I finally decide I can’t justify the expense of paying for a computer I love that can’t do what I need.

“Apple,” you say, “is a for-profit company tied to a proprietary hardware architecture! This could never happen to open-source software running on inexpensive, common hardware!”

Open Source with a Closed Door

Let’s step back and look at Linux. A friend of mine works as a webmaster at a company that recently made a decision about what software to use to track website usage statistics. His boss found a product which provided live, real-time statistics – and which only ran on Windows with Microsoft IIS. My friend showed off the virtues of Analog as a web stats tool, but its reports were too complicated for his boss to decipher. Whatever arguments my friend provided (“Stability! Security! The Virtues of Open Development!”) were simply too intangible to outweigh the benefit his boss wanted, which this one Windows/IIS-only package provided. So, they switched to Windows as the hosting environment.
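It’s worth being concrete about what the losing side of that demo looks like. A tool like Analog works by batch-processing the web server’s plain-text access logs after the fact. The sketch below is a hypothetical script of my own – not Analog’s code – assuming Apache’s standard “combined” log format, but it shows the shape of that kind of analysis:

# stats.py -- a minimal sketch of batch web-log analysis, the kind of
# after-the-fact reporting a tool like Analog produces (hypothetical
# example; assumes Apache's standard "combined" log format).
import re
import sys
from collections import Counter

# A combined-format line looks roughly like:
# 127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326 "..." "..."
LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

def summarize(log_path):
    hits = Counter()    # requests per URL
    status = Counter()  # responses per HTTP status code
    with open(log_path) as log:
        for line in log:
            m = LINE.match(line)
            if not m:
                continue  # skip malformed lines rather than crash
            host, timestamp, method, url, code, size = m.groups()
            hits[url] += 1
            status[code] += 1
    return hits, status

if __name__ == "__main__":
    hits, status = summarize(sys.argv[1])
    print("Top 10 URLs:")
    for url, count in hits.most_common(10):
        print("%8d  %s" % (count, url))

Powerful, scriptable, and free – but a pile of counts in a terminal loses a boardroom demo to a live graph every time, which is exactly the tradeoff my friend’s boss decided.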

There may come a day when you suggest an open-source software solution (let’s say Apache/Perl/PHP) to your boss or bosses, and they ask you who will run it if you’re gone. “There are plenty of people who know these things,” you say, and your boss says, “Who? I know plenty of MCSEs we can hire to run standardized systems. How do we know we can hire somebody who really knows about ‘Programming on Pearls’ or running our website on ‘PCP’ or whatever you’re talking about? There can’t be that many of them, so they must be more expensive to hire.” Protest as you might, there isn’t a single third-party statistic or study you can cite to prove them wrong.

If you ask the average corporate IT manager about open source, they’ll point to the previous or imminent failures of most Linux-based public companies as “proof” that open-source vendors won’t be there to provide paid phone support in two years like Microsoft will. 

I’m willing to bet that most of you out there can cite examples of the dictum that corporate IT managers don’t ever care about the costs they will save by using Linux. They are held responsible to a group of executives and users that aren’t computer experts, aren’t interested in becoming computer experts, and wouldn’t know the virtues of open source if it walked up and bit them on the ass. They want it to be easy for these people, and fully and seamlessly compatible with what the rest of the world is using, cost be damned. Say what you will – right now, there’s just no logical reason for these people not to choose Windows.

So maybe the Linux user base drops down to the point where the Mac’s is (still a significant number) – only the die-hard supporters. But how many of you Linux gurus out there don’t have a separate Windows box or boot partition to play all the games that aren’t developed for Linux because of its lack of users/market share? Well, what happens when the next killer app is Windows-only, and you use Linux less and less? Or when the next cool web hosting feature only exists for MS/IIS? Or as more MS Internet Explorer-optimized websites appear?

I’m not arguing that Linux or BSD would ever truly disappear (there are still plenty of OS/2 users out there). I am, however, saying that as market share erodes, so does development; and, over the long run – if things continue on the present course – Windows has already won.

The main point is this: niche software will eventually die. It may take a very long time, but it eventually will die. Mac or Linux supporters claim that market share isn’t important: look at BMW, or Porsche, which have tiny market shares but still thrive. The counterpoint is that if they could only drive on compatible roads, and the local Department of Transportation had to choose between building roads that 95% of cars could ride on or building separate roads for these cars, they would soon have nowhere to drive. True, Linux/BSD has several Windows ABI/API compatibility projects and Macs have the excellent Connectix VirtualPC product for running Windows on Mac, but very few corporate IT managers or novice computer users are going to choose those over “the real thing.” And I’m willing to bet that those two groups make up 90% of the total computer market. 

You can argue all you like that small market share doesn’t mean immediate death. You’re right. But it means you’re moribund. One of the last bastions of DOS development, the MAME arcade game emulator, is switching after all these years to Win32 as its base platform – because the lead developer simply couldn’t use DOS as a viable development platform anymore. It will take time, but it will happen. Think of all the hundreds of thousands (if not millions) of machines out there right now running Windows for Workgroups 3.11, OS/2, VMS, WordPerfect 5.1, FrameMaker, or even Lotus 1-2-3. They do what they do just fine. But, eventually, they’ll be replaced. With something else.

The Solution

All this complaining aside, the situation certainly isn’t hopeless. The problems are well known; it’s easier to point out problems than solutions. So, what’s the answer? 

For that 90% of users who will decide marketshare and acceptance, two things matter: visible advantages in ease of use, or quantifiable bottom-line cost savings. Note, for example, how Mac marketshare declined from 25% to less than 10% as the “visible” ease-of-use differential between Mac System 7 and Windows 95 shrank. Or, look at the math IT buyers actually do: more-expensive Macs with fewer support personnel versus cheaper Windows PCs with more support personnel – personnel who are certified and whose salary costs can be estimated.
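To see why that second comparison is so seductive to IT buyers, run the toy arithmetic yourself. Every number below is invented purely for illustration; the shape of the calculation is the point, not the winner:

# tco.py -- toy total-cost-of-ownership arithmetic. All figures are
# hypothetical, chosen only to show the calculation IT buyers actually do.
macs, mac_price, mac_admins = 100, 2500, 1  # pricier machines, fewer admins
pcs, pc_price, pc_admins = 100, 1500, 2     # cheaper machines, more admins
admin_salary = 50000                        # certified staff: an estimable cost

mac_tco = macs * mac_price + mac_admins * admin_salary  # 300,000
pc_tco = pcs * pc_price + pc_admins * admin_salary      # 250,000
print("Mac TCO:", mac_tco, "  PC TCO:", pc_tco)

Notice that whichever side wins, every term in the Windows column – hardware prices, headcount, certified-admin salaries – can be looked up. That estimability, as much as the total, is what makes it the “safe” choice.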

Open-source software development is driven by programmers. Bless their hearts, they create great software, but they’re leading it to its eventual doom. They need to ally firmly with their most antithetical group: users. Every open-source group needs to recruit at least one user-interface or marketing person (or such a person needs to volunteer). Every open-source project that doesn’t have at least one person asking the developers at every step “Why can’t my grandmother figure this out?” is heading for disaster. Those that do are making progress.

Similarly, open-source projects that face proprietary competitors around some industry standard, but aren’t taking a Microsoft-esque “embrace and extend” approach, are going to fall behind. If they don’t provide new APIs or hooks for new features (and there’s nothing against making these open and well-documented), Microsoft will when it releases a competing product (and, believe me, it will; wait until Adobe has three consecutive bad quarters and Microsoft buys them). The upshot is that open-source projects can’t just conform to standards that others with greater marketshare will extend; they need to provide unique, fiscally-realizable features of their own.
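To be clear about what “providing hooks” means in practice, here is a minimal sketch of an open, documented extension point. Everything in it is hypothetical and illustrative – no real project’s API – but it shows how a project can invite third-party features on its own terms:

# plugins.py -- a minimal sketch of an open, documented extension point.
# Every name here is hypothetical, not taken from any real project.

_hooks = {}  # maps event names to the callbacks plugins have registered

def register(event, callback):
    # Plugins call this to attach a callback to a named, documented event.
    # Because the contract (event names, argument shapes) is published and
    # stable, third parties can add features without patching the core.
    _hooks.setdefault(event, []).append(callback)

def fire(event, **kwargs):
    # The core program calls this at well-documented points.
    for callback in _hooks.get(event, []):
        callback(**kwargs)

# --- a third-party plugin, shipped separately from the core program ---
def announce_save(filename):
    print("document saved to", filename)

register("document_saved", announce_save)

# --- somewhere in the core application ---
fire("document_saved", filename="report.txt")

Publish the hook contract, keep it stable, and extenders come to you – which is precisely the position Microsoft engineers for itself with every “extended” standard.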

Although Red Hat has made steps in this direction, other software vendors and projects (including Apple, Apache, GNOME, KDE and others) should work much harder to provide some rudimentary form of certification process, a standardized qualification for support personnel. Otherwise, corporate/education/etc. users will have no idea what it costs to hire qualified support staff.

Lastly, those few corporate entities staking their claims on open source should be sponsoring plenty of studies to show the quantifiable benefits of using their products (including the costs of support personnel, etc.). The concepts of “ease of use” or “open software” don’t mean jack to anyone who isn’t a computer partisan; those who consider computers to merely be tools must be shown why something is better than the “safe” choice.