Webmin: Easy Freenix Administration through a Web Interface

By Jeffrey Carl

Boardwatch Magazine
Boardwatch Magazine, May 2000

Boardwatch Magazine was the place to go for Internet Service Provider industry news, opinions and gossip for much of the 1990s. It was founded by the iconoclastic and opinionated Jack Rickard in the commercial Internet’s early days, and by the time I joined it had a niche but influential following among ISPs, particularly for its annual ranking of Tier 1 ISPs and through the ISPcon tradeshow. Writing and speaking for Boardwatch was one of my fondest memories of the first dot-com age.

Are you looking for an easy web-based GUI to administer Unix servers? Want to provide your Unix clue-challenged hosting customers with an easy way to administer their machines? 

If so, try Webmin (http://www.webmin.com/webmin/), a free application which allows you or your users to easily administer their Freenix system through a web interface. You can use it yourself, or you can offer it to clients with dedicated web or mail servers to do some of their own administration and take the burden off of you.

Installation and Setup

Webmin is a Perl 5 program that allows you to configure everything from user accounts to DNS and Samba filesharing and more. Webmin is free, and runs on a wide variety of Linuxes (including Caldera OpenLinux, RedHat, S.u.S.E., Slackware, Debian, TurboLinux, Mandrake, Delix DLD, Apple MkLinux) as well as FreeBSD and OpenBSD. It has been most thoroughly tested on Caldera OpenLinux, RedHat and FreeBSD, but it should run fine on other systems with potentially a bit of tweaking. 

To install, go to ftp://ftp.webmin.com, and select the most recent version of Webmin (webmin-0.78.tar.gz at the time of this writing). Unzip and untar the file, then run the included setup.sh to install Webmin. Answer a few questions about your system setup, create a username and password for yourself, select a port for Webmin to run on, and you’re ready to go. To upgrade, download the source for a new version and specify the source file’s location in the Webmin interface’s Webmin Configuration -> Webmin Upgrade option.

Webmin is modular in nature, and comprises a “core” Webmin server with a number of default modules. Each module (like Cron, BIND, Syslog, etc.) provides administration functionality for its specified service. At installation time, all default modules are installed; you can remove modules through the Webmin interface, or download new third-party modules from a link on the Webmin home page. Webmin stores configuration files for all of its modules inside directories located (usually) in /etc/webmin/modulename/. The start and stop scripts for Webmin are also stored (somewhat confusingly) in /etc/webmin, rather than in /usr/local/sbin or in the Webmin home directory. Its logs are by default stored in /var/webmin/, rather than in /var/log/webmin/.

Webmin includes its own “miniature” webserver, so you don’t need to alter your Apache (or other web server) configuration to use it. The mini server is also a Perl script (/usr/local/webmin-0.78/miniserv.pl or something similar), which runs (owned by the root user) until the process is killed. This isn’t a terribly elegant solution, and it eats up about 3 MB of RAM as long as Webmin is running, but we’re assuming here that convenience is more of an issue than absolute maximum performance.

If the idea of running a root-owned process over unencrypted HTTP scares you, you’re right. Webmin includes functionality to use Perl’s Net::SSLeay module to run its miniserver through HTTPS. If you don’t have this Perl module installed (and you’ll need to have the C libraries included with OpenSSL to get SSLeay to work), you’ll find download links and (relatively) helpful instructions for OpenSSL and SSLeay on the Webmin home page. Keep in mind, however, that setting SSLeay up can sometimes be, to use the technical term, a “major pain in the butt.”

Even better for security, you can also use Webmin’s interface to designate specific IP addresses from which Webmin can be accessed. This isn’t a foolproof setup, but it should be good enough for many system administrators.
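Both the SSL option and the allowed-addresses list live in Webmin’s miniserv.conf. As a sketch (directive names as found in a stock install; verify against your own copy before relying on them, and note the IP addresses are made up for illustration), the relevant lines look something like this:

```
# /etc/webmin/miniserv.conf (excerpt; illustrative values)
ssl=1                    # serve the miniserver over HTTPS (requires Net::SSLeay)
allow=   # accept connections only from these addresses
```

Changes to this file take effect once the Webmin miniserver is restarted.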

Fun with Modules

Webmin’s interface is no-frills. It has very plain and simple graphics, loads quickly and gets the job done – a very wise choice, in my opinion. All functionality is provided in HTML tables instead of through a contrived graphical user interface.

As mentioned before, Webmin’s functionality is based on its included modules, each of which provides an interface to a specific service, application or daemon. The default installation includes all of the Webmin modules, which include such helpful items as MySQL, WU-FTPD, NFS, Apache, Users, Groups and Passwords, and a large number of other actions (for a complete list, see www.webmin.com/webmin/standard.html). Some default modules are OS-specific, like Linux RAID, Linux Boot Loader and Network Configuration (for Linux and Solaris). Third-party modules (including ones for QMail, VNC and one which allows SSH logins in place of Webmin’s telnet tool) are available at www.webmin.com/webmin/third.html. There’s also a “wish list” of modules currently planned or under development at www.coastnet.com/~ken/webmin/wish.html.

Of course, having all of these modules available doesn’t mean that all of these services are available to you. Despite the fact that there’s a Samba Windows File Sharing module, for example, you’ll still need to manually download and install Samba on your machine before you can use Webmin to configure it. 

Each of the included modules is well written, and provides a wide range of functionality. For example, the Apache module allows you to set up or alter virtual hosts, set permissions, add or modify MIME types, change Apache’s process limits and more. Even better, the module-writing spec is open, allowing you to write your own modules if you have a good knowledge of Perl and the application or service that you’re writing your module for. 

One exception to this is the included Telnet Login module, which offers up a Java applet allowing you a telnet login through the web browser. This module is (surprise!) unfortunately dependent upon the Java Virtual Machine (JVM)/Just-In-Time compiler (JIT) your browser is using, and can be unreliable in some cases. For example, it runs fine with the Apple JVM/JIT used by Netscape/MacOS, but is unusable with the Symantec JVM/JIT used by Microsoft Internet Explorer/MacOS. 

Overall, however, Webmin’s functions are well defined and easily accessible. If you are at all familiar with the service that you’re configuring, Webmin provides a simple point-and-click interface that absolves you from needing to remember file locations and command-line switches.

Fun with Configurability

Through the Webmin Configuration option on its index page, you can set up a variety of options, including Webmin logging, the port and address Webmin runs on, and interface look and feel. This is a significant improvement over command-line-based programs, which often leave no clues as to where their configuration files are. Most of these configuration options can also be set manually via the command line in /etc/webmin/miniserv.conf.

Another handy feature if you’re using Webmin to administer a number of machines is its Webmin Servers Index function (available from your Webmin index page). Choose a central machine where you do most of your administration, and then fill out the forms to “register” the other servers you’re running Webmin on. Alternatively, you can set up a list of servers on one machine, then copy the files in /etc/webmin/servers/ from that server to all of your other servers and have those links automatically established. 

Every time thereafter that you click on the Servers Index button, you’ll be presented with a quick link to all of your other Webmin-enabled servers. You can specify a username and password to quickly log in to the other servers for convenience, or you can create a normal connection that will prompt you for a username and password for extra security.

An especially useful configuration option on the index page is Webmin Users. Through this, you can set up a variety of username/password logins for Webmin, and the modules that they’re allowed to access. This is particularly worthwhile if you want to set up one user for you (allowing access to all modules) and another user for your customer (only allowing access to modules for adding/removing users, Sendmail, Apache, etc.). With this setup, you can allow customers access to commonly used features but keep them from doing anything which might seriously “hose” their system. 

This isn’t a completely secure setup, however, since information about the modules that users can access is stored in a plain-text file, /etc/webmin/webmin.acl (usernames and passwords are stored in /etc/webmin/webmin.users), and a user with root access could easily change this.
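For illustration (the usernames here are hypothetical, and the module names are examples; check your own /etc/webmin/webmin.acl before relying on this format), an ACL granting a customer a restricted module set might look like:

```
# /etc/webmin/webmin.acl (excerpt; hypothetical users)
admin: acl apache bind8 cron sendmail useradmin
customer: apache sendmail useradmin
```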

You Can Lead a User to Man Pages, But You Can’t Make Them Think

Webmin packs a great deal of functionality into its modules; what it doesn’t provide is help in understanding them. This is almost certainly too much to ask from a free admin program, but it does limit Webmin’s usefulness in some ways (at least for users who are not already very familiar with Unix). For example, it can allow a user to enable or disable the chargen service or edit /etc/fstab, but it provides no information about what those things are, or why you might (or might not!) want to change them. 

While a truly novice-friendly administration interface may be too much to hope for, clickable keywords with glossary listings probably aren’t too much to ask. The lack of documentation and help undercuts one of Webmin’s primary benefits: the ability for a Unix novice to easily administer their system. While Webmin certainly aids new users by removing the burden of needing to know command-line options, it won’t help them configure Apache if they don’t know what “Maximum requests per server process” means. Novice users are one of Webmin’s potentially largest markets, and it would be a shame if the authors didn’t provide explanatory text for these options in a future release version.

Still, this is a very forgivable gripe for a program which still isn’t even at release 1.0. What isn’t forgivable, however, is Webmin’s severe lack of documentation about itself, what it does, and how it does what it does. While this won’t deter the experienced system administrator, it limits how useful Webmin can be to administrators who would like an explanation of what they’re doing to their system, but don’t have the skills or knowledge to examine Webmin and its modules closely.

On the positive side, the Webmin site answers many frequently asked questions, and each built-in module also contains its own help information. A Webmin mailing list (frequently posted to by the authors) is also available; subscription information and a searchable archive are available from the Webmin home page. Even better, a Webmin Help option is always available from the index screen. Unfortunately, the help that is available appears to have been written as an afterthought, and the regexp searches that Webmin performs when you’re looking for help aren’t always very useful. 

The installation doesn’t even include a documentation directory or man page, and users are left to figure out for themselves how the system works and what goes on. Most of the information I managed to gather about Webmin’s internal workings was from reading the source for the installer shell script and the Perl code of the individual modules. If you’re like me (and I feel bad for you if you are), you will want lots of documentation about any third-party tool that runs as root.

Conclusions: Good, But Not Perfect

Is Webmin worth downloading, installing and trying? Absolutely – it offers excellent features, and the price (none) can’t be beat. Is it worth deploying for your technical support staff or customers? It depends on whether you’re willing to accept its limitations (and potential security or system integrity risks).

Nonetheless, Webmin has tremendous potential to provide a great web interface for Unix control. If your needs match its strengths and you aren’t too concerned about its weaknesses, then it’s something you should add to your administration arsenal right away. Even if it doesn’t meet your needs now, it certainly is a tool worth watching for the future.

Tracing the Lines: The Definitive Guide to Traceroute

By Jeffrey Carl

Boardwatch Magazine
Boardwatch ISP Guide, May 2000


Dig through any Internet engineer’s “toolkit” of favorite utilities, and you’ll find (probably right under the empty pizza boxes) the traceroute program. Users have now joined the bandwagon, using traceroute to find out why they can’t get to site A or what link is killing their throughput to site B. 

Traceroute uses information already stored in the packet header to run queries on each part of the path to a specified host. With it, you can find out how you get to a site, why the connection is failing or slow, and what might be causing the problem. Traceroute seems like a simple and perfect tool, but it can sometimes give misleading answers due to the complexities of Internet routing. While it should never be relied on to give the complete answer to any question about paths, peering or network problems, it is a very good place to start.

Traceroute, in its most basic form, allows you to print out a list of all the intermediate routers between two destinations on the Internet. It allows you to diagram a path through the network. More important to IP network administrators, however, is traceroute’s potential as a powerful tool for diagnosing breakdowns between your network and the outside world, or perhaps even within the network itself.

The Internet is vast and not all service providers are willing to talk to one another. As a result, your connection to your favorite web or FTP site is often grudgingly left to the hands (or fiber) of a middleman, perhaps your upstream, or a peer of theirs, or even more remote than that. When there is performance trouble or even a total failure in reaching that site, you might be left scratching your head, trying to determine who is at fault once you’ve determined it’s not a fault within your control.

The traceroute utility is a probe that will enable you to better determine where the breakdown begins on that path. Once you have some experience with the program, you’ll be able to see when performance trouble is likely a case of oversaturation of a network along the way, or that your target is simply hosted behind a chain of too many different providers. You will be able to see when your upstream has likely made a mistake routing your requests out to the world, and be able to place one call to their NOC; a call that would resolve the situation much more quickly than scratching your head and anxiously phoning your sales representative.

Performing a Traceroute

Initiating a traceroute is a very simple procedure (although interpreting the results is not). Traceroutes can be done from any Unix or Windows computer on which you have an account; MacOS users will need to download the shareware program IP Net Monitor, available from shareware sites or from the program’s home page (see the resource list at the end of this article). There are also numerous traceroute gateways around the Internet which can be accessed via the web.

From a Unix shell account, you can usually just type traceroute at the prompt, followed by any of the Unix traceroute options, followed by the host or IP you’re attempting to trace to. If you receive a “command not found” message, it indicates either that traceroute isn’t installed on the computer (very unlikely), or that it’s simply installed in a location which isn’t in your command path. To fix this, you may need to edit your path or specify the absolute location of the program – on many systems, it’s at /usr/sbin/traceroute or /sbin/traceroute.

Windows users with an active Internet connection can drop to a DOS prompt and type tracert followed by the hostname or IP address they want to trace to. With the Unix and DOS traceroute commands, you can use any of a number of command-line options to customize the report that the trace will give back to you. With web-based traceroute gateways, you may be able to specify which options you want, or a default set of options will be preselected.

How it Works

As the Unix man page for traceroute says, “The Internet is a large and complex aggregation of network hardware, connected together by gateways. Tracking the route your packets follow (or finding the miscreant gateway that’s discarding your packets) can be difficult. Traceroute utilizes the IP protocol “time to live” field and attempts to elicit an ICMP TIME_EXCEEDED response from each gateway along the path to some host.”

Traceroutes go from hop to hop, showing the path taken to site A from site B. Using the 8-bit TTL (“Time To Live”) in every packet header, traceroute tries to see the latency from each hop, printing the DNS reverse lookup as it goes (or showing the IP address if there is no name). 

Traceroute works by sending a UDP (User Datagram Protocol) packet to a high-numbered port (which would be unlikely to be in use by another service), with the TTL set to a low value (initially 1). This gets partway to the destination and then the TTL expires, which provokes (if all goes as planned) an ICMP_TIME_EXCEEDED message from the router at which the TTL expires. This signal is what traceroute listens for. 

After sending out a few of these (usually three) and seeing what returns, traceroute then sends out similar packets with a TTL of 2. These get two routers down the road before generating ICMP_TIME_EXCEEDED packets. The TTL is increased until either some maximum (typically 30) is reached, or it hits a snag and reports back an error.
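The probe loop described above can be sketched in a few lines of Python. This is a simulation against a made-up list of routers, not a real network probe (which would require raw sockets and root privileges); it only illustrates the TTL-expiry logic:

```python
# Simulate traceroute's core loop: probes go out with increasing TTL,
# and the router at which the TTL expires "answers" (here, by name).
def simulated_traceroute(path, max_hops=30):
    """path: ordered list of router names between source and destination."""
    hops = []
    for ttl in range(1, max_hops + 1):
        if ttl > len(path):
            break                      # ran past the end of the path
        hops.append((ttl, path[ttl - 1]))
        if ttl == len(path):           # the destination itself replied
            break
    return hops

route = ["epsilon3.myisp.net", "sol-hn-1-0-H-T3.myisp.net", "www-dr4.rri.sol.com"]
for ttl, router in simulated_traceroute(route):
    print(ttl, router)
```

In the real program, each TTL value is probed several times (usually three), which is why each line of traceroute output carries three latency figures.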

The only mandatory parameter is the destination host name or IP number. The default probe datagram length is 38 bytes, but this may be increased by specifying a packet size (in bytes) after the destination host name.

What it Means

What traceroute tells you (assuming everything works) is how packets from you get to another specific destination, and how long they take to get there. Armed with a little knowledge about the way the Internet works, you can then make informed guesses about a number of things.

Getting There is Half the Fun

Let’s say that I’d like to know how traffic from my website is reaching the network of Sites On Line, a large online service. So, I run a traceroute to their network from my webserver.

traceroute to www.sol.com (, 30 hops max, 40 byte packets
1  epsilon3.myisp.net (  1 ms  1 ms  1 ms
2  sc-mc-4-0-A-OC3.myisp.net (  1 ms  1 ms  1 ms
3  sol-hn-1-0-H-T3.myisp.net (  2 ms  2 ms  2 ms
4  gpopr-rre2-P2-2.sol.com (  2 ms  1 ms  2 ms
5 (  3 ms  3 ms  3 ms
6  www-dr4.rri.sol.com (  4 ms  7 ms  8 ms

You can see from this example that my traffic passes through a router at my ISP, then through what appears to be an OC-3, before crossing what (judging by the name) is evidently a DS-3 gateway between my ISP and SOL. From there, it enters the GigaPop of SOL, passes through a router which doesn’t have a reverse lookup (I see only its IP address), and eventually reaches one last router (clearly marked as leading to its webserver) at SOL.

Because of the way routing works, an ISP can really only control its inter-network traffic as far as choosing what to listen to from each peer or upstream, and then deciding which of those same contact points gets its outbound packets. So when you’re tracerouting from your network to another network, you’re getting a glimpse of how your neighbor network is announcing itself to the rest of the ‘Net. Because this “asymmetric routing” exists in some circumstances, you should probably do two traces (one in each direction) between any two points you’re interested in.

Reading the T3 Leaves

While you can easily read the names that appear in the traceroute, interpreting them is a hazy enterprise. If you’ve got a pretty good feel for the topology of the Internet, the reverse lookups on your traceroute can (possibly) tell you a lot about the networks you’re passing through. Geographic details, line types and even bandwidth may be hinted at in the names – but hints are really all that one can expect. Since every large network names its routers and gateways differently, you can assume some things about them, but you can’t be sure. If you want to engage in router-spotting, note that common names may reflect:

• locations (a mae in the name might indicate a MAE connection, or la might indicate Los Angeles)

• line type (atm may indicate an ATM circuit as opposed to a clear-channel line)

• bandwidth (T3 or 45 is generally a dead giveaway, for example)

• a gateway (sometimes flagged as gw) to a customer’s network (sometimes referred to as cust or something similar) 

• positions within the network (some lines may be named with core or border, or something similar)

• engineering senses of humor (as seen by the reference to Babylon 5 in my ISP’s network)

• the network whose router it is (almost always identifiable by the domain name; if the router doesn’t have a reverse lookup, you can perform an nslookup on the IP address to find out whose IP space it is in)
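To make the guessing game concrete, here is a toy hostname-clue spotter in Python. The token table is my own illustrative guess at common conventions, not any kind of standard:

```python
import re

# Illustrative guesses at common router-naming tokens; not a standard.
CLUES = {
    "mae": "possible MAE exchange point",
    "atm": "possible ATM circuit",
    "t3": "possible DS-3/T3 line",
    "gw": "possible gateway",
    "core": "possible core router",
    "cust": "possible customer link",
}

def hostname_clues(hostname):
    """Return the clue descriptions whose tokens appear in the hostname."""
    tokens = re.split(r"[.\-]", hostname.lower())
    hits = []
    for tok in tokens:
        for key, meaning in CLUES.items():
            # match "gw" exactly, or numbered forms like "gw11" or "atm2"
            if tok == key or (tok.startswith(key) and tok[len(key):].isdigit()):
                hits.append(meaning)
    return hits

print(hostname_clues("112.ATM2-0.XR2.HUGE.NET"))
```

Treat the output as a hint and nothing more, for exactly the reasons given in the next paragraph.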

However, it should be reiterated here that amateur router-ology is a dangerous sport, since really the only people who understand a router’s name are the people who named it. So don’t get too upset when you think you’ve spotted someone routing your traffic through Nome, Alaska when in fact it was named by a Hobbit-obsessed engineer with bad spelling.

Some Clues About Connectivity

A prospective ISP, prospectiveisp.net, tells you that it is fully peered. Is there any way that you can check up on this? Well, yes and no.

Traceroute can tell you whether two networks communicate directly, or through a third party. First, you traceroute from a traceroute page behind hugeisp.net to a location within prospectiveisp.net.

traceroute to www.prospectiveisp.net (, 30 hops max, 40 byte packets
1  s8-3.oakland-cr2.hugeisp.net (  12 ms  17 ms  8 ms
2  h2-0-0.paloalto-br1.hugeisp.net (  15 ms  32 ms  12 ms
3  sl-bb10-sj-9-0.intermediary.net (  30 ms  15 ms  13 ms
4  sl-gw11-dc-8-0-0.intermediary.net (  82 ms  103 ms  73 ms
5  sl-prospective-1-0-0-T3.intermediary.net (  77 ms  74 ms  73 ms
6  border1-fddi.charlottesville.prospectiveisp.net (  121 ms  76 ms  75 ms
7  ns.prospectiveisp.net (  80 ms  79 ms  94 ms

It is evident from this traceroute that hugeisp.net and prospectiveisp.net travel through a third party to reach each other. While this doesn’t say anything definite about their relationship, two networks will generally pass their traffic directly to each other if they are peers (barring strange routing circumstances or other arrangements). This doesn’t paint a full picture (and you should confirm it with a trace from prospectiveisp to hugeisp), but it gives you reason to doubt prospectiveisp’s claims of full peering. 

Note that while traceroute can tell you whether two networks communicate directly or indirectly, it can’t tell you any more about their relationship. Even if two networks do communicate directly, traceroute can’t tell you whether their relationship is provider-customer or NAP peering (except perhaps through whatever hazy clues you obtain from router names, or by calling a psychic hotline and reading them your trace). 

In the above example, you might conclude that prospectiveisp buys transit or service from intermediary.net, which peers with (or buys service from) hugeisp.net. Of course, the opposite may be true – that prospectiveisp.net peers with intermediary.net, and hugeisp.net buys service from intermediary.net. However, common sense and a rough feel for the “pecking order” of first- and second-tier networks should guide your guesses here. 

Where’s the Traffic Jam?

Let’s say that you encounter some difficulty reaching the Somesite.Com website (you describe the site’s download speed as “glacial”), and decide to show off your newfound traceroute skills to investigate the cause of the problem. Here’s what you find:

traceroute to somesite.com (, 30 hops max, 40 byte packets
1  epsilon3.yourisp.net (  1 ms  0 ms  1 ms
2  other-FDDI.yourisp.net (  2 ms  2 ms  2 ms
3  br2.tco1.huge.net (  6 ms  4 ms  22 ms
4  112.ATM2-0.XR2.HUGE.NET (  8 ms  28 ms  30 ms
5  192.ATM9-0-0.GW3.HUGE.NET (  8 ms  32 ms  28 ms
6  * * *
7  * somesite-gw.customer.HUGE.NET (  12 ms !A *

From this, you can guess (with a high degree of certainty) that the initial source of trouble is outside of your ISP’s network and somewhere along the route used by the site’s carrier, huge.net. Hop six shows some type of trouble, with none of the three packets sent to that gateway returning. Notice that the traceroute continues on to the next hop despite the failure at hop six. The same problem was also likely responsible for the loss of two of the packets sent to hop seven. Thankfully, one managed to get back and indicated the trace was complete. 

Using the example above, you could ping the address reflected in hop seven and then compare it to a ping to, say, hop four or five, and see if loss of packets to hop seven versus hop five reflects what the traceroute indicated. 
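The comparison suggested above boils down to comparing loss rates per hop. As a sketch, with made-up probe results standing in for real ping output (None marks a lost probe, shown as “*” by traceroute):

```python
# Loss rate over a list of probe results; None means the probe was lost.
def loss_rate(probes):
    lost = sum(1 for p in probes if p is None)
    return 100.0 * lost / len(probes)

hop5 = [8, 32, 28]         # all three probes to hop five answered
hop7 = [None, 12, None]    # two of three probes to hop seven were lost
print(loss_rate(hop5), loss_rate(hop7))
```

If loss to hop seven is consistently worse than loss to hop five, the trouble most likely begins somewhere between the two.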

Selected Traceroute Error Messages:

• !H Host Unreachable

This frequently happens when the site or server is down.

• !N Network Unreachable

This can be caused by networks being down, or routers unable to transmit known routes to a network.

• !P Protocol Unreachable

This only happens when a router fails to carry a protocol that is used in the packet, like IPX. 

• !S Source Route Failed

This can only happen if you are using the source route functions of traceroute (i.e., tracing from a remote site to another remote site). The site you are using source route tracing from must have source route tracing turned on, or it will not work.

• * TTL Time Exceeded

This indicates that no reply was received before traceroute timed out; either the reply’s path back exceeded its TTL, or the router never sent an ICMP Time Exceeded message back to site A.

• !TTL is <=1

This happens any time the TTL is changed, via RIP or some other protocol, to a new TTL.

Caveat Tracerouter

Before you make too many decisions based on the results of traceroutes, you should be very aware that tracerouting is a complex phenomenon, and that plenty of otherwise innocuous things can interfere with it. For example, it may be possible to ping a site, but not to traceroute to it. This is because many routers are capable of being set to drop time exceeded packets, but not echo reply packets.

Traceroutes may return an unknown host response, but this frequently does not mean that the sites are down, or the network connection in between is faulty. Some domains are simply not mapped to be used without a third-level domain as part of the name. For example, tracerouting to aol.com will not work; but tracing to www.aol.com will.

In short, tracerouting is a valuable tool, but does not give a complete picture of a network’s status. Try to use as many gauges of network status as possible when attempting to debug an Internet connection. 

Other Traceroute Resources:


A lengthy tutorial on Traceroute by Jack Rickard.


Traceroute.org directory of Traceroute Gateways


The Multiple Simultaneous Traceroute Gateway


Boardwatch’s own list of traceroute servers


A handy site that allows you to trace from Digex’s network at MAE-East, MAE-West, Sprint NAP or PAIX.


Home Page for the program IP Net Monitor for MacOS