As
you might imagine, the differences between Microsoft Windows and the Linux
operating system cannot be completely discussed in the confines of this
section. Throughout this topic, we’ll examine the specific contrasts between
the two systems. But before we tackle the details, let's take a moment to discuss the primary architectural differences between the two operating systems.
Single Users vs. Multiple Users vs. Network Users
Windows
was designed according to the “one computer, one desk, one user” vision of Microsoft’s
cofounder Bill Gates. For the sake of discussion, we’ll call this philosophy single-user.
In this arrangement, two people cannot work in parallel running
(for example) Microsoft Word on the same machine at the same time. (On the
other hand, one might question the wisdom of doing this with an overwhelmingly
weighty program like Word!) You can buy Windows and run what is known as
Terminal Server, but this requires substantial computing power and extra licensing costs. Of course, with Linux, you don't run into the cost problem, and Linux runs fairly well on just about any hardware.
Linux
borrows its philosophy from UNIX. When UNIX was originally developed at Bell
Labs in the early 1970s, it existed on a PDP-7 computer that needed to be
shared by an entire department. It required a design that allowed for multiple
users to log into the central machine at the
same time. Various people could be editing documents, compiling programs, and
doing other work at the exact same time. The operating system on the central
machine took care of the “sharing” details so that each user seemed to have an individual
system. This multiuser tradition continues today in other versions of UNIX as well. And since Linux's birth in the early 1990s, it has supported the multiuser arrangement.
Today, the most common implementation of a multiuser setup is to support servers: systems dedicated to running large programs for use by many clients. Each member of a department can have a smaller workstation on the desktop, with enough power for day-to-day work. When users need to do something requiring significantly more processing power or memory, they can run the operation on the server.
“But,
hey! Windows can allow people to offload computationally intensive work to a
single machine!” you may argue. “Just look at SQL Server!” Well, that position
is only half correct. Both Linux and Windows are indeed capable of providing
services such as databases over the network. We can call users of this arrangement network users, since they are never actually logged into the server but rather send requests to it.
The
server does the work and then sends the results back to the user via the
network. The catch in this case is that an application must be specifically
written to perform such server/client duties. Under Linux, a user can run any
program allowed by the system administrator on the server without having to
redesign that program. Most users find the ability to run arbitrary programs on
other machines to be of significant benefit.
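To make the distinction concrete, here is a minimal sketch in Python of the network-user arrangement; the host, port, and "protocol" are invented for illustration. The client never logs into the server; it simply sends a request over the network and receives the result, and note that both halves had to be written specifically for this division of labor.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5050             # hypothetical address and port

    srv = socket.create_server((HOST, PORT))   # the server side listens for requests

    def serve_once():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(request.upper())      # the "work" happens on the server

    threading.Thread(target=serve_once, daemon=True).start()

    # The "network user" side: send a request, read the result, never log in.
    with socket.create_connection((HOST, PORT)) as client:
        client.sendall(b"process this request")
        print(client.recv(1024).decode())

    srv.close()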
The Monolithic Kernel and the Micro-Kernel
In operating systems, there are two common forms of kernel. A monolithic kernel provides all the services that user applications need. A micro-kernel, by contrast, provides only a small core set of services and relies on other modules to perform the remaining functions. Linux, for the most part, adopts the monolithic architecture: the kernel handles everything dealing with the hardware and system calls. Windows works off a micro-kernel design: the kernel provides a small set of services and then interfaces with other executive services that provide process management, input/output (I/O) management, and so on. It has yet to be proved which methodology is truly better.
Separation of the GUI and the Kernel
Taking
a cue from the Macintosh design concept, Windows developers integrated the GUI
with the core operating system. One simply does not exist without the other.
The benefit with this tight coupling of the operating system and user interface
is consistency in the appearance of the system.
Although Microsoft does not impose rules as strict as Apple's with respect to the appearance of applications, most developers tend to stick with a basic look and feel among applications. One reason the tight coupling of GUI and kernel is dangerous is that the video card driver is allowed to run at what is known as "Ring 0" on a typical x86 architecture. Ring 0 is a protection level: only privileged processes can run at this level, while user processes typically run at Ring 3. Because the video driver runs at Ring 0, it can misbehave (and it does!), and when it does, it can bring down the whole system.
On
the other hand, Linux (like UNIX in general) has kept the two elements—user interface
and operating system—separate. The X Window System interface is run as a user-level
application, which makes it more stable. If the GUI (which is complex for both Windows
and Linux) fails, Linux’s core does not go down with it. The process simply crashes,
and you get a terminal window. The X Window System also differs from the Windows
GUI in that it isn’t a complete user interface. It only defines how basic
objects should be drawn and manipulated on the screen.
The
most significant feature of the X Window System is its ability to display
windows across a network and onto another workstation’s screen. This allows a
user sitting on host A to log into host B, run an application on host B, and
have all of the output routed back to host A. It is possible for two people to
be logged into the same machine, running a Linux equivalent of Microsoft Word
(such as OpenOffice) at the same time. In addition to the X Window System core,
a window manager is needed to create a useful environment. Linux distributions
come with several window managers and include support for GNOME and KDE, both
of which are available on other variants of UNIX as well. If you’re concerned
with speed, you can look into the WindowMaker and Free Virtual Window Manager
(FVWM) window managers. They might not have all the glitz of KDE or GNOME, but
they are really fast. When set as default, both GNOME and KDE offer an
environment that is friendly, even to the casual Windows user.
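The following sketch shows the network-display idea in Python. It assumes a hypothetical display name ("hostA:0") and that host A's X server will accept the connection (for example, via xhost or SSH X forwarding); the program runs on this machine, but its window is drawn on the other workstation's screen.

    import os
    import subprocess

    # Hypothetical display: host A must be willing to accept the connection.
    env = dict(os.environ, DISPLAY="hostA:0")

    # xclock runs here, but draws its window on hostA's screen.
    subprocess.run(["xclock"], env=env, check=True)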
So
which approach is better—Windows or Linux—and why? That depends on what you are
trying to do. The integrated environment provided by Windows is convenient and
less complex than Linux, but out of the box, it lacks the X Window System
feature that allows applications to display their windows across the network on
another workstation. Windows’ GUI is consistent, but cannot be turned off,
whereas the X Window System doesn’t have to be running (and consuming valuable
memory) on a server.
The Network Neighborhood
The
native mechanism for Windows users to share disks on servers or with each other
is through the Network Neighborhood. In a typical scenario, users attach
to a share and have the system assign it a drive letter. As a
result, the separation between client and server is clear. The only problem
with this method of sharing data is more people-oriented than technology-oriented:
People have to know which servers contain which data.
With
Windows, a new feature borrowed from UNIX has also appeared: mounting.
In Windows terminology, it is called reparse
points. This is the ability to mount a CD-ROM drive
into a directory on your C drive. The concept of mounting resources (optical
media, network shares, etc.) in Linux/UNIX may seem a little strange, but as
you get used to Linux, you’ll understand and appreciate the beauty in this
design. To get anything close to this functionality in Windows, you have to map
a network share to a drive letter.
Linux, using the Network File System (NFS), has supported the concept of mounting since its inception. In fact, the Linux automounter can dynamically mount and unmount partitions on an as-needed basis.
A common example of mounting partitions under Linux involves mounted home directories. Users' home directories reside on a server, and the client mounts them at boot time (automatically). So the /home directory exists on the client, but the /home/username directory (and its contents) can reside on the server.
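A short Python sketch (the path and server name below are hypothetical) shows how transparent this is on the client: a standard call reports whether a directory is a mount point, and /proc/mounts reveals where it actually comes from.

    import os

    path = "/home/username"                  # hypothetical mounted home directory
    print(os.path.ismount(path))             # True if something is mounted here

    # /proc/mounts lists every active mount; an NFS-mounted home shows up as
    # something like "fileserver:/export/home/username /home/username nfs ..."
    with open("/proc/mounts") as mounts:
        for line in mounts:
            if " /home" in line:
                print(line.strip())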
Under
Linux NFS, users never have to know server names or directory paths, and their
ignorance is your bliss. No more questions about which server to connect to.
Even better, users need not know when the server configuration must change.
Under Linux, you can change the names of servers and adjust this information on
client-side systems without making any announcements or having to reeducate
users. Anyone who has ever had to reorient users to new server arrangements is aware
of the repercussions that can occur.
Printing
works in much the same way. Under Linux, printers receive names that are independent
of the printer’s actual host name. (This is especially important if the printer
doesn’t speak Transmission Control Protocol/Internet Protocol, or TCP/IP.)
Clients point to a print server whose name cannot be changed without
administrative authorization. Settings don’t get changed without you knowing
it. The print server can then redirect all print requests as needed. This uniform interface goes a long way toward taming what may otherwise be a chaotic printer arrangement in your installation. It also means you don't have to install print drivers in several locations.
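As a small illustration (the queue name and file are hypothetical), a client only ever references the queue name; which physical device that name maps to is the print server's business.

    import subprocess

    # Send a file to the "deptprinter" queue; the print server decides which
    # physical printer that name currently points at, so clients never change.
    subprocess.run(["lpr", "-P", "deptprinter", "report.txt"], check=True)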
The Registry vs. Text Files
Think
of the Windows Registry as the ultimate configuration database—thousands upon thousands
of entries, only a few of which are completely documented.
“What?
Did you say your Registry got corrupted?”
“Well, yes, we can try to restore it from last night’s backups, but then Excel
starts acting funny and the technician (who charges $50 just to answer the
phone) said to reinstall.…” In other words, the Windows Registry system is, at
best, difficult to manage. Although it’s a good idea in theory, most people who
have serious dealings with it don’t emerge from battle without a scar or two.
Linux
does not have a registry. This is both a blessing and a curse. The blessing is
that configuration files are most often kept as a series of text files (think
of the Windows .ini files before the days of the Registry). This setup means
you’re able to edit configuration files using the text editor of your choice
rather than tools like regedit.
In many cases, it also means you can liberally comment those configuration
files so that six months from now you won’t forget why you set something up in
a particular way. With most tools that come with Linux, configuration files
exist in the /etc directory
or one of its subdirectories.
The
curse of a no-registry arrangement is that there is no standard way of writing configuration
files. Each application can have its own format. Many applications now come bundled with GUI-based configuration tools to alleviate some of these problems, so you can do a basic setup easily and then manually edit the configuration file when you need to make more complex adjustments.
In
reality, having text files hold configuration information usually turns out to
be an efficient method. Once set, they rarely need to be changed; even so, they
are straight text files and thus easy to view when needed. Even more helpful is
that it’s easy to write scripts to read the same configuration files and modify
their behavior accordingly. This is especially helpful when automating server
maintenance operations, which is crucial in a large site with many servers.
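As a rough example of what such a script can look like (the file name and keys are invented, and the key = value format is only one of many you will encounter under /etc), a few lines of Python are enough to read a setting and act on it:

    # Parse a simple "key = value" configuration file; '#' starts a comment.
    def load_config(path):
        settings = {}
        with open(path) as f:
            for raw in f:
                line = raw.split("#", 1)[0].strip()
                if not line:
                    continue
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
        return settings

    config = load_config("/etc/myservice.conf")       # hypothetical file
    if config.get("enable_logging", "no") == "yes":
        print("rotating logs for myservice")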
Domains and Active Directory
If
you’ve been using Windows long enough, you may remember the Windows NT domain controller
model. If twinges of anxiety ran through you when reading the last sentence, you
may still be suffering from the shell shock of having to maintain Primary
Domain Controllers (PDCs), Backup Domain Controllers (BDCs), and their
synchronization.
Microsoft,
fearing revolt from administrators all around the world, gave up on the Windows
NT model and created Active Directory (AD). The idea behind AD was simple:
Provide
a repository for any kind of administrative data, whether it is user logins,
group information, or even just telephone numbers, and manage authentication
and authorization for a domain. The domain synchronization model was also
changed to follow a Domain Name System (DNS)–style hierarchy that has proved to
be far more reliable.
NT
LAN Manager (NTLM) was also dropped in favor of Kerberos. (Note that AD is
still compatible with NTLM.) While running dcpromo may
not be anyone’s idea of a fun afternoon, it is easy to see that AD works pretty
well.
Out
of the box, Linux does not use a tightly coupled authentication/authorization and
data store model the way that Windows does with Active Directory. Instead,
Linux uses an abstraction model that allows for multiple types of stores and
authentication schemes to work without any modification to other applications.
This is accomplished through the Pluggable Authentication Modules (PAM) infrastructure and the name service libraries, which give applications a standard means of looking up user and group information while allowing that information to be stored using a variety of back ends.
For
administrators looking to Linux, this abstraction layer can seem peculiar at
first. However, consider that you can use anything from flat files to Network
Information Service (NIS) to Lightweight Directory Access Protocol (LDAP) or
Kerberos for authentication. This means you can pick the system that works best
for you. For example, if you have an existing UNIX infrastructure that uses
NIS, you can simply make your Linux systems plug into that. On the other hand,
if you have an existing AD infrastructure, you can use PAM with Samba or LDAP
to authenticate against the domain. Use Kerberos?
No
problem. And of course, you can choose to make your Linux system not interact with
any external authentication system. In addition to being able to tie into
multiple authentication systems, Linux can easily use a variety of tools, such
as OpenLDAP, to keep directory information available as well.
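A brief Python sketch shows the abstraction from an application's point of view. These standard library calls go through the system's name service configuration, so the answer comes from flat files, NIS, LDAP, or whatever back end the administrator chose; the account name used here is hypothetical.

    import grp
    import pwd

    # The lookup works the same no matter where the account data actually lives.
    user = pwd.getpwnam("alice")             # hypothetical account
    print(user.pw_uid, user.pw_dir, user.pw_shell)

    group = grp.getgrgid(user.pw_gid)
    print("primary group:", group.gr_name)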