Server Clustering Basics


With the cost of computer hardware steadily decreasing, many organizations are turning to server clustering as a means of increasing their server uptime. The term “clustering” might be familiar from your IT travels; it certainly gets quite a bit of press, and for some very good reasons.

This article will explore server clustering, with the assumption that you have no prior familiarity with clustering technology, but do have an intermediate understanding of PC hardware, computer networking and server operating systems.

What Is a Server Cluster?

In its most elementary definition, a server cluster is at least two independent computers that are logically and sometimes physically joined and presented to a network as a single host. That is to say, although each computer (called a node) in a cluster has its own resources, such as CPUs, RAM, hard drives, network cards, etc., the cluster as such is advertised to the network as a single host name with a single Internet Protocol (IP) address. As far as network users are concerned, the cluster is a single server, not a rack of two, four, eight or however many nodes comprise the cluster resource group.
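
To make the idea concrete, here is a minimal Python sketch (not any vendor’s clustering API; the cluster name, IP address and node names are invented for illustration) of several independent nodes being presented to clients as one logical host:

```python
# Minimal sketch: a cluster of nodes advertised to the network as a single host.
# The hostname, IP and node names below are made up for the example.
import random

class ServerCluster:
    """Presents several independent nodes as one named host with one IP."""

    def __init__(self, cluster_name, cluster_ip, nodes):
        self.cluster_name = cluster_name   # the one name clients see
        self.cluster_ip = cluster_ip       # the one IP clients see
        self.nodes = list(nodes)           # the real hosts behind the name

    def handle_request(self, request):
        # Clients only ever address the cluster name/IP; the cluster
        # quietly picks one of its member nodes to do the work.
        node = random.choice(self.nodes)
        return f"{node} served {request!r} on behalf of {self.cluster_name}"

cluster = ServerCluster("web01.example.com", "192.0.2.10",
                        ["node-a", "node-b", "node-c", "node-d"])
print(cluster.handle_request("GET /index.html"))
```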

Several different server operating systems support cluster configurations. Probably the two dominant cluster-aware server operating systems in today’s IT marketplace are the myriad Linux distributions and Microsoft Windows Server 2003 Enterprise Edition and Datacenter Edition. Novell NetWare 6.x also supports clustering services.

Why Deploy a Server Cluster?

The chief advantages for organizations that deploy cluster server configurations are high availability, high reliability and high scalability. High availability refers to the ability of a server to provide applications and services to users often enough to meet or exceed an organization’s uptime goals. A cluster server configuration provides a higher degree of availability to services and applications than a non-clustered server configuration.
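
As a quick illustration of what an uptime goal means in practice, the short Python snippet below converts a few example availability targets (the percentages are illustrative, not figures from this article) into allowed downtime per year:

```python
# Illustration only: translating an uptime goal into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(availability_pct):
    """Minutes per year a service may be down and still meet its goal."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% uptime -> {allowed_downtime_minutes(target):.1f} minutes of downtime per year")
```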

High reliability means that a server computer provides fault tolerance in the event of system failure. Fault tolerance, in turn, eliminates a single point of failure for a particular subsystem (be it the hard-disk subsystem, CPU subsystem, power supply subsystem, etc.) by providing redundancy. Server clustering takes high reliability a step further by providing fault tolerance for applications and services running on the cluster resource group. For instance, if one node in a cluster were to fail, the other nodes could continue to provide applications and services for the rest of the network. The network’s end users never need to know there was a hardware or software failure on a server computer.
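
The following toy Python sketch, which assumes a simple heartbeat check and invented node names rather than any particular cluster service, shows the failover idea: when the active node stops responding, a surviving node takes over while clients keep addressing the same cluster name.

```python
# Toy failover sketch, assuming a heartbeat check and invented node names.
class FailoverGroup:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.active = self.nodes[0]        # one node currently owns the service

    def heartbeat_ok(self, node, failed_nodes):
        return node not in failed_nodes    # stand-in for a real heartbeat probe

    def failover_if_needed(self, failed_nodes):
        if self.heartbeat_ok(self.active, failed_nodes):
            return self.active             # no failure, nothing to do
        # Promote the first surviving node; clients keep using the same
        # cluster name and never see which physical box answered.
        survivors = [n for n in self.nodes if n not in failed_nodes]
        self.active = survivors[0]
        return self.active

group = FailoverGroup(["node-a", "node-b", "node-c"])
print(group.failover_if_needed(failed_nodes={"node-a"}))  # -> node-b takes over
```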

High scalability denotes the capacity of a network environment for future growth with an eye toward improved performance. Specifically with regard to clustered server implementations, server nodes can be scaled up by adding additional hardware resources to each node, such as additional CPUs, RAM, hard drives, etc. Clustered servers can be scaled outward by adding more nodes to the resource group.
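
A rough back-of-the-envelope calculation (the per-CPU throughput figure is invented for illustration) shows the two growth paths side by side: adding CPUs to existing nodes versus adding nodes to the resource group.

```python
# Rough illustration of the two growth paths; the throughput figures are invented.
def cluster_capacity(nodes, cpus_per_node, capacity_per_cpu):
    """Aggregate capacity of the whole resource group (requests per second)."""
    return nodes * cpus_per_node * capacity_per_cpu

baseline   = cluster_capacity(nodes=4, cpus_per_node=2, capacity_per_cpu=500)
scaled_up  = cluster_capacity(nodes=4, cpus_per_node=4, capacity_per_cpu=500)  # add CPUs to each node
scaled_out = cluster_capacity(nodes=8, cpus_per_node=2, capacity_per_cpu=500)  # add nodes to the group
print(baseline, scaled_up, scaled_out)  # 4000 8000 8000
```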

Availability, reliability and scalability lead many organizations to set up a clustered server environment. But are there any immediate downsides to a clustered environment? Obviously, additional cost is a concern. Even with server hardware costs being relatively low nowadays, a quad-processor RAID-5 server computer does not come cheap. Add to that the licenses involved for enterprise versions of your server operating system, relational database management system (RDBMS) software, Web server software and so on, and the costs are not insignificant.

Another consideration before deploying and maintaining a clustered server environment is the additional training that may be required for an organization’s IT staff to become proficient in setting up and operating the cluster. Again, these costs, which might involve instructor-led training, certification exams and overtime pay, must not be taken lightly by organization decision-makers.

How Are Server Clusters Implemented?

Server clusters are most commonly implemented as either server farms or server packs. A server farm is a clustered group of server computers that run the same applications and services but do not share the same repository of data. That is, each node in a server farm stores its own local, identical copy of a data repository that is periodically synchronized with the other nodes in the farm. An example of a server farm would be a cluster of Web servers, where each server might run a local instance of Microsoft Internet Information Services. The cluster handles requests for service, with each node retrieving data from its own local data store.
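
Here is a simplified Python sketch of the server-farm model, assuming a periodic synchronization step and illustrative data; each node answers requests from its own local copy of the repository:

```python
# Sketch of the server-farm model: every node keeps its own copy of the data,
# and the copies are periodically synchronized. The content is illustrative.
class FarmNode:
    def __init__(self, name):
        self.name = name
        self.local_store = {}              # each node's own data repository

    def serve(self, key):
        return self.local_store.get(key)   # reads come from the local copy

def synchronize(current_content, nodes):
    """Periodic sync: push the current content to every node's local store."""
    for node in nodes:
        node.local_store = dict(current_content)

farm = [FarmNode(f"web-{i}") for i in range(3)]
synchronize({"/index.html": "<html>hello</html>"}, farm)
print(farm[2].serve("/index.html"))        # any node answers from its own copy
```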

By contrast, a server pack is a clustered group of server computers that runs the same applications and services and also shares a common data repository. A good example of a server pack would be a cluster of nodes running Microsoft SQL Server. In a server pack configuration, all nodes in the cluster connect to a separate, shared disk subsystem and retrieve data from the shared data store. Fibre Channel and SCSI are the two most common interface technologies in use today for shared disk storage among cluster nodes.
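
And a matching sketch of the server-pack model, where a shared in-memory dictionary stands in for the Fibre Channel or SCSI disk subsystem; every node reads and writes the same repository:

```python
# Sketch of the server-pack model: the nodes share one data repository.
# The shared_store dict stands in for a shared Fibre Channel/SCSI disk array.
shared_store = {"customers": ["Alice", "Bob"]}

class PackNode:
    def __init__(self, name, store):
        self.name = name
        self.store = store                 # every node points at the same store

    def query(self, table):
        return self.store[table]           # reads hit the shared repository

nodes = [PackNode(f"sql-{i}", shared_store) for i in range(2)]
shared_store["customers"].append("Carol")  # a write is visible to all nodes
print(nodes[0].query("customers"), nodes[1].query("customers"))
```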

Tim Warner is director of technology for Ensworth High School in Nashville, Tenn. He can be reached at twarner@certmag.com.
