Windows Server 2008 Network Load-Balancing Clusters

NLB in Windows Server 2008 is implemented by a special network driver that sits between the driver for the physical network adapter and the TCP/IP stack. This driver communicates with the NLB program (called wlbs.exe, after the Windows Load Balancing Service from which the feature descends) running at the application layer, the same layer in the OSI model as the application you are clustering. NLB can work over FDDI- or Ethernet-based networks, even wireless networks, at up to gigabit speeds.

Why would you choose NLB? For a few reasons:
• NLB is an inexpensive way to make a TCP/IP-dependent application somewhat fault tolerant, without the expense of maintaining a true server cluster with fault-tolerant components. No special hardware is required to create an NLB cluster, and it's cheap hardware-wise as well: each member needs only a second network adapter to mitigate a single point of failure.

• The "shared nothing" approach—meaning each server owns its own resources and doesn't share them with the cluster for management purposes, so to speak—is easier to administer and less expensive to implement, although there is always some data lag between servers while information is transferred among the members. (This approach also has its drawbacks, however, because NLB can only direct clients to backend servers or to independently replicated data.)

• Fault tolerance is provided at the network layer, ensuring that network connections are not directed to a server that is down.

• Performance is improved for your web or FTP resource because load is distributed automatically among all members of the NLB cluster.

NLB works in a seemingly simple way: every computer in an NLB cluster has its own IP address, just as any networked machine does, but the members also share a single cluster IP address on which each of them can answer requests. NLB resolves the resulting IP address conflict and automatically directs clients that connect to the shared address to one of the cluster members.
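As a rough illustration, here is a minimal Python sketch of the idea (not Microsoft's actual algorithm): every member sees every packet addressed to the shared cluster IP and independently runs the same deterministic hash over the client's address, so all members agree on a single owner for each packet without exchanging any per-packet messages.

```python
import hashlib

def owner(client_ip: str, live_hosts: list) -> int:
    """Every member runs this same function over the client address,
    so all of them agree on which single host accepts the packet."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return live_hosts[int.from_bytes(digest[:4], "big") % len(live_hosts)]

hosts = [1, 2, 3, 4]                      # host IDs of the cluster members
for ip in ("10.0.0.5", "10.0.0.6", "192.168.1.9"):
    print(ip, "-> host", owner(ip, hosts))
```

When a member fails, the survivors agree on a new, smaller host list (NLB calls this convergence), and the same function then redistributes clients across the remaining members.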

NLB clusters support a maximum of 32 cluster members, meaning that no more than 32 machines can participate in the load-balancing and sharing features. Applications whose load exceeds what a single 32-member cluster can handle typically run multiple clusters and use some sort of DNS load-balancing technique or device to distribute requests among the individual clusters.
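A minimal sketch of that DNS front end, with placeholder addresses: a DNS server holding one A record per cluster's shared IP hands them out in rotation, so successive clients are spread across the clusters.

```python
from itertools import cycle

# One shared (virtual) IP per NLB cluster; the addresses are placeholders.
cluster_ips = cycle(["203.0.113.10", "203.0.113.20", "203.0.113.30"])

def resolve(name: str) -> str:
    """Simulate round-robin DNS: each lookup returns the next cluster IP."""
    return next(cluster_ips)

for _ in range(4):
    print(resolve("www.example.com"))
```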

When considering an NLB cluster for your application, ask yourself the following question: how will a failure affect the application and the other cluster members? If you are running a high-volume e-commerce site and one member of your cluster fails, are the other servers in the cluster adequately equipped to handle the extra traffic from the failed server? Many cluster implementations miss this important point and see the consequence later: a cascading failure, in which the load from each failed server pushes the surviving servers past their own limits so that they fail in turn. Such a scenario is very common and entirely defeats the purpose of a cluster. Avoid it by ensuring that all cluster members have sufficient hardware headroom to handle the additional traffic when necessary.
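A quick headroom calculation makes the point: if N members each run at utilization u and the load redistributes evenly, one failure pushes the survivors to N * u / (N - 1). The figures below are illustrative.

```python
def post_failure_load(n_hosts: int, utilization: float, failures: int = 1) -> float:
    """Per-host utilization after `failures` members drop out, assuming
    the failed members' load redistributes evenly over the survivors."""
    survivors = n_hosts - failures
    if survivors <= 0:
        raise ValueError("no surviving hosts")
    return n_hosts * utilization / survivors

print(post_failure_load(4, 0.70))  # ~0.93: one failure leaves survivors at ~93%
print(post_failure_load(4, 0.80))  # ~1.07: over 100%, the survivors overload in turn
```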

Also examine the kind of application you are planning on clustering. What types of resources does it use extensively? Different types of applications stretch different components of the systems participating in a cluster. Most enterprise applications have some sort of performance testing utility; take advantage of any that your application offers in a testing lab and determine where potential bottlenecks might lie.

Web applications, Terminal Services, and Microsoft's ISA Server 2004 product can take advantage of NLB clustering.

It's important to be aware that NLB cannot detect when a service on a member has crashed while the machine itself is still up, so it can direct a user to a host that can't actually offer the requested service.
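A common workaround is an application-level watchdog on each member that probes the service itself and pulls the host out of the cluster when the probe fails. The sketch below assumes the wlbs.exe control commands (wlbs drainstop to finish existing connections and stop accepting new ones, wlbs start to rejoin); the port and interval are placeholders.

```python
import socket
import subprocess
import time

PROBE_PORT = 80        # the service NLB itself cannot see into (placeholder)
PROBE_INTERVAL = 10    # seconds between probes (placeholder)

def service_alive(port: int, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections locally."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout):
            return True
    except OSError:
        return False

in_cluster = True
while True:
    alive = service_alive(PROBE_PORT)
    if in_cluster and not alive:
        subprocess.run(["wlbs", "drainstop"], check=False)  # leave gracefully
        in_cluster = False
    elif not in_cluster and alive:
        subprocess.run(["wlbs", "start"], check=False)      # rejoin the cluster
        in_cluster = True
    time.sleep(PROBE_INTERVAL)
```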


NLB Terminology
Some of the most common NLB technical terms are as follows:

NLB driver : This driver resides in memory on every member of a cluster and decides which cluster node will accept and process each packet. Coupled with port rules and client affinity (both defined below), the driver decides whether to send the packet up the TCP/IP stack to the application on the current machine, or to ignore the packet because another server in the cluster will handle it.

Unicast mode : In unicast mode, NLB replaces each member's network adapter MAC address with a single MAC address shared by the whole cluster, so the network delivers traffic sent to the cluster address to every member. A side effect is that members cannot communicate with one another through the cluster adapter, which is one reason a second adapter per host is recommended.

Multicast mode : In multicast mode, each member keeps its own MAC address and NLB adds a shared multicast MAC address for cluster traffic, so members can still reach one another through the cluster adapter. Some switches and routers need static ARP entries before they will accept the multicast answer for the cluster's unicast IP address.

Port rules : Port rules define the applications on which NLB will "work its magic," so to speak. Applications listen for packets sent to them on specific port numbers; for example, web servers usually listen for packets addressed to TCP port 80. You use port rules to tell NLB which ports and protocols to answer and load-balance requests on (see the sketch after these definitions).

Affinity : Affinity is a setting that controls whether traffic from a particular client should keep being returned to the same cluster node. Effectively, this controls session "stickiness": with Single affinity, all requests from a given client IP address go to one node; with affinity set to None, each connection can be balanced independently; Network affinity pins a whole range of client addresses to one node.
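To tie the last two terms together, here is an illustrative Python sketch (again, not NLB's real implementation) of how a port rule and its affinity setting interact: the rule decides whether a packet is load-balanced at all, and the affinity decides what gets hashed, the client IP alone for single affinity (pinning each client to one host) or the IP and port together for no affinity (spreading a client's connections across hosts).

```python
from dataclasses import dataclass
import hashlib

@dataclass
class PortRule:
    start_port: int
    end_port: int
    protocol: str   # "TCP" or "UDP"
    affinity: str   # "single" or "none"

RULES = [PortRule(80, 80, "TCP", "single")]   # e.g. sticky web traffic

def pick_host(src_ip: str, src_port: int, dst_port: int,
              protocol: str, n_hosts: int):
    """Return the host index that takes this packet, or None when no
    rule matches (NLB then hands such traffic to the default host)."""
    for rule in RULES:
        if rule.protocol == protocol and rule.start_port <= dst_port <= rule.end_port:
            key = src_ip if rule.affinity == "single" else f"{src_ip}:{src_port}"
            digest = hashlib.md5(key.encode()).digest()
            return int.from_bytes(digest[:4], "big") % n_hosts
    return None

# Two connections from the same client: "single" affinity keeps both on
# one host; with "none" they could land on different hosts.
print(pick_host("10.0.0.5", 50001, 80, "TCP", 4))
print(pick_host("10.0.0.5", 50002, 80, "TCP", 4))
```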

Source of information: O'Reilly, Windows Server 2008: The Definitive Guide
