For either of the Windows Server 2008 R2 fault-tolerant clustering technologies to be most effective, administrators must carefully choose which technology and configuration best fits their application or service requirements. NLB is best suited to provide connectivity to TCP/IP-based services such as Remote Desktop Services, web-based services and applications, VPN services, streaming media, and proxy services. NLB scales easily: the total number of clients that can be supported is roughly the number of clients a single NLB cluster node can support multiplied by the number of nodes in the cluster. Windows Server 2008 R2 failover clusters provide system failover functionality for mission-critical applications, such as enterprise messaging, databases, file servers, print services, DHCP services, Hyper-V virtualization services, and many other built-in Windows Server 2008 R2 roles, role services, and features.
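The linear scaling rule above can be expressed as simple arithmetic. This is a back-of-the-envelope sketch, not a sizing tool; the function name and the sample figures are illustrative assumptions, not values from the text.

```python
def nlb_capacity(clients_per_node: int, node_count: int) -> int:
    """Rough total-client estimate for an NLB cluster:
    per-node capacity multiplied by the number of nodes."""
    return clients_per_node * node_count

# Example: if one node handles ~500 concurrent clients,
# a 4-node cluster handles roughly 2000.
print(nlb_capacity(500, 4))
```

In practice the scaling is not perfectly linear (heartbeat and convergence traffic consume some capacity), but the multiplication gives a useful first estimate.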
Although Microsoft does not support running both NLB and failover clustering on the same server, multitiered applications can take advantage of both technologies: NLB load-balances the front-end application servers, while failover clusters provide failover capabilities for back-end databases whose data is too large to replicate during the day, or that cannot withstand more than a few minutes of downtime when a node or service fails.
Windows Server 2008 R2 failover clusters are a clustering technology that provides system-level fault tolerance by using a process called failover. Failover clusters are best used to provide access to resources such as file shares, print queues, email or database services, and back-end applications. Applications and network services defined and managed by the failover cluster, along with cluster hardware including shared disk storage and network cards, are called cluster resources. When services and applications are cluster-aware or certified to work with Windows Server 2008 R2 failover clusters, they are monitored and managed by the cluster service to ensure proper operation.
When a problem is encountered with a cluster resource, the failover cluster service attempts to fix the problem by restarting the resource and any dependent resources. If that doesn’t work, the Services and Applications group to which the resource belongs is failed over to another available node in the cluster, where it can then be restarted. Several conditions can cause a Services and Applications group to fail over to a different cluster node. Failover can occur when an active node in the cluster loses power or network connectivity or suffers a hardware or software failure. In most cases, the failover process is either noticed by the clients as a short disruption of service or is not noticed at all. Of course, if failback is configured on a particular Services and Applications group and the group is simply not stable, but all possible nodes are available, the group will be continually moved back and forth between the nodes until the failover threshold is reached. When this happens, the cluster service shuts the group down and leaves it offline.
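The restart-then-failover-then-offline escalation described above can be sketched as a small decision function. This is a simplified illustration of the described behavior, not the cluster service's actual algorithm; the restart and failover limits are assumed values (real clusters let administrators configure these thresholds per group).

```python
RESTART_ATTEMPTS = 3    # assumed limit on local restart attempts
FAILOVER_THRESHOLD = 5  # assumed limit on group failovers

def handle_resource_failure(group: dict, nodes: list) -> str:
    """Escalation sketch: restart the resource on the current node first;
    after repeated failures, fail the group over to the next available node;
    once the failover threshold is exceeded, take the group offline."""
    if group["owner"] is None:          # already taken offline
        return "offline"
    group["restarts"] += 1
    if group["restarts"] <= RESTART_ATTEMPTS:
        return "restarted on " + group["owner"]
    group["restarts"] = 0
    group["failovers"] += 1
    if group["failovers"] > FAILOVER_THRESHOLD:
        group["owner"] = None           # threshold reached: stay offline
        return "offline"
    nxt = (nodes.index(group["owner"]) + 1) % len(nodes)
    group["owner"] = nodes[nxt]         # move the group to another node
    return "failed over to " + group["owner"]
```

With an unstable group and failback enabled, repeated calls bounce ownership between nodes until the threshold trips and the group goes offline, matching the ping-pong scenario described in the text.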
To avoid unwanted failover, power management should be disabled on each of the cluster nodes in the motherboard BIOS, on the network interface cards (NICs), and in the Power applet in the operating system’s Control Panel. Power settings that allow a display to shut off are okay, but the administrator must make sure that the disks, as well as each of the network cards, are configured never to enter standby mode.
Cluster nodes can monitor the status of resources running on their local system, and they can also keep track of other nodes in the cluster through private network communication messages called heartbeats. Heartbeat communication is used to determine the status of a node and send updates of cluster configuration changes and the state of each node to the cluster quorum.
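Heartbeat-based liveness detection typically works by treating a node as failed once too many consecutive heartbeats are missed. The sketch below illustrates that idea only; the interval and missed-beat limit are assumed values, not the defaults of the Windows cluster service.

```python
HEARTBEAT_INTERVAL = 1.0  # assumed seconds between heartbeat messages
MISSED_LIMIT = 5          # assumed missed beats before a node is suspect

def node_is_alive(last_heartbeat: float, now: float) -> bool:
    """A node is presumed up while its most recent heartbeat is
    younger than MISSED_LIMIT heartbeat intervals."""
    return (now - last_heartbeat) < HEARTBEAT_INTERVAL * MISSED_LIMIT
```

Tolerating a few missed beats before declaring a node down avoids spurious failovers caused by a single dropped packet on the private network.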
The cluster quorum contains the cluster configuration data necessary to restore a cluster to a working state. Each node in the cluster must have access to the quorum resource, regardless of which quorum model is chosen; otherwise, the node will not be able to participate in the cluster. This prevents something called “split-brain” syndrome, in which two nodes in the same cluster both believe they are the active node and try to control the shared resource at the same time or, worse, each node presents its own set of data when separate data sets are available, causing changes in both data sets and a whirlwind of ensuing issues.
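The general mechanism behind split-brain prevention is majority voting: a partition of the cluster may stay active only if it holds more than half of all quorum votes, so two partitions can never both win. This is a generic sketch of that principle, not the specific vote accounting of any Windows Server 2008 R2 quorum model.

```python
def has_quorum(votes_present: int, total_votes: int) -> bool:
    """Majority rule: a partition may run the cluster only if it holds
    a strict majority of all quorum votes. Because two disjoint
    partitions cannot both exceed half, split-brain is impossible."""
    return votes_present > total_votes // 2
```

For example, if a 5-vote cluster splits 3/2, only the 3-vote side keeps running; on an even 2/2 split of a 4-vote cluster, neither side has quorum, which is why even-node clusters add a disk or file share witness vote.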
Network Load Balancing
The second clustering technology provided with Windows Server 2008 R2 is Network Load Balancing (NLB). NLB clusters provide high network performance, availability, and redundancy by balancing client requests across several servers with replicated configurations. When client load increases, NLB clusters can easily be scaled out by adding more nodes to the cluster to maintain or improve response time to client requests. One important point to note is that NLB does not itself replicate server configurations or application data sets.
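One common way to balance requests across nodes with identical configurations is to hash each client's address and map the result to a node, so the same client consistently lands on the same server. The sketch below illustrates that general idea; it is a hypothetical helper, not NLB's actual filtering algorithm or affinity implementation.

```python
import hashlib

def pick_node(client_ip: str, nodes: list) -> str:
    """Deterministically map a client to one node by hashing its IP.
    The same client always reaches the same node, and clients are
    spread roughly evenly across the cluster."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["web1", "web2", "web3"]
print(pick_node("10.0.0.1", nodes))  # always the same node for this client
```

Because every node computes the same hash independently, no central dispatcher is needed, which is one reason this style of load balancing requires no proprietary hardware.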
Two great features of NLB are that no proprietary hardware is needed and an NLB cluster can be configured and up and running literally in minutes. One important point to remember is that within NLB clusters, each server’s configuration must be updated independently. The NLB administrator is responsible for making sure that application or service configurations and versions, operating system security and updates, and data are kept consistent across each NLB cluster node.
Source of Information : Sams - Windows Server 2008 R2 Unleashed (2010)