WiMax

The WiMax Forum harmonizes IEEE 802.16 and ETSI HIPERMAN into a single WiMax standard. The core components of a WiMax system are the subscriber station (SS), also known as the customer premises equipment (CPE), and the base station (BS). A BS and one or more CPEs can form a cell with a point-to-multipoint (PTM) structure, in which the BS acts as the central controller of the participating CPEs. The WiMax standard specifies the use of licensed and unlicensed bands within the 2- to 11-GHz range, allowing non-LOS (NLOS) transmission. NLOS is highly desirable for wireless service deployment because it does not require high antennas to reach remote receivers, which reduces site interference and the cost of deploying CPEs. However, NLOS raises multipath transmission issues such as signal distortion and interference. WiMax employs a set of technologies to address these issues:

» OFDM: As discussed earlier in this chapter, OFDM uses multiple orthogonal narrowband carriers to transmit symbols in parallel, effectively reducing ISI and frequency-selective fading.

» Subchannelization: WiMax subchannelization uses fewer OFDM carriers on a terminal's upstream link, but each carrier operates at the same power level as those of the base station. Subchannelization extends the reach of the terminal's upstream signal and reduces its power consumption.

» Directional antennas: Directional antennas are advantageous in fixed wireless systems because they pick up signals better than omnidirectional antennas; hence, a fixed CPE typically uses a directional antenna, while a fixed BS may use directional or omnidirectional antennas.

» Transmit and receive diversity: WiMax may optionally employ a transmit and receive diversity algorithm to take advantage of multipath and reflection using MIMO radio systems.

» Adaptive modulation: Adaptive modulation allows the transmitter to adjust the modulation scheme based on the SNR of the radio link. For example, if the SNR is 20 dB, 64-QAM is used to achieve high capacity; if the SNR is 16 dB, 16-QAM is used, and so on. Other NLOS techniques of WiMax, such as directional antennas and error correction, are used alongside adaptive modulation. (A minimal sketch of this threshold-based selection appears after this list.)

» Error-correction techniques: WiMax specifies the use of several error-correction codes and algorithms to recover frames lost due to frequency-selective fading or burst errors. These include Reed-Solomon FEC, convolutional encoding, interleaving algorithms, and Automatic Repeat Request (ARQ) for frame retransmission.

» Power control: In a WiMax system, a BS is able to control power consumption of CPEs by sending power-control codes to them. The power-control algorithms improve overall performance and minimize power consumption.

» Security: Authentication between a BS and an SS is based on X.509 digital certificates with RSA public-key authentication. Traffic is encrypted using Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), which uses the Advanced Encryption Standard (AES) for transmission security and data-integrity authentication. WiMax also supports the Triple Data Encryption Standard (3DES).
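The adaptive-modulation behavior described in the list above can be illustrated with a small sketch. The SNR thresholds and the lower-order schemes below are assumptions for demonstration, extrapolated only from the two examples in the text (64-QAM at 20 dB, 16-QAM at 16 dB); real WiMax link-adaptation profiles define their own thresholds and also vary the coding rate.

# Illustrative sketch of threshold-based adaptive modulation.
# Thresholds are assumptions for demonstration only; actual 802.16
# link-adaptation tables differ and also select a coding rate.

MODULATION_TABLE = [
    (20.0, "64-QAM"),   # high SNR: high capacity (example from the text)
    (16.0, "16-QAM"),   # medium SNR (example from the text)
    (9.0,  "QPSK"),     # assumed fallback for lower SNR
    (3.0,  "BPSK"),     # assumed most robust scheme
]

def select_modulation(snr_db: float) -> str:
    """Return the most efficient scheme whose SNR threshold is met."""
    for threshold, scheme in MODULATION_TABLE:
        if snr_db >= threshold:
            return scheme
    return "no transmission"  # link too poor for any scheme

if __name__ == "__main__":
    for snr in (22.0, 17.5, 10.0, 1.0):
        print(f"SNR {snr:5.1f} dB -> {select_modulation(snr)}")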

Initially, the WiMax Forum has focused on fixed wireless access for home and business users using outdoor antennas (CPEs), while indoor fixed access is under development. A base station may serve about 500 subscribers. WiMax vendors have begun to test fixed wireless broadband access in metropolitan areas such as Seattle. Due to its relatively high cost, the major targets of this technology are business users who want an alternative to T1, rather than residential home users. A second goal of the forum is to address portable wireless access without mobility support, and another is to achieve mobile access with seamless mobility support (802.16e). Recall that a Wi-Fi hotspot offers wireless LAN access within the limited coverage of an AP; the WiMax Forum plans to build MetroZones that allow portable broadband wireless access. A MetroZone comprises base stations connected to each other via LOS wireless links, and 802.16 interfaces for laptop computers or PDAs that connect to the “best” base station for portable data access. This aspect of WiMax seems more compelling in terms of potential data rate compared with 3G cellular systems.

Like the Wi-Fi Alliance, the WiMax Forum aims at providing certification of WiMax products in order to guarantee interoperability. In March 2005, Alvarion, Airspan, and Redline began to conduct the industry's first WiMax interoperability test. WiMax chips for fixed CPEs and base stations developed by Intel will be released in the second half of 2005, and WiMax chips for mobile devices will be released in 2007. At the time of this writing, some WiMax systems were expected to go into trial operation in late 2005.

Source of Information : Elsevier Wireless Networking Complete 2010

Wireless Broadband: IEEE 802.16

The most noticeable technological developments in wireless MANs and wireless WANs are embodied by the IEEE 802.16, 802.20, and ETSI HIPERMAN standards. Based on the open IEEE 802.16 and HIPERMAN standards, a commercialized technology called WiMax has been devised. The WiMax Forum, an industry consortium of over 100 companies, has been formed to promote the technology and provide certified, interoperable WiMax products. IEEE 802.16 specifies the PHY and MAC layers and supports higher-layer protocols such as ATM, Ethernet, and IP above them.

It is noteworthy that the frequency band of 10 to 66 GHz specified by the initial 802.16 standard requires LOS transmission. Some other frequency bands are also specified in later versions of the standard in order to provide indoor wireless access. The MAC layer portion of 802.16 addresses QoS by introducing a bandwidth request and grant scheme. Terminals can be polled or actively signal the required bandwidth, which is based on traffic QoS parameters. 802.16 employs a public-key infrastructure in conjunction with a digital certificate for authentication.
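As a rough illustration of the request/grant idea described above (and not the actual 802.16 MAC signaling or message formats), the sketch below models a base station that polls terminals, collects bandwidth requests tagged with a QoS priority, and issues grants from a fixed pool of uplink slots. All names, priorities, and slot counts are assumptions for demonstration.

# Conceptual sketch of a bandwidth request/grant cycle, loosely inspired
# by the 802.16 MAC description above. Field names and numbers are
# illustrative assumptions, not the standard's actual messages.

from dataclasses import dataclass

@dataclass
class BandwidthRequest:
    terminal_id: str
    slots_needed: int
    qos_priority: int  # lower value = higher priority (assumption)

def grant_bandwidth(requests, uplink_slots_available):
    """Grant slots in QoS-priority order until the uplink frame is full."""
    grants = {}
    for req in sorted(requests, key=lambda r: r.qos_priority):
        granted = min(req.slots_needed, uplink_slots_available)
        if granted == 0:
            break
        grants[req.terminal_id] = granted
        uplink_slots_available -= granted
    return grants

if __name__ == "__main__":
    polled = [
        BandwidthRequest("SS-1", slots_needed=40, qos_priority=0),  # e.g., voice
        BandwidthRequest("SS-2", slots_needed=80, qos_priority=2),  # e.g., bulk data
        BandwidthRequest("SS-3", slots_needed=20, qos_priority=1),
    ]
    print(grant_bandwidth(polled, uplink_slots_available=100))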

Extensions of IEEE 802.16 include:

• 802.16a, which specifies a data rate up to 280 Mbps per base station over the 2- to 11-GHz frequency band, reaching a maximum of 50 km, and supports mesh deployment.

• 802.16b, which addresses QoS issues surrounding real-time multimedia traffic.

• 802.16c, which defines system profiles that operate at 10 to 66 GHz for interoperability.

• 802.16d, which defines system profiles for 802.16a devices.

• 802.16e, which standardizes handoff across base stations for mobile data access.

The ETSI HIPERMAN standard is similar to 802.16a. It has been developed in very close cooperation with IEEE 802.16, such that the HIPERMAN standard and the IEEE 802.16a standard can work together seamlessly. As a result, many of the characteristics of 802.16 are available in HIPERMAN, such as QoS support, adaptive modulation, and strong security. HIPERMAN supports both PTM and mesh network configurations. The differences between HIPERMAN and 802.16 lie primarily in the PHY layer. In order to create a single interoperable standard for commercialization, as well as product testing and certification, several leaders in the wireless industry formed the WiMax Forum. Another IEEE working group, IEEE 802.20 Mobile Broadband Wireless Access (MBWA), targets the 500-MHz to 3.5-GHz frequency band for mobile data access, an application also targeted by 802.16e; however, 802.20 does not have as strong industry support as 802.16 does.

Source of Information : Elsevier Wireless Networking Complete 2010

Wireless Metropolitan Area Networks

Wireless MANs refer to a set of wireless data networks that provide wireless data access in a metropolitan area. The principal advantage of building wireless MANs for data access, as opposed to establishing a wired network infrastructure, is avoiding the cost of copper-wire or fiber-optic cable, installation, and maintenance. In rural areas and developing countries where telephone lines and cable television are not in place, a wireless data access solution is more cost effective than a wired network solution. Depending on how wireless technologies are used in the infrastructure, wireless MANs can be categorized into the following types:

• Wireless “last mile” (fixed broadband wireless access),
• Wireless data access for mobile terminals,
• Wireless backbones or wireless mesh.

The first type is still based on a wired network infrastructure; that is, base stations connect directly to a backend wired network. PTM wireless communication replaces wired network communication between a base station and the end-user's computer, the so-called “last mile.” Telephone-line-based last-mile access allows dial-up data access and ADSL (with the necessary modems), whereas cable-television-based last-mile access permits higher bandwidths and an always-on connection. Dedicated T1 is commonly used by businesses. For the general public, Internet service providers coined the terms “broadband Internet” and “high-speed Internet access” to differentiate high-speed data access services such as ADSL and cable television from traditional dial-up service. In fact, one of the driving forces behind wireless last-mile technology is the rapid growth of ADSL and cable broadband Internet access in recent years.

The second type of wireless MANs targets mobile data access. In a sense, 2.5G and 3G cellular networks could be considered wireless MANs or wireless WANs as they have provided wide-area mobile data access for cell phone users. On the other hand, it would be natural to speculate on extending wireless LANs to cover a larger area and to allow roaming across areas covered by these base stations. Still, this type of wireless MAN relies on a wired network infrastructure to function, as the base stations connect directly to a wired network. Many proprietary wireless MANs have been in operation for years. They mainly target a very narrow business market such as mobile professionals, rather than the general public.

The third type of MAN is a pure wireless network, in which backbones as well as the means of access are both wireless. Base stations are not connected to a backend wired network; instead, they coordinate with adjacent base stations, forming a mesh for data forwarding over a wide area. This is a significant development with regard to providing data access services to underdeveloped areas where no fixed networks exist.

Source of Information : Elsevier Wireless Networking Complete 2010

Creating an NLB Cluster

Before an NLB cluster can be created, a few pieces of information are required. An NLB cluster is actually defined by an IP address (or addresses), a DNS name, and the TCP/IP ports that will be used. Each NLB cluster node can also be configured with multiple network cards. Each card can be associated with a different NLB cluster, and a single card can support multiple clusters, but each cluster must have a different DNS name and IP address(es). One configuration that cannot be performed is creating a single NLB cluster that uses multiple network adapters in a single node. To designate multiple adapters for a single NLB cluster, third-party network teaming software must be loaded prior to configuring the NLB cluster; the cluster will use the virtual team network adapter, and the teamed physical adapters should not be configured with NLB. For this example, a new NLB cluster will be created for the name www.companyabc.com using the IP address of 192.168.206.50. To create an NLB cluster, perform the following steps (a conceptual sketch of the resulting port rules follows the steps):

1. Log on to a Windows Server 2008 R2 system with an account that has local administrator rights and that has the NLB feature already installed.

2. Click Start, click All Programs, click Administrative Tools, and select Network Load Balancing Manager.

3. When the Network Load Balancing Manager console opens, click the Cluster menu, and select New to create a new cluster.

4. When the New Cluster window opens, type in the name of the first server that will be added to the new NLB cluster, and click Connect. If the server is a remote system and cannot be contacted, ensure that the Inbound Remote Administration exception has been enabled in the remote system’s firewall.

5. When the server is contacted, each of the network adapters will be listed. Select the adapter that will be used for the NLB cluster, as shown in Figure 29.16, and click Next.

6. On the Host Parameters page, accept the defaults of giving this first server the Host ID of 1 and select the dedicated IP address that will be used when communication is received for the NLB cluster IP address, which will be specified next. Click Next to continue.

7. On the Cluster IP Addresses page, click the Add button to specify an IPv4 address and subnet mask or an IPv6 address to use for the NLB cluster, and click OK. For our example, we will use the IPv4 configuration of 192.168.206.50/255.255.255.0.

8. Back on the Cluster IP Addresses page, add additional cluster IP addresses as required, and click Next to continue.

9. On the Cluster Parameters page, enter the fully qualified DNS name that is associated with the IP address specified on the previous page, and select whether it will be used for Unicast traffic, Multicast traffic, or IGMP multicast. This choice depends on the network communication of the service or application that will run in this NLB cluster. For this example, we are creating an NLB cluster for standard web traffic, so we will use www.companyabc.com as the Internet name and select Unicast as the cluster operation mode.

10. If multiple IP addresses were defined on the previous page, the IP address can be chosen from the IP address drop-down list, and the Internet name and cluster operation mode can be defined for each additional address. When all the IP addresses have had their properties defined, click Next to continue.

11. On the Port Rules page, a default rule is precreated that allows all traffic on all ports to be load-balanced across the NLB cluster for the cluster IP address and the local server's dedicated IP address. Select this rule and click the Remove button to delete it.

12. Click the Add button to create a new port rule.

13. When the Add/Edit Port Rule window opens, type in the starting and ending port range, for example 80 and 80 for a single HTTP port rule, but do not close the window.

14. Under protocols, select the TCP option button, but do not close the window.

15. In the Filtering Mode section, select Multiple Host, and select Single Affinity, but do not close the window.

16. Finally, review the settings, and click OK to create the port rule.

17. Back on the Port Rules page, click the Add button to create an additional port rule.

18. Specify the starting port as 0 and the ending port as 79, select Both for the protocol’s configuration, select the Disable This Port Range Filtering mode, and click OK to create the rule.

19. Back in the Port Rules page, click the Add button to create one more port rule.

20. Specify the starting port as 81 and the ending port as 65535, select Both for the protocol’s configuration, select the Disable This Port Range Filtering mode, and click OK to create the rule.

21. Back on the Port Rules page, review the list of port rules and if the rules look correct, click Finish.

22. Back in the Network Load Balancing Manager window, the cluster will be created and brought online. The cluster IP addresses are automatically added to the TCP/IP properties of the designated network adapter. Close the NLB Manager and log off of the server.
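The three port rules created in steps 11 through 20 can be summarized with a small conceptual model. This is a sketch of the decision logic the rules express, not Microsoft's NLB driver or its API; the data layout and function are assumptions for illustration only.

# Conceptual model of the port rules built in steps 11-20 above:
# load-balance TCP port 80 across all nodes with Single affinity, and
# drop everything else sent to the cluster IP address.

PORT_RULES = [
    {"start": 0,  "end": 79,    "protocol": "Both", "mode": "Disabled"},
    {"start": 80, "end": 80,    "protocol": "TCP",
     "mode": "Multiple Host", "affinity": "Single"},
    {"start": 81, "end": 65535, "protocol": "Both", "mode": "Disabled"},
]

def classify(port: int, protocol: str) -> str:
    """Return how the cluster treats traffic to the given port/protocol."""
    for rule in PORT_RULES:
        if rule["start"] <= port <= rule["end"] and rule["protocol"] in ("Both", protocol):
            if rule["mode"] == "Disabled":
                return "dropped by every node"
            return f"load-balanced ({rule['mode']}, {rule.get('affinity')} affinity)"
    return "no matching rule"

if __name__ == "__main__":
    print(classify(80, "TCP"))    # load-balanced (Multiple Host, Single affinity)
    print(classify(443, "TCP"))   # dropped by every node
    print(classify(53, "UDP"))    # dropped by every node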

Source of Information : Sams - Windows Server 2008 R2 Unleashed (2010)

NLB Cluster Port Rules

Creating Port Rules
When an NLB cluster is created, one general port rule is also created for the cluster. The NLB cluster port rule or rules define what type of network traffic the cluster will load-balance across the cluster nodes and how the connections will be managed. The Port Rules Filtering option defines how the traffic will be balanced across each individual node. As a best practice, limiting the allowed ports for the clustered IP addresses to only those needed by the cluster's load-balanced applications can improve overall cluster performance and security. In an NLB cluster, because each node can answer for the clustered IP address, all inbound traffic is received and processed by each node. When a node receives a request, it either handles the request or drops the packet if another node has already established a session or responded to the initial request.

When an administrator discards the default NLB cluster port rule and creates a rule that allows only specific ports to the clustered IP address or addresses, plus an additional rule to block all other traffic destined for the cluster IP address, each cluster node can quickly drop packets that do not match an allow rule, which in effect improves the network performance of the cluster. This configuration also has a security benefit: it removes the exposure of the cluster IP address to attacks on any other port.



Port Rules Filtering Mode and Affinity
Within an NLB cluster port rule, the NLB administrator must configure the appropriate filtering mode. This allows the administrator to specify whether only one node or multiple nodes in the cluster can respond to requests from a single client throughout a session. There are three filtering modes: Single Host, Disable This Port Range, and Multiple Host.

Single Host Filtering Mode
The Single Host filtering mode ensures that all traffic sent to the cluster IP address that matches a port rule with this filtering mode enabled is handled exclusively in the cluster by one particular cluster node.

Disable This Port Range Filtering Mode
The Disable This Port Range filtering mode tells the cluster which ports are not active on the cluster IP address. Any traffic requests received on the cluster IP address that match a port rule with this filtering mode result in the network packets getting automatically discarded or dropped. Administrators should configure specific port rules and use this filter mode for ports and port ranges that do not need to be load-balanced across the cluster nodes.

Multiple Host Filtering Mode
The Multiple Host filtering mode is probably the most commonly used filtering mode and is also the default. This mode allows traffic to be handled by all the nodes in the cluster. When traffic is balanced across multiple nodes, the application requirements define how the affinity mode should be set. There are three types of Multiple Host affinity (a conceptual sketch of how affinity maps clients to nodes follows this list):

» None—This affinity type can send a given client's requests to any of the servers in the cluster during the span of the session. This can speed up server response times but is well suited only for serving static data to clients. This affinity type works well for general web browsing, read-only file data, and FTP servers.

» Network—This affinity type routes traffic from a particular class C address space to a single NLB cluster node. This mode is not used very often, but it can accommodate client sessions that use stateful applications and situations in which different client requests are serviced by down-level proxy servers. It is a useful affinity type for companies whose remote-office traffic passes through proxies before connecting to the services and applications managed by the port rules in the NLB cluster.

» Single—This affinity type is the most widely used. After the initial request from a particular client is received and handled by a cluster node, that node will handle every subsequent request from that client until the session is completed. This affinity type can accommodate sessions that require stateful data, such as an encrypted SSL web application or a Remote Desktop session. Single is the default affinity on a port rule and is well suited to handle almost any NLB clustered service or application.
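To make the three affinity types concrete, the sketch below shows one way a node could decide which cluster member owns an incoming connection. The hash-based distribution and node names are illustrative assumptions; the real NLB driver uses its own distribution algorithm.

# Conceptual illustration of how the three affinity types map clients to
# nodes. The hashing scheme here is an assumption for clarity; it is not
# the actual algorithm used by the Windows NLB driver.

import hashlib
import random

NODES = ["NODE1", "NODE2", "NODE3"]  # hypothetical cluster members

def _pick(key: str) -> str:
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def owning_node(client_ip: str, affinity: str) -> str:
    if affinity == "None":
        # Any node may answer each request; suitable for stateless content.
        return random.choice(NODES)
    if affinity == "Network":
        # All clients in the same class C network land on the same node.
        class_c = ".".join(client_ip.split(".")[:3])
        return _pick(class_c)
    if affinity == "Single":
        # Every request from this client IP lands on the same node.
        return _pick(client_ip)
    raise ValueError(f"unknown affinity: {affinity}")

if __name__ == "__main__":
    for ip in ("203.0.113.10", "203.0.113.77", "198.51.100.5"):
        print(ip, "->", owning_node(ip, "Single"), "(Single),",
              owning_node(ip, "Network"), "(Network)")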

Source of Information : Sams - Windows Server 2008 R2 Unleashed (2010)

NLB Applications and Services

NLB is well equipped to distribute user connections and create fault tolerance for a number of different applications and network services. Because NLB does not replicate data across cluster nodes—and neither does failover clustering for that matter—using applications that require access to local data that is dynamic or frequently changes is not recommended for NLB clusters.

Applications well suited for NLB clusters include web-based applications and services, proxy services, virtual private networks (VPNs), SMTP gateways, streaming media, and Remote Desktop Services Session Host server systems. Many other applications and services can also run well on NLB clusters, but the preceding list covers what most organizations utilize NLB clusters for.

NLB clusters are based on client connections made to a specific DNS name, IP address, and TCP and/or UDP port using either IPv4 or IPv6. It’s important to read the vendor’s application documentation regarding how the client communicates with the application and how this communication can be configured on load-balancing devices or services such as Microsoft Windows Server 2008 R2 NLB clusters. For instance, certain applications use cookies or other stateful session information that can be used to identify a client throughout the entire session and it is important that the client maintains a connection to the same cluster node during the entire session. Other applications, such as a website that serves up static pages, can respond to a single client’s requests from multiple nodes in the NLB cluster. For a web-based application, such as an e-commerce application, an encrypted SSL session, or an application that is authenticated by the actual web server, the NLB cluster would need to direct all communication between the client and a specific cluster node. Considering these types of scenarios in advance helps determine how the NLB cluster will be defined.


Installing the Network Load Balancing Feature
Before an NLB cluster can be created, the feature needs to be installed on all servers that will participate in the cluster. To install the Network Load Balancing feature, perform the following steps:

1. Log on to each Windows Server 2008 R2 system with an account that has local administrator rights.

2. Click Start, click All Programs, click Administrative Tools, and select Server Manager.

3. In the tree pane, select Features, and in the Actions pane, click the Add Features link.

4. On the Before You Begin page, click Next to continue.

5. On the Add Features page, check the box for Network Load Balancing, and click Next to continue.

6. On the Confirm Installation Selections page, review the list of features that will be added, and click Install to begin the installation.

7. On the Installation Results page, review the results, and click Close to return to Server Manager.

8. Close the Server Manager console and log off of the server.

9. Log on and repeat this process on the remaining servers that will participate in the cluster as required.

Source of Information : Sams - Windows Server 2008 R2 Unleashed (2010)

Backing Up and Restoring Failover Clusters

Windows Server 2008 R2 contains a rebuilt backup program appropriately named Windows Server Backup. Windows Server Backup can be used to back up each cluster node and any cluster disks that are currently online on the local node. Also, the System State of the cluster node can be backed up individually or as part of a complete system backup.

To successfully back up and restore the entire cluster or a single cluster node, the cluster administrator must first understand how to troubleshoot, back up, and restore a standalone Windows Server 2008 R2 system using Windows Server Backup. The process of backing up cluster nodes is the same as for a standalone server, but restoring a cluster might require additional steps or configurations that do not apply to a standalone server. To be prepared to recover from different types of cluster failures, you must take the following steps on each cluster node:

» Back up each cluster node’s local disks.

» Back up each cluster node’s System State.

» Back up the cluster quorum from any node running in the cluster.

» For failover clusters using shared storage, back up shared cluster disks from the node on which the disks are currently hosted.


Failover Cluster Node—Backup Best Practices
As a backup best practice for cluster nodes, administrators should strive to back up everything as frequently as possible. Because cluster availability is so important, here are some recommendations for cluster node backup:

» Back up each cluster node’s System State daily and immediately before and after a cluster configuration change is made.

» Back up cluster local drives and System State daily if the schedule permits or weekly if daily backups cannot be performed.

» Back up cluster shared drives daily if the schedule permits or weekly if daily backups cannot be performed.

» Using Windows Server Backup, perform a full system backup before any major changes occur and monthly if possible. If a full system backup is scheduled using Windows Server Backup, this task is already being performed.


Restoring an Entire Cluster to a Previous State
Changes to a cluster should be made with caution and, if at all possible, should be tested in a nonproduction isolated lab environment first. When cluster changes have been implemented and deliver undesirable effects, the way to roll back the cluster configuration to a previous state is to restore the cluster configuration to all nodes. This process is simpler than it sounds and is performed from only one node. There are only two caveats to this process:

» All the cluster nodes that were members of the cluster previously need to be currently available and operational in the cluster. For example, if Cluster1 was made up of Server1 and Server2, both of these nodes need to be active in the cluster before the previous cluster configuration can be rolled back.

» To restore a previous cluster configuration to all cluster nodes, the entire cluster needs to be taken offline long enough to restore the backup, reboot the node from which the backup was run, and manually start the cluster service on all remaining nodes.

To restore an entire cluster to a previous state, perform the following steps:

1. Log on to one of the Windows Server 2008 R2 cluster nodes with an account with administrator privileges over all nodes in the cluster. (The node should have a full system backup available for recovery.)

2. Click Start, click All Programs, click Accessories, and select Command Prompt.

3. At the command prompt, type wbadmin get versions to reveal the list of available backups. For this example, our backup version is named 09/16/2009-08:30 as defined by the version identifier.

4. After the correct backup version is known, type the following command: wbadmin start recovery -version:09/16/2009-08:30 -itemType:App -items:Cluster (where 09/16/2009-08:30 is the version identifier of the backup), and press Enter. (A scripted wrapper around these commands is sketched after the steps.)

5. Wbadmin returns a prompt stating that this command will perform an authoritative restore of the cluster and restart the cluster services. Type in Y and press Enter to start the authoritative cluster restore.

6. When the restore completes, each node in the cluster needs to have the cluster service started to complete the process. This might have been performed by the restore operation, but each node should be checked to verify that the cluster service is indeed started.

7. Open the Failover Cluster Manager console to verify that the restore has completed successfully. Close the console and log off of the server when you are finished.
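For repeated lab restores, the two wbadmin commands from steps 3 through 5 can be wrapped in a short script. The sketch below is an assumption-laden convenience wrapper, not part of the documented procedure: it does not parse wbadmin's output, it simply runs the commands shown above and sends the Y confirmation described in step 5. Run it from an elevated prompt on the node that holds the backup.

# Sketch that wraps the documented wbadmin commands (steps 3-5 above).
# It assumes the administrator has already picked a version identifier
# from the "wbadmin get versions" listing.

import subprocess
import sys

def list_backup_versions() -> None:
    # Step 3: show the available backups so the admin can pick a version.
    subprocess.run("wbadmin get versions", shell=True, check=True)

def restore_cluster(version_id: str) -> None:
    # Step 4: authoritative restore of the cluster configuration.
    cmd = f"wbadmin start recovery -version:{version_id} -itemType:App -items:Cluster"
    # Step 5: wbadmin asks for confirmation; send "Y" as the text describes.
    subprocess.run(cmd, shell=True, check=True, input="Y\n", text=True)

if __name__ == "__main__":
    if len(sys.argv) == 1:
        list_backup_versions()
    else:
        restore_cluster(sys.argv[1])  # e.g., 09/16/2009-08:30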

Source of Information : Sams - Windows Server 2008 R2 Unleashed (2010)

Deploying Failover Clusters

The Windows Server 2008 R2 Failover Clustering feature is not installed on a system by default and must be installed before failover clusters can be deployed. Remote management on administrative workstations can be accomplished by using the Remote Server Administration Tools feature, which includes the Failover Cluster Manager snap-in, but the feature needs to be installed on all nodes that will participate in the failover cluster. Even before installing the Failover Clustering features, several steps should be taken on each node of the cluster to help deploy a reliable failover cluster. Before deploying a failover cluster, perform the following steps on each node that will be a member of the failover cluster:

» Configure fault-tolerant volumes or LUNs using local disks or SAN-attached storage for the operating system volume.

» Configure at least two network cards, one for client and cluster communication and one for dedicated cluster communication.

» For iSCSI shared storage, configure an additional, dedicated network adapter or hardware-based iSCSI HBA.

» For Hyper-V clusters, configure an additional, dedicated network adapter on each node for virtual guest communication.

» Rename each network card property for easy identification within the Failover Cluster Manager console after the failover cluster is created. For example, rename Local Area Connection to PRODUCTION, Local Area Connection 2 to iSCSI NIC, and Local Area Connection 3 to HEARTBEAT, as required and possible. Also, if network teaming will be used with third-party software, configure the team first and rename each physical network adapter in the team to TEAMMEMBER1 and 2. The virtual team adapter should then get the name of PRODUCTION. Remember, teaming is not supported or recommended for iSCSI and heartbeat connections.

» Configure all necessary IPv4 and IPv6 addresses as static configurations.

» Verify that any and all HBAs and other storage controllers are running the proper firmware and matched driver version suitable for Windows Server 2008 R2 failover clusters.

» If shared storage will be used, plan to utilize at least two separate LUNs, one to serve as the witness disk and one to serve as the cluster disk for a high-availability Services and Applications group.

» If applications or services not included with Windows Server 2008 R2 will be deployed in the failover cluster, as a best practice, add an additional fault-tolerant array or LUN to the system to store the application installation and service files.

» Ensure that proper LUN masking and zoning has been configured at the FC or Ethernet switch level for FC or iSCSI shared storage communication, suitable for failover clustering. Each node in the failover cluster, along with the HBAs of the shared storage device, should have exclusive access to the LUNs presented to the failover cluster.

» If multiple HBAs will be used in each failover node or in the shared storage device, ensure that a suitable Multipath I/O driver has been installed. The Windows Server 2008 R2 Multipath I/O feature can be used to provide this function if approved by the HBA, switch, and storage device vendors and Microsoft.

» Shut down all nodes except one. On that node, configure the shared storage LUNs as Windows basic disks, format each as a single partition/volume spanning the entire disk, and define an appropriate drive letter and volume label. Then shut down the node used to set up the disks, bring up each remaining node one at a time, and verify that each LUN is available; if necessary, change the drive letter to match what was configured on the first node.

» As required, test Multipath I/O for load balancing and/or failover using the appropriate diagnostic or monitoring tool to ensure proper operation on each node one at a time.

» Designate a domain user account to be used for Failover Cluster Manager, and add this account to the local Administrators group on each cluster node. In the domain, grant this account the Create Computer Accounts right at the domain level to ensure that when the administrative and high-availability Services and Applications groups are created, the account can create the necessary domain computer accounts.

» Create a spreadsheet with the network names, IP addresses, and cluster disks that will be used for the administrative cluster and the high-availability Services and Applications group or groups that will be deployed in the failover cluster. Each Services and Applications group requires a separate network name and IPv4 address, but if IPv6 is used, the address can be added separately in addition to the IPv4 address or a custom or generic Services and Applications group needs to be created.

After the tasks in the preceding list are completed, the Failover Clustering feature can be installed. Failover clusters are deployed using a series of steps, including the following tasks:

1. Preconfigure the nodes as listed previously, and create a domain user account to be used as the cluster service account.

2. Install any necessary Windows Server 2008 R2 roles, role services, or features that will be deployed on the failover cluster. If any wizards are included with the role installation, like creating a DFS namespace or a DHCP scope, skip those wizards. Repeat this installation on all nodes that will be in the cluster.

3. Install the Failover Clustering feature on each node logged on with the cluster service account.

4. Run the Validate a Configuration Wizard and review the results to ensure that all tests pass successfully. If any tests fail, the configuration will not be supported by Microsoft and can be prone to several different types of issues and instability.

5. Run the Create a Cluster Wizard to actually deploy the administrative cluster.

6. Customize the failover cluster properties.

7. Install any Microsoft or third-party applications that will be added as application-specific cluster resources, so the application can be deployed using the High Availability Wizard.

8. Run the High Availability Wizard to create a high-availability Services and Applications group within the failover cluster, such as a file server, print server, DHCP server, virtual machine, or another of the included or separate services or applications that will run on a Windows Server 2008 R2 failover cluster.

9. Test the failover cluster configuration, and back it up.

Source of Information : Sams - Windows Server 2008 R2 Unleashed (2010)

Cloud storage is for blocks too, not just files

One of the misconceptions about cloud storage is that it is only useful for storing files. This assumption comes from the popularity of file...