Storage can be one of the most complicated areas to plan for virtualization deployments. It’s rare that you’ll know exactly how large your VMs will grow, which may lead to either too much or not enough storage being allocated to a particular virtualization host. You can avoid both situations with some planning and monitoring.
When you’re planning a virtualization deployment, knowing the basics of the workload and its expected growth is critical to ensuring that enough storage is provisioned to the host. However, the way that storage is provisioned is as critical as the amount. Allocating 2TB of storage to a host for VM usage may sound great; but if it’s two 1TB drives hanging off a Serial Advanced Technology Attachment (SATA) controller on the motherboard, it’s highly unlikely to perform well under load.
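To make that planning step concrete, here is a back-of-the-envelope sizing sketch in Python. Every input below (VM count, VHD size, growth rate, overhead, headroom factor) is an illustrative assumption to be replaced with your own measurements, not a recommendation:

    # Back-of-the-envelope storage sizing for a Hyper-V host.
    # All inputs are illustrative assumptions; substitute measured values.
    vm_count = 12                   # planned VMs on this host
    vhd_size_gb = 60                # starting VHD size per VM
    annual_growth = 0.40            # assumed 40% data growth per year
    horizon_years = 2               # provision for two years of growth
    overhead_gb_per_vm = 8          # saved state, snapshots, config files

    per_vm_gb = vhd_size_gb * (1 + annual_growth) ** horizon_years
    raw_gb = vm_count * (per_vm_gb + overhead_gb_per_vm)
    provisioned_gb = raw_gb * 1.25  # 25% headroom so volumes never run full

    print(f"Per-VM estimate after growth: {per_vm_gb:.0f} GB")
    print(f"Raw requirement: {raw_gb:.0f} GB")
    print(f"Provision with headroom: {provisioned_gb:.0f} GB")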
Storage planning involves several areas of concern: the storage controllers, the number of drives connected to those controllers, the type of storage on the back end, and how volumes are laid out:
Storage controllers. The number of storage controllers installed in the system is a common bottleneck. A VM will do as much I/O as a physical system, and a VM doing significant amounts of I/O can and will saturate the storage controller. Performance then suffers for every other VM whose virtual hard disks (VHDs) sit behind that controller. That’s why it’s absolutely critical to have multiple paths available to the storage pool, both for performance and for failover in case a connection is lost. Having multiple controllers available eliminates a single point of failure that can cripple a large-scale virtualization deployment.
Number of drives. As we mentioned earlier, provisioning storage for virtualization doesn’t always mean getting the largest drive available. In many cases, just as with many high-performance workloads, it’s preferable to have multiple smaller disks rather than fewer larger disks. Having multiple disks available lets you spread the work across multiple physical disks that are part of a Redundant Array of Independent Disks (RAID); the sizing sketch after this list makes the trade-off concrete.
Storage type. The type of storage connected to the host is of slightly less importance. As long as the storage is on the Windows Server 2008 hardware compatibility list, it will work with Hyper-V. This includes Small Computer System Interface (SCSI), serial-attached SCSI (SAS), Internet SCSI (iSCSI), Fibre Channel, and even Integrated Drive Electronics (IDE) and SATA. Where you will see a difference is in the rotational speed of the disk and the amount of cache available on the disk. The performance gains from moving from a 7,200 RPM disk to a 10,000 RPM or even 15,000 RPM disk are significant. Similarly, moving from 4 or 8MB of cache to 16 or 32MB will increase performance.
Volume management. When you pair storage with highly available VMs, the best practices get a bit more complicated. VMs that are made highly available as part of failover clustering are limited to one VM per logical unit number (LUN) if you want each VM to fail over independently. This means you must carefully plan the provisioning of LUNs.
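The drive-count trade-off above can be quantified with a standard rule-of-thumb calculation. The per-disk IOPS figures and RAID write penalties below are common planning approximations rather than measured values, and the 70/30 read/write mix is an assumption:

    # Rule of thumb: compare a few large SATA disks with many small SAS disks.
    # Per-disk IOPS and RAID write penalties are planning approximations.
    DISK_IOPS = {"7200_sata": 80, "10k_sas": 130, "15k_sas": 180}
    RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

    def effective_iops(disks, per_disk_iops, raid_level, read_fraction=0.7):
        """Front-end IOPS an array can sustain for a given read/write mix."""
        raw = disks * per_disk_iops
        penalty = RAID_WRITE_PENALTY[raid_level]
        # Each front-end write costs `penalty` back-end operations.
        return raw / (read_fraction + (1 - read_fraction) * penalty)

    # Two 1TB SATA drives mirrored vs. eight smaller 15K SAS drives in RAID 10.
    print(f"2x SATA RAID 1: {effective_iops(2, DISK_IOPS['7200_sata'], 'raid1'):.0f} IOPS")
    print(f"8x SAS RAID 10: {effective_iops(8, DISK_IOPS['15k_sas'], 'raid10'):.0f} IOPS")

Under these assumptions, the eight-spindle array delivers roughly nine times the front-end IOPS of the mirrored pair, which is the point of the drive-count guidance above.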
After your Hyper-V host is up and running, you should watch a few performance counters related to storage:
• Physical Disk, % Disk Read Time
• Physical Disk, % Disk Write Time
• Physical Disk, % Idle Time
These three counters provide a good high-level view of disk activity. If the read and write times are high (consistently greater than 75%), then disk performance is likely to be affected.
Additional counters to monitor include these:
• Physical Disk, Avg. Disk Read Queue Length
• Physical Disk, Avg. Disk Write Queue Length
Sustained high values for these counters (greater than 2 per physical spindle) may indicate a disk bottleneck.
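If you’d rather sample these counters from a script than watch Performance Monitor, the sketch below shells out to Windows’ built-in typeperf tool using the counter paths listed above. The _Total instance, five-second interval, and one-minute window are arbitrary choices; query an individual disk instance instead to isolate a specific volume:

    # Sample the host's disk counters with Windows' built-in typeperf tool.
    import subprocess

    COUNTERS = [
        r"\PhysicalDisk(_Total)\% Disk Read Time",
        r"\PhysicalDisk(_Total)\% Disk Write Time",
        r"\PhysicalDisk(_Total)\% Idle Time",
        r"\PhysicalDisk(_Total)\Avg. Disk Read Queue Length",
        r"\PhysicalDisk(_Total)\Avg. Disk Write Queue Length",
    ]

    # -si 5: sample every 5 seconds; -sc 12: collect 12 samples (one minute).
    result = subprocess.run(
        ["typeperf", *COUNTERS, "-si", "5", "-sc", "12"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)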
Source: Windows Server 2008 Hyper-V: Insiders Guide to Microsoft’s Hypervisor (Sybex)