
Virtualization Best Practices - Storage: How Many Drives Do I Need?

Storage can be one of the most complicated areas to plan for virtualization deployments. It’s rare that you’ll know exactly how large your VMs will grow, which may lead to either too much or not enough storage being allocated to a particular virtualization host. You can avoid both situations with some planning and monitoring.

When you’re planning a virtualization deployment, knowing the basics of the workload and expected growth is critical to ensuring that enough storage is provisioned to the host. However, the way that storage is provisioned is as critical as the amount. Allocating 2TB of storage to a host for VM usage may sound great; but if it’s two 1TB drives connected to a Serial Advanced Technology Attachment (SATA) controller on the motherboard, it’s highly unlikely that it will perform under load.

Storage planning involves two main areas of concern: storage controllers and the number of drives connected to those controllers. The type of storage on the back end also matters:

Storage controllers. The number of storage controllers installed in the system is a common bottleneck. A VM will do as much I/O as a physical system. If a VM is doing significant amounts of I/O, it can and will saturate the storage controller. Performance will suffer for any other VMs that are using virtual hard disks (VHDs) available from that storage controller. That’s why it’s absolutely critical to have multiple paths available to the storage pool, for both performance reasons and failover in case of a loss of connection. Having multiple controllers available eliminates the single point of failure that can cripple a large-scale virtualization deployment.

Number of drives. As we mentioned earlier, provisioning storage for virtualization doesn’t always mean getting the largest drive available. In many cases, just as with many high-performance workloads, it’s preferable to have multiple smaller disks as opposed to fewer larger disks. Having multiple disks available lets you spread the work across multiple physical disks that are part of a Redundant Array of Independent Disks (RAID).

Storage type. The type of storage connected to the host is of slightly less importance. As long as the storage is on the Windows Server 2008 hardware compatibility list, it will work with Hyper-V. This includes small computer system interface (SCSI), serial-attached SCSI (SAS), Internet SCSI (iSCSI), fibre channel, and even Intelligent Drive Electronics (IDE) and SATA. You’ll see the difference in the rotational speed of the disk, as well as the amount of cache available on the disk. The performance gains from moving from a 7,200 RPM disk to a 10,000 RPM or even 15,000 RPM disk are significant and can increase even more past that level. Similarly, moving from 4 or 8MB of cache to 16 or 32MB will increase performance.

Volume management. When you pair storage with highly available VMs, the best practices get a bit more complicated. VMs that are made highly available as part of failover clustering have a limitation of one VM per logical unit number (LUN) if individual failover per VM is desired. This means you must carefully plan the provisioning of LUNs.

After your Hyper-V host is up and running, you should watch a few performance counters related to storage:
• Physical Disk, % Disk Read Time
• Physical Disk, % Disk Write Time
• Physical Disk, % Idle Time

These three counters provide a good high-level view of disk activity. If the read and write times are high (consistently greater than 75%), then disk performance is likely to be affected.
Additional counters to monitor include these:
• Physical Disk, Avg. Disk Read Queue Length
• Physical Disk, Avg. Disk Write Queue Length
High levels for these counters (greater than 2) may indicate a disk bottleneck.
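The thresholds above can be turned into a simple triage check. This is an illustrative sketch, not a Microsoft tool; the `assess_disk()` helper, the sample values, and the warning strings are our own, and the 75% and queue-length-of-2 thresholds are the rules of thumb from the text.

```python
# Sketch: evaluate sampled Physical Disk counter values against the
# rule-of-thumb thresholds described above. The helper and sample
# values are hypothetical, not part of any Microsoft API.

BUSY_THRESHOLD = 75.0   # % read/write time consistently above this is a concern
QUEUE_THRESHOLD = 2.0   # avg. disk queue length above this may indicate a bottleneck

def assess_disk(read_pct, write_pct, read_queue, write_queue):
    """Return a list of warnings for one Physical Disk sample."""
    warnings = []
    if read_pct > BUSY_THRESHOLD or write_pct > BUSY_THRESHOLD:
        warnings.append("disk busy: read/write time consistently high")
    if read_queue > QUEUE_THRESHOLD or write_queue > QUEUE_THRESHOLD:
        warnings.append("possible bottleneck: disk queue length > 2")
    return warnings

print(assess_disk(80.0, 40.0, 1.2, 0.5))  # busy warning only
print(assess_disk(30.0, 20.0, 3.5, 0.8))  # queue warning only
```

In practice you would feed this from several consecutive Perfmon samples, since the text stresses *consistently* high values rather than momentary spikes.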

Source of Information: Sybex Windows Server 2008 Hyper-V: Insider's Guide to Microsoft's Hypervisor

Virtualization Best Practices - How Much Memory Is Enough?

Memory is another key area to consider when you set up virtualization. After all, you can have all the processing power in the world, but unless you have enough memory to run those VMs at the same time, you’ll be stuck with extra processing power.

Hyper-V doesn’t support the concept of allocating more memory than is available in the host system. This prevents you from affecting the performance of the host. (If you started a VM using 4GB on a system with 2GB of RAM, the system would need to use virtual memory to provide the extra memory beyond what was available on the host.) How much memory is necessary for the host? The usual answer applies here: It depends on a number of factors.

Number of VMs running, and their allocated memory. How many VMs will be running on the host, and how much memory will be allocated to each one? The amount of memory each VM needs is entirely dependent on the workload running within the VM. A SQL Server running in a VM will require much more memory than a departmental file server.

Other workloads running on the host. Although it’s recommended that Hyper-V be the only role running on the host, it’s possible that this won’t be the case. If so, it’s critical that enough memory be available to service all the other workloads running on the system. Refer to the memory requirements of the other workload(s) that will be running on the host, and add that amount to the total amount of memory required for the VM.

Host reserves. It’s recommended that you set aside 512MB of RAM for the host. That memory is used by Hyper-V’s virtualization stack that runs in the parent partition, as well as by any services running in the parent partition. Hyper-V won’t allow a VM to launch unless at least 32MB of RAM is available.

Other VMs (for quick-migration scenarios only). If the host is part of a Windows Server 2008 cluster for quick migration, ensure that there are sufficient resources across all nodes of the cluster in case one node goes down. If a node hosting VMs goes offline for any reason, those VMs will attempt to restart across all other nodes in the cluster. However, if there’s not enough memory on the cluster’s remaining active nodes, the VMs may not be able to start.
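The sizing arithmetic in the points above can be sketched as follows. This is an illustrative model, not a Hyper-V API: the helper names, the per-node data layout, and the sample figures are assumptions, while the 512MB parent-partition reserve comes from the recommendation in the text.

```python
# Sketch: host RAM must cover all VM allocations plus a parent-partition
# reserve, and in a quick-migration cluster the surviving nodes must be
# able to absorb a failed node's VMs. All figures are illustrative.

HOST_RESERVE_MB = 512  # recommended set-aside for the parent partition

def host_has_capacity(host_ram_mb, vm_allocations_mb, other_workloads_mb=0):
    """Can this host run the listed VMs plus any other workloads?"""
    needed = sum(vm_allocations_mb) + other_workloads_mb + HOST_RESERVE_MB
    return needed <= host_ram_mb

def cluster_survives_node_loss(nodes):
    """nodes: list of (host_ram_mb, [vm_mb, ...]). Check every single-node failure."""
    for failed in range(len(nodes)):
        displaced = sum(nodes[failed][1])
        spare = sum(
            ram - sum(vms) - HOST_RESERVE_MB
            for i, (ram, vms) in enumerate(nodes) if i != failed
        )
        if displaced > spare:
            return False
    return True

print(host_has_capacity(16384, [4096, 4096, 2048]))                      # True
print(cluster_survives_node_loss([(16384, [4096]), (16384, [12288])]))   # False
```

The second call returns False because if the lightly loaded node fails, the remaining node has only about 3.5GB free, which cannot absorb the displaced 4GB VM.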

Luckily, monitoring the amount of available memory on a Hyper-V host is significantly easier than monitoring processor utilization, because memory utilization appears in Task Manager. You can also monitor the host’s memory utilization using the Memory Available Mbytes counter.

Note: In some rare cases, a VM may not be able to start even when plenty of memory is available. This is most commonly seen when large file copies are performed in the parent partition. Microsoft has released a hotfix for this issue as KB953585.


Virtualization Best Practices - Faster Processors and Performance

In some cases, the speed of the processors leads to better performance. However, this isn’t always the case. For example, let’s consider a VM created by using a physical-to-virtual conversion. This workload was previously running on an older Pentium 3 Xeon processor at 700MHz and is running a custom line-of-business (LOB) application.
Now that it’s in a VM, will the workload run more quickly? Depending on the type of application, it may not. That doesn’t mean the extra processing power from faster processors goes to waste, because it provides extra headroom for workloads to grow and a resource for other VMs. But you do need to take this fact into account.


CPU-Bound or I/O-Bound Workloads
Not all workloads are capped by the processing power available to the VM. Some workloads, such as a SQL Server, are generally bound to a greater extent by the limits of memory and the disk subsystem than by the processor. In this case, buying a faster processor won’t necessarily provide faster performance to the VMs—use the money you save to invest in memory or a faster storage subsystem.

Once the host has been deployed, administrators often use management tools to determine how the host is performing. But because of the virtual nature of the processors, monitoring a virtual system isn’t as simple as looking at Task Manager.


Perfmon
Traditionally, administrators used Windows Task Manager to get a quick glance at what was happening on the system. However, because of the architecture of Hyper-V, Task Manager doesn’t show the CPU usage of VMs. Task Manager running in the parent partition has no way of displaying that information; instead, you need to use Perfmon.

Perfmon, short for Performance Monitor, is Microsoft’s tool to examine performance data. This data can be as simple as CPU utilization or as complex as context switches between Ring 0 and Ring 3.

By looking at the Hyper-V performance counters through Perfmon, you can determine if the system has room for more VMs or, conversely, if the system is oversubscribed (too many VMs running on the host).

Using Perfmon to monitor the host is easy. From the Start menu, select Administrative Tools, and then select Reliability And Performance Monitor. Click Performance Monitor in the list on the left.

By default, only one item is tracked in Perfmon: % Processor Time. Unless you’re interested in the processor utilization of the parent partition only, you’ll need to add some counters.

The processor-performance counters refer to the number of logical processors (LPs) in the system. A logical processor is a unit of processing power—for example, if you have a system with a single CPU socket and a single-core, non-hyperthreaded processor, you have one LP. Change that processor to a dual-core processor, and you have two LPs. Adding Hyper-Threading? Make it four logical processors.
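The logical-processor arithmetic described above is simple enough to express directly. This is a minimal sketch; the function name is our own.

```python
# Sketch of the logical-processor (LP) counting described above:
# LPs = sockets x cores per socket, doubled if Hyper-Threading is on.

def logical_processors(sockets, cores_per_socket, hyperthreading=False):
    lp = sockets * cores_per_socket
    return lp * 2 if hyperthreading else lp

print(logical_processors(1, 1))                        # 1: single-core, no HT
print(logical_processors(1, 2))                        # 2: dual-core
print(logical_processors(1, 2, hyperthreading=True))   # 4: dual-core with HT
```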


To add a performance counter, you click the green + sign or press Ctrl-I. A large number of Hyper-V–related performance counters are available, but we’re interested in a couple in particular:

Hyper-V Hypervisor Virtual Processor, % Total Run Time. Under Instances Of Selected Object, you’ll see a few options. _Total provides a sum of all the VPs allocated to all running VMs. You can also add individual VPs allocated to a VM.

Hyper-V Hypervisor Root Virtual Processor, % Total Run Time. This is the percentage of time that the selected logical processors spend executing non-hypervisor code in the root/parent partition.

Adding these two values gives you the total CPU utilization of the host executing virtualization-related code. The closer the value gets to 100%, the more heavily loaded the system is.
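That addition can be sketched in a couple of lines. The helper name and sample values are our own; the only real content is the sum of the two counters, capped at 100%.

```python
# Sketch: combine the two Perfmon values above into one host CPU figure.
# Sample values are made up for illustration.

def host_cpu_utilization(vp_total_run_time, root_vp_run_time):
    """Sum guest VP run time and root-partition VP run time; cap at 100%."""
    return min(vp_total_run_time + root_vp_run_time, 100.0)

print(host_cpu_utilization(62.5, 11.0))  # 73.5
```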


Virtualization Best Practices - Choosing a Processor

As of this writing, Hyper-V provides support for up to 24 cores in the parent partition. A core is a unit of processing power. Both Intel and AMD have released processors that consist of multiple processor cores on a single processor. These processors plug into sockets on the computer’s motherboard. Having multiple processor cores on a single die allows even a single-socket system to execute multiple threads of execution at the same time. Although 24 cores sounds like a lot of processing power (and in most cases, it certainly is!), virtualization can easily use all of it.

You must consider three key factors about processors as you work through the planning stages:
• Number of processors in the system
• Number of cores on the processors in the system
• Speed of the processors in the system

Because one of the key features of virtualization is the ability to achieve higher density (running multiple VMs on a single physical host), administrators naturally gravitate toward the processor as a key bottleneck. After all, if a host runs out of processing power, those virtualized workloads may not be able to keep up with the demand being placed on them.

You need to answer a couple of key questions for the host:
• How many processors are necessary for this virtualization host?
• Do the processors need to provide two, four, or even six cores per processor?

The answer to these two questions is usually, “It depends.” Two schools of thought apply here, which bring up two more questions: Do you want to use more dual-socket systems, which usually have a lower price point? Or do you want to achieve maximum consolidation by going with quad-socket systems?

The price advantage of dual-socket systems is significant. At the time of writing, you can deploy three physical dual-socket systems for the price of one quad-socket system. You can then cluster those three dual-socket systems together in a high-availability configuration to ensure continuous uptime for the workloads running in the VMs. With the three-system configuration, however, you need to consider some other costs. Having three systems means further expenses for operating system licenses, management software licenses, and the administration of three servers. You also need to factor in the power costs of the three servers.
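The cost trade-off above can be put into a back-of-the-envelope model. All prices below are placeholders rather than quotes; the per-server overhead stands in for the OS licenses, management licenses, administration, and power costs the text mentions.

```python
# Sketch: dual- vs. quad-socket cost comparison with illustrative prices.
# The 3:1 hardware price ratio from the text narrows once per-server
# overhead (licenses, administration, power) is added.

def total_cost(server_price, servers, per_server_overhead):
    return servers * (server_price + per_server_overhead)

quad = total_cost(server_price=30000, servers=1, per_server_overhead=5000)
trio = total_cost(server_price=10000, servers=3, per_server_overhead=5000)
print(quad, trio)
```

With these placeholder numbers the three-node cluster still costs more overall, which is exactly the point the text makes: hardware price alone doesn't decide the question.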

The other option, which uses only one quad-socket virtualization host, doesn’t provide any sort of backup or high availability—meaning that if the single host goes down, all the virtualized workloads will go down with it. But quad-socket servers generally provide a bit more in terms of expansion and I/O scalability, which could result in additional VMs being deployed on a single host.

Some enterprises are also considering the use of blade servers. Although the up-front cost of the enclosure is higher, the ability to use 14 systems in 7 units of rack space (for example) could be a better fit for some companies. As you can see, there’s no simple answer when you’re deciding on the best configuration for your host.


Virtual Machine Settings

Now that you’ve created a new VM using the wizard, let’s look at the VM’s settings. To do so, select the newly created VM in the Hyper-V Manager, and click Settings. The Settings dialog is broken into two sections: Hardware and Management. The hardware options control the hardware that’s available to the VM, and the management options control the VM’s administrative tasks. We’ll look at all the options available.



Hardware
Just like a physical system, a VM consists of a variety of (virtual) hardware devices. In the settings for a VM, you can modify that hardware—including adding processors, network adapters, and hard disks.



Add Hardware
You can modify the VM’s configuration by adding hardware, such as a small computer system interface (SCSI) controller or an additional network interface. The VM must be powered off before hardware can be added. After you add the virtual hardware and power on the VM, the OS will recognize the new devices.



BIOS
Hyper-V doesn’t allow direct access to the Basic Input/Output System (BIOS), so the only BIOS settings you can modify are exposed here:

• Num Lock. Selecting this check box triggers Num Lock in the VM to be active on boot.

• Startup Order. This option controls the order in which devices will be queried for boot. The top-most option will be tried first, and if it fails, then the next option will be tried. By default, the boot order is CD, IDE, legacy network adapter, and then floppy.



Memory
You can adjust the amount of memory allocated to the VM. This can range from 8MB to the maximum amount of RAM in the system. There are some caveats:

• Once the VM is powered on, the memory is allocated to the VM and can’t be reclaimed until the VM is saved or turned off.

• Memory allocated to a VM can’t be shared. If multiple VMs are running the same OS,
Hyper-V doesn’t provide the capability to share common pages of memory between the VMs.
• Hyper-V doesn’t provide support for allocating more memory than is available on the host. This limits the amount of memory available to allocate to VMs to about 1GB less than the maximum amount of RAM in the host.
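The caveats above amount to a simple admission check. This sketch is our own illustration, not a Hyper-V API: the function name is hypothetical, and the "about 1GB" overhead from the text is modeled as a flat 1,024MB.

```python
# Sketch of the memory rules above: allocations range from 8MB up to
# roughly host RAM minus ~1GB, and running VMs cannot overcommit or
# share pages. The helper is illustrative, not a Hyper-V API.

MIN_VM_MB = 8
HOST_OVERHEAD_MB = 1024  # roughly 1GB kept back for the parent partition

def can_start_vm(requested_mb, host_ram_mb, already_allocated_mb):
    if requested_mb < MIN_VM_MB:
        return False
    usable = host_ram_mb - HOST_OVERHEAD_MB
    return already_allocated_mb + requested_mb <= usable

print(can_start_vm(4096, host_ram_mb=8192, already_allocated_mb=2048))  # True
print(can_start_vm(4096, host_ram_mb=4096, already_allocated_mb=0))     # False
```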



Processor
The Processor Settings dialog has a number of options. Hyper-V supports up to four virtual processors in the VM. Those virtual processors are scheduled as threads on the physical processors. A VM can’t have more virtual processors allocated than are present in the host. That means that in order to create a four-core VM, the host system must have at least four cores.

The upper limit of total virtual processors you can allocate on a host is eight times the number of logical processors. A single dual-socket, dual-core server (exposing four logical processors to the host) can support a total of 32 virtual processors. You should keep a very close eye on performance to ensure that the system can handle all the running VMs as well as the host.
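The limits above can be checked programmatically. This is a hedged sketch; the function name and the validation approach are our own, while the constants (4 VPs per VM, 8 VPs per logical processor) come from the text.

```python
# Sketch of the virtual-processor limits above: at most 4 VPs per VM,
# no more VPs in a VM than the host has logical processors, and a
# host-wide total of 8 VPs per logical processor.

MAX_VPS_PER_VM = 4
VPS_PER_LP = 8

def vp_config_valid(vm_vp_counts, host_lps):
    if any(v > MAX_VPS_PER_VM or v > host_lps for v in vm_vp_counts):
        return False
    return sum(vm_vp_counts) <= VPS_PER_LP * host_lps

print(vp_config_valid([4, 4, 2], host_lps=4))  # True: 10 of 32 possible VPs used
print(vp_config_valid([4], host_lps=2))        # False: VM has more VPs than host LPs
```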

Additionally, the Processor Settings dialog is where you set the resource-control options for the VM. This dialog includes the following options:

• Virtual Machine Reserve. This reserves a set amount of processor power for the VM. You can think of this reserve as a guaranteed amount of processor resources.

• Virtual Machine Limit. This is a hard cap on the amount of processor power that the VM can take from the host.

• Relative Weight. The relative weight is another method of assigning a value of importance between multiple VMs. You can set this option to any value from 1 to 10,000. If two VMs have the same VM reserves and limits, the VM with a higher relative weight will receive more processing power.

• Processor Functionality. The last check box in the processor settings controls the processor functionality. By selecting this option, you’ll let older OSs, such as Windows NT or earlier, work with Hyper-V.
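The interplay of reserve, limit, and relative weight can be illustrated with a toy model. This is emphatically not the hypervisor's actual scheduler; it is a simplified sketch showing the direction of the behavior the text describes: spare CPU is split in proportion to weight and clamped to each VM's limit.

```python
# Simplified sketch (NOT the real Hyper-V scheduler): spare CPU beyond
# reserves is split by relative weight, then capped at each VM's limit.

def share_spare_cpu(spare_pct, vms):
    """vms: list of (relative_weight, limit_pct). Returns each VM's share."""
    total_weight = sum(w for w, _ in vms)
    return [min(spare_pct * w / total_weight, limit) for w, limit in vms]

# Equal limits, 2:1 weights: the heavier VM gets twice the spare CPU.
print(share_spare_cpu(90.0, [(200, 100.0), (100, 100.0)]))  # [60.0, 30.0]
```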

It’s important to know the differences between a logical processor and a virtual processor. Logical processors are the foundation of today’s multicore processors. A system with a single core and without Hyper-Threading has a single logical processor. Adding additional cores increases the logical processor count. For example, a system with two physical processors, each processor having two cores, has a total logical-processor count of four. A virtual processor is seen on the host as a single thread of execution, which can then be scheduled on any of the logical processors in the system.



IDE Controller
Hyper-V includes a dual-channel IDE controller much like many standard hardware PCs. By default, a single VHD is connected to the primary IDE controller in the primary connection, with a CD/DVD drive connected to the primary connection of the secondary controller.

The VM can boot only from a VHD connected to the IDE controller. Although this does seem strange and counterintuitive for performance reasons, this arrangement is necessary because of the architecture of Hyper-V. The synthetic devices in Hyper-V aren’t seen in the OS without the integration components being installed.

By clicking the IDE controller, you can add a new hard disk or DVD drive to the specific IDE controller. (DVD drives can be connected only to the IDE controllers.) If you select a hard disk, you have a number of options to choose from. At the top, you can select the specific location where the VHD file will be connected. If no SCSI controllers have been added to the VM, then you can add the new VHD to one of the pre-existing IDE controllers only. However, if you added a SCSI controller to the VM’s configuration, then the SCSI controller and all available locations will be listed as well.

After you’ve assigned the new VHD to a specific location, you can set up the specifics of the disk. A number of Hard Drive settings are available, including creating a new VHD, using an existing disk, or editing or inspecting an existing disk. The New, Edit, and Inspect buttons all map back to the New Virtual Hard Disk Wizard. This wizard provides a one-stop interface for all tasks having to do with VHD files.

The bottom option, Physical Hard Disk, lets you directly connect a physical logical unit number (LUN) to a VM. This allows the VM to directly use storage volumes that are connected to controllers on the host—including fibre channel, Internet SCSI (iSCSI), or direct-attached SCSI storage. The use of physical hard disks lets you treat your VMs the same way as physical machines, and you get an increase in performance compared to the default dynamically expanding VHD. In order to connect a physical hard disk to a VM, you must mark the physical disk Offline on the host. You can do this by opening the Disk Management MMC snap-in, selecting the disk, and then right-clicking it and selecting Offline. Take care that you don’t bring the same volume online while the VM has it mounted, or you may lose data.



Network Adapter
You have several items to choose in the Network Adapter Settings window, and you can change the same settings regardless of the type of network adapter—normal or legacy.

• Network. Each network adapter defined in the Settings dialog can be connected to a single virtual network.

• MAC Address. The Media Access Control (MAC) address of a network adapter is what makes each network adapter unique.

Hyper-V gives you the ability to assign a static MAC address to each network adapter in the VM or to use a dynamically generated MAC address. Some applications use the MAC address of a system for a number of purposes. To set a static MAC address, click the Static radio button and enter the value.

Dynamic MAC addresses under Hyper-V always start with 00:15:5D, with the last three octets randomly chosen based on the MAC address of the host’s physical adapter.
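A MAC in Hyper-V's dynamic range can be sketched as follows. Note the simplification: Hyper-V derives the random octets partly from the host adapter's MAC, whereas this illustration just uses random values.

```python
# Sketch: generate a MAC in Hyper-V's dynamic range (00:15:5D prefix).
# Using purely random trailing octets is an illustrative simplification;
# Hyper-V bases them on the host adapter's MAC address.
import random

def dynamic_mac(rng=random):
    tail = [rng.randrange(256) for _ in range(3)]
    return "00:15:5D:" + ":".join(f"{o:02X}" for o in tail)

print(dynamic_mac())  # e.g. 00:15:5D:3A:91:0C
```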

• Enable Virtual LAN Identification. If the VM needs to communicate over a specific virtual local area network (VLAN) using the 802.1q protocol, enter the VLAN ID here. Multiple virtual network adapters can be connected to different VLANs.



COM Port
The COM ports in the VM can either be left unconnected (the default selection) or be connected to a named pipe. Named pipes are a special way of communicating between two different systems.

To connect a virtual COM port to a named pipe on the local system, enter the name of the pipe in the Pipe Name text box. There’s no need to format it in the traditional \\.\pipe\&lt;pipe name&gt; syntax; the dialog box takes care of that. To connect to a remote pipe on another system, select the Remote Computer check box and enter the name of the computer.
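The fully qualified path the dialog builds for you follows the standard Windows named-pipe convention, which can be sketched as a string template. The names "mypipe" and "host" below are placeholders.

```python
# Sketch: the fully qualified named-pipe paths the dialog constructs.
# "mypipe" and "host" are placeholder names for illustration.

def pipe_path(pipe_name, computer=None):
    prefix = rf"\\{computer}" if computer else r"\\."
    return rf"{prefix}\pipe" + "\\" + pipe_name

print(pipe_path("mypipe"))          # \\.\pipe\mypipe
print(pipe_path("mypipe", "host"))  # \\host\pipe\mypipe
```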



Floppy
A VM has a single virtual floppy disk drive. The virtual floppy drive has no access to the physical floppy disk drive—rather, it uses virtual floppy disks (VFD files). You can create VFD files by using the Virtual Disk Wizard (New -> Floppy Disk).




Hyper-V Requirements

Because Hyper-V is included as a role of Windows Server 2008 x64 Edition, it inherits the same hardware requirements. However, a few areas require special attention for Hyper-V.


Hardware Requirements
Some of the requirements for Hyper-V are hard requirements, such as the type of processor, whereas others are best practices to ensure that Hyper-V performs optimally.


Processor
Hyper-V requires a 64-bit capable processor with two separate extensions: hardware-assisted virtualization and data-execution prevention. Hardware-assisted virtualization is given a different name by each vendor—Intel calls it Virtualization Technology (VT), and AMD calls it AMD Virtualization (AMD-V). Almost all processors now ship with those features present, but check with your processor manufacturer to make sure.

Although the functionality is required in the processor, it must also be enabled in the BIOS. Each system manufacturer has a different way of exposing the functionality, as well as a different name for it; however, most, if not all, manufacturers provide a way to enable or disable it in the BIOS. Note that some systems don’t activate the feature after it’s enabled unless there’s a hard power cycle, so we recommend shutting the system off completely.

Data-execution prevention (DEP) goes by different names depending on the processor manufacturer—on the Intel platform, it’s called eXecute Disable (XD); and AMD refers to it as No eXecute (NX). DEP helps protect your system against malware and improperly written programs by monitoring memory reads and writes to ensure that memory pages marked as Data aren’t executed. Because you’ll be running multiple VMs on a single system, ensuring stability of the hosting system is crucial.
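Checking for the two required extensions amounts to looking for the right CPU feature flags. The sketch below uses the common lowercase flag names ("vmx"/"svm" for Intel VT/AMD-V, "nx" for DEP) as they appear in tools such as /proc/cpuinfo on Linux; the function and the sample flag sets are illustrative.

```python
# Sketch: check for the two required processor extensions by flag name.
# "vmx" = Intel VT, "svm" = AMD-V, "nx" = No eXecute (DEP). The flag-name
# convention and helper are illustrative, not a Hyper-V check.

def hyperv_capable(cpu_flags):
    flags = set(cpu_flags)
    has_hv = "vmx" in flags or "svm" in flags  # hardware-assisted virtualization
    has_dep = "nx" in flags                    # data-execution prevention
    return has_hv and has_dep

print(hyperv_capable({"vmx", "nx", "sse2"}))  # True
print(hyperv_capable({"sse2", "nx"}))         # False: no VT/AMD-V
```

Remember that even when the processor reports the flags, the features must also be enabled in the BIOS for Hyper-V to start.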


Storage
As we talked about earlier, Hyper-V’s architecture lets you use standard Windows device drivers in conjunction with the VSP/VSC architecture. As such, any of the storage devices listed in the Windows Server Catalog will work with Hyper-V. These include SCSI, SAS, fibre channel, and iSCSI—if there’s a driver for it, Hyper-V can use it. Of course, you’ll want to take some considerations into account when planning the ideal Hyper-V host.

Here are some of the areas where extra attention is necessary:

Multiple spindles and I/O paths. Most disk-intensive workloads, such as database servers, need multiple spindles to achieve high performance. Hyper-V’s storage architecture enables those workloads to be virtualized without the traditional performance penalty. When multiple disk-intensive workloads share the same disk infrastructure, they can quickly slow to a crawl. Having multiple disks (as well as multiple I/O paths) is highly recommended for disk-intensive workloads. Even two workloads sharing a host bus adapter with a single fibre channel can saturate the controller, leading to decreased performance. Having multiple controllers also can provide redundancy for critical workloads.

Disk configurations for optimal performance. Hyper-V has a number of different ways to store the VM’s data, each with its own pros and cons:

Pass-through disks:
• Pros: Pass-through disks generally provide the highest performance. The VM writes directly to the disk volume without any intermediate layer, so you can see near-native levels of performance.
• Cons: Maintaining the storage volumes for each VM can be extremely challenging, especially for large enterprise deployments.

Fixed virtual hard disks:
• Pros: Fixed disks are the best choice for production environments using VHD files. Because you allocate all the disk space when you create the VHD file, you don’t see the expansion penalty that occurs with the dynamically expanding VHD.
• Cons: Because all the space for the VHD is allocated at creation, the VHD file can be large.

Dynamic virtual hard disks:
• Pros: A dynamically expanding VHD expands on demand, saving space on the system until it’s needed. Disks can remain small.
• Cons: There is a small performance penalty when a disk is expanded. If large amounts of data are being written, the disk will need to be expanded multiple times.
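The trade-offs above can be condensed into a small decision helper. The rules and the function name are our own summary of the pros and cons listed, not Hyper-V guidance verbatim.

```python
# Sketch: a decision helper summarizing the disk-type trade-offs above.
# The rules are our own reading of the pros and cons, for illustration.

def choose_disk_type(production, io_intensive, space_constrained):
    if production and io_intensive:
        return "pass-through"   # highest performance, hardest to manage
    if production:
        return "fixed VHD"      # no expansion penalty; large file up front
    if space_constrained:
        return "dynamic VHD"    # grows on demand; small expansion penalty
    return "dynamic VHD"        # good default for test/dev

print(choose_disk_type(production=True, io_intensive=False, space_constrained=False))
# fixed VHD
```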

Snapshots. Snapshots are extremely useful in the test and development environment. However, what can be helpful in one environment can be harmful in another. You shouldn’t use snapshots in a production environment because rolling back to a previous state without taking the proper precautions can mean data loss!


Networking
Much like storage, networking with Hyper-V inherits the rich driver support of Windows Server 2008. Many of the caveats for storage apply to networking as well—ensure that multiple NICs are present so a single interface doesn’t become the bottleneck.

The following list identifies areas where you should pay special attention with networking:
• Hyper-V supports Ethernet network adapters, including 10 Mbps, 100 Mbps, 1 Gbps, and even 10 Gb Ethernet (10GbE) adapters. Hyper-V can’t use ATM or Token Ring adapters, nor can it use wireless (802.11) adapters to provide network access to the VMs.

• During the Hyper-V role installation you can create a virtual network for each network adapter in your system.

• We recommend that you set aside a single NIC to manage the host. That NIC shouldn’t be used for any VMs (no virtual switch should be associated with it). Alternatively, you can use out-of-band management tools to manage the host. Such tools typically use an onboard management port to provide an interface to the system.


Software Requirements
Hyper-V is a feature of Windows Server 2008 x64 Edition only. There’s no support for Hyper-V in the x86 (aka 32-bit) Edition or the Itanium versions of Windows Server 2008. The x64 Edition is required for a couple of reasons:

Kernel address space The 64-bit version of Windows Server 2008 provides a much larger kernel address space as compared to the 32-bit edition. This directly translates into the support of larger processes, which is crucial for virtualization.

Large amount of host memory Hyper-V supports up to 1 TB of RAM on the host. x86 versions of Windows Server 2008 support only up to 64 GB of RAM on the host, which would severely limit the number of VMs you could run.

We’re frequently asked to explain the differences with Hyper-V between versions of Windows Server 2008. There’s no difference—the features of Hyper-V are the same, regardless of whether you’re running the Standard, Enterprise, or Datacenter product. However, differences in the versions of Windows Server 2008 affect key virtualization scenarios:

Processor sockets. Windows Server 2008 Standard Edition is limited to four sockets, whereas Enterprise Edition supports eight sockets.

Memory. Windows Server 2008 Standard Edition supports up to 32 GB of RAM, and Windows Server 2008 Enterprise Edition supports up to 2 TB of RAM.

Failover clustering. Windows Server 2008 Standard Edition doesn’t include the failover clustering functionality required for quick migration.

Virtual image use rights. Windows Server 2008 includes the rights to run virtual images of the installed operating system. The number of those virtual images is tied to the edition.


Hyper-V Features

Now that we’ve gone over both the scenarios and architecture of Hyper-V, let’s dive into some of the features of Microsoft’s virtualization platform:

32-bit (x86) and 64-bit (x64) VMs Hyper-V supports both 32-bit and 64-bit VMs. This lets users provision both architectures on the same platform, easing the transition to 64-bit while still supporting legacy 32-bit operating systems.

Large memory support (64 GB) within VMs With support for up to 64 GB of RAM per VM, Hyper-V scales to run the vast majority of enterprise-class workloads. Hyper-V can also use up to a total of 1 terabyte (TB) of RAM on the host.

SMP virtual machines Symmetric Multi-Processor (SMP) support allows VMs to recognize and utilize up to four virtual processors. As a result, server applications running in a Hyper-V VM take full advantage of the host system’s processing power.

Integrated cluster support for quick migration and high availability (HA) Windows Server 2008 Hyper-V and HA go hand in hand. It’s easy to create a failover cluster of VM hosts that your VMs can live on. After you set up the failover cluster, you can quickly and easily move a VM from one host to the other from the Failover Cluster Manager or from other management tools (such as System Center Virtual Machine Manager).

Volume Shadow Copy Service integration for data protection Hyper-V includes a Volume Shadow Copy Service (VSS) provider. As we discussed earlier, in the list of scenarios, VSS lets backup applications prepare the system for a backup without requiring the applications (or VMs) to be shut down.

Pass-through high-performance disk access for VMs When a physical volume is connected directly to the VM, disk I/O–intensive workloads can perform at their peak. If the Windows Server 2008 system can see the volume in the Disk Management control panel, the volume can be passed through to the VM. Although you’ll see faster performance with pass-through disk access, certain features (such as snapshots, differencing disks, and host-side backup) that you get from using a VHD file aren’t available with pass-through disks.

VM snapshots Snapshots let administrators capture a point in time for the VM (including state, data, and configuration). You can then roll back to that snapshot at a later point in time or split from that snapshot to go down a different path. The snapshot is a key feature for the test and development scenario, because it lets users easily maintain separate points in time. For example, a user may install an operating system inside a VM and take a snapshot. The user can perform a number of tasks and then take a second snapshot. Then, the user can return to either of those snapshots later, saving configuration time and effort.
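The snapshot workflow just described is essentially a tree: each snapshot records a point-in-time state, and you can roll back to any node or branch off it. Here is a minimal sketch of that idea; the class and method names are illustrative, not Hyper-V's actual object model.

```python
# A toy snapshot tree. Each Snapshot freezes a copy of the VM state; applying
# an older snapshot rolls the VM back and lets later work branch from it.
# Names here are illustrative, not Hyper-V's real API.

class Snapshot:
    def __init__(self, name, state, parent=None):
        self.name = name
        self.state = dict(state)   # frozen copy of the VM state at capture time
        self.parent = parent       # forms the snapshot tree

class VM:
    def __init__(self):
        self.state = {}
        self.current = None        # snapshot the running state descends from

    def take_snapshot(self, name):
        self.current = Snapshot(name, self.state, parent=self.current)
        return self.current

    def apply_snapshot(self, snap):
        """Roll back: running state becomes the captured state, and further
        changes branch off this snapshot instead of the old tip."""
        self.state = dict(snap.state)
        self.current = snap

vm = VM()
vm.state["os"] = "installed"
base = vm.take_snapshot("clean OS")         # first snapshot after OS install
vm.state["app"] = "configured"
configured = vm.take_snapshot("app configured")
vm.apply_snapshot(base)                     # return to the clean install
print("app" in vm.state)                    # False: changes after 'clean OS' are gone
```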

New hardware-sharing architecture (VSP/VSC/VMBus) By using the new VMBus communication protocol for all virtual devices, Hyper-V can provide higher levels of performance than were previously seen with Microsoft virtualization products.

Robust networking: VLANs and NLB Virtual Local Area Network (VLAN) tagging—also referred to as the IEEE standard 802.1q—provides a secure method for multiple networks to use the same physical media. Hyper-V supports VLAN tagging (802.1q) on the virtual network interfaces and specifies a VLAN tag for the network interface. Network Load Balancing (NLB) support in Hyper-V allows VMs to participate in an NLB cluster. An NLB cluster is different from a failover cluster, such as those used for VM quick migration. NLB clusters are configured with front-end nodes that handle all incoming traffic and route it to multiple servers on the back-end.
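The 802.1q tagging mentioned above has a simple on-the-wire shape: in a tagged Ethernet frame, bytes 12-13 carry the TPID value 0x8100, and the next two bytes (the TCI) pack a 3-bit priority, a 1-bit DEI flag, and the 12-bit VLAN ID. A hedged sketch of decoding that tag:

```python
import struct

# Sketch of 802.1Q tag parsing. The field layout (TPID 0x8100, then a 16-bit
# TCI holding PCP/DEI/VID) follows the IEEE 802.1Q standard the text cites;
# the function name and frame below are made up for illustration.

TPID_8021Q = 0x8100

def parse_vlan_tag(frame: bytes):
    """Return (priority, vlan_id) if the frame is 802.1Q-tagged, else None."""
    if len(frame) < 16:
        return None
    tpid, tci = struct.unpack("!HH", frame[12:16])  # after dst + src MACs
    if tpid != TPID_8021Q:
        return None
    priority = tci >> 13          # PCP: top 3 bits
    vlan_id = tci & 0x0FFF        # VID: bottom 12 bits
    return priority, vlan_id

# 6-byte dst MAC + 6-byte src MAC + 802.1Q tag for VLAN 42, priority 5.
frame = bytes(12) + struct.pack("!HH", TPID_8021Q, (5 << 13) | 42) + b"payload"
print(parse_vlan_tag(frame))      # (5, 42)
```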

DMTF standard for WMI management interface The Distributed Management Task Force (DMTF) is a standards body that provides a uniform set of standards for the management of IT environments. Microsoft has worked closely with the DMTF to ensure that all the management interfaces for Hyper-V adhere to the standards, allowing management tools from multiple vendors to manage the system.

Support for full or Server Core installations Hyper-V can run on a full installation of Windows Server 2008 as well as the Server Core option. We’ll discuss Server Core in more depth later.


Advantages over Virtual Server
Windows Server 2008 Hyper-V has a number of advantages over Virtual Server 2005 R2 SP1:

• Support for SMP and 64-bit VMs. Virtual Server was limited to 32-bit uni-processor virtual machines.
• Support for more than 3.6 GB of RAM per VM.
• Support for mapping a logical unit number (LUN) directly to a VM.
• Increased performance from VSP/VSC architecture.
• Hyper-V management via an MMC-based interface instead of the web-based console.

However, it’s impossible for users who have only 32-bit hardware in their environment to move to Hyper-V (because it’s a feature of the 64-bit version of Windows Server 2008).

Source of Information: Sybex Windows Server 2008 Hyper-V: Insider's Guide to Microsoft's Hypervisor

Hyper-V Architecture - Virtual Machine

A VM can have two different types of devices: emulated and synthetic. Although synthetic devices are encouraged due to their superior performance, they aren’t available for all operating systems. Emulated devices are present in Hyper-V mainly for backward compatibility with nonsupported operating systems. VMs running certain distributions of Linux have synthetic device support as well. Let’s examine each type of device.


Emulated Devices
Emulated devices in Hyper-V exist primarily for backward compatibility with older operating systems. In an ideal world, all applications would run on the latest version of the operating system they were designed for, but that’s far from reality. Many companies have systems in production that run on older copies of operating systems because one of their applications doesn’t run on anything newer. An older operating system may not be supported under Hyper-V, which means it can’t take advantage of the high-performance I/O. That’s not a total loss, however: If you consolidate those older systems onto a newer Hyper-V host, the advantages of moving to a more up-to-date hardware platform can provide a performance boost.

Emulated devices have another key role during installation of the VM: at that point, the guest operating system doesn’t yet have drivers for the synthetic devices that may be configured for the VM. You must therefore use emulated devices—otherwise, the operating-system installation can’t proceed. With Hyper-V, it’s easy to move from emulated to synthetic devices after installation.

The emulated devices presented to a VM are chosen for their high degree of compatibility across a wide range of operating systems and in-box driver support. The video card is based on an S3 video card, and the network card is an Intel 21140-based Ethernet adapter.

Emulated devices under Hyper-V don’t perform as well as the new synthetic devices. Thanks to part of the work that was done to harden the entire virtualization stack, emulated devices execute in the worker process—specifically, in user mode in the parent partition.

How does I/O happen with emulated devices?

The following describes how emulated storage requests are handled; emulated networking is handled in a similar fashion. We want to point out a few specific items:

• Context switches are used. A context switch is a transition between executing in kernel mode and executing in user mode. When paired with virtualization, a context switch is an “expensive” operation. There’s no money involved, but the CPU cost of such an operation is very high; that time could be spent doing other tasks.

• The path that the data packet traverses is long, especially compared to the synthetic case (which we’ll review next).

• This path is repeated hundreds of times for even a 10-kilobyte write to disk. Imagine a large SQL transaction that involves writing hundreds of megabytes to disk, or a popular website being served up from IIS running in the VM. You can see that it won’t scale well.
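A back-of-the-envelope model makes the scaling problem concrete. The chunk size, the number of mode transitions per round trip, and the per-transition cost below are all illustrative assumptions, not measured Hyper-V figures; the point is only that emulated I/O cost grows with the number of round trips, and each round trip pays for user/kernel transitions in the parent partition.

```python
# Toy cost model for the emulated I/O path. All constants are assumptions
# chosen for illustration; only the shape of the growth matters.

SECTOR_BYTES = 512

def round_trips(write_bytes, chunk=SECTOR_BYTES):
    # Emulated IDE moves data in small chunks, trapping to the worker
    # process for each one.
    return -(-write_bytes // chunk)      # ceiling division

def emulated_cost(write_bytes, switches_per_trip=4, cost_per_switch_us=1.0):
    # Each round trip is assumed to pay for several user/kernel transitions.
    return round_trips(write_bytes) * switches_per_trip * cost_per_switch_us

# Even a modest 10 KB write takes many round trips; a transaction writing
# hundreds of megabytes multiplies that by tens of thousands.
print(round_trips(10 * 1024))            # 20 trips at 512-byte chunks
print(emulated_cost(10 * 1024))          # 80.0 microseconds of pure switching
```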


Synthetic Device Drivers
Synthetic devices provide much higher performance than their emulated counterparts. By taking advantage of VMBus, synthetic devices can execute I/O transactions at a much faster rate. Synthetic devices, such as the Microsoft Virtual Machine Bus Network Adapter, don’t have real-world counterparts. They are purely virtual devices that function only with Hyper-V—loading the drivers on a physical system does nothing. These new synthetic devices rely on VMBus.

Synthetic device drivers are available only for operating systems that are supported by Microsoft. (For reference, a list of supported operating systems for Hyper-V is available at www.microsoft.com/virtualization.) If you’re running an operating system in the VM that isn’t supported by Microsoft, you’ll need to use the emulated devices in the VM. The synthetic storage path starts out much like the emulated one, but there are a few key differences:

• In the beginning, the data path is similar to the emulated data path.

• However, the synthetic storage device in Hyper-V is a SCSI-based device—so the last driver it hits before getting put on VMBus is the StorPort driver.

• When a packet makes it to the miniport driver, it’s put on VMBus for transport to the Storage VSP in the parent partition. Because VMBus is a kernel-mode driver, no context switches are necessary.

• After the data packet crosses over to the parent partition, the correct destination is determined by the VSP, which routes the packet to the correct device. The destination is a virtual hard disk (VHD) file.
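The synthetic path above can be sketched as a shared in-memory channel between the VSC in the guest and the VSP in the parent partition. This is a toy model only: the `Channel` class stands in for a VMBus channel, and every name here is illustrative rather than Hyper-V's real interface. The key property it demonstrates is that requests cross to the parent partition by message passing, with no per-packet user/kernel context switch.

```python
from collections import deque

# Toy model of the synthetic storage path: the guest-side VSC posts requests
# onto a shared channel (standing in for VMBus), and the parent-side storage
# VSP drains it and routes each request to the backing VHD file.

class Channel:
    """Stands in for a VMBus channel: an in-memory queue shared by both ends."""
    def __init__(self):
        self.ring = deque()

    def post(self, request):
        self.ring.append(request)

    def drain(self):
        while self.ring:
            yield self.ring.popleft()

class StorageVSP:
    """Parent-partition side: routes requests to the correct backing file."""
    def __init__(self, vhd_path):
        self.vhd_path = vhd_path
        self.completed = []

    def service(self, channel):
        for req in channel.drain():
            # In real Hyper-V this becomes I/O against the VHD through the
            # hardware vendor's driver; here we just record the routing.
            self.completed.append((self.vhd_path, req))

channel = Channel()
for block in range(3):                      # VSC side: guest writes 3 blocks
    channel.post(("write", block))
vsp = StorageVSP("guest.vhd")
vsp.service(channel)
print(len(vsp.completed))                   # 3
```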


Installing Synthetic Device Drivers
It’s easy to install synthetic device drivers in the VM. After you’ve installed the operating system, select Action -> Insert Integration Services Setup Disk. An installer launches and automatically installs the drivers for you. When you reboot, the VM can take advantage of the new architecture.

A special synthetic driver deals with the boot process, providing optimized boot performance. Because the synthetic drivers rely on VMBus, you can’t boot from hard drives that are connected to the SCSI controller. All isn’t lost—during the boot process, after the VMBus driver is loaded, all the IDE boot traffic is automatically routed through the same infrastructure that is used for SCSI traffic. This means the boot process and all disk traffic (reads and writes) perform at the same accelerated speed.


Linux Device Drivers
No, that’s not a typo—certain distributions of Linux are supported under Hyper-V. Not only is the operating system supported, but a full set of device drivers also enable synthetic device support under Linux. The drivers include the Hypercall adapter—a thin piece of software that runs in the Linux VM and increases performance by translating certain instructions to a format that Hyper-V can understand.

Source of Information: Sybex Windows Server 2008 Hyper-V: Insider's Guide to Microsoft's Hypervisor

Hyper-V Architecture - Parent Partition

The installation of Windows is now running on top of the Windows hypervisor. One of the side effects of running on top of the hypervisor is that the installation is technically a VM. We’ll refer to this as the parent partition. The parent partition has two special features:

• It contains all the hardware device drivers, as well as supporting files, for the other VMs.

• It has direct access to all the hardware in the system. In conjunction with the virtualization service providers, the parent partition executes I/O requests on behalf of the VM—sending disk traffic out over a fibre channel controller, for example.

The following best practices provide a secure and stable parent partition, which is critical to the VMs running on the host:

• Don’t run any other applications or services in the parent partition. This may seem like basic knowledge for system administrators, but it’s especially crucial when you’re running multiple VMs. In addition to possibly decreasing stability, running multiple roles, features, or applications in the parent partition limits the amount of resources that can otherwise be allocated to VMs.

• Use Windows Server 2008 in the Core role as the parent partition.


Windows Hypervisor
The Windows hypervisor is the basis for Hyper-V. At its heart, the hypervisor has only a few simple tasks: creating and tearing down partitions (a partition is the basis for a VM) and ensuring strong separation between those partitions. It doesn’t sound like much, but the hypervisor is one of the most critical portions of Hyper-V. That’s why development of the hypervisor followed the Microsoft Security Development Lifecycle process so closely—if the hypervisor is compromised, the entire system can be taken over, because the hypervisor runs in the most privileged mode offered by the x86 architecture. One of Microsoft’s design goals was to make the hypervisor as small as possible. Doing so offered two advantages:

• The Trusted Computing Base (TCB) is smaller. The TCB is the sum of all the parts of the system that are critical to security. Ensuring that the hypervisor is small reduces its potential attack vectors.

• The hypervisor imparts less overhead on the system. Because all VMs (as well as the parent partition) are running on top of the hypervisor, performance becomes a concern. The goal is to minimize the hypervisor’s overhead.


Kernel-Mode Drivers
A Windows kernel-mode driver is one of two types of drivers in Windows. Kernel-mode drivers execute in Ring 0. Because this type of driver is executing in kernel mode, it’s crucial that these drivers be as secure as possible: An insecure driver, or a crash in the driver, can compromise the entire system.
Hyper-V adds two kernel-mode drivers:

VMBus. VMBus is a high-speed in-memory bus that was developed for Hyper-V. VMBus acts as the bus for all I/O traffic that takes place between the VMs and the parent partition. VMBus works closely with the virtualization service provider and virtualization service client.

Virtualization Service Provider (VSP). The Virtualization Service Provider (VSP) enables VMs to securely share the underlying physical hardware by initiating I/O on behalf of all VMs running on the system. It works in conjunction with the hardware vendor drivers in the parent partition—which means that no special “virtualization” drivers are necessary. If a driver is certified for Windows Server 2008, it should work as expected with Hyper-V. Each class of device has a VSP present—for example, a default installation of Hyper-V has a networking VSP as well as a storage VSP. The VSPs communicate with the matching Virtualization Service Client (VSC) that runs in the VM over VMBus. We’ll cover the VSC when we look at the different types of VMs.
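The VSP arrangement just described (one provider per device class, each paired with a matching VSC over VMBus) can be sketched as a simple dispatch table. This is an illustrative model under assumed names, not Hyper-V's actual driver interface:

```python
# Toy model of per-device-class VSPs: VMBus routes each incoming request to
# the provider registered for its device class. All names are illustrative.

class VSP:
    def __init__(self, device_class):
        self.device_class = device_class
        self.handled = []

    def handle(self, request):
        # A real VSP would initiate I/O through the vendor driver here.
        self.handled.append(request)

class VMBus:
    """Routes each request to the VSP registered for its device class."""
    def __init__(self):
        self.vsps = {}

    def register(self, vsp):
        self.vsps[vsp.device_class] = vsp

    def send(self, device_class, request):
        self.vsps[device_class].handle(request)

bus = VMBus()
storage, network = VSP("storage"), VSP("network")
bus.register(storage)       # a default install has a storage VSP...
bus.register(network)       # ...and a networking VSP
bus.send("storage", "write block 7")    # from the storage VSC in a VM
bus.send("network", "tx frame")         # from the network VSC
print(storage.handled, network.handled)
```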


User-Mode Applications
User-mode applications are, strangely enough, applications that run in user mode. They execute in Ring 3, which is where all unprivileged instructions are run. Many of the applications that run in Windows are user-mode applications—for example, the copy of Notepad that you use to look at a text file is executing in user mode.
Hyper-V has a number of different user-mode applications:

Virtual Machine Management Service (VMMS). The VMMS acts as the single point of interaction for all incoming management requests. It interacts with a number of processes, two of which we’ll refer to here.

WMI providers. Hyper-V has a rich set of WMI interfaces. They provide a way to manage the state and health of the VMs as well as get settings information and some performance information. All the WMI interfaces are fully documented on http://msdn.microsoft.com.

Worker processes. When a VM is started up, a worker process is created. The worker process represents the actions that are taking place in the virtual processor, as well as all emulated devices and the virtual motherboard. Each VM that is running on a host has a worker process.

Now that we’ve shown you what’s happening in the parent partition, let’s look at the VMs. After you create a VM and power it on, you can install a wide variety of x86/x64-based operating systems. Even though these are VMs, they can run the same operating systems as a physical computer, without modification. However, operating systems that are supported by Microsoft also include new synthetic drivers, which work in conjunction with the matching VSP running in the parent partition.


Source of Information: Sybex Windows Server 2008 Hyper-V: Insider's Guide to Microsoft's Hypervisor

Scenarios for Hyper-V

Hyper-V was developed with several key scenarios in mind. When Microsoft started developing Hyper-V, the development team spent a great deal of time meeting with customers who were using virtualization—small businesses, consultants who implement virtualization on behalf of their customers, and large companies with multimillion-dollar IT budgets. The following key scenarios were developed as a result of those meetings; they represent customer needs, demands, and wants.


Server Consolidation
Systems are becoming increasingly powerful. A couple of years ago, it was rare to find a quad-processor server at a price most customers could afford. Now, with major processor manufacturers providing multicore functionality, servers have more and more processing power. Multicore technology combines multiple processor cores onto a single die—enabling a single physical processor to run multiple threads of execution on separate cores. Virtualization and multicore technology work great together: If you’re combining multiple workloads onto a single server, you need to have as much processing power as possible. Multicore processors help provide the optimal platform for virtualization.

Businesses are increasingly likely to need multiple systems for a particular workload. Some workloads are incredibly complex, requiring multiple systems but not necessarily using all the power of the hardware. By taking advantage of virtualization, system administrators can provide a virtualized solution that better utilizes the host hardware—thus allowing administrators to get more out of their expenditure.

Workloads aren’t the only driving item behind virtualization. The power and cooling requirements of modern servers are also key driving factors. A fully loaded rack of servers can put out a significant amount of heat. (If you’ve ever stood behind one, you’re sure to agree—it’s a great place to warm up if you’ve been working in a cold server room.) All that heat has to come from somewhere. The rack requires significant power.

But for companies in high-rise buildings in the middle of major cities, getting additional power is incredibly difficult, if not impossible. In many cases, the buildings weren’t designed to have that much power coming in—and the companies can’t add more power without extensive retrofitting. By deploying virtualization, more workloads can be run on the same number of servers.


Testing and Development
For people working in a test or development role, virtualization is key to being more productive. The ability to have a number of different virtual machines (VMs), each with its own operating system that’s ready to go at the click of a mouse, is a huge time-saver: simply start up whichever VM has the operating system you need, with no need to install a clean operating system first. Also, by using the snapshot functionality, users can quickly move between known states in the VM.

With Hyper-V’s rich Windows Management Instrumentation (WMI) interfaces, testing can be started automatically. By scripting both Hyper-V and the operating system to be tested, testers can run a script that starts the VM, installs the latest build, and performs the necessary tests against it. A Hyper-V virtual machine is also portable. A tester can work in the VM; if an issue is found, the tester can save the state of the VM (including the memory contents and processor state) and transfer it to the developer, who can restore the state at their convenience. Because the state of the VM is saved, the developer sees exactly what the tester saw.


Business Continuity and Disaster Recovery
Business continuity is the ability to keep mission-critical infrastructure up and running. Hyper-V provides two important features that enable business continuity: live backup and quick migration. Live backup uses Microsoft Volume Shadow Copy Service (VSS) functionality to make a backup of the entire system without incurring any downtime, as well as provide a backup of the VM at a known good point in time. The system backup includes the state of all the running VMs. When a backup request comes from the host, Hyper-V is notified, and all the VMs running on that host are placed into a state where they can be backed up without affecting current activity; they can then be restored at a later time.

Quick migration is the ability to move a VM from one host to another in a cluster using Microsoft Failover Cluster functionality. During a quick migration, you save the state of the VM, move storage connectivity from the source host to the target host, and then restore the state of the VM. Windows Server 2008 added support for the virtual-machine resource type to the Failover Clustering tool, enabling you to make a VM highly available using functionality included with the operating system.
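The three quick-migration steps above (save state, move storage connectivity, restore state) can be sketched as an explicit sequence. The function and the dictionaries below are purely illustrative, not the Failover Clustering API; the brief save/restore gap in the middle is what makes this "quick" rather than live migration.

```python
# Illustrative quick-migration sequence. Hosts and the VM are modeled as
# plain dicts; real Hyper-V does this through Failover Clustering.

def quick_migrate(vm, source, target):
    saved = {"memory": vm["memory"], "cpu": vm["cpu"]}   # 1. save VM state
    source["vms"].remove(vm["name"])                     #    VM is offline here
    target["storage"] = source.pop("storage")            # 2. move storage (LUN)
    vm.update(saved)                                     # 3. restore state
    target["vms"].append(vm["name"])                     #    VM resumes on target
    return target

vm = {"name": "sql01", "memory": "4GB", "cpu": "2vp"}
source = {"vms": ["sql01"], "storage": "lun-12"}
target = {"vms": []}
quick_migrate(vm, source, target)
print(target)   # {'vms': ['sql01'], 'storage': 'lun-12'}
```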

Disaster recovery is becoming a requirement for increasing numbers of businesses. With natural disasters such as Hurricane Katrina fresh in the minds of system administrators, enterprises are seeking ways to keep their businesses running throughout such events. You must consider more than just big disasters, though—small disasters or even simple configuration issues can lead to a mission-critical service being unavailable. Hyper-V includes support for geographically dispersed clusters (a new feature of Windows Server 2008).


Dynamic IT
Microsoft’s idea of a dynamic IT infrastructure involves self-managing dynamic systems—systems that adjust automatically to the workload they’re running. By using Hyper-V in conjunction with the systems-management functionality present in the System Center family of products, enterprises can take advantage of the benefits of virtualization to meet the demands of a rapidly changing environment.

Now that we’ve covered Hyper-V’s key targeted scenarios, let’s review the architecture of Hyper-V to see how Microsoft has implemented support for them.

Source of Information: Sybex Windows Server 2008 Hyper-V: Insider's Guide to Microsoft's Hypervisor

Microsoft’s Approach to Virtualization

Some software companies address virtualization from a single direction. VMware, for example, focuses on virtualizing and managing operating-system instances. Microsoft has been more thoughtful and less myopic in its approach, articulating a virtualization direction that spans five key areas:

• Server—Hyper-V and Virtual Server 2005 for server services
• Desktop—Virtual PC for client-centric, local operating-system instances
• Presentation—Terminal Services providing remote desktop and application access
• Application—SoftGrid/App-V for application encapsulation
• Profile—Roaming profiles for personal-experience encapsulation

All these approaches are tied together by Windows as a platform and managed by the System Center family of products to enable administration of virtual and physical resources. You can benefit from this multipronged approach to virtualization, which is unified by a common platform and management suite.


It’s All Windows
The great thing about virtualization technology from Microsoft is that it’s integrated with Windows. Windows is a platform well known to administrators and users alike. You don’t need special training to use Microsoft’s virtualization offerings because they’re already familiar. You don’t need to be a virtualization specialist to use Hyper-V, Terminal Services, or App-V (as you might with VMware). You can have virtualization as a competency, just as you might with other focus areas of Windows administration.


System Center Manages All Worlds Well
You manage and monitor each of these virtualization offerings with the same System Center tools that you may already have in your environment for physical system management. Some virtualization-management tools only provide insight into the virtualization layer and can’t dive further into running operating systems or applications (they’re essentially half blind). Using a unified, familiar tool set that can correlate data between physical, virtual, and application software can magnify the benefits of virtualization.


Mixing and Matching with Virtualization
You can use these separate directions of virtualization together with the others to provide more value. You can combine the different focuses of virtualization—server, desktop, presentation, application, and profile—to meet the needs and requirements of changing enterprises. Why not rapidly provision Hyper-V–based virtual machines for thin-client access to meet dynamic demands? How about combining AppV with Terminal Services to alleviate application coexistence issues and reduce server count?

Source of Information: Sybex Windows Server 2008 Hyper-V: Insider's Guide to Microsoft's Hypervisor

Advancing Microsoft’s Strategy for Virtualization

Microsoft is leading the effort to improve system functionality, making it more self-managing and dynamic. With the release of Windows Server 2008 and Hyper-V, Microsoft’s main goal for virtualization is to give administrators more control of their IT systems, including restore times that are head and shoulders above the competition. Windows Server 2008 provides a total package of complementary virtualization products that range in functionality from desktop usage to datacenter hubs. One of the major goals is the ability to manage all IT assets, both physical and virtual, from a single remote machine. Microsoft also aims to cut IT costs with its virtualization programs, helping customers take advantage of its products’ interoperability features and of data-center consolidation.

This also includes energy efficiency: using fewer physical machines reduces energy consumption in the data center and helps save money over the long term. By contributing to and investing in the areas of management, applications, and licensing, Microsoft hopes to succeed in this effort.

Windows Server 2008 was designed with many of these goals in mind, providing a number of important assets to administrators. The implementation of Hyper-V for virtualization allows for quick migration and high availability, providing solutions for scheduled and unscheduled downtime and the possibility of improved restore times. Virtual storage is supported for up to four virtual SCSI controllers per virtual machine, allowing for ample storage capacity. Hyper-V allows for the import and export of virtual machines and is integrated with Windows Server Manager for greater usability.

In the past, compatibility was always a concern. The emulated video card has now been updated to a more universal VESA-compatible card, which resolves video problems that previously caused incompatibility with operating systems such as Linux. In addition, Windows Server 2008 includes integration components (ICs) for virtual machines; when you install Windows Server 2008 as a guest in a Hyper-V virtual machine, Windows installs the ICs automatically. There is also support for running Hyper-V with Server Core in the parent partition, allowing for easier configuration. These changes, along with numerous fixes for performance, scalability, and compatibility, make the end goal for Hyper-V a transparent end-user experience.


Windows Hypervisor
With Windows Server 2008, Microsoft introduced a next-generation hypervisor virtualization platform. Hyper-V, formerly codenamed Viridian, is one of the noteworthy new features of Windows Server 2008. It offers a scalable and highly available virtualization platform that is efficient and reliable, with an inherently secure architecture. Combined with a minimal attack surface (especially when using Server Core) and the fact that it contains no third-party device drivers, this makes it extremely secure. It is expected to be the best operating-system platform for virtualization released to date.

Compared to its predecessors, Hyper-V provides more options for specific needs because it utilizes more powerful 64-bit hardware and 64-bit operating systems. The 64-bit environment brings additional processing power and a large addressable memory space, and no outside applications are required, which increases overall compatibility across the board.

Hyper-V has three main components: the hypervisor, the virtualization stack, and the new virtualized I/O model. The hypervisor (also known as the virtual machine monitor, or VMM) is a very small layer of software that interacts directly with the processor and creates the different “partitions” that each virtualized instance of the operating system runs within. The virtualization stack and the I/O components act as a go-between for Windows and all the partitions that you create. Hyper-V’s virtualization advancements will only further assist administrators in responding more quickly and easily to emergency deployment needs.

Source of Information: Syngress The Best Damn Windows Server 2008

Understanding the Administration of Virtual Guest Sessions

One question that comes up frequently from administrators implementing virtual environments for the first time is how one administers a virtual server. For years, we have just walked up to a server that has a keyboard, mouse, and monitor and worked on “that system.” Having a different mouse, keyboard, and monitor for each system is simple; we know which devices go to which server that is running a specific application. With virtualization, however, guest sessions do not have their own mouse, keyboard, or monitor. So, how do you administer the system?

Many organizations have already been working off of centralized mice, keyboards, and monitors by using switchboxes that allow 4, 8, 16, or more servers to all plug into a single physical mouse, keyboard, and monitor. Simply by pushing a button on the switchbox, or using a command sequence, the administrator “toggles” between the servers. Administration of virtual servers works the exact same way. An administrator utility is loaded, and that utility enables administrators to open multiple virtual server sessions on their screen. Various tools and strategies, including the following, enable you to administer virtual systems:

• Using the Hyper-V Administration tool
• Using the System Center VMM tool
• Using Terminal Services for remote administration

The various administration options provide different levels of support to the management of the virtual guest sessions on Hyper-V.


Management Using the Hyper-V Administration Tool
The built-in Hyper-V Administration tool provides basic functions such as starting and stopping guest images, pausing guest images, forcing a shutdown of guest images, immediately turning off guest images, and the ability to snapshot images for a configuration state at a given time.

In most environments, the administrator sets a guest image to start automatically as soon as the host server itself has started. That way, if the server is rebooted, the appropriate guest images are also started (much as if a physical server lost power and rebooted when the power came back on).

For images that have been set to be off after the host server reboot, those images can be manually started from the Hyper-V Administration tool. The manual start of images is common for servers that are hosting test images, images used for demonstration purposes, and copies of images that can be manually started when a specific server is required (that is, cold standby server startup).


Management Using the Virtual Machine Manager 2008 Tool
Organizations that want more than just the starting and stopping of guest images should consider buying and implementing the System Center Virtual Machine Manager 2008 (VMM) tool. VMM reports whether a guest image has been started, and it provides more information than the built-in Hyper-V Administration tool about how much memory and disk space each image is consuming on the host server. The VMM 2008 tool has several wizards and functions that allow an administrator to capture physical server information and bring the server configuration into a virtual image. VMM 2008 can also extract an image from another virtual server and bring that information into a new Hyper-V guest image.

Another feature built in to VMM 2008 is the ability to create a library where template images, ISO application images, snapshot libraries, and the like are stored. With a centralized library, administrators have at their fingertips the images, tools, and resources to build new images, to recover from failed images, and to deploy new images more easily. In addition, VMM 2008 provides delegation and provisioning capabilities so that administrators can issue rights to other users to self-provision and self-manage specific images without depending on the IT department to manage images or manually build out configurations.


Management Using Thin Client Terminal Services
Aside from using the centralized Hyper-V Administration tool to manage guest images, administrators can still use Terminal Services to remotely administer servers on the network, whether they are physical servers or images running as virtual sessions in a Hyper-V environment.

An administrator may choose to gain remote access to the Hyper-V host server and then control all the guest images on that host, or the administrator can gain remote access to each guest session one by one. The latter, individually administering a remote system, is a good solution for an individual who needs access to a single server or a limited number of servers, such as a web administrator or a database administrator.

Source of Information: Sams - Windows Server 2008 Hyper-V Unleashed 08

Migrating from Microsoft Virtual Server 2005 and VMware to Hyper-V

Many organizations have already implemented Microsoft Virtual Server 2005 or VMware in their environments and wonder what it takes to migrate to Hyper-V, or whether there are ways to centrally manage and administer a dual virtualization platform environment. The simple answer is that there are several ways to migrate, integrate, and support Virtual Server 2005 and VMware images in a Hyper-V environment.


Mounting Existing Virtual Guest Images on Hyper-V
Hyper-V can mount and run both Microsoft Virtual Server 2005 VHD images and VMware guest images directly as guests within Hyper-V. Because Hyper-V can mount other virtual images, some organizations use Hyper-V as a disaster recovery host for existing images. In such a scenario, if a Virtual Server 2005 or VMware server fails, the images can be copied over to Hyper-V, which can easily boot and mount the images onto the network backbone.

However, despite Hyper-V’s ability to mount VMware images natively, a VMware image does not have the same administration, management, snapshot, backup, and support capabilities as a native Hyper-V guest image. So the long-term plan should be to migrate images from VMware to Hyper-V using a virtual-to-virtual image migration tool such as the one available in System Center Virtual Machine Manager 2008. When you migrate a VMware image to a native Hyper-V image, all the capabilities built in to support Hyper-V images are available.

Virtual Server 2005 images mounted on Hyper-V work fine as long as you install the Hyper-V integration tools onto the image, which update the drivers of the image itself.


Performing a Virtual to Virtual Migration of Guest Images
A strategy for migrating older images to Hyper-V is to perform a virtual-to-virtual image migration. Via VMM, an administrator can select a running virtual machine (on VMware, XenServer, Virtual Server 2005, or the like) and choose to migrate the image to Hyper-V. This process extracts all the pertinent server image information, applications, data, Registry settings, user settings, and the like and moves the information over to a target Hyper-V host server. Once migrated, the Hyper-V integration tools can be installed, and the image is then ready to be supported by Hyper-V or VMM.


Using VMM to Manage VMware Infrastructure 3
For organizations that have a fairly substantial investment in VMware and the VMware Infrastructure 3 (VI3) management environment, Microsoft System Center VMM has a built-in configuration setting, shown in Figure 1.8, that allows for the support, monitoring, and consolidation of information between VI3 and VMM. This integration between management tools is vital for organizations that want to keep both the VMware and a new Hyper-V environment running in parallel, and for organizations that are migrating to Hyper-V but still want to have integrated support for the old VMware environment while the migration process is performed.


Windows Server 2008 Hyper-V New Features - Provide Better Reliability Capabilities

Another critical area of improvement in Hyper-V is its support for capabilities that improve the reliability and recoverability of the Hyper-V host and guest environments. The technologies added to Windows 2008 and Hyper-V include clustering technologies as well as server snapshot technologies.

Clustering is supported on Hyper-V for both host clustering and guest clustering. These clustering capabilities allow redundancy at both the host server level and the Hyper-V guest level, greatly improving the uptime that can be achieved for applications.

Another capability added to Hyper-V for better reliability is the ability to take snapshots of virtual guest sessions. A snapshot retains the state of a guest image so that an administrator can, at any time, roll back to the state of the image at the moment of the snapshot. This capability is frequently used to take a snapshot before a patch or update is applied so that the organization can, if need be, quickly and easily roll back to that image. Snapshots are also used for general recovery purposes. If a database becomes corrupted or an image no longer works, the network administrator can roll back the image to a point before the corruption or system problems started to occur.
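The snapshot-then-rollback workflow described above can be modeled with a short sketch. This is a conceptual illustration only, not the Hyper-V API; the GuestImage class and its state dictionary are hypothetical stand-ins for a guest's retained configuration state:

```python
import copy

class GuestImage:
    """Hypothetical model of a guest image whose state can be snapshotted."""
    def __init__(self, name):
        self.name = name
        self.state = {"patches": [], "healthy": True}
        self._snapshots = []

    def snapshot(self):
        # Retain a full copy of the current state so it can be restored later.
        self._snapshots.append(copy.deepcopy(self.state))

    def apply_patch(self, patch):
        self.state["patches"].append(patch)

    def rollback(self):
        # Restore the most recent snapshot, discarding changes made since.
        self.state = self._snapshots.pop()

vm = GuestImage("sql-server")
vm.snapshot()                  # take a snapshot before patching
vm.apply_patch("KB999999")     # the patch turns out to be problematic
vm.state["healthy"] = False
vm.rollback()                  # roll back to the pre-patch state
print(vm.state)                # {'patches': [], 'healthy': True}
```

The key point the model captures is that the snapshot is taken *before* the risky change, so the rollback discards the patch and the corruption together.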


Windows Server 2008 Hyper-V New Features - Provide Better Administration and Guest Support

Better Administration Support. Hyper-V guest sessions can be administered with two separate tools. One tool, the Hyper-V Administration tool, comes free out of the box with Windows Server 2008. The other tool, System Center VMM, can be purchased separately. Some overlap exists between what the Hyper-V Administration tool and the VMM tool do. For the most part, however, the built-in tool enables you to start and stop guest sessions and to take snapshots of the sessions for image backup and recovery. The VMM tool provides all those capabilities, too, but it also enables an administrator to organize images across different administrative groups. In addition, the VMM tool allows for the creation and management of template images for faster and easier image provisioning, provides a way to create a virtual image from existing physical or running virtual sessions, and provides clustering of virtual images across multiple VMM-managed host servers.


Better Guest Support. Hyper-V added several new features that provide better support for guest sessions, such as 64-bit guest support, support for non-Windows guest sessions, and support for dedicated processors in guest sessions.

Hyper-V added the ability to support not only 32-bit guest sessions, as earlier versions of Microsoft’s Virtual Server 2005 product provided, but also 64-bit guest sessions. This improvement allows guest sessions to run some of the latest 64-bit-only application software from Microsoft and other vendors, such as Exchange Server 2007. And although some applications will run in either 32-bit or 64-bit versions, for organizations looking for faster information processing, or support for more than 4GB of RAM, the 64-bit guest session provides the same capabilities as if the organization were running the application on a dedicated physical 64-bit server system.

With Hyper-V, you can also dedicate one, two, or four processor cores to a virtual guest session. Instead of aggregating the performance of all the Hyper-V host server’s processors and dividing the processing performance among the guest images somewhat equally, an administrator can dedicate processors to guest images to ensure higher performance for the guest session. With hardware supporting two or four quad-core processors in a single server system, there are plenty of processors in servers these days to appropriately allocate processing capacity to the server guests that require more performance.
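As a back-of-the-envelope sanity check, the core-dedication arithmetic works out as follows. This is a simple sketch; the guest names and per-guest core counts are made up for illustration:

```python
# Two quad-core processors give the host 8 physical cores.
host_cores = 2 * 4

# Hypothetical per-guest core dedications (one, two, or four cores each).
dedications = {"exchange": 4, "sql": 2, "web": 1, "dns": 1}

used = sum(dedications.values())
print(f"{used} of {host_cores} cores dedicated")  # 8 of 8 cores dedicated
assert used <= host_cores, "cannot dedicate more cores than the host provides"
```

Planning the dedications up front this way avoids promising more physical cores to guests than the host actually has.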

Support for non-Windows guests, such as Linux, is an indication that Microsoft is serious about providing multiplatform support on its Hyper-V host servers. Not only are Linux servers supported as guest sessions on Hyper-V, but Microsoft has also developed integration tools to better support Linux guest integration into a managed Hyper-V host environment.


Windows Server 2008 Hyper-V New Features - Provide Better Virtual Host Capabilities

The broadest improvements Microsoft made to the virtual host capabilities of Hyper-V are the core functions added to Windows Server 2008 that relate to security, performance, and reliability. However, the addition of a new virtual switch capability in Hyper-V provides greater flexibility in managing network communications among guest images, and between guest images and an organization’s internetworking infrastructure.

Effectively, Windows Server 2008 and Hyper-V leverage the built-in capabilities of Windows 2008 along with specific Hyper-V components to improve the overall support, administration, management, and operations of a Hyper-V host server. When a Hyper-V host server is joined to a Microsoft Active Directory environment, the host server can be managed and administered just like any other application server in the Active Directory environment. Security is centralized and managed through the use of Active Directory organizational units, groups, and user administrators. Monitoring of the Hyper-V host server and its guest sessions is done through the same tools organizations use to monitor and manage their existing Windows server systems.

Security policies, patch management policies, backup procedures, and the corresponding tools and utilities used to support other Windows server systems can be used to support the Hyper-V host server system. The Hyper-V host server becomes just another managed Windows server on the network.

Also important is the requirement that the Hyper-V host server run on a 64-bit system, not only to take advantage of hardware-assisted virtualization processors such as the AMD64 and Intel IA-32E and EM64T (x64), but also to provide more memory in the host server to distribute among guest sessions. Because a 32-bit host server was limited to about 4GB of RAM, there weren’t many ways to divide that memory among guest sessions in which guests could run any business application. With 64-bit host servers supporting 8GB, 16GB, 32GB, or more, however, guest sessions can easily take 4GB or 8GB of memory each and still leave room for other guest sessions, tasks, and functions.
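The memory arithmetic in the paragraph above can be made concrete with a small sketch. The 1GB host reserve is an assumed figure for illustration, not a Hyper-V requirement:

```python
def guests_that_fit(host_ram_gb, guest_ram_gb, host_reserve_gb=1):
    """How many equally sized guests fit once the host reserves some RAM."""
    return (host_ram_gb - host_reserve_gb) // guest_ram_gb

# A 32-bit host capped at about 4GB leaves room for essentially no 4GB guests...
print(guests_that_fit(4, 4))    # 0
# ...while a 64-bit host with 32GB can comfortably run several 4GB guests.
print(guests_that_fit(32, 4))   # 7
```

The same calculation scales up directly: doubling host RAM roughly doubles the number of equally sized guest sessions the host can carry.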

Unlike multiple physical servers that might be connected to different network switches, the guest sessions on a Hyper-V host all reside within a single server. Therefore, the virtual switch capability built in to the Hyper-V Administration tool enables the Hyper-V administrator to create special network segments and associate virtual guest sessions to specific network adapters in the host server to ensure that virtual guests can be connected to network segments that meet the needs of the organization.


Choosing to Virtualize Servers

“Virtualization as an IT Organization Strategy” identified basic reasons why organizations choose to virtualize their physical servers into virtual guest sessions. However, organizations also benefit from server virtualization in several other areas. Organizations can use virtualization in test and development environments, to minimize the number of physical servers in an environment, and to leverage the capabilities of simplified virtual server images in high-availability and disaster recovery scenarios.


Virtualization for Test and Development Environments
Server virtualization got its start in the test and development environments of IT organizations. The simplicity of adding a single host server and loading up multiple guest virtual sessions to test applications or develop multiserver scenarios, without having to buy and manage multiple physical servers, was extremely attractive. Today, with physical servers offering 4, 8, or 16 processor cores in a single system and significant performance capacity, organizations can host dozens of test and development virtual server sessions by setting up just one or two host servers.

With administrative tools built in to the virtual server host systems, the guest sessions can be connected together or completely isolated from one another, providing virtual local area networks (LANs) that simulate a production environment. In addition, an administrator can create a single base virtual image with, for example, Windows Server 2003 Enterprise Edition on it, and can save that base image as a template. To create a “new server” whenever desired, the administrator just has to make a duplicate copy of the base template image and boot that new image. Creating a server system takes 5 minutes in a virtual environment. In the past, the administrator would have to acquire hardware, configure the hardware, shove in the Windows Server CD, and wait 20 to 30 minutes before the base configuration was installed. And then after the base configuration was installed, it was usually another 30 to 60 minutes to download and install the latest service packs and patches before the system was ready.
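The duplicate-the-template workflow above boils down to a file copy plus a boot step. A minimal sketch follows; the function name, file paths, and the boot placeholder are all hypothetical illustrations, not a Hyper-V interface:

```python
import shutil
import tempfile
from pathlib import Path

def provision_from_template(template_vhd, new_name, target_dir):
    """Create a 'new server' by duplicating a base template image."""
    new_vhd = Path(target_dir) / f"{new_name}.vhd"
    shutil.copyfile(template_vhd, new_vhd)  # duplicate the base image
    # In a real environment, the new image would now be booted as a guest.
    return new_vhd

# Demonstration against a throwaway stand-in for a template file.
with tempfile.TemporaryDirectory() as tmp:
    template = Path(tmp) / "w2k3-base.vhd"
    template.write_bytes(b"base image contents")
    clone = provision_from_template(template, "new-server-01", tmp)
    print(clone.read_bytes() == template.read_bytes())  # True
```

Because the template is never modified, any number of identical guests can be stamped out from the same base image.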

With the addition of provisioning tools such as Microsoft System Center Virtual Machine Manager 2008 (VMM), described in “Using Virtual Machine Manager 2008 for Provisioning,” the process of creating new guest images from templates, along with the ability to delegate the provisioning process to others, greatly simplifies making virtual guest sessions available for test and development purposes.


Virtualization for Server Consolidation
Another common use of server virtualization is consolidating physical servers. Organizations that have undertaken concerted server consolidation efforts have been able to decrease the number of physical servers by upward of 60% to 80%. It’s usually very simple for an organization to decrease the number of physical servers by at least 25% to 35% simply by identifying low-usage, single-task systems.
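The reduction figures quoted above translate into server counts as follows (illustrative numbers only; the 100-server starting point is made up):

```python
def servers_after_consolidation(physical_servers, reduction_pct):
    """Physical servers remaining after a given percentage reduction."""
    return physical_servers - round(physical_servers * reduction_pct / 100)

print(servers_after_consolidation(100, 25))  # 75 remain: the easy, low-usage wins
print(servers_after_consolidation(100, 80))  # 20 remain: a concerted effort
```

Even the modest 25% figure eliminates the purchase, power, and maintenance costs of a quarter of the fleet.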

Servers such as domain controllers, Dynamic Host Configuration Protocol (DHCP) servers, web servers, and the like are prime candidates for virtualization because they typically run on simple “pizza box” servers (thin, 1-unit-high rack-mounted systems).

Beyond just taking physical servers and doing a one-for-one replacement as virtual servers, many organizations are realizing they simply have too many servers doing the same thing, underutilized because of a lack of demand or capacity. The excess capacity may have been projected based on organizational growth expectations that never materialized or that have since been reduced due to organizational consolidation.

Server consolidation also means that organizations can now consolidate their sites and data centers into fewer, centralized data centers. When wide area network (WAN) connections were extremely expensive and not completely reliable, organizations distributed servers to branch offices and remote locations. Today, however, the need for a fully distributed data environment has greatly diminished because the cost of Internet connectivity has decreased, WAN performance has increased, WAN reliability has drastically improved, and applications now support full-featured, robust web capabilities.

Don’t think of server consolidation as just taking every physical server and making it a virtual server. Instead, spend a few moments thinking about how to decrease the number of physical (and virtual) systems in general, and then virtualize only the number of systems required. Because it is easy to provision a new virtual server, if additional capacity is required, it doesn’t take long to spin up a new virtual server image to meet the demands of the organization. This ease contrasts starkly with past requirements: purchasing hardware and spending the better part of a day configuring it and installing the base Windows operating system on the physical system.


Virtualization as a Strategy for Disaster Recovery and High Availability
Most organizations realize a positive spillover effect from virtualizing their environments: They create higher availability and enhance their disaster recovery potential, and thus fulfill other IT initiatives. Disaster recovery and business continuity are on the minds of most IT professionals: effectively, how to quickly bring servers and systems back online in the event of a server failure or a disaster (natural or otherwise). Without virtualization, disaster recovery plans generally require the addition of even more servers, to a physical data center perhaps already bloated with too many of them, to create redundancy both in the data center and in a remote location.

Virtualization has greatly improved organizations’ ability to actually implement a disaster recovery plan. As physical servers are virtualized and the organization begins to decrease its physical server count by 25%, 50%, or more, it can repurpose the spare systems as redundant servers or as hosts for redundant virtual images, both within the data center and in remote locations for redundant data sites. Many organizations have found that their server consolidation effort appears negated because, even though they virtualized half their servers, they went back and added twice as many servers to gain redundancy and fault tolerance. The net of the effort, however, is that the organization has been able to put disaster recovery in place without adding physical servers to the network.

After virtualizing servers as guest images, organizations are finding that a virtualized image is very simple to replicate; after all, it’s typically nothing more than a single file sitting on a server. In its simplest form, an organization can just “pause” the guest session temporarily, “copy” the virtual guest session image, and then “resume” the guest session to bring it back online. The copy of the image has all the information of the server. The image can be used to re-create a scenario in a test lab environment, or it can be saved so that, if the primary image fails, the copy can be booted to bring the server immediately back up and running. There are more elegant ways to replicate an image file; however, the ability of an IT department to bring up a failed server, within a data center or remotely, has been greatly simplified through virtualization technologies.
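The pause, copy, resume sequence can be sketched as follows. This is a conceptual illustration only: the GuestSession class is hypothetical, and real environments use more elegant replication tools rather than a plain file copy:

```python
import shutil
import tempfile
from pathlib import Path

class GuestSession:
    """Hypothetical guest session backed by a single image file."""
    def __init__(self, image_path):
        self.image_path = Path(image_path)
        self.running = True

    def pause(self):
        self.running = False  # quiesce the guest so the file is consistent

    def resume(self):
        self.running = True

    def replicate(self, dest):
        """Pause, copy the image file, then resume the session."""
        self.pause()
        try:
            shutil.copyfile(self.image_path, dest)  # whole server = one file
        finally:
            self.resume()
        return Path(dest)

# Demonstration against a throwaway stand-in for an image file.
with tempfile.TemporaryDirectory() as tmp:
    image = Path(tmp) / "server.vhd"
    image.write_bytes(b"guest image contents")
    vm = GuestSession(image)
    backup = vm.replicate(Path(tmp) / "server-copy.vhd")
    print(vm.running, backup.read_bytes() == image.read_bytes())  # True True
```

The try/finally matters: the guest must be resumed even if the copy fails, so a replication error never leaves the server offline.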


What Is Server Virtualization and Microsoft Hyper-V?

Hyper-V is a long-awaited technology, anticipated to help Microsoft leap past rival virtual server technologies such as VMware and XenServer. Although Microsoft has had a virtual server technology for a few years, its features and capabilities have always lagged behind those of its competitors. Windows Server 2008 was written to provide enhanced virtualization technologies through a rewrite of the Windows kernel itself to support virtual server capabilities equal to, if not better than, other options on the market. This section introduces the Hyper-V server role in Windows Server 2008 and provides best practices that organizations can follow to leverage the capabilities of server virtualization to lower costs and improve the manageability of an organization’s network server environment.

Server virtualization is the ability for a single system to host multiple guest operating system sessions, effectively taking advantage of the processing capabilities of very powerful servers. Most servers in data centers run under 5% to 10% processor utilization, meaning that excess capacity on the servers goes unused. By combining the workloads of multiple servers onto a single system, an organization can better utilize the processing power available in its networking environment.
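The 5% to 10% utilization figure suggests a rough consolidation ratio. This is back-of-the-envelope math; the 70% target utilization is an assumed planning ceiling, not a figure from the source:

```python
def consolidation_ratio(avg_utilization_pct, target_utilization_pct=70):
    """How many lightly loaded servers one host can absorb, CPU-wise."""
    return int(target_utilization_pct // avg_utilization_pct)

print(consolidation_ratio(5))   # 14 guests per host
print(consolidation_ratio(10))  # 7 guests per host
```

CPU is only one dimension, of course; in practice the memory, disk, and network demands of the combined workloads constrain the ratio as well.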

Hyper-V enables an organization to consolidate several physical server systems into a single host server while still providing isolation between virtual guest session application operations. With an interest in decreasing the cost of managing their information technology (IT) infrastructure, organizations are virtualizing servers. Bringing multiple physical servers into a single host server decreases the cost of purchasing and maintaining multiple physical server systems, decreases the cost of electricity and the air-cooling systems needed to maintain the physical servers, and enables an organization to go “green” (by decreasing the use of natural resources in the operation of physical server systems).

Source of Information : Sams - Windows Server 2008 Hyper-V Unleashed
