Defining the Lightweight Directory Access Protocol (LDAP)

The directory service protocol used by AD DS is based on the Internet-standard Lightweight Directory Access Protocol defined in RFC 3377. LDAP allows queries and updates to take place in AD DS. Objects in an LDAP-compliant directory must be uniquely identified by a naming path to the object. These naming paths take two forms: distinguished names and relative distinguished names.


Explaining Distinguished Names in AD
The distinguished name of an object in AD DS is the entire naming path that the object occupies in the directory. For example, the user named Brian McElhinney can be represented by the following distinguished name:

CN=Brian McElhinney,OU=Sydney,DC=Companyabc,DC=com

The CN component of the distinguished name is the common name, which defines the object within the directory. The OU portion is the organizational unit to which the object belongs. The DC components define the DNS name of the Active Directory domain.
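The following is a minimal sketch of looking up this object over LDAP with the Python ldap3 library; the domain controller name and credentials are hypothetical placeholders, and the requested attributes are common AD attributes rather than anything mandated by the text.

from ldap3 import Server, Connection, ALL, BASE

# Hypothetical domain controller and credentials.
server = Server("dc01.companyabc.com", get_info=ALL)
conn = Connection(server, user="COMPANYABC\\admin", password="...", auto_bind=True)

# Read the object identified by its full distinguished name.
conn.search(
    search_base="CN=Brian McElhinney,OU=Sydney,DC=companyabc,DC=com",
    search_filter="(objectClass=user)",
    search_scope=BASE,
    attributes=["cn", "distinguishedName", "mail"],
)
print(conn.entries)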


Outlining Relative Distinguished Names
The relative distinguished name of an object is basically a truncated distinguished name that defines the object’s place within a set container. For example, take a look at the following object:

OU=Sydney,DC=companyabc,DC=com

This object would have a relative distinguished name of OU=Sydney. The relative distinguished name in this case defines itself as an organizational unit within its current domain container.
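As a rough illustration (not part of the original text), the relative distinguished name is simply the leftmost component of the full distinguished name; the naive Python split below makes that relationship visible. A real DN parser must also handle escaped commas, which this sketch ignores.

def rdns(distinguished_name):
    # Naive split; escaped commas (e.g. "CN=Smith\, John") are not handled.
    return [part.strip() for part in distinguished_name.split(",")]

dn = "OU=Sydney,DC=companyabc,DC=com"
print(rdns(dn)[0])   # "OU=Sydney" -- the relative distinguished name
print(rdns(dn)[1:])  # the parent container: ["DC=companyabc", "DC=com"]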

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Conceptualizing the AD DS Schema

The AD DS schema is a set of definitions for all object types in the directory and their related attributes. The schema determines the way that all user, computer, and other object data are stored in AD DS and is standard across the entire AD DS structure. Secured by the use of discretionary access control lists (DACLs), the schema controls the possible attributes of each object within AD DS. In a nutshell, the schema is the basic definition of the directory itself and is central to the functionality of a domain environment. Care should be taken to delegate schema control to a highly selective group of administrators because schema modification affects the entire AD DS environment.


Schema Objects
Objects within the AD DS structure, such as users, printers, computers, and sites, are defined in the schema as objects. Each object has a list of attributes that define it and that can be used to search for that object. For example, a user object for the employee named Weyland Wong will have a FirstName attribute of Weyland and a LastName attribute of Wong. In addition, there might be other attributes assigned, such as departmental name, email address, and an entire range of possibilities. Users looking up information in AD DS can make queries based on this information; for example, searching for all users in the Sales department.
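A hedged sketch of such an attribute-based query using the Python ldap3 library follows; the domain controller, credentials, and search base are placeholders, and the query uses the standard AD department attribute for the departmental name mentioned above.

from ldap3 import Server, Connection, SUBTREE

# Hypothetical domain controller and credentials.
conn = Connection(Server("dc01.companyabc.com"), user="COMPANYABC\\admin",
                  password="...", auto_bind=True)

# Find every user object whose department attribute is "Sales".
conn.search(
    search_base="DC=companyabc,DC=com",
    search_filter="(&(objectClass=user)(department=Sales))",
    search_scope=SUBTREE,
    attributes=["cn", "mail", "department"],
)
for entry in conn.entries:
    print(entry.cn, entry.mail)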


Extending the Schema
One of the major advantages of the AD DS structure is the ability to directly modify and extend the schema to provide for custom attributes. A common attribute extension occurs with the installation of Microsoft Exchange Server, which extends the schema, more than doubling it in size. An upgrade from Windows Server 2003 or Windows Server 2008 AD to Windows Server 2008 R2 AD DS also extends the schema to include attributes specific to Windows Server 2008 R2. Many third-party products have their own schema extensions as well, each providing for different types of directory information to be displayed.


Performing Schema Modifications with the Active Directory Service Interfaces
An interesting method of actually viewing the nuts and bolts of the AD DS schema is by using the Active Directory Service Interfaces (ADSI) utility. This utility was developed to simplify access to AD DS and can also view any compatible foreign LDAP directory. The ADSIEdit utility enables an administrator to view, delete, and modify schema attributes. Great care should be taken before schema modifications are undertaken because problems in the schema can be difficult to fix.

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Outlining Functional Levels in Windows Server 2008 R2 AD DS

Just as Windows 2000 and Windows 2003 had their own functional levels that ensured down-level compatibility with legacy domain versions, Windows Server 2008 R2 has its own functional levels that are used to maintain compatibility. The following functional levels exist in Windows Server 2008 R2:

. Windows 2000 Native functional level—This functional level allows Windows Server 2008 R2 domain controllers to coexist with both Windows 2000 SP3+ and Windows 2003 domain controllers within a forest.

. Windows Server 2003 functional level—This functional level allows Windows 2003 and Windows Server 2008 R2 domain controllers to coexist. Additional functionality is added to the forest, including cross-forest transitive trust capabilities and replication enhancements.

. Windows Server 2008 functional level—In this functional level, all domain controllers must be running Windows Server 2008 or later. Changing the domain and forest functional level to Windows Server 2008 adds additional functionality, such as fine-grained password policies.

. Windows Server 2008 R2 functional level—In this functional level, all domain controllers must be running Windows Server 2008 R2. Changing the forest functional level to this latest AD DS level grants Windows Server 2008 R2 feature functionality, such as access to the Active Directory Recycle Bin.

A fresh installation of Active Directory on Windows Server 2008 R2 domain controllers allows you to choose which functional level you want to start the forest in. If an existing forest is already in place, it can be brought to the Windows Server 2008 R2 functional level by performing the following steps:

1. Ensure that all domain controllers in the forest are upgraded to Windows Server 2008 R2 or replaced with new Windows Server 2008 R2 DCs.

2. Open Active Directory Domains and Trusts from the Administrative Tools menu on a domain controller.

3. In the left scope pane, right-click on the domain name, and then click Raise Domain Functional Level.

4. In the box labeled Raise Domain Functional Level, select Windows Server 2008 R2, and then click Raise.

5. Click OK and then click OK again to complete the task.

6. Repeat steps 1–5 for all domains in the forest.

7. Perform the same steps on the root node of Active Directory Domains and Trusts, except this time choose Raise Forest Functional Level and follow the prompts.

When all domains and the forest level have been raised to Windows Server 2008 R2 functionality, the forest can take advantage of the latest AD DS functionality, such as the Active Directory Recycle Bin, outlined in more detail later in this chapter. Remember, until you complete this task, Windows Server 2008 R2 essentially operates in a downgraded compatibility mode.
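To confirm where a domain and forest currently stand, the functional levels can be read from the rootDSE of any domain controller. The following is a minimal sketch using the Python ldap3 library, assuming anonymous rootDSE reads are permitted (the default) and using a placeholder server name; the numeric values 0 through 4 correspond to Windows 2000 through Windows Server 2008 R2.

from ldap3 import Server, Connection, BASE

# Placeholder domain controller name; an anonymous bind is enough for rootDSE.
conn = Connection(Server("dc01.companyabc.com"), auto_bind=True)
conn.search(
    search_base="",
    search_filter="(objectClass=*)",
    search_scope=BASE,
    attributes=["domainFunctionality", "forestFunctionality"],
)
# 0 = Windows 2000, 2 = Windows Server 2003, 3 = Windows Server 2008,
# 4 = Windows Server 2008 R2
print(conn.entries[0])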

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Examining the Key Features of Active Directory Domain Services

Five key components are central to AD DS’s functionality. As compatibility with Internet standards has become required for new directory services, the existing implementations have adjusted and focused on these areas:

. TCP/IP compatibility—Unlike some of the original proprietary protocols such as IPX/SPX and NetBEUI, the Transmission Control Protocol/Internet Protocol (TCP/IP) was designed to be cross-platform. The subsequent adoption of TCP/IP as an Internet standard for computer communications has propelled it to the forefront of the protocol world and essentially made it a requirement for enterprise operating systems. AD DS and Windows Server 2008 R2 utilize the TCP/IP protocol stack as their primary method of communications.

. Lightweight Directory Access Protocol support—The Lightweight Directory Access Protocol (LDAP) has emerged as the standard Internet directory protocol and is used to update and query data within the directory. AD DS directly supports LDAP.

. Domain name system (DNS) support—DNS was created out of a need to translate simplified names that can be understood by humans (such as www.cco.com) into an IP address that is understood by a computer (such as 12.155.166.151). The AD DS structure supports and effectively requires DNS to function properly.

. Security support—Internet standards-based security support is vital to the smooth functioning of an environment that is essentially connected to millions of computers around the world. Lack of strong security is an invitation to be hacked, and Windows Server 2008 R2 and AD DS have taken security to greater levels. Support for IP Security (IPSec), Kerberos, Certificate Authorities, and Secure Sockets Layer (SSL) encryption is built in to Windows Server 2008 R2 and AD DS.

. Ease of administration—Although often overlooked in powerful directory services implementations, the ease with which the environment is administered and configured directly affects the overall costs associated with its use. AD DS and Windows Server 2008 R2 are specifically designed for ease of use to lessen the learning curve associated with the use of a new environment. Windows Server 2008 R2 also enhanced AD DS administration with the introduction of the Active Directory Administrative Center, Active Directory Web Services, and an Active Directory module for Windows PowerShell command-line administration.

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Examining the Evolution of Directory Services

Directory services have existed in one form or another since the early days of computing to provide basic lookup and authentication functionality for enterprise network implementations. A directory service provides detailed information about a user or object in a network, much in the same way that a phone book is used to look up a telephone number for a provided name. For example, a user object in a directory service can store the phone number, email address, department name, and as many other attributes as an administrator desires.

Directory services are commonly referred to as the white pages of a network. They provide user and object definition and administration. Early electronic directories were developed soon after the invention of the digital computer and were used for user authentication and to control access to resources. With the growth of the Internet and the increase in the use of computers for collaboration, the use of directories expanded to include basic contact information about users. Examples of early directories included MVS PROFS (IBM), Grapevine’s Registration Database, and WHOIS.

Application-specific directory services soon arose to address the specific addressing and contact-lookup needs of each product. These directories were accessible only via proprietary access methods and were limited in scope. Applications utilizing these types of directories were programs such as Novell GroupWise, Lotus Notes, and the UNIX sendmail /etc/aliases file.

The further development of large-scale enterprise directory services was spearheaded by Novell with the release of Novell Directory Services (NDS) in the early 1990s. It was adopted by NetWare organizations and eventually was expanded to include support for mixed NetWare/NT environments. The flat, unwieldy structure of NT domains and the lack of synchronization and collaboration between the two environments led many organizations to adopt NDS as a directory service implementation. It was these specific deficiencies in NT that Microsoft addressed with the introduction of AD DS.

The development of the Lightweight Directory Access Protocol (LDAP) corresponded with the growth of the Internet and a need for greater collaboration and standardization. This nonproprietary method of accessing and modifying directory information that fully utilized TCP/IP was determined to be robust and functional, and new directory services implementations were written to utilize this protocol. AD DS itself was specifically designed to conform to the LDAP standard.

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Performing a Live Migration

The virtual machine runs on one of the cluster nodes, known as the owner. A Live Migration involves multiple steps, which can be broken down into three stages: preflight migration, virtual machine transfer, and final transfer/startup of the virtual machine.

The first stage of Live Migration runs on both the source node (where the virtual machine is currently running) and the target node (where the virtual machine will be moved) to ensure that the migration can, in fact, occur successfully.

The detailed steps of Live Migration are as follows:
1. Identify the source and destination machines.

2. Establish a network connection between the two nodes.

3. The preflight stage begins. Check if the various resources available are compatible between the source and destination nodes:
. Are the processors using similar architecture? (For example, a virtual machine running on an AMD node cannot be moved to an Intel node, and vice versa.)
. Are there a sufficient number of CPU cores available on the destination?
. Is there sufficient RAM available on the destination?
. Is there sufficient access to required shared resources (VHD, network, and so on)?
. Is there sufficient access to physical device resources that must remain associated with the virtual machine after migration (CD drives, DVDs, and LUNs or offline disks)?
Migration cannot occur if there are any problems in the preflight stage. If there are, the virtual machine will remain on the source node and processing ends here. If preflight is successful, migration can occur and the virtual machine transfer continues.

4. The virtual machine state (inactive memory pages) moves to the target node to reduce the active virtual machine footprint as much as possible. All that remains on the source node is a small memory working set of the virtual machine. The virtual machine configuration and device information are transferred to the destination node and the worker process is created. Then, the virtual machine memory is transferred to the destination while the virtual machine is still running. The cluster service intercepts memory writes and tracks the pages that are modified during the migration; these pages will be retransmitted later. Up to this point, the virtual machine technically remains on the source node.

5. What remains of the virtual machine is briefly paused on the source node. The virtual machine working set is then transferred to the destination host, storage access is moved to the destination host, and the virtual machine is reset on the destination host.

The only downtime on the virtual machine occurs in the last step, and this outage is usually much shorter than most network applications are designed to tolerate. For example, an administrator can be accessing the virtual machine via Remote Desktop while it is being Live Migrated and will not experience an outage. Or a virtual machine could be streaming video to multiple hosts, be Live Migrated to another node, and the end users will not notice the difference.
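The following is an illustrative sketch, in Python, of the three-stage flow just described; it is not Hyper-V code, and all of the type and field names are hypothetical simplifications of the checks and transfers listed above.

from dataclasses import dataclass

@dataclass
class Node:
    cpu_vendor: str        # e.g. "Intel" or "AMD"
    free_cores: int
    free_ram_gb: int
    reachable_luns: set    # shared storage visible to this node

@dataclass
class VirtualMachine:
    cpu_vendor: str
    cores: int
    ram_gb: int
    required_luns: set

def preflight_ok(vm, target):
    """Stage 1: every compatibility check must pass or nothing moves."""
    return (vm.cpu_vendor == target.cpu_vendor              # similar processor architecture
            and target.free_cores >= vm.cores               # enough CPU cores
            and target.free_ram_gb >= vm.ram_gb             # enough RAM
            and vm.required_luns <= target.reachable_luns)  # shared/physical resources reachable

def live_migrate(vm, source, target):
    if not preflight_ok(vm, target):
        return "virtual machine remains on the source node"
    # Stage 2: copy memory while the VM keeps running on the source,
    # retransmitting any pages dirtied during the copy.
    # Stage 3: brief pause, move the working set and storage access,
    # then resume the VM on the target.
    return "virtual machine running on the target node"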

Use the following steps to perform a Live Migration between two cluster nodes:
1. On one of the cluster nodes, open Failover Cluster Management.

2. Expand the Cluster and select Services and Applications.

3. Select the virtual machine to Live Migrate.

4. Click Live Migrate Virtual Machine to Another Node in the Actions pane and select the node to move the virtual machine to. The virtual machine will migrate to the selected node using the process described previously.

If there are processor differences between the source and destination nodes, Live Migration will display a warning that the CPU capabilities do not match. To perform the Live Migration in that case, you must shut down the virtual machine and, in its processor settings, enable the option "Migrate to a Physical Computer with a Different Processor Version".

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Quick Migration and Live Migration

There are two forms of automated migration provided by Windows Server 2008 R2 Hyper-V: Quick Migration and Live Migration. These migration processes can be used to increase service availability for planned and unplanned downtime. Although both technologies achieve the same thing—moving virtual servers between Hyper-V hosts—they use different methods and mechanisms to achieve it. Both require at least two Hyper-V host servers in a cluster, attached to the same shared storage system. Usually, the shared storage is an iSCSI or Fibre Channel storage area network (SAN).


Quick Migration
Quick Migration provides a way to quickly move a virtual machine from one host server to another with a small amount of downtime.

In a Quick Migration, the guest virtual machine is suspended on one host and resumed on another host. This operation happens in the time it takes to transfer the active memory of the virtual machine over the network from the first host to the second host. For a host with 8GB of RAM, this might take about two minutes using a 1Gbps iSCSI connection.
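As a back-of-the-envelope check of that figure (my own arithmetic, not from the text): moving 8GB of memory over a 1Gbps link takes roughly a minute of raw transfer time, so two minutes is plausible once the suspend/resume steps and protocol overhead are included.

# Raw transfer time for 8 GB of guest memory over a 1 Gbps link,
# ignoring protocol overhead and the suspend/resume steps.
ram_bytes = 8 * 2**30
link_bits_per_second = 1_000_000_000
seconds = ram_bytes * 8 / link_bits_per_second
print(f"raw transfer time: {seconds:.0f} s (~{seconds / 60:.1f} min before overhead)")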

Quick Migration was the fastest migration available for Windows Server 2008 Hyper-V. Microsoft made considerable investments in Hyper-V migration technologies, trying to reduce the time required to migrate virtual machines between Hyper-V hosts. The result was Live Migration, which has the same hardware requirements as Quick Migration, but with a near instantaneous failover time.


Live Migration
Since the release of Hyper-V V1 with Windows Server 2008, the number-one most-requested feature by customers has been the ability to migrate running virtual machines between hosts with no downtime. VMware's VMotion has been able to do this for some time. Finally, with Windows Server 2008 R2, it can be done natively with Hyper-V for no extra cost. This makes it a compelling reason to move to Hyper-V.

Live Migration uses failover clustering. The quorum model used for your cluster will depend on the number of Hyper-V nodes in your cluster. In this example, we will use two Hyper-V nodes in a Node and Disk Majority Cluster configuration. There will be one shared storage LUN used as the cluster quorum disk and another used as the Cluster Shared Volume (CSV) disk.


If there is only one shared storage LUN available to the nodes when the cluster is formed, Windows will allocate that LUN as the cluster quorum disk and it will not be available to be used as a CSV disk.

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Various RAID Levels

» For RAID 0 (Data Striping), the cost of storage is higher than for a single disk (assuming that a single disk has sufficient capacity) since using several disks (regardless of their ability to provide more storage capacity than a single disk) increases costs for items such as racks, cables, controllers, and power. Data availability is lower than for a single disk, because MTBF for the RAID is the MTBF of a single disk divided by the number of disks used—that is, a RAID 0 of N disks has an MTBF N times smaller than the MTBF of a single disk. Reading and writing large blocks on a RAID 0 using N disks takes less time than for a single disk (at best N times less, limited by the fact that the disks are not in general rotationally synchronized). This reduces the occupation time of the disks and allows higher bandwidths. The same is true for random reads and writes.
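A small worked example of those two RAID 0 claims (the MTBF division and the best-case speedup); the numbers below are arbitrary illustrations, not measurements from the text.

def raid0_mtbf(single_disk_mtbf_hours, n_disks):
    # MTBF of the stripe set is the single-disk MTBF divided by the disk count.
    return single_disk_mtbf_hours / n_disks

def raid0_best_case_transfer(single_disk_seconds, n_disks):
    # Upper bound on the speedup; unsynchronized rotation keeps it from being reached.
    return single_disk_seconds / n_disks

print(raid0_mtbf(1_000_000, 4))           # 4 disks: 250,000 hours
print(raid0_best_case_transfer(8.0, 4))   # 4 disks: at best 2.0 seconds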


» For RAID 1 (Mirroring), the storage cost is proportional to the number of copies of the data kept (the factor M in the table). Most often, mirroring is simple replication (M = 2). As to availability, it is clear that RAID 1 has higher availability than RAID 3 or RAID 5, since it has complete data replication rather than a parity disk per N physical disks. Reading, whether of large transfers or random transfers, has higher performance because the data can be read concurrently from multiple disks. Concurrency is less effective for writes, whether for large transfers or random transfers, because of the need to not signal completion until the last write on the last disk is complete.


» RAID 0 + 1 (Striped Mirror) has more or less the same properties as RAID 1, with just one further comment on write operations: the time for write operations for large transfers can be lower than for a single disk, if the time saved as a result of distributing the data across N parallel disks is greater than the extra cost of synchronizing completion across M groups of disks.


» RAID 3 (Parity Disk) availability is ensured through the use of parity information. Large block reads offer similar performance to RAID 0, with any differences attributable to the need to compute parity for the information read, along with any required correction. Large block writes are slower, because such transfers involve both the calculation of parity and writing the parity values to the parity disk, whose busy time can be greater than those of the collection of data disks, since there is just one parity disk. Random reads require a parity disk access, calculation of data parity, parity comparison, and any necessary correction. A write operation implies calculation of parity and its writing to the parity disk. Performance compared with a single disk depends on the performance advantage obtained by distributing the data across multiple disks.


» RAID 5 (Spiral Parity) offers essentially the same availability as RAID 3. Again, large transfer performance is impacted by the need to calculate parity and apply correction as required. Random reads and writes are generally better than for RAID 3 because of the distribution of parity information over multiple disks, reducing contention on parity updates.


» RAID 6 (Double Spiral Parity) provides higher availability than RAID 5, since it can survive two concurrent independent failures. RAID 6 has slightly higher read performance than RAID 5, since double parity reduces contention and thus wait time for parity writes (only slightly higher performance, since the number of disks grows only from N + 1 to N + 2). Write operations, on the other hand, are slower, suffering from the increased burden of double parity computation and writing.
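To summarize the capacity cost behind these availability trade-offs, the sketch below computes the usable fraction of raw capacity for each level discussed, using the same N (data disks) and M (mirror copies) notation as the text; the formulas are standard RAID accounting, not figures from the source.

def usable_fraction(level, n, m=2):
    # Fraction of raw disk capacity available for data.
    if level == "RAID0":
        return 1.0              # striping only, no redundancy
    if level == "RAID1":
        return 1.0 / m          # M copies of every block
    if level in ("RAID3", "RAID5"):
        return n / (n + 1.0)    # one disk's worth of parity (dedicated or distributed)
    if level == "RAID6":
        return n / (n + 2.0)    # two disks' worth of parity
    raise ValueError(level)

for lvl in ("RAID0", "RAID1", "RAID3", "RAID5", "RAID6"):
    print(lvl, round(usable_fraction(lvl, n=8), 2))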

Source of Information : Elsevier Server Architectures

Common Internet File System

We would be remiss in our descriptions of remote access file systems were we to omit mention of CIFS, which is used in Windows systems for remote file access.

CIFS is an improved version of Microsoft’s SMB (Server Message Block); proposed by Microsoft, CIFS was offered to the IETF (Internet Engineering Task Force) for adoption as a standard.

CIFS, installed on a PC, allows that PC access to data held on UNIX systems.

There is an important difference between NFS and CIFS. NFS is stateless, while CIFS is stateful.

This means that an NFS server does not need to maintain any state information on its clients, but a CIFS server must. Thus, in the event of a failure in either the network or the server, recovery is much more complex for a CIFS server than for an NFS server. NLM (Network Lock Manager) was provided to implement lock operations in NFS, but its use is not widespread. Version 4 of NFS supports locking.

Examples of products implementing CIFS include:
» Samba (free software);
» ASU (Advanced Server UNIX) from AT&T;
» TAS (TotalNET Advanced Server) from Syntax.

UNIX file systems need extensions to support Windows file semantics; for example, the “creation date” information needed by Windows and CIFS must be kept in a UNIX file system in a complementary file.


This diagram follows our practice of omitting some components for simplicity. We do not show the TLI (Transport Layer Interface) nor the NDIS (Network Driver Interface Specification), for example, nor do we show local accesses on the server. NTFS (NT File System) is the Windows 2000 native file system.

The I/O manager determines whether an access is local or remote; the request is either directed to the local file system or handled by the CIFS redirector. The redirector checks whether the data is available in the local cache and, if not, passes the request on to the network layers for forwarding to the server holding the file involved.
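The sketch below is a loose, hypothetical illustration of that routing decision (it is not Windows source code): local paths go straight to the local file system, and UNC paths go through a redirector that answers from its cache when possible.

def handle_read(path, redirector_cache, send_to_cifs_server):
    """Toy model of the I/O manager / CIFS redirector decision described above."""
    if not path.startswith("\\\\"):
        return ("local file system", path)          # local access
    if path in redirector_cache:
        return ("redirector cache", redirector_cache[path])
    data = send_to_cifs_server(path)                # forwarded over the network
    redirector_cache[path] = data
    return ("network", data)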

Source of Information : Elsevier Server Architectures

Cloud storage is for blocks too, not just files

One of the misconceptions about cloud storage is that it is only useful for storing files. This assumption comes from the popularity of file...