Windows Server 2008 Enhances Networking - Offloading protocol processing

Certain specialized network interfaces and hardware can offload the often resource-intensive burden of processing the TCP/IP network stack, which requires handling a multilayered protocol framework to deliver encapsulated data. Offloading frees up local CPU and RAM for other general-purpose tasks and moves the strain of ongoing network connection processing onto hardware designed for that purpose.

By encapsulated data, we refer to the way data is packaged as it travels down the TCP/IP network protocol stack. Higher-level protocols are encapsulated within header (and sometimes trailer) information so that lower-level routing and switching devices can process (and in some cases interpret) protocol data.
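
To make encapsulation concrete, here’s a minimal Python sketch of application data being wrapped by a TCP, an IP, and finally an Ethernet layer on its way down the stack. The header contents are simplified placeholders, not real packet formats.

# Minimal sketch of protocol encapsulation: each layer wraps the payload
# it receives from the layer above with its own header (field values are
# illustrative placeholders, not real packet formats).

def tcp_segment(payload: bytes, src_port: int, dst_port: int) -> bytes:
    header = f"TCP[{src_port}->{dst_port}]".encode()
    return header + payload

def ip_packet(payload: bytes, src_ip: str, dst_ip: str) -> bytes:
    header = f"IP[{src_ip}->{dst_ip}]".encode()
    return header + payload

def ethernet_frame(payload: bytes, src_mac: str, dst_mac: str) -> bytes:
    header = f"ETH[{src_mac}->{dst_mac}]".encode()
    trailer = b"FCS"          # frame check sequence appended as a trailer
    return header + payload + trailer

app_data = b"GET / HTTP/1.1"
frame = ethernet_frame(
    ip_packet(
        tcp_segment(app_data, 49152, 80),
        "192.168.1.10", "203.0.113.5"),
    "aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66")
print(frame)  # lower layers see only the headers they understand plus an opaque payload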

Protocol offload processing is supported in software by what Windows calls the TCP Chimney and in hardware by the TCP Offload Engine.


TCP Chimney
The TCP Chimney is a feature introduced first in Windows Vista and then, by extension, in Windows Server 2008. It’s the result of Microsoft’s Scalable Networking initiative, which encompasses a number of changes to the core network infrastructure of every new platform product. The goal is to reduce the operational overhead associated with establishing, maintaining, and terminating connection state (the status of a given network connection) and all requisite state information throughout the lifetime of a connection. By removing this overhead from general-purpose resources and delegating the responsibility to special-purpose network interfaces, additional computing resources are freed up, especially on servers.

A chimney is a collection of offloaded protocol state objects and any associated semantics that enable the host computer to offload network protocol processing to some other network device, usually the network interface. Since NDIS 6.0, Windows Server has included an architecture that supports full TCP offload, called a chimney offload architecture because it provides a direct connection between applications and an offload-capable network adapter. This enables the network adapter to perform TCP/IP stack processing for offloaded connections, as well as to maintain the protocol state.
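
The following toy Python model suggests what handing off a connection’s state looks like. The class and method names (TcpStateObject, OffloadCapableNic, offload_connection) are invented for illustration and aren’t the actual NDIS or TCP Chimney interfaces.

# Toy model of chimney-style offload (class and method names are invented
# for illustration; they are not the actual NDIS/TCP Chimney interfaces).

class TcpStateObject:
    """Per-connection state the host would otherwise track in software."""
    def __init__(self, local, remote, snd_nxt, rcv_nxt, window):
        self.local, self.remote = local, remote
        self.snd_nxt = snd_nxt      # next sequence number to send
        self.rcv_nxt = rcv_nxt      # next sequence number expected
        self.window = window

class OffloadCapableNic:
    """Stands in for an adapter that accepts offloaded connections."""
    def __init__(self):
        self.offloaded = {}
    def offload_connection(self, state: TcpStateObject):
        # The host hands the state object down the "chimney"; from now on
        # the adapter, not the host CPU, updates it for each segment.
        self.offloaded[(state.local, state.remote)] = state
    def transmit(self, conn_key, payload: bytes):
        state = self.offloaded[conn_key]
        state.snd_nxt += len(payload)   # bookkeeping done on the adapter
        return f"sent {len(payload)} bytes, snd_nxt now {state.snd_nxt}"

nic = OffloadCapableNic()
conn = TcpStateObject(("10.0.0.1", 5000), ("10.0.0.2", 80), 1000, 2000, 65535)
nic.offload_connection(conn)
print(nic.transmit((("10.0.0.1", 5000), ("10.0.0.2", 80)), b"x" * 1460))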


Changes to NDIS
Microsoft’s Network Driver Interface Specification (NDIS) defines a standard application programming interface (API) for network adapters. The details of a network adapter’s hardware implementation are wrapped by a MAC device driver so that all devices for the same media are accessed in a common, predictable way.
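
Here’s a conceptual Python sketch of that idea: a common interface stands in for the NDIS-defined API, and two invented vendor classes stand in for hardware-specific MAC drivers, so upper layers drive both the same way. None of these names are real NDIS entry points.

# Conceptual sketch of a common driver interface (names are invented for
# illustration and are not actual NDIS entry points).
from abc import ABC, abstractmethod

class EthernetMiniport(ABC):
    """Common API every Ethernet adapter driver exposes to the stack."""
    @abstractmethod
    def send(self, frame: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class VendorAAdapter(EthernetMiniport):
    def send(self, frame: bytes) -> None:
        print("vendor-A DMA ring write:", len(frame), "bytes")
    def receive(self) -> bytes:
        return b"frame from vendor-A hardware"

class VendorBAdapter(EthernetMiniport):
    def send(self, frame: bytes) -> None:
        print("vendor-B register-mapped write:", len(frame), "bytes")
    def receive(self) -> bytes:
        return b"frame from vendor-B hardware"

# Upper layers never see vendor-specific details; any adapter of the same
# media type is driven through the same calls.
for nic in (VendorAAdapter(), VendorBAdapter()):
    nic.send(b"\x00" * 64)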

NDIS provides the library of functionality necessary to drive network interactions on the Windows platform; it both simplifies driver development and hides the ugliness of platform-specific dependencies. Some of the new features provided by NDIS specification version 6.0 are described below.

New offload support
NDIS 6.0 adds support for offloading new kinds of network traffic processing to compatible network adapters, including:

• IPv6 traffic offload: NDIS 5.1 (Windows XP, Windows Server 2003) already supports offload processing for IPv4 traffic; NDIS 6.0 extends it to IPv6 traffic as well.

• IPv6 checksum offload: Checksum calculations for IPv6 can now be offloaded to compliant network adapters (the checksum arithmetic is sketched after this list).

• Large send offload (version 2): NDIS 5.1 supports large send offload (LSO), which offloads the segmentation of TCP data blocks of up to 64K into individual packets. Large send offload version 2 (LSOv2) in NDIS 6.0 offloads segmentation of much larger blocks; the segmentation itself is sketched after this list.
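
As a rough illustration of the work these offloads hand to the adapter, the Python sketch below computes a 16-bit one’s-complement Internet checksum and splits a 64K block into MSS-sized segments with running sequence numbers. Real checksum offload also covers a pseudo-header and real LSO happens in hardware; the sizes and values here are purely illustrative.

# Sketch of the two kinds of work these offloads move onto the adapter:
# the 16-bit one's-complement Internet checksum and LSO-style segmentation
# of one large block into MSS-sized TCP segments. Values are illustrative;
# a real TCP/UDP checksum also covers a pseudo-header.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def segment_large_send(block: bytes, mss: int = 1460, start_seq: int = 1000):
    """Split one large block into MSS-sized segments with running sequence
    numbers -- the job LSO/LSOv2 hands to the network adapter."""
    segments = []
    seq = start_seq
    for offset in range(0, len(block), mss):
        chunk = block[offset:offset + mss]
        segments.append((seq, internet_checksum(chunk), chunk))
        seq += len(chunk)
    return segments

large_block = b"A" * (64 * 1024)                   # a 64K block from the stack
segs = segment_large_send(large_block)
print(len(segs), "segments; first seq", segs[0][0], "last seq", segs[-1][0])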


Support for lightweight filter drivers
Intermediate filter drivers are replaced by lightweight filter (LWF) drivers, which combine the capabilities of an NDIS 6.0 intermediate driver and a miniport driver. LWF drivers improve performance, consolidate protocol driver support, and provide a bypass mode in which the LWF examines only selected control and data paths.
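
The following conceptual Python sketch (with invented names; this is not the NDIS filter API) shows the bypass idea: a filter registers handlers only for the paths it cares about, and unregistered paths flow straight through.

# Conceptual sketch of a lightweight filter's bypass mode: the filter only
# registers handlers for the paths it cares about; everything else flows
# straight through. (Names are invented; this is not the NDIS filter API.)

class LightweightFilter:
    def __init__(self, handle_send=None, handle_receive=None):
        self.handle_send = handle_send        # None means "bypass this path"
        self.handle_receive = handle_receive

def drive_stack(filters, path, packet):
    for f in filters:
        handler = f.handle_send if path == "send" else f.handle_receive
        if handler is not None:               # only registered paths add work
            packet = handler(packet)
    return packet

logger = LightweightFilter(handle_receive=lambda p: p + b"|logged")
shaper = LightweightFilter(handle_send=lambda p: p[:10])

print(drive_stack([logger, shaper], "send", b"X" * 20))       # logger bypassed
print(drive_stack([logger, shaper], "receive", b"inbound"))   # shaper bypassed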


Receive-side scaling
Multiprocessor computers running Windows Server 2003 or Windows XP associate a given network adapter with a single processor. That processor must handle all traffic for that interface, even though other processors may be available. This hurts Web- and file-server performance when client connections reach the serviceable limit of the associated processor.

Incoming traffic that can’t be handled by either the network interface or its associated processor is discarded, which is undesirable in just about every situation. The dropped segments increase the session serialization and sequence-identifier bookkeeping in the TCP/IP stack and amplify the performance penalty of network stack retransmissions.

Both session serialization (sessions encoded as a sequence) and sequence identifiers (unique numeric values associated with serialized sessions) are related to the protocol stack. These properties help identify which portions of data are assembled and in what order, so that portions arriving out-of-order are properly reordered and those that never arrive are requested again.
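
Here’s a small Python sketch of that bookkeeping, with toy sequence numbers rather than a real TCP implementation.

# Sketch of how sequence numbers let a receiver reorder segments and spot
# gaps that need retransmission (toy values, not a real TCP implementation).

def reassemble(segments, expected_start):
    """segments: list of (seq, payload) tuples arriving in any order."""
    stream, missing = b"", []
    next_seq = expected_start
    for seq, payload in sorted(segments):          # reorder by sequence number
        if seq > next_seq:
            missing.append((next_seq, seq))        # gap: request retransmission
            next_seq = seq
        stream += payload
        next_seq += len(payload)
    return stream, missing

arrived = [(3000, b"cccc"), (1000, b"aaaa"), (1004, b"bbbb")]  # out of order, with a gap
data, gaps = reassemble(arrived, expected_start=1000)
print(data)   # b'aaaabbbbcccc' after reordering
print(gaps)   # [(1008, 3000)] -> bytes 1008-2999 never arrived and must be re-requested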

Windows Server 2008 no longer associates a network adapter with a single processor; instead, inbound traffic is distributed across the available processors and handled accordingly. This feature, called receive-side scaling, allows for more inbound traffic on high-volume network interfaces. A multiprocessor server can scale its ability to handle incoming traffic without additional hardware, so long as compliant network adapters are already in place.
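
The Python sketch below illustrates the general idea: hash each packet’s connection 4-tuple to pick a processor, so the load spreads across CPUs while packets from the same connection always reach the same one. Actual RSS uses a Toeplitz hash and an indirection table; zlib.crc32 is used here purely for illustration.

# Simplified sketch of receive-side scaling: hash each packet's connection
# 4-tuple and use the hash to pick a processor, so one adapter's inbound
# load is spread across CPUs while packets from the same connection always
# land on the same CPU. (Real RSS uses a Toeplitz hash and an indirection
# table; zlib.crc32 stands in purely for illustration.)
import zlib

NUM_CPUS = 4

def rss_queue(src_ip, src_port, dst_ip, dst_port):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CPUS              # index into the CPU array

packets = [
    ("198.51.100.7", 51000, "10.0.0.1", 80),
    ("198.51.100.8", 52000, "10.0.0.1", 80),
    ("198.51.100.7", 51000, "10.0.0.1", 80),       # same connection as the first
]
for p in packets:
    print(p[0], p[1], "->", "CPU", rss_queue(*p))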


Source of Information: Windows Server 2008 For Dummies
