Common Windows Server 2008 Networking Functions

Windows Server 2008 includes features and functionalities that support almost every conceivable networking service, but not all of them are new or updated in this release. It is therefore important to first establish a common vocabulary for standard networking services and then identify where Windows Server 2008 brings new features and functionalities; together, these form a map of the new Windows Server 2008 feature set. Small organizations, or networks that include only a single site, will often run a basic set of networking services. These services tend to focus on the following:

• Domain Services. Using Active Directory to centrally store and manage all user accounts makes sense in organizations of all sizes. The alternative—using workgroup practices—means managing multiple security account databases, one on each server or workstation. Active Directory is so simple to use that it rarely makes sense to use anything else.

• File and Printer Sharing. Storing documents centrally has always made sense because you only have to protect a single location. Every organization has a use for central file and printer management, even if new collaboration features offer a better way to manage documents and have teams interact.

• Collaboration Services. With Windows SharePoint Services (WSS), organizations can have teams interact with each other through a Web-based team structure. Since almost all organizational activity takes the form of a project, using team sites and collaboration services only makes sense, especially since WSS is so easy to install and manage.

• Database Services. Windows SharePoint Services relies on a database—in this case, the Windows Internal Database, which is, in fact, a version of SQL Server Embedded edition.

• E-mail Services. Most organizations also rely on e-mail services. Though Windows Server 2008 does provide the Simple Mail Transfer Protocol (SMTP) service, organizations usually opt for a professional e-mail service, such as that provided by Microsoft Exchange Server.

• Backup and Restore Services. All organizations will want to use Windows Backup to protect their systems at both the data and the operating system level. The new Backup tool in Windows Server 2008 provides protection for both.


These form the basic services that most organizations require. In addition, even small organizations will often rely on the following optional services:

• Firewall Services. Any organization that has a connection to the external world through the Internet will want to make sure they are completely protected. The only way to do so is to implement an advanced firewall service.

• Fax Services. Windows Server 2008 can provide integrated fax services, freeing organizations from needing a conventional fax machine.

• Terminal Services. Terminal Services (TS) provides the ability to run applications on a server instead of on the user’s workstation. The advantage of this is that organizations need to manage applications only in one central location. In addition, with Windows Server 2008, the use of TS applications is completely transparent to end users, since it appears as if they are working off the local machine.

• Hyper-V. This is a core service of the new datacenter. It supports the virtualization of all other service offerings. This service is installed on all hardware, and all other services are installed within virtual machines.

• Network Access Services (NAS). With the proliferation of home offices, more and more organizations are relying on network access services, such as virtual private networks (VPNs), to let home workers access the corporate network over common home-based Internet connections.

• Deployment Services. With the advent of the new Windows Deployment Services in Windows Server 2008, many organizations will want to take advantage of this feature to automate the installation and deployment of Windows XP and Windows Vista machines. Larger organizations will definitely want to use these services to deploy servers as well as workstations.

• Windows Server Update Services. With the proliferation of attacks on systems of all types, organizations of all sizes will want to make sure they implement a system for keeping all of their computers—workstations and servers—up to date at all times. Windows Server Update Services (WSUS) is not part of WS08, but is free and can be obtained at www.microsoft.com/windowsserversystem/updateservices/downloads/WSUS.mspx. Registration is required to obtain the download.

In addition, any organization that includes more than one site will need to ensure that the services they provide at one site are available at any other. This is done through a series of different features, which rely mostly on either a duplication of the base services in remote sites or the use of a replication mechanism to copy data from one location to the other. The implementation of these systems is more complex than single-site structures.


Larger organizations will add more services to their network just because of the nature of their organization. These will include:

• Certificate Services. Anyone who wants to control identity and ensure that users are who they claim to be at all times will want to take advantage of Active Directory Certificate Services, a public key infrastructure system that provides electronic certificates to users and machines in order to clearly identify who they are.

• Rights Management Services. Organizations concerned about the protection of their intellectual data will want to implement Active Directory Rights Management Services (ADRMS). ADRMS can protect electronic documents from tampering through the inclusion of protection mechanisms directly within the documents.

• Advanced Storage. Organizations maintaining large deposits of information will want to take advantage of advanced storage systems, such as storage area networks (SANs). Windows Server 2008 provides new ways to access and manage SANs.

• Clustering Services and Load Balancing. Organizations running N-tier applications (applications that are distributed among different server roles) will want to protect their availability through the Windows Clustering Service (WCS), which provides availability through a failover capacity to another server running the same service, and/or Network Load Balancing (NLB), which provides availability through the use of multiple servers running identical configurations.

• Database Services. Organizations relying on large data structures will want to run more than the Windows Internal Database and will rely on other versions of SQL Server to protect their databases.

• Web Applications. Organizations providing custom services, both internally and externally, will need to rely on Internet Information Services (IIS) to deliver a consistent Web experience to end users.

• Middleware Services. Organizations running N-tier applications will want to support them with middleware, such as the Microsoft .NET Framework, COM+, and other third-party components. These run on middleware servers.

• Key Management Services. Organizations that take advantage of Microsoft Software Assurance and Volume Licensing will want to implement this new WS08 role. Key Management Services (KMS) controls the activation of Microsoft volume-licensed software from both clients and servers from within your firewall.

Large organizations will add more functionality to their network; these additions are often grouped as enterprise services. Organizations with more than two sites will simply duplicate the services found in the remote sites. Windows Server 2008 provides new and updated functionality at each of these levels; use this breakdown as a guide for identifying what you would want to add to your network in terms of modern, secure services.

Source of Information : McGraw Hill Microsoft Windows Server 2008

Roxio Creator 2009 - A Comprehensive Media Suite

Roxio’s popular multimedia suite has been renamed (it was formerly Roxio Easy Media Creator) and updated with much-needed support for the AVCHD (Advanced Video Codec High Definition) video format. It also has a few nifty new treats in store, especially for devotees of portable music devices and HD (high definition) video. Whether users of older editions will find it worth the upgrade depends upon their needs. Sonic Solutions has an incentive for owners of a wide array of products: They enjoy a $20 discount when purchasing Creator 2009. With Creator 2009, you’ll go through a fairly lengthy installation process, but that’s not surprising given the breadth of this suite. When you open the program, you’ll be presented with a well-organized and streamlined (compared to older versions), icon-based interface that we found quite pleasant to use. As with previous versions of the product, the various utilities are still separate modules, but they are better integrated and easier to access and work with than before. Furthermore, a single tab (Home) now affords access to Roxio’s six most popular functions without the need to dig into the entire suite.

What’s New, What’s Not
Users of previous versions will be pleased to know that CineMagic, Roxio’s “quick fix” video creator tool, is still present, as is VideoWave, its more advanced video editor. Both have been tweaked slightly, especially in relation to handling of HD content. Creator 2009 now lets you create and burn up to 60 minutes of HD video to a single-layer DVD. If you want Blu-ray support, you’ll have to pay an extra $29.99. However, if Blu-ray is important to you, consider purchasing Roxio Creator 2009 Ultimate ($129.99), which includes the Blu-ray support plus enhanced audio tools and data backup features.

Regarding what’s new, a couple of the treats, including Beatmatch (creates smooth transitions but only works well with heavily syncopated music tracks) and Automix (a so-so playlist creator), are more sizzle than steak. However, there are several interesting new features. One that we love, love, love is SyncIt, which offers, in a single operation, drag-and-drop batch conversion of an array of files of varying formats into a different format (for a portable media player, for example). Other niceties are direct YouTube upload and Audiobook Creator, which converts an entire audio book into one .m4b format (iPhone, iPod, iTunes) file while preserving chapter markers. Fans of audio books will drool over this function. As before, Creator nimbly handles photo management, as well, with tools such as Media Manager that make it easy to keep everything—audio, video, and digital imagery—neatly arranged. Media burning, ripping, and copying are highly automated and intuitive, and the process works like it should.

Final Tally
Overall, Creator 2009 is a solid performer, especially for those who are dipping their toes into HD but don’t yet have the budget to get into Blu-ray. As with most suites of this genre, none of its components have the power of similar standalone tools. However, those tools also cost far more and are generally not as well-integrated as Roxio’s many components. For those who want to manage and edit an array of content from a single, easy-to-use toolbox, it doesn’t get much better than Roxio Creator 2009. In support of the suite’s launch, Roxio is also extending its educational portal, My Moments (mymoments.roxio.com), with new media resources and tutorials.

Source of Information : Smart Computing / January 2009

Building a Windows Server 2008 Network Against Organization Size Definitions

Windows Server 2008 has been designed to respond to the needs of organizations of all sizes, whether you are a company of one working in a basement somewhere or your organization spans the globe, with offices on every continent. Obviously, there is a slight difference in scale between the two extremes. Each of these is defined as follows:

• Small organizations are organizations that include only a single site. They may have several dozen workers, but given that they are located at a single site, their networking needs are fairly basic.

• Medium organizations are organizations that have more than one site but fewer than ten. Their networking needs are defined by the complexity of interconnecting multiple sites.

• Large organizations are organizations that have ten sites or more. In this case, organizations need more complex networks and will often rely on services that are not required at all by the two previous organization sizes.

Small organizations have all of the requirements of a basic network and will normally implement a series of technologies, including directory services, e-mail services, file and printer sharing, database services, and collaboration services. Even if the organization includes a very small number of people, these services will often be at the core of any networked productivity system. For this reason, it is often best for this type of organization to use Windows Small Business Server 2008 (SBS08), because it is less expensive and it includes more comprehensive applications for e-mail and database services. Nevertheless, some organizations opt for Windows Server 2008 anyway, because they are not comfortable with the limitations Microsoft has imposed on the Small Business Server edition. For example, it is always best and simpler to have at least two domain controllers running the directory service because they become automatic backups of each other. SBS08 can only have a single server in the network and therefore cannot offer this level of protection for the directory service. This is one reason why some small organizations opt for Windows Server 2008 even if it is more costly at first. However, realizing this business need, Microsoft is releasing Windows Essential Business Server 2008 (WEBS) as a multi-component server offering for these organizations. WEBS is made up of three server installations:

• Windows Essential Business Server Management Server. To manage the WEBS network as well as worker collaboration and network services centrally.

• Windows Essential Business Server Security Server. To manage security, Internet access, and remote-worker connectivity.

• Windows Essential Business Server Messaging Server. To provide messaging capabilities.

Medium organizations face the challenge of having to interconnect more than one office. While small organizations have the protection of being in a single location, medium organizations often need to bridge the Internet to connect sites together. This introduces an additional level of complexity.

Large organizations have much more complex networks that provide both internal and external services. In addition, they may need to interoperate in several languages and will often have internally developed applications to manage. Large organizations may also have remote sites connected at varying levels of speed and reliability, such as Integrated Services Digital Network (ISDN) or dial-up links. From a Windows standpoint, this necessitates planned replication and possibly an architecture based on the Distributed File System (DFS). For these reasons, they include many more service types than small or medium organizations.

Source of Information : McGraw Hill Microsoft Windows Server 2008

BlackBerry Prepares To Storm The Smartphone Market

There once was a time when choosing a smartphone was an easy endeavor because only a few models were available. Now, the market is saturated with a massive range of models that differ widely in style and function. Even RIM, which had been the pinnacle of simplicity with just one device (albeit not a smartphone, but a smart pager of sorts), now offers plenty of BlackBerry models.

In August 2008, we reported on the new BlackBerry Bold, the first BlackBerry to support tri-band HSDPA (High-Speed Downlink Packet Access) networks. Now, details have surfaced on RIM’s first touchscreen BlackBerry, the Storm. Perhaps most notable about the Storm is that the device resembles an Apple iPhone more than a traditional BlackBerry device.

BlackBerry traditionalists might miss the physical keyboard on the Storm, which opts instead for a light-sensing, 3.2-inch touchscreen with 480 x 360 resolution. A 528MHz processor powers the device, which also includes 1GB of built-in memory (with support for another 16GB of memory using microSD cards).

The Storm isn’t quite as business-centric as other BlackBerry models. The device supports a wide range of programs, including entertainment-based software that can be downloaded from a built-in browser. Also included are GPS (global positioning system) navigation; a 3.2-megapixel camera that allows video recording; an organizer; BlackBerry Maps; a media player; corporate data access; SMS (Short Message Service); and MMS (Multimedia Messaging Service). The phone features voice-activated dialing, conference calling, speakerphone, and voicemail attachment playback. Also provided is access to social networking sites, such as Facebook and MySpace.

Users can sync iTunes music files on their desktops or notebooks with the Storm using the device’s BlackBerry Media Sync, while the included Roxio Media Manager lets users create a personal jukebox. Mobile-streaming support provides access to mobile versions of news, television, and other media sites, such as YouTube. As expected with any BlackBerry device, the Storm includes extensive email compatibility, with support for BlackBerry Enterprise Server for Microsoft Exchange, IBM Lotus Domino, and Novell GroupWise, as well as support for existing enterprise and personal email accounts.

The Storm has an approximate talk time of 5.5 hours and 15 days of standby time, and it features password protection, screen lock, and other security features. Although the price and precise release date were unavailable at press time, the Storm was expected to be available from Verizon Wireless by the end of 2008.

Source of Information : Smart Computing / January 2009

DHCP Message Format

The fields in the DHCP message are the following:

• Message Op Code (Op). A 1-byte field that indicates whether the message is a request (set to 1) or a reply (set to 2).

• Hardware Address Type (Htype). A 1-byte field that indicates the type of hardware being used by the DHCP client. This field uses the same values as the Hardware Type field in the Address Resolution Protocol (ARP) header. For a complete list of ARP Hardware Type values, see http://www.iana.org/assignments/arp-parameters.

• Hardware Address Length (Hlen). A 1-byte field that indicates the number of high-order bytes within the fixed-length Client Hardware Address field that contains the client’s hardware address. For commonly used IEEE 802-based technologies, such as Ethernet and IEEE 802.11, the value of this field is 6.

• Hops. A 1-byte field that indicates how many DHCP relay agents have forwarded the message. The initial value is 0. When a DHCP relay agent forwards a DHCP message on behalf of either a DHCP client or a DHCP server, it increments this field. The maximum number of hops in a DHCP infrastructure is 16. If the value is greater than 16, the receiving DHCP relay agent silently discards the message. DHCP relay agents can also discard DHCP messages if this field exceeds a configurable value. For example, the DHCP Relay Agent component of Routing and Remote Access in Windows Server 2008 uses a default maximum of 4 hops.

• Transaction ID (Xid). A 4-byte field that contains a random number derived by the DHCP client to group all of the DHCP messages of a given message exchange together, such as all of the messages for a lease acquisition.

• Seconds (Secs). A 2-byte field set by the DHCP client to indicate the number of seconds that have elapsed since the client began the address acquisition process.

• Flags. A 2-byte field that indicates flags that are set by the DHCP client. RFC 2131 defines the high-order bit as the Broadcast flag. A DHCP client uses the Broadcast flag to indicate that it can (set to 0) or cannot (set to 1) receive unicast IP datagrams even though it has not been configured with an IP address. Windows Server 2008 and Windows Vista-based DHCP clients set the Broadcast flag to 1 (responses must be broadcast). If the DHCP server has been configured to process this flag, it will send its response as either a unicast (when the Broadcast flag is set to 0) or as a broadcast (when the Broadcast flag is set to 1).

• Client IP Address (Ciaddr). A 4-byte field that indicates a DHCP client’s IP address. This field is set by the DHCP client in DHCP messages when it has been successfully configured with the IP address and can respond to ARP requests to defend the use of the address.

• Your IP Address (Yiaddr). A 4-byte field that indicates the IP address that is being allocated to the DHCP client by the DHCP server.

• Server IP Address (Siaddr). A 4-byte field that indicates the IP address of the DHCP server that is offering an IP address.

• Gateway IP Address (Giaddr). A 4-byte field that indicates an IP address that is assigned to the interface on the initial DHCP relay agent that received the message from the DHCP client. The initial DHCP relay agent is located on the same subnet as the DHCP client that broadcast the DHCP request message (either a DHCPDISCOVER or DHCPREQUEST message). By recording an IP address for the subnet of the DHCP client in this field, the DHCP server can determine the proper scope from which to assign an IP address to the requesting DHCP client.

• Client Hardware Address (Chaddr). A 16-byte field that indicates the hardware address of the DHCP client. To determine how many bytes are used for the hardware address, the DHCP server and relay agent use the value of the Hardware Address Length field. For commonly used IEEE 802-based technologies, this field contains the 6-byte media access control (MAC) address of the Ethernet or 802.11 network adapter of the DHCP client and 10 bytes set to 0.

• Server Host Name (Sname). A 64-byte field that indicates a name for the DHCP server. The DHCP Server service in Windows Server 2008 does not use this field.

• Boot File Name (File). A 128-byte field that indicates the name of the file containing a boot image for a BOOTP client. BOOTP was developed before DHCP to allow a diskless host computer to obtain an IP address configuration, the name of a boot file, and the location of a Trivial File Transfer Protocol (TFTP) server from which the computer loads the boot file. DHCP message exchanges do not use this field.

• Options. A variable-length set of fields containing DHCP options.


By default, the DHCP Server service in Windows Server 2008 ignores the Broadcast flag in the Flags field of broadcast-based DHCP messages received from DHCP clients. To configure the DHCP Server service to process the Broadcast flag, create the IgnoreBroadcastFlag registry value and set it to 0.
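To make the layout concrete, the following Python sketch unpacks the fixed portion of a DHCP header with the standard struct module, using the field sizes listed above. It is a minimal illustration (options parsing is omitted), not a full DHCP implementation:

import struct

# Fixed-length DHCP header, per the field list above: op (1 byte),
# htype (1), hlen (1), hops (1), xid (4), secs (2), flags (2),
# ciaddr (4), yiaddr (4), siaddr (4), giaddr (4), chaddr (16),
# sname (64), file (128). Variable-length options follow.
DHCP_HEADER = struct.Struct("!BBBBIHH4s4s4s4s16s64s128s")

BROADCAST_FLAG = 0x8000  # high-order bit of the Flags field

def parse_dhcp_header(data):
    """Unpack the fixed 236-byte DHCP header into named fields."""
    (op, htype, hlen, hops, xid, secs, flags, ciaddr, yiaddr,
     siaddr, giaddr, chaddr, sname, file_) = DHCP_HEADER.unpack(
        data[:DHCP_HEADER.size])
    dotted = lambda b: ".".join(str(octet) for octet in b)
    return {
        "op": "request" if op == 1 else "reply",
        "hops": hops,
        "xid": xid,
        "secs": secs,
        # Broadcast flag set to 1: client cannot yet receive unicast.
        "broadcast": bool(flags & BROADCAST_FLAG),
        # Only the first hlen bytes of chaddr are significant (6 for Ethernet).
        "client_mac": chaddr[:hlen].hex(":"),
        "ciaddr": dotted(ciaddr),
        "yiaddr": dotted(yiaddr),
        "siaddr": dotted(siaddr),
        "giaddr": dotted(giaddr),
    }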

Source of Information : Microsoft Press Windows Server 2008 TCP IP Protocols and Services

Intel Readies Do-It-All Mobile Chip

There’s no denying the world of computing is growing increasingly mobile-centric. Consumers want devices that are not only portable but that also can perform similarly to desktop and notebook computers. Intel is working to create technologies that can support these heavy demands. “Technology innovation is the catalyst for new user experiences, industry collaborations, and business models that together will shape the next 40 years,” said Anand Chandrasekher, senior vice president and general manager of Intel’s Ultra Mobility Group, at the Intel Developer Forum in Taiwan. “As the next billion people connect to and experience the Internet, significant opportunities lie in the power of technology and the development of purpose-built devices that deliver more targeted computing needs and experiences.” One of these technologies is Intel’s upcoming “Moorestown” platform, which revolves around an SOC, or system on a chip, that integrates a 45-nanometer processor, graphics, memory controller, and video encoding/decoding technology onto a single chip. Despite all of these features jammed onto a single device, the chip is slated to be surprisingly power efficient: Chandrasekher says Intel aims to reduce the platform’s idle power by more than 10 times compared to first-generation MIDs (mobile Internet devices) based on Intel’s Atom processor.


Display Driver Support in Windows Server 2008

Windows Server 2003 shipped with support for many third-party display devices. This support differed based on the product SKU (for example, Standard edition vs. Enterprise edition). On Standard edition, third-party drivers were present and fully supported, enabling complete display support if the system was going to be used as a client or in a workstation environment or role. On Enterprise edition, the development team decided to include no third-party display driver support: Direct3D was disabled by default via registry settings, and only the Microsoft-owned VGA drivers for base display support were shipped.

Windows Server 2008 pre-Beta3 had the same display device support that was in the Windows Vista client release. The code base used for the Windows Server 2008 release was carried forward from Windows Vista and, along with it, all the client-level device support. Windows Server 2008 does not have the ability to differentiate device support on a per-SKU or per-edition basis as Windows Server 2003 did. So the only differentiating mechanism available is the product type decoration in the driver INFs, which is documented on MSDN at http://msdn2.microsoft.com/en-us/library/ms794359.aspx.

The Windows graphics team reviewed the current limitations of per-SKU differentiation and discussed many options with hardware and OEM partners. The decision was to mark all inbox third-party display drivers as Workstation only, thereby not enabling them on any Windows Server 2008 SKU. Starting with the Beta3 release of Windows Server 2008, the default user experience upon installing the operating system is that the user will boot the machine with the Microsoft-supplied VGA driver. Some of the specifics of this include the following:

• The VGA driver always assumes that a monitor is connected even if it cannot detect one.

• The default display resolution for the VGA driver is 800 x 600. There is logic to choosing a higher resolution, but that is bypassed for the VGA driver because we cannot determine the frequency it will use for resolutions set through the VESA BIOS.

• The frequency of modes for the VGA driver is chosen by the video BIOS, but for 800 x 600 we have yet to encounter a BIOS that defaults to anything other than 60 Hz. The reason we use 800 x 600 as the default mode is that we found a few BIOSes that choose unusual timings for higher resolutions.

• Regarding color depth, the default color depth is 32 bits per pixel at 800 x 600. If this mode is not supported, we try the highest color depth available for 800 x 600, and in the very rare cases where none is available, we take the best 640 x 480 color depth mode available. (A sketch of this fallback logic follows the list.)

• Regarding the maximum display resolutions available when running the VGA driver, this is based on what the display device reports as available VESA modes listed in its video BIOS. Most devices since 2004 should properly support 640 x 480, 800 x 600, 1024 x 768, and 1280 x 1024 at good color depths, either 16 or 32 bpp.

• With the VGA driver installed, only system hibernation is supported. You cannot enter any sleep states other than S4 (hibernation) and S5 (power off).
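The color-depth fallback described above can be expressed compactly. The following Python sketch models the selection logic as described; it is an illustration only, not the actual driver code:

def pick_default_vga_mode(supported_modes):
    # supported_modes: set of (width, height, bits_per_pixel) tuples
    # reported by the video BIOS as available VESA modes.
    # Preferred default: 800 x 600 at 32 bits per pixel.
    if (800, 600, 32) in supported_modes:
        return (800, 600, 32)
    # Otherwise, the highest color depth available at 800 x 600.
    at_800 = [m for m in supported_modes if m[:2] == (800, 600)]
    if at_800:
        return max(at_800, key=lambda m: m[2])
    # In the very rare case that 800 x 600 is unavailable, fall back
    # to the best 640 x 480 color depth mode available.
    at_640 = [m for m in supported_modes if m[:2] == (640, 480)]
    return max(at_640, key=lambda m: m[2])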

Although third-party display drivers have been removed from the Windows Server 2008 release, the core graphics infrastructure is still in place to re-enable full display functionality. You need to go through Windows Update or your system provider to obtain display drivers for your hardware and install them to regain functionality.

Source of Information : Introducing Windows Server 2008

BlueTrack Revolutionizes Mouse Tracking

First there was the mouse ball. Then came optical mice with LEDs. Then came lasers. Now we have BlueTrack, a new Microsoft technology that the company claims works better than both optical and laser technologies. But how different is it from these older technologies? Microsoft’s unique combination of both optical and laser technologies helps BlueTrack mice to track better on more surfaces, according to the company. Microsoft says that BlueTrack will allow people to use mice on carpet, granite, and even rough-grain wood (park bench computing, anyone?).

The light beam that emanates from the bottom of the mouse is more than four times as large as the average laser beam used in today’s mice, according to Microsoft. This allows larger images to be captured, in turn providing a better surface reflection. In addition to providing more efficient tracking on varied surfaces, the company says BlueTrack further trumps laser technology because lasers are more sensitive to dust and dirt accumulation. The Microsoft Explorer Mouse with BlueTrack technology ($99.95; www.microsoft.com) features a wireless snap-in transceiver for easy portability, 30-foot wireless range, blue lighting effects, customizable buttons, and a battery that allows three weeks of use between charges.


Source of Information : Smart Computing / January 2009

Accounts Model in Windows Fax Server

Fax Server in Windows Server 2008 introduces the concept of Accounts. An account can be briefly described as a registration between an authenticated user and the fax server.

All clients connecting to a Windows Server 2008 Fax Server need to have an account with the fax server. If the account already exists, the server authenticates the user and establishes the connection. If the account does not exist, the server either automatically creates the account or, if not permitted to do so, denies the connection to the client. An account corresponds to the end user’s Windows credentials and is tied to Windows authentication, so an administrator creating an account for an end user simply grants that Windows-authenticated user access to the fax server.
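The connection decision just described can be modeled in a few lines. The Python sketch below is a conceptual model only; the function and the account store here are hypothetical, not the fax server’s actual API:

def handle_client_connection(user, accounts, auto_create):
    # user: an already Windows-authenticated identity.
    if user in accounts:
        return "connected"      # account exists: establish the connection
    if auto_create:
        accounts.add(user)      # auto-create the account, then connect
        return "connected"
    return "denied"             # no account and auto-create is off

# Example: with auto-create turned off, an unknown user is refused.
accounts = {r"CONTOSO\alice"}
print(handle_client_connection(r"CONTOSO\bob", accounts, auto_create=False))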

The first configuration setting that the fax administrator needs to consider is whether the server supports auto-creation of accounts. This setting works as follows: In an auto-create environment, the server automatically creates an account for an authenticated user if one does not exist. If the setting is turned off, the server does not create the account and denies the connection. The administrator can choose this setting by taking the following steps:

1. Launch the fax service manager.

2. Right-click on the root node.

3. Launch the Properties dialog.

4. Navigate to the Accounts tab.

5. Choose whether the Auto-Create Accounts On Connection option is On or Off.

If auto-create is Off, the fax administrator has to manually manage the accounts on the fax server. If a user needs to work with the fax server, the administrator creates the account manually and then asks the user to try connecting to the fax server. To create an account, the fax administrator takes the following steps:

1. Launch the fax service manager.

2. Navigate to the Accounts node.

3. Choose Action -> New Account…

4. Enter the username and the domain for the account.

5. Click Create to create the new account.

The default setting in Windows Server 2008 is that the auto-creation of accounts is turned on. The administrator can use the same Accounts node to delete an existing account. When an end user launches Windows Fax and Scan to work with a Windows Server 2008 Fax Server, the server checks whether the user has an account. The connection is established only if the particular user has a valid account on the fax server. The accounts model in fax server allows a higher degree of control for the fax administrator, and it enforces better security on the fax server.

Source of Information : Introducing Windows Server 2008

Integrated Graphics No Longer The Underdog

Historically, integrated graphics—the graphics technology included with motherboards—have paled in comparison to the performance of discrete graphics cards. This comes as little surprise, as cards have plenty more real estate to pack on graphics processors and dedicated graphics memory than do motherboards. But as circuitry continues to shrink, the game is changing, thanks to Nvidia.

“We’ve combined the power of three different chips into one highly compact and efficient GPU,” says Drew Henry, general manager of MCP business at Nvidia. “In doing so, we’ve redefined the level of performance people can expect from a motherboard solution to enrich visual computing experiences for mainstream systems. You can now have the performance of a discrete GPU in a small form-factor PC.”

Nvidia’s new GeForce 9400 and 9300 motherboard GPUs for desktop PCs on the Intel platform use a 16-core graphics architecture that supports DirectX 10 games. These GPUs also enable high-quality video playback with the help of the company’s PureVideo HD (high-definition) technology, which offloads all of the video processing from the CPU to the GPU. Also provided is support for advanced audio and video connectivity, such as uncompressed LPCM (linear pulse code modulation) 7.1 audio, dual-link DVI (Digital Visual Interface), and HDMI (High-Definition Multimedia Interface). Manufacturers releasing motherboards with these chips include Asus, ECS, Evga, Gigabyte, XFX, MSI, Foxconn, Galaxy, J&W, Onda, and Zotac.

Source of Information : Smart Computing / January 2009

Public and Private Mode in Windows Server 2008 Fax Server

Fax Server in Windows Server 2008 supports two different operating modes, which are governed by the Reassign Setting for the fax server. These two modes, described in the following list, apply only to incoming fax messages:

Public Mode. In this mode, all fax messages are received in a central Server Inbox, and these are visible and available to all users of the fax server.

Private Mode. In this mode, all fax messages are received in a central Server Inbox, but the inbox is hidden from individual users, and they cannot access these fax messages until the messages are assigned to the mailboxes of the individual users.

Configuring the fax server in public mode is recommended for small businesses that do not have a dedicated fax administrator or routing assistants and are comfortable with having the incoming faxes available to all users of the fax server. For example, a travel agency might configure its fax server in public mode so that all the travel agents are able to view incoming faxes and work with them. This mode does not require a large administration overhead, and it’s easy to configure and use. In this mode, when an authenticated user launches her Windows Fax and Scan application, she will have access to all the received faxes on the server.

Configuring the fax server in private mode is recommended for businesses that might have either a dedicated fax administrator or an IT generalist who manages the fax server along with other server roles such as file and print. This mode is recommended for the usage scenario where individual faxes need to be kept private and made available only to the intended recipient. This setting requires that the business employ routing assistants who have access to the protected server inbox, go through the received faxes manually, and assign them to the intended recipient. In this mode, when an authenticated user launches her Windows Fax and Scan application, the faxes that are assigned to her account show up in her Inbox.
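In effect, the two modes differ only in a visibility rule over the central Server Inbox. Here is a minimal Python sketch of that rule (the data layout is hypothetical, for illustration only):

def visible_faxes(server_inbox, user, mode):
    # Public mode: every user of the fax server sees the whole Server Inbox.
    if mode == "public":
        return list(server_inbox)
    # Private mode: a user sees only faxes already assigned to her account;
    # unassigned faxes remain visible only to routing assistants.
    return [fax for fax in server_inbox if fax["assigned_to"] == user]

inbox = [{"id": 1, "assigned_to": None},
         {"id": 2, "assigned_to": r"CONTOSO\alice"}]
print(visible_faxes(inbox, r"CONTOSO\alice", mode="private"))  # fax 2 only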

The default setting in Windows Server 2008 is public mode. To configure this setting, the fax administrator does the following:

1. Launch the fax service manager.

2. Right-click on the root node.

3. Launch the Properties dialog.

4. Navigate to the Accounts tab.

5. Choose whether the Reassign Setting is On or Off.

When using private mode, the administrator has to designate certain users as routing assistants. These users have access to the server inbox and perform the actual reassign operation. To designate a particular user as a routing assistant, the fax administrator does the following:

1. Launch the fax service manager.

2. Right-click on the root node.

3. Launch the Properties dialog.

4. Navigate to the Security tab.

5. Click the Advanced button.

6. Choose the particular user who needs to be designated as the routing assistant.

7. Click Edit.

8. Select the Allow checkbox for the Manage Server Receive Folder setting.

As mentioned earlier, assigning a fax is permissible only in private mode. If a server has been set up in private mode, the routing assistants launch the Windows Fax and Scan application to assign faxes to the ultimate recipients. The routing assistants have access to the private server inbox that contains the unassigned faxes. If there are any unassigned faxes, the routing assistants can right-click the fax message and choose the Reassign task. Doing this displays a dialog box in which the user can choose the fax accounts to which the fax has to be assigned. The routing assistant can also optionally add some fax message metadata such as the subject and the sender, if that is displayed on the cover page. When the routing assistant completes the assign operation, the fax is marked as assigned and delivered into the Inbox of the intended recipient.

The fax server can be switched between the two modes at any time by choosing the On/Off option for the Reassign Setting. This new feature in Windows Server 2008 makes the management of received faxes easy and efficient, and it can be tailored to the requirements of the business.

Source of Information : Introducing Windows Server 2008

Don’t Toss That Old Hard Drive

Internal hard drives—both for desktops and laptops—have dropped so dramatically in price in recent years that many consumers regularly replace their drives with beefier models to accommodate their ever-expanding collections of video files, music, and other storage-hungry content.

But that practice can present a challenge: Do you try to sell the old drive for pennies on the dollar or simply discard it? Now, you don’t have to do either. Thermaltake has devised a unique solution for using internal hard drives that are no longer installed in your computers. The BlacX SE USB hard drive dock ($69.99; www.thermaltakeusa.com) lets you plug in any 2.5- or 3.5-inch SATA (Serial Advanced Technology Attachment) hard drive up to 1.5TB (terabytes).

The BlacX accommodates hot-swapping, which means you can simply plug a hard drive into the dock while the dock is connected to your running PC. Although this convenience could prove handy for backups and for accessing files on older drives (though not too old—the dock supports only SATA drives, after all), the design does expose part of the hard drive, in turn potentially allowing for EMI (electromagnetic interference) issues. Further, drives inserted in the dock are limited to USB 2.0 speeds, but this dock is intended as a supplement to existing internal hard drives.

Source of Information : Smart Computing / January 2009

Netbooks Become The New Notebooks

Despite the rough economy, certain segments of the computer market are faring well. One of these is the netbook (also known as the ultra-portable or mini-notebook), which continues to carve a comfortable niche in the PC market by providing an ideal mix of power and portability. HP’s latest entries into this segment should further cement the netbook’s status as a viable contender to conventional notebooks.

Following up on the release of its sub-$500, student-targeted HP 2133 Mini-Note in April 2008, the company has now expanded its HP Mini family with the new HP Mini 1000 line. Each of the three new netbooks measures less than 1 inch thick, weighs 2.25 pounds, and has a keyboard that is slightly smaller than a standard notebook keyboard.

All models also include a built-in Web cam and microphone, along with a BrightView widescreen Infinity display with flush glass, LED (light-emitting diode) backlight, and 1,024 x 600 resolution. Each is powered by an Intel Atom N270 1.6GHz processor. The HP Mini 1000 (starting at $399.99; www.hp.com) includes Windows XP Home, an 8.9-inch display (a 10.2-inch display is also available), 512MB of DDR2 (double-data rate 2) memory (1GB is also available), an 8GB SSD (solid-state drive; 16GB SSD and 60GB, 4,200rpm PATA [Parallel Advanced Technology Attachment] drives are also available), a Wireless-G card, a 3-cell lithium polymer battery, integrated stereo speakers, two USB ports, a microphone-in port, and Microsoft Works 9.

The HP Mini 1000 with MIE (Mobile Internet Experience) software (starting at $379) will be available this month and features an HP-developed, dashboard-style interface that’s designed to streamline the viewing of digital content such as videos, photos, music, and email. The MIE model is bundled with Internet-focused software for tasks such as instant messaging, email, and online video chat. These applications are preloaded and run from the MIE dashboard to minimize start time, and favorite Web sites that are added to the dashboard stay live.

If you’re feeling fancy, a third model is also available. The HP Mini 1000 Vivienne Tam Edition (starting at $699) features the artwork of designer Vivienne Tam, with a peony flower-inspired design that reflects Tam’s Spring 2009 collection. The design was first unveiled on the runway of the designer’s fashion show during New York’s Fashion Week last September, though according to HP, onlookers mistook the HP Mini 1000 for a purse.

Source of Information : Smart Computing / January 2009

Using Directory Services Restore Mode

Active Directory is a special kind of hierarchical database that stores system settings, computer information, user information, application configuration, and a wealth of other information and statistics about your network. In fact, Active Directory is the most important database on your server. When this database becomes corrupted, it can prevent your server from booting because Windows can’t find the settings it needs. Choosing the Directory Services Restore Mode option tells Windows to attempt to fix Active Directory — at least enough to let you boot the server. After you boot the server, you can restore any backup you have to fix the problem completely.

You find Active Directory used only on domain controllers. If your server isn’t a domain controller, it doesn’t have Active Directory installed and you should never use this option with it. When you use the Directory Services Restore Mode option, Windows performs the following tasks:

1. The server begins booting as if you had selected a Safe Mode option.

2. The server then performs a check of the hard drives on your system. This check looks for any problems with the hard drive that could have caused the Active Directory corruption (using the ChkDsk utility).

3. After a few more configuration tasks take place, you see a normal login screen. Supply your credentials and you see a Safe Mode screen — not the normal GUI.

4. Use any Active Directory GUI or command line tool to make repairs to Active Directory. You can also restore any backup you made (assuming the backup is available in Safe Mode).

5. After you finish the repairs, type Shutdown /r and press Enter at the command prompt or choose Start -> Shutdown.

When you’re working at the command line, Windows displays a You Are About to be Logged Off dialog box. After about a minute, the server reboots. When working with the GUI shutdown, you see the normal Shut Down Windows dialog box, where you can choose any of the standard shutdown options. You can use Windows Server 2008 in its normal mode at this point and continue any repairs you need to make to Active Directory.

Source of Information : For Dummies Windows Server 2008

Using the last known good configuration

Many errors occur due to a configuration change. For example, you might install a new device driver and find that the system suddenly doesn’t boot because of it. A new application can cause the system to fail as well. Any change that affects the boot sequence can cause problems that seem impossible to fix. The Last Known Good Configuration (Advanced) option lets you use the configuration from the last time that Windows booted successfully without using any of the special options. Think of it as an undo feature — you can reverse the effects of a single bad decision, configuration change, or installation.

Of course, this feature isn’t the same as creating a system restore point. You can use it only to reverse changes that prevent the system from booting properly. A system restore point is an automatic or manual process of saving the system settings when a major system change occurs or simply because you want to save your system setup (always a good idea when you install a new application). Never count on the Last Known Good Configuration (Advanced) option as a replacement for creating a system restore point.

You can’t undo the use of the Last Known Good Configuration (Advanced) option. Any changes that you reverse using this feature are gone, which makes this option a hammer when you really wanted a screwdriver. Always use this option with care. It’s really a last-ditch effort to get your server going again when all other options have failed.

Source of Information : For Dummies Windows Server 2008 All In One Desk Reference For Dummies

Enabling boot logging in Windows Server 2008

Whenever you start your computer in Safe Mode, you’ll notice a number of messages scrolling by that tell you which file Windows is loading. Unfortunately, the list can scroll by so fast that you can’t read it. Knowing which file Windows is loading is important because loading the wrong file at the wrong time can prove fatal when getting the operating system to work. Selecting the Enable Boot Logging option slows the Windows loading process considerably because the operating system records everything it loads into the NTBtLog.TXT file, located in the %SystemRoot% folder (normally C:\Windows) of your system. You can open this file using Notepad and see a list of the files that Windows loads.

Of course, all those filenames may not mean much to you. Sure, you might recognize a few of them, but for the most part, the meaning isn’t clear. Fortunately, you can check most of these filenames online. A simple Google search is enough to provide everything you need in most cases. You can also go to sites such as the ones in the following list to view information about the files:

• Program Checker: http://www.programchecker.com/

• Spyware.net: http://www.fbmsoftware.com/spyware-net/

• Software Tips & Tricks: http://www.softwaretipsandtricks.com/necessary_files/

• eConsultant: http://www.econsultant.com/windows-tasks/

You have another option for obtaining information about the individual files that load during the boot process. Because the NTBtLog.TXT file contains the full path to each of the files, you can quickly locate an individual file that Windows loaded. Right-click the file and choose Properties from the context menu. On the Digital Signature tab, you can verify the digital signature of the company that signed the file. The Details tab provides significant information about the file that you can use for verification purposes on the many Web sites that provide this information.

The whole point of working through the NTBtLog.TXT file is to ensure that you know what’s loading and to look for potential sources of problems. In most cases, the log tells you when files haven’t loaded and tells you about errors that Windows experienced during the boot process. Although it’s perfectly normal to see some drivers fail to load, the failure of an essential driver is something you should note and fix.
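Because the log is plain text, you can also filter it programmatically. The Python sketch below assumes the "Did not load driver" line prefix that NTBtLog.TXT normally uses and a Unicode (UTF-16) encoded file, which is what recent Windows versions write; verify both against your own log:

from pathlib import Path

def boot_log_failures(log_path=r"C:\Windows\ntbtlog.txt"):
    """Return the driver entries that failed to load during boot."""
    try:
        text = Path(log_path).read_text(encoding="utf-16")
    except UnicodeError:
        # Fall back to the ANSI code page for older logs.
        text = Path(log_path).read_text(encoding="mbcs")
    return [line.removeprefix("Did not load driver").strip()
            for line in text.splitlines()
            if line.startswith("Did not load driver")]

for driver in boot_log_failures():
    print(driver)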

Source of Information : For Dummies Windows Server 2008 All In One Desk Reference For Dummies

Working with the Safe Mode options in Windows Server 2008

Safe Mode is one of the oldest diagnostic features of Windows, and it’s still one that you find used quite often to locate problems. The idea behind Safe Mode is that the operating system boots with the minimal number of features in place that are necessary for the operating system to work. By removing all the extraneous features, you can determine whether the operating system will even boot. If Windows doesn’t boot in Safe Mode, you can more or less guarantee that something terrible has happened and it won’t boot at all. Safe Mode also makes it possible to fix problems. You can uninstall a problem device driver, service, or application in order to boot the system. It’s also possible to undo registry changes that may have looked good at the time but ultimately caused Windows to stop booting properly. This mode normally lets you restore a backup that you made as long as the backup device has the proper drivers installed. In short, even though you don’t want to use Safe Mode to perform any actual work, it can clear the way to fixing your system and making it possible to boot it again.

Windows actually provides three kinds of Safe Mode. Each form serves a specific purpose, so you should choose the form that best suits your needs. The following sections describe each form.


Using standard Safe Mode
Standard Safe Mode is the most restrictive form: None of the non-essential device drivers, services, or applications load. In fact, you can’t even access the network. Your system becomes a standalone machine that really can’t do much except recover from whatever problem has affected it. Use standard Safe Mode when you don’t need network access but you do need to use the graphical interface to perform a task. For example, you can use this mode for the following tasks:

• Restore a backup.

• Perform a backup.

• Modify the registry.

• Uninstall an errant application, device driver, or service.

• Perform GUI-based diagnostics.


Using Safe Mode with Networking
The Safe Mode with Networking option performs the normal Safe Mode setup and then adds any drivers, services, and applications required to create a network connection. The resulting network connection lets you access other machines. Windows also restores any device mappings for your system so that you have access to hard drives on other systems. Whether you have access to printers depends on which drivers and application software the printer requires. You shouldn’t count on using a printer in Safe Mode because Windows doesn’t load the printer software in most cases. Use the Safe Mode with Networking option when you need the extra capability that network support can provide and you’re certain that the network isn’t the cause of your problem. You may actually want to start the system in the standard Safe Mode first to ensure that it boots at all before you use this option. You can use this mode for the following tasks:

• Install an application, device driver, or service update using a file on a server.

• Connect to another machine to compare its setup with the local setup.

• Use a shared Internet connection to obtain updates online.

• Use a shared Internet connection to search for troubleshooting help, leave help messages on newsgroups, and search vendor Web sites for additional information.

• Troubleshoot a network connectivity problem in an environment free of other software.

• Make the troubled system available for collaborative troubleshooting.

You should never place a machine with questionable software on the network. In some cases, a virus, some adware, or another type of malicious software can load, even using the Safe Mode with Networking option. When a system has a potential infection, you should isolate it from the rest of the network and perform any required cleanup before you reattach it. Otherwise, you risk giving the same problem to other machines on the network.


Using Safe Mode with Command Prompt
The Safe Mode with Command Prompt option starts the system in Safe Mode but doesn’t start the graphical user interface (GUI). What you see instead is a command prompt where you can run utilities to determine the system status. You may not think that the command prompt has much to offer, but you can perform nearly any configuration task at the command prompt without GUI interference. In fact, Windows Server 2008 includes a new utility named ServerManagerCmd that makes it considerably easier to configure your server from the command prompt. You can use this mode for the following tasks:

• Verify that the graphical components aren’t causing a system failure.

• Perform configuration tasks outside the GUI to determine whether the GUI is keeping them from completing normally.

• Use batch files or other character-based tools to troubleshoot your system faster than you can when using the GUI (this mode provides a significant performance boost).

The Safe Mode with Command Prompt option doesn’t start most of the GUI features that you may have used in the past. You can’t even use a mouse. Consequently, make sure you know how to perform tasks using just the keyboard.

In addition, you don’t have access to the Start menu. If you start in this mode, you need to type Shutdown /s and press Enter. This command shuts off the system completely. If you decide that you want to restart the computer instead, type Shutdown /r and press Enter. When working at the command prompt, use the pipe symbol (|) followed by the More command to display long screens of Help information. For example, if you want to see all the help information for the Shutdown command, you type Shutdown /? | More and press Enter. Windows displays the Help information one screen at a time.

Source of Information : For Dummies Windows Server 2008 All In One Desk Reference For Dummies

PKCS Standards

Here is a list of active PKCS standards. You will notice that there are gaps in the numbered sequence of these standards, and that is due to the retiring of standards over time since they were first introduced.

PKCS #1: RSA Cryptography Standard Outlines the encryption of data using the RSA algorithm. The purpose of the RSA Cryptography Standard is the development of digital signatures and digital envelopes. PKCS #1 also describes a syntax for RSA public keys and private keys. The public-key syntax is used for certificates, while the private-key syntax is used for encrypting private keys.

PKCS #3: Diffie-Hellman Key Agreement Standard Outlines the use of the Diffie-Hellman Key Agreement, a method of sharing a secret key between two parties. The shared secret key is then used to encrypt ongoing data transfer between the two parties. Whitfield Diffie and Martin Hellman developed the Diffie-Hellman algorithm in the 1970s as the first public asymmetric cryptographic system (asymmetric cryptography was invented in the United Kingdom earlier in the same decade, but was classified as a military secret). Diffie-Hellman overcomes the key-management issue of symmetric key systems, because the shared secret never has to be transmitted directly.
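A toy exchange shows why the agreement works: each party combines its own private value with the other’s public value and arrives at the same secret. The Python snippet below uses deliberately tiny, insecure numbers, purely for illustration:

# Toy Diffie-Hellman exchange with tiny, insecure parameters.
p, g = 23, 5                  # public prime modulus and generator

a = 6                         # Alice's private value (kept secret)
b = 15                        # Bob's private value (kept secret)

A = pow(g, a, p)              # Alice sends g^a mod p = 8
B = pow(g, b, p)              # Bob sends g^b mod p = 19

# Each side raises the other's public value to its own private value.
assert pow(B, a, p) == pow(A, b, p) == 2   # the shared secret key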

PKCS #5: Password-based Cryptography Standard A method for encrypting a string with a secret key that is derived from a password. The result of the method is an octet string (a sequence of 8-bit values). PKCS #5 is primarily used for encrypting private keys when they are being transmitted between computers.

PKCS #6: Extended-certificate Syntax Standard Deals with extended certificates. Extended certificates are made up of the X.509 certificate plus additional attributes. The additional attributes and the X.509 certificate can be verified using a single public-key operation. The issuer that signs the extended certificate is the same as the one that signs the X.509 certificate.

PKCS #7: Cryptographic Message Syntax Standard The foundation for the Secure/Multipurpose Internet Mail Extensions (S/MIME) standard. It is also compatible with Privacy-Enhanced Mail (PEM) and can be used in several different key-management architectures.

PKCS #8: Private-key Information Syntax Standard Describes a method of communication for private-key information that includes the use of a public-key algorithm and additional attributes (similar to PKCS #6). In this case, the attributes can be a DN or a root CA’s public key.

PKCS #9: Selected Attribute Types Defines the types of attributes for use in extended certificates (PKCS #6), digitally signed messages (PKCS #7), and private-key information (PKCS #8).

PKCS #10: Certification Request Syntax Standard Describes a syntax for certification requests. A certification request consists of a DN, a public key, and additional attributes. Certification requests are sent to a CA, which then issues the certificate.

PKCS #11: Cryptographic Token Interface Standard Specifies an application program interface (API) for token devices that hold encrypted information and perform cryptographic functions, such as smart cards and Universal Serial Bus (USB) tokens.

PKCS #12: Personal Information Exchange Syntax Standard Specifies a portable format for storing or transporting a user’s private keys and certificates. Ties into both PKCS #8 (communication of private-key information) and PKCS #11 (Cryptographic Token Interface Standard). Portable formats include diskettes, smart cards, and Personal Computer Memory Card International Association (PCMCIA) cards. On Microsoft Windows platforms, PKCS #12 format files are generally given the extension .pfx. PKCS #12 is the best standard format to use when exchanging private keys and certificates between systems.
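
Several of these standards are easy to see in practice. The following minimal sketch uses the open-source pyca/cryptography Python library (an assumption of this example; it is not part of Windows Server 2008, which uses CryptoAPI and CNG) to generate an RSA key pair (the PKCS #1 mathematics), wrap the private key in password-protected PKCS #8 form, and build a PKCS #10 certification request ready to submit to a CA. The subject name is a placeholder:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate an RSA key pair (the algorithm standardized in PKCS #1).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Wrap the private key in encrypted PKCS #8 form for safe transport.
    pkcs8_pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"passphrase"),
    )

    # Build a PKCS #10 certification request: a DN plus the public key,
    # signed with the private key, ready to send to a CA.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "host.example.com")]))
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())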

RSA-derived technology in its various forms is used extensively by Windows Server 2008 for such things as Kerberos authentication and S/MIME. In practice, the use of the PKI technology goes something like this: Two users, Dave and Dixine, wish to communicate privately. Dave and Dixine each own a key pair consisting of a public key and a private key. If Dave wants Dixine to send him an encrypted message, he first transmits his public key to Dixine. She then uses Dave’s public key to encrypt the message. Fundamentally, since Dave’s public key was used to encrypt, only Dave’s private key can be used to decrypt. When he receives the message, only he is able to read it. Security is maintained because only public keys are transmitted—the private keys are kept secret and are known only to their owners.
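
As a rough illustration of that exchange, here is a minimal sketch using the same pyca/cryptography Python library (again an assumption of this example, not how Windows implements it) in which a message encrypted with Dave's public key can be recovered only with his private key:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Dave's key pair; he shares only the public half with Dixine.
    dave_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    dave_public = dave_private.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Dixine encrypts with the public key; only the private key decrypts.
    ciphertext = dave_public.encrypt(b"For Dave's eyes only", oaep)
    assert dave_private.decrypt(ciphertext, oaep) == b"For Dave's eyes only"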

In a Windows Server 2008 PKI, a user’s public and private keys are stored under the user’s profile. For the administrator, the public keys would be under Documents and Settings\Administrator\System Certificates\My\Certificates and the private keys would be under Documents and Settings\Administrator\Crypto\RSA (where they are double encrypted by Microsoft’s Data Protection API, or DPAPI). Although a copy of the public keys is kept in the registry, and can even be kept in Active Directory, the private keys are vulnerable to deletion. If you delete a user profile, the private keys will be lost!

Source of Information : Syngress The Best Damn Windows Server 2008 Book Period 2nd Edition

How PKI Works

It is perhaps helpful to understand the term encryption and how PKI has evolved. The history of general cryptography almost certainly dates back to around 2000 B.C., and Roman and Greek statesmen later used simple alphabet-shifting algorithms to keep government communication private. Through time and civilizations, ciphering text played an important role in wars and politics. As modern times provided new communication methods, scrambling information became increasingly important. World War II brought about the first use of the computer in the cracking of Germany's Enigma code. In 1952, President Truman created the National Security Agency at Fort Meade, Maryland. This agency, which is the center of U.S. cryptographic activity, fulfills two important national functions: it protects all military and executive communication from being intercepted, and it intercepts and unscrambles messages sent by other countries.

Although complexity increased, not much changed until the 1970s, when the National Security Agency (NSA) worked with Dr. Horst Feistel to establish the Data Encryption Standard (DES) and Whitfield Diffie and Martin Hellman introduced the first public key cryptography standard. Windows Server 2008 still uses Diffie-Hellman (DH) algorithms for SSL, Transport Layer Security (TLS), and IPSec. Another major force in modern cryptography came about in the late 1970s. RSA Labs, founded by Ronald Rivest, Adi Shamir, and Leonard Adleman, furthered the concept of public key cryptography by developing the technology of key pairs, where plaintext that is encrypted by one key can be decrypted only by the other matching key.

There are three types of cryptographic functions. The hash function does not involve the use of a key at all; it applies a mathematical algorithm to the data in order to scramble it. The secret key method of encryption, which involves the use of a single key to encrypt and decrypt the information, is sometimes referred to as symmetric key cryptography. An excellent example of secret key encryption is the decoder ring you may have had as a child: any person who obtained your decoder ring could read your "secret" information. The third type, public key cryptography, uses a pair of mathematically related keys and is described later in this section.
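
For instance, a hash can be computed by anyone, with no key involved, yet any change to the input changes the digest entirely. A quick sketch in Python, using SHA-256 as a modern stand-in for the hash algorithms of the era:

    import hashlib

    # Same input, same digest; a one-character change scrambles it completely.
    print(hashlib.sha256(b"wire transfer: $100").hexdigest())
    print(hashlib.sha256(b"wire transfer: $900").hexdigest())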

There are basically two types of symmetric algorithms. Block symmetric algorithms work on fixed-length groups of bits known as blocks. Stream symmetric algorithms operate on a single bit at a time. One well-known block algorithm is DES. Windows 2000 uses a modified DES and performs that operation on 64-bit blocks using every eighth bit for parity. The resulting ciphertext is the same length as the original cleartext.

For export purposes, DES is also available with a 40-bit key. One advantage of secret key encryption is its efficiency: it takes a large amount of data and encrypts it quite rapidly. Symmetric algorithms can also be easily implemented at the hardware level. The major disadvantage of secret key encryption is that a single key is used for both encryption and decryption, so there must be a secure way for the two parties to exchange the one secret key.
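
The following sketch shows symmetric encryption in miniature, using the pyca/cryptography Python library with AES in CBC mode as a stand-in for the now-obsolete DES. One key both encrypts and decrypts, the data must fill whole blocks, and the ciphertext is the same length as the plaintext:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                  # the single shared secret key
    iv = os.urandom(16)
    plaintext = b"exactly 16 bytes"       # CBC mode operates on full blocks

    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # Decryption requires the very same key; hence the exchange problem.
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
    assert len(ciphertext) == len(plaintext)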

In the 1970s, this disadvantage of secret key encryption was eliminated through the mathematical implementation of public key encryption. Public key encryption, also referred to as asymmetric cryptography, replaced the one shared key with each user's own pair of keys. One key is a public key, which is made available to everyone and is used for the encryption process only. The other key in the pair, the private key, is available only to the owner. The private key cannot be derived from the publicly available public key. Any data that is encrypted by a public key can be decrypted only by using the private key of the pair. It is also possible for the owner to use a private key to encrypt sensitive information; if the data is encrypted by using the private key, then the public key in the pair is needed to decrypt the data. DH algorithms, by contrast, are used to establish a shared secret key, which is then used for symmetric key encryption. Let's say we have two users, Greg and Matt, who want to communicate privately. With DH, Greg and Matt each generate a random number.

Each of these numbers is known only to the person who generated it. Part one of the DH function changes each secret number into a nonsecret, or public, number. Greg and Matt now exchange the public numbers and then enter them into part two of the DH function. This results in a private key that is identical for both users. Through the underlying mathematics, this shared secret key can be derived only by someone with access to one of the original random numbers. As long as Greg and Matt keep the original numbers hidden, the shared secret key cannot be reversed.
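
The arithmetic behind this exchange fits in a few lines of Python. The numbers below are purely illustrative toys (real deployments use primes hundreds of digits long), but they show how Greg and Matt arrive at the same secret from the exchanged public numbers:

    # Toy Diffie-Hellman. p (prime modulus) and g (generator) are public.
    p, g = 23, 5
    a, b = 6, 15                          # Greg's and Matt's secret numbers

    A = pow(g, a, p)                      # public number Greg sends to Matt
    B = pow(g, b, p)                      # public number Matt sends to Greg

    # Each combines the other's public number with his own secret number.
    assert pow(B, a, p) == pow(A, b, p)   # identical shared secret key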

It should be apparent from the many and varied contributing sources to PKI technology that the need for management of this invaluable set of tools would become paramount. If PKI, like any other technology set, continued to develop without standards of any kind, then differing forms and evolutions of the technology would be implemented ad hoc throughout the world. Eventually, the theory holds, some iteration would render communication or interoperability between different forms impossible. At that point, the cost of standardization would be significant, and the amount of time lost in productivity and reconstruction of PKI systems would be immeasurable. Thus, a set of standards was developed for PKI. The Public-Key Cryptography Standards (PKCS) are a set of standard protocols used for securing the exchange of information through PKI. The list of these standards was actually established by RSA Laboratories (the same organization that developed the original RSA encryption standard), along with a group of participating technology leaders that included Microsoft, Sun, and Apple.

Source of Information : Syngress The Best Damn Windows Server 2008 Book Period 2nd Edition

PKI Enhancements in Windows Server 2008

Windows Server 2008 introduces many new enhancements that make a PKI solution easier to implement and, believe it or not, easier to develop. Some of these improvements extend to clients, such as the Windows Vista operating system. Overall, these improvements have increased manageability throughout Windows PKI. For example, the revocation services have been redesigned, and the attack surface for enrollment has been reduced. The following are the major highlights:

Enterprise PKI (PKIView). PKIView is a Microsoft Management Console (MMC) snap-in for Windows Server 2008. It can be used to monitor and analyze the health of the certificate authorities and to view details for each certificate authority certificate published in Active Directory Certificate Services.

Web Enrollment. First introduced in Windows 2000 Server, Web enrollment has been reworked: the new enrollment control is more secure, makes the use of scripts much easier, and is easier to update than previous versions.

Network Device Enrollment Service (NDES). In Windows Server 2008, this service represents Microsoft's implementation of the Simple Certificate Enrollment Protocol (SCEP), a communication protocol that makes it possible for software running on network devices, such as routers and switches that cannot otherwise be authenticated on the network, to enroll for X.509 certificates from a certificate authority.

Online Certificate Status Protocol (OCSP). In cases where conventional CRLs (Certificate Revocation Lists) are not an optimal solution, Online Responders can be configured on a single computer or in an Online Responder Array to manage and distribute revocation status information.

Group Policy and PKI. New certificate settings in Group Policy now enable administrators to manage certificate settings from a central location for all the computers in the domain.

Cryptography Next Generation. Cryptography Next Generation (CNG) leverages the U.S. government's Suite B cryptographic algorithms, which include algorithms for encryption, digital signatures, key exchange, and hashing. It offers a flexible development platform that allows IT professionals to create, update, and use custom cryptography algorithms in cryptography-related applications such as Active Directory Certificate Services (AD CS), Secure Sockets Layer (SSL), and Internet Protocol Security (IPSec).

Source of Information : Syngress The Best Damn Windows Server 2008 Book Period 2nd Edition

Components of PKI

In today’s network environments, key pairs are used in a variety of different functions.
This series will cover topics such as virtual private networks (VPNs), digital signatures, access control (SSH), secure e-mail (PGP and S/MIME), and secure Web access (Secure Sockets Layer, or SSL). Although these technologies are varied in purpose and use, each includes an implementation of PKI for managing trusted communications between a host and a client.

While PKI exists at some level within the innards of several types of communications technologies, its form can change from implementation to implementation. As such, the components necessary for a successful implementation can vary depending on the requirements, but in public key cryptography there is always:

• A private key
• A public key
• A trusted third party (TTP)

Since a public key must be associated with the name of its owner, a data structure known as a public key certificate is used. The certificate typically contains the owner’s name, their public key and e-mail address, validity dates for the certificate, the location of revocation information, the location of the issuer’s policies, and possibly other affiliate information that identifies the certificate issuer with an organization such as an employer or other institution.
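
Those fields are easy to inspect programmatically. Here is a minimal sketch using the pyca/cryptography Python library (an assumption of this example; the file name is a placeholder for any PEM-encoded certificate you have on hand):

    from cryptography import x509

    # Load a PEM-encoded certificate and print the fields described above.
    with open("user_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print("Subject:", cert.subject.rfc4514_string())
    print("Issuer: ", cert.issuer.rfc4514_string())
    print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)
    print("Key:    ", type(cert.public_key()).__name__)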

In most cases, the private and public keys are simply referred to as the private and public key certificates, and the trusted third party is commonly known as the certificate authority (CA). The certificate authority is the resource that must be available to both the holder of the private key and the holder of the public key. Entire hierarchies can exist within a public key infrastructure to support the use of multiple certificate authorities.

In addition to certificate authorities and the public and private key certificates they publish, there is a collection of components and functions associated with the management of the infrastructure. As such, a list of typical components required for a functional public key infrastructure would include, but not be limited to, the following:

• Digital certificates
• Certification authorities
• Certificate enrollment
• Certificate revocation
• Encryption/cryptography services

Source of Information : Syngress The Best Damn Windows Server 2008 Book Period 2nd Edition

What Is PKI?

The rapid growth of Internet use has given rise to new security concerns. Any company that does not configure a strong security infrastructure is putting itself at risk. An unscrupulous person could, if security were lax, steal information or modify business information in a way that could result in major financial disaster. To protect the organization's information, the middleman must be eliminated. Cryptographic technologies such as public key infrastructure (PKI) provide a way to identify both users and servers during network use.

PKI is the underlying cryptography system that enables users or computers that have never been in trusted communication before to validate themselves by referencing an association to a trusted third party (TTP). Once this verification is complete, the users and computers can now securely send messages, receive messages, and engage in transactions that include the interchange of data.

PKI is used in both private networks (intranets) and on the World Wide Web (the Internet). It is actually the latter, the Internet, that has driven the need for better methods for verifying credentials and authenticating users. Consider the vast number of transactions that take place every day over the Internet, from banking to shopping to accessing databases and sending messages or files. Each of these transactions involves at least two parties. The problem lies in verifying who those parties are and deciding whether to trust them with your credentials and information.

The PKI verification process is based on the use of keys, unique bits of data that serve one purpose: identifying the owner of the key. Every user of PKI actually generates or receives two types of keys: a public key and a private key. The two are actually connected and are referred to as a key pair. As the name suggests, the public key is made openly available to the public while the private key is limited to the actual owner of the key pair. Through the use of these keys, messages can be encrypted and decrypted, allowing data to be exchanged securely.

The use of PKI on the World Wide Web is so pervasive that it is likely that every Internet user has used it without even being aware of it. However, PKI is not simply limited to the Web; applications such as Pretty Good Privacy (PGP) also leverage the basis of PKI technology for e-mail protection; FTP over SSL/TLS uses PKI, and many other protocols have the ability to manage the verification of identities through the use of key-based technology. Companies such as VeriSign and Entrust exist as trusted third-party vendors, enabling a world of online users who are strangers to find a common point of reference for establishing confidentiality, message integrity, and user authentication. Literally millions of secured online transactions take place every day leveraging their services within a public key infrastructure.

Technology uses aside, PKI fundamentally addresses relational matters within communications. Specifically, PKI seeks to provide solutions for the following:

• Proper authentication
• Trust
• Confidentiality
• Integrity
• Nonrepudiation

By using the core PKI elements of public key cryptography, digital signatures, and certificates, all these equally important goals can be met successfully. The good news is that the majority of the work involved in implementing these elements under Windows Server 2008 is taken care of automatically by the operating system and is done behind the scenes.

The first goal, proper authentication, means that you can be highly certain that an entity such as a user or a computer is indeed the entity he, she, or it claims to be. Think of a bank. If you want to cash a large check, the teller will more than likely ask for some identification. If you present the teller with a driver's license and the picture on it matches your face, the teller can be highly certain that you are that person, provided the teller trusts the validity of the license itself. Because the driver's license is issued by a government agency (a trusted third party), the teller is more likely to accept it as valid proof of your identity than if you presented an employee ID card issued by a small company that the teller has never heard of. As you can see, trust and authentication work hand in hand.

When transferring data across a network, confidentiality ensures that the data cannot be viewed and understood by any third party. The data might be anything from an e-mail message to a database of social security numbers. In the last 20 years, more effort has been spent trying to achieve this goal (data confidentiality) than perhaps all the others combined. In fact, the entire scientific field of cryptology is devoted to ensuring confidentiality (as well as all the other PKI goals).

As important as confidentiality is, however, the importance of network data integrity should not be underestimated. Consider the extreme implications of a patient’s medical records being intercepted during transmission and then maliciously or accidentally altered before being sent on to their destination. Integrity gives confidence to a recipient that data has arrived in its original form and hasn’t been changed or edited.

Finally, we come to nonrepudiation. A bit more obscure than the other goals, nonrepudiation allows you to prove that a particular entity sent a particular piece of data; it is impossible for the entity to deny having sent it. This makes it extremely difficult for an attacker to masquerade as a legitimate user and send malevolent data across the network. Nonrepudiation is related to, but separate from, authentication.
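
Digital signatures are what deliver the last two goals. In the sketch below (using the pyca/cryptography Python library as an assumed stand-in for the platform's own cryptographic services), only the private key holder can produce the signature, which gives nonrepudiation, and verification fails if even one byte of the data changes, which gives integrity:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    record = b"patient record: unchanged"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Only the holder of the private key can produce this signature...
    signature = signer.sign(record, pss, hashes.SHA256())

    # ...while anyone holding the public key can check it. verify() raises
    # InvalidSignature if the record was altered after signing.
    signer.public_key().verify(signature, record, pss, hashes.SHA256())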

Source of Information : Syngress The Best Damn Windows Server 2008 Book Period 2nd Edition

What NAP Does

If you want a short definition of NAP, it’s this: NAP is a platform that can enforce compliance by computing devices with predetermined health requirements before these devices are allowed to access or communicate on a network. By itself, NAP is not designed to protect your network and is not intended to replace firewalls, AV products, patch management systems, and other protection elements. Instead, it’s designed to work together with these different elements to ensure devices on your network comply with policy that you have defined. And by devices I mean client computers (Windows Vista and soon Windows XP as well), servers running Windows Server 2008, PDAs running Windows Mobile (soon), and eventually also computers running other operating systems such as Linux and the Apple Macintosh operating system (using NAP components developed by third-party vendors).

Let’s unpack this a bit further. NAP supplies an infrastructure (components and APIs) that provides support for the following four processes:

• Health policy validation NAP can determine whether a given computer is compliant or not with a set of health policy requirements that you, the administrator, can define for your network. For example, one of your health requirements might be that all computers on your network must have a host-based firewall installed on them and enabled. Another requirement might be that all computers on your network must have the latest software updates installed on them.

• Network access limitation NAP can limit access to network resources for computers that are noncompliant with your health policy requirements. This limiting of access can range from preventing the noncompliant computer from connecting to any other computers on your network to quarantining it on a subnet and restricting its access to a limited set of machines. Or you can choose not to limit access at all for noncompliant computers and merely log their presence on the network for reporting purposes. It's your choice: NAP puts you, the administrator, in control of how you limit network access based on compliance.

• Automatic remediation NAP can automatically remediate noncompliant computers that are attempting to access the network. For example, say you have a laptop that doesn’t have the latest security updates installed on it. You try to connect to corpnet, and NAP identifies your machine as noncompliant with corpnet health requirements, and it quarantines your machine on a restricted subnet where it can interact only with Windows Server Update Services (WSUS) servers. NAP then points your machine to the WSUS servers and tells it to go and get updates from them. Your machine downloads the updates, NAP then verifies that your machine is now healthy, and you’re let in the door and can access corpnet. Automatic remediation like this allows NAP to not just prevent unhealthy machines from connecting to your network, but also help those machines become healthy so that they can have access to needed network resources without bringing worms and other malware into your network. Of course, NAP puts you, the administrator, in the driver’s seat, so you can turn off auto-remediation if you want to and instead have NAP simply point the noncompliant machine to an internal Web site that gives the user instructions on what to do to make the machine compliant (or simply states why the noncompliant machine is not being allowed access to the network). Again, it’s your choice how you want NAP to operate with regard to how remediation is performed.

• Ongoing compliance Finally, NAP doesn’t just check for compliance when your computer joins the network. It continues to verify compliance on an ongoing basis to ensure that your machine remains healthy for the entire duration of the time it’s connected to your network.

As an example, let's say your NAP health policy is configured to enforce compliance with the requirement that Windows Firewall be turned on for all Windows Vista and Windows XP clients connected to the network. You're on the road and you VPN into corpnet, and NAP, after verifying that Windows Firewall is enabled on your machine, lets you in. Once you're in, however, you decide for some reason to turn Windows Firewall off. (You're an administrator on your machine, so you can do that; making users local administrators is not best practice, but some companies do it.) So you turn off Windows Firewall, which means the status of your machine has now changed and it's out of compliance. What does NAP do? If you've configured it properly, it simply turns Windows Firewall back on! How does this work? The client computer has a NAP agent running on it, and this agent detects the change in health status and immediately tries to remediate the situation. It can be a bit more complicated than that (for example, the agent detects noncompliance, the health certificate gets deleted, the client goes into quarantine, the NAP server remediates, the agent confirms compliance, and the client becomes healthy again and regains access to the network), but that's the basic idea; we'll talk more about the NAP architecture in a moment.
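
To make that cycle concrete, here is a deliberately simplified sketch in Python. It is not the real NAP agent or its API; every name in it is hypothetical, and it only mirrors the check/quarantine/remediate/recheck loop described above:

    # Hypothetical sketch of the NAP compliance loop; not the actual API.
    REQUIRED = {"firewall_on": True, "updates_current": True}

    def is_compliant(status: dict) -> bool:
        return all(status.get(k) == v for k, v in REQUIRED.items())

    def remediate(status: dict) -> dict:
        # Stand-in for real actions: re-enable the firewall, pull updates
        # from a WSUS server, and so on.
        return {**status, **REQUIRED}

    def on_health_change(status: dict, auto_remediate: bool = True) -> str:
        if is_compliant(status):
            return "full access"
        if auto_remediate:
            status = remediate(status)
            if is_compliant(status):
                return "full access"
        return "quarantined"

    # The user turns the firewall off; the agent detects it and fixes it.
    print(on_health_change({"firewall_on": False, "updates_current": True}))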

Source of Information : Introducing Windows Server 2008

Understanding Network Access Protection

There are already solutions around that can do some of these things. Some of them are homegrown. For example, one organization I'm familiar with uses a DHCP registration system that links MAC addresses to user accounts stored in Active Directory to control which machines have access to the network. But homegrown solutions like this tend to be hard to manage and difficult to maintain, and they can sometimes be circumvented, for example, by using a static IP address configuration that allows access to a subnet scoped by DHCP.

Vendors also have their own solutions to this problem, and Microsoft has one for Windows Server 2003 called Network Access Quarantine Control, but although this solution can enhance the security of your network if implemented properly, it has its limitations. For example, although Network Access Quarantine Control can perform client inspection on machines trying to connect to the network, it’s only intended to do so for remote access connections. Basically, what Network Access Quarantine Control does is delay normal remote access to a private network until the configuration of the remote computer has been checked and validated by a quarantine script. And it’s the customers themselves who must write these scripts that perform the compliance checks because the exact nature of these scripts depends upon the customer’s own networking environment. This can make Network Access Quarantine Control challenging to implement.

Other vendors, such as Cisco Systems, have developed their own solutions to the problem, and Cisco’s solution is called Network Access Control (NAC). NAC is designed to enforce security policy compliance on any devices that are trying to access network resources. Using NAC, you can allow network access to devices that are compliant and trusted, and you can restrict access for devices that are noncompliant. NAC is both a framework that includes infrastructure to support compliance checks based on industry-common AV and security management products, and a product called NAC Appliance that you can drop in and use to build your compliance checking, remediation, and enforcement infrastructure.

Network Access Protection (NAP) in Windows Server 2008 is another solution, and it's one that is rapidly gaining recognition in the enterprise IT community. NAP consists of a set of components for both servers (Windows Server 2008 only) and clients (Windows Vista now, Windows XP soon), together with a set of APIs that will be made public once Windows Server 2008 is released. NAP is not a product but a platform that is widely supported by over 100 different ISVs and IHVs, including AV vendors like McAfee and Symantec, patch management companies like Altiris and PatchLink, security software vendors like RSA Security, makers of security appliances including Citrix, network device manufacturers including Enterasys and F5, and system integrators such as EDS and VeriSign. Those are all big names in the industry, and the number of vendors supporting NAP is increasing daily. And that's not marketing hype, it's fact, and it's important to IT pros like us because we want a platform like NAP to support our existing enterprise networks, which typically already have products and solutions from many of the vendors I just listed.

Source of Information : Introducing Windows Server 2008

Planning for IPv6 Transition Technologies

You will need to use one or more IPv6 transition technologies during the IPv6 migration process. The sections below provide planning information for ISATAP, 6to4, and Teredo.

ISATAP
By default, ISATAP hosts will obtain the IPv4 address of the ISATAP router by using DNS and other IP name resolution techniques to resolve the name ISATAP to an IPv4 address. Once the host has identified the ISATAP router’s IP address, it uses IPv4 unicast messages to acquire autoconfiguration information from the router. To ensure that clients can find the ISATAP router, plan to use one of the following techniques:

• If the ISATAP router is a computer, configure the computer name as ISATAP.
• Create a DNS host record named ISATAP in every DNS domain.
• Add an ISATAP entry to the Hosts file on each client.
• Create a static WINS record for the name ISATAP.
• Run a netsh command on all ISATAP hosts (for example, netsh interface isatap set router <router's IPv4 address>).


6to4
6to4 allows you to access the IPv6 Internet by using your existing IPv4 Internet connection. 6to4 requires you to have a 6to4 router and a public IPv4 address (such as an address assigned by your ISP). IPv4-only routers cannot act as 6to4 routers. Therefore, if you are currently using an IPv4-only router for your Internet access, you will need to upgrade or replace your router. Before investing in upgrades to support 6to4 or Teredo, evaluate whether the benefits of connecting to the IPv6 Internet outweigh the costs. Unless you need to access a specific resource on the IPv6 Internet that is not accessible on the IPv4 Internet, there might be no practical benefit to connecting to the IPv6 Internet. Generally, public IPv6 Internet resources (such as Web sites) are also available on the IPv4 Internet.


Teredo
You can use Teredo to provide hosts with IPv6 Internet connectivity when you do not have a 6to4 router with a public IPv4 address. For best results with Teredo, choose cone or restricted NATs that support UDP port translation, and avoid symmetric NATs. While implementing Teredo, you might discover that you need to change the NAT or firewall configuration. Therefore, you should be prepared to work with network administrators to provide the connectivity you require. You can use Microsoft’s Internet Connectivity Evaluation Tool to determine whether your current NAT supports Teredo. To use the online tool, open http://www.microsoft.com/windows/using/tools/igd/ in Microsoft Windows Internet Explorer, and follow the prompts.

Source of Information : Microsoft Press Windows Server 2008 Networking and Network Access Protection NAP

Migrating to IPv6

Upgrading to an exclusively IPv6 environment should be a long-term goal. You will need to follow these general steps (with proper testing prior to any implementation) to migrate to IPv6:

1. As you deploy new computers or operating systems, configure them to support both IPv6 and IPv4. If you plan to continue using computers running Windows XP and Windows Server 2003, gradually enable IPv6 across your infrastructure for those hosts.

2. Upgrade your routing infrastructure to support native IPv6 routing.

3. Upgrade your DNS infrastructure to support IPv6 AAAA records and PTR records in the IP6.ARPA reverse domain.

4. Connect your routing and DNS infrastructures to the IPv6 Internet by using technologies such as 6to4 or Teredo, if necessary.

5. Work with internal and external developers to upgrade your applications to be independent of IPv6 or IPv4.

6. After thorough testing, convert IPv4/IPv6 nodes to use only IPv6.

Of those steps, the single greatest challenge will be upgrading applications to support IPv6. Enterprises often have thousands of applications that must be tested. Many existing applications will not work properly in an IPv6 environment and will need to be either upgraded or replaced before IPv4 can be entirely disabled.
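
In practice, "independent of IPv6 or IPv4" usually means letting the resolver hand back whatever address family is available rather than hardcoding IPv4-specific structures, which is exactly the kind of call checkv4.exe flags in C code. A minimal Python sketch of the idea (the host name is a placeholder):

    import socket

    # create_connection() resolves the name via getaddrinfo() and tries
    # each returned address in turn, IPv6 or IPv4, so nothing in this
    # code is tied to a particular IP version.
    def open_connection(host: str, port: int) -> socket.socket:
        return socket.create_connection((host, port), timeout=5)

    conn = open_connection("www.example.com", 80)   # placeholder host
    print("Connected over",
          "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
    conn.close()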


It’s worth remembering that deploying IPv6 is not trivial. Just having IPv6 enabled isn’t a big deal because most of the other devices on your network aren’t going to be using IPv6 (except for other computers running Windows Vista and Windows Server 2008), so if your machine is talking to a printer or other device, it will use just IPv4 by default. Once you start trying to roll out IPv6, though, there is a lot to consider. There are a lot of variables, and not all of the skills you used in IPv4 transfer to IPv6.

You need to plan it out and be ready to troubleshoot during rollout. Maybe some of your applications are just not IPv6 capable, or your older hardware doesn’t understand what an IPv6 address is. Maybe it is a configuration error on a host or router. There could be any number of issues that might cause problems, which is why I strongly recommend setting up an IPv6 test lab now—TODAY!—and testing your devices and applications to determine how they will work in an IPv6 network while building your IPv6 skills.

A lot of people are running IPv6 with Windows and a wide array of stuff and are making it work. To help you get there, we provide tools such as checkv4.exe (available at http://msdn2.microsoft.com/en-us/library/ms740624.aspx) to help you figure out whether your code has any IPv4 calls hardcoded into it, in addition to white papers such as “Manageable Transition to IPv6 using ISATAP” (available at http://www.microsoft.com/downloads/details.aspx?FamilyId=B8F50E07-17BF-4B5C-A1F9-5A09E2AF698B), a joint white paper with Cisco describing how to ease the deployment of IPv6, and “Enabling the Next Generation of Networking with End-to-End IPv6” (available at http://www.microsoft.com/downloads/details.aspx?FamilyID=b3611543-58b5-4ccc-b6ce-677ebb2a520d), a joint white paper with Juniper discussing IPv6 deployment and benefits. All of these and more are available from http://www.microsoft.com/ipv6. In short, we are working with lots of industry partners to simplify IPv6 deployment and make sure that all of our customers can gain the maximum value from IPv6.

Source of Information : Microsoft Press Windows Server 2008 Networking and Network Access Protection NAP

Advancing Microsoft’s Strategy for Virtualization

Microsoft is leading the effort to improve system functionality, making it more self-managing and dynamic. Microsoft's main goal with virtualization is to give administrators more control of their IT systems with the release of Windows Server 2008 and Hyper-V. This includes a faster restore response time that is head and shoulders above the competition's. Windows Server 2008 provides a total package of complementary virtualization products that range in functionality from desktop usage to datacenter hubs. One major goal is to provide the ability to manage all IT assets, both physical and virtual, from a single remote machine. Microsoft is also working to cut IT costs with its virtualization programs, helping customers take advantage of the interoperability features its products offer as well as data center consolidation.

This also includes energy efficiency, because fewer physical machines are needed; that alone reduces energy consumption in the data center and helps save money over the long term. By contributing to and investing in the areas of management, applications, and licensing, Microsoft hopes to succeed in this effort.

Windows Server 2008 has many of these goals in mind, providing a number of important assets to administrators. The implementation of Hyper-V for virtualization allows for quick migration and high availability. This provides solutions for scheduled and unscheduled downtime, and the possibility of improved restore times. Virtual storage is supported for up to four virtual SCSI controllers per virtual machine, allowing for ample storage capacity. Hyper-V allows for the import and export of virtual machines and is integrated with Windows Server Manager for greater usability options.

In the past, compatibility was always a concern. Now the emulated video card has been updated to a more universal VESA-compatible card, which resolves the video problems that previously caused incompatibility with operating systems such as Linux. In addition, Windows Server 2008 includes integration components (ICs) for virtual machines. When you install Windows Server 2008 as a guest system in a Hyper-V virtual machine, Windows installs the ICs automatically. There is also support for running Hyper-V with Server Core in the parent partition, allowing for easier configuration. These changes, along with numerous fixes for performance, scalability, and compatibility, make the end goal for Hyper-V a transparent end-user experience.


Windows Hypervisor
With Windows Server 2008, Microsoft introduced a next-generation hypervisor virtualization platform. Hyper-V, formerly codenamed Viridian, is one of the noteworthy new features of Windows Server 2008. It offers a scalable and highly available virtualization platform that is efficient and reliable, with an inherently secure architecture. This, combined with a minimal attack surface (especially when using Server Core) and the fact that it does not contain any third-party device drivers, makes it extremely secure. It is expected to be the best operating system platform for virtualization released to date.

Compared to its predecessors, Hyper-V provides more options for specific needs because it utilizes more powerful 64-bit hardware and 64-bit operating systems. Additional processing power and a large addressable memory space are gained by the use of a 64-bit environment. It also requires no outside applications, which increases overall compatibility across the board.

Hyper-V has three main components: the hypervisor, the virtualization stack, and the new virtualized I/O model. The hypervisor (also known as the virtual machine monitor, or VMM) is a very small layer of software that interacts directly with the processor and creates the different "partitions" within which each virtualized instance of the operating system runs. The virtualization stack and the I/O components act as a go-between for Windows and all the partitions that you create. Hyper-V's virtualization advancements will only further assist administrators in making quicker, easier responses to emergency deployment needs.

Source of Information : Syngress The Best Damn Windows Server 2008 Book Period 2nd Edition

Cloud storage is for blocks too, not just files

One of the misconceptions about cloud storage is that it is only useful for storing files. This assumption comes from the popularity of file...