Publishing a Windows Vista Calendar on the Network

Windows Calendar is a decent little program for managing your schedule. You can create appointments (both one-time and recurring), set up all-day events, schedule tasks, apply reminders to appointments and tasks, and view appointments by day, week, or month. This all works great for individuals, but a busy family needs to coordinate multiple schedules. The analog method for doing this is the paper calendar attached to the refrigerator by magnets. If you want to try something a bit more high tech, you can use Windows Calendar to publish a calendar to a network share. (Note that this is something that even the mighty Microsoft Outlook can’t do. With Outlook’s Calendar, you need to be on a Microsoft Exchange network to publish your calendar data.) You can configure the published calendar so that it gets updated automatically, which means the remote calendar always has current data. Your family members can then subscribe to the calendar to see your appointments (and optionally, your notes, reminders, and tasks).

First, start Windows Calendar using either of the following methods:
• Select Start, All Programs, Windows Calendar.

• In Windows Mail, select Tools, Windows Calendar, or press Ctrl+Shift+L.


Publishing Your Calendar
Here are the steps you need to follow in Windows Calendar to publish your calendar:
1. In the Calendars list, click the calendar you want to publish.

2. Select Share, Publish to open the Publish Calendar dialog box.

3. Use the Calendar Name text box to edit the calendar name, if necessary.

4. Use the Location to Publish Calendar text box to type the network address of the shared folder where you want to store the published calendar. Alternatively, click Browse and then use the Browse for Files or Folders dialog box to select the network share, and then click OK.

5. If you want Windows Calendar to update your calendar whenever you make changes to it, activate the Automatically Publish Changes Made to This Calendar check box. (If you leave this option deactivated, you can still publish your changes by hand.)

6. In the Calendar Details to Include section, activate the check box beside each item you want in your published calendar: Notes, Reminders, and Tasks.

7. Click Publish. Windows Calendar publishes the calendar to the network share by creating a file in the iCalendar format (.ics extension) and copying that file to the share (a sample of this format appears after these steps). Windows Calendar then displays a dialog box to let you know the operation was successful.

8. To let other people know that your calendar is shared and where it can be found, click Announce. Windows Calendar creates a new email message that includes the following in the body (where address is the address of your published calendar):

You can subscribe to my calendar at address

9. Click Finish.
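Incidentally, the published file is plain text, so you can open it in Notepad to see exactly what subscribers receive. A minimal iCalendar file looks something like the following (an illustrative sketch with made-up appointment data and a placeholder PRODID, not Windows Calendar’s exact output):

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example//Windows Calendar//EN
BEGIN:VEVENT
UID:example-appointment-1
DTSTART:20080115T190000
DTEND:20080115T200000
SUMMARY:Soccer practice
END:VEVENT
END:VCALENDAR

Each appointment becomes a VEVENT block, which is why any iCalendar-aware program, not just Windows Calendar, can subscribe to the published file.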


Subscribing to a Calendar Using the Subscribe Message
How you subscribe to another person’s published calendar depends on whether you receive a subscription invitation via email. If you have such a message, follow these steps to subscribe to the calendar:
1. Open the invitation message.

2. Click the link to the published calendar. Windows Mail asks you to confirm that you want to open the iCalendar file.

3. Click Open. If your user account doesn’t have access to the network folder, the Connect to Computer dialog box appears (where Computer is the name of the computer where the calendar was published).

4. Use the User Name and Password text boxes to type the credentials you need to access the shared folder, and then click OK. Windows Calendar opens and displays the Import dialog box.

5. If you want to merge the published calendar into your own calendar, use the Destination list to select the name of your calendar; otherwise, the published calendar appears as a separate calendar.

6. Click Import. Windows Calendar adds the published calendar.


Subscribing to a Calendar Using Windows Calendar
If you don’t have a subscription invitation message, you can still subscribe to a published calendar using Windows Calendar. Here are the steps to follow:
1. In Windows Calendar, select Share, Subscribe to open the Subscribe to a Calendar dialog box.

2. Use the Calendar to Subscribe To text box to type the address of the published calendar (see the example after these steps).

3. Click Next. Calendar subscribes you to the published calendar and then displays the Calendar Subscription Settings dialog box.

4. Edit the calendar name, if necessary.

5. Use the Update Interval list to select the interval at which you want Calendar to update the subscribed calendar: Every 15 Minutes, Every Hour, Every Day, Every Week, or No Update.

6. If you want to receive any reminders in the calendar, activate the Include Reminders check box.

7. If you also want to see the published calendar’s tasks, activate the Include Tasks check box.

8. Click Finish. The published calendar appears in your Calendars list.
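The address you type in step 2 is simply the path to the published .ics file, for example (a hypothetical server, share, and file name):

\\PAULSPC\Calendars\Family Calendar.ics

If the calendar was published to a web location instead of a network share, use the corresponding http:// address.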


Working with Shared Calendars
After you publish one or more of your calendars and subscribe to one or more remote calendars, Windows Calendar offers a number of techniques for working with these items. Here’s a summary:

• Changing a calendar’s sharing information. When you select a published or subscribed calendar, the Details pane displays a Sharing Information section, and you use the controls in that section to configure the calendar’s sharing options.

• Publishing calendar changes. If your published calendar isn’t configured to automatically publish changes, you can republish by hand by selecting the calendar and then selecting Share, Sync.

• Updating a subscribed calendar. If you didn’t configure an update interval for a subscribed calendar, or if you want to see the latest data in that calendar before the next update is scheduled, select the calendar and then select Share, Sync.

• Synchronizing all shared calendars. If you have multiple shared calendars (published and subscribed), you can synchronize them all at one time by selecting Share, Sync All.

• Sending a published calendar announcement. If you didn’t send an announcement about your published calendar, or if you want to send the announcement to different people, select the calendar and then select Share, Send Publish E-Mail.

• Stopping a published calendar. If you no longer want other people to subscribe to your calendar, select it and then select Share, Stop Publishing. When Calendar asks you to confirm, click Unpublish. (Note, however, that if you want your calendar file to remain on the network share, you first need to deactivate the Delete Calendar on Server check box.)

• Stopping a subscribed calendar. If you no longer want to subscribe to a remote calendar, select it and then press Delete. When Calendar asks you to confirm, click Yes.

Source of Information : Que Networking with Microsoft Windows Vista

Setting Windows Vista Sharing Permissions on Shared Folders

You can share a folder with advanced permissions. You use these permissions to decide who has access to the folder and what those users can do with the folder. You can also apply advanced permissions to entire security groups rather than individual users. For example, if you apply permissions to the Administrators group, those permissions automatically apply to each member of that group.

Before continuing, make sure you have a user account set up for each person who will access the share. Follow these steps to share a folder with advanced permissions:

1. Select Start, and then click your username to open your user profile folder.

2. Click the folder you want to share. If you want to share a subfolder or file instead, open its folder and then click the subfolder or file.

3. Click the Share button in the task pane. Windows Vista displays the object’s Properties sheet with the Sharing tab selected.

4. Click Advanced Sharing. The User Account Control dialog box appears.

5. Enter your UAC credentials to continue. The Advanced Sharing dialog box appears.

6. Activate the Share This Folder check box.

7. By default, Vista uses the folder name as the share name. If you prefer to use a different name, edit the Share Name text box.

8. In a small network, it’s unlikely you’ll need to restrict the number of users who can access this resource, so you’re probably safe to leave the Limit the Number of Simultaneous Users To spin box value at 10.

9. Click Permissions to display the Permissions for Share dialog box, where Share is the share name you specified in step 7.

10. Select the Everyone group in the Group or User Names list, and then click Remove.

11. Click Add to display the Select Users or Groups dialog box.

12. In the Enter the Object Names to Select text box, type the name of the user or users you want to give permission to access the shared resource (separate multiple usernames with semicolons). Click OK when you’re done.

13. Select a user in the Group or User Names list.

14. Using the Permissions list, you can allow or deny the following permissions (a command-line alternative appears after these steps):

• Read. Gives the group or user the ability only to read the contents of a folder or file. The user can’t modify those contents in any way.

• Change. Gives the group or user read permission and allows the group or user to modify the contents of the shared resource.

• Full Control. Gives the group or user change permission and allows the group or user to take ownership of the shared resource.

15. Repeat steps 11–14 to add and configure other users or groups.

16. Click OK to return to the Advanced Sharing dialog box.

17. Click OK to return to the Sharing tab.

18. Click Close to share the resource with the network.
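As an alternative to the Advanced Sharing dialog box, the same kind of share and permissions can be created from an elevated command prompt with the net share command (the folder, share name, and user shown here are hypothetical):

C:\>net share Budgets="C:\Users\Paul\Documents\Budgets" /GRANT:Karen,CHANGE

The /GRANT switch accepts READ, CHANGE, or FULL, which map directly to the three permissions described in step 14.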

Source of Information : Que Networking with Microsoft Windows Vista

Deactivating Windows Vista Sharing Wizard

You can use the File Sharing Wizard to apply simple permissions to folders that you’re sharing with the network. If you’ve used Windows XP in the past, you no doubt noticed that the File Sharing Wizard is at least an improvement over XP’s useless Simple File Sharing feature, and it certainly makes it easy to apply basic permissions, which novice users appreciate. However, Vista has a larger range of permissions and other sharing features, and these can make your network shares more secure. To work with these features, you need to deactivate the File Sharing Wizard.

For the details on using the File Sharing Wizard, see “Sharing a Resource with the File Sharing Wizard”. Here are the steps to follow to turn off the File Sharing Wizard:

1. Select Start, Control Panel to open the Control Panel window.

2. Select Appearance and Personalization.

3. Select Folder Options to open the Folder Options dialog box.

4. Display the View tab.

5. Deactivate the Use Sharing Wizard check box.

6. Click OK.
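If you need to make this change on many machines, the check box can also be flipped in the registry. The setting is commonly reported to be backed by the SharingWizardOn value, so a command like the following should have the same effect (an assumption worth verifying on a test machine before deploying it):

C:\>reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced /v SharingWizardOn /t REG_DWORD /d 0 /f

Setting the value to 1 (or re-activating the check box in Folder Options) turns the wizard back on.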

Source of Information : Que Networking with Microsoft Windows Vista

What Is Facebook?

In 2007, Facebook launched its own platform for application development. The platform consists of an HTML-based markup language called Facebook Markup Language (FBML), an application programming interface (API) for making representational state transfer (REST) calls to Facebook, a SQL-styled query language for interacting with Facebook called Facebook Query Language (FQL), a scripting language called Facebook JavaScript for enriching the user experience, and a set of client programming libraries. Collectively, the tools that make up the Facebook platform are loosely called the Facebook API.

By releasing this platform, Facebook built an apparatus that allows developers to create external applications to empower Facebook users to interact with one another in new and exciting ways—ways that you, as a developer, get to invent. Not only can you develop web applications, but Facebook has also opened up its platform to Internet-connected desktop applications with its Java client library. By opening this platform up to both web-based and desktop applications and offering to general users the same technology that Facebook developers use to build applications, Facebook is positioning itself to be a major player in the future of socio-technical development.

A Brief History of Facebook
In 2003, eUniverse launched a new social portal called MySpace. This web site became wildly popular very quickly, reaching the 20-million-user mark within a year. Just a year earlier, a bright young programmer named Mark Zuckerberg had matriculated at Harvard University. The year MySpace launched, Zuckerberg and his friend Adam D’Angelo released a media player, called Synapse, that featured a technology called the Brain. Synapse’s Brain technology created playlists from your library by picking music that you like more than music that you don’t. Although this type of smart playlist generation is common in today’s media players, at its launch it was an innovation. Synapse’s launch was met with positive reviews, and several companies showed interest in purchasing the software; however, ultimately no deals were made, and the media player never took off.

Unfortunately (or fortunately, depending on your perspective), one of Zuckerberg’s next projects created quite a bit more controversy. He created Facemash.com, a variant of the HOTorNOT.com web site for Harvard students. To acquire images for the web site, Zuckerberg harvested images of students from the many residence hall web sites at Harvard. Because he was running a for-profit web site and had not obtained students’ permission to use their images, Zuckerberg was brought before the university’s administrative board on charges of breaching computer security and violating Internet privacy and intellectual property policies. Zuckerberg took a leave of absence from Harvard after the controversy and then relaunched his site as a social application for Harvard students in 2004. The viral nature of the web site allowed it to grow quickly, and a year later Zuckerberg officially withdrew from Harvard to concentrate his efforts on developing what was first known as thefacebook.com.

Relaunched as Facebook in 2005, the social network quickly expanded to the rest of the Ivy League. Soon after, Facebook expanded dramatically to university and college campuses across the nation. Facebook’s focus on the college and university demographic helped catapult it into what any marketing manager will tell you is the most difficult demographic to crack, the 18–24 young adult market.

To keep its growing momentum, Facebook opened its doors to nonacademic users for the first time in 2006. Since then, Facebook has grown to be the second largest social network, with more than 30 million users. And with that growth come opportunities, both for the company and for its users.

Source of Information : Apress Facebook API Developers Guide

IPv6 troubleshooting - Netsh

The netsh interface ipv6 command context contains many commands that are useful for analyzing the current IPv6 configuration and troubleshooting problems. The most useful commands are:

netsh interface ipv6 show global. Displays general IPv6 settings, including the default hop limit. Though you rarely need to modify these settings, you can use the netsh interface ipv6 set global command to change them.

netsh interface ipv6 show addresses. Displays all IPv6 addresses in a much more compact format than ipconfig /all.

netsh interface ipv6 show dnsservers. Displays all DNS servers that have been configured for IPv6. This does not display any DNS servers that might be configured with IPv4 addresses.

netsh interface ipv6 show potentialrouters. Displays all advertising IPv6 routers that have been detected on the local network.

netsh interface ipv6 show route. Lists the automatically and manually configured routes, including tunneling routes.

netsh interface ipv6 show tcpstats. Lists various IPv6 TCP statistics, including the current number of connections, the total number of both incoming and outgoing connections, and the number of communication errors.

netsh interface ipv6 show udpstats. Lists various IPv6 UDP statistics, including the number of UDP datagrams that have been sent or received and the number of datagrams that resulted in an error.

netsh interface ipv6 show neighbors. Displays all cached IPv6 neighbors. To flush the neighbor cache, run the command netsh interface ipv6 delete neighbors.

netsh interface ipv6 show destinationcache. Displays all cached IPv6 hosts that the computer has communicated with. To flush the destination cache, run the command netsh interface ipv6 delete destinationcache.
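A typical first pass at an IPv6 problem strings several of these commands together. For example (output omitted, since it varies by system):

netsh interface ipv6 show addresses
netsh interface ipv6 show route
netsh interface ipv6 show dnsservers
netsh interface ipv6 show neighbors

This confirms, in order, that the host has addresses, knows a route out, has a DNS server configured, and can see its neighbors on the link.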


When troubleshooting IPv6 transition technologies, you can use the following commands:

netsh interface ipv6 show teredo. Displays the Teredo configuration, including the Teredo server name and the client port number. You can use the netsh interface ipv6 set teredo command to change these configuration settings.

netsh interface ipv6 6to4 show command. By using one of the four commands in this context (interface, relay, routing, and state), you can examine the current 6to4 configuration.

netsh interface isatap show command. By using one of the two commands in this context (router and state), you can examine the current ISATAP configuration.

Source of Information : Microsoft Press Windows Server 2008 Networking and Network Access Protection NAP

IPv6 Terminology

IPv6 common terms and concepts are defined as follows:

Node Any device that runs an implementation of IPv6. This includes routers and hosts.

Router A node that can forward IPv6 packets not explicitly addressed to itself. On an IPv6 network, a router also typically advertises its presence and host configuration information.

Host A node that cannot forward IPv6 packets not explicitly addressed to itself (a nonrouter). A host is typically the source and a destination of IPv6 traffic, and it silently discards traffic received that is not explicitly addressed to itself.

Upper-layer protocol A protocol above IPv6 that uses IPv6 as its transport. Examples include Internet layer protocols such as ICMPv6 and Transport layer protocols such as TCP and UDP (but not Application layer protocols such as FTP and DNS, which use TCP and UDP as their transport).

Link The set of network interfaces that are bounded by routers and that use the same 64-bit IPv6 unicast address prefix. Other terms for “link” are subnet and network segment. Many link-layer technologies are already defined for IPv6, including typical LAN technologies (such as Ethernet and Institute of Electrical and Electronics Engineers [IEEE] 802.11 wireless) and wide area network (WAN) technologies (such as the Point-to-Point Protocol [PPP] and Frame Relay). Additionally, IPv6 packets can be sent over logical links representing an IPv4 or IPv6 network, by encapsulating the IPv6 packet within an IPv4 or IPv6 header.

Network Two or more subnets connected by routers. Another term for network is internetwork.

Neighbors Nodes connected to the same link. Neighbors in IPv6 have special significance because of IPv6 Neighbor Discovery, which has facilities to resolve neighbor link-layer addresses and detect and monitor neighbor reachability.

Interface The representation of a physical or logical attachment of a node to a link. An example of a physical interface is a network adapter. An example of a logical interface is a “tunnel” interface that is used to send IPv6 packets across an IPv4 network by encapsulating the IPv6 packet inside an IPv4 header.

Address An identifier, assigned at the IPv6 layer to an interface or set of interfaces, that can be used as the source or destination of IPv6 packets.

Packet The protocol data unit (PDU) that exists at the IPv6 layer and is composed of an IPv6 header and payload.

Link MTU The maximum transmission unit (MTU)—the number of bytes in the largest IPv6 packet—that can be sent on a link. Because the maximum frame size includes the link-layer medium headers and trailers, the link MTU is not the same as the maximum frame size of the link. The link MTU is the same as the maximum payload size of the link-layer technology. For example, for Ethernet using Ethernet II encapsulation, the maximum Ethernet frame payload size is 1500 bytes. Therefore, the link MTU is 1500. For a link with multiple link-layer technologies (for example, a bridged link), the link MTU is the smallest link MTU of all the link-layer technologies present on the link.

Path MTU The maximum-sized IPv6 packet that can be sent without performing host fragmentation between a source and destination over a path in an IPv6 network. The path MTU is typically the smallest link MTU of all the links in the path.


A site is an autonomously operating IP-based network that is connected to the IPv6 Internet. Network architects and administrators within the site determine the addressing plan and routing policy for the organization network. An organization can have multiple sites. The actual connection to the IPv6 Internet can be either of the following types:

Direct The connection to the IPv6 Internet uses a wide area network link (such as Frame Relay or T-Carrier) and connects to an IPv6-capable router.

Tunneled The connection to the IPv6 Internet uses an IPv6 over IPv4 tunnel and connects to an IPv6 tunneling router.

Source of Information : Microsoft Press Understanding IPv6 2nd Edition

Building a Windows Server 2008 Network: Organization Size Definitions

Windows Server 2008 has been designed to respond to the needs of organizations of all sizes, whether you are a company of one working in a basement somewhere or whether your organization spans the globe, with offices in every continent. Obviously, there is a slight difference in scale between the two extremes. Each of these is defined as follows:

• Small organizations are organizations that include only a single site. They may have several dozen workers, but given that they are located in a single site, their networking needs are fairly basic.

• Medium organizations are organizations that have more than one site but fewer than ten. The complexity of running a network that spans multiple sites defines the networking needs of medium organizations.

• Large organizations are organizations that have ten sites or more. In this case, organizations need more complex networks and will often rely on services that are not required at all by the two previous organization sizes.

Small organizations have all of the requirements of a basic network and will normally implement a series of technologies, including directory services, e-mail services, file and printer sharing, database services, and collaboration services. Even if the organization includes a very small number of people, these services will often be at the core of any networked productivity system. For this reason, it is often best for this type of organization to use Windows Small Business Server 2008 (SBS08), because it is less expensive and it includes more comprehensive applications for e-mail and database services. Nevertheless, some organizations opt for Windows Server 2008 anyway, because they are not comfortable with the limitations Microsoft has imposed on the Small Business Server edition. For example, it is always best and simpler to have at least two domain controllers running the directory service because they become automatic backups of each other. SBS08 can only have a single server in the network and therefore cannot offer this level of protection for the directory service. This is one reason why some small organizations opt for Windows Server 2008 even if it is more costly at first. However, realizing this business need, Microsoft is releasing Windows Essential Business Server 2008 (WEBS) as a multi-component server offering for these organizations. WEBS is made up of three server installations:

• Windows Essential Business Server Management Server To manage the WEBS network as well as worker collaboration and network services centrally.

• Windows Essential Business Server Security Server To manage security, Internet access, and remote-worker connectivity.

• Windows Essential Business Server Messaging Server To provide messaging capabilities.

Medium organizations face the challenge of having to interconnect more than one office. While small organizations have the protection of being in a single location, medium organizations often need to bridge the Internet to connect sites together. This introduces an additional level of complexity.

Large organizations have much more complex networks that provide both internal and external services. In addition, they may need to interoperate in several languages and will often have internally developed applications to manage. Large organizations may also have remote sites connected at varying levels of speed and reliability, such as Integrated Services Digital Network (ISDN) or dial-up links. From a Windows standpoint, this necessitates planned replication and possibly an architecture based on the Distributed File System (DFS). For this reason, they include many more service types than small or medium organizations.

Source of Information : McGraw Hill Microsoft Windows Server 2008 The Complete Reference

Frequency versus Wavelength

Frequency and wavelength are inseparably related to each other. As frequency increases, wavelength decreases and vice versa.

• Frequency: The rate at which a radio signal oscillates from positive to negative.

• Wavelength: The length of a complete cycle of the radio signal oscillation.

Wavelength is, of course, a length measurement, usually represented in metric (meters, centimeters, and so on). And frequency is a count of the number of waves occurring during a set time, usually per second. Cycles per second is represented as Hertz (Hz). The dimensions are important to note, because the physical properties of the wave define antenna, cable, and power requirements. Wavelength is critical for antenna design and selection.

Wi-Fi signals operating at a frequency of 2.4 GHz have an average wavelength of about 12 cm. Since the wavelength is so short, antennas can be physically very small. A common design for antennas is to make them 1/4 of a wavelength or less in length, which is barely more than an inch long. That’s why Wi-Fi antennas can perform so well even though they are physically very small. As a comparison, a car radio antenna is much longer to get a decent signal because FM radio signals are an average of 10 feet long.

Wavelength and antenna length go together. To oversimplify, the longer the antenna, the more of the signal it can grab out of the air. Also, for best signal reception, antenna length should be a whole, half, quarter, eighth, and so on, of the intended wavelength. The highest reception quality comes from a full-wavelength antenna.

To find the wavelength, divide 300 by the frequency in megahertz; the answer is the wavelength in meters. So, 300 / 2437 ≈ 0.12 meters, or 12 cm. (The 300 comes from the speed of light, about 300,000,000 meters per second: wavelength equals the speed of light divided by the frequency in hertz.)
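The same arithmetic works for any band. For example, for an 802.11a channel near the middle of the 5 GHz band, at about 5,500 MHz:

300 / 5500 ≈ 0.055 meters, or about 5.5 cm

That shorter wavelength is why 5 GHz antennas can be made even smaller than their 2.4 GHz counterparts.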

About Wi-Fi

Wireless networking is accomplished by sending a signal from one computer to another over radio waves. The most common form of wireless computing today uses the IEEE 802.11b standard. This popular standard, also called Wi-Fi or Wireless Fidelity, is now supported directly by newer laptops and PDAs, and most computer accessory manufacturers. It’s so popular that “big box” electronics chain stores carry widely used wireless hardware and networking products.

The IEEE 802.11b Wi-Fi standard supports a maximum speed of 11 megabits per second (Mbps). The true throughput is actually something more like 6 Mbps, and can drop to less than 3 Mbps with encryption enabled. Newer standards like 802.11a and the increasingly popular 802.11g support higher speeds up to 54 Mbps. So why is 802.11b so popular? Because it was first and it was cheap. Even 3 Mbps is still much faster than you normally need to use the Internet.

The 802.11a standard, which operates in the 5 GHz frequency band, is much faster than 802.11b, but never caught on, partly because of the high cost initially and partly because of the actual throughput in the real-world conditions of a deployed wireless network.

The fast and inexpensive 802.11g standard (which uses the same 2.4 GHz band as 802.11b) is rapidly moving to unseat 802.11b from the top of the heap. The very cool thing about “g” is the built-in backwards compatibility with 802.11b. That means any “b” product can connect to a “g” access point. This compatibility makes 802.11g an easy upgrade without tossing out your old client hardware.

Because of the compatibility between 802.11b and 802.11g, there is no great hurry to push the myriad of funky wireless products to the new “g” standard. Most manufacturers support basic wireless infrastructure using 802.11b and 802.11g access points and client adapters. Wi-Fi 802.11b really shines when you look at the host of wireless products available. Not only are there the basic wireless networking devices, like adapters, base stations, and bridges, there are also new products that were unthinkable a few years ago. Wireless disk drive arrays, presentation gateways, audiovisual media adapters, printer adapters, Wi-Fi cameras, hotspot controllers, and wireless broadband and video phones dominate the consumer arena. And the enterprise market is not far behind.

Wi-Fi is the root of a logo and branding program created by the Wi-Fi Alliance. A product that uses the Wi-Fi logo has been certified by the Wi-Fi Alliance to fulfill certain guidelines for interoperability. Logo certification programs like this one are created and promoted to assure users that products will work together in the marketplace. So, if you buy a Proxim wireless client adapter with the Wi-Fi logo branding, and a Linksys access point with the same logo on the product, they should work together.

A megabit is one million binary digits (bits) of data. Network speed is almost always measured in bits per second (bps). It takes 8 bits to make a byte. Bytes are used mostly to measure file size (as in files on a hard disk). A megabyte is about 8 million bits of data. Don’t confuse the term megabyte with megabit or you will come out 8 million bits ahead.
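To make the difference concrete, here is the arithmetic for the 802.11b speeds discussed earlier:

11,000,000 bits per second / 8 = 1,375,000 bytes per second (about 1.4 megabytes per second, theoretical best)
6,000,000 bits per second / 8 = 750,000 bytes per second (realistic throughput)

Mixing up the units changes the number by a factor of 8, which is exactly the mistake this paragraph warns against.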

Source of Information : Wi-Fi Toys - 15 Cool Wireless Projects For Home, Office, And Entertainment

Features of IPv6

The following list summarizes the features of the IPv6 protocol:

New Header Format
The IPv6 header has a new format that is designed to minimize header processing. This is achieved by moving both nonessential and optional fields to extension headers that are placed after the IPv6 header. The streamlined IPv6 header is more efficiently processed at intermediate routers. IPv4 headers and IPv6 headers are not interoperable. IPv6 is not a superset of functionality that is backward compatible with IPv4. A host or router must use an implementation of both IPv4 and IPv6 to recognize and process both header formats. The new default IPv6 header is only twice the size of the default IPv4 header, even though IPv6 addresses carry four times as many bits as IPv4 addresses.

Large Address Space
IPv6 has 128-bit (16-byte) source and destination addresses. Although 128 bits can express over 3.4 × 10^38 possible combinations, the large address space of IPv6 has been designed to allow for multiple levels of subnetting and address allocation, from the Internet backbone to the individual subnets within an organization. Even with all of the addresses currently assigned for use by hosts, plenty of addresses are available for future use. With a much larger number of available addresses, address-conservation techniques, such as the deployment of NATs, are no longer necessary.

Stateless and Stateful Address Configuration
To simplify host configuration, IPv6 supports both stateful address configuration (such as address configuration in the presence of a DHCP for IPv6, or DHCPv6, server) and stateless address configuration (such as address configuration in the absence of a DHCPv6 server). With stateless address configuration, hosts on a link automatically configure themselves with IPv6 addresses for the link (called link-local addresses), with IPv6 transition addresses, and with addresses derived from prefixes advertised by local routers. Even in the absence of a router, hosts on the same link can automatically configure themselves with link-local addresses and communicate without manual configuration. Link-local addresses are autoconfigured within seconds, and communication with neighboring nodes on the link is possible immediately. In comparison, some IPv4 hosts using DHCP must wait a full minute before abandoning DHCP configuration and self-configuring an IPv4 address.
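You can watch stateless address configuration work on any Windows Vista or Windows Server 2008 computer. At a command prompt, run:

C:\>ipconfig

Even on a link with no IPv6 router and no DHCPv6 server, each active interface lists a link-local IPv6 address beginning with fe80::, which the host configured for itself.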

IPsec Header Support Required
Support for the IPsec headers is an IPv6 protocol suite requirement. This requirement provides a standards-based solution for network protection needs and promotes interoperability between different IPv6 implementations. IPsec consists of two types of extension headers and a protocol to negotiate security settings. The Authentication header (AH) provides data integrity, data authentication, and replay protection for the entire IPv6 packet (excluding fields in the IPv6 header that must change in transit). The Encapsulating Security Payload (ESP) header and trailer provide data integrity, data authentication, data confidentiality, and replay protection for the ESP-encapsulated payload. The protocol typically used to negotiate IPsec security settings for unicast communication is the Internet Key Exchange (IKE) protocol. However, the requirement to process IPsec headers does not make IPv6 inherently more secure. IPv6 packets are not required to be protected with IPsec and IPsec is not a requirement of an IPv6 deployment. Additionally, the IPv6 standards do not require an implementation to support any specific encryption methods, hashing methods, or negotiation protocol (such as IKE).

Better Support for Prioritized Delivery
New fields in the IPv6 header define how traffic is handled and identified. Traffic is prioritized using a Traffic Class field, which specifies a DSCP value just like IPv4. A Flow Label field in the IPv6 header allows routers to identify and provide special handling for packets that belong to a flow (a series of packets between a source and destination). Because the traffic is identified in the IPv6 header, support for prioritized delivery can be achieved even when the packet payload is encrypted with IPsec and ESP.

New Protocol for Neighboring Node Interaction
The Neighbor Discovery protocol for IPv6 is a series of Internet Control Message Protocol for IPv6 (ICMPv6) messages that manages the interaction of neighboring nodes (nodes on the same link). Neighbor Discovery replaces and extends the Address Resolution Protocol (ARP) (broadcast-based), ICMPv4 Router Discovery, and ICMPv4 Redirect messages with efficient multicast and unicast Neighbor Discovery messages.

Extensibility
IPv6 can easily be extended for new features by adding extension headers after the IPv6 header. Unlike options in the IPv4 header, which can support only 40 bytes of options, the size of IPv6 extension headers is constrained only by the size of the IPv6 packet.

Source of Information : Microsoft Press Understanding IPv6 2nd Edition

Limitations of IPv4

The current version of IP (known as version 4 or IPv4) has not changed substantially since Request for Comments (RFC) 791, which was published in 1981. IPv4 has proven to be robust, easily implemented, and interoperable. It has stood up to the test of scaling an internetwork to a global utility the size of today’s Internet. This is a tribute to its initial design. However, the initial design of IPv4 did not anticipate the following:

• The recent exponential growth of the Internet and the impending exhaustion of the IPv4 address space. Although the 32-bit address space of IPv4 allows for 4,294,967,296 addresses, previous and current allocation practices limit the number of public IPv4 addresses to a few hundred million. As a result, public IPv4 addresses have become relatively scarce, forcing many users and some organizations to use a NAT to map a single public IPv4 address to multiple private IPv4 addresses. Although NATs promote reuse of the private address space, they violate the fundamental design principle of the original Internet that all nodes have a unique, globally reachable address, preventing true end-to-end connectivity for all types of networking applications. Additionally, the rising prominence of Internet-connected devices and appliances ensures that the public IPv4 address space will eventually be depleted.

• The need for simpler configuration. Most current IPv4 implementations must be either manually configured or use a stateful address configuration protocol such as Dynamic Host Configuration Protocol (DHCP). With more computers and devices using IP, there is a need for a simpler and more automatic configuration of addresses and other configuration settings that do not rely on the administration of a DHCP infrastructure.

• The requirement for security at the Internet layer. Private communication over a public medium such as the Internet requires cryptographic services that protect the data being sent from being viewed or modified in transit. Although a standard now exists for providing security for IPv4 packets (known as Internet Protocol security, or IPsec), this standard is optional for IPv4 and additional security solutions, some of which are proprietary, are prevalent.

• The need for better support for prioritized and real-time delivery of data. Although standards for prioritized and real-time delivery of data—sometimes referred to as Quality of Service (QoS)—exist for IPv4, real-time traffic support relies on the 8 bits of the historical IPv4 Type of Service (TOS) field and the identification of the payload, typically using a User Datagram Protocol (UDP) or Transmission Control Protocol (TCP) port. Unfortunately, the IPv4 TOS field has limited functionality and, over time, has been redefined and has different local interpretations. The current standards for IPv4 use the TOS field to indicate a Differentiated Services Code Point (DSCP), a value set by the originating node and used by intermediate routers for prioritized delivery and handling. Additionally, payload identification that uses a TCP or UDP port is not possible when the IPv4 packet payload is encrypted.

To address these and other concerns, the Internet Engineering Task Force (IETF) has developed a suite of protocols and standards known as IP version 6 (IPv6). This new version, previously called IP-The Next Generation (IPng), incorporates the concepts of many proposed methods for updating the IPv4 protocol. IPv6 is designed intentionally to have minimal impact on upper- and lower-layer protocols and to avoid the random addition of new features.

Source of Information : Microsoft Press Understanding IPv6 2nd Edition

Windows Server 2008 Network Load-Balancing Clusters

NLB in Windows Server 2008 is accomplished by a special network driver that works between the drivers for the physical network adapter and the TCP/IP stack. This driver communicates with the NLB program (called wlbs.exe, for the Windows Load Balancing Service) running at the application layer—the same layer in the OSI model as the application you are clustering. NLB can work over FDDI- or Ethernet-based networks—even wireless networks—at up to gigabit speeds.

Why would you choose NLB? For a few reasons:
• NLB is an inexpensive way to make a TCP/IP-dependent application somewhat fault tolerant, without the expense of maintaining a true server cluster with fault-tolerant components. No special hardware is required to create an NLB cluster. It's also cheap hardware-wise because you need only two network adapters to mitigate a single point of failure.

• The "shared nothing" approach—meaning each server owns its own resources and doesn't share them with the cluster for management purposes, so to speak—is easier to administer and less expensive to implement, although there is always some data lag between servers while information is transferred among the members. (This approach also has its drawbacks, however, because NLB can only direct clients to backend servers or to independently replicated data.)

• Fault tolerance is provided at the network layer, ensuring that network connections are not directed to a server that is down.

• Performance is improved for your web or FTP resource because load is distributed automatically among all members of the NLB cluster.

NLB works in a seemingly simple way: all computers in an NLB cluster have their own IP address just like all networked machines do these days, but they also share a single, cluster-aware IP address that allows each member to answer requests on that IP address. NLB takes care of the IP address conflict problem and allows clients who connect to that shared IP address to be directed automatically to one of the cluster members.

NLB clusters support a maximum of 32 cluster members, meaning that no more than 32 machines can participate in the load-balancing and sharing features. Most applications that have a load over and above what a single 32-member cluster can handle take advantage of multiple clusters and use some sort of DNS load-balancing technique or device to distribute requests to the multiple clusters individually.

When considering an NLB cluster for your application, ask yourself the following question: how will a failure affect the application and the other cluster members? If you are running a high-volume e-commerce site and one member of your cluster fails, are the other servers in the cluster adequately equipped to handle the extra traffic from the failed server? A lot of cluster implementations miss this important concept and later see the consequence: a cascading failure caused by a perpetually growing load being failed over onto servers that themselves fail from overload. Such a scenario is very common and entirely defeats the true purpose of a cluster. Avoid it by ensuring that all cluster members have sufficient hardware specifications to handle additional traffic when necessary.
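A quick capacity check makes the point. Using made-up numbers: in a four-member cluster with every node at 60 percent of capacity, losing one member spreads its share over the remaining three, pushing each survivor to about 80 percent (4 × 60 / 3). Had the members been running at 80 percent each instead, the survivors would be pushed past 100 percent (4 × 80 / 3 ≈ 107) and would begin failing in turn, producing exactly the cascade described above.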

Also examine the kind of application you are planning on clustering. What types of resources does it use extensively? Different types of applications stretch different components of the systems participating in a cluster. Most enterprise applications have some sort of performance testing utility; take advantage of any that your application offers in a testing lab and determine where potential bottlenecks might lie.

Web applications, Terminal Services, and Microsoft's ISA Server 2004 product can take advantage of NLB clustering.

It's important to be aware that NLB cannot detect when a service on a server has crashed while the machine itself is still running, so it could direct a user to a system that can't offer the requested service.


NLB Terminology
Some of the most common NLB technical terms are as follows:

NLB driver : This driver resides in memory on all members of a cluster and is instrumental in choosing which cluster node will accept and process the packet. Coupled with port rules and client affinity (all defined on the following pages), the driver decides whether to send the packet up the TCP/IP stack to the application on the current machine, or to pass on the packet because another server in the cluster will handle it.

Unicast mode : In unicast mode, NLB hosts send packets to a single recipient.

Multicast mode : In multicast mode, NLB hosts send packets to multiple recipients at the same time.

Port rules : Port rules define the applications on which NLB will "work its magic," so to speak. Certain applications listen for packets sent to them on specific port numbers—for example, web servers usually listen for packets addressed to TCP port 80. You use port rules to instruct NLB to answer requests and load-balance them.

Affinity : Affinity is a setting that controls whether traffic originating from a particular client should always be returned to the same cluster node. Effectively, this controls which cluster nodes will accept what types of traffic.
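Once the cluster is running, you can inspect it from the command line with the wlbs.exe program mentioned earlier. For example:

C:\>wlbs query

This reports the cluster's convergence state and which hosts are currently participating. The same program accepts commands such as start, stop, and drainstop to control an individual member's participation.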

Source of Information : OReilly Windows Server 2008 The Definitive Guide

Windows Server 2008 Clustering Technologies

Clusters work to provide fault tolerance to a group of systems so that the services they provide are always available—or are at least unavailable for the least possible amount of time. Clusters also provide a single public-facing presence for a set of systems, which means end users and others who take advantage of the resources the cluster members provide aren't aware that the cluster comprises more than one machine. They see only a single, unified presence on the network. The dirty work of spreading the load among multiple machines is done behind the scenes by clustering software.

Microsoft provides two distinct types of clustering with Windows Server 2008:

Network load-balancing (NLB) clusters
These types of clusters allow for the high availability of services that rely on the TCP/IP protocol. You can have up to 32 machines running any edition of Windows Server 2008, Windows Server 2003, and Windows 2000 Server participating in an NLB cluster.

True server clusters
Server clusters are the "premium" variety of highly available machines and consist of servers that can share workloads and processes across all members of the cluster. Failed members of the cluster are automatically detected and the work being performed on them is moved to other, functional members of the cluster. True server clusters are supported in only the Enterprise and Datacenter editions of Windows Server 2008.

Where might each type of cluster be useful? For one, NLB is a very inexpensive way to achieve high TCP/IP availability for servers that run web services or other intranet or Internet applications. In effect, NLB acts as a balancer, distributing the load equally among multiple machines running their own, independent, isolated copies of IIS. NLB only protects against a server going offline, in that if a copy of IIS on a machine fails, the load will be redistributed among the other servers in the NLB cluster. Dynamic web pages that maintain sessions don't receive much benefit from this type of clustering because members of the cluster are running independent, unconnected versions of IIS and therefore cannot continue sessions created on other machines. However, much web content is static, and some implementations of dynamic web sites do not use sessions. Thus, chances are that NLB can improve the reliability of a site in production. Other services that can take advantage of NLB are IP-based applications such as FTP and VPN.

If you have business-critical applications that must be available at all times, true server clustering is a better fit. In true server clusters, all members of the cluster are aware of all the other members' shared resources. The members also maintain a "heartbeat" pulse to monitor the availability of services on their fellow members' machines. In the event of a resource or machine failure, the Windows Server 2008 clustering service can automatically hand off jobs, processes, and sessions begun on one machine to another machine. That isn't to say this swapping is completely transparent. When the application is moved or fails over to another member in the cluster, client sessions are actually broken and re-established on the new owners of the resources. Although this happens relatively quickly, depending on the nature of your application it probably will not go unnoticed by your users. Often, your clients will be asked to re-authenticate to the new cluster owner. However, the cluster effectively acts as one unit and is completely fault-tolerant, and if you design the structure of your cluster correctly, you can avoid any one single point of failure. This decreases the chance that a single failed hardware or software component will bring your entire business-critical application to its knees.

Source of Information : OReilly Windows Server 2008 The Definitive Guide

Windows Server 2008 Windows Security - Security Identifiers

A security principal is an entity that can have a security identifier (SID). A SID is a (mostly) numeric representation of a security principal, and it is what the operating system actually uses internally. When you grant a user, a group, a service, or some other security principal permissions to an object, the operating system writes the SID and the permissions to the object’s Access Control List (ACL).

History of SIDs. The original concept of the SID called out each level of the hierarchy. Each layer included a new sub-authority, and an enterprise could lay out arbitrarily complicated hierarchies of issuing authorities. Each layer could, in turn, create additional authorities beneath it. In reality, this created a lot of overhead for setup and deployment, and made the management model even more baroque. The notion of arbitrary depth identities did not survive the early stages of development, although the structure was already too deeply ingrained to be removed. In practice, two SID patterns developed. For built-in, predefined identities, the hierarchy was compressed to a depth of two or three sub-authorities. For real identities of other principals, the identifier authority was set to five, and the set of sub-authorities was set to four.


SID Components
A SID is composed of several required elements. A SID always starts with the literal “S,” which denotes it as a SID, followed by a revision level, which currently is always 1. Next comes an identifier authority and zero or more sub-authorities, and the SID always ends with a relative identifier (RID).


SID Authorities
After the S-1- prefix, the remainder of a SID can vary greatly, but it always begins with an identifier authority denoting what entity issued it. The following list shows the currently used identifier authorities.

0 : SECURITY_NULL_SID_AUTHORITY. Used for comparisons when the identifier authority is unknown.

1 : SECURITY_WORLD_SID_AUTHORITY. Used to construct SIDs that represent all users. For example, the SID for the Everyone group is S-1-1-0, created by appending the WORLD RID (0) to this identifier authority, thereby selecting all users from that authority.

2 : SECURITY_LOCAL_SID_AUTHORITY. Used to build SIDs representing users that log on to a local terminal.

3 : SECURITY_CREATOR_SID_AUTHORITY. Used to construct SIDs that represent the creator or owner of an object. For example, the CREATOR OWNER SID is S-1-3-0, created by appending the creator owner RID (also 0) to this identifier authority. If S-1-3-0 is used in an inheritable ACL, it will be replaced by the owner's SID in child objects that inherit this ACL. S-1-3-1 is the CREATOR GROUP SID and has the same effect but will take on the SID for the creator's primary group instead.

5 : SECURITY_NT_AUTHORITY. The operating system itself. SIDs starting with S-1-5 were issued by a computer or a domain. Most of the SIDs you will see start with S-1-5.

After the identifier authority the SID has some number of sub-authorities. The last of these is called the relative identifier and is the identifier of the unique security principal within the realm where the SID was defined. To make this idea a little more concrete, consider the following SID:

S-1-5-21-1534169462-1651380828-111620651-500

As you have seen, the SID starts with S-1-5, indicating that it was issued by Windows NT. The first sub-authority is 21 (0x15 in hexadecimal). The 21 defines this as a Windows NT SID that is not guaranteed to be universally unique. It will be unique within the domain of its issuance, but there may be other SIDs in the universe of computers that have the same exact value. The first of the sub-authorities is very often a well-known sub-authority. The more commonly encountered well-known sub-authorities are listed below.

Our SID then has three additional sub-authorities: 1534169462, 1651380828, and 111620651. These do not in and of themselves have any implicit meaning, but together they denote the domain or computer that issued the SID. In fact, the SID for the domain is S-1-5-21-1534169462-1651380828-111620651, and all SIDs issued in that domain will start with that value and end with some unique RID for the user or computer they denote. In this case the SID ends with 500, which is a well-known RID denoting the built-in Administrator account. 501 is the well-known RID for the built-in Guest account and 502 is the well-known RID for the Kerberos Ticket Granting Ticket (krbtgt).
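To look at a real SID of exactly this form, examine your own account. At a command prompt, run:

C:\>whoami /user

The output shows your account name next to its full SID, which you can decompose just as described above: the S-1-5-21 prefix, three sub-authorities identifying the issuing domain or computer, and your account’s RID at the end.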

Well-Known Sub-authorities
5 : SIDs are issued to log-on sessions to enable permissions to be granted to any application running in a specific log-on session. These SIDs have the first sub-authority set to 5, and take the form S-1-5-5-x-y.

6 : When a process logs on as a service it gets a special SID in its token to denote that. This SID has the sub-authority 6, and is always S-1-5-6.

21 : SECURITY_NT_NON_UNIQUE. Denotes user and computer SIDs that are not guaranteed to be universally unique.

32 : SECURITY_BUILTIN_DOMAIN_RID. Denotes built-in SIDs. For example, the well-known SID for the built-in Administrators group is S-1-5-32-544.

80 : SECURITY_SERVICE_ID_BASE_RID. Denotes SIDs for services.


Service SIDs
As mentioned earlier, services also have SIDs in Windows Vista and Windows Server 2008. Service SIDs always start with S-1-5-80 and end with a number of sub-authorities that are deterministic based on the name of the service. This means that a given service has the same SID on all computers. It also means that you can retrieve the SID for an arbitrary service even if it does not exist. For example, to see what the SID would be for the “foo” service, run the sc showsid command, as follows:

C:\>sc showsid foo
NAME: foo
SERVICE SID: S-1-5-80-2639291829-767035215-3510963033-3734144485-3832470211

If you try this on one of your servers, you will come up with the same answer. If you would rather have the friendly name for the service, use NT SERVICE\foo.


Well-Known SIDs
When a developer writes a program for Windows, he often needs to know the SID of some security principal. Usually SIDs can be easily constructed if only the RID is known because it is just appended to the computer or domain SID, as in the case of the Administrator account. However, for convenience, it is often desirable to have a shorter and static form of some SIDs. To provide this, the security model used in Windows includes a significant number of well-known SIDs—SIDs that are always the same across all computers. A few universally well-known SIDs are the same on all operating systems using this security model. These are the SIDs that start with S-1-1, S-1-2, or S-1-3.

In addition, Windows NT has a significant number of well-known SIDs. S-1-5-32 is the well-known SID for the built-in domain, for example. It can, in turn, be combined with a well-known RID to form a well-known SID for a particular account. For example, the SID for the built-in Administrators group, whether on a domain or on a stand-alone computer, is always S-1-5-32-544. In the case of built-in groups the domain-relative RIDs can be combined with S-1-5-32 to form a SID that is valid on any computer where that user or group is relevant. Other accounts are appended to the domain to form the complete SID. This is the case with Domain Admins, for example, which takes the well-known RID 512 to create a SID such as S-1-5-21-1534169462-1651380828-111620651-512.

Well-Known Domain-Relative RIDs
500 : Administrator
501 : Guest
502 : Krbtgt
512 : Domain Admins
513 : Domain Users
514 : Domain Guests
515 : Domain Computers
516 : Domain Controllers
544 : Built-In Administrators
545 : Built-In Users
546 : Built-In Guests

SIDs may look very complicated, but once you understand the structure, they become quite simple to decipher. With a little practice, you will easily be able to tell whether a SID refers to a service, a well-known principal, or a user in a domain.
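A convenient source of practice material is the following command, which lists every group in your current access token together with its SID:

C:\>whoami /groups

In its output you can expect to find universally well-known SIDs such as S-1-1-0 (Everyone), built-in SIDs such as S-1-5-32-544 (Administrators), and domain-issued SIDs starting with S-1-5-21.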

Source of Information : Microsoft Press Windows Server 2008 Security Resource Kit

TCP/IP-Based Security - IP Address Security

The base protocol over which Web traffic is carried is the Hypertext Transfer Protocol (HTTP). HTTP is generally carried over TCP/IP, the standard Internet Protocol, in most environments. IIS 7 supports HTTP over TCP/IP version 4 (IPv4), the more common protocol of today’s Internet and the same network-level protocol that every version of IIS to date has supported. IIS 7 also adds support for HTTP over TCP/IP version 6 (IPv6), the protocol version for the next generation of Internet support.

The first consideration regarding blocking or allowing access is whether your target users can get traffic routed to and from your servers. Using an internally routed IP address will help prevent outside parties from accessing an intranet site. If your organization’s routers are correctly configured, an internally routed IP address can be relied on as proof that the client requesting a connection is within your organization. Use external IP addresses with care to identify connecting clients, because they can incorrectly describe the location of the client in many ways. An external IP address may have belonged to one user yesterday, and a different user today. External IP addresses are shared between multiple users, as in the case of an anonymizing proxy, which is designed to hide the identity of a client.

Whitelisting by source IP address—rejecting all connections other than those from addresses of known partners—is mostly reliable, because it is very difficult to forge a TCP connection for any purpose more complex than a simple denial of service attack.

Blacklisting by source IP address—accepting all connections, and rejecting connections from hosts known to be bad—is generally not successful, because attackers are generally able to move to a new host that is not known to be bad.


The ipSecurity element has two attributes:

• allowUnlisted, which determines how the list of child elements of ipSecurity is interpreted. When you set allowUnlisted to false, only clients matching child elements marked allow can access this Web server, and all others are denied. When you set allowUnlisted to true, all IP addresses are allowed to connect except those specifically marked deny.

• enableReverseDns, which you must set to true if you are going to filter by domain name. Because the reverse DNS lookup takes time that delays connections, leave this attribute at its default value of false if you are not filtering by domain name. Remember that many DNS zones do not feature reverse lookups, and Windows does not create reverse lookup zones by default, so reverse DNS lookups will very often fail.

Child elements are added to the ipSecurity collection by the usual add, clear, or remove directives, with the following attributes:

• ipAddress is the numerical IP address of the host or network being allowed or denied access. For instance, 127.0.0.1 would represent the localhost. If you are denying access by default (setting allowUnlisted to false), you will generally want to enable access by 127.0.0.1 and other locally bound IP addresses for testing.

• subnetMask is the numerical subnet mask that defines which network is being referenced by IP address. For instance, to cut off the entire 10.*.*.* network, you would use an ipAddress of 10.0.0.0 and a subnetMask of 255.0.0.0. The default value is 255.255.255.255, which masks to a single IP address.

• domainName is a fully qualified domain name of a host that this rule should apply to.

• allowed, which, when set to true, allows all clients that match the given rule and, when set to false, denies all clients that match it. The rules are processed in order from first to last, and the first match dictates what is done with the incoming connection (a sketch of this evaluation follows the list).
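To illustrate these first-match semantics (not how IIS implements them internally), here is a minimal Python sketch; the rule set is a made-up example that allows localhost, denies the entire 10.*.*.* network, and falls back to allowUnlisted for everything else:

import ipaddress

# Hypothetical rules mirroring ipSecurity child elements.
rules = [
    {"ipAddress": "127.0.0.1", "subnetMask": "255.255.255.255", "allowed": True},
    {"ipAddress": "10.0.0.0", "subnetMask": "255.0.0.0", "allowed": False},
]
allow_unlisted = True  # mirrors the allowUnlisted attribute

def is_allowed(client_ip):
    # First matching rule wins; unmatched clients fall back to allowUnlisted.
    client = ipaddress.ip_address(client_ip)
    for rule in rules:
        net = ipaddress.ip_network(
            rule["ipAddress"] + "/" + rule["subnetMask"], strict=False)
        if client in net:
            return rule["allowed"]
    return allow_unlisted

print(is_allowed("127.0.0.1"))   # True  (explicit allow)
print(is_allowed("10.1.2.3"))    # False (matches the 10.0.0.0 deny rule)
print(is_allowed("192.0.2.10"))  # True  (unlisted; allowUnlisted is true)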

Source of Information : Microsoft Press Windows Server 2008 Security Resource Kit

How Windows Server 2008 Fits Enterprise Needs

Enterprise wants and needs far exceed those of the desktop or workstation consumer group. Most of those wants and needs center on managing resources and maintaining connections among desktops, workstations, and other server computers.


Active Directory
Active Directory (AD) is Microsoft’s implementation of a directory service based on the Lightweight Directory Access Protocol (LDAP), a protocol and service framework that delivers directory services to Windows-based networks. AD provides central authentication and authorization services, global policy assignment, widespread software deployment, and large-scale updates for an entire organization. AD Directory Service (DS) is used to centrally store and manage information about the network resources spread across a given domain. The framework itself is organized into several levels: forests, trees, and domains.


Access controls
Employees are defined by their roles or capacities within an organization. There are leadership roles, management roles, and general occupational roles to fulfill, each defined by separate duties, privileges, and responsibilities. Among those privileges and responsibilities are varying layers of access to business-related information. For example, a general employee has no real reason to access or modify management-related information, such as work schedules or other employees’ contact information.

In much the same way, users are defined in a system by their access privileges on that system. Access controls are restrictions enforced on server computers to prevent accidental, intentional, or otherwise unauthorized use of data, files, and settings, particularly those critical to system operation.

One feature Windows Server 2008 brings to the table is Network Access Protection (NAP), which enforces strict health checks on all incoming client connections. That is, it inspects the state of the client to make sure it meets requirements for antivirus and antispyware coverage and currency, Windows Update currency, and so forth.


Policy-based controls
Policy-based controls on the Windows Server 2008 platform are evident virtually anywhere a user or process interacts with the system. Active Directory (AD) Domain Services is a globally policy-driven configuration framework used to define various Windows network parameters for an entire organization. Policy-based control is also apparent in protective access mechanisms deployed on the network to enforce certain requirements for connecting computers.

Network Policy Server (NPS) is Microsoft’s implementation of Remote Authentication Dial-In User Service (RADIUS), a network-policy checking server and proxy for Windows Server 2008. NPS replaces the original Internet Authentication Service (IAS) in Windows Server 2003, performs all the same functions for VPN and 802.1X-based wired and wireless links, and performs health evaluations before granting access to NAP clients.

Policy-based controls also encompass a variety of Windows Server 2008 core components and features, such as network protocol-oriented QoS and system-wide directory services provided through AD.


Client management
In addition to NAP features that ensure an optimal level of health for Windows Server 2008 networks, a number of other useful client management tools are natively available on the platform. TS Remote Desktop Connection (RDC) 6.0 verifies that clients are connecting to the correct remote computers or servers. This prevents accidental connections to unintended targets and the potential exposure of sensitive client-side information to an unauthorized server.

TS Gateway also provides endpoint-to-endpoint sessions using the Remote Desktop Protocol (RDP) over HTTPS, creating a secure, encrypted communications channel for a variety of clients, including FreeBSD, Linux, Mac OS X, and Solaris.


Software deployment
There’s a lot of redundancy in virtually every modern computing and networking environment. There are multiple workstation computers for multiple employees, possibly built with dual memory banks, dual-core processors, and doubled-up RAID drives and NICs, communicating with load-balanced servers operating in round-robin fashion — just to give a thumbnail perspective of a much bigger portrait. Chances are good that in an environment like this, when you configure, install, or modify something once, you’ll have to repeat that same action elsewhere.

Large-scale software deployments are one clear instance of this observation. Generally, you don’t install just one computer but several. It may be a few dozen, or it may be several hundred or several thousand. Either way, do you really want to process each case individually by hand? We didn’t think so, and neither do most administrators, which is why you hear terms like “unattended” or “automated” installation.

Windows Server 2008 further enhances the software deployment cycle by realizing a simple principle: Build a modular, easily modified, unified image format through which all subsequent installation images are created, each unique only in the features it removes from or adds to the base. The Windows Imaging Format (WIM) creates an abstract modular building block for operating system deployment so that you can create in-house install images that incorporate whatever applications, configurations, or extensions you deem necessary. Then, you can roll out multiple installs at a time in a completely self-contained, automated fashion that can even include previously backed-up personal user data and settings.

Source of Information : For Dummies Windows Server 2008 For Dummies

Pushing the Windows Server 2008 Virtualization Envelope

Windows Server 2008 is designed from the ground up to support virtualization. This means that you have the opportunity to change the way you manage servers and services. With the Windows Server 2008 hypervisor, Hyper-V, there is little difference between a machine running physically on a system and a machine running in a virtual instance, because the hypervisor exposes hardware to VMs much as a physical installation would. The real difference between a physical installation and a VM running on the hypervisor is access to system resources.

That’s why we propose the following:

• The only installation that should be physical is the hypervisor or the Windows Server Hyper-V role. Everything else should be virtualized.

• Instead of debating whether service offerings—the services that interact with end users—should be physical versus virtual installations, make all of these installations virtual.

• The only installation that is not a VM is the host server installation. It is easy to keep track of this one installation being different.

• It takes about 20 minutes to provision a VM-based new server installation, which is much shorter than that of a physical installation.

• Creating a source VM is easier than creating a source physical installation because you only have to copy the files that make up the VM.

• The difference between a traditional “physical” installation and a virtual installation is the amount of resources you provide the VM running on top of the hypervisor.

• All backups are the same—each machine is just a collection of files, after all. In addition, you can take advantage of the Volume Shadow Copy Service to protect each VM.

• All service-offering operations are the same because each machine is a VM.

• Because all machines are virtual, they are transportable and can easily be moved from one host to another.

• Because VMs are based on a set of files, you can replicate them to other servers, providing a quick and easy means of recovery in the event of a disaster.

• You can segregate the physical and virtual environments, giving them different security contexts and making sure they are protected at all times.

• You can monitor each instance of your “physical” installations, and if you see that it is not using all of the resources you’ve allocated to it, you can quickly recalibrate it and make better use of your physical resources.

• Every single new feature can be tested in VMs in a lab before it is put into production. If the quality assurance process is detailed enough, you can even move the lab’s VM into production instead of rebuilding the service altogether.

• You are running the ultimate virtual datacenter, because all systems are virtual and host systems are nothing but resource pools.

Source of Information : McGraw Hill Microsoft Windows Server 2008 The Complete Reference

Build the Dynamic Datacenter in Windows Server 2008

A dynamic datacenter is one where all resources are divided into two categories:

Resource Pools consist of the hardware resources in your datacenter. These hardware resources are made up of the server hardware, the network switches, and the power and cooling systems that make the hardware run.

Virtual Service Offerings consist of the workloads that each hardware resource supports. Workloads are virtual machines that run on top of a hypervisor—a code component that exposes hardware resources to virtualized instances of operating systems.

In this datacenter, the hardware resources in the resource pool are host systems that can run between 10 and 20 guest virtual machines that make up the virtual service offerings.

This approach addresses resource fragmentation. Today, datacenters that are not running virtual machines will most often see utilization ranging from 5 to perhaps 15 percent of their actual resources. This means that each physical instance of a server is wasting more than 80 percent of its resources while still consuming power, generating heat, and taking up space. In today’s green datacenters, you can no longer afford to take this approach. Each server you remove from your datacenter will save up to 650,000 kilowatt-hours per year. By turning your hardware resources into host systems, you can recover those wasted resources and move to 65 to 85 percent utilization (a rough consolidation calculation follows the list below). In addition, the dynamic datacenter will provide you with the following benefits:

High availability. Virtual workloads can be moved from one physical host to another when needed, ensuring that the virtual service offering is always available to end users.

Resource optimization. By working with virtual workloads, you can ensure that you make the most of the hardware resources in your datacenter. If one virtual offering does not have sufficient resources, fire up another hardware host and move it to that host, providing the required resources when the workload demands it.

Scalability. Virtualization provides a new model for scalability. When your workloads increase, you can add the required physical resources and control growth in an entirely new manner.

Serviceability. Because of built-in virtualization features, you can move a virtual workload from one host to another with little or no disruption to end users. This provides new serviceability models in which you can manage and maintain systems without causing service disruptions.

Cost savings. By moving to virtualization, you will earn savings in hardware reductions, power reductions, and license cost reductions.

The result is less hardware to manage and a leaner, greener datacenter.
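As a back-of-the-envelope illustration of the utilization argument above, here is a small Python sketch; the percentages are simply the midpoints of the ranges quoted in this section, not measurements:

# Rough consolidation math: if standalone servers average about 10 percent
# utilization and a virtualization host can run at about 75 percent,
# one host can absorb roughly eight such workloads.
standalone_utilization = 0.10   # from the 5-15 percent range above
host_target_utilization = 0.75  # from the 65-85 percent range above

workloads_per_host = host_target_utilization / standalone_utilization
print("~%.0f workloads per host" % workloads_per_host)  # ~8

That rough figure is consistent with the 10 to 20 guests per host cited earlier, since real hosts are typically sized with more capacity than the average standalone server.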

Improvements in the Windows Server 2008 Windows Firewall

Better Management Interface
The most significant improvement is a new graphical interface for managing the Windows Firewall locally and through Active Directory domain-based group policies. The old Control Panel item, Windows Firewall, still provides access to basic controls. The new user interface is a Microsoft Management Console (MMC) snap-in. For controlling local settings, you can open the Windows Firewall with Advanced Security console in the Administrative Tools folder. This snap-in is also part of the Group Policy editor console for managing the firewall via Active Directory domain group policies. Improvements have also been made to netsh.exe, the command-line tool for managing the firewall and IPsec. The netsh command now has a new context, advfirewall, which you can use to script configuration of the firewall or to manage it on a Server Core installation; a small scripted example follows.
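As a quick illustration of the advfirewall context, the following Python sketch shells out to netsh; the rule name and port are made-up examples, and the commands need an elevated prompt on Windows Vista or Windows Server 2008:

import subprocess

# Add an inbound allow rule for TCP port 80 through the advfirewall
# context, then dump the per-profile firewall configuration.
# "Allow Web" and port 80 are illustrative choices, not recommendations.
subprocess.run(
    ["netsh", "advfirewall", "firewall", "add", "rule",
     "name=Allow Web", "dir=in", "action=allow",
     "protocol=TCP", "localport=80"],
    check=True)
subprocess.run(["netsh", "advfirewall", "show", "allprofiles"], check=True)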


Windows Service Hardening
While many steps have been taken to protect the services themselves, an attacker can possibly still find a way to exploit a Windows system service. If a service is compromised, Windows Service Hardening will help to reduce the impact in several ways. For example, the firewall will block abnormal behavior, such as a service that does not need to access the network trying to send out HTTP traffic.


Outbound Filtering
After many years of observing hand-wringing, hyperventilation, and multi-page walls of text from vendors, pundits, and security experts (both genuine and self-proclaimed), I concluded that in most cases, outbound filtering of network traffic on host firewalls is wasted effort. The key words in that sentence are most cases; I will address what I believe are legitimate uses of outbound filtering in a moment. Outbound filtering is typically nothing more than “security theater.” Inbound filtering is what will stop malicious network traffic such as Nimda, Slammer, Sasser, Blaster, or anything else that sends unwanted network traffic to your server.

A whole lot of bloviating was directed at Microsoft when it released the much-improved firewall in Service Pack 2 for Windows XP because it did not do outbound filtering. I am here to tell you that most of those complaints came from people who do not have a good grasp (to say it politely) of what is feasible in computer security or from organizations marketing their own client firewall products. If an attacker (or a piece of malware) has taken control of your computer, what will stop them from reconfiguring the firewall to allow traffic from whatever applications they want to run? The attacker probably does not even have to reconfigure the firewall—they could simply use whatever ports are already allowed, or take control of an application that can already send out traffic.

Another really bad aspect of most client firewall solutions that do outbound filtering is the miserable user experience: After you install the cursed thing you are barraged by hundreds of pop-up dialog boxes asking if you really want to let Internet Explorer open a connection to www.microsoft.com or if you are absolutely certain it is a good idea for MSN Web Messenger to send traffic to msn.com. After a week of seeing several score of these dialog boxes, users—including paranoid systems administrators and security experts—tend to either disable the thing completely or become trained to click Yes or Accept immediately so they can get on with whatever it was they were hoping to accomplish on the computer!

Microsoft’s newest server operating system makes intelligent use of outbound filtering by blocking system services from initiating network connections except for what they require to function properly. If a service is exploited, it is not going to be able to reconfigure the firewall without alerting the user because it is blocked from modifying the firewall settings. By default, the new firewall allows all other outbound network packets. You could change the default behavior to block all outbound traffic, but I do not recommend it because you will spend many hours, days, and perhaps even weeks trying to figure out every exception you need to make to allow your server to do everything you need it to do.


Granular Rules
In Windows Server 2008 and Windows Vista the firewall is enabled for both inbound and outbound connections. The default policies block most inbound traffic and allow most outbound traffic. The firewall supports filtering any IP protocol number, unlike the Windows XP firewall that could only filter Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Control Message Protocol (ICMP) traffic. You can configure specific rules for blocking or allowing traffic by using IP addresses, IP protocol numbers, Active Directory directory service accounts and groups, system services, UDP and TCP source and destination ports, specific types of interfaces, and ICMP by type and code.


Location-Aware Profiles
Windows Firewall takes advantage of the new TCP/IP stack’s ability to automatically track which network it is connected to. You can configure rules and settings for each of the three profiles: Domain, Private, and Public. The Domain profile applies when all of the computer’s networks include Active Directory domain controllers for the domain that the computer belongs to. The Private profile is used when all active network connections have been designated by an administrator as private networks protected by a firewall. The Public profile is used when the computer is connected directly to the Internet, or when the network has not been designated as Private or Domain.


Authenticated Bypass
You can configure rules that allow specific computers or groups of computers to bypass other firewall rules by using IPsec authentication in those rules. This means that you can block a particular type of traffic from all other hosts but allow a select few systems to bypass that restriction and access the blocked service. The rules can be even more specific, detailing which ports or programs can receive the traffic.


Active Directory User, Computer, and Groups Support
Rules can include users, computers, and groups defined in Active Directory, but you must secure connections affected by these types of rules with IPsec using an Active Directory account credential.


IPv6 Support
Windows Firewall with Advanced Security fully supports IPv6.


IPsec Integration
In Windows XP and Windows Server 2003, rules for the Windows Firewall and IPsec are configured separately. Because both can block or allow inbound traffic, you could accidentally create redundant or even contradictory rules. These types of configuration errors can be difficult to troubleshoot. The new Windows Firewall combines the configuration of both the firewall and IPsec into the same graphical and command-line tools, which both simplifies management and reduces the risk of misconfiguration.

Source of Information : Microsoft Press Windows Server 2008 Security Resource Kit

Windows Filtering Platform in Windows Server 2008

To facilitate the development of network traffic filtering products, Microsoft created the Windows Filtering Platform (WFP). It is available in both Windows Vista and Windows Server 2008. WFP is not a firewall but rather a set of system services and Application Programming Interfaces (APIs) for use by Microsoft and third-party developers. WFP enables unparalleled access to the Transmission Control Protocol/Internet Protocol (TCP/IP) stack so that inbound and outbound network packets can be examined or changed before allowing them to proceed. Developers can use WFP to build a variety of diagnostic and security tools, including firewalls and antivirus software.


WFP includes the following architectural components:

• The RPC Interface provides access to the WFP. Firewalls and other applications make calls to the WFP API, which are then passed to the Base Filtering Engine (BFE).

• The Base Filtering Engine is the user-mode component that arbitrates between applications making filter requests and the Generic Filtering Engine, which runs inside the driver that implements the next-generation TCP/IP stack. The BFE adds and removes filters from the system, stores filter configuration, and enforces WFP configuration security.

• The Generic Filtering Engine (GFE) is the kernel-mode component that receives filter information from the Base Filtering Engine, interacts with callout drivers, and interacts with the TCP/IP stack. As packets are processed up and down the new TCP/IP stack, they are evaluated by the Generic Filtering Engine to see whether they should be allowed through. The GFE performs this evaluation by comparing each packet with the relevant filters and callout modules.

• Callout modules are used when an application wants to perform deep packet inspection or data modification. For example, an antivirus tool may want to inspect traffic at the application layer before it is actually forwarded to the target application to ensure that no malware is present in the data.

Source of Information : Microsoft Press Windows Server 2008 Security Resource Kit

Run Physical or Virtual Machines in Windows Server 2008

With the coming of Windows Server 2008 and its embedded virtualization technology, or hypervisor, you need to rethink the way you provide resources and build the datacenter. With the advent of powerful new 64-bit servers running either WS08 Enterprise or Datacenter edition, it has now become possible to virtualize almost every server type, with little or no difference in performance, especially if you base your host server builds on Server Core. Users do not see any difference in operation, whether they are on a virtual or physical machine. And with the advent of the new hypervisor built into WS08, the virtual-versus-physical process becomes completely transparent. That’s because unlike previous Microsoft virtualization technologies, which actually resided over the top of the operating system, the new hypervisor resides below the operating system level.

In addition, the WS08 hypervisor has a very small footprint and does not need an additional operating system to run. When you install the WS08 Hyper-V role with Server Core, the hypervisor is installed directly on top of the hardware. An advantage this model gives you is that all system drivers reside in the virtual machine itself, not in the hypervisor.

All the hypervisor does is expose hardware resources to the virtual machine (VM). The VM then loads the appropriate drivers to work with these hardware resources. VMs have better access to the host system’s resources and run with better performance because there are fewer translation layers between them and the actual hardware.

To further support the move to the dynamic datacenter, Microsoft has changed the licensing model for virtual instances of Windows Server. This change was first initiated with WS03 R2: running an Enterprise edition version on the host system automatically grants four free virtual machine licenses of WS03 R2 Enterprise edition (EE). Add another WS03 R2 EE license, and you can build four more VMs. On average, organizations will run up to 16 virtual machines on a host server, requiring only four actual licenses of WS03 R2 EE.

Microsoft carries over this licensing model with WS08. The first Enterprise edition license grants one license for the host and four licenses for VMs. Each additional license grants four more licenses for VMs (the arithmetic is sketched below). If you purchase the WS08 Datacenter edition, you can run an unlimited number of VMs on that host. Remember also that the licenses for VMs support any version of Windows Server. This means you can run Windows NT, Windows 2000, or Windows Server 2003, as well as WS08. Virtualization provides great savings and decreases general server provisioning timelines, as well as reducing management overhead. For example, one system administrator can manage well over 100 virtualized servers, as well as the hosts required to run them.
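A trivial Python sketch of that Enterprise edition arithmetic, assuming the four-VM-licenses-per-license rule described above (Datacenter edition removes the limit entirely):

import math

# Each WS08 Enterprise edition license grants four VM licenses, so a
# host running 16 VMs needs four licenses, matching the example above.
def ee_licenses_needed(vm_count):
    return math.ceil(vm_count / 4)

print(ee_licenses_needed(16))  # 4
print(ee_licenses_needed(18))  # 5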

Source of Information : McGraw Hill Microsoft Windows Server 2008 The Complete Reference

Windows Server 2008 Web-based services

Task-based Web server management is handled in Internet Information Services (IIS) 7.0, a powerful, modular platform for remote applications and services with enhanced security, featuring health monitoring for Web services. IIS 7.0 and .NET Framework 3.0 provide the basis for application and user connectivity, enabling users to distribute and visualize shared information.

Windows Server 2008 SharePoint Services is a scalable, manageable platform for collaboration and the development of Web-based business applications. It can be installed as an integrated server role through the new Server Manager console — no more downloading and running Setup. The SharePoint Products and Technologies Configuration Wizard walks you through the installation process for server farm configurations, dramatically simplifying deployment for large-scale enterprise networks. Consult Microsoft TechNet articles for more information on SharePoint Services.

Source of Information : For Dummies Windows Server 2008 For Dummies

Windows Server 2008 Data-based services

Centralized application and data access helps keep sensitive and personally identifying information out of the remote working environment. Less data leaving the corporate network reduces the risk of accidental or incidental data loss through the interception, theft, or misplacement of company notebooks. Through TS Gateway and TS RemoteApp, participants can be limited to a single application or several resources, without exposing any more information than necessary to do their jobs.

For those mobile users out in the field, BitLocker Drive Encryption provides a complete cryptographic solution to safely and securely store sensitive data at rest. Everything up to and including core Windows operating system data and files gets cryptographic coverage so that tampering by unauthorized parties is thwarted, even if the hard drive is removed or the notebook is otherwise manipulated.

Windows Server 2008 File Services are technological provisions that facilitate storage management, file replication, folder sharing, fast searching, and accessibility for UNIX client computers. See Microsoft TechNet articles for information on these features.

Source of Information : For Dummies Windows Server 2008 For Dummies

Windows Server 2008 Application access

Terminal Services (TS) in Windows Server 2008 implements Microsoft’s most powerful centralized application access platform and offers an array of new capabilities that reshape administrator and user experiences alike.

TS provides centralized access to individual applications without requiring a full-fledged remote desktop session (although that’s still an option). Applications operating remotely are integrated on local user desktops, where they look and feel like local applications. An organization can employ HTTPS over VPN to secure remote access to centralized applications and desktops.

Using TS in a Windows Server 2008 environment enables you to:

• Deploy applications that integrate with the local user desktop.
• Provide central access to managed Windows desktops.
• Enable remote access for existing WAN applications.
• Secure applications and data within the data center.

Windows Server 2008 TS includes the following features:

• TS RemoteApp: Programs accessed through TS behave as if they run locally on a remote user’s computer. Users may run TS RemoteApp programs alongside local applications.

• TS Gateway: Authorized remote users may connect to TS servers and desktops on the intranet from any Internet-accessible device running Remote Desktop Connection (RDC) 6.0. TS Gateway uses Remote Desktop Protocol (RDP) via HTTPS to form a secure, encrypted channel between remote users and intranet resources.

• TS Web Access: TS RemoteApp is made available to remote end users through TS Web Access, which can be a simple default Web page used to deploy RemoteApp via the Web. Resources are accessible via both intranet and Internet computers.

• TS Session Broker: A simpler alternative to load-balancing TS is provided through TS Session Broker, a new feature that distributes session data to the least active server in a small farm (two to five servers). IT administrators can even map several TS IP addresses to a single human-addressable DNS name, so end users needn’t be aware of any specific settings to connect and reconnect TS Session Broker sessions.

• TS Easy Print: Another new feature in Windows Server 2008 enables users to reliably print from a TS RemoteApp program or desktop session to either a local or network printer installed on the client computer. Printers are supported without any installation of print drivers on the TS endpoint, which greatly simplifies the network sharing process.

In addition, the Application Server role in Windows Server 2008 provides an integrated environment for customizing, deploying, and running server-based business applications. This supports applications that use ASP.NET, COM+, Message Queuing, Microsoft .NET Framework 2.0/3.0, Web Services, and distributed transactions that respond to network-driven requests from other applications and client computers.

The Application Server role is a requirement for Windows Server 2008 environments running applications that depend upon role services or features selected during the installation process. Typically, this role is required when deploying internally developed business applications, such as those that expose database-stored customer records through Windows Communication Foundation (WCF) Web Services.

Source of Information : For Dummies Windows Server 2008 For Dummies

Windows Server 2008 Name services

Windows Internet Name Service (WINS) is Microsoft’s implementation of NetBIOS Name Server (NBNS) on Windows; its relationship to NetBIOS names is very similar to the relationship between DNS and domain names. This is a basic service for NetBIOS computer names, which are dynamically updated and mapped through DHCP. WINS allows client computers to register and request NetBIOS names and IP addresses in a dynamic, distributed fashion to resolve locally connected Windows computer resources.

A single network may have several WINS servers operating in push/pull replication, perhaps in a decentralized, distributed hub-and-spoke configuration. Each WINS server contains a full copy of every other WINS server’s records because there’s no hierarchy as with DNS — but the database may still be queried for the address to contact (rather than broadcasting a request for the right one).

WINS is only necessary if pre-Windows 2000 clients or servers or Exchange 2000/2003 clients are present and resolving NetBIOS names. Realistically, most networking environments are better served by DNS, particularly in Windows Server 2003 or 2008 environments. However, WINS remains an integral function in Windows networks for supporting older clients running legacy software.

Source of Information : For Dummies Windows Server 2008 For Dummies

Windows Server 2008 NAT

Network Address Translation (NAT), network masquerading, and IP masquerading are all terms used to describe rewriting packets as they pass through an intermediary networking device to appear as if they originated from that device. There are many NAT arrangements, types, and variations, but all operate along the same lines.

NAT confers two profound advantages on outbound network traffic:

• It permits internal networks to use private IP addresses as per RFC 1918 and handles incoming and outgoing traffic to make that arrangement work.

• It hides the details of internal network addresses, whether public or private — which explains the masquerading terminology used in the preceding paragraph.

There are several distinct advantages to this kind of arrangement. For starters, NAT insulates internal computers from external probes, keeping crime out like a security fence. At the same time, NAT enables many internal computers to utilize a single external network connection where only a single IP address is assigned. NAT originally began as a response to the IPv4 address space shortage but has proven useful in many other ways.
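Python’s standard library can illustrate the RFC 1918 distinction that the first advantage relies on; a quick sketch (note that is_private also flags loopback, link-local, and documentation ranges, so it is a convenience check rather than a strict RFC 1918 test):

import ipaddress

# 10/8, 172.16/12, and 192.168/16 are the RFC 1918 private ranges;
# the specific addresses below are arbitrary examples.
for addr in ("10.1.2.3", "172.16.0.7", "192.168.1.10", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")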

Sometimes, communications protocols can be sensitive to alterations in packet header data. NAT mangles the original data contained in a packet, which can disrupt certain types of security protocols that absolutely require a packet to pass from sender to receiver unaltered. This was the case for IPsec when it first arrived on the scene, because NAT modified critical portions of the header elements upon which IPsec relied. As a result, connections failed, and trouble followed close behind. Today, such traffic is handled without much difficulty, thanks to innovations in how NAT works and how security protocols are used.

NAT can be used for load balancing through connection redirection, as part of a failover design to establish high availability, as a transparent proxy for content caching and request filtering, or to connect two networks with overlapping addresses.

Source of Information : For Dummies Windows Server 2008 For Dummies

Cloud storage is for blocks too, not just files

One of the misconceptions about cloud storage is that it is only useful for storing files. This assumption comes from the popularity of file...