Windows Server 2008 Hyper-V New Features - Provide Better Reliability Capabilities

Another critical area of improvement in Hyper-V is its support for capabilities that improve the reliability and recoverability of the Hyper-V host and guest environments. The technologies added to Windows Server 2008 and Hyper-V include clustering technologies as well as server snapshot technologies.

Clustering is supported on Hyper-V both for host clustering and for guest clustering. The clustering capabilities allow redundancy both at the host server level and at the Hyper-V guest level, with both forms of clustering greatly improving the uptime that can be achieved for applications.

Another capability added to Hyper-V for better reliability is the ability to take snapshots of virtual guest sessions. A snapshot retains the state of a guest image so that an administrator can, at any time, roll the image back to the state it was in when the snapshot was taken. This capability is frequently used to take a snapshot before a patch or update is applied so that the organization can, if need be, quickly and easily roll back to that image. Snapshots are also used for general recovery purposes. If a database becomes corrupted or an image no longer works, the network administrator can roll the image back to a point before the corruption or system problems started to occur.

Source of Information : Sams - Windows Server 2008 Hyper-V Unleashed 08

Windows Server 2008 Hyper-V New Features - Provide Better Administration and Guest Support

Better Administration Support. Hyper-V guest sessions can be administered by two separate tools. One tool, the Hyper-V Administration tool, comes free out of the box with Windows Server 2008. The other tool, System Center Virtual Machine Manager (VMM), can be purchased separately. Some overlap exists between what the Hyper-V Administration tool and the VMM tool do. For the most part, however, the built-in tool enables you to start and stop guest sessions and to take snapshots of the sessions for image backup and recovery. The VMM tool provides all those capabilities, too, but it also enables an administrator to organize images across different administrative groups. In addition, the VMM tool allows for the creation and management of template images for faster and easier image provisioning, provides a way to create a virtual image from existing physical or running virtual sessions, and provides clustering of virtual images across multiple VMM-managed host servers.


Better Guest Support. Hyper-V added several new features that provide better support for guest sessions, such as 64-bit guest support, support for non-Windows guest sessions, and support for dedicated processors in guest sessions.

Hyper-V added the ability to support not only 32-bit guest sessions, as Microsoft's earlier Virtual Server 2005 product did, but also 64-bit guest sessions. This improvement allows guest sessions to run some of the latest 64-bit-only application software from Microsoft and other vendors, such as Exchange Server 2007. And although some applications will run in either 32-bit or 64-bit versions, for organizations looking for faster information processing or support for more than 4GB of RAM, the 64-bit guest session provides the same capabilities as if the organization were running the application on a dedicated physical 64-bit server system.

With Hyper-V, you can also dedicate one, two, or four processor cores to a virtual guest session. Instead of aggregating the performance of all the Hyper-V host server's processors and dividing the processing performance for the guest images somewhat equally, an administrator can dedicate processors to guest images to ensure higher performance for the guest session. With hardware supporting two or four quad-core processors in a single server system, there are plenty of processors in servers these days to appropriately allocate processing speed to the server guests that require more performance.

Support for non-Windows guests, such as Linux, is an indication that Microsoft is serious about providing multiplatform support within its Hyper-V host servers. Linux servers are not only supported to run as guest sessions on Hyper-V, but Microsoft has also developed integration tools to better support Linux guest integration into a managed Hyper-V host environment.

Source of Information : Sams - Windows Server 2008 Hyper-V Unleashed 08

Windows Server 2008 Hyper-V New Features - Provide Better Virtual Host Capabilities

The broadest improvements made by Microsoft to the virtual host capabilities of Hyper-V are the core functions added in to Windows Server 2008 that relate to security, performance, and reliability. However, the addition of a new virtual switch capability in Hyper-V provides greater flexibility in managing network communications among guest images, and between guest images and an organization’s internetworking infrastructure.

Effectively, Windows Server 2008 and Hyper-V leverage the built-in capabilities of Windows 2008 along with specific Hyper-V components to improve overall support, administration, management, and operations of a Hyper-V host server. When a Hyper-V host server is joined to a Microsoft Active Directory environment, the host server can be managed and administered just like any other application server in the Active Directory environment. Security is centralized and managed through the use of Active Directory organizational units, groups, and user accounts. Monitoring of the Hyper-V host server and its guest sessions is done through the same tools organizations use to monitor and manage their existing Windows server systems.

Security policies, patch management policies, backup procedures, and the corresponding tools and utilities used to support other Windows server systems can be used to support the Hyper-V host server system. The Hyper-V host server becomes just another managed Windows server on the network.

Also important is the requirement for the Hyper-V host server to run on a 64-bit system, not only to take advantage of hardware-assisted virtualization processors such as the AMD64 and Intel IA-32E and EM64T (x64), but also to provide more memory in the host server to distribute among guest sessions. A 32-bit host server was limited to about 4GB of RAM, which left few meaningful ways to divide that memory among guest sessions that still had to run real business applications. With 64-bit host servers supporting 8GB, 16GB, 32GB, or more, however, guest sessions can easily take 4GB or 8GB of memory each and still leave room for other guest sessions, tasks, and functions.

Unlike multiple physical servers that might be connected to different network switches, the guest sessions on a Hyper-V host all reside within a single server. Therefore, the virtual switch capability built in to the Hyper-V Administration tool enables the Hyper-V administrator to create special network segments and associate virtual guest sessions to specific network adapters in the host server to ensure that virtual guests can be connected to network segments that meet the needs of the organization.

Source of Information : Sams - Windows Server 2008 Hyper-V Unleashed 08

The Rationale for Windows Server Core

The need for something like the Windows server core installation option of Windows Server 2008 is pretty obvious. Windows Server today is frequently deployed to support a single role in an enterprise or to handle a fixed workload. For example, organizations often deploy the DHCP Server role on a dedicated Windows Server 2003 machine to provide dynamic addressing support for client computers on their network. Now think about that for a moment: you've just installed Windows Server 2003 with all its various services and components on a solid piece of hardware, just to use the machine as a DHCP server and nothing more. Or maybe as a file server as part of a DFS file system infrastructure you're setting up for users. Or as a print server to manage a number of printers on your network. The point is, you've got Windows Server 2003 with all its features doing only one thing. Why do you need all those extra binaries on your machine, then? And think about when you need to patch your system: you've got to apply all new software updates to the machine, even though the functionality that many of those updates fix will never actually be used on that particular system. Why should you have to patch IIS on your server if the server is not going to be used for hosting Web sites? And might not having the IIS binaries on your server make it more vulnerable, even though the IIS component isn't actually being used or even installed? The more stuff you've got on a box, the more difficult it is to secure (or to be sure that it's secure) and the more complex it is to maintain.

Enter the Windows server core installation option of Windows Server 2008. Now, instead of installing all of Windows Server 2008 on your box while using only a portion of it, you can install a minimal subset of Windows Server 2008 binaries and you need to maintain only those particular binaries. The value proposition for enterprises of the Windows server core installation option is plain to see:

• Fewer binaries mean a reduced attack surface and, hence, a greater degree of protection for your network.

• Less functionality and a role-based paradigm also mean fewer services running on your machine and, therefore, again less attack surface.

• Fewer binaries also mean a reduced servicing surface, which means fewer patches, making your server easier to service and orienting your patch management cycle according to roles instead of boxes. Estimates indicate that using the Windows server core installation option can reduce the number of patches you need to apply to your server by as much as 50 percent compared with full installations of Windows Server 2008.

• Fewer roles and features also mean easier management of your servers and enable different members of your IT staff to specialize better according to the server roles they need to support.

• Finally, fewer binaries also mean less disk space needed for the core operating system components, which is a plus for datacenter environments in particular.

The Windows server core installation option of Windows Server 2008 is all of these and more, and it's included in the Standard, Enterprise, and Datacenter editions of Windows Server 2008. Windows server core is not a separate product or SKU; it's an installation option you can select during manual or unattended install. And it's available on both the x86 and x64 platforms of Windows Server 2008. (It's not available on IA64 or in the Web edition SKU of Windows Server 2008.) The bottom line? The Windows server core installation option of Windows Server 2008 is more secure and more reliable, and it requires less management overhead than using a full installation of Windows Server 2008 for an equivalent purpose in your enterprise.

A Windows server core server provides you with minimal server operating system functionality and a low attack surface for targeted roles.

Source of Information : Introducing Windows Server 2008

Choosing to Virtualize Servers

“Virtualization as an IT Organization Strategy” identified basic reasons why organizations have chosen to virtualize their physical servers into virtual guest sessions. However, organizations also benefit from server virtualization in several areas. Organizations can use virtualization in test and development environments. They can also use virtualization to minimize the number of physical servers in an environment, and to leverage the capabilities of simplified virtual server images in high-availability and disaster recovery scenarios.


Virtualization for Test and Development Environments
Server virtualization got its start in test and development environments in IT organizations. The simplicity of adding a single host server and loading up multiple guest virtual sessions to test applications or develop multiserver scenarios without having to buy and manage multiple physical servers was extremely attractive. Today, with physical servers offering 4, 8, or 16 processor cores in a single system and significant performance capacity, organizations can host dozens of test and development virtual server sessions just by setting up one or two host servers.

With administrative tools built in to the virtual server host systems, the guest sessions can be connected together or completely isolated from one another, providing virtual local area networks (LANs) that simulate a production environment. In addition, an administrator can create a single base virtual image with, for example, Windows Server 2003 Enterprise Edition on it, and can save that base image as a template. To create a "new server" whenever desired, the administrator just has to make a duplicate copy of the base template image and boot that new image. Creating a server system takes 5 minutes in a virtual environment. In the past, the administrator would have to acquire hardware, configure the hardware, shove in the Windows Server CD, and wait 20 to 30 minutes before the base configuration was installed. And then after the base configuration was installed, it was usually another 30 to 60 minutes to download and install the latest service packs and patches before the system was ready.

With the addition of provisioning tools such as Microsoft System Center Virtual Machine Manager 2008 (VMM), covered in "Using Virtual Machine Manager 2008 for Provisioning," the ability to create new guest images from templates and to delegate the provisioning process to others greatly simplifies making virtual guest sessions available for test and development purposes.


Virtualization for Server Consolidation
Another common use of server virtualization is consolidating physical servers. Organizations that have undertaken concerted server consolidation efforts have been able to decrease the number of physical servers by upward of 60% to 80%. It’s usually very simple for an organization to decrease the number of physical servers by at least 25% to 35% simply by identifying low-usage, single-task systems.

Servers such as domain controllers, Dynamic Host Configuration Protocol (DHCP) servers, web servers, and the like are prime candidates for virtualization because they are typically running on simple “pizza box” servers (thin 1 unit high rack-mounted systems).

Beyond just taking physical servers and doing a one-for-one replacement as virtual servers in an environment, many organizations are realizing they simply have too many servers doing the same thing, underutilized because of a lack of demand or capacity. The excess capacity may have been projected based on organizational growth expectations that never materialized, or it may have since been reduced due to organizational consolidation.

Server consolidation also means that organizations can now decrease their number of sites and data centers to fewer, centralized data centers. When wide area network (WAN) connections were extremely expensive and not completely reliable, organizations distributed servers to branch offices and remote locations. Today, however, the need for a fully distributed data environment has greatly diminished because the cost of Internet connectivity has decreased, WAN performance has increased, WAN reliability has drastically improved, and applications now support full-featured, robust web capabilities.

Don’t think of server consolidation as just taking every physical server and making it a virtual server. Instead, spend a few moments to think about how to decrease the number
of physical (and virtual) systems in general, and then virtualize only the number of systems required. Because it is easy to provision a new virtual server, if additional capacity is required, it doesn’t take long to spin up a new virtual server image to meet the
demands of the organization. This ease contrasts starkly with requirements in the past: purchasing hardware and spending the better part of a day configuring the hardware and installing the base Windows operating system on the physical use system.


Virtualization as a Strategy for Disaster Recovery and High Availability
Most organizations realize a positive spillover effect from virtualizing their environments: they create higher availability and enhance their disaster-recovery potential, and thus fulfill other IT initiatives. Disaster recovery and business continuity are on the minds of most IT professionals; specifically, how to quickly bring servers and systems back online in the event of a server failure or a disaster (natural or otherwise). Without virtualization, disaster-recovery plans generally require the addition (to a physical data center perhaps already bloated with too many servers) of even more servers to create redundancy, both in the data center and in a remote location.

Virtualization has greatly improved an organization's ability to actually implement a disaster-recovery plan. As physical servers are virtualized and the organization begins to decrease its physical server count by 25%, 50%, or more, the organization can then repurpose spare systems as redundant servers or as hosts for redundant virtual images, both within the data center and in remote locations for redundant data sites. Many organizations have found that their consolidation of server count is negated because, even though they virtualized half their servers, they went back and added twice as many servers to gain redundancy and fault tolerance. The net of the effort, however, is that the organization has been able to get disaster recovery in place without adding physical servers to the network.

After virtualizing servers as guest images, organizations are finding that a virtualized image is very simple to replicate; after all, it's typically nothing more than a single file sitting on a server. In its simplest form, an organization can just "pause" the guest session temporarily, "copy" the virtual guest session image, and then "resume" the guest session to bring it back online. The copy of the image has all the information of the server. The image can be used to re-create a scenario in a test lab environment, or it can be saved so that in the event that the primary image fails, the copy can be booted to bring the server back up and running immediately. There are more elegant ways to replicate an image file; however, the ability for an IT department to bring up a failed server within a data center or remotely has been greatly simplified through virtualization technologies.

Source of Information : Sams - Windows Server 2008 Hyper-V Unleashed

What Is Server Virtualization and Microsoft Hyper-V?

Hyper-V is a long-awaited technology that has been anticipated to help Microsoft leap past rival virtual server technologies such as VMware and XenServer. Although Microsoft has had a virtual server technology for a few years, its features and capabilities have always lagged behind those of its competitors. Windows Server 2008 was written to provide enhanced virtualization technologies through a rewrite of the Windows kernel itself to support virtual server capabilities equal to, if not better than, other options on the market. This section covers the Hyper-V server role in Windows Server 2008 and provides best practices that organizations can follow to leverage the capabilities of server virtualization to lower costs and improve the manageability of an organization's network server environment.

Server virtualization is the ability for a single system to host multiple guest operating system sessions, effectively taking advantage of the processing capabilities of very powerful servers. Most servers in data centers run under 5% to 10% processor utilization, meaning that excess capacity on the servers goes unused. By combining the workloads of multiple servers onto a single system, an organization can better utilize the processing power available in its networking environment.

Hyper-V enables an organization to consolidate several physical server systems into a single host server while still providing isolation between virtual guest session application operations. With an interest to decrease costs in managing their information technology (IT) infrastructure, organizations are virtualizing servers. Bringing multiple physical servers into a single host server decreases the cost of purchasing and maintaining multiple physical server systems, decreases the cost of electricity and air-cooling systems to maintain the physical servers, and enables an organization to go “green” (by decreasing the use of natural resources in the operation of physical server systems).

Source of Information : Sams - Windows Server 2008 Hyper-V Unleashed

Commonly used TCP/IP ports

When your web browser or email program connects to another computer on the Internet, it does so through a TCP/IP port. If you have a web server or FTP server running on your PC, it opens a port through which other computers can connect to those services. Port numbers are used to distinguish one network service from another.

Mostly, this is done behind the scenes. However, knowing which programs use a specific port number becomes important when you start considering security. A firewall uses ports to form its rules about which types of network traffic to allow and which to prohibit. And the Active Connections utility (netstat.exe), used to determine which ports are currently in use, helps you uncover port-related vulnerabilities in your system. Ports, firewalls, and the Active Connections utility are all discussed in Chapter 7.
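
For example, a quick way to see which ports are currently open on your own PC, and which programs opened them, is to run the Active Connections utility from a Command Prompt window. The port and PID values below are only placeholders; substitute whatever netstat reports on your system.

rem List all connections and listening ports numerically, along with the owning process ID (PID)
netstat -a -n -o

rem Check whether anything is listening on a particular port (3389 is just an example)
netstat -a -n -o | findstr ":3389"

rem Look up which program owns a PID reported by netstat (1234 is a placeholder)
tasklist /FI "PID eq 1234"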

Some firewalls make a distinction between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) ports, which is typically unnecessary. In most cases, programs that use the more common TCP protocol will use the same port numbers as their counterparts that use the less reliable UDP protocol.

Ports are divided into three ranges:
Well-known ports: 0–1023
Registered ports: 1024–49151
Dynamic and/or private ports: 49152–65535

The list below shows the most commonly used ports. For a more complete listing, see any of these resources:
http://www.iana.org/assignments/port-numbers
http://www.faqs.org/rfcs/rfc1700.html
http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers

Those ports marked with an ✗ are commonly exploited by worms and other types of remote attacks. Unless you specifically need them, you should block them in your firewall or router (a sample firewall rule for doing so is sketched after the list).

20–21 FTP (File Transfer Protocol)
22 SSH (Secure Shell)
23 Telnet
25 SMTP (Simple Mail Transfer Protocol), used for sending email
42 WINS (Windows Internet Name Service)
43 WhoIs
50–51 IPSec ESP and AH (used for VPN, Virtual Private Networking)
53 DNS (Domain Name System), used for looking up domain names
67 DHCP (Dynamic Host Configuration Protocol)
69 ✗ TFTP
70 Gopher
79 Finger
80 HTTP (Hypertext Transfer Protocol), used by web browsers to download standard web pages
110 POP3 (Post Office Protocol, version 3), used for retrieving email
119 NNTP (Network News Transfer Protocol), used for newsgroups
123 NTP (Network Time Protocol), used for Windows’ Internet Time feature
135 ✗ RPC (Microsoft Windows Remote Procedure Call)
137–139 ✗ NETBIOS Services
143 IMAP4 (Internet Mail Access Protocol version 4)
161–162 SNMP (Simple Network Management Protocol)
194 IRC (Internet Relay Chat)
220 IMAP3 (Internet Mail Access Protocol version 3)
443 HTTPS (HTTP over TLS/SSL), used by web browsers to download secure web pages
445 ✗ SMB (file sharing for Microsoft Windows networks and Active Directory); also used by Samba
500 IPSec IKE key exchange (used for VPN, Virtual Private Networking)
514 RSH (Remote Shell)
531 AOL Instant Messenger (AIM)
554 RTSP (Real Time Streaming Protocol), used for streaming audio and video
563 NNTPS (Network News Transfer Protocol over SSL), used for secure newsgroups
593 ✗ RPC (Microsoft Windows Remote Procedure Call) over HTTP
691 Microsoft Exchange Routing
750 Kerberos IV email authenticating agent
989–990 FTP over SSL (secure File Transfer Protocol)
992 Telnet over SSL (secure Telnet)
993 IMAP4 over SSL (secure Internet Mail Access Protocol version 4)
995 POP3 over SSL (secure Post Office Protocol, version 3)
1026 ✗ Windows Messenger - pop ups (spam)
1194 OpenVPN
1214 ✗ Kazaa peer-to-peer file sharing
1270 Microsoft Operations Manager 2005 agent (MOM 2005)
1352 Lotus Notes/Domino mail routing
1433–1434 Microsoft SQL database system, monitor
1503 Windows Messenger - application sharing and whiteboard
1512 WINS (Windows Internet Name Service)
1701 VPN (Virtual Private Networking) over L2TP
1723 VPN (Virtual Private Networking) over PPTP
1755 MMS (Microsoft Media Services) for Windows Media Player
1812–1813 RADIUS authentication protocol
1863 Windows Live Messenger - instant messaging
1900 SSDP (Simple Service Discovery Protocol), used for discovery of UPnP devices
3074 Xbox Live (Microsoft gaming console)
3306 MySQL database
3389 Remote Desktop Sharing (Microsoft Terminal Services), used for remote control
4444 ✗ W32.BLASTER.WORM virus
5004 and up Windows Messenger - audio and video conferencing (port is chosen dynamically)
5010 Yahoo! Messenger
5190 AOL Instant Messenger
5631, 5632 pcAnywhere, used for remote control
5800–5801, 5900–5901 VNC (Virtual Network Computing), used for remote control
6699 Peer-to-peer file sharing, used by Napster-like programs
6891–6901 Windows Live Messenger - file transfer, voice
6881–6999 BitTorrent peer-to-peer file transfer clients
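
If you decide to block one of the ✗-marked ports on the PC itself rather than at the router, Windows Vista's built-in firewall can do it from an elevated Command Prompt. The following is only a sketch: the rule name is arbitrary, and port 135 is simply one of the commonly exploited ports from the list above.

rem Block inbound TCP traffic to port 135 (Microsoft RPC)
netsh advfirewall firewall add rule name="Block TCP 135" dir=in action=block protocol=TCP localport=135

rem Confirm the rule was created
netsh advfirewall firewall show rule name="Block TCP 135"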

Source of Information : OReilly Windows Vista Annoyances Tips Secrets and Hacks

How to protect and clean your PC

The most popular and typically the most effective way to rid your computer of malware is to use dedicated antivirus software and antispyware software. These programs rely on their own internal databases of known viruses, worms, Trojans, spyware, and adware, and as such, must be updated regularly (daily or weekly) to be able to detect and eliminate the latest threats.

Vista is the first version of Windows to include an antispyware tool, known as Windows Defender (found in Control Panel). The best part about it is that, left to its own devices, Windows Defender will regularly scan your system and even keep its spyware definitions up to date. But Vista still doesn't come with an antivirus tool, mostly to appease the companies that make money selling aftermarket antivirus software (which is ironic, since the best tools are free). Following is a list of the more popular antivirus products.

Avast Home Edition (http://www.avast.com)
Freeware, with a slick interface and good feature set.

Avira AntiVir Classic (http://www.free-av.com)
Freeware, with frequent updates, but only average detection rates.

AVG Anti-Virus (http://free.grisoft.com)
Freeware, a popular yet poor-performing antivirus solution.

Kaspersky Antivirus Personal (http://www.kaspersky.com)
Very highly regarded solution with an excellent detection record.

McAfee VirusScan (http://www.mcafee.com)
Trusted and well-established all-around virus scanner with an intuitive interface and few limitations.

Panda Anti-Virus Titanium & Platinum (http://www.pandasecurity.com)
Lesser-known but capable antivirus software.

Symantec Norton AntiVirus (http://www.symantec.com)
Mediocre, slow antivirus software with a well-known name—but beware of its expensive subscription plan to keep virus definitions updated.


Antispyware software is a more complex field, and as a result, you’ll have the best luck using multiple tools in addition to Windows Defender. The top antispyware products include:

Ad-Aware Personal Edition (http://www.lavasoft.de)
Ad-Aware is one of the oldest antispyware tools around, but its definitions are still updated frequently. The personal edition is free and very slick, although it’s not usually as effective at removing spyware as Spybot or Spysweeper, both discussed next.

Spybot - Search & Destroy (http://www.spybot.info)
Not quite as nice to look at as Ad-Aware, Spybot excels at purging hard-to-remove spyware. And while both Ad-Aware and Spybot remove tracking cookies from Internet Explorer, Spybot supports Firefox as well.

Spy Sweeper (http://www.webroot.com)
This highly regarded antispyware tool, while not free like the first two, is still a welcome addition to any spyware-fighter’s toolbox, and can often remove malware that the others miss.


So, armed with proper antivirus and antispyware software, there are four things you should do to protect your computer from malware:

• Place a router between your computer and your Internet connection.

• Scan your system for viruses regularly, and don’t rely entirely on your antivirus program’s auto-protect feature. Run a full system scan at least every two weeks.

• Scan your system for spyware regularly, at least once or twice a month. Do it more often if you download and install a lot of software. (One way to schedule recurring scans is sketched after this list.)

• See the discussion of the ways that malware spreads, and the steps you can take to reduce your exposure to viruses, spyware, adware, and other malware.
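
If you'd rather not rely on memory for the recurring scans mentioned above, you can schedule them. The command below is only a sketch: it assumes Windows Defender's command-line scanner (MpCmdRun.exe) is present at its default location and that your Defender build supports the -Scan and -ScanType switches; the task name and schedule are arbitrary.

rem Run a full Windows Defender scan every Sunday at 3:00 AM
schtasks /create /tn "Weekly Defender Scan" /sc weekly /d SUN /st 03:00 /tr "\"%ProgramFiles%\Windows Defender\MpCmdRun.exe\" -Scan -ScanType 2"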

Source of Information : OReilly Windows Vista Annoyances Tips Secrets and Hacks

Increasing Windows Vista File System Performance

When we talk about increasing or improving file system performance, we have to talk about the hard drive. On new systems today with such fast processors, the biggest bottleneck is the performance of the hard drive itself. All Windows operating systems use virtual memory. They do this by a process referred to as paging. Paging is the process of moving virtual memory back and forth between physical memory and the hard drive. By optimizing the performance of the file system on the hard drive, we in essence improve the performance of the system itself. There are ways in Windows Vista to improve file system performance. Some are through regular maintenance, and others are through settings within the operating system. We will go over the following ways of improving file system performance:
• Disk defrag
• Cluster size adjustment
• Short filenames
• Folder structure
• Compression
• Relocate pagefile.sys


Disk Defrag
File system fragmentation is the inability of the file system to lay out related data contiguously. Thus, the hard drive works harder during seeks, which hinders performance. Defragging hard drives in an effort to improve performance has been going on for years, regardless of the file system. In the early days of NTFS, very little talk about fragmentation came up. So little was the subject brought up that some quite mistakenly believed that NTFS couldn't be fragmented. This is not the case at all. NTFS can and does become fragmented the longer a system is used and the more data that is put on the drive.
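
Before defragmenting, you can check how fragmented a volume actually is. A minimal example, assuming drive C: and Vista's command-line defragmenter run from an elevated Command Prompt:

rem Analyze drive C: and report whether it needs defragmenting
defrag C: -a -v

rem Perform a full defragmentation of the volume
defrag C: -w -v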


Cluster Size (Allocation Unit) Adjustment
As you've seen, the performance of NTFS can be affected by disk fragmentation. NTFS performance can also be affected by cluster size (allocation unit). Even if a file is smaller than the cluster size, it still occupies an entire cluster on the drive, which means that if you don't choose the right cluster size when creating a volume, you could face a significant performance hit. Here are some things to think about when creating an NTFS volume and determining the cluster size:
• Will the files typically be about the same size?
• Will the files be smaller than the default cluster size?
• Will the files on the drive remain about the same size or will they grow larger and by how much?

Files that are smaller than the default cluster size, and that stay about the same size, should use the default cluster size to reduce wasted disk space. The caveat to smaller clusters, though, is that they tend to fragment much more easily and more often than larger cluster sizes. If the files you will be storing on the drive tend to be large (for example, CAD drawings), you may want to use 16KB or 32KB clusters instead of the default 4KB size.
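
Keep in mind that the cluster size is set when the volume is formatted, so the decision has to be made up front. The commands below are a sketch; drive E: and the 32K value are just illustrative, and formatting erases everything on the volume.

rem Format E: as NTFS with a 32KB allocation unit size (destroys all data on E:)
format E: /FS:NTFS /A:32K

rem Check the "Bytes Per Cluster" value of an existing NTFS volume
fsutil fsinfo ntfsinfo E: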


Short Filenames
On NTFS volumes, each time a user creates a file with a long filename, Windows Vista creates a second file entry that has a similar 8.3 short filename. Remember the old 8.3 limitation of FAT12 and FAT16? No? Well, filenames back then could only have a maximum of eight characters for the filename itself, plus a three-character extension. An example would have been genedoc1.doc. With restrictions like that, it was very difficult to know what a file contained without a decent description in the filename. This all changed in the Windows world when NTFS came along. Now we have a maximum of 255 characters for our filenames.

On systems with a large number of files with long filenames that contain the same initial characters, the time required to create the files increases, and file system performance suffers. This is because NTFS bases the 8.3 filename on the first six characters of the long name, so a large number of files with similar long names under the same folder can cause problems. To reduce the time required to create files, use the FSUTIL command, as shown next, to disable the 8.3 short filename service. After disabling 8.3, don't forget to restart the system.

fsutil behavior set disable8dot3 1
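
If you'd like to confirm the current state before or after making this change, fsutil can report it as well (run from an elevated Command Prompt):

rem Show whether 8.3 short-name creation is disabled (0 = short names are created, 1 = creation is disabled)
fsutil behavior query disable8dot3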


Folder Structure
One of the advantages of NTFS is its ability to support volumes that contain large numbers of files and folders. Some guidelines, however, can improve performance by altering your folder structure. First, do not put a large number of files into a single folder. Programs that open and close files in that folder frequently can cause a performance hit. Try separating the files into multiple folders to distribute the workload. If this is not possible, keep the files in one folder and, just as we did previously in this section, disable the 8.3 filename structure.


Compression
It's no secret that file compression causes a performance hit. This is because when a compressed NTFS file is copied, it is decompressed, copied, and then recompressed as a new file, even if it's copied onto the same computer. Systems that are CPU-bound should not use compression.
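
If you suspect compression is hurting performance, the compact utility can show you what is compressed and undo it. A brief sketch, assuming the data lives under C:\Data:

rem List the compression state of files under C:\Data and its subfolders
compact /s:C:\Data

rem Uncompress everything under C:\Data, continuing past any errors
compact /u /s:C:\Data /i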


Relocating Pagefile.sys
Another way to improve file system performance is to relocate pagefile.sys to another physical drive and dedicate that drive to it. By relocating the page file to a separate dedicated hard drive, you take advantage of the spindles of that drive and speed up the process of paging. This improves system performance and limits the amount of fragmenting on each physical drive.
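
The relocation itself is normally done through System Properties (Advanced | Performance Settings | Advanced | Virtual Memory), but it can also be scripted. The commands below are only a sketch: they assume a second physical drive mounted as D: and that WMIC's page file classes behave as documented, and a restart is required before the change takes effect.

rem Turn off automatic page file management so the location can be set manually
wmic computersystem set AutomaticManagedPagefile=False

rem Create a page file on the dedicated drive D:
wmic pagefileset create name="D:\pagefile.sys"

rem Remove the old page file on C: (the change applies after a restart)
wmic pagefileset where name="C:\\pagefile.sys" delete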

Source of Information : Syngress How to Cheat at Microsoft Vista Administration

Windows Vista Configuration Files

Windows stores its configuration information in a variety of files of different formats, including files for configuring Windows itself and files for running Windows and DOS programs. Windows comes with the System Configuration tool (or Msconfig, for short) to help make controlled changes to some of its configuration files.


What Kinds of Configuration Files Does Windows Vista Use?
Other than the Registry, most of Windows' control information is stored in text files that you can open with Notepad or any other text editor. Although changing these files is usually a bad idea unless you're quite sure you know what you're doing, looking at their contents is entirely safe, and it provides fascinating glimpses into how Windows works.


Making Configuration Files Visible
Most of the control information is stored in hidden, system, and read-only files. Hidden and system files are like any other files, except that they don't normally appear in file listings when you use Windows Explorer to display a folder that contains them. (Any file can be hidden, but only a couple of required files in the root folder of the boot drive are system files.) Read-only files can't be changed or deleted.

You can tell Windows to show you all the hidden files on your computer. In an Explorer window, select Tools | Folder Options and click the View tab. The list of Advanced Settings includes a Hidden Files And Folders category. Click the Show Hidden Files And Folders check box so that a check appears. This setting reveals hidden files in all folders, not just the current folder. Hidden files appear listed with regular files, but their icons are paler than those of regular files. To reveal the hidden files that Windows considers "special," uncheck the Hide Protected Operating System Files (Recommended) check box. Click Yes in the warning dialog box. Click OK when you have finished making changes in the Folder Options dialog box to make the changes active.

You can change a file's hidden or system status by right-clicking the file and selecting Properties. Click the Hidden and Read-only check boxes at the bottom of the Properties dialog box to select or deselect these attributes.
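
The same attributes can be viewed and changed from a Command Prompt with the attrib command; the filename here is just an example.

rem Show the attributes of a file (A = archive, H = hidden, S = system, R = read-only)
attrib C:\example\settings.ini

rem Clear the hidden and read-only attributes so the file appears in listings and can be edited
attrib -h -r C:\example\settings.ini

rem List only the hidden files in the current folder
dir /a:h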


Windows Initialization Files
Since Windows 95, Microsoft has moved most Windows initialization information into the Registry, but Windows still uses two initialization files for 16-bit support: Win.ini and System.ini. Some Windows 3.1 applications stored their setup information in individual INI files (initialization files), such as Progman.ini for the Windows 3.1 Program Manager. Other Windows 3.1 programs used sections in the general-purpose Win.ini file.

All INI files have the file extension .ini, and nearly all reside in the folder in which Windows is installed (usually C:\Windows). All INI files have a common format, of which the following is a typical example (it contains configuration information for the WS_FTP file transfer program):

[WS_FTP]
DIR=F:\Program Files\WS_FTP
DEFDIR=F:\Program Files\WS_FTP
GROUP=WS_FTP Pro
INSTOPTS=4

[Mail]
MAPI=1

An INI file is divided into sections, with each section starting with a section name in square brackets. Within a section, each line is of the form parameter=value, where the value may be a filename, number, or other string. Blank lines and lines that start with a semicolon are ignored.

In general, editing the Win.ini or System.ini file is a bad idea, but if you need to do so, use the System Configuration tool.

The Win.ini File
In Windows 3.1, nearly every scrap of setup information in the entire system ended up in the Win.ini file in C:\Windows, meaning that if any program messed up Win.ini, the system could be nearly unusable. More recent versions of Windows alleviate this situation by moving most configuration information into the Registry, but Win.ini is retained to offer support for 16-bit applications. You'll typically find sections for a few of your application programs in Win.ini, plus a little setup information for Windows itself.
You can use Notepad to edit Win.ini, but it can be dangerous. Be sure to make a backup of the file first.

The System.ini File
In Windows 3.1, the System.ini file in C:\Windows listed all the Windows device and subsystem drivers to be loaded at startup. In Windows Vista the vast majority of the driver information is in the Registry, but System.ini still contains driver configuration information for 16-bit applications. You can edit the System.ini file in Notepad or similar ASCII text editor.


The Registry
The Windows Registry contains all of the configuration information that is not in an INI file, including the vast majority of the actual information used to control Windows and its applications. Use the Registry Editor to examine and manage the Registry.

Source of Information : Windows Vista The Complete Reference

Understanding Windows Server Core

Windows server core is a “minimal” installation option for Windows Server 2008. What this means is that when you choose this option during setup (or when using unattended setup), Windows Server 2008 installs a minimum set of components on your machine that will allow you to run certain (but not all) server roles. In other words, selecting the Windows server core installation option installs only a subset of the binaries that are installed when you choose the full installation option for Windows Server 2008.

Here are some of the Windows Server 2008 components that are not installed when you specify the Windows server core installation option during setup:

• No desktop shell (which means no glass, wallpaper, or screen savers either)

• No Windows Explorer or My Computer (we already said no desktop shell, right?)

• No .NET Framework or CLR (which means no support for managed code, which also means no PowerShell support)

• No MMC console or snap-ins (so no Administrative tools on the Start menu; whoops, I forgot, no Start menu!)

• No Control Panel applets (with a few small exceptions)

• No Internet Explorer or Windows Mail or WordPad or Paint or Search window (no Windows Explorer!) or GUI Help and Support or even a Run box.

Wow, that sounds like a lot of stuff that's missing in a Windows server core installation of Windows Server 2008! Actually though, it's not; compare the preceding list to the following list of components that are available on a Windows server core server.

First, you’ve still got the kernel. You always need the kernel.

Then you've got hardware support components such as the Hardware Abstraction Layer (HAL) and device drivers. But it's only a limited set of device drivers that supports disks, network cards, basic video support, and some other stuff. A lot of in-box drivers have been removed from the Windows server core installation option, however, though there is a way to install out-of-box drivers if you need to.

Next, you’ve still got all the core subsystems that are needed by Windows Server 2008 in order to function. That means you’ve got the security subsystem and Winlogon, the networking subsystem, the file system, RPC and DCOM, SNMP support, and so on. Without these subsystems, your server simply wouldn’t be able to do anything at all, so they’re a necessity for a Windows server core installation.

Then you’ve got various components you need to configure different aspects of your server. For example, you have components that let you create user accounts and change passwords, enable DHCP or assign a static IP address, rename your server or join a domain, configure Windows Firewall, enable Automatic Updates, choose a keyboard layout, set the time and date, enable Remote Desktop, and so on. Many of these configuration tasks can be performed using various command-line tools included in a Windows server core installation (more about tools in a moment), but a few of them use scripts or expose minimal UI.
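
To give a flavor of what that looks like in practice, here are a few of the commonly used configuration commands. The interface name, computer name, domain, and account shown are placeholders you would substitute with your own values.

rem Assign a static IP address to the connection named "Local Area Connection"
netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.1.10 mask=255.255.255.0 gateway=192.168.1.1

rem Rename the server, then join it to a domain (each requires a restart)
netdom renamecomputer %computername% /newname:CORESRV01
netdom join %computername% /domain:contoso.com /userd:contoso\administrator /passwordd:*

rem Enable Remote Desktop connections using the scregedit.wsf script included with a server core installation
cscript %windir%\system32\scregedit.wsf /ar 0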

There are some additional infrastructure components present as well on a Windows server core installation. For instance, you still have the event logs plus a command-line tool for viewing, configuring, and forwarding them using Windows eventing. You’ve got performance counters and a command-line tool for collecting performance information about your server. You have the Licensing service, so you can activate and use your server as a fully licensed machine. You’ve got IPSec support, so your server can securely communicate on the network. You’ve got NAP client support, so your server can participate in a NAP deployment. And you’ve got support for Group Policy of course.

Then there are various tools and infrastructure items to enable you to manage your Windows server core server. You’ve got the command prompt cmd.exe, so you can log on locally to your server and run various commands from a command-prompt window. In fact, as we saw, a command-prompt window is already open for you when you first log on to a Windows server core server. What happens, though, if you accidentally close this window? Fortunately, a Windows server core installation still includes Task Manager, so if you close your command window you can start another by doing the following:

1. Press CTRL+SHIFT+ESC, to open Task Manager.

2. On the Applications tab, click New Task.

3. Type cmd and click OK.

In addition to the command prompt, of course, there are dozens (probably over a hundred, and more when different roles and features are installed) of different command-line tools available on Windows Server 2008 for both full and server core installation options. What I'm talking about is Arp, Assoc, At, Attrib, BCDEdit, Cacls, Certutil, Chdir, Chkdsk, Cls, Copy, CScript, Defrag, Dir, and so on. A lot of the commands listed in the "Windows Command-Line Reference A–Z," found on Microsoft TechNet, are available on a Windows server core server; not all, mind you, but a lot of them.

You can also enable Remote Desktop on a Windows server core installation, and this lets you connect to it from another machine using Remote Desktop Connection (RDC) and start a Terminal Services session running on it. Once you’ve established your session, you can use the command prompt to run various commands on your server, and you can even use the new Remote Programs feature of RDC 6.0 to run a remote command prompt on a Windows server core server from an administrative workstation running Windows Vista.

There's also a WMI infrastructure on your Windows server core server that includes many of the usual WMI providers. This means you can manage your Windows server core server either by running WMI scripts on the local machine from the command prompt or by scheduling their operation using schtasks.exe. (There's no Task Scheduler UI available, however.) Or you can manage your server remotely by running remote WMI scripts against it from another machine. And having WMI on a Windows server core server means that remote UI tools such as MMC snap-ins running on other systems (typically, either a full installation of Windows Server 2008 or an administrator workstation running Windows Vista with Remote Server Administration Tools installed) can connect to and remotely administer your Windows server core server. Plus there's also a WS-Management infrastructure on a Windows server core installation. WS-Management is a new remote-management infrastructure included in Windows Vista and Windows Server 2008; it involves Windows Remote Management (WinRM) on the machine being managed and the Windows Remote Shell (WinRS) for remote command execution from the machine doing the managing.
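
As a small illustration of remote management, the commands below could be run from an administrator workstation against a hypothetical server core machine named CORESRV01, assuming WinRM has already been enabled on it; the script path and schedule in the last example are placeholders.

rem Query the operating system name and version on the remote server over WMI
wmic /node:CORESRV01 os get caption,version

rem Run a command remotely over WS-Management using the Windows Remote Shell
winrs -r:CORESRV01 ipconfig /all

rem On the server itself, schedule a WMI-based script to run nightly at 1:00 AM
schtasks /create /tn "Nightly Inventory" /sc daily /st 01:00 /tr "cscript C:\scripts\inventory.vbs"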

Then there are various server roles and optional features you can install on a Windows server core server so that the machine can actually do something useful on your network, like be a DHCP server or a domain controller or print server. Then there are a few necessary GUI tools that actually are present on a Windows server core server. For example, we already saw that the command prompt (cmd.exe) is available, and so is Task Manager. Another useful tool on a Windows server core server is Regedit.exe, which can be launched either from the command line or from Task Manager. Next is Notepad. During the early stages of developing and testing Windows Server 2008, one of the most common requests from participants in the Microsoft Technology Adoption Program (TAP) for Windows Server 2008 was this: We need a tool on Windows server core servers that we can use to view logs, edit scripts, and perform other essential administrative tasks.

Who ever expected that the lowly and oft-maligned Notepad would be so important to administrators who work in enterprise environments? Anyway, before we move on and talk a bit about the rationale behind why Microsoft decided to offer the Windows server core installation option in Windows Server 2008, let’s hear from one of our experts about how the Windows server core product team managed to make this thing work. After all, Windows components have a lot of dependencies with one another and especially with the desktop shell and Internet Explorer, so it will be interesting to hear how they took so many components out of this installation option for the product without causing it to break.

Source of Information : Introducing Windows Server 2008

Creating Linked or Embedded Objects in Windows Vista

The way you link or embed an object depends on the application programs you're using: the program into which you want to embed or link the object. Most programs have a menu command to create an object by using OLE, but you may have to use the online help system to find the command. In Microsoft Word, for instance, you can use Home | Paste | Paste Special to create an object by using OLE. When using Home | Paste | Paste Special in Microsoft Word or its equivalent in another program (usually Edit | Paste Special), you may see the Display As Icon option. This option allows you to create a package, an icon that, when clicked, opens the object in its native application. The following two techniques may also work to link or embed an object: dragging-and-dropping and using Home | Paste | Paste Special. Neither technique is supported by all applications.


Embedding an Object by Dragging-and-Dropping
The easiest way to embed an object is to drag the information from one program and drop it in the other program. For this method to work, both applications must support drag-and-drop embedding. Check the documentation for the program that contains the information you want to embed. When dragging the information you want to embed, use the same technique you use to copy selected information within the application (some applications require you to hold down the CTRL key while dragging the information). For instance, in Excel, you have to click-and-drag the border of the selected area to move or copy it. Follow these steps to use drag-and-drop embedding:

1. Select the information you want to embed.

2. Use drag-and-drop to drag the selected information to the other application; use the same drag-and-drop technique you use to copy information within an application. If the second application isn't visible on the screen, you can drag the information to the application's taskbar button; hold the mouse pointer there for a second, and the application window opens.

3. Drop the information where you want it; if the application supports OLE, you automatically create an embedded object.


Linking or Embedding an Object Using Paste Special
You may want a little more control over the object than you have when you drag-and-drop it. To achieve more control over the object, use the Home | Paste | Paste Special command found in many applications. The procedure is much like using the Clipboard to cut-and-paste, except you paste by using OLE instead, as follows:

1. Select the information you want to link or embed.

2. Press CTRL-C or CTRL-X to copy or cut the information (or use another method to copy or cut).

3. Move the cursor where you want the object to appear.

4. Choose Home | Paste | Paste Special, and choose the correct application from the choices displayed. Make sure to choose the application you want to use for editing the object, such as Microsoft Office Excel. If you choose another option, you won't be using OLE; instead, you will be using the Clipboard to do a simple paste of information from one application to another.

5. Choose the correct setting either to embed the object in the new file or to link the two files together. To embed the object, choose the Paste option; to link the object, choose the Paste Link option.

6. Change the Display As Icon check box setting, if necessary. If you choose to display the object as an icon, you don't see the information itself. Instead, you create a packaged object that shows the information it contains only when you open its icon.

7. Click OK to link or embed the object. You see the object in the container file.


Editing a Linked or Embedded Object
Editing a linked or embedded object is simple: in most applications, you just double-click the object. For other applications, you may need to right-click the object to display a menu with an Edit option, or change modes so you are in Edit mode (if you're having trouble, check the help system of the application containing the object). Once you figure out how to edit the object, the object's application opens. Next, the menu and toolbars of the window in which the object appears are replaced by the menu and toolbars of the application assigned by the Registry to that file type (usually the application used to create the object). In other words, if you're editing an Excel object in a Word document, double-clicking the object displays Excel's menu and toolbars in Word's window. You can edit the object by using that application's tools. When you're done, click outside the object to reinstate the regular menu and toolbars, or choose File | Update or Exit in some applications. If you're asked whether you want to update the object, answer Yes.

If the object is linked rather than embedded, you can also edit the object by editing the source file itself. If the file containing the object is also open, you may have to update it manually to see the new information in the object. Closing and opening the file containing the object may be the easiest way to update the object. To delete an object, click it to select it (you'll probably see a box around it) and then press the DELETE or BACKSPACE key.

Source of Information : Windows Vista The Complete Reference

Sharing Information Using OLE in Windows Vista

Object Linking and Embedding is far more flexible and can be far more complicated than cut-and-paste or drag-and-drop. OLE enables you to use all your software applications to create an integrated document. For instance, you might want to create an annual report that includes these components:

• Text you create and format by using a word processor, such as Microsoft Word or Corel WordPerfect

• A company logo stored in a graphics file created by Adobe Photoshop, Paint, or some other graphics application

• Data and calculations on operating costs stored in a Microsoft Excel spreadsheet

• Graphs and charts, which may come from your spreadsheet package or another graphics package

These components may not reflect exactly what you want to do, but the point is the same: if you want to combine the output of different applications, OLE offers many advantages over the Clipboard. Why? Because, when you use OLE, the original program retains ownership of the object, and you can use that program to edit the object. For instance, if you use OLE to embed a portion of a spreadsheet in a word processing document, you can always use the spreadsheet application to edit the object, and the spreadsheet in the word processing document will reflect those changes. If, instead, you use the Clipboard to copy the numbers from the spreadsheet and then paste the numbers into the word processor, they would just sit in the word processor, oblivious to their origins; you could use only the tools available in the word processor to edit the numbers. This means that if you later change the original spreadsheet, the numbers pasted in the word processing document won't change.

In OLE, an object refers to a piece of information from one application that is placed in a container file created by another application. For example, a spreadsheet or graphic is an object when it is included in a word processing document. OLE actually comprises two similar methods of sharing information between applications: embedding and linking. Sticking with the previous example, embedding means putting the spreadsheet object in the word processing document (container file) and asking the word processor to take care of storing the object. So, although the word processor enables you to edit the spreadsheet object by using the spreadsheet application, the spreadsheet object is stored with the word processing document. Linking, on the other hand, allows the object to retain a close relationship with its origins; so close, in fact, that if the numbers in the original spreadsheet file change, the linked spreadsheet object in the word processing document changes to match. This occurs because the word processing document doesn't really contain the object it displays; it only contains a reference to the file where the information is stored.

You may also choose to insert a package into another file. A package is a small file that uses OLE, but instead of displaying content owned by another application, it displays an icon, which, when clicked, opens the owner application and displays the object. Packages can be either linked or embedded. Whether you choose to embed or link objects, the process is similar: you create an object in one application, and then link or embed the object into another application.

Although using OLE to link files can be wonderfully convenient and can save you hours of revisions, it should be used judiciously. If you plan ever to move the file containing linked objects or to send it to someone, you must make sure one of the following occurs:

• The linked files also get moved or sent.

• The linked objects don't get updated. This means the host application won't go looking for the information in the linked file. To break the link, delete the object and paste in a nonlinked version instead.

• You edit the links so the host file knows where to find the source files for the linked objects.

Otherwise, your beautifully organized and time-saving document can become a complete mess. If you are going to move a document with linked objects in it, you need to know how to maintain links.

If you don't need the automatic updating you get with linked objects (for instance, if the source file isn't going to change, or if you don't want the object to reflect changes) or if you know you are going to move or send files, then stick with embedded objects-they're easier to maintain. However, embedding a large object may take more disk space than linking.

Some applications enable you to link one file to another in a different way-by using a hyperlink. A hyperlink actually takes you from one file to another, opening the application for the second file, if necessary.

Source of Information : Windows Vista The Complete Reference

Windows Vista Web Browser Security

An attacker can also get into your computer through the web browser, either by using an exploit in the browser itself or by tricking you into installing a web component that has malicious code inside. Internet Explorer has many security settings built in that will help keep you safe. However, there are often tradeoffs in ease of use and convenience. For example, you can disable the installation of all web components for maximum security, but when you really need to install one, it can take longer and require more work than normal.


Internet Explorer 7
Internet Explorer in Windows Vista has undergone massive changes and has many new security features, such as Protected Mode. What does that mean? In the past, Internet Explorer was prone to various attacks, leaving it one of the weakest parts of the entire Windows operating system. Microsoft tried to stop automatic downloading and installation, and Web site exploits, in its release of Service Pack 2 for Windows XP, but we all know that worked only partially. Flaws are still being discovered in Internet Explorer, and attackers are trying to find new ways to trick users into installing their malicious code. How do you fix this problem? Simple: you isolate Internet Explorer in a secure environment so that, if exploits are found in the future, they will not work, because IE cannot access resources other than its own. That new protection is found only in the Windows Vista version of Internet Explorer 7 and is called Protected Mode.

Protected Mode and the phishing filter, which protects you against fake Web sites, combined with the other security options in Internet Explorer 7, will help you secure your web browser and close off another major point of entry for spyware, malware, and attackers.


Fine-tuning security settings
You can adjust the security settings in Internet Explorer within Internet Options. Follow these steps to adjust the security settings in IE7:

1. Open Internet Explorer 7.

2. Click Tools and select Internet Options.

3. After Internet Options loads, click the Security tab. The Security tab enables you to manage the individual settings for what is allowed in each of the browser zones, such as whether ActiveX controls can be automatically downloaded and installed in the Internet zone. You can adjust these zones by selecting a zone and then clicking the Custom Level button.

4. After the security settings for the selected zone load, you can scroll through the list of settings and check or uncheck any of them to enable or disable them, respectively. For optimal security, I recommend disabling a number of these features beyond what is disabled by default; the list below shows the settings I recommend changing for best security practices (a scripted, read-only way to audit these zone values appears after step 8). When you are finished modifying all the settings, click OK to return to Internet Options.

Internet Explorer Security Zone Settings:

• Loose XAML. I like to select Disable for this option because few sites use it, and disabling it means one less feature to worry about getting exploited.

• XPS documents. Disable this option for tighter security. If you don't use this document format, you should have no problems disabling it.

• Run components not signed with Authenticode. For tighter security, select Disable.

• Font download. Consider yourself very lucky if you ever run across a Web site that uses this feature. Disable it to be safe.

• Enable .NET Framework setup. Disable this setting. I do not understand why this option is even listed here.

• Include local directory path when uploading files to a server. I like to disable this option for privacy and because it should never be needed.

• Launching programs and files in an IFRAME. Disable this feature. Really, this should never be done.

• Logon. I usually set this option to Prompt for user name and password for maximum security.


5. After you are back on the Security tab of Internet Options, make sure that the Enable Protected Mode box is checked for each of the zones. This is one feature that I believe should be enabled for all zones.

6. You are now ready to move on to the Advanced tab to adjust more security settings. Click the Advanced tab and scroll down the list to the Security section.

7. In the Security section, I recommend selecting Do not save encrypted pages to disk and Empty Temporary Internet Files folder when browser is closed. These two settings will help protect your privacy and keep sensitive data from Web sites, such as your bank's, from being left behind on your computer.

8. When you are finished, click OK to save your changes.

You are now finished configuring Internet Explorer to run more securely and protect you even better when you are online.
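If you want to audit these zone settings from a script rather than by clicking through the dialog, the per-zone settings live in the registry under the current user's Internet Settings key, with zone 3 being the Internet zone. The read-only Python sketch below simply dumps the Internet zone's values; the numeric value names are action codes whose meanings are documented by Microsoft for Internet Explorer security zones, and this sketch is an illustration rather than part of the steps above.

import winreg

# Read-only dump of the current user's Internet zone (zone 3) settings.
ZONE_PATH = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE_PATH) as key:
    index = 0
    while True:
        try:
            name, value, value_type = winreg.EnumValue(key, index)
        except OSError:
            break  # no more values under this key
        print(name, "=", value)
        index += 1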

Source of Information : Hacking Windows Vista ExtremeTech

IPv6 Stack

Basic IPv6 Stack Support
The IPv6 protocol for Windows Server 2008 and Windows Vista supports Internet Engineering Task Force (IETF) standards for IPv6 protocol stack functionality, including the following:

• The IPv6 header (RFC 2460)

• Unicast, multicast, and anycast addressing (RFC 4291)

• The Internet Control Message Protocol for IPv6 (ICMPv6) (RFC 4443)

• Neighbor Discovery (ND) (RFC 4861)

• Multicast Listener Discovery (MLD) (RFC 2710) and MLD version 2 (MLD v2) (RFC 3810)

• Stateless address autoconfiguration (RFC 4862)


IPv6 Stack Enhancements
The IPv6 protocol for Windows Server 2008 and Windows Vista also supports the following enhancements:

• Dead gateway detection through neighbor unreachability detection. Dead gateway detection automatically switches the currently used default router to the next one in a configured list when the current default router becomes unavailable, as detected through neighbor unreachability detection.

• Explicit Congestion Notification support (RFC 3168). When a TCP segment is lost, TCP assumes that the segment was lost due to congestion at a router and performs congestion control, which dramatically lowers the TCP sender’s transmission rate. With Explicit Congestion Notification (ECN) support on both TCP peers and in the routing infrastructure, routers experiencing congestion mark the packets as they forward them. TCP peers receiving marked packets lower their transmission rate to ease congestion and prevent segment losses. Detecting congestion before packet losses are incurred increases the overall throughput between TCP peers. Windows Server 2008 and Windows Vista support ECN, but it is disabled by default. You can enable ECN support with the netsh interface tcp set global ecncapability=enabled command (a quick way to check the current setting appears after this list).

• Default router preferences and Route Information options in router advertisements (RFC 4191). With default router preferences, you can configure the advertising routers on a subnet to indicate a preference level so that hosts use the most preferred router as their default router. With Route Information options in router advertisements, routers that do not advertise themselves as default routers can advertise directly attached routes to hosts.

• Strong host model for both sending and receiving. The strong host model requires that unicast traffic sent or received be associated with the network interface on which the traffic is sent or received. For sent traffic, IPv6 can send packets on an interface only if the interface is assigned the source IPv6 address of the packet being sent. For received traffic, IPv6 can receive packets on an interface only if the interface is assigned the destination IPv6 address of the packet being received.
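As a quick way to confirm the ECN state mentioned in the Explicit Congestion Notification item above, the small Python sketch below simply shells out to netsh and prints the current TCP global parameters, which include the ECN capability setting. Enabling ECN with the command quoted in the text requires an elevated prompt; this sketch only reads the current state.

import subprocess

# Print the current TCP global parameters (includes the ECN capability state).
result = subprocess.run(
    ["netsh", "interface", "tcp", "show", "global"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

# To enable ECN (from an elevated prompt), the text gives:
#   netsh interface tcp set global ecncapability=enabled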

Source of Information : Microsoft Press Understanding IPv6 2nd Edition

Architecture of the IPv6 Protocol for Windows Server 2008 and Windows Vista

For Windows Server 2008 and Windows Vista, the TCP/IP protocol stack is a dual IP layer implementation, in which a single implementation of the Transport Layer protocols Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) operates over both Internet layer protocols: Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6).

The TCP/IP driver file, Tcpip.sys, contains both the IPv4 and IPv6 Internet layers. Tcpip.sys operates between the Windows Sockets and Network Device Interface Specification (NDIS) layers in the Windows network architecture. The architecture of Tcpip.sys consists of the following layers:

• Transport layer. Contains the implementations of TCP and UDP.
• Network layer. Contains implementations of both IPv4 and IPv6.
• Framing layer. Contains modules that frame IPv4 or IPv6 packets. Modules exist for IEEE 802.3 (Ethernet), IEEE 802.11, and Point-to-Point Protocol (PPP) links. Modules also exist for logical interfaces such as the loopback interface and IPv4-based tunnels. IPv4-based tunnels are commonly used for IPv6 transition technologies.

The IPv4 Internet layer appears as the Internet Protocol Version 4 (TCP/IPv4) component in the list of protocols in the properties of a local area network (LAN) connection in the Network Connections folder. The IPv6 Internet layer appears as the Internet Protocol Version 6 (TCP/IPv6) component. You can enable or disable these components per connection in the Network Connections folder, but you cannot uninstall them from there. You can, however, uninstall the IPv4 Internet layer with the netsh interface ipv4 uninstall command; the IPv6 Internet layer cannot be uninstalled.
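To see the dual IP layer from an application's perspective, the short Python sketch below resolves a host name over both address families and then opens a TCP connection over whichever address the resolver returns first. The host name is only a placeholder, and the point is simply that a single TCP implementation is used whether the Internet layer underneath is IPv4 or IPv6.

import socket

HOST, PORT = "www.example.com", 80  # placeholder host

# List the IPv4 (A) and, where available, IPv6 (AAAA) addresses for the host.
for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        HOST, PORT, socket.AF_UNSPEC, socket.SOCK_STREAM):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, "address:", sockaddr[0])

# Connect over the first usable address; TCP behaves the same either way.
with socket.create_connection((HOST, PORT), timeout=5) as conn:
    print("Connected via", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")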

Source of Information : Microsoft Press Understanding IPv6 2nd Edition

Cloud storage is for blocks too, not just files

One of the misconceptions about cloud storage is that it is only useful for storing files. This assumption comes from the popularity of file...